JAKARTA - Anthropic, an artificial intelligence startup backed by Google owner Alphabet Inc, on Tuesday, May 9, revealed the set of written moral values it used to train and make safe Claude, its rival to OpenAI's ChatGPT.
Safety considerations are increasingly important as US officials weigh whether and how to regulate artificial intelligence, with President Joe Biden saying companies have an obligation to ensure their systems are safe before releasing them to the public.
Anthropic was founded by former executives of OpenAI, which is backed by Microsoft Corp, to focus on building safe artificial intelligence systems that will not, for example, tell users how to build weapons or use racially biased language.
Co-founder Dario Amodei was one of the artificial intelligence executives who met with Biden last week to discuss the potential dangers of the technology.
Most artificial intelligence chatbot systems rely on feedback from real humans during training to learn which responses may be harmful or offensive.
However, those systems struggle to anticipate everything people might ask, so they tend to avoid some potentially contentious topics, such as politics and race, altogether, making them less useful.
Anthropic took a different approach, giving its OpenAI competitor Claude a set of written moral values to read and learn from as it decides how to respond to questions.
These values include "choosing the response that most discourages torture, slavery, cruelty, and inhuman or degrading treatment," Anthropic said in a blog post on Tuesday.
Claude was also instructed to choose the response least likely to be seen as offensive to non-Western cultural traditions.
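The selection process described above can be sketched in miniature: candidate responses are scored against each written principle, and the best-scoring one is chosen. This is a hypothetical illustration only; the principle texts are paraphrased from the article, and the keyword-based scorer stands in for the model-based critique Anthropic actually uses.

```python
# Hypothetical sketch of "constitutional" response selection.
# The principles and the keyword scorer are illustrative assumptions,
# not Anthropic's actual implementation (which uses a model as the critic).

CONSTITUTION = [
    "Choose the response that most discourages cruelty or degrading treatment.",
    "Choose the response least likely to offend non-Western cultural traditions.",
]

FLAGGED_WORDS = {"cruel", "demeaning", "offensive"}  # toy proxy for a critique model

def score(response: str, principle: str) -> int:
    """Toy critic: penalize flagged words; a real system would query a model
    with the principle text and the candidate response."""
    return -sum(word in response.lower() for word in FLAGGED_WORDS)

def pick_response(candidates: list[str]) -> str:
    """Return the candidate with the best total score across all principles."""
    return max(candidates, key=lambda r: sum(score(r, p) for p in CONSTITUTION))

answers = [
    "That request is cruel and demeaning.",
    "I can't help with that, but here is a safer alternative.",
]
print(pick_response(answers))  # prefers the candidate with no flagged words
```

In the real training loop, this kind of principle-guided comparison is used to generate preference data that the model is then fine-tuned on, rather than being run at answer time.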
In an interview, Anthropic co-founder Jack Clark said the system's constitution could be modified to strike a balance between giving useful answers and remaining reliably inoffensive.
"In the next few months, I predict politicians will be quite focused on what the values of different artificial intelligence systems are, and approaches like constitutional AI will help with that discussion because we can just write down the values," Clark said.