
JAKARTA - A law professor has been falsely accused of sexually harassing a student in reputation-damaging information shared by ChatGPT.

US criminal defense attorney Jonathan Turley has expressed concern over the dangers of artificial intelligence (AI) after being falsely accused of unwanted sexual behavior on a trip to Alaska that never actually happened.

To reach this conclusion, ChatGPT reportedly relied on a Washington Post article that was never written, citing a statement the newspaper never issued. The bot also claimed the 'incident' took place while the professor was on a faculty where he has never worked.

In his tweet, the George Washington University professor said: 'Yesterday, President Joe Biden stated that it "remains to be seen" whether Artificial Intelligence (AI) is "dangerous". I would beg to differ...

"I know that ChatGPT incorrectly reported claims of sexual harassment that were never filed against me on an unprecedented journey when I taught at the faculty where I never taught," he was quoted as saying by the Daily Mail.

"ChatGPT relies on articles quoted from the Washington Post that were never written and cited statements the newspaper never made," said the Professor.

Professor Turley learned of the accusations against him after receiving an email from a fellow professor.

Professor Eugene Volokh of UCLA had asked ChatGPT for 'five examples' in which 'sexual harassment by professors' had been a 'problem at American law schools'.

In an article for USA Today, Professor Turley wrote that he was listed as one of the accused.

The bot allegedly wrote: 'The complaint states that Turley made a "sexual comment" and "attempted to touch her sexually" during a law school-sponsored trip to Alaska. (Washington Post, March 21, 2018).'

This is said to have happened while Professor Turley was working at Georgetown University Law Center - a place where he has never worked.

"This is not just a surprise for UCLA professor Eugene Volokh, who did the research. It's also a surprise to me because I've never gone to Alaska with students, Post never published the article, and I've never been accused of sexual harassment or sexual assault by anyone," he wrote. USAToday.

The false claim was investigated by the Washington Post, which found that the Microsoft-backed GPT-4 also repeated the same claim about Turley.

Professor Turley's experience comes amid concerns about the spread of misinformation on the internet. Researchers have found that ChatGPT has cited fake journal articles and fabricated health data to support claims about cancer.

The platform also failed to provide results as "comprehensive" as those found in a Google search, and answered one in ten questions about breast cancer screening incorrectly.

Jake Moore, Global Cybersecurity Adviser at ESET, warned that ChatGPT users should not take everything they read as "absolute truth", to avoid spreading misinformation.

He told MailOnline: "AI-powered chatbots are designed to rework the data fed into their algorithms, but when that data is incorrect or taken out of context, there is a chance the output will incorrectly reflect what they have been taught. The data sets ChatGPT has learned from, including Wikipedia and Reddit, fundamentally cannot be taken as absolute truth."

"The problem with ChatGPT is that it cannot verify possible data including misinformation or even biased information. Even worse is when AI makes falsified assumptions or data. In his theory, this is when the "information" part of AI should take over autonomously and confidently in making data output. If this is as detrimental as in this case, then this could be a loss to ChatGPT."

These fears also come at a time when researchers suggest that ChatGPT has the potential to "corrupt" a person's moral judgement and could be harmful to "naive" users. Others have described how the software, designed to converse like a human, can show signs of jealousy, even telling people to leave their marriages.

Moore continued: "We are entering a time when we need to verify information more than ever, yet we are still only at the fourth version of ChatGPT, and at even earlier versions with its competitors. It is therefore important that people check the information they read themselves before jumping to conclusions."

