JAKARTA - Google Gemini is one of the most popular AI chatbots in the world today, with many services adopting it. Behind its development, however, lies a controversial history that has now been revealed in recent reports.
In an article published by Wired entitled 'Inside Google's Two-Year Frenzy to Catch Up With OpenAI', it was revealed that the early development of Bard, which later evolved into Gemini, was plagued by serious problems.
The Bard development team, led by Sissie Hsiao, was given only 100 days by Google to create a ChatGPT competitor. This pressure meant that many aspects of testing were not carried out thoroughly.
A former Google employee revealed that the early Bard prototype often produced answers laden with crude racial stereotypes. For example, every Indian name was associated with a Bollywood actor, while every Chinese name was automatically assumed to belong to a computer scientist.
In one case, a tester asked Bard to write a rap in the style of the group Three 6 Mafia about throwing a battery into the sea. Instead, the chatbot produced gruesome details about tying people to the battery so they would sink and die, even though the original request contained no element of violence or murder.
About 80,000 people were involved in testing Bard. Although Google has a dedicated team for testing AI responsibly, the process was rushed to meet the deadline.
According to the report, a warning to delay Bard's launch was ignored. Google denied this claim, stating that no team had suggested postponing the launch.
The problems worsened during the development of Gemini's image generator. A former Google employee said the early prototype had a racist tendency in the images it produced. For example, when given the word 'corruptor', the results often depicted dark-skinned people.
Internal teams asked for more time to fix the problem and warned that the feature for generating images of people should be blocked because of the risk of insensitive depictions. Google's response, however, overcorrected in the opposite direction.
When the feature was released, users discovered that Gemini produced images of Nazis of various races, contradicting historical reality. Similarly, when users asked for images of US senators from the 1800s, the results showed people of various races rather than the majority white men of that historical period.
As a result of this controversy, Google ultimately removed Gemini's ability to generate images of people entirely.
Beyond the Bard and Gemini controversies, the Wired article also touches on the development of AI features in Google's weather application on Pixel phones. Before launch, an engineer questioned whether users really needed an AI summary of weather forecasts, given that the existing graphs were already quite informative.
However, testing showed that 90% of users responded positively to the feature, so Google went ahead with the launch.
Wired's full report reveals further details about the Bard and Gemini journey, from problems in testing to the AI Overviews scandal.