Meant to Show Off, Google's Bard Chatbot Gives an Inaccurate Answer
JAKARTA - Google launched Bard, its Artificial Intelligence (AI) chatbot, to take on OpenAI's ChatGPT, but in the very demo meant to present it as superior and different, it displayed inaccurate information.
In a GIF designed to demonstrate the Bard user experience, the AI answers the question, "What new discoveries from the James Webb Space Telescope can I tell my 9-year-old son about?"
Trials
Bard offered three points in response, one of which stated that the Webb Telescope had taken the first picture of a planet outside the Solar System.
This immediately drew a response from a number of astronomers on Twitter, who pointed out that Bard's answer was incorrect: according to NASA, the first picture of an exoplanet was actually taken in 2004.
"Well actually I'm sure Bard will impress but for the record JWST did not take the first image of a planet outside our Solar System," tweeted astrophysicist Grant Tremblay.
"I really love and appreciate that one of the most powerful companies on the planet is using JWST search to advertise their LLM. Amazing! But ChatGPT etc., while very impressive, are often very confidently wrong. It will be interesting to see a future where LLM checks itself for errors."
It's not clear why Bard gave the wrong answer. As Tremblay noted, the main problem with AI chatbots like ChatGPT and Bard is that they tend to confidently state misinformation as fact.
These systems often fabricate information because they are essentially auto-completion systems: a language model does not pull information from a database of facts and read it out, but generates it with a highly sophisticated sentence-completion mechanism.
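The idea can be illustrated with a toy sketch (this is not how Bard or ChatGPT actually work, and the corpus and function here are purely illustrative): a model that only learns which words tend to follow which can stitch together a fluent sentence that blends its sources, with no step that checks the resulting claim against facts.

```python
from collections import defaultdict, Counter

# Toy corpus: the model's "knowledge" is just word sequences, not verified facts.
corpus = (
    "the webb telescope took stunning images of distant galaxies . "
    "the webb telescope took the first image of an exoplanet . "
    "the first image of an exoplanet was taken in 2004 ."
).split()

# Bigram table: for each word, count which words follow it in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(prompt, n=6):
    """Greedily append the most frequent next word, n times."""
    words = prompt.split()
    for _ in range(n):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# The completion fluently merges two source sentences into a new claim
# that appears nowhere in the corpus and was never fact-checked.
print(complete("the webb telescope"))
# → the webb telescope took stunning images of an exoplanet
```

Real language models are vastly more sophisticated than this bigram table, but the failure mode is the same in kind: the output is chosen because it is a plausible continuation, not because it is true.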
Sentences about the very recent past can be more error-ridden than usual for an AI, because the information in them hasn't been written down many times. That may also be one reason the ChatGPT model tells users little about events after 2021.
"This highlights the importance of a rigorous testing process, something we started this week with our Trusted Testers program. We will combine external feedback with our own internal testing to ensure Bard's responses meet high standards of quality, security, and grounding in real-world information," said a Google spokesperson in response to Bard's mistake, as quoted by The Verge on Thursday, February 9.
For its part, Microsoft yesterday demonstrated its new AI-powered Bing search engine, and said it was trying to head off problems like Bard's by placing the onus on the user.
"Bing is powered by AI, so surprises and mistakes are possible. Be sure to check the facts, and share feedback so we can learn and grow!" said Microsoft.