JAKARTA - Google's AI chatbot wasn't the only one to make a factual mistake during its first demo. Independent AI researcher Dmitri Brereton has found that Microsoft's first Bing AI demo is riddled with financial data errors.

Microsoft confidently demonstrated its Bing AI capabilities a week ago, with the search engine taking on tasks such as providing the pros and cons of the top-selling pet vacuums, planning a 5-day trip to Mexico City, and comparing data in financial reports.

However, Bing failed to distinguish between corded and cordless versions of a vacuum, left out relevant details about the bars it recommended in Mexico City, and mangled financial data, which was by far its biggest mistake.

In one of the demos, Microsoft's Bing AI attempts to summarize the Q3 2022 financial report for the clothing retailer Gap and gets a lot of it wrong. The Gap report (PDF) states that the gross margin was 37.4 percent, with an adjusted gross margin of 38.7 percent excluding an impairment charge.

Bing inaccurately reports the gross margin as 37.4 percent including the adjustment and impairment charge. Bing then states that Gap had a reported operating margin of 5.9 percent, a figure that does not appear in the financial results at all; the actual operating margin was 4.6 percent, or 3.9 percent adjusted and including the impairment charge.

As reported by The Verge, Bing AI then compared Gap's financial data to Lululemon's results for the same Q3 2022 period during Microsoft's demo. Bing made even more mistakes with Lululemon's data, and the result is a comparison full of inaccuracies.

Brereton also highlighted apparent errors in the answer about the pros and cons of the best-selling pet vacuums. Bing cites the "Bissell Pet Hair Eraser Handheld Vacuum" and lists a short 16-foot cord length among its drawbacks. "There's no cord," said Brereton. "This is a portable handheld vacuum."

However, a quick Google (or Bing!) search clearly shows there is a version of this vacuum with a 16-foot cord, documented in both written and video reviews. There is also a cordless version, which is linked in the HGTV article that Bing used as a source.

Without knowing the exact URLs Bing sourced in Microsoft's demo, it looks like Bing used multiple data sources here without listing them fully, merging two versions of the vacuum into one. The fact that Brereton himself made a small mistake while fact-checking Bing points to how difficult it is to assess the quality of these AI-generated answers.

Bing's AI glitches aren't limited to its on-stage demos, either. Now that thousands of people have access to the AI-powered search engine, Bing AI is making even more obvious mistakes. In an exchange posted to Reddit, Bing AI got very confused and insisted that the current year is 2022. “Sorry, but today is not 2023. Today is 2022,” said Bing AI.

When the Bing user points out that it's 2023 on their phone, Bing suggests checking that the device's settings are correct and making sure the phone doesn't have "a virus or bug that messes up the date."

Microsoft is aware of this particular error. "We expect the system to make mistakes during this preview period, and feedback is critical to help identify things that aren't working properly so we can learn from them and help the model get better," said Caitlin Roulston, director of communications at Microsoft, in a statement to The Verge.

Other Reddit users have run into similar errors. Bing AI confidently and incorrectly stated that “Croatia left the EU in 2022,” citing itself twice as the source for the claim. PCWorld also found that Microsoft's new Bing AI was teaching people ethnic slurs; Microsoft has since fixed the query that caused racial slurs to be listed in Bing chat search results.

“We have put in place safety fences to prevent the promotion of harmful or discriminatory content according to our AI principles,” explains Roulston. “We are currently looking at what additional improvements we can make as we continue to learn from the early phases of our launch. We are committed to improving the quality of this experience over time and making it a useful and inclusive tool for everyone.”

Other Bing AI users have also found that the chatbot often refers to itself as Sydney, particularly when they use prompt injection attacks to try to surface the chatbot's internal rules.

“Sydney refers to an internal code name for the chat experience we explored earlier,” said Roulston. “We are phasing out the name in preview, but it may still show up occasionally.”

Microsoft clearly has a long way to go before this new Bing AI can confidently and accurately respond to every query with factual data. The Verge has seen similar errors from ChatGPT in the past, but Microsoft has integrated this functionality directly into its search engine as a live product that also relies on live data. Microsoft will need to make a lot of adjustments to ensure Bing AI stops confidently making mistakes with that data.

