
JAKARTA - Alphabet Inc., the parent company of Google and YouTube, is cautioning its employees about how they use chatbots, including its own program Bard, even as it markets the program around the world, four people familiar with the matter told Reuters.

The company has advised employees not to enter its confidential material into artificial intelligence (AI) chatbots, citing a long-standing policy on protecting information.

Chatbots such as Bard and ChatGPT are human-sounding programs that use generative artificial intelligence to hold conversations with users and answer all sorts of questions. Human reviewers may read those conversations, and researchers have found that similar AI models can reproduce the data they absorb during training, creating a risk of information leaks.

Alphabet also warned its engineers to avoid directly using computer code that chatbots can generate, some of the people said.

Asked for comment, the company said Bard can make undesired code suggestions but still helps programmers. Google also said it aims to be transparent about the limitations of its technology.

These concerns show how Google is trying to avoid business harm from the software it launched to compete with ChatGPT. At stake in Google's race against ChatGPT, which is backed by OpenAI and Microsoft Corp, are billions of dollars in investment and still untold advertising and cloud revenue from new AI programs.

Google's caution also reflects what is becoming a standard security practice for companies: warning employees about using publicly available chat programs.

A number of companies around the world, including Samsung, Amazon.com, and Deutsche Bank, have put safeguards in place for AI chatbots, the companies told Reuters. Apple, which did not respond to requests for comment, has reportedly taken similar steps.

According to a survey of nearly 12,000 respondents by the networking site Fishbowl, including employees of leading US companies, 43% of professionals were using ChatGPT or other AI tools as of January, often without telling their superiors.

In February, Google told staff testing Bard before its launch not to give the chatbot internal information.

Now, Google is rolling out Bard to more than 180 countries and in 40 languages as a springboard for creativity, and its warnings extend to the code suggestions the chatbot generates.

Google told Reuters it has had detailed conversations with the Irish Data Protection Commission and is responding to regulators' questions, after a Politico report on Tuesday, June 13 said the company was postponing Bard's launch in the European Union this week pending more information about the chatbot's impact on privacy.

Such technology can draft emails, documents, and even software itself, promising to speed up those tasks significantly. That content, however, can include misinformation, sensitive data, or even copyrighted passages from a "Harry Potter" novel.

Google's privacy notice, updated on June 1, also states: "Don't enter confidential or sensitive information into your conversations with Bard."

Some companies have developed software to address such concerns. For example, Cloudflare, which protects websites from cyberattacks and offers other cloud services, is marketing a capability that lets businesses tag data and restrict it from flowing externally.

Google and Microsoft also offer higher-priced chat tools to business customers that do not feed data into public AI models. By default, Bard and ChatGPT store users' conversation history, which users can choose to delete.

"It is important that the company does not want their staff to use public chatbots for work," said Yusuf Mehdi, Microsoft Consumer Chief Marketing Officer.

"The company is taking a conservative stance," said Mehdi. He also explained how Microsoft's free Bing chatbots are compared to their company's software. "There, our policies are much stricter."

Microsoft declined to comment on whether it prohibits staff from entering confidential information into public AI programs, including its own, although a different executive there told Reuters he personally limits his use of them.

Matthew Prince, CEO of Cloudflare, said that typing confidential matters into chatbots is like "letting a group of PhD students loose on all of your private records."

