JAKARTA - Many workers across the United States (US) have turned to ChatGPT for help with basic tasks, a Reuters/Ipsos poll found, despite concerns that have led employers such as Microsoft and Google to curb its use.
Companies around the world are weighing how best to make use of ChatGPT, a chatbot program that uses generative artificial intelligence to hold conversations with users and answer a wide range of prompts. Security firms and other companies, however, have raised concerns that it could result in leaks of intellectual property and strategy.
Anecdotal examples of people using ChatGPT to help with their day-to-day work include drafting emails, summarizing documents, and doing preliminary research.
Some 28% of respondents to the online poll on artificial intelligence (AI), conducted between July 11 and 17, said they regularly use ChatGPT at work, while only 22% said their employers explicitly allowed such external tools.
The Reuters/Ipsos poll of 2,625 adults across the United States had a credibility interval, a measure of precision, of about 2 percentage points.
Some 10% of those polled said their bosses explicitly banned external AI tools, while about 25% did not know whether their company permitted use of the technology.
ChatGPT became the fastest-growing app in history after its launch in November 2022. It has created both excitement and alarm, bringing its developer, OpenAI, into conflict with regulators, particularly in Europe, where the company's mass data collection has drawn criticism from privacy watchdogs.
Human reviewers from other companies may read any of the generated conversations, and researchers have found that similar artificial intelligence can reproduce data it absorbed during training, creating a potential risk for proprietary information.
"People don't understand how their data is used when they use generative AI services," said Ben King, vice president of customer trust at corporate security firm Okta.
"For businesses this is critical, because users don't have a contract with many AIs - because they are a free service - so companies won't have run the risk through their usual assessment process," King said.
OpenAI declined to comment when asked about the implications of individual employees using ChatGPT, but highlighted a recent company blog post assuring corporate partners that their data would not be used to train the chatbot further unless they gave explicit permission.
When people use Google's Bard, the company collects data such as text, location, and other usage information. It allows users to delete past activity from their accounts and to request that content fed into the AI be removed. Alphabet Inc.'s Google declined to comment when asked for further details. Microsoft also did not immediately respond to a request for comment.
A US-based Tinder employee said workers at the dating app used ChatGPT for "harmless tasks" such as writing emails, even though the company does not officially allow it.
"It's ordinary emails. Totally harmless stuff, like making funny calendar invites for team events, farewell emails when someone is leaving... We also use it for general research," said the employee, who declined to be named because they were not authorized to speak to reporters.
The employee said Tinder has a "no ChatGPT rule" but that employees still use it "in a generic way that doesn't reveal anything about us being at Tinder".
Reuters could not independently confirm how employees at Tinder were using ChatGPT. Tinder said it provided "regular guidance to employees on best security and data practices".
In May, Samsung Electronics banned staff worldwide from using ChatGPT and similar AI tools after discovering that an employee had uploaded sensitive code to the platform.
"We are reviewing measures to create a secure environment for generative AI usage that enhances employee productivity and efficiency," Samsung said in a statement on August 3. "However, until these measures are ready, we are temporarily restricting the use of generative AI through company devices."
Reuters reported in June that Alphabet had also cautioned employees about how they use chatbots, including Google's own Bard, even as it markets the program globally.
Google has said that while Bard can make undesired code suggestions, it still helps programmers, and that it aims to be transparent about the limitations of its technology.
GENERAL RESTRICTIONS
Some companies told Reuters they are embracing ChatGPT and similar platforms while keeping security in mind.
"We've started testing and learning about how AI can enhance operational effectiveness," said a Coca-Cola spokesperson in Atlanta, Georgia, adding that data stays within the company's firewall.
"Internally, we recently launched our enterprise version of Coca-Cola ChatGPT for productivity," the spokesperson said, adding that Coca-Cola plans to use AI to improve the effectiveness and productivity of its teams.
Tate & Lyle Chief Financial Officer Dawn Allen, meanwhile, told Reuters that the global ingredients maker was trialing ChatGPT, having "found a way to use it in a safe way".
"We have different teams deciding how they want to use it through a series of experiments. Should we use it in investor relations? Should we use it in knowledge management? How can we use it to carry out tasks more efficiently?" Allen said.
Some employees say they cannot access the platform on their company computers at all.
"It's completely banned on the office network, as in it doesn't work," said a Procter & Gamble employee, who wished to remain anonymous because they were not permitted to speak to the press.
P&G declined to comment. Reuters could not independently confirm whether employees at P&G were unable to use ChatGPT.
Paul Lewis, head of information security at cybersecurity firm Nominet, said firms were right to be vigilant.
"Everybody gets the benefit of that increased capability, but the information isn't completely secure and it can be engineered out," he said, citing "malicious prompts" that can be used to get AI chatbots to disclose information.
"A blanket ban isn't warranted yet, but we need to tread carefully," Lewis said.