IBM's Head of Technology Underlines Risks of Using ChatGPT in Business
JAKARTA - IBM Automation's Head of Technology, Jerry Cuomo, recently published a blog post outlining what he says are the risks companies face when using ChatGPT.
According to the blog post, there are several key risk areas that businesses must consider before using ChatGPT. Ultimately, Cuomo concluded that only non-sensitive data is safe to use with ChatGPT:
"After your data goes to ChatGPT," Cuomo wrote, "You have no control or knowledge of how the data is used."
According to the post, this kind of accidental data leak could also expose businesses to legal liability if partner, customer, or client data becomes public after leaking into ChatGPT's training data.
Cuomo also mentions risks to intellectual property and the possibility that such leaks could put businesses in violation of open-source agreements.
According to the IBM blog post: "If sensitive third-party or internal company information is entered into ChatGPT, that information becomes part of the chatbot's data model and can be shared with others who ask related questions."
Cointelegraph contacted OpenAI for comment on the above claim and received the following emailed response from a public relations representative: "[T]he data will not be shared with other people who ask related questions."
The representative also pointed to existing documentation of ChatGPT's privacy features, including a blog post explaining web users' ability to turn off their chat history, and noted that the ChatGPT API does not share data by default, according to OpenAI.
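For context on what "sending data to ChatGPT" means in practice, the sketch below shows a minimal, hypothetical call to the ChatGPT API using the official openai Python package (pre-1.0 interface); the model name, API key, and prompt are placeholders, not anything from the IBM post or OpenAI's statement. Whatever is placed in the prompt leaves the company's own systems and is processed by OpenAI, which is the exposure Cuomo's post warns about.

```python
# Minimal sketch of sending business text to the ChatGPT API
# using the openai Python package (pre-1.0 interface). Assumed
# placeholders: the API key, model name, and prompt content.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real credentials

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed chat-capable model
    messages=[
        # Anything put here is transmitted to OpenAI's servers;
        # sensitive partner, customer, or internal data in this field
        # is exactly the risk the IBM post describes.
        {"role": "user", "content": "Summarize this quarterly report: ..."},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Per OpenAI's statement cited above, data submitted through the API is not shared by default, whereas the web version's chat history behaves differently, as the critics quoted next point out.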
However, critics have pointed out that conversations in the web version are stored by default. Users must opt out of conversation storage, a convenience feature that lets them pick up where they left off, in order to also opt out of having their data used to train the model. Currently, there is no option to save conversations without agreeing to share data.