ChatGPT Leaks Samsung Confidential Information
JAKARTA - Samsung placed enough trust in OpenAI's ChatGPT to use the tool to streamline processes in its chip business. However, that decision turned out to be disastrous for the company.
Three weeks after introducing ChatGPT to employees, reports surfaced that Samsung's confidential semiconductor information had been leaked, raising concerns about data security and breaches of confidentiality.
According to reports, the first leak occurred when a Samsung employee in the Semiconductor and Device Solutions department discovered a problem with the semiconductor equipment measurement database.
They then looked for a quick fix by pasting the source code into ChatGPT. The second leak occurred when a different employee, trying to better understand test results and other information, entered code into the chatbot and asked it to optimize the code.
Finally, the third leak occurred when yet another employee asked ChatGPT to generate meeting minutes. Unbeknownst to them, any information Samsung employees enter into ChatGPT may become part of the AI chatbot's training data.
Realizing this, Samsung quickly took steps to prevent further leaks, including instructing employees to be careful about the data they share with ChatGPT and limiting each entry to a maximum of 1,024 bytes.
Once information is entered into the AI chatbot, it is transmitted to external servers, where it cannot be retrieved or deleted, as quoted from Gizmochina, Tuesday, April 4.
The incident highlights the importance of data security and the need for companies to think carefully before bringing AI chatbots into their workplaces.
While AI chatbots can increase efficiency and streamline processes, they also demand proper safeguards and employee training to ensure the confidentiality and security of sensitive information.