JAKARTA - Apple is starting to restrict employee use of OpenAI's ChatGPT and Microsoft's Copilot, The Wall Street Journal reported. According to Bloomberg's Mark Gurman, ChatGPT has been on the restricted lists of several Big Tech companies for months.

Apple is not alone: in tech, Samsung and Verizon have banned such tools, as have a number of well-known banks, including Bank of America, Citi, Deutsche Bank, Goldman Sachs, Wells Fargo, and JPMorgan.

The concern is the possibility of confidential data leaking. ChatGPT's privacy policy explicitly states that your prompts can be used to train the model unless you opt out. The fear is not baseless: in March, a bug in ChatGPT exposed data from other users.

One of the obvious uses for this technology is customer service, where companies are trying to cut costs. But for customer service to work, customers have to hand over their details, sometimes private, sometimes sensitive. How do companies plan to secure their customer service bots?
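To make the risk concrete, here is a minimal sketch of one common mitigation: redacting obviously sensitive fields from a customer's message before it is ever sent to a third-party chatbot. Everything here is illustrative; the patterns and the send_to_chatbot function are assumptions for the example, not any vendor's actual interface.

```python
import re

# Illustrative patterns only: a real deployment would use dedicated
# PII-detection tooling, not a handful of regular expressions.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\+?\d[\d -]{8,}\d"),
}

def redact(text: str) -> str:
    """Replace anything that looks like PII with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def send_to_chatbot(message: str) -> str:
    """Stand-in for a call to a hosted chatbot API (hypothetical)."""
    return f"(model reply to: {message})"

def handle_customer_message(message: str) -> str:
    # Redact before the text leaves the company's own infrastructure,
    # so the third-party model never sees the raw personal data.
    return send_to_chatbot(redact(message))

print(handle_customer_message(
    "My card 4111 1111 1111 1111 was charged twice; email me at jane@example.com"
))
```

A production system would pair this with proper PII detection and contractual guarantees from the model provider; the point of the sketch is only where the redaction has to happen: before the data leaves the company.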

Nor is this just a customer service problem. Suppose, for example, that Disney decided to use AI in place of parts of its VFX department, or to write its next Marvel movie. Would that create a chance for Marvel spoilers to leak?

One thing that is generally true of the tech industry is that early-stage companies, like early versions of Facebook, tend not to pay much attention to data security. In that light, limiting exposure of sensitive material makes sense, as OpenAI itself has suggested. This is not a problem specific to AI.

It is possible that big, sophisticated companies that care deeply about confidentiality are simply being overcautious and there is nothing to worry about. But suppose they are right. If so, a few possible futures for AI chatbots come to mind.

First, the AI wave turns out to be like the metaverse: a total failure. Second, AI companies are forced to make major changes and clearly spell out their security practices. Third, every company that wants to use AI has to build its own proprietary model or, at least, run its own processing (a rough sketch of the latter follows below), which sounds very expensive and hard to do well. Fourth is an online privacy nightmare, in which your airline (or debt collector, pharmacy, or anyone else) routinely leaks your data.
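The third option, running the model on your own infrastructure, is already practical for smaller open models. As a rough sketch, assuming the Hugging Face transformers library and a small open checkpoint such as gpt2 standing in for whatever model a company would actually deploy:

```python
# pip install transformers torch
from transformers import pipeline

# Download a small open model once, then run all inference locally;
# no prompt ever leaves the company's own hardware.
generator = pipeline("text-generation", model="gpt2")

prompt = "Draft a polite reply to a customer asking about a refund:"
result = generator(prompt, max_new_tokens=60, do_sample=False)

print(result[0]["generated_text"])
```

The trade-off named above is real: matching the quality of large hosted models on private infrastructure is exactly the expensive and difficult part.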

But if companies that are very concerned about security are limiting their AI use, there may be good reasons for others to do so too.

Ultimately, data security will be a decisive factor in the development and adoption of AI chatbots. Data leaks not only compromise user privacy; they can also endanger the companies that deploy the technology.

To overcome these challenges, AI companies need to strengthen their security measures, adopt best practices, and take the integrity of customer data seriously.

In the future, stricter regulation and cooperation among industry, government, and oversight bodies may be needed to ensure that AI chatbots are used safely and in line with users' privacy expectations.

Companies must also invest in research and development to improve the security of their AI systems and educate users about the risks and the precautions they can take.

In an increasingly digitally connected world, it is important for all of us to weigh the security and privacy implications of using AI technology. We must strike a balance between ease of use and the protection of personal data, and keep strengthening data protection as AI chatbots develop.
