JAKARTA - The head of the Federal Trade Commission (FTC) said the agency is committed to using existing laws to tackle the dangers of artificial intelligence, such as entrenching the power of dominant companies and "accelerating" fraud.

"Although these (AI) tools are new, they are no exception from existing rules, and the FTC will firmly enforce the laws we are burdening to manage them, even in this new market," said Chairwoman of the FTC, Lina Khan wrote in an opinion article in the New York Times on Wednesday, May 3.

“The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice,” writes @linakhanFTC, about the future of regulation and A.I. https://t.co/y4cgSj8PB3

The surging popularity of ChatGPT from Microsoft-backed OpenAI this year has sparked global calls for regulation amid concerns about its potential use for wrongdoing, even as companies seek to use it to improve efficiency.

She said the agency was well equipped to handle the work.

One risk she cited is that companies that already dominate cloud computing services could become even more powerful as they help startups and other firms launch their own AI. AI tools could also be used to facilitate collusion on price increases.

Khan expressed concern that generative AI, which can produce conversational English text, could help fraudsters write more targeted and effective phishing emails.

"When enforcing a legal ban on misleading practices, we will not only see fraudsters using these tools temporarily, but also in upstream companies that allow them," he wrote.
