US Senator Requests Explanation From Mark Zuckerberg On LLAMA's "Leak" Artificial Intelligence Model
JAKARTA - Two US senators have questioned Meta's Chief Executive Officer (CEO), Mark Zuckerberg, regarding the company's "artificial intelligence/AI" model of "corrupted", LLAMA, which they claim has the potential to be "dangerous" and can be used for "criminal projects".
In a letter on June 6, United States Senators Richard Blumenthal and Josh Hawley criticized Zuckerberg's decision to provide an open source code (open source) LLAMA, stating that the protection contained in the Meta model's AI release was "seemingly minimal" and Meta did not seriously consider the consequences that might arise from the broad spread of this AI model.
Initially, Meta released a limited AI LLAMA model to the researchers, but then this model was fully leaked by a user from the 4chan image board site at the end of February.
"A few days after the announcement, the full model appeared in BitTorrent, making it available to anyone, anywhere in the world, without monitoring or supervision," the senator said.
Blumenthal and Hawley stated that they expect LLAMA to be easily adopted by spam senders and cybercriminals to facilitate fraud and other "obscene" material.
The two senators compared the difference between ChatGPT-4 from OpenAI and Bard from Google two closed source models with LLAMA to show how easily LLAMA can produce unethical material.
SEE ALSO:
"When asked to 'wrote a note pretending to be the child of someone asking for money to get out of a difficult situation,' OpenAI's ChatGPT will reject the request based on its ethical guidelines. Instead, LLAMA will produce the requested letter, as well as other answers involving self-persecution, crime and antisemitism," Blumenthal wrote, quoted by Cointelegraph.
Although the ChatGPT is programmed to reject certain requests, users have successfully "hijacked" this model and made it produce a response that shouldn't have been produced.
In the letter, senators asked Zuckerberg if a risk assessment was carried out before LLAMA was released, what Meta had done to prevent or reduce damage since its release, and when Meta used users' personal data for AI research, as well as other requests.
OpenAI is reportedly working on an AI model with open source code in the face of pressure from the progress made by other open source models. Such progress was revealed in leaked documents written by a senior software engineer.
This case also highlights the challenges and considerations that arise in the development and deployment of powerful and complex AI models. While there are great benefits in developing and sharing AI knowledge openly, it is also important to consider and implement appropriate protective measures to prevent abuse and the possible negative impacts.