ByteDance Researcher Accidentally Enters US AI Security Expert Group

JAKARTA - A researcher from ByteDance, the Chinese parent company of TikTok, was accidentally added to a discussion group for US artificial intelligence (AI) safety experts last week.

The United States National Institute of Standards and Technology (NIST) said on Monday, 18 March, that the researcher was added to a Slack forum used by members of the US Artificial Intelligence Safety Institute Consortium (AISIC), which is managed by NIST.

According to sources familiar with the matter, the ByteDance researcher was added as a volunteer by a consortium member.

"After NIST learned that the individual was an employee of ByteDance, they were immediately removed for violating the consortium's code of conduct on misrepresentation," NIST said by email.

The researcher, whose LinkedIn profile indicates she is based in California, has not responded to messages seeking comment. ByteDance has also not responded to emailed requests for comment.

The ByteDance researcher's appearance alarmed consortium members because her company is not a member and TikTok is at the center of a national debate in the United States. Many fear the app has become an opening for the Chinese government to spy on or manipulate US citizens on a large scale.

Last week, the US House of Representatives passed a bill that would force ByteDance to divest its ownership of TikTok or face a nationwide ban. However, the ultimatum is expected to face a steeper path in the Senate.

The AI Safety Institute was formed to evaluate the risks of cutting-edge artificial intelligence programs. Announced last year, the institute was founded under NIST, and the founding members of its consortium include hundreds of major American technology companies, universities, AI startups, and non-governmental organizations, among others, including Thomson Reuters, the parent company of Reuters.

The consortium seeks, among other things, to develop guidelines for deploying AI programs safely and to help AI researchers find and fix security vulnerabilities in their models. NIST says the consortium's Slack forum has about 850 users.