Meta Builds a Different Kind of AI That Can Detect Harmful Content More Quickly

JAKARTA - Meta claims to have created artificial intelligence (AI) technology that can more quickly detect new types of harmful content, including posts that discourage COVID-19 vaccination.

The company says its system takes a different approach: an AI system normally learns new tasks from examples, but collecting and labeling large amounts of data usually takes months.

With a Meta technology dubbed Few-Shot Learner (FSL), however, the new system needs very little training data, so it can adapt to combat new types of harmful content within weeks instead of months.

The system will come in handy for enforcing rules such as the social network's ban on misinformation claiming the COVID-19 vaccine is dangerous, including false claims that the vaccine alters DNA.

But users sometimes rephrase their posts, asking "A vaccine or DNA modifier?", or use other words and coded language to try to evade detection. The new technology helps the company catch content it might otherwise have missed.
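As a rough illustration of the few-shot idea only (Meta's actual Few-Shot Learner is not public), the sketch below classifies a new post by comparing it against just a handful of labeled examples per class; the embed() function is a toy stand-in for the large pretrained language model a real system would rely on.

```python
# Minimal sketch of few-shot classification via nearest class centroid.
# NOTE: embed() is a toy hashing embedding for demonstration only; it does
# not capture meaning the way a large pretrained model would.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding: hash character trigrams into a fixed-size vector."""
    vec = np.zeros(dim)
    padded = f"  {text.lower()}  "
    for i in range(len(padded) - 2):
        vec[hash(padded[i:i + 3]) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# A handful of labeled examples ("shots") per class -- the point of
# few-shot learning is that this list stays tiny.
shots = {
    "violating": [
        "the vaccine alters your DNA",
        "vaccines rewrite human genes",
    ],
    "benign": [
        "I booked my vaccine appointment today",
        "the clinic offers free vaccinations",
    ],
}

# One centroid per class, averaged over its few example embeddings.
centroids = {
    label: np.mean([embed(t) for t in texts], axis=0)
    for label, texts in shots.items()
}

def classify(text: str) -> str:
    """Return the label whose centroid is closest (by dot product) to the text."""
    v = embed(text)
    return max(centroids, key=lambda label: float(v @ centroids[label]))

# Prints whichever class is closer under the toy embedding; with a real
# pretrained model, semantically similar rephrasings land near the
# violating examples even though the exact wording is new.
print(classify("A vaccine or DNA modifier?"))
```

The design point is that only the small set of labeled "shots" changes when a new type of harmful content appears; the heavy lifting is done by a model trained in advance, which is why adaptation can take weeks rather than the months needed to label a large dataset.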

"If we reacted more quickly, then we could launch intervention and content moderation in a more timely manner. Ultimately, the goal here is to keep users safe," said Meta Product Manager, Cornelia Carapcea, as quoted by CNET, Thursday, December 9.

Meta said tests showed the new system can identify offensive content that conventional AI systems might miss. Since it was rolled out on Facebook and its photo-sharing service Instagram, the share of harmful content users see has decreased.

Few-Shot Learner works in more than 100 languages. The company hasn't said which ones, but Carapcea says the new technology could pack a punch in combating harmful content in languages other than English, which may have fewer samples for training AI systems.

As Facebook focuses more on building a metaverse, a virtual space where people can socialize and work, content moderation will become more complex. Carapcea said she thinks Few-Shot Learner could eventually be applied to virtual reality content.

"Ultimately, Few-Shot Learner is a piece of technology used specifically for integrity. But teaching machine learning systems by fewer examples is a topic being pushed at the forefront of research," said Carapcea.