Meta Launches A Series Of New Features For The Safety Of Teens On Its Social Media

JAKARTA - After delaying the rollout of end-to-end encryption across its messaging services until 2023, Meta, Facebook's parent company, is now introducing harm-prevention tools across its social media platforms.

The company plans to use artificial intelligence and machine learning to scan unencrypted parts of its platforms, such as user profiles and photos, for signals that could indicate malicious activity.

For example, if an adult user creates a new profile and repeatedly tries to contact minors they don't know, or starts messaging large numbers of strangers, Meta may step in to take action.

In addition, Meta has recently launched a series of enhanced protections for accounts belonging to minors, including defaulting those accounts to private or friends-only on Facebook and Instagram. The company is also introducing a feature that prevents adult Instagram users from contacting teens who don't follow them.

As reported by TechCrunch on Thursday, December 2, Messenger now displays a safety notice, developed using machine learning, that offers tips for spotting suspicious activity and for blocking, reporting, ignoring, or restricting other users.

Meta says these features will still work in end-to-end encrypted environments. There are also inbox-filtering features that let account owners limit who can send them messages and avoid spam.

Reporting features have been improved as well: Meta has made it easier to report content that violates its exploitation policies. As a result, the company claims a nearly 50 percent year-over-year increase in reports.

When a message is reported, that part of the conversation is decrypted so the company can take action, such as referring child exploitation attempts to law enforcement or the National Center for Missing & Exploited Children (NCMEC).