JAKARTA - Meta, the company behind Instagram and Facebook, has been accused of suppressing its internal research on children's safety, according to people inside the company.

Two current and two former Meta employees submitted documents to Congress that they say show the company deliberately obstructing findings considered damaging to its business.

The whistleblowers said Meta changed its research policy about six weeks after internal documents on Instagram's harmful impact on teenagers' mental health leaked in 2021.

Under the revised policy, Meta gave researchers direct guidance. The tech company also involved lawyers and had findings written more cautiously so they would not highlight negative aspects of its platforms.

The allegations echo a recent Washington Post report, in which former Meta researcher Jason Sattizahn claimed he was asked to delete interview footage despite its significance.

In the recording, a teenager said his 10-year-old brother had been approached and sexually harassed on Meta's VR platform, Horizon Worlds. A Meta spokesperson, however, denied that the deletion amounted to destroying data.

The spokesperson said the footage was deleted not to eliminate evidence but out of privacy concerns: in line with global privacy regulations, Meta deletes data on children under the age of 13.

"Some of these examples are being stitched together to fit a predetermined and false narrative," Meta representatives told TechCrunch, as reported on Tuesday, September 9. "Since the start of 2022, Meta has approved nearly 180 Reality Labs-related studies on social issues, including youth safety and well-being."

Similar allegations have come from Kelly Stonelake, a former Meta employee of 15 years. She is suing Meta, alleging that the Horizon Worlds app failed to keep out users under 13. She also pointed to the racism that frequently occurs on the platform.

"The leadership team was aware that in one test, users with Black avatars were subjected to racial slurs within an average of 34 seconds of entering the platform," Stonelake alleged in the lawsuit.
