JAKARTA - Meta's Oversight Board on Thursday, July 25, criticized the company's rules as "not clear enough" in banning sexually explicit images produced by AI, and called for changes to prevent such images from spreading on Meta's platforms.

The Oversight Board, which is funded by Meta but operates independently, issued its decision after reviewing two pornographic images of famous women that were faked using artificial intelligence and posted on Meta's Facebook and Instagram.

Meta said it would review the board's recommendations and provide an update on any changes it adopts. In its report, the board identified the two women only as female public figures from India and the United States, citing privacy.

The board found that both images violated Meta's rule prohibiting "demeaning sexual photoshops," which the company classifies as a form of bullying and harassment, and said Meta should have removed the images promptly.

In the case of the Indian woman, Meta failed to review a user report of the image within 48 hours, so the ticket was automatically closed without action. The user filed an appeal, but the company again declined to act, and it only reversed the decision after the board took up the case.

In the case of the American celebrity, Meta's systems removed the image automatically.

"Restrictions on this content are legitimate," the board said, quoted by VOI from Reuters. "Given the severity of the impact, removing content is the only effective way to protect the people affected."

The board recommended that Meta update its rules to clarify their scope, saying, for example, that the word "photoshop" is too narrow and that the ban should cover a broad range of editing techniques, including generative AI.

The board also criticized Meta for declining to add the image of the Indian woman to a database that enables automatic removal, as happened in the case of the American woman.

According to the report, Meta told the board that it relies on media coverage to determine when to add images to the database, a practice the board called "worrying."

"Many victims of deepfake intimate images are not in the public spotlight and are forced to accept the spread of their images that are not consensual or seek and report every instance," the council said.
