JAKARTA - Facebook is taking a more aggressive approach to shutting down coordinated groups of real-user accounts engaging in malicious activity on its platform, using the same strategy its security team has taken against campaigns built on fake accounts.

The new approach, reported here for the first time, borrows a tactic that Facebook's security teams have typically used for mass shutdowns of networks engaged in influence operations that rely on fake accounts to manipulate public debate, such as Russian troll farms.

This could have major implications for how the social media giant handles political and other coordinated movements that violate its rules, and it comes at a time when Facebook's approach to abuse on its platform is under intense scrutiny from global lawmakers and civil society groups.

Facebook said it now plans to take the same network-level approach against groups of coordinated real accounts that systematically violate its rules, for example through mass reporting, in which many users falsely report a target's content or account to get it shut down, or brigading, a type of online harassment in which users coordinate to target individuals with mass posts or comments.

In a related change, Facebook said on Thursday, September 16, that it would take the same type of approach against campaigns of real users that cause "coordinated social harm" on and off its platform, as it did when announcing the takedown of the Querdenken movement, a German anti-COVID restriction movement.

This expansion, which a Facebook spokesperson said is still in its early stages, means Facebook's security team can identify the core movements driving such behavior and take broader action than the company's removal of individual posts or accounts.

In April, BuzzFeed News published a leaked internal Facebook report about the company's role in the January 6 riot at the US Capitol and its difficulty curbing the fast-growing 'Stop the Steal' movement; among its findings was that Facebook had "little policy around coordinated authentic harm."

Facebook's security experts, who are separate from the company's content moderators and handle threats from adversaries trying to circumvent its rules, began cracking down on influence operations using fake accounts in 2017, after the 2016 US election in which US intelligence officials concluded that Russia had used social media platforms as part of a cyber-influence campaign, a claim Moscow has denied.

Facebook has dubbed the illicit activity by these groups of fake accounts "coordinated inauthentic behavior" (CIB), and its security team began announcing the mass takedowns in monthly reports. The team also tackles certain threats that may not use fake accounts, such as fraud or cyber espionage networks, as well as overt influence operations such as some state media campaigns.

Sources said teams at the company had long debated how it should intervene at the network level against large movements of real user accounts that systematically violate its rules.

In July, Reuters reported on the Vietnamese army's online information warfare unit, which engaged in actions such as mass reporting of accounts to Facebook, sometimes using their real names. Facebook removed several accounts over those mass reporting efforts.

Facebook is under increasing pressure from global regulators, lawmakers, and its own employees to combat widespread abuse on its services. Others have criticized the company over allegations of censorship, anti-conservative bias, and inconsistent enforcement.

The extension of Facebook's network-disruption model to coordinated activity by authentic accounts raises further questions about how the change might affect public debate, online movements, and campaign tactics across the political spectrum.

"A lot of times problematic behavior will appear very close to a social movement," said Evelyn Douek, a Harvard Law lecturer who studies platform governance. "It will depend on this definition of hazard ... but obviously people's definition of hazard can be very subjective and vague."

There have been several well-known examples of coordinated activity around last year's US election, from teens and K-pop fans who claimed they used TikTok to sabotage a rally for former President Donald Trump in Tulsa, Oklahoma, to political campaigns paying online meme creators. These have also sparked debate about how platforms should define and approach coordinated campaigns.
