Protecting Freedom of Speech, the UK Will Not Force Social Media to Remove "Legal but Harmful" Content
British Digital Secretary Michelle Donelan. (photo: Twitter @michelledonelan)

JAKARTA - The British government said on Monday, November 28, that it will not force tech giants to remove "legal but harmful" content from their platforms, after campaigners and lawmakers raised concerns that the measure could curtail free speech.

"Online safety laws will instead focus on protecting children and ensuring companies remove content that is illegal or prohibited in their terms of service," said a British government source. He also added that the rule would not specify what legal content should be censored.

Social media platforms, such as Facebook owner Meta and Twitter, will also be barred from removing or restricting user-generated content, or from suspending or banning users, where there is no breach of their terms of service or the law.

The UK government had previously said social media companies could be fined up to 10% of turnover or £18 million if they failed to remove harmful content such as abuse, even when it fell below the criminal threshold, while senior managers could also face fines and even criminal charges.

The proposed law, which has been beset by delays and disputes on its way to the latest version, would remove state influence over how private companies handle legal speech. It would also remove the risk of platforms taking down legitimate posts in order to avoid sanctions.

UK Digital Secretary Michelle Donelan said she hoped to stop unregulated social media platforms from harming children.

"I will be bringing a strengthened Online Safety Bill back to Parliament that will allow parents to see and act on the harm websites pose to young people," Donelan said.

"It is also free from any threat that future technology companies or governments could use the law as a license to censor legitimate views," he said.

The UK, like the European Union and other countries, has grappled with how to craft laws that protect users, especially children, from harmful user-generated content on social media platforms without undermining free speech.

The revised Online Safety Bill, which returns to parliament next month, places the onus on tech companies to remove material that breaches their own terms of service and to enforce their age limits so that children cannot circumvent authentication methods.

According to Donelan, if users are likely to encounter controversial content such as the glorification of eating disorders, racism, anti-Semitism or misogyny that does not meet the criminal threshold, platforms will have to offer tools to help adult users avoid it.

Only if platforms fail to enforce their own rules or remove criminal content can they be fined up to 10% of annual turnover.

The UK said late on Saturday, November 26, that new offences of aiding or encouraging self-harm online would be included in the bill.

