JAKARTA - Twitter Inc will test sending a notification when someone replies to a tweet using offensive or hurtful language. The step is aimed at cleaning up negative conversations on the platform. Twitter has long been under pressure to remove hateful and abusive content.

According to Reuters, when a user sends a reply to a tweet, they will get a notification that the language in their reply is similar to that in posts that have been reported. The user will then be asked whether they want to revise it before posting.

"We are trying to encourage people to rethink their behavior and rethink their language before posting because they are often in a heated atmosphere, and they may say something they regret," said the Twitter website's global head of policy for trust and safety, Sunita Saligram.

Twitter says the test will start on Tuesday, May 5, and last at least a few weeks. It will run globally, but only for English-language tweets.

Twitter's policies do not allow users to target individuals with abusive speech, racism, sexism, or other degrading content. Users of the microblogging platform can report, or flag, content that violates the rules.

Between January and June of last year, Twitter took action against nearly 396,000 accounts for abuse violations and more than 584,000 accounts for hateful conduct.
