Twitter Launches New System For Users To Report Harmful Content, Likened To A Doctor Analyzing Patients
JAKARTA - Twitter Inc. will begin overhauling the way users report harmful tweets in a bid to make it easier for people to describe what is wrong with the content they see, a source at the social networking company told Reuters on Tuesday, December 7.
The move, which will begin with a small test among Twitter's web users in the United States, comes amid widespread criticism that tech companies, including Twitter, Meta Platforms Inc., and Alphabet Inc.'s YouTube, are doing too little to protect users from harmful or sexually abusive content online.
Instead of requiring users to specify how a tweet violates Twitter's rules, which assumes familiarity with company policy, the new process will ask users whether they feel they have been attacked with hate, harassed, or violently threatened, or shown content related to self-harm.
Users will also be able to explain in their own words why they found the flagged content harmful or offensive, Twitter said.
The process is similar to a doctor asking a patient about the symptoms they are experiencing, "where does it hurt?", rather than a question like "did you break your leg?", the Twitter source said.
"In times of urgency, people need to be heard and feel supported," said Brian Waismeyer, a data scientist on Twitter's health user experience team, in a statement.
Twitter added that the new process will allow it to collect more detailed information about tweets that do not explicitly violate its rules but that users may still find problematic or annoying, which will help the company update its policies in the future.
This is the latest in a series of changes Twitter has made to improve user safety. Last month, the company said it would begin banning the sharing of "private media," such as photos and videos posted without the person's consent.