Twitter Issues New Policy On Violent Speech And Threats Of Violence
JAKARTA - Twitter announced that it has "officially launched" a new "Violent Speech" policy, which reaffirms its approach of not tolerating acts of violence. The policy's content is similar to Twitter's previous rules on violent threats, though the new version is more specific in some places and vaguer in others.
Both policies prohibit users from threatening or glorifying violence in most scenarios (each makes an exception for "hyperbolic" speech between friends). However, the new rules appear to expand some concepts while cutting others. For example, the old policy stated:
"A statement expressing the desire or hope that a person has a physical injury, makes threats vague or indirect, or threatening actions that are unlikely to cause serious or long-lasting injury cannot be followed up on the basis of this policy, but can be reviewed and followed up on the basis of the policy."
Wishing harm on someone, however, is covered by the new policy, which reads:
You may not wish, hope, or express a desire for harm. This includes (but is not limited to) hoping for others to die, suffer illness, experience tragic events, or face other physically harmful consequences.
We’ve made a few changes to our policies around violent content and similar language. Today, we’ve officially launched our Violent Speech policy, which prohibits violent threats, wishes of harm, glorification of violence, and incitement of violence. 🧵
— Twitter Safety (@TwitterSafety) February 28, 2023
However, the term "new" is a bit wrong here because the policy was already in the previous rule of abusive behavior - the only significant change is that it has been moved and Twitter is no longer giving an example. What feels like a significant change is the inequality of the new policy in protecting who is aiming for.
The old policy clearly stated from the start: "You must not threaten violence against individuals or groups of people."
The new policy does not include the words "individual" or "group," opting instead to refer to "others." While that could be interpreted as protecting marginalized groups, there is nothing specific you can point to that proves it does.
There are a few other changes worth highlighting: the new policy prohibits threats against "civilian homes and shelters, or infrastructure," and it includes exceptions for speech related to video games and sporting events, as well as "satire, or artistic expression when the context is expressing a viewpoint rather than instigating actionable violence or harm."
Twitter also says that penalties, which are usually permanent suspension or account locks that force you to delete the offending content, may be less severe if you are speaking out of "anger" in conversations "regarding individuals credibly accused of committing severe violence."
Twitter doesn't give an example of what that looks like in practice, but the implication is that if you, say, call for a notorious serial killer to be executed, you might not get a permanent ban for it. The actual decision, though, will be made by what remains of Twitter's moderation team.
At one point, before Musk actually owned Twitter and had to worry about keeping advertisers happy, he said the platform "must comply with state law" and framed the purchase as an attempt to save free speech. And even though he keeps making comments to that effect, Twitter still disallows plenty of speech that is legally permitted. This updated rule is just the latest example of that.