Twitter updates its hateful conduct policy to protect users from dehumanization


Twitter has updated its rules on hateful conduct to create a more inclusive environment for its users.

Our rules are constantly evolving to ensure people’s safety. Today, we are expanding our hateful conduct policy to include language that dehumanizes people on the basis of race, ethnicity or national origin.

– Twitter Safety (@TwitterSafety) December 2, 2020

The company announced the update in a tweet from its Twitter Safety account. The update appears to prioritize dehumanizing language directed at large groups of people.

“Our rules are constantly evolving to ensure people’s safety. Today, we are expanding our hateful conduct policy to include language that dehumanizes people on the basis of race, ethnicity or national origin,” the tweet reads. A follow-up tweet noted that tweets that dehumanize people on the basis of “religion, caste, age, disability or disease” are already prohibited.

This policy already prohibits language that dehumanizes people on the basis of religion, caste, age, disability or disease.

Research shows that dehumanizing language can cause harm in the real world, and we want to ensure that more people – globally – are protected.

– Twitter Safety (@TwitterSafety) December 2, 2020

Examples of tweets that violate the new policy include messages such as “There are too many [national origin/race/ethnicity] maggots in our country and they must go” or “People with [disease] are rats that infect everyone around them.”

In a blog post originally published in July 2019, the company said it took public feedback, expert opinion and internal input into account when shaping its hateful conduct policy.

As part of its update, Twitter said it consulted a group of third-party experts from around the world. The experts “helped [Twitter] better understand the challenges we would face” and answered questions such as “How can – or should – we factor in whether a given protected group has been historically marginalized and/or is currently being targeted when evaluating the severity of harm?” and “How do we protect conversations people have within marginalized groups, including those that use reclaimed terminology?”

Twitter stated on its blog that tweets that violate this policy will be removed when they are reported. “We will continue to surface potentially violating content through proactive detection and automation,” the post said.

Beyond having tweets removed, repeat offenders may face harsher penalties. “If an account repeatedly violates the Twitter Rules, we may temporarily lock or suspend the account,” the blog said. The post also links to the Help Center’s enforcement page, which explains that its “strictest enforcement measure” is permanent suspension, in which the account holder cannot create new accounts; suspensions can, however, be appealed.

The post concluded with links to research by Nick Haslam and Michelle Stratemeyer and by Dr. Susan Benesch on the connection between dehumanizing language and the harm it can cause offline.

The update is the latest step in the expansion of Twitter’s hateful conduct policy. The company’s policy already prohibits violent threats.

The Help Center’s hateful conduct policy page covers violent threats against individuals or groups, slurs and offensive language, hateful imagery, and more.
