In a tweet storm last week, Twitter Inc. (TWTR) CEO Jack Dorsey agreed that the company’s anti-harassment rules weren’t good enough, and promised that more changes would be on the way soon.
The new rules leaked late Tuesday in an internal email obtained by Wired, in which the company said it will be changing how it polices content dealing with “unwanted sexual advances, non-consensual nudity, hate symbols, violent groups and tweets that glorify violence.” In short, Twitter is taking a different stance on how inappropriate content is reported, while stepping up account suspensions and bans.
We have a Trust & Safety Council which reviews and gives feedback on our policy and product changes. We emailed them our proposed changes in order to get critical feedback. Email found in this article: https://t.co/i4dMsGv5rm
— jack (@jack) October 18, 2017
Twitter was notably vague, however, in the sections centered on violent groups and hate symbols — areas that are consistently referenced when users complain of widespread harassment on the platform. Twitter promised that more details will be coming soon on those topics, and since this was a leaked email, it’s likely the details given were not an official or final update. Some additional updates are expected to come in the next few weeks, the company noted.
Rules surrounding non-consensual nudity and unwanted sexual advances appear more comprehensive as a result of the update. Twitter has expanded what kinds of content fall under the category of non-consensual nudity and now considers content such as “creep shots” and “upskirt imagery” to be in that group. On top of that, Twitter will no longer require targets of non-consensual nudity to report the offending content and will permanently suspend any accounts that post it.
“While we recognize there’s an entire genre of pornography dedicated to this type of content, it’s nearly impossible for us to distinguish when this content may/may not have been produced and distributed consensually,” the email states. “We would rather error on the side of protecting victims and removing this type of content when we become aware of it.”
The update comes after Twitter users have criticized the company for having inadequate anti-harassment policies. Last week, Twitter temporarily suspended the account of actress Rose McGowan, who tweeted about sexual violence against women, specifically accusing Hollywood producer Harvey Weinstein of sexual abuse and harassment. The action prompted a #WomenBoycottTwitter protest, which rose to become the number one global trend on Twitter on Friday.
On the topic of violent content, Twitter said it’s now going after tweets that glorify violence. The company already takes enforcement action against tweets that make direct threats, but it’s expanding that effort to include tweets that glorify or condone violence. For example, a tweet that says ‘Praise be to for shooting up. He’s a hero!’ would now be subject to enforcement action, according to the leaked email.
It appears that users will have to wait a bit longer to learn how Twitter is taking a tougher stance on hate symbols and violent groups. Twitter isn’t yet planning to ban hate groups from the service or to remove content with hate symbols, such as swastikas.
Hateful imagery and hate symbols will now be considered “sensitive media,” meaning such content will be hidden behind a banner that users must click through to see it. Beyond that, the company is still determining what counts as a hate symbol or a violent group.
“We are still defining the exact scope of what will be covered by this policy,” the company wrote under both the hate symbol and violent group headings.
Some have argued that Twitter doesn’t need an entirely new set of rules; instead, it needs to be more consistent and diligent in how it enforces them. It’s helpful that Twitter is working to respond more quickly to user reports, said Carolina Milanesi, an analyst for Creative Strategies, but Twitter also needs to be more proactive about monitoring user activity. Yet, with the sheer volume of tweets posted on a daily basis, monitoring them all could end up being quite difficult.
Effective monitoring also requires sophisticated human judgment from Twitter’s staff rather than artificial intelligence, Milanesi noted, with close attention to the language and tone of a conversation. An algorithm, for example, may not be able to detect whether someone saying ‘I hope you get cancer’ is actually making a threat, Milanesi said.
Ultimately, as Twitter users have pointed out, the company does need to look at the shortcomings of its current policies, but it also needs to get serious about enforcing them on a more uniform basis.
“Rules are good but implementation is key and that is what failed in the past,” Milanesi explained. “Unless there are consequences for the repeat offenders, nothing will ever change and that is what I think most people want to see.”