Twitter has announced a new rule to crack down on content it considers to have been deliberately altered to cause “serious harm”.
“We know that some Tweets include manipulated photos or videos that can cause people harm. Today we’re introducing a new rule and a label that will address this and give people more context around these Tweets.” — Twitter, February 4, 2020
While stopping short of banning manipulated images, videos and audio outright, Twitter said it was very likely to remove content that has been “deceptively altered” and is likely to cause users harm. The new rules come into effect on March 5.
Twitter added that fabricated media shared on the platform but not deemed harmful may receive a warning label and have its visibility reduced. The company will also provide a link to a Twitter Moment or landing page that, the platform hopes, will offer additional context and clarify why the tweet was flagged.
In a blog post, Twitter admitted that putting the new rule into place will be a learning process, suggesting changes could be made to how the social media platform polices the spread of fake news.
“This will be a challenge and we will make errors along the way — we appreciate the patience. However, we’re committed to doing this right,” the statement read.
When assessing whether doctored content is likely to cause serious harm, Twitter says it considers risks of mass violence or widespread civil unrest, as well as threats to the physical safety of a person or group, to be in breach of the new rule.
The company will also consider threats to privacy and freedom of expression – such as stalking, voter suppression or intimidation – under the new rule.
The new guidelines have been announced after Twitter received feedback from over 6,500 users from around the world about how the platform should respond in an era of fake news.
Twitter found that more than 70% of people who use the social networking site considered it unacceptable for the platform to take no action on misleading media, though respondents were less supportive of removing tweets that contained misleading or altered media.
While Twitter is taking a stronger stance on the spread of fake photos and videos, it’s not the only social media platform to do so. At the start of January, it was discovered that Instagram had begun flagging digitally altered images as “false information”, although unlike Twitter, the Facebook-owned company hasn’t laid down specifics of what it considers to be fake.