Twitter has been under a lot of pressure to fight the problem it has with the spread of malicious content across its microblogging platform. In March, chief executive Jack Dorsey admitted that Twitter had underestimated the far-reaching “negative real world consequences” of having a global conversation on a public forum.
And while Twitter has never shied away from recognizing that it has a huge problem, it has admittedly not been able to combat the issue as quickly as it spreads. Until now.
“One of the most important parts of our focus on improving the health of conversations on Twitter is ensuring people have access to credible, relevant, and high-quality information on Twitter,” the company said via a blog post.
“To help move towards this goal, we’ve introduced new measures to fight abuse and trolls, new policies on hateful conduct and violent extremism, and are bringing in new technology and staff to fight spam and abuse.”
Fighting the good fight
Historically, it’s only taken a simple registration process to sign up for a Twitter account, making it easy for trolls to create multiple spam accounts. To curb such malicious activity, Twitter will now require new users to verify both a phone number and an email address when signing up for an account.
Twitter will also reduce the visibility of accounts it considers suspicious by removing them from follower lists and engagement counts, and it will deter new users from following spammy accounts by displaying a warning on them.
Once an account has been identified as spam or malicious in nature, it will be locked and made read-only until it is able to pass a verification test, such as confirming a phone number.
In addition to that, any account displaying high-volume activity using the same hashtags without receiving any replies will also be put through a verification test. This could be in the form of a reCAPTCHA challenge or a password reset request.
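As a rough illustration of the kind of heuristic described above, the sketch below flags an account that tweets in high volume, repeats one hashtag across most of its posts, and draws no replies. The threshold values, function name, and data shape are all assumptions for illustration; Twitter has not published how its actual detection works.

```python
from collections import Counter

# Illustrative thresholds -- not Twitter's actual values.
MIN_TWEETS_PER_HOUR = 30
MIN_HASHTAG_REPEAT_RATIO = 0.8

def looks_spammy(tweets_last_hour, replies_received):
    """Flag an account whose recent tweets are high-volume,
    dominated by a single repeated hashtag, and draw no replies."""
    if len(tweets_last_hour) < MIN_TWEETS_PER_HOUR:
        return False          # not high-volume enough to flag
    if replies_received > 0:
        return False          # real engagement suggests a real account
    hashtags = [tag for t in tweets_last_hour for tag in t["hashtags"]]
    if not hashtags:
        return False
    # How often does the single most common hashtag appear?
    _, top_count = Counter(hashtags).most_common(1)[0]
    return top_count / len(tweets_last_hour) >= MIN_HASHTAG_REPEAT_RATIO

# An account like this would be routed to a reCAPTCHA or password reset:
burst = [{"hashtags": ["#win"]} for _ in range(40)]
print(looks_spammy(burst, replies_received=0))  # True
```

In practice, a flagged account would not be suspended outright but challenged, which keeps false positives recoverable: a legitimate user simply passes the test and carries on.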
Carry on fighting
These changes have been a work in progress for a while; Twitter has taken a number of steps over the past few months to make the site a safer place.
The company began working with experts to figure out how to improve the “health” of the conversations on its platform.
It also began suspending accounts that its machine learning tool considered spam, and it has acquired a company called Smyte, which “specializes in safety, spam and security issues”.