The way commenters write can be used to predict whether they'll end up getting banned, according to US researchers working on troll-busting algorithms.
A team from Cornell and Stanford universities scanned the comment threads on three sites - CNN, Breitbart and IGN - over the course of a year and a half. That added up to 35 million comments from almost two million users, 50,000 of whom went on to be banned from the sites.
The researchers found that those banned users wrote differently from everyone else. Their comments were generally harder to read and used fewer words indicating positive emotion. The banned users also moved around the sites slightly differently, concentrating their activity in individual threads more than users who weren't banned.
From that data, the researchers built a model that could predict, from the content of a user's first five posts alone, whether they would go on to be banned - with 80% accuracy. Examining the first ten comments raised the accuracy by a further two percentage points.
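To give a rough sense of the approach, here is a minimal sketch of the kind of per-comment text features the study describes - readability and positive-emotion wording. The tiny word list, function name and exact feature definitions are illustrative assumptions, not the researchers' actual method, which used far richer features and a trained classifier.

```python
# Hedged sketch only: the paper's real features and model are more elaborate.
import re

# Toy stand-in for a positive-emotion lexicon (assumption, not the study's list)
POSITIVE_WORDS = {"good", "great", "thanks", "love", "agree", "nice", "helpful"}

def comment_features(comment: str) -> dict:
    """Extract two crude signals from one comment."""
    words = re.findall(r"[a-z']+", comment.lower())
    sentences = max(1, len(re.findall(r"[.!?]+", comment)))
    if not words:
        return {"avg_sentence_len": 0.0, "positive_ratio": 0.0}
    return {
        # Longer sentences as a rough proxy for "harder to read"
        "avg_sentence_len": len(words) / sentences,
        # Share of words drawn from the positive-emotion list
        "positive_ratio": sum(w in POSITIVE_WORDS for w in words) / len(words),
    }
```

In the study, features like these - computed over a user's first five or ten comments, alongside activity patterns and moderator reactions - were fed to a classifier that output a ban prediction.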
The team hopes the work can be used to build moderation tools that automatically flag users who may go on to be disruptive, saving moderators time. But they also warned that their findings showed overly harsh moderation tended to exacerbate antisocial behaviour, and that a light touch was more effective than reaching straight for the banhammer.