How AI is struggling to keep a lid on the social media tinderbox

Teenagers using social media apps
(Image credit: Getty Images)

PR-wise, social media has really had a rough few years. After it was somewhat naively trumpeted as an unambiguous force for good in the wake of the Arab Spring, people are waking up to its dangers. We’ve already covered the inconvenient truth that our brains may not be evolved enough to cope with it, and the awkward realisation that fake news and trolling could be a feature rather than a bug – but it’s hard not to have some sympathy for the companies struggling with the scale of a sociological experiment that’s unprecedented in human history.

Every day, over 65 years’ worth of video footage is uploaded to YouTube. Over 350 million photos are posted on Facebook. “Hundreds of millions” of tweets are sent, the majority of which are ignored.

There was one we knew was a terrorist – he was on the most wanted list. If you followed him on Twitter, Twitter would recommend other terrorists

Clint Watts, FBI

All of these statistics are at least a year out of date – the companies have broadly come to the collective conclusion that transparency isn’t actually an asset – so it’s almost certain that the real numbers are much higher. But even with these lower figures, employing the number of humans required to moderate all this content effectively would be impossible, so artificial intelligence does the heavy lifting. And that can spell trouble.

If you’re skeptical about the amount of work AI now does for social media, this anecdote from former FBI agent Clint Watts should give you pause for thought. Watts and his team were tracking terrorists on Twitter. “There was one we knew was a terrorist – he was on the most wanted list,” Watts explained during a panel discussion at Mozilla’s Mozfest. “If you followed him on Twitter, Twitter would recommend other terrorists.”

When Watts and his team highlighted the number of terrorists on the platform to Twitter, the company was evasive. “They'd be, 'you don't know that,'” Watts said. “Actually, your algorithm told me they're on your platform – that's how we figured it out. They know the location and behind the scenes they know you're communicating with people who look like you and sound like you."

At its heart, this is the problem with all social media recommendation algorithms: because most of us don’t use social media like the FBI, it’s a fairly safe bet that you follow things because you like them – and if you like them, it follows that you’ll also enjoy things that are similar.
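
As a rough illustration of that logic – a toy sketch in Python, not Twitter’s actual recommender, with entirely made-up account and user names – a follow-based system can surface ‘accounts like the ones you already follow’ simply by measuring how much their follower bases overlap:

```python
# Toy "people who follow X also follow Y" recommender.
# Illustrative only: real platforms use far richer signals than follower overlap.

# Hypothetical data: which users follow which accounts
follows = {
    "user_a": {"account_1", "account_2", "account_3"},
    "user_b": {"account_1", "account_2"},
    "user_c": {"account_2", "account_3", "account_4"},
}

# Invert it: which users follow each account
followers = {}
for user, accounts in follows.items():
    for account in accounts:
        followers.setdefault(account, set()).add(user)

def jaccard(a: set, b: set) -> float:
    """Overlap between two follower sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(account: str, top_n: int = 3) -> list[str]:
    """Suggest accounts whose follower base most resembles `account`'s."""
    scores = {
        other: jaccard(followers[account], followers[other])
        for other in followers if other != account
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("account_1"))  # accounts followed by people similar to account_1's followers
```

Follow one account of a given type and, by construction, the accounts most similar to it rise to the top – which is exactly how a terrorist’s followers end up being shown more terrorists.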

Tracking the wrong metrics

This reaches its unfortunate end state with YouTube: a company that measures success largely on the number of videos consumed and the time spent watching. It doesn’t really matter what you’re absorbing, just that you are.

YouTube’s algorithms exploit this mercilessly, and there are coal-mine canaries raising the alarm. Guillaume Chaslot is a former YouTube software engineer who founded AlgoTransparency: a bot that follows 1,000 YouTube channels every day to see what the site’s algorithm recommends alongside them. It’s an imperfect solution, but in the absence of actual transparency from Google, it does a pretty good job of shining a light on how the company is influencing young minds. And it’s not always pretty.
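
To give a sense of the methodology – this is a hypothetical sketch rather than AlgoTransparency’s real crawler, and fetch_recommendations() is a placeholder stub returning dummy data – a monitoring bot of this kind only needs to visit its list of channels each day, record what the site recommends alongside them, and tally the results:

```python
# Hypothetical sketch of a recommendation-monitoring bot.
# fetch_recommendations() is a stand-in; a real crawler would scrape or
# query YouTube here, which is not reproduced in this sketch.

from collections import Counter
from datetime import date

CHANNELS = ["channel_1", "channel_2"]  # in practice, around 1,000 channels

def fetch_recommendations(channel: str) -> list[str]:
    """Placeholder: return the video IDs recommended alongside this channel."""
    dummy = {"channel_1": ["vid_a", "vid_b"], "channel_2": ["vid_b", "vid_c"]}
    return dummy.get(channel, [])

def daily_snapshot(channels: list[str]) -> Counter:
    """Count how often each video is recommended across all tracked channels."""
    counts = Counter()
    for channel in channels:
        counts.update(fetch_recommendations(channel))
    return counts

if __name__ == "__main__":
    snapshot = daily_snapshot(CHANNELS)
    print(date.today(), snapshot.most_common(10))  # the day's most-recommended videos
```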

“The day before the [Pittsburgh] synagogue attack, the video that was most recommended was a David Icke video about George Soros controlling the world's money, shared to 40 channels, despite having only 800 views,” Chaslot told an audience on the Mozfest AI panel.

We checked later, and he’s right: here’s the day on AlgoTransparency, although clicking through now shows that it’s been watched over 75,000 times. While it would be a pretty big leap to associate a synagogue attack with YouTube pushing a conspiracy theory about a prominent Jewish billionaire – especially a video that appears to have, comparatively speaking, bombed at the time – it’s not a good look for Google.

AlgoTransparency is a bot that attempts to unpick YouTube's recommendation algorithm

“It makes sense from the algorithmic point of view, but from the society point of view, to have like an algorithm deciding what's important or not? It doesn't make any sense,” Chaslot told us in an interview after the panel. Indeed, the algorithm is hugely successful in terms of growth, but as others have reported, it has a tendency to push people to the extremes, as this New York Times experiment demonstrates.

It seems as if you are never ‘hard core’ enough for YouTube’s recommendation algorithm. Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons

Zeynep Tufekci

“It seems as if you are never ‘hard core’ enough for YouTube’s recommendation algorithm,” wrote the author Zeynep Tufekci in the piece. “Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.”

Some of us have the willpower to walk away, but an algorithm trained on billions of people has gotten pretty good at keeping others on the hook for one last video. “For me, YouTube tries to push plane landing videos because they have a history of me watching plane landing videos,” says Chaslot. “I don't want to watch plane landing videos, but when I see one I can't restrain myself from clicking on it,” he laughs.

Provoking division

Exploiting human attention isn’t just good for lining the pockets of social media giants and the YouTube stars who seem to have stumbled upon the secret formula of viral success. It’s also proved a handy tool for terrorists spreading propaganda and nation states looking to sow discord throughout the world. The Russian political adverts exposed in the wake of the Cambridge Analytica scandal were curiously non-partisan in nature, seeking to stir conflict between groups, rather than clearly siding with one party or another.

And just as YouTube’s algorithm has found that divisive extremes get results, so have nation states. “It's one part human, one part tech,” Watts told TechRadar after the panel discussion was over. “You have to understand the humans in order to be duping them, you know, if you're trying to influence them with disinformation or misinformation.”

You have to understand the humans in order to be duping them, you know, if you're trying to influence them with disinformation or misinformation.

Clint Watts, FBI

Russia has been particularly big on this: its infamous St Petersburg ‘troll factory’ grew from 25 to over 1,000 employees in two years. Does Watts think that nation states have been surprised at just how effective social media has been at pushing political goals?

“I mean, Russia was best at it,” he says. “They've always understood that sort of information warfare and they used it on their own populations. I think it was more successful than they even anticipated. 

“Look, it plays to authoritarians, and it's used either to suppress in repressive regimes or to mess with liberal democracies. So, yeah, I mean, cost to benefit, it's the next extension of cyberwarfare.”

Exploiting the algorithms

Although the algorithms that decide whether posts, tweets and videos sink or swim are kept completely under wraps (Chaslot says that even his fellow YouTube programmers couldn’t explain why one video might be exploding), nation states have the time and resources to figure them out in a way that regular users just don’t.

“Big state actors – the usual suspects – they know how the algorithm works, so they're able to impact it much better than individual YouTubers or people who watch YouTube,” Chaslot says. For that reason, he would like to see YouTube make its algorithm a lot more transparent: after all, if nation states are already gaming it effectively, then what’s the harm in giving regular users a fairer roll of the dice?

A lot of alt-right conspiracy theories get extremely amplified by the algorithm, but they still complain about being censored, so reality doesn't matter to them

Guillaume Chaslot, AlgoTransparency

It’s not just YouTube, either. Russian and Iranian troublemakers have proved effective at gaming Facebook’s algorithms, according to Chaslot, particularly by taking advantage of its preference for pushing posts from smaller groups. “You had an artificial intelligence that says, 'Hey, when you have a small group you're very likely to be interested in what it posts.' So they created these hundreds of thousands of very tiny groups that grew really fast.”
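
As a toy illustration of the dynamic Chaslot describes – emphatically not Facebook’s actual ranking code, just an assumed scoring rule – a feed that boosts posts from small groups makes spinning up thousands of tiny groups a cheap way to get content amplified:

```python
# Assumed, simplified feed-scoring rule: smaller groups get a bigger boost,
# on the theory that members care more about intimate communities.
# Not any platform's real ranking logic.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    group_size: int
    engagement: float  # hypothetical per-member engagement signal

def rank_score(post: Post) -> float:
    """Boost shrinks as the group grows, so small-group posts rank higher."""
    small_group_boost = 1.0 / (1.0 + post.group_size / 100)
    return post.engagement * (1.0 + small_group_boost)

posts = [
    Post("post from a 50-member group", 50, 0.4),
    Post("post from a 50,000-member group", 50_000, 0.4),
]
for p in sorted(posts, key=rank_score, reverse=True):
    print(round(rank_score(p), 3), p.text)
```

With identical engagement, the small-group post wins – so an operation that mass-produces tiny groups gets its content pushed ahead of organic posts from large communities.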

Why have social media companies been reluctant to tackle their algorithmic issues? Firstly, as anybody who has worked for a website will tell you, problems are prioritised according to size, and in pure numbers, these are small fry. As Chaslot explains, if, for example, 1% of users get radicalised by extreme content, or made to believe conspiracy theories, well, it’s just 1%. That’s a position it’s very easy to empathise with – until you remember that 1% of two billion is 20 million.

Censorship and oppression can be powerful tools in the hands of propagandists

But more than that, how can you measure mental impact? Video watch time is easy, but how can you tell if a video is influencing somebody for the worse until they act upon it? Even then, how can you prove that it was that video, that post, that tweet that pushed them over the edge? “When I talk to some of the Googlers, they were like 'some people having fun watching flat Earth conspiracy theories, they find them hilarious', and that's true,” says Chaslot. “But some of them are also in Nigeria where Boko Haram uses a flat Earth conspiracy to go and shoot geography teachers.”

Aside from that, there’s also the difficulty of deciding how far social media companies should intervene. One of the most powerful weapons in the propagandist’s arsenal is the claim of being censored, and heavy-handed intervention would play directly into their hands.

“We see alt-right conspiracy theorists saying that they are being decreased on YouTube, which is absolutely not true,” says Chaslot. “You can see it on AlgoTransparency: a lot of alt-right conspiracy theories get extremely amplified by the algorithm, but they still complain about being censored, so reality doesn't matter to them.”

They can change their terms of service all they want, [but] the manipulators are always going to dance inside whatever the changes are

Clint Watts, FBI

Despite this, the narrative of censorship and oppression has even been picked up by the President of the United States, so how can companies rein in their algorithms in a way that isn’t seen as pushing a hidden agenda?

“They're in a tough spot,” concedes Watts. “They can't really screen news without being seen as biased, and their terms of service are really only focused around violence or threats of violence. A lot of this is like mobilising to violence, maybe, but it's not specifically like ‘go attack this person’. They can change their terms of service all they want, [but] the manipulators are always going to dance inside whatever the changes are.”

This last point is important: social networks are constantly amending their terms of service to catch new issues as they arise, but inevitably they can’t catch everything. “You can't flag a video because it's untrue,” says Chaslot. “I mean they had to make a specific rule in the terms of service saying 'you can't harass survivors of mass shootings'. It doesn't make sense. You have to make rules for everything and then take down things.”

Can we fix it?

Despite this, Watts believes that social media companies are beginning to take the various problems seriously. “I think Facebook's moved a long way in a very short time,” he says, although he believes companies may be reaching the limits of what can be done unilaterally.

“They'll hit a point where they can't do much more unless you have governments and intelligence services cooperating with the social media companies saying ‘we know this account is not who they say they are’ and you're having a little bit of that in the US, but it'll have to grow just like we did against terrorism. This is exactly what we did against terrorism.” 

From the regulators' perspective, they don't understand tech as well as they understand donuts and tobacco

Clint Watts, FBI

Watts doesn’t exactly seem optimistic about regulators’ ability to get on top of the problem, though. “From the regulators' perspective, they don't understand tech as well as they understand donuts and tobacco,” he says. “We saw that when Mark Zuckerberg testified to the Senate of the United States. There were very few that really understood how to ask him questions.

“They really don't know what to do to not kill the industry. And certain parties want the industry killed so they can move their audiences to apps, so they can use artificial intelligence to better control the minds of their supporters."

FBI agent Clint Watts says the US Senate's questioning of Mark Zuckerberg showed how little regulators understand about technology

Not that this is all on government: far from it. “What was Facebook's thing? 'Move fast and break things?' And they did, they broke the most important thing: trust. If you move so fast that you break trust, you don't have an industry. Any industry you see take off like a rocket, I'm always waiting to see it come down like a rocket too.”

There is one positive to take from this article, though: the current tech and governmental elite are being replaced by younger generations who seem more aware of the internet’s pitfalls. As Watts says, young people are better at spotting fake information than their parents, and they give privacy a far higher priority than those of us taken in by the early social movers and shakers.

“Anecdotally, I mostly talk to old people in the US and I give them briefings,” says Watts. “Their immediate reaction is 'we've got to tell our kids about this.' I say: 'no, no – your kids have to tell you about this.'”

Alan Martin

Alan Martin is a freelance writer in London. He has bylines in Wired, CNET, Gizmodo UK (RIP), ShortList, TechRadar, The Evening Standard, City Metric, Macworld, Pocket Gamer, Expert Reviews, Coach, The Inquirer (RIP), Rock Paper Shotgun, Tom's Guide, T3, PC Pro, IT Pro, Stuff, Wareable and Trusted Reviews amongst others. He is no stranger to commercial work and has created content for brands such as Microsoft, OnePlus, Currys, Tesco, Merrell, Red Bull, ESET, LG and Timberland.