Update: On Monday, October 2, Facebook announced it had turned over the more than 3,000 ads linked to the Russia-tied Internet Research Agency to congressional investigators.
In a blog post, the social media giant also further detailed the steps it's taking to ensure greater transparency and authenticity of ads on the platform.
One such step is to hire more than 1,000 people over the next year who will review ads. Facebook also plans to invest more in machine learning to better identify and take down ads that violate its policies.
Original story continues below...
Facebook has found itself playing a prominent role in the investigation into US 2016 presidential election meddling, a position it may not have expected to be in just a few years ago.
Last week, Facebook general counsel Colin Stretch announced the social media giant would release 3,000 Russia-linked political ads to the House and Senate Intelligence Committees, after previously refusing to do so, citing privacy concerns.
This followed the revelation that Facebook had identified at least 470 fake Pages and accounts that spent approximately $100,000 on promoted ads from 2015 to 2016. According to The Washington Post, at least some of these accounts were linked to the Internet Research Agency, a so-called "troll farm" operated out of Russia.
Facebook CEO Mark Zuckerberg, who previously called the notion that fake news on Facebook influenced the election a “pretty crazy idea,” released a video last week outlining “the steps [Facebook is] taking to protect election integrity.”
In this piece, we’ll lay out what we know so far, what Facebook has promised to do in the future to ensure the integrity of elections around the world, and what questions we still don't have answers to.
Playing the system
On September 6, Facebook Chief Security Officer Alex Stamos revealed the company's findings: 470 Pages and accounts that purchased $100,000-worth of ads were “affiliated with one another and likely operated out of Russia.”
Stamos also noted that another $50,000-worth of ads were purchased by “accounts with US IP addresses but with the language set to Russian,” which “didn’t necessarily violate any policy or law” but raised red flags in hindsight.
The New York Times recently detailed how some fake accounts came to be, and the information - or, rather, misinformation - they spread. One profiled account belonged to a Melvin Redick, ”of Harrisburg, Pa, a friendly-looking American with a backward baseball cap and a young daughter,” someone who, it seems, doesn't exist. This account, like others of its kind, was used to spread divisive messages and start trending topics through promoted advertisements.
None of these ads received any scrutiny from Facebook. The company's self-service advertising interface lets users promote posts without employee oversight; only major ad campaigns from large companies receive human attention. Fake “individual” accounts working en masse thus escaped that scrutiny.
“[T]here was nothing necessarily noteworthy at the time about a foreign actor running an ad involving a social issue,” said Elliot Schrage, Vice President of Policy and Communications at Facebook. International NGOs, for example, might run an ad addressing women’s rights or encouraging charity donations. Only after the election, Schrage claims, did Facebook notice that some auto-approved ads might be “problematic.”
The “vast majority” of the ads, Stamos’ post stressed, “didn’t specifically reference the US presidential election, voting or a particular candidate.” Instead, the ads covered “topics from LGBT matters to race issues to immigration to gun rights,” focusing on “divisive social and political messages.”
But the New York Times says that some ads did mention President Trump and Democratic candidate Hillary Clinton by name, mostly “attacking” Clinton and “praising” Trump.
While Facebook has released the ads to Congress, it has refused to make the content of the ads public. Stretch says this is due to federal law, which “places strict limitations on the disclosure of account information.”
But Special Counsel Robert Mueller is reportedly taking a “red-hot” interest in this scandal, as he investigates Russia’s election meddling and the Trump campaign’s alleged communication with Russian government agents during the election. This gives some idea as to the content and political bent of the ads.
Stamos further revealed that a quarter of the ads were “geographically targeted.” Without further information, it’s impossible to know the US regions or communities where the ads were served. Election swing states like Michigan or Pennsylvania would be potential targets, but only if those posting them had inside information on which states, districts, or registered voters could be most susceptible to “divisive social messages.”
Adam Schiff, senior Democrat on the House Intelligence Committee, expressed this same concern. “Left unanswered in what we received from Facebook...is whether there was any coordination between these social media trolls and the [Trump] campaign. We have to get to the bottom of that.” Now that Facebook has released its data to the Committees, Schiff and his colleagues will investigate for any evidence of collusion.
While the ads themselves remain a mystery, journalists have linked some right-wing Facebook events directly to Russian-created accounts.
The Daily Beast discovered that Russian operatives remotely organized an “anti-immigrant, anti-Muslim rally” in August 2016 in Idaho through a Page called “SecuredBorders.” Business Insider reported that Heart of Texas, a Russia-backed group with about 225,000 followers, sponsored an “anti-Hillary” rally three days before the election. And Politico revealed that Russian operatives have promoted pro-secessionist propaganda in Texas since 2015.
What most experts agree on is that just about everyone, from tech companies to the United States intelligence community, was caught completely unaware by a foreign power's ability to “manipulate and influence elections” through social media.
“The surprise was the integration into a whole campaign,” said former NSA director Richard Ledgett. “It’s the amplification of some stories and the suppression of other stories to bias you. That’s really hard to fight against.”
Ledgett believes it highly unlikely that social media companies like Facebook had the capacity to discover the plot, considering the US government couldn’t.
Facebook has also come under fire for its so-called dark ads, as The Verge reports. These ads are created without permanent links, and vanish once users scroll by them in their News Feeds. This makes it difficult to track what kind of campaign messages advertisers are sending to voters.
Yet if Facebook was taken by surprise once, it’s now attempting to ensure it doesn’t happen again during future elections.
Earlier this year, ahead of the German elections, Facebook released a fact-check tool to allow users to check whether an advertised post came from a reputable source, and it joined forces with other tech companies like Google to cut down on fake news during the French election. The company reportedly deleted tens of thousands of fake accounts during the French election alone.
Chief Security Officer Stamos outlined other new policies the tech giant is implementing. Facebook uses machine learning to limit posts from low-quality web pages or links that disguise a post’s true destination through rerouting. It’s also using deprioritization to limit the exposure of posts with clickbait headlines or from Pages with news consistently marked as false.
Moving forward, Facebook plans to “make political advertising more transparent,” as detailed by Zuckerberg in a post. While TV ads are required by law to be publicly available and to disclose who footed the bill, internet ads have no such restrictions.
But Zuckerberg said that Facebook will “disclose which Page paid for an ad” and “make it so you can visit an advertiser's page and see the ads they're currently running to any audience on Facebook.”
Beyond providing 3,000 ads to Congress and the special counsel, the CEO said Facebook will continue its own investigation “into foreign actors, including additional Russian groups and other former Soviet states.” This includes doubling Facebook’s election integrity team to 250 members.
Zuckerberg was somewhat less enthusiastic when discussing censoring posts and reducing automation. “We don't check what people say before they say it, and frankly, I don't think our society should want us to,” he said. “Freedom means you don't have to ask permission first, and that by default you can say what you want.”
So, his solution to reducing false ads may rely more on catching illegal content after the fact, rather than moderating material before publication.
Facebook will work with a number of organizations to bolster the democratic process and counter trolls and bots, from “election commissions around the world” to ThreatExchange, its platform for sharing security threat data between companies.
While Zuckerberg stressed that it isn’t “realistic” to think Facebook will “be able to stop all interference” in the future, he certainly emphasized that it won't be blind to the problem any longer.
Of course, at the end of the day Facebook makes its money from ad revenue, and as The New York Times points out, tech companies are worried the government will use these revelations as a pretext to add more restrictions to anonymous online advertising.
Thus, Facebook is focused on self-regulation: conducting its own Russia investigation alongside Congress’s, policing its own ads while preserving the anonymous ad program, and stressing that the ads weren’t as influential on the election as you might assume. We’ll have to wait and see whether Congress and Mueller agree.