The internet, propaganda’s new frontier: social media companies under fire in election meddling investigations
126 million―that’s the approximate number of American Facebook users the social media giant estimated were exposed to content posted on the site by the Internet Research Agency (IRA), a Kremlin-linked Russian troll farm. 139 million―the total number of Americans who voted in the 2016 presidential election. This exposure was just one element of a much larger campaign to disinform the American public in the leadup to the election. Investigative reports and congressional inquiries continue to reveal the depth and scale of both foreign and domestic efforts to manipulate social media for political gain.
The Russia-linked advertisements were targeted at swing voters in key states and contained inflammatory political rhetoric and attempts to stoke racial resentment. However, Russia was not the only source of targeted online political advertisements during the campaign cycle.
In June 2016, the Trump campaign hired Cambridge Analytica, a London-based firm owned in part by Robert Mercer, a billionaire hedge-fund manager and co-owner of the alt-right news organization Breitbart. Former Trump advisor Steve Bannon previously served as the firm’s vice president. Cambridge Analytica uses information purchased from social media companies and other data brokers to build “psychographic profiles” of users’ personalities, which are then used to tailor ad content to a specific person or group of people, as well as to predict voting and consumer habits. Before working with the Trump campaign, Cambridge Analytica had offered its services, for free, to the right-wing proponents of the United Kingdom’s (UK) departure from the European Union (EU).
“We have somewhere close to four or five thousand data points on every adult in the United States,” Cambridge Analytica CEO Alexander Nix said at a presentation in 2016.
The Computational Propaganda project at the University of Oxford’s Oxford Internet Institute discovered that a significant amount of pro-Trump content was spread by automated social media accounts, or bots. According to the Oxford researchers, Trump’s bots outnumbered Democratic Party nominee Hillary Clinton’s five to one at the time of the election. These bots did not simply flood the internet with pro-Trump messaging, but were coordinated and actively responded to developments in the campaign by rotating variations of ads and amplifying the ads that received the most attention from social media users.
In early January 2017, the Office of the Director of National Intelligence presented a report to then-President-elect Trump. The report concluded that Russian Federation President Vladimir V. Putin had directed a widespread online disinformation campaign with the goal of denigrating Hillary Clinton and electing Donald Trump. The public report did not claim that Russia’s interference had a significant effect on the outcome of the election.
Facebook, Google and Twitter now face an investigation from the Senate Judiciary Committee, and Cambridge Analytica is being questioned by the House Permanent Select Committee on Intelligence regarding its role in the election.
Oxford’s Computational Propaganda project found that social media firms display a general lack of interest in regulating how their networks are being used, leaving most of the work to external fact-checkers like Snopes.com and the Associated Press. Facebook has provoked criticism for its dismissal of outside political influences in the past, with founder and CEO Mark Zuckerberg waving off the threat of fake news and disinformation just after the election.
“Personally I think the idea that fake news on Facebook, of which it’s a very small amount of the content, influenced the election in any way is a pretty crazy idea,” Zuckerberg said at a conference in November 2016.
Facebook has since tucked its tail between its legs and promised to hire more than 1,000 new employees to manually review political advertisements and to disclose, on the ads themselves, who paid for them. Facebook and Twitter have turned over advertisements, fake accounts and financial records to investigators. The goal of future legislation will be to limit the ease with which large networks of bots are able to manufacture consensus by showering dubious content with likes and reposts, making it appear reputable.
“The abuse of our platform to attempt state-sponsored manipulation of elections is a new challenge for us—and one that we are determined to meet,” Twitter’s acting general counsel, Sean Edgett, told the Senate subcommittee.
Social media companies are businesses. They collect data shared by their users and use that data to make money by selling targeted advertisements. This fact is perhaps the greatest obstacle to effectively regulating the way that political advertisements are handled by Facebook and Twitter. Twitter has been struggling financially in recent years, so the recent revelation that the company pitched a $1.5 million ad buy to Kremlin-backed media company RT comes as no surprise. Facebook and Twitter should not be passive bystanders; they have a responsibility to limit the abuse of their platforms, even if it means leaving money on the table.
A bipartisan bill introduced in October is one of the first legislative attempts to tackle these problems. The Honest Ads Act would require social media companies to disclose the content and purchasers of political ads.
“Online political advertising represents an enormous marketplace, and today there is almost no transparency,” Sen. Mark Warner, D-Va., a sponsor of the bill, said in a statement. “The Russians realized this, and took advantage in 2016 to spread disinformation and misinformation in an organized effort to divide and distract us.”
To prevent your browser activity from being tracked, install one or more of these plug-ins: uBlock Origin, Ghostery or Disconnect.
Information for this article was provided by the New York Times, Fortune, CBS News, the Independent, Fox News, the Guardian, Vox and Wired.