WASHINGTON — Earlier this week, representatives from Facebook, Google, and Twitter appeared before the U.S. Senate Committee on the Judiciary to answer questions about the use of their platforms in the dissemination of Russian disinformation before and after the 2016 U.S. presidential election. It was a predictable enough performance by the Silicon Valley giants, with moments of scripted contrition carefully couched between data framed to minimize the overall impact of Kremlin influence operations on their respective sites. After weathering two days of testimony and at times blistering questioning from committee members, the tech sector left doubts as to whether it is ready or willing to embrace the challenge of combating disinformation, beyond rather vague assurances that it is “committed to working with others.”
It has now been almost exactly a year since the election. While we still do not know the full extent of Russia’s attempts to influence voters and to manipulate and divide the American public, each day brings new revelations about the scale, scope, and sophistication of the Kremlin’s information operations. The notion that our social media networks have been weaponized and turned against us — once dismissed by Facebook’s chief executive, Mark Zuckerberg, as a “crazy idea” — now seems not only possible but plausible, particularly in light of Facebook’s recent disclosure that more than 146 million users potentially saw divisive content produced by Kremlin-linked accounts. It is clear that the tools we all hoped would be democratizing forces are being used by authoritarian countries to undermine the very freedoms those tools were meant to enable. What is less clear is what can and should be done to ensure that these platforms cannot continue to be abused, especially as technologies change and tactics evolve. Of course, those questions lead to a more fundamental one: do policymakers and tech executives even fully understand the problem?
To date, much of the public focus has been on tech companies’ admission that targeted political ads purchased by the Kremlin-connected Internet Research Agency were widely distributed on their platforms. But ads were and are just one small component of Kremlin influence operations online, especially on sites like Twitter and YouTube. So why have they received the most attention?
For starters, ads are easy to spot, relatively easy to trace, and they can be regulated through conventional legislation, such as the bipartisan Honest Ads Act, which would bring digital political advertising in line with the rules that apply to political ads in other media. Perhaps more importantly, though, their function and role in the online ecosystem are easy to understand, particularly for lawmakers who are not tech-savvy.
Social media companies’ willingness to divulge information on this front also suggests that they are more than happy to take their lumps on the sale of ads to foreign actors so long as it keeps the government’s focus away from issues that more directly threaten their business models. Additionally, the comparatively small amount of money spent by Kremlin affiliates on divisive ads reinforces the notion that Russian interference on their sites was minimal. In short, ads are the low-hanging fruit on the disinformation tree: easy for Congress to attack, and relatively painless for the tech companies to address.
Stopping the flow of disinformation, however, will require a more robust approach that takes into account the entire arsenal of tools used by the Kremlin. This will demand more candor from social media companies, which thus far have been less than forthcoming when it comes to addressing the thornier issue of “organic” content that appears on their sites — not to mention the role of social bots and fake accounts. Tackling these challenges will require investing resources and changing a culture that deifies new technologies without anticipating their exploitation by bad actors; avoiding them will invite far greater government intrusion and potential overreach.
Social media companies must also move from a siloed approach to one that embraces cross-platform solutions. Disinformation does not exist in isolation. It propagates not only within platforms but across them. As Clint Watts, a non-resident fellow with the Alliance for Securing Democracy who appeared before the Senate Judiciary Committee on Tuesday, testified, “Each social media platform serves a function, a role in an interlocking social media ecosystem where Russia infiltrates, engages, influences, and manipulates targeted American audiences.”
Just as financial criminals use a variety of tools and services to launder ill-gotten gains, purveyors of disinformation use different sites and techniques to move misleading content from questionable to credible sources. That process makes use of the full spectrum of social media tools to place, spread, and integrate disinformation into traditional media sources or trusted social circles. Any attempt to focus on an individual platform or a single disinformation tool is therefore likely to miss the totality of Russian efforts.
Moving forward, there needs to be greater cooperation between national security officials and tech executives to address potential vulnerabilities in new technologies before our adversaries identify them. This will also require more information sharing among tech companies, so that each understands not only its own platform’s vulnerabilities but also how its platform fits into the broader ecosystem of disinformation. These changes need to happen rapidly because, in the words of Clint Watts, “The U.S. government, social media companies, and democracies around the world don’t have any more time to wait.”
The views expressed in GMF publications and commentary are the views of the author alone.