What Congress Got Right and What It Overlooked at Last Week’s Deepfakes Hearing

June 20, 2019
Fellow for Emerging Technologies

The House Permanent Select Committee on Intelligence held its first hearing devoted to the national security implications of artificial intelligence in the information space last week. “For more than five centuries, authors have used variations of the phrase, ‘seeing is believing.’ But in just the past half-decade, we’ve come to realize that that’s no longer always true,” testified Dr. David Doermann, founder of MediFor, a Defense Department research program on media forensics. He’s right: deepfakes and synthetic media risk blurring the line between fact and fiction, undermining a key foundation of democracy: objective reality. Democratic governance relies on an informed citizenry to choose political leaders and an evidence-based justice system to uphold the rule of law. Both require a strict separation between what is true and what is not to be effective and fair.

The hearing highlighted three important considerations. First, research advances in AI are democratizing access to deepfakes. Manipulated content follows a trend true of dual-use technology writ large — what was previously tested only in AI labs is now increasingly available to the average person. For example, software developed by researchers at Adobe, Princeton, Stanford, and Germany’s Max Planck Institute allows users to input their own text for the subject of a deepfake video to speak. While this particular software is not yet public, some tools are, and it is only a matter of time before editing software like this is available to anyone. Even cruder tools are already being deployed to influence public discourse and political debate. “Shallow fakes” (also known as “cheapfakes”), such as the Nancy Pelosi video, slowed to make her appear incapacitated, or the Jim Acosta video, sped up to make him appear more aggressive toward a young White House aide, can go just as viral and be just as damaging as sophisticated deepfakes.

Second, the adage that “a lie travels halfway around the world while the truth is getting its shoes on” is apt. Both academic research and social media evidence indicate that debunking manipulated media content after the fact is an uphill battle. According to Brown University cognitive psychologist Steven Sloman, “We know that the primary determinant of whether people believe a message is whether or not they agree with it.” Even when a debunk succeeds, fewer people see the truth than the lie: in 2017, content debunking the top 50 false stories on Facebook received 0.5 percent of the engagement that the original stories did.

Third, removing manipulated content wholesale is not an option, given fraught free-speech implications. Parody and satire are central to democratic discourse. Social media companies — which have enormous adjudicatory power in this domain — may need to evaluate how their terms of service stack up against a proliferation of manipulated media. Regardless, they must enforce those terms uniformly and make them accessible and understandable to the average user, so that determinations on what stays up, what is removed, and what is labeled as false are based on policy, not politics.

Three things were missing from the hearing. First, while most of the hype focuses on deepfake videos, synthetic text — especially combined with precise microtargeting — may be equally damaging. Generative machine learning systems like the non-profit research organization OpenAI’s GPT-2 produce realistic-sounding stories from a supplied news lead, complete with spoofed quotations from government officials and journalistic prose. They will enable malign actors to strike just the right tone with just the right audience to convey just the right political message to influence behavior. And as a completely open platform, Twitter is a laboratory for information operators to assess which narratives succeed and to refine them.

Authoritarian actors have already shown a propensity to use AI to control and influence the information arena. Russia sees AI as a means of winning information wars. China uses AI for sophisticated censorship of its own citizens. The prospect that autocrats will use synthetic media to undermine democratic systems and quash discourse in their own societies has been relatively unexplored.

Also absent was a thorough discussion of technical options to counter the rise of synthetic content. This attention matters because Congress can allocate funds toward research and development of such solutions. It can also pressure social media companies, often AI powerhouses, to invest in them. Ideas to consider range from automatic digital signatures and watermarks on all content, to blockchain-based verification, to restricting the availability of audio training data by holding telecom companies accountable for robocalls. Given the near-impossibility of stopping compelling fakes once they go viral, research into better “digital prophylactics” (like authenticity architectures using the blockchain) needs to be on the table. Lawmakers should call for creativity, as well as an assessment of the technical attractiveness and feasibility of each option.

Lastly, lawmakers should publicly air the need for best practices in handling AI-enabled deceptive information for media outlets, campaigns, and even voters in 2020. As with cyberthreats, the technical cat-and-mouse game of creating and debunking manipulated media favors the attacker; ultimately, advanced manipulators will outpace our ability to stop them. Therefore, all pillars of democratic society — government officials, lawmakers, political campaigns and candidates, the private sector, and the media — need to focus now on anticipating and addressing what will increasingly become a threat to the global information environment. Traditional media’s reporting on weaponized information during the 2016 presidential election offers lessons relevant to covering new forms of synthetic video and audio. To avoid a race to the bottom, campaigns should pledge not to produce or share deepfakes. Voters and social media users, too, bear responsibility for the content they share. And cross-societal public appeals on matters that concern the foundations of our democracy fall on the shoulders of elected officials.

If the Senate Select Committee on Intelligence takes up these threats, it should explore these issues in depth before Congress puts pen to paper, to ensure its actions address the full range of challenges AI poses to our information space.

The views expressed in GMF publications and commentary are the views of the author alone.