Deepfake technology uses artificial intelligence to create or modify videos, images, and audio recordings, placing individuals in situations they were never in or putting false words in their mouths. It has burst onto the scene as the latest technological threat to the democratic political process.

Amid a swell of public interest in and concern over deepfakes, Twitter and Facebook have both recently announced policies for handling synthetic and manipulated media on their platforms. In the last week, Reddit and TikTok have also amended their community standards to mention manipulated media, potentially presaging larger policy changes.

Our side-by-side comparison and analysis of Twitter’s and Facebook’s policies highlights that Facebook focuses on a narrow, technical type of manipulation, while Twitter’s approach contemplates the broader context and impact of manipulated media. Both platforms’ policies would benefit from greater transparency and accountability around the challenging and unenviable decisions required in this space.

Key points of comparison are highlighted in the following chart:

[Chart: side-by-side comparison of Twitter’s and Facebook’s manipulated media policies]
Three key takeaways follow:

First, Facebook’s policy is extremely narrow in scope. It applies only to videos, and among those, only to videos that have been generated or altered through artificial intelligence or machine learning. So-called “cheapfakes,” videos misleadingly edited without AI, such as those that have targeted U.S. politicians in recent months, are not covered under Facebook’s policy. In addition, the policy covers only audio manipulations of videos, and specifically manipulations that portray a video’s subject speaking words they did not in fact say. It fails to account for deepfakes that spoof actions rather than words. Yet synthetic or manipulated actions can be just as misleading: the Jim Acosta video that was sped up to present the reporter as aggressively pushing aside a female aide is a case in point. Finally, Facebook provides an exception for edits that only “omit or change the order of words.” While any manipulated-media policy is likely to struggle with false positives given the scope of the challenge, this exception allows manipulators who misleadingly edit out key context for statements, or splice spoken words together in a different order, to continue wreaking havoc.

Second, Twitter’s criteria for removing content assess whether manipulated media is likely to cause “serious harm,” while Facebook’s do not. The types of serious harm Twitter identifies include: a) threats to the physical safety or privacy of a person or group; b) the risk of mass violence or civil unrest; and c) threats to the ability of a person or group to freely express themselves or participate in civic events. These criteria are subjective and would be improved by a more transparent determination process, but they speak to underlying concerns about the impact of manipulated media on democratic political and social life. In particular, the explicit interest in protecting civic participation is noteworthy for upholding the health of democratic processes in the face of foreign interference. In 2016, for example, the Russian election interference effort included targeted attempts to suppress voter turnout among African Americans, some of which used manipulated media. The “text-to-vote” campaign ran fake ads to fool Hillary Clinton supporters into thinking they could vote early by texting a five-digit number instead of appearing at the polls.

Third, Twitter outlines an option to annotate manipulated media that falls under the less extreme sections of its policy, while Facebook will simply remove all content that falls under its policy. Specifically, where media is manipulated but is not both deceptively presented and likely to cause serious harm, Twitter’s policy gives the option of labeling it, warning users who interact with it, or adding clarifications to it. While Facebook has said it may use warning labels in specific cases of false information, it has not spelled out a similar tiered system in policy. When it comes to deepfakes, removal is the only stated action: it is all or nothing.

The views expressed in GMF publications and commentary are the views of the author alone.