On Thursday, the House Permanent Select Committee on Intelligence will host a panel of experts to discuss the national security challenges posed by artificial intelligence (AI), manipulated media, and “deepfake” technology. The committee will examine current and future capabilities, as well as ongoing and proposed detection and policing measures.
The risks and effects of reality-distorting content will be central to the discussion, but questions about the respective roles of the private and public sectors, as well as the normative issues inherent in any content moderation debate, should also be probed in detail.
The panel will include Jack Clark, policy director of OpenAI; David Doermann, director of the Artificial Intelligence Institute at the University at Buffalo; Danielle Citron, professor of law at the University of Maryland; and Clint Watts, senior fellow at the Alliance for Securing Democracy, German Marshall Fund.
ASD’s emerging technology fellow Lindsay Gorman, disinformation fellow Bret Schafer, and non-resident fellow and deepfake expert Aviv Ovadya outlined questions the committee might ask the panelists.
Questions from Emerging Technologies Fellow Lindsay Gorman, whose work focuses on the national security and democracy implications of emerging technologies, including artificial intelligence, 5G, and blockchain.
- Can you comment on the landscape of technical solutions to counter deepfakes and/or verify the authenticity of online content (e.g., blockchain-based solutions like Truepic)? Which do you view as the most promising, and what investment in resources or R&D would be necessary to build resilience in our information space?
- Are our adversaries already using deepfakes to fuel disinformation? What evidence or early indicators do we have of the misuse of manipulated content?
- Is it possible to police or detect manipulated or deepfaked media on an end-to-end encrypted platform? As we think about regulation of tech firms to counter disinformation and shore up user data protections, will Facebook’s “move to privacy” make it easier, harder, or impossible to address manipulated content?
Questions from Non-Resident Fellow Aviv Ovadya, an independent researcher and technologist who has sounded the alarm about, and worked to address, online and AI-driven misinformation since before the 2016 US election. His work to improve the information ecosystem touches on misinformation, synthetic media (e.g., deepfakes), content moderation, recommendation engines, distributed sensemaking, contextualization, platform design, governance, and media literacy.
- How relevant are the measures of success used in deepfake forensics research to real-world use cases? What are the obstacles to applying these tools in real-world environments (e.g., on platforms) in time for the primaries?
- It is clearly important to ensure that the tech platforms are less vulnerable to manipulated media, but what about telecom providers and infrastructure? How far along are we in ensuring that phone numbers can no longer be spoofed, so that fake voices are less effective? How can the government help or move faster here? What are the obstacles to ensuring this is also possible internationally, where fakery and hacking can have significant geopolitical implications for the United States?
- Platforms already serve ads before, and sometimes even during and after, videos. Is there any reason they cannot replace those ads with disclaimers noting that a video has been manipulated?
Questions from Media and Digital Disinformation Fellow Bret Schafer, who concentrates on the ways in which authoritarian and foreign actors use inauthentic accounts, fake personas, and other tools to hijack online social movements and interfere in democratic processes and institutions.
- What responsibility do the creators of synthetic media technologies (e.g., Adobe Cloak) have to develop detection and forensics tools that can identify manipulated content their technology helped to create?
- Do social media platforms need updated terms of service that directly address deepfakes? That is, should manipulated content be moderated differently than opinions or news articles that are false or potentially misleading?
- If we ask internet companies to take on an expanded role in policing fake media, do we run the risk of moderating everything from Hollywood films to fashion photography? More precisely, how should they (and society in general) differentiate between deepfakes and content that has been edited for creative or persuasive purposes?
The views expressed in GMF publications and commentary are the views of the author alone.