Today, representatives from Facebook, Twitter, and Google, as well as several government officials, will testify before the House Committee on Oversight and Reform. The hearing will serve as an opportunity for congressional leaders to gauge the government’s response to ongoing efforts of foreign powers to interfere in U.S. elections, examine the role of the private sector in ensuring election security, and assess avenues of cooperation between the government and the tech companies to mitigate foreign influence on digital platforms.

The latter topic may be the most consequential – the promotion of public-private partnerships to flag and neutralize foreign-backed information operations is central to election security and the health of our democracy in the digital age. With advancements in technology and an increase in the number of state and non-state actors possessing the capabilities to penetrate critical infrastructure and adversely influence public discourse, foreign interference operations will continue to evolve. In the run-up to the 2020 presidential election, for example, foreign actors may employ entirely novel methods of interference. As such, the most important step the United States can take to secure future elections is to proactively identify and mitigate new threats quickly. This starts with building a more cooperative relationship with big tech.

The committee should also assess whether the federal government can better cooperate with state and local officials. As recent election security issues in Florida and Maryland have demonstrated, foreign interference operations target all levels of government, so an active whole-of-government approach is necessary. The committee should inquire about the current avenues of cooperation on election threats between federal, state, and local governments and address ways by which this cooperation can be enhanced.

Finally, the committee should evaluate ways to promote transparency online, for example by mandating that political advertisements on online platforms display their sponsors and the cost of the advertisement, or by detailing to consumers why they were selected to see a targeted ad. Showing this information to consumers could mitigate the misleading effects of malicious content, deepening faith in democracy and the media and removing an avenue of interference that foreign adversaries have exploited in previous election cycles.

With this in mind, here are a few questions the committee might pose to the panel, compiled by ASD experts:

Overarching question:

  • In the lead-up to the 2018 midterm elections, there was improved cooperation between tech platform companies and the U.S. government on disinformation threats aimed at influencing American voters. What steps are private companies, the Department of Homeland Security, and other U.S. government agencies taking now – 18 months before the 2020 presidential election – to ensure better and more comprehensive cooperation to defend against election-related threats well in advance of the vote?

Question for Mr. Bill Galvin, Secretary of the Commonwealth, Commonwealth of Massachusetts:

  • Most of the focus on election security has been on preventing cyber-attacks by foreign actors seeking to undermine the integrity of the vote. Guarding against disinformation targeting American voters and raising awareness about the types of information operations foreign actors may be waging to influence the electorate are equally important. Do you consider awareness raising of this type part of your mandate as Secretary of State? How much attention is the state government paying to the threat of disinformation and what cooperation, if any, has the state had with the tech platform companies?

Questions for Mr. Nathaniel Gleicher, Head of Cybersecurity Policy, Facebook:

  • In March, Facebook announced a shift to a “privacy-focused” model and called it the digital equivalent of the living room. Two tenets of that vision were encryption and reducing the permanence of communications. That model risks exacerbating the spread of disinformation. End-to-end encryption makes it more difficult to police information operations, to measure the scope and evolution of the problem, and to provide transparency and accountability in how we address it. As we’ve seen with WhatsApp in India, encrypted misinformation can go viral, in one instance leading to real-life mob killings. When Facebook decided to shift toward privacy, were the increased difficulties in dealing with disinformation considered? What are your plans under this new model to police malign actors, understand how foreign actors are attempting to manipulate our domestic political discourse, and allow input and accountability from the public?
  • Facebook has shifted its focus from policing content to policing inauthentic behavior, allowing the company to go after bad actors without infringing on speech. Once Facebook has taken down inauthentic accounts, is there a process in place that includes sharing information about them with other social media companies and appropriate government agencies? When accounts are taken down, the public, research institutions, and non-profits are not given many details about the actors. Why not? Do you see the value in preserving the record for researchers and oversight committees like this one so that we can have better insight into the actions of foreign governments on your platform?

Question for Mr. Kevin Kane, Public Policy Manager, Twitter:

  • Public debate over the role that social media companies should play in curbing the spread of false or harmful content online has focused heavily on whether or not we want tech companies to be the arbiters of truth. But the spread of disinformation online is often achieved through what Facebook has termed “coordinated, inauthentic behavior,” meaning that the problem is often less about the content than the behavior of those disseminating the content. What has Twitter done to identify and disrupt networks of bad actors on the platform, who either misrepresent themselves or use automation or other malicious tools to amplify certain messages? Does Twitter currently have plans to implement more stringent standards to ensure that real users are behind accounts on its platform?

Question for Mr. Richard Salgado, Director of Law Enforcement & Information Security, Google:

  • Conspiracy theories and disinformation narratives from Russian propaganda outlets on topics ranging from the Nord Stream 2 pipeline to the Assad regime’s use of chemical weapons have regularly found a foothold among Google’s Top Stories. How does Google plan to tackle this challenge?
  • Adversarial actors understand that search engine prioritization is critical to their efforts to shape the information environment. Technologists warn that search results are particularly vulnerable when search engines have little natural content to return for a particular query. These “data voids” are more likely to return problematic, biased, or manipulated content because there is little high-quality content for search algorithms to surface. Malign actors use data voids to shape narratives around topics of interest by guiding users to search terms that they have effectively coopted, meaning that users are almost assured to see results curated by those actors. How does Google plan to address this threat?

Question for Mr. Christopher Krebs, Director, Cybersecurity and Infrastructure Security Agency, on behalf of the U.S. Department of Homeland Security:

  • The recent news about Russian hackers penetrating the voter registration databases of two Florida counties is particularly troubling. It also speaks to the need for better information sharing, particularly on election threats, between the federal government and state and local governments. What is CISA doing – and what is your parent agency DHS doing – to ensure better coordination among the levels of government? Would additional resources, appropriated through legislation, help?
  • In addition to the lack of communication between federal and local law enforcement and officials, the lack of consistent public messaging and clarity around the facts prompted confusion among the public. Our law enforcement agencies need to respond and share information based on the public’s need to know. Should there be required reporting from law enforcement agencies to depoliticize such intrusions?
  • DHS reportedly reduced the size of its task forces on election security and countering foreign influence after the midterms, despite the threat of foreign interference continuing after election day. Do you plan to increase the size of these task forces prior to the 2020 election? Or does this reduction mean you believe the threat has diminished?
  • In Maryland, one of the key election vendors that provides a range of services to the state, including voter registration infrastructure, was acquired by a parent company with links to a Russian oligarch. Is there a vetting process that companies involved in U.S. elections must go through? How do we know we are secure from a cyber-physical standpoint?

The views expressed in GMF publications and commentary are the views of the author alone.