In April 2021, ASD held a security and geopolitics workshop, “Tensions: Authoritarian Internet Threats and Democratic Responses,” as part of the Good Web project. Participants debated the merits of the different responses democracies have taken to authoritarian Internet threats, the methodologies used to inform those decisions, and their implications. They included experts in cybersecurity, cyber policy, foreign policy, defense policy, and emerging technology, with regional expertise on Europe, the United States, Japan, India, Brazil, Russia, and China.
Key takeaways: A central factor in determining appropriate responses to authoritarian technology threats is the extent to which they intrude into, or have the potential to disrupt, critical infrastructure in democratic societies—be it voting, communications, the Internet, or next-generation foundational technologies. But what constitutes critical infrastructure is changing, and it is hard to predict the point at which an emerging technology, app, or information-based service tips from being something interesting or useful into becoming critical infrastructure.
As democracies seek to build joint and allied responses to these threats, they cannot escape geopolitics. The strategic landscape beyond the Internet and technology space will always matter—particularly if democracies seek a model with attractive power globally. But democracies can build stronger support through evidence sharing and a common threat picture. One concrete starting place is developing a common understanding of critical information and technology infrastructure.
Context: Squaring China’s authoritarian rise with its deep integration in the global economy has raised questions in democracies over how to address the threats posed by the presence, or predominance, in democratic societies of technology developed in, and ultimately accountable to, an authoritarian regime. In recent years, many democracies have decided to exclude Chinese telecommunications provider Huawei from the build-out of 5G networks on national security grounds. Some have also raised concerns over the video-sharing app TikTok for its opaque suppression of content and its potential to be leveraged by an authoritarian government as an intelligence tool. Open societies face an inherent tension between broad allowance of participation and the weaponization of that openness by adversaries. There is also a risk that responses taken without a clear process mimic the very closed and authoritarian systems they seek to guard against. At present, there is little consensus on standards for evaluating threats and risk, and the evidence for action against authoritarian threats can be unclear because the risks themselves are ill-defined.
Why it matters: We cannot build a “democratic Internet” or a joint vision for emerging technologies without understanding and reckoning with the differences among democracies themselves. It will be impossible even for democracies to agree on everything, particularly at the tactical level of economic interests, industrial policies, and regulatory cultures. But despite different understandings of shared principles and values, democracies have a shared commitment to the rules-based international order and should be able to agree on common rules of the road for others to play by when doing business with them. Having these rules regarding technology would place democracies in a stronger position to deal with countries that have a different vision of the Internet and digital space.
Points of tension and roadblocks:
- Insufficiency of evidence. When it comes to many technological threats, the uncomfortable reality is that public evidence of malfeasance cannot be the only factor driving decisions. Evidence-based rubrics are important, but much of the evidence democracies would rely on to make national security determinations, such as algorithms and code, is treated as proprietary corporate secrets. The challenge is how democracies make decisions without transparent access to evidence when many corporate structures do not support that transparency.
- Economic and security tensions. In some democracies, decisions on technologies and the digital space have suffered from being “under-securitized,” with security concerns glossed over or given insufficient attention. Instead, these decisions have been driven primarily, if not exclusively, by economic concerns.
- Private sector competition among democracies. Although democracies from the United States and the EU to Japan and Australia are “likeminded” and share common security interests, technology development is largely driven by the private sector, which operates in global competition. Companies will not cooperate against their own interests, and countries and regions want to build and champion their own technologies. Recent moves toward independence in the semiconductor industry are a case in point. Market fragmentation can obstruct the pursuit of larger strategic goals.
What we agree on: The authoritarian Internet and emerging technologies pose real threats to democracies. As the public debate over TikTok showed, effective democratic responses can be hamstrung by a lack of precision in talking about risks and a lack of transparent process. To build a common operating picture around emerging technologies, democracies need improved coordination and better sharing of information and evidence on risks and threats.
What we disagree on: At issue are both the evidence of the severity of authoritarian technology threats and how high the bar of evidence needs to be to justify action. In some democracies, such as India and Taiwan, a technology’s country of origin provides a strong enough rationale for action. Others argue for a more country-neutral approach. There is also disagreement on whether a country-neutral approach can adequately capture and address threats, and at what point country of origin matters when it comes to authoritarian technologies.
Where we place the burden of proof and the benefit of the doubt also varies. In many democracies, the burden of proof lies with governments to show bad behavior, such as data exfiltration or malign influence. Under some circumstances, perhaps the onus should be on platforms to prove their trustworthiness.
Bottom line: Despite disagreements at the tactical level, democracies retain common interests. As such, a coherent democratic Internet policy must allow for a range of implementations across democracies, based on a shared threat picture and within certain bounds. As democracies consider specific evidence, they should be on the same page regarding what constitutes critical infrastructure and what the corresponding security risks may be.
The framing of the Prague 5G Security Conference—which outlined vendor-neutral (but not governance-neutral) criteria for evaluating the security of 5G equipment and formed the basis for the EU’s 5G Toolbox—could be extended to all critical information infrastructure.
Finally, risk-based evidence should be a strong factor in technology decision-making, but it is not the only one. For many countries, strategic and geopolitical considerations will ultimately determine whether they ban or prefer technology from one country over another.
The views expressed in GMF publications and commentary are the views of the author alone.