Below is an unedited transcript of the proceedings provided by the Australian Parliament.
GORMAN, Ms Lindsay, Senior Fellow for Emerging Technologies and Head, Technology and Geopolitics Team, Alliance for Securing Democracy, German Marshall Fund [by video link]
[09:12]
CHAIR: Welcome. You are joining us from Washington DC. As was noted in your invitation, people giving evidence outside of Australia are not protected by parliamentary privilege. I now invite you to make a short opening statement, after which the committee members will ask you questions.
Ms Gorman : Thank you, distinguished members of the committee, for inviting me to address you on the evolution of foreign interference on social media and, specifically, the PRC’s influence operations.
Social media remains a key front of authoritarian asymmetric influence operations against democracies. Today, at the Alliance for Securing Democracy, we track over 1,400 overt official diplomatic and state media accounts from China, Russia, and Iran. At the same time, social media is best understood as one element in a broader toolkit to influence and interfere in democratic institutions and to control and weaponise the information environment. These low-cost operations exploit a fundamental asymmetry in the way democracies and autocracies view and engage with information. Democracies depend on a free exchange of information that empowers citizens to make decisions, select their representatives and engage in political discourse. Authoritarian regimes, by contrast, view information as a threat to state authority if allowed to flow freely and as an instrument of social control if weaponised deftly.
I will focus my remarks today on three key trends: the evolution of PRC tactics and engagement on social media; new platforms and new threats; and new technologies and risks. First, on the evolution of China’s influence operations: over the last five years, PRC diplomats and state media outlets have flocked to social media to spread China’s message to the world, increase its discourse power, denigrate the United States and the international world order, and position itself as an alternative to democracy.
Three key geopolitical events have accelerated the development of Chinese influence on social media. First, the 2019 Hong Kong pro-democracy protests prompted a significant influx of Chinese diplomats to Twitter. For months they worked to discredit the protests and to frame the CCP’s National Security Law and subsequent arrests as proportionate.
Second, starting in early 2020, Chinese diplomats and state media went into overdrive to shape global discourse on the COVID-19 pandemic. At this time, China’s ‘wolf warrior’ diplomacy grew, replete with conspiracy theories over the pandemic’s origins as well as attacks on politicians perceived as threatening China’s core interests, such as the future of Taiwan or human rights abuses in Xinjiang. AUKUS is one such issue that draws consistently negative PRC messaging on social media, accusing the agreement of destabilising regional security and encouraging nuclear proliferation.
Third, Russia’s war in Ukraine and the growing rapprochement between Beijing and Moscow have seen a mirroring of narratives and a two-way amplification between the countries on social media. As tech platforms have sought to limit the reach of Russian state media, Chinese counterparts have acted as a conduit for Russian narratives, blaming NATO aggression, hyping up Russia’s resistance to Western sanctions and minimising Russia’s invasion. This Sino-Russian convergence in the information space is indicative of a more expansive cooperation, including on technology and internet governance, in the ‘no-limits partnership’.
On new platforms and new threats, mainly TikTok, the global rise of TikTok presents two intersecting and overlapping national security concerns: those of data security and algorithmic manipulation. The United States, the UK and Australia have all banned TikTok on government devices for reasons of the former, and debates continue over the prospect of foreign influence on the platform. Our research has found that, despite the professed separation between the CCP and TikTok, PRC diplomats and state media accounts have gone to bat for TikTok, with messaging campaigns designed to paint the app in a benign light, hype up TikTok’s popularity and the consequences of the ban, denigrate the US political system as hostile to business, and portray criticism of TikTok or China as xenophobic, often drawing on familiar tropes. We also found that 78 Russian state media accounts on TikTok had accumulated more than 319 million likes. In short, they were more popular on TikTok than the New York Times.
The scope of PRC influence operations raises foreign influence concerns as TikTok becomes more and more a site of political discourse in democracies. TikTok has reportedly suppressed content unfavourable to the Chinese Communist Party and spied on journalists reporting on the company. Social media recommendation algorithms are inherently opaque in how they promote or demote content to users. This lack of transparency can provide an avenue for foreign influence on social media to go undetected. TikTok’s heating button, for example, reportedly allows employees to manually make specific content go viral. If the CCP were to direct TikTok to heat particular content, it would be extremely difficult to discern this manipulation.
Lastly, on new technologies and new risks with generative AI, disinformation and distrust, the rise of ChatGPT and large language models, as well as image generation services such as Midjourney and DALL-E, provides new avenues for the automation of disinformation on social media. Deepfake images, video and text can enable automated propaganda. The rise of content that looks plausible but is not necessarily true also runs the risk of undermining broader trust in the information environment, which is critical for quality information to flourish in democracies. As democracies contemplate regulation on AI, restoring trust in the information environment and guarding against authoritarian information manipulation should be at the forefront of these conversations.
I offer the committee three recommendations. The first is to join with democratic allies and partners to develop a comprehensive framework for addressing the threats posed by authoritarian internet apps and critical information infrastructure. Given Australia’s strong work on foreign interference, it is naturally poised to take a leading role in these efforts. The second is to develop legislative or policy frameworks for AI that promote trust and target harms, including the threat of authoritarian interference using AI. The third is to invest in the adoption of content authenticity frameworks that allow non-manipulated information to look qualitatively different from manipulated information, as well as other democracy-affirming technologies such as privacy-preserving AI. I look forward to your questions and to continuing engagement with the committee on this matter.
CHAIR: Thank you very much for that opening statement, which covered a lot of ground very relevant to the committee and issues we’ve been canvassing with other witnesses so far. I just want to highlight—I know you’re not appearing in this capacity—your professional background, which includes working for the Biden White House and for Senator Warner, a Democrat in the Senate. The reason I mention that is to demonstrate what I think you can speak to, which is that in the United States this is a non-partisan or bipartisan issue. There’s concern across the political spectrum about these security issues, particularly, in recent times, in relation to TikTok. Is that a fair summary?
Ms Gorman : It absolutely is. I would add that I was honoured to work with Australian colleagues during my government service. But, yes, in Washington DC, we’ve absolutely seen a bipartisan agreement on the threats of TikTok, both from the data security perspective and from the CCP influence concerns.
CHAIR: I’m really interested in your perspective, therefore, on the potential or likely pathways in US policy to tackle these challenges. There are obviously a number of competing pieces of legislation now before the congress: Senator Warner’s RESTRICT Act, a house Republican bill to ban TikTok outright, initiatives by the administration to potentially force ByteDance to divest TikTok and, of course, Project Texas. I’m interested in your assessment of the likelihood of any of those particular policies being adopted and your view on which is going to be the most effective in dealing with that problem.
Ms Gorman : I’ll start with your second question on which would be the most effective. I definitely think that a forced divestiture is the way to go here. That would be the most effective. We can’t ignore the fact that TikTok is incredibly popular, not just in the United States but around the world. We have 150 million users on TikTok. In some way the TikTok problem, which has become something of a debacle, is the result of the fact that we were late to the game in addressing the national security concerns around TikTok when it burst onto the scene in 2018 and 2019 and we started raising concerns about data security, propaganda and algorithmic influence. As a result, TikTok has grown its user base and become a part of our information environment. That makes it very, very difficult to contemplate something like a ban. That is why it’s my view that a forced divestiture, whereby the concerns could be partially mitigated by the removal of TikTok’s Chinese ownership by ByteDance, would allow TikTok to keep operating in democracies without that overwhelming threat of foreign influence and interference.
I’m less sanguine about our ability to achieve that alone as the United States, because this process has been ongoing for three or four years now. We have significant concerns about the ability to actually force ByteDance to sell TikTok. That’s why we’ve seen a number of legislative proposals that would give more authority—for example, to the commerce department in the RESTRICT Act—to potentially ban TikTok or other apps like it. We’ve seen action even at the state level of the United States because of this really delayed process at the federal level. One of the things we would definitely stress is that we need this comprehensive framework, whether it’s the RESTRICT Act or something that’s done at a multilateral level, so that we can get ahead of these threats before they become one of the most popular apps in the country and in the world, and we can address them head-on before getting into the scenario that is not a great one that we find ourselves in today.
CHAIR : I think that’s really well put. Obviously the biggest roadblock to forcing ByteDance to sell TikTok is that the Chinese government is unlikely to agree to permit it and may, in fact, attempt to prevent it. If we get to that stalemate, what do you think is most likely to happen there and what do you think would be most preferable to happen there?
Ms Gorman : The most preferable scenario, I think, would be if it’s not just the United States acting alone in trying to force this kind of action on TikTok. When we get to the level of nation-state actors, it should not be the United States versus China over TikTok. I think there’s a role for international pressure here. If the United States is forced into a position of banning TikTok because they can’t sell it, or they just allow it to keep operating, that’s not a good position for any country to be in. But if more and more countries start raising the same concerns then I think a divestiture becomes less deleterious to ByteDance, and there’s a chance that China could accede to it. Without that international pressure, I certainly hope that a divestiture would be accomplished, but I also recognise the reality that China has said that it is not something that it will support or potentially even allow. We may be faced with a situation where the United States says, ‘Here are the conditions for operating in the United States,’ and TikTok can decide to continue operations or not. Some of these bans that we’ve seen are definitely going to be challenged on first amendment grounds in the United States, and it’s anyone’s guess as to whether they will be upheld and go through the courts.
CHAIR: Yes. The Chinese government’s objection to the sale of TikTok kind of proves the point that it is recognised as an asset or a tool of the Chinese Communist Party. It would be hard to understand why TikTok itself as a company would object to being sold to a non-Chinese owner, given that they already argue that they’re not controlled by the Chinese government or the Chinese Communist Party, that they’re independent and that they only operate in Western countries. It would be hard to understand why they would object to it unless, of course, they do value connection with their Chinese parent company and all the benefits that flow from that. I’ll share the call with other members of the committee and I might come back if there is some time at the end. I’ll hand over to the deputy chair, Senator Walsh.
Senator WALSH: Thank you very much, Ms Gorman, for being with us. Over the last day, questions that we’ve been considering and getting evidence about are: to what extent should we be looking at solutions that are, effectively, platform neutral and futureproofed for future apps and platforms that may emerge, and to what extent should we be focused on specific solutions for individual platforms like TikTok, as you’re dealing with in the US? Can you speak to that? Is it a bit from column A and a bit from column B? How do you view that challenge?
Ms Gorman : I would be very much in favour of column A—that is, having futureproofed solutions that could apply to multiple platforms—as opposed to just a ‘whack a mole’ approach on specific platforms. But I would give a caveat to that, which is that having futureproofed solutions does not mean having solutions that apply equally to every platform, no matter their country of origin. This is something that we have to, I think, recognise. Internet infrastructure and technology infrastructure—and that includes popular social media platforms—that are based in authoritarian countries have a degree of influence and a risk of foreign influence, particularly given the motivations of autocratic actors to interfere and influence democracies. That puts them in another category. So, while we absolutely need broader regulations and policies that apply to all social media platforms no matter their country of origin, there is another category where there is a higher degree of suspicion when a social media company has ties to an authoritarian state because of the history of information warfare and weaponisation of information and propaganda deliberately aimed at democracies.
Senator WALSH: I will ask you to go through the recommendations that you offered the committee again because you went through them very quickly and I literally couldn’t write them all down. I’ll ask you to expand on those in a second but, before I do, I’ll ask this: if the recommendations that you’ve given us were in place in Australia, in the US and around the world, would they deal with the special problem that you’re identifying in relation to platforms that are prone to influence from authoritarian regimes?
Ms Gorman : Yes, they would. My first recommendation was to come up with this overarching framework that includes the threat of platforms from authoritarian regimes and really spells out what the concerns are. Is it scale? Is it the degree of ownership and influence? Is it the type of platform itself? There may be a different kind of risk for an ecommerce app, where the primary risk might be on data security, versus a content generation and selection platform, such as TikTok, where there is this algorithmic influence concern. So I think there needs to be a much broader framework and a much clearer framework so that when authoritarian internet apps do come to democracies, as they inevitably will in open societies, we have a framework for addressing it, and we don’t have to wait until it becomes this behemoth of an issue to deal with a specific platform.
Senator WALSH: Your second recommendation, I think, was in relation to AI, which we’ve also had evidence on over the past day and a half. Is it the case that AI just makes everything exponentially more of a problem—we have more sources of misinformation and disinformation—and that that can be dealt with within the kind of framework that you’ve outlined in recommendation 1, or do we also need to be looking at AI regulation separately?
Ms Gorman : I think the latter. We need to be looking at AI regulation separately. I would say there are two areas where AI-generated text, image and video content in particular pose unique problems. The first is the democratisation of the ability to create extremely realistic but completely fake content. Before, it took computer science labs and more technical expertise to generate this kind of completely fake material; now anyone can do it. That increases avenues for propaganda. The second concern that should not be understated is that the way these large language models—ChatGPT et cetera—work is that they’re designed to spoof and generate content that looks plausible, and sounds like it might be written by a human, but isn’t necessarily true. It can be completely fabricated. I’m concerned that, in a world where we increasingly rely on these tools, our knowledge base of what is actually true is put at risk and our very trust in information becomes downgraded. That’s why we really need systems that preserve that knowledge base and public record—our libraries, for example—in this new AI era, as well as the ability to distinguish what’s manipulated from what’s not.
Senator WALSH: Your third recommendation was about investing in the adoption of content transparency regulations and systems. Can you expand on that for us? With a lens towards the issue of data localisation, is that in itself important in being able to have effective transparency measures in place, or is there some other way in which we can potentially compel platforms to provide information that is consistent with what nation-state-specific regulation might be?
Ms Gorman : One of the pieces of this puzzle that we’re really interested in is the affirmative vision. There’s an effort to control, regulate and mitigate against threats, but there should also be an effort to build better systems for tomorrow—the technology platforms and the structures that we use today are not the ones that we’re going to use forever, and every generation of technology presents a new opportunity to redesign and recreate and build our values in a little bit better. For example, with content authenticity frameworks, we need to invest in the adoption of these tools. They already exist. There are standards that are being created that allow for digital watermarking of images so you can tell when an image has been manipulated and when it hasn’t, and you can track the history of that image. Those need to be commonplace. They should be on all social media platforms so that we know when an image has been modified and when it hasn’t.
Similarly, with AI technologies such as privacy-preserving AI, we should be building privacy—by design—into these AI systems. It would make these questions of data a little bit less of a concern—not fully, but a little bit less so—if our business models are not quite as reliant on the full-scale aggregation of data because we have these privacy-protective solutions.
In terms of data localisation, the Japanese model of a free flow of data with trust is a great one. The data localisation model is perhaps an inherently authoritarian model of controlling—that’s what China and Russia do in not allowing data to leave the country—but we have to strike a balance between open commerce, especially between and among democracies, and this recognition that data in today’s world is a strategic asset. That means allowing countries access to that. When we think about the global south, their data is being exfiltrated, and developing countries should have the ability to derive economic value from their own data.
Senator WALSH: I’ll just ask a follow-up before going back to the chair to reallocate the call. You talked about things like requiring platforms to do watermarking of images. We’ve heard evidence that it would be incredibly useful if platforms were required to tell consumers and governments what trends and examples of misinformation or disinformation they’re seeing and what the sources of those are, which they’re starting to do on a voluntary basis. Do you see that there’s a need for global approaches to this? Can individual countries deal with this by themselves, or does there need to be a global partnership approach to achieving these sorts of reforms?
Ms Gorman : Yes, I would definitely advocate for a global partnership approach, because these threats are not unique to one country or another. Many times, they’re shared among democracies. In the United States in 2016, when we were the subject of Russian election interference, one of the things we found when our organisation looked into this was that it was not a new problem and that many countries had experienced similar things before it came to the United States, and have since. So addressing these threats holistically is definitely the way to go. It’s also easier for interoperability, I think. Especially when we’re talking about AI regulation, having more commonality among democratic countries in particular can only be beneficial. Which is not to say that there aren’t already existing efforts to share information on threat actors across nations, and those should absolutely continue.
Senator WALSH: Thank you so much, Ms Gorman. That was really useful evidence.
CHAIR: We will go to Senator Shoebridge.
Senator SHOEBRIDGE: Thank you very much for your evidence today. I find it really useful. Of course, when we’re talking about the Russian interference in the US election, that was the pre-TikTok era, wasn’t it? It was primarily Facebook, but also Instagram and other platforms, that was being misused for the purposes of disinformation in the 2016 election. Is that right?
Ms Gorman : That’s right, yes.
Senator SHOEBRIDGE: The moves that we’re hearing with Project Texas and others focused on TikTok are not going to address those existing and, I would say, in some ways since that time, even aggravated risks from those platforms, will they?
Ms Gorman : It’s true that that focus on TikTok—the concerns that have been raised on TikTok, particularly around the Chinese ownership, and questions of a forced divestiture of TikTok or proposals that have come to ban TikTok entirely—would have no impact in that regard on other social media platforms. I do think it’s important to note, though, that we often hear that authoritarian actors can do damage, even without owning a social media platform like TikTok, because that is what they have done on Facebook and Twitter and other platforms. But I really think that the social media platforms, for all their faults, have certainly taken steps since 2016 to get better at tracking these kinds of networks. Giving that ownership, particularly over the algorithm and the ability to influence the algorithm, into the hands of a company based in an authoritarian regime is, I think, just too great a degree of influence to leave to the goodwill of another state.
Senator SHOEBRIDGE: If we do it platform by platform, we’re going to be playing an endless digital game of whack-a-mole. As they pop up, we’ll whack them on the head and, inevitably, through that process, we’ll be years behind the actual, emerging threats. Also, I would imagine, a platform-by-platform approach is going to give you constitutional problems in the United States. Do you agree that a platform-by-platform approach is, at best, suboptimal?
Ms Gorman : Wholeheartedly, I do not think a platform-by-platform approach is remotely effective, as we are seeing with TikTok today. If we had had this comprehensive framework that we recommend in place years ago, we would have addressed TikTok back in 2019 or 2020 and we would be ready for the next one. It is absolutely a game of whack-a-mole if we are taking it platform by platform. Already, ByteDance is now pushing a new app, Lemon8, with perhaps some of the same issues as TikTok. So we can never get out of this by just doing it platform by platform. We absolutely need a much more holistic framework that really gets at: what is the risk here?
Senator SHOEBRIDGE: Do you agree with the recommendation from other submitters like Human Rights Watch that we need a platform neutral policy of data minimisation to prevent this accretion of large data pools that reach into our online behaviour and breach what many would suggest should be our core privacy rights? Do you agree that there should be a platform neutral data minimisation approach as a significant part of the response?
Ms Gorman : I do, but I think it is a ‘yes and’. There are some facets of technology regulation that should be absolutely platform neutral, and I would say that data privacy and minimisation is one of them. That would shore up a lot of the vulnerabilities that we already have. Then there are going to be some that have to deal, in an overarching framework kind of way, with the country in which a technology platform originates, and that should also be across the board. So maybe two tiers of regulatory action, none of them platform specific. But I would definitely agree that, on the data side, having something that is platform neutral that would apply to all technology platforms, no matter which country they came from, would be an effective way of bolstering our defences.
Senator SHOEBRIDGE: Have you raised concerns about the fact that Twitter’s second-largest shareholder is an investment company part owned by a member of the Saudi royal family and part owned by the Saudi Investment Fund, given the fact—and I hope we agree on this—that Saudi Arabia is a deeply repressive authoritarian state?
Ms Gorman : I think that is a concern. That is exactly the ownership and influence question that we really need to get at. If Twitter or any other platform ended up being bought by an authoritarian country, I think some of the same concerns that we have with TikTok would apply.
Senator SHOEBRIDGE: That is not an academic question; that is the reality. The second-largest investor—
Ms Gorman : I think the problem—
Senator SHOEBRIDGE: It is not an academic question. What is happening in the United States to address that threat?
Ms Gorman : I haven’t seen that much action on it in the United States specifically. There may be some action that I am unaware of, but I agree that that kind of influence is incredibly problematic and it raises questions over whether such a platform will take aggressive action against inappropriate activity, information operations and interference on the platform.
Senator SHOEBRIDGE: I think that strategic investment upped at the end of last year. Since then, Twitter has removed its human rights teams globally—defunded and removed them. Does that add to your concerns or does it not add to your concerns?
Ms Gorman : It does, yes.
Senator SHOEBRIDGE: I come back to a concern I have, which is that we have a very focused political debate on just one jurisdiction, China, and just one platform, TikTok; whereas we actually have global threats that are across the board when it comes to these large, multinational social media platforms. So I come back to this question that, surely, we should be having platform neutral responses to this.
Ms Gorman : I think, yes, we should have platform neutral responses, especially if we can get at what the core issues are. I think you have highlighted a really important one, which is the ownership and influence by authoritarian states. There are different ways in which that is mediated out in the case of Twitter or in the case of TikTok, and maybe other social media platforms would have another one. But, yes, I think that when there is an adversarial investment or interest in an ownership stake in a social media platform that we would consider critical information infrastructure in democracies, that absolutely should be part of this framework.
Senator SHOEBRIDGE: We’re probably never going to remove it entirely, but if we want to limit the ability of aggressive third parties, foreign nations that may want to harm our democracies, one of the best ways of doing that is to limit their capacity to do that by defanging the platforms—removing the data retention and focusing on privacy controls on the platform. If we do that well it has an a priori effect of limiting the capacity of noxious players to influence our democracies, doesn’t it?
Ms Gorman : It does, yes.
Senator SHOEBRIDGE: Thanks, Chair.
CHAIR: I have one quick follow-up question arising from Senator Shoebridge’s questions. I agree with him that part of the solution is going to be a comprehensive response that applies to all platforms, but I think it’s instructive to look at what has happened in Europe in recent times in relation to TikTok. The European Union GDPR is generally recognised as the most advanced, or most robust, privacy protection framework in the world, yet despite already having had that in place for many years, they’ve still had to take specific action in relation to TikTok in the case of government users’ devices. The European Commission has banned it and many national governments covered by the EU GDPR have also banned it—France, Denmark and various others. That does demonstrate that this is, as you say, not an either/or but a both. We need a robust framework which applies to everyone, but we also need to recognise the heightened risk posed by some applications, particularly those headquartered in authoritarian states, which need direct action targeted to them as well.
Ms Gorman : That’s exactly right. On the government bans, that was really born out of my concern No. 1 on data security. We know that China and Russia—especially China—have very active espionage campaigns against liberal democracies to steal information, both in the traditional national security sense and also in the corporate espionage sense. Frankly, it seems like a no-brainer to ban not just TikTok but any app that’s based in an authoritarian country from government devices. That seems like a huge security risk from a cybersecurity and data security perspective. That’s a separate question, though a related one, to these broader concerns of foreign influence, propaganda and algorithmic selection of information that apply to the broader public. That is precisely one example of where we can’t just treat all platforms always equivalently, despite the need for this broader framework that can address all platforms.
CHAIR: I also take Senator Shoebridge’s point that a strategic stake or investment from someone associated with the royal family in Saudi Arabia on Twitter, or from their national pension fund, does provide an avenue or an opportunity for potential influence, and if that was borne out and proven, that would be very concerning and we’d have to address it. Nonetheless, I don’t think we should draw a false moral equivalence between WeChat or TikTok on the one hand and Twitter on the other, because it’s not just a potential avenue for influence. It’s an actual avenue for influence that we’ve seen on those platforms which are headquartered in authoritarian states versus Twitter, which is headquartered in a liberal democracy, subject to the rule of law and all the checks and balances that come with that, in the case of Twitter.
Ms Gorman : I think that’s right. There are different aspects of influence and ownership. One could argue that they apply in different ways to different circumstances. For example, if you’re concerned about foreign election interference, then you might be much more concerned about a platform headquartered in a country that has demonstrated an interest in influencing elections in democracies. If you were more concerned about data espionage, you might also be concerned about that former category. But just being headquartered in a liberal democracy does not mean that there couldn’t be an overwhelming ownership influence. I don’t know that we’ve seen that yet, but that’s something that I would be concerned about too.
CHAIR: I agree with that. It is a potential risk we need to be alive to. One other difference is that Twitter, because it is headquartered in the United States, is not subject to, for example, the 2017 National Intelligence Law of China, which TikTok and WeChat are subject to. That’s a very direct means of control that the Chinese government has over those applications that the Saudi Arabian government does not have over Twitter. It might have an indirect opportunity for control, but it doesn’t have that direct control legislatively.
Ms Gorman : Certainly from the data security perspective, that’s exactly right, and also the protection of courts in a democracy. In China there isn’t this independent judiciary that can say, ‘Well, no, that isn’t a valid request for data from a social media platform.’ Whereas, yes, a company that’s based in a liberal democracy has that independent judiciary check when it comes to social media platforms. So, yes, these are definitely different scenarios, and that’s why we really need something that will spell out some of these and get at what the risk is, whether it’s in the ownership, influence, or the place it is headquartered. The Prague proposals on 5G internet security are a fantastic starting point for this kind of legislative development which does identify countries of origin while also trying to promote standards that apply to any internet platform.
CHAIR: Ms Gorman, thank you so much for your time this evening—Washington, DC, time. We’re grateful for you joining the committee and assisting us with your expertise.
Ms Gorman : Thank you for having me.