Meta’s recent report on the company’s takedown of a Chinese government information campaign that was the “largest cross-platform operation” ever, and of several smaller influence campaigns originating in Turkey, Iran, and Russia, named hundreds of fake domains and accounts scattered across 16 social media platforms. One week after the report’s release, GMF’s Alliance for Securing Democracy team reviewed those accounts and found that 11 platforms had not removed any of them. X (formerly Twitter) took down 10% of them, Medium 40%, and TikTok 62%, though that last figure was lower before critical reporting in The Guardian prompted the video-sharing app to suspend the China-linked accounts that Meta had identified in its report. In contrast, Reddit removed 95% of the accounts in the report.

Many of the platforms that Meta listed hosted only a handful of covert accounts, and most of those accounts never attracted sizable audiences. Nevertheless, online platforms’ general disregard for Meta’s findings shows that Big Tech companies lack either the ability or the will to cooperate on disrupting coordinated inauthentic behavior. Bad actors will continue to coordinate the spread of disinformation across multiple platforms, and social media sites must work together to confront that challenge.

Platform Inaction

Reddit hosted the largest number of inauthentic accounts, according to the sample data in Meta’s report, but was also the platform most likely to remove them. Of the 62 accounts in question, two remain active, as does one subreddit, but they have virtually no audience. The two accounts each have one karma, a score that reflects their low contribution to the platform; the subreddit has one user.

Medium hosted the second-largest number of accounts (50) that Meta identified and has deactivated 20 of them. Many of the 30 remaining active accounts feature only a few posts that cover a wide range of topics. They push claims, for example, that the United States bombed the Nord Stream II pipeline and that Pfizer produces COVID-19 variants.

Meta found that information campaigns originating in Iran, Turkey, and China spread messages on X. The report named 48 X accounts across those three networks.

Meta’s report also showed that threat actors exploit other platforms, including YouTube, Tumblr, Venmo, SoundCloud, Pinterest, Flickr, and LinkedIn. While none of these platforms hosted more than 20 inauthentic accounts, they also took no measures to address the limited influence operations supposedly happening on their sites. The Guardian prodded TikTok to act against identified Chinese government accounts, but the platform left up three accounts linked to campaigns based in Turkey.

Why It Matters

Meta’s report highlighted covert networks that generally either boasted large audiences but were ineffective or attracted small but growing ones. The lack of cross-platform coordination described in this blog does not, therefore, signal an immediate threat. However, it suggests that social media platforms are not collectively prepared to address potential future dangers. It also indicates that many sites have insufficient incentives to monitor coordinated inauthentic behavior. Meta and Reddit stand out for their efforts in this area, but other platforms largely failed to act against purportedly inauthentic accounts. TikTok is under considerable regulatory pressure to distance itself from Chinese information operations, yet even it did the minimum, acting only after a media inquiry about Chinese accounts and doing nothing about other suspect accounts.

Social media platforms with large user bases should be required, as the EU’s Digital Services Act mandates, to monitor and regularly report on emerging threats. But if technology companies limit their monitoring to their own platforms without incorporating intelligence from industry peers, they will, collectively, remain ill-prepared for coordinated, widespread influence campaigns.

The views expressed in GMF publications and commentary are the views of the author alone.