Last week, Democrats on the House Intelligence Committee released a trove of more than 3,500 Facebook ads purchased by the St. Petersburg-based Internet Research Agency (IRA) from 2015 to 2017. For the most part, the release confirms what we already knew: Accounts based in Russia exploited America’s societal fissures to sow chaos in the United States in order to weaken our democratic structures, force us to turn inward, and thereby increase Russia’s standing in the world.

Any short news piece on the release of the ads will report that the so-called “troll factory” pushed divisive content along racial, sexual, religious, and other societal fault lines. It might focus on the ads’ often-nonsensical use of English. But taken holistically, three trends emerge that are not evident when only highlighting the most divisive content:

1. Russia understood the power of the positive.

Among the posts inciting hatred toward immigrants, police, homophobes, and Hillary Clinton, the largest and most successful communities cultivated by the IRA utilized positive messaging to build community and trust among their subscribers. Pages targeting African Americans highlighted accomplishments of black scientists, activists, and other prominent people of color, in addition to “raising awareness” about negative issues such as police brutality. A page billed as an LGBT community shared heartwarming stories of same-sex marriages and tolerance as well as stories about injustices faced by gay Americans. And the famous “Being Patriotic” page aimed at American conservatives shared feel-good images of dogs in American flag bandanas and Reagan-era nostalgia alongside commentary about illegal immigration, gun rights, and Secretary Clinton’s purported role in the Benghazi attacks. As is visible from the top-performing posts, all sought to promote pride in whatever affinity group they targeted.

Through these positive messages, the IRA built credibility and engagement within each of its target audiences, establishing significant followings of between 100,000 and 300,000 “likes.” The IRA understood that, for the same reason users are likely to see their friends’ job and birth announcements more than their less positive posts, upbeat content performs well on Facebook. It also engenders a sense of community, priming followers to be more likely to respond to content and asks that came with a higher cost.

2. After building its audience, Russia used progressively bigger asks to mobilize support.

After building credibility, community, and engagement, the IRA pages were in a prime position to ask more of their subscribers. They began with low-level asks, such as applying profile picture frames in shows of solidarity. Gradually, the asks grew from symbolic gestures to more active ones, from signing petitions to attending in-person protests, some of which had hundreds of RSVPs.

In several confirmed instances, including two “dueling” rallies in Texas, online engagement translated into offline action. While it is difficult to track precisely how regularly this happened, what is clear is that IRA-sponsored content did affect discourse surrounding key issues within its target communities, whether manifesting itself in the guise of a profile picture frame, a petition signature, a comment or a share.

3. The ads released are a small portion of the troll factory’s content, much of which traveled far of its own accord.

Though scrolling through the ads one by one seems an endless exercise, the 3,500 posts that the IRA promoted with Facebook’s advertising services are a sliver of the roughly 200,000 pieces of content the IRA targeted at Americans between 2015 and 2017. We will likely never see much of that content, but the ads give us a window into how it performed. While some ads themselves performed poorly, generating few impressions and fewer clicks, the content associated with them at times generated hundreds or even thousands of engagements without an ad buy behind it.

These trends present two conundrums as the social media companies, the United States, and allied and partner governments attempt to mount a response to Russia’s and other bad actors’ online influence. First, Facebook and others are now regulating the paid promotion of “political” content, but much of the content in the Russian ads — promoting pride in target audiences’ ethnicity, religion, or sexuality — was not strictly political. By building community, it primed audiences for wider political engagement. Further, while the social media companies have focused on paid content in their responses to disinformation and sought to downplay its reach and effect on discourse, last week’s release shows that the organic reach of IRA content was more effective than they would like to admit. No amount of ad transparency will eliminate the problem of high-performing, dis-informing organic content.

As we have progressively been given a window into more IRA content, its “sophistication” (or lack thereof) has been repeatedly called into question. Many have asked, “How could people fall for that?” But the ad dump makes clear that they did, and that social media companies must embrace their roles as gatekeepers and content curators, ensuring that platforms’ own algorithms do not unwittingly contribute to the spread of malicious content. Finally, both social media companies and governments have a duty to users and constituents to invest further in critical thinking skills and public awareness of the Internet’s darker side. Empowering individuals with information about how their personal data is used, how malign actors — whether foreign or domestic — can and do target them, and how users can protect themselves is a public good. It may not contribute to profit, but it is a safeguard for civil discourse and democracy — institutions that, as the Russian ads confirm, are very much under threat.

Nina Jankowicz is a global fellow at the Wilson Center’s Kennan Institute.

The views expressed in GMF publications and commentary are the views of the author alone.