Clint Watts is a Non-Resident Fellow at the Alliance for Securing Democracy and a Distinguished Research Fellow at the Foreign Policy Research Institute. He is the author of Messing with the Enemy: Surviving in a Social Media World of Hackers, Terrorists, Russians, and Fake News.

This brief is part three of a three-part series. It outlines the stages in a social media kill chain for analyzing and mitigating the efforts of Advanced Persistent Manipulators.

A Wall Street Journal report last year described how foreign manipulators targeted American military veterans via an imposter Facebook page called “Vietnam Vets of America.” The page, administered by accounts as far away as the Philippines, not only stole the trademark and name of the genuine veterans’ service organization, it also had more followers than its authentic namesake. Kris Goldsmith, the Army veteran who spotted the bogus social media front, has since flagged dozens of other dubious-looking pages, many of which trace back to Russia, Eastern Europe, and the Middle East. If one online sleuth can spot dozens of these pages from home, why can’t the social media companies, which now claim to employ tens of thousands of moderators, see the persistent manipulation corroding their platforms?

Technology companies need an intelligence-led approach, one that systematically detects the technical and human characteristics of repeat offenders. In short, they need a social media kill chain.

Cybersecurity experts’ battles with Advanced Persistent Threats (APTs) – well-resourced, ongoing threats in cyberspace – led them to seek new approaches for detecting and mitigating swarms of network intruders. Lockheed Martin’s cybersecurity team diagnosed a cyber-attack’s progression through seven stages (reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on objectives) and created a framework called the “cyber kill chain,” a systematic model of an APT’s intrusion, to better understand where defensive measures should be applied. The kill chain helps cybersecurity specialists rapidly detect network intruders, reduce response times to breaches, and ultimately raise costs for attackers seeking to penetrate their systems. Over the past decade, it has helped outnumbered and outgunned cybersecurity specialists close the gap with their attackers.

Social media platform defenders, outnumbered by malicious actors, face similar challenges from “Advanced Persistent Manipulators” (APMs), whose nefarious influence efforts pose an existential threat to the industry. Like APTs, APMs are well-funded, highly skilled actors pursuing goals in the cyber arena. Instead of hacking and subverting secure systems, APMs exploit vulnerabilities in social media platforms and the human psyche to manipulate public discourse.

Social media companies have achieved varying degrees of success in countering APMs, but the industry as a whole continues to trail some of the worst offending extremists and nation-states. This failure is now prompting calls for government regulation around the world. Escalating threats, innovative tactics, and a growing number of new APMs make it necessary for the social media industry to develop a kill chain of its own. Some social media companies already pursue parts of this workflow internally, and independent experts like Bruce Schneier have written about kill chains from an information operations perspective. These kill chain frameworks are all valuable and serve different defensive purposes. The approach outlined here offers my perspective on the manipulation process pursued by the APMs I have studied in recent years.

The Advanced Persistent Manipulator Kill Chain

The APM kill chain begins with staging. Conducting an influence campaign requires the constant generation of social media personas that appear to be a natural part of the community a manipulator seeks to influence. These accounts need the right technical signatures – for example, they should appear to come from the target country – and the right visual cues to look real to other users, at least at first glance. New accounts have to be consistently created and maintained across different social media platforms, and older accounts appear more believable and thus hold more value. Russia’s Internet Research Agency (IRA), for example, spent substantial resources on proxy servers and SIM cards to stage social media accounts.
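For platform defenders, staging leaves technical traces. The sketch below is a minimal, illustrative heuristic for surfacing that activity; the account fields, thresholds, and the idea of clustering on shared signup infrastructure are assumptions for demonstration, not any platform’s actual detection logic.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Account:
    account_id: str
    created_day: int        # days since epoch, used for burst detection
    signup_ip_block: str    # e.g., a proxy or datacenter IP range (hypothetical field)
    phone_prefix: str       # prefix of the SIM number used for verification (hypothetical field)

def flag_staging_clusters(accounts, min_cluster=5, max_day_spread=3):
    """Group accounts by shared signup infrastructure and flag clusters
    created within a narrow time window -- a rough proxy for bulk staging."""
    clusters = defaultdict(list)
    for acct in accounts:
        clusters[(acct.signup_ip_block, acct.phone_prefix)].append(acct)

    flagged = []
    for group in clusters.values():
        days = [a.created_day for a in group]
        if len(group) >= min_cluster and max(days) - min(days) <= max_day_spread:
            flagged.append(group)
    return flagged
```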

Staging occurs routinely, regardless of an APM’s influence objectives, but reconnaissance starts with the decision to infiltrate a targeted audience. APMs must assess the size, preferences, and potential vulnerabilities of an audience. The most sophisticated actors conduct their own internal social media audience assessments, seeking to understand which platforms their target audiences use, the behavior of users on these platforms, users’ aggregate preferences, and the factions and issues that define sub-groups within the audience. Many openly available social media analytics packages can assist in this target audience assessment; prior to 2016, social media companies sought to enhance online political actors’ – and by extension, APMs’ – understanding of their target audiences. Cambridge Analytica’s services offer the best example of sophisticated reconnaissance by an APM, and there are dozens, if not hundreds, of firms capable of providing such social media insights for a fee.
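To make the reconnaissance step concrete, the sketch below shows the kind of coarse audience summary an analytics package might produce from public posts. The input format and field names are hypothetical; the point is simply that platform preferences and the topics defining sub-groups can be tallied from openly available data.

```python
from collections import Counter

def audience_snapshot(posts):
    """posts: a list of dicts such as
    {"author": "user123", "platform": "facebook", "topics": ["veterans", "gun rights"]}.
    Returns a coarse summary: dominant platforms, the topics that define
    sub-groups, and the number of distinct authors observed."""
    platforms = Counter(p["platform"] for p in posts)
    topics = Counter(t for p in posts for t in p["topics"])
    return {
        "top_platforms": platforms.most_common(3),
        "defining_topics": topics.most_common(10),
        "unique_authors": len({p["author"] for p in posts}),
    }
```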

Armed with deep audience insights, an APM moves to its next phase, targeting: identifying and classifying the audience segments to be manipulated. APMs generally have a message they want elevated within the audience they seek to engage. Targeting assessments identify natural supporters of the APM’s position, detractors of it, and decisive segments that the APM may be able to win over. Supporters, known as agents-of-influence in Kremlin parlance, come in two varieties: the unwitting, commonly referred to as “Useful Idiots,” and the witting, known as “Fellow Travelers.” Smart APMs target personas within the audience that hold outsized influence among those likely to support the manipulator’s preferred agenda, namely celebrities, media personalities, government officials, and activists. Finally, a complete targeting package includes the themes, narratives, hashtags, and phrases to be advanced within the audience space to elevate supporters and divide opponents. Cambridge Analytica whistleblower Christopher Wylie has talked at length about how these packages are created and field-tested.

After the planning stages, the first operational step in the kill chain is mimicry. Successfully infiltrating an audience requires staged personas that look and talk like the audience they are emulating. Manipulators cannot simply create accounts and push their preferred message on the targeted audience – a common failure of overt American counterterrorism influence efforts. Instead, APMs must invest months replicating organic target audience conversations by retweeting key influencers, re-sharing posts, commenting on audience posts, and even circulating local news items to appear geographically and linguistically part of the audience. Social media analytics packages and social bots can help automate some of the mimicry so that APMs can concentrate time and resources on more intensive manipulation efforts.

The next step, placement, requires the creation of organic-looking, engaging content that advances the manipulator’s desired position within the targeted audience. Smart manipulators tend to use organic content from the target audience that supports their desired narratives, but legitimate content rarely satisfies all of the manipulator’s objectives. Content placement occurs in many forms and with a range of sophistication: news items from state-sponsored news services, social media memes, digital forgeries, or even specially selected research presenting manipulated findings or completely bogus conclusions from sponsored or fringe think tanks and research centers. During the Soviet era, the Kremlin dedicated an entire compartment of its Active Measures cadre to creating forgeries. In 2016, the IRA assigned operators to rewrite news articles and create American-looking memes for placement on social media sites. This content is then placed through staged social media accounts, alternative and mainstream news outlets, and anonymous posting sites. Examples of how this occurs today include “Alice Donovan,” a pseudonym used by a Kremlin troll to publish articles in Western media that advanced Russian interests in the Middle East. Most recently, Michael Isikoff of Yahoo News reported that the Seth Rich murder conspiracy theory arose from a fake Russian foreign intelligence service bulletin circulated on whatdoesitmean.com.

The amplification of APM narratives is essential for changing target audience perceptions, sowing doubt about counterarguments, and distorting reality to the manipulator’s advantage. Computational propaganda has also significantly enhanced APMs’ ability to amplify placed content, organic content supporting the manipulator’s narrative, and key influencers from afar. Manipulators frequently use networks of social bots to like or share favorable content, creating the illusion of consensus on the issue. More advanced actors might use their network to harass key users with opposing views or to amplify disagreement among opponents to create infighting. The most well-resourced APMs invest in the deployment of fringe news sites, television and radio broadcasts, and pundits to spread their messages and positions from multiple points. As machine learning develops in this area, the list of tasks that can be automated is likely to expand.
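From the defender’s side, this kind of bot-driven amplification often shows up as clusters of accounts boosting the same items over the same window. The sketch below illustrates one common heuristic, pairwise overlap of amplified content; the input shape and thresholds are assumptions for illustration, not any platform’s actual detection pipeline.

```python
from itertools import combinations

def coordination_pairs(shares, min_overlap=10, threshold=0.7):
    """shares: dict mapping account_id -> set of item_ids the account
    liked or reshared within a given time window.
    Returns account pairs whose amplified items overlap heavily
    (Jaccard similarity above threshold) -- a rough signal that the
    accounts may be acting as part of one bot network."""
    suspicious = []
    for (a, items_a), (b, items_b) in combinations(shares.items(), 2):
        overlap = items_a & items_b
        if len(overlap) < min_overlap:
            continue
        jaccard = len(overlap) / len(items_a | items_b)
        if jaccard >= threshold:
            suspicious.append((a, b, round(jaccard, 2)))
    return suspicious
```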

Finally, enduring APMs that have successfully infiltrated an audience through mimicry, placement, and amplification can mobilize targets to undertake larger actions toward the manipulator’s ultimate goal. Signs that an audience stands ready for mobilization include organic audience members reposting and sharing staged content without prompting, repeating key phrases, willingly using promoted hashtags, engaging via direct messages with manipulator accounts, and showing interest in physical events created by the APM or other targets. Mobilization is the culmination of a full-scale social media influence effort. At this stage, manipulators are able to create a movement around injected foreign narratives, push staged content using key influencers, create and steer dialogue around chosen narratives, diffuse compromising information throughout the larger audience space, stage physical rallies, and shape the real-world actions of their supporters.
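The mobilization signals listed above also lend themselves to simple measurement. As a purely illustrative sketch, in which the signal names and the two-signal threshold are invented for demonstration, a defender could estimate how much of an audience is exhibiting them:

```python
MOBILIZATION_SIGNALS = [
    "reshares_staged_content_unprompted",
    "repeats_key_phrases",
    "uses_promoted_hashtags",
    "dms_manipulator_accounts",
    "engages_with_apm_created_events",
]

def mobilization_readiness(audience_members):
    """audience_members: list of dicts mapping signal names to booleans.
    Returns the fraction of members exhibiting at least two signals --
    a crude readiness indicator, not a validated metric."""
    if not audience_members:
        return 0.0
    ready = sum(
        1 for member in audience_members
        if sum(bool(member.get(sig, False)) for sig in MOBILIZATION_SIGNALS) >= 2
    )
    return ready / len(audience_members)
```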

Two Models for the Social Media Kill Chain

There are clear parallels between the Advanced Persistent Threats of the hacking world and the Advanced Persistent Manipulators currently souring social media platforms. But the kill chains for APTs and APMs also differ in key ways. A single soldier inside one of China’s APTs or Russia’s GRU could potentially execute each step of the cyber kill chain without collaborating with others. Effective APMs generally cannot operate in the same way. A lone individual could conceivably execute the APM kill chain, but the process would require someone with a wide range of skills and a good deal of time. And if pinpointed by a social media platform’s defenses, lone operators can be substantially disrupted.

The APT and APM kill chains also differ in that the latter requires significant scale to be effective. APMs seek to change audience perspectives, not simply to violate the confidentiality, integrity, or availability of computer networks. APMs can also act as a force multiplier for APTs, coordinating and distributing the secrets pilfered by hackers (e.g., the GRU and FSB) and amplifying the content of sponsored propaganda outlets (e.g., RT and Sputnik News). Hackers, whether in an anonymous collective or a state intelligence service, often work through all steps of the kill chain in sequence. APMs instead function more like an assembly line, delivering layered, integrated activities to alter audience perceptions via one of two distinct models.

Russia advanced APM tactics with the introduction of a social media disinformation fusion center commonly known as the Internet Research Agency. Rather than train propagandists to perform every skill in the APM kill chain, the IRA built a disinformation workflow across different units to perform each phase. Interviews with former IRA employees describe a partitioned building where employees with different skills work together in a deliberate, repetitive process to target foreign audiences. Those with journalism degrees authored blog posts, creating paraphrased, stripped-down versions of other news outlets’ stories (placement). Social media specialists generated hundreds of accounts inside the target audience, using multiple SIM cards and other evasive techniques to avoid detection (staging). Other specialists acquired computational propaganda skills (reconnaissance, mimicry, mobilization). Graphic artists crafted memes to layer into target audience environments (placement). In sum, the IRA’s assembly-line approach to disinformation increased its collective output and accelerated and sustained its influence.

Reporting suggests several other nation-states have followed Russia’s lead, creating their own disinformation centers. Libya’s “keyboard warriors,” the Philippines’ Facebook teams for trolling political opponents of the Duterte regime, and Iran’s growing operation attempting to influence U.S. Middle East politics all provide examples of manpower-intensive methods of disinformation based on the Kremlin’s playbook. While these efforts are less efficient than Russia’s blend of technicians, they demonstrate the power of centralized disinformation strategies for achieving influence objectives.

In contrast, disinformation specialists may now recognize a second, emergent disinformation model – the “hub and services” method. Nation-states with lower capability and manpower, oligarchs, or even public relations firms may not need sustained, high-output disinformation centers and can instead conduct tactical influence campaigns on an as-needed basis. For these APMs, a central person or team might conduct campaign planning and then acquire trolling-as-a-service from a number of firms to execute individual steps of the social media kill chain. The point of contact, or “hub” in this model, might contract with a computational propaganda specialist from the dark web for amplification, a content-generating lobbying group for placement, and a social media analytics firm for reconnaissance. Each service acquired by the APM’s hub acts as a discrete spoke in the wheel, allowing the APM to hide its hand, mitigate risk, lower overall operating costs, and nimbly pivot to the strengths of different vendors.

The deployment of social bots on Twitter after Jamal Khashoggi’s murder, promotion of the #Russiagate hashtag after the release of the Mueller report, and the amplification of political feuds in the gun control debate suggest that various actors are already making substantive use of this model. Understanding the distinctions between the fusion-center approach and the hub and services method will be crucial for social media companies seeking to pursue APMs across the kill chain. Actor attribution will be more essential for dampening disinformation centers seeking longer-term strategic influence objectives, whereas APM method mitigation and disruption will be more valuable against a hub and services disinformation model pursuing shorter-term and more tactical influence objectives.

Defending social media platforms will prove easier and less resource-intensive over time as companies invest in strategies for disrupting links in the APM kill chain. Platform specialists should conduct red-teaming exercises to identify what each stage of the kill chain looks like on their platform – the technical and behavioral signatures that suggest manipulation. Once those signatures are identified, platforms can develop indicators for detecting and understanding them, which in turn point to areas where platforms could invest in research, collaborate with each other, or pursue additional information sources. Social media companies must come together as an industry to solve the problem of nefarious manipulation destroying the integrity of their systems. If they limit themselves to individual approaches that ignore the broader ecosystem, bad actors will adapt, improve their methods, and strike back from a different vector.
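One way to picture what shared, cross-platform indicators could look like is a simple map from kill chain stages to example signatures. The stage names follow the chain described above; the indicator strings themselves are illustrative assumptions, not an industry standard or any platform’s actual rule set.

```python
# Illustrative mapping of APM kill chain stages to example detection indicators.
KILL_CHAIN_INDICATORS = {
    "staging":        ["burst account creation", "shared proxy or SIM infrastructure"],
    "reconnaissance": ["bulk audience scraping", "abuse of analytics APIs"],
    "targeting":      ["systematic follows of influencers in one audience segment"],
    "mimicry":        ["high reshare-to-original ratio", "recycled local news content"],
    "placement":      ["coordinated first posting of identical content"],
    "amplification":  ["synchronized likes and shares", "bot-like posting cadence"],
    "mobilization":   ["event creation paired with cross-platform promotion"],
}

def stages_triggered(observed):
    """observed: set of indicator strings seen for a suspected campaign.
    Returns the kill chain stages implicated, giving defenders a rough
    picture of how far the campaign has progressed."""
    return [
        stage for stage, indicators in KILL_CHAIN_INDICATORS.items()
        if any(ind in observed for ind in indicators)
    ]
```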

The views expressed in GMF publications and commentary are the views of the author alone.