Clint Watts is Non-Resident Fellow at the Alliance for Securing Democracy and Distinguished Research Fellow at the Foreign Policy Research Institute. He is the author of Messing With The Enemy: Surviving in a Social Media World of Hackers, Terrorists, Russians and Fake News.
This brief is Part Two of a three-part series. It outlines an approach for social media companies and governments to counter the new era of Advanced Persistent Manipulators discussed in Part One of this series. A forthcoming, final brief on “The Social Media Kill Chain” will finish this series.

 

Nearly every social media company has instituted measures to counter disinformation. These approaches differ in scale and scope based on the company’s views of the problem and its severity. But all of these corporate approaches to disinformation appear similar to the strategies they deployed in response to terrorists over the last decade—namely, to incrementally push bad actors off their platform after they have done harm.

With terrorists, this approach worked fairly well over time as al Qaeda, Islamic State, and other groups were almost universally disliked. But with disinformation, the battlefield is different. Rather than extremists simply advancing a violent ideology online, new Advanced Persistent Manipulators (APMs)—whether single actors or combinations of actors—perpetrate multi-platform influence campaigns, pursue their objectives over an extended period, use technology to advance computational propaganda (commonly referred to as “bots”), and know how to operate within and between social media platforms without violating their terms of service. The breadth of APMs and their adaptive techniques have caught social media platforms in a devastating crossfire. Today’s social media giants are besieged by free-speech advocates alleging censorship, hammered by legislators seeking retribution for violence and electoral interference, scorned by users who fear their privacy has been violated, and haunted by authoritarians who see platforms as a tool for domestic oppression.

The current reactive approach of social media companies will fail in this APM era. They should draw lessons from the cybersecurity and counterterrorism communities—two fields where initially anonymous, nefarious actors committed cyber-enabled violations to pursue their objectives. These two fields now employ a collaborative approach informed by intelligence analysis to implement proactive strategies that thwart the tactics and techniques of threat actors and raise the costs of their operations. Social media platforms, individually and collectively, have much to gain by adapting key features of these approaches.1

The Challenges of Social Media Defense

Major social media companies have invested significant resources since the 2016 U.S. presidential election into thwarting disinformation on their platforms. Their policing attempts have had mixed results, however. Facebook frequently shuts down accounts for coordinated inauthentic behavior, and Twitter has sporadically culled herds of false bot accounts. Despite these efforts, social media companies continue to lag behind the most advanced manipulators on their platforms.

That is because computational propaganda, the patchwork nature of terms of service rules, and the sheer number of actors give the advantage to offenders. Russia’s successful social media operations created a blueprint for online manipulation that has been embraced not only by other authoritarian regimes but also by lobbyists, political campaigns, and other actors. Coupled with for-profit clickbait operations and ad fraudsters, there are now far more peddlers of disinformation than “policers.” The sizeable gap between the number of offenders and the number of defenders is exacerbated by the breadth and scale of the countries, languages, and regulatory environments in which social media platforms operate. Companies, as a result, have created an elaborate, uneven, and haphazard set of guidelines for content takedown and account termination. These convoluted procedures, along with nearly instantaneous user content uploads, provide offenders with a decisive advantage over those seeking to thwart disinformation, violent content, or fraudulent advertising.

These external pressures are amplified by social media companies’ internal processes, which tend to focus on mitigating an immediate problem without holistically identifying the systemic weaknesses of their products and services that manipulators repeatedly exploit. Meanwhile, rapid corporate growth—which has encompassed new products and services as well as the acquisition of companies with different business models and protocols—has slowed information sharing. This has at times resulted in complete communication partitions that blind social media companies to the full picture of nefarious activity on their platforms. Even companies with relatively good internal communication lag behind the most advanced manipulators, which operate seamlessly across all social media platforms using the unique features of each to achieve their objectives. Social media companies fall far short in sharing APM threat information with one another, leaving the industry collectively blind to threat methods occurring across the entire social media ecosystem. This results in bad actors migrating to the least-defended platforms and damaging the reputation of the entire industry. The move by various extremists to exploit Telegram a few years back and the spreading of disinformation and misinformation on 4chan and 8chan today offer relevant examples of this phenomenon.

Until recently, social media companies have followed a reactive, issue-by-issue or threat-by-threat approach to defending their platforms. The rise of new threats—from hackers and criminals at the platforms’ inception to terrorists and disinformation peddlers today—has resulted in a patchwork of internal responses and disparate groups tackling similar problems through the lens of different actors.

Artificial intelligence (AI) has already become a useful tool for detecting and policing bad behavior on platforms, but it is not sufficient. For example, in the immediate aftermath of the New Zealand mosque attacks last March, Facebook removed 1.2 million copies of the perpetrator’s livestream video and its variants at the point of upload. This was a remarkable achievement of automation, but it only caught 80 percent of such uploads, leaving 300,000 others online. A copy of the original video of the attack was posted on 8chan, a notorious anonymous posting site for content banned from mainstream sites, which enabled copies to proliferate elsewhere. Because their communication apparatuses rely on human networks rather than technical engineering, states pushing disinformation and extremists advancing violence have proven highly adaptable to platform modifications and terms of service changes. Adaptive actors and new manipulative methods, essentially “zero day” threats in cybersecurity speak,2 will always have the advantage over technical detection alone. For the foreseeable future, therefore, success in social media defense requires including humans alongside tailored technological support, a proactive strategy, and adaptive policies anticipating the next threat to platforms.
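To make the limits of automated detection concrete, the sketch below is a hypothetical illustration (not any platform’s actual system) of why exact-hash matching at upload blocks verbatim re-uploads but misses even trivially edited variants, the kind of gap that human review and adaptive policy must cover.

```python
# Hypothetical sketch: exact-hash upload screening and its blind spot.
# Not any platform's real pipeline; real systems add perceptual hashing
# and human review precisely because exact matching misses variants.
import hashlib

blocked_hashes = set()  # fingerprints of known violating content

def fingerprint(content: bytes) -> str:
    """Exact fingerprint; any re-encoding or edit changes it entirely."""
    return hashlib.sha256(content).hexdigest()

def register_violation(content: bytes) -> None:
    blocked_hashes.add(fingerprint(content))

def allow_upload(content: bytes) -> bool:
    """Reject uploads whose exact fingerprint is already known to violate."""
    return fingerprint(content) not in blocked_hashes

original = b"...simulated video bytes..."
register_violation(original)

print(allow_upload(original))            # False: the verbatim copy is blocked
print(allow_upload(original + b"\x00"))  # True: a one-byte variant slips through
```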

The Need for Intelligence-led Social Media Defense

The social media industry faces challenges similar to those that faced counterterrorism and information security specialists more than a decade ago. In response to terrorist threats, the FBI underwent a major domestic-intelligence reform process in the late 2000s. It reorganized its entire workforce to generate finished intelligence that its leadership could use to make rapid strategic decisions for thwarting transnational terrorism and crime. The goal was to take an approach that could anticipate changes in threat behavior and proactively disrupt nefarious activity rather than reactively respond to it. Over the past decade, information security teams inside companies, and collectively through associations, have adopted similar approaches to protect their institutions and customers from the rise in the volume and complexity of cyber threats.

An intelligence-led social media defense should not be confused with spying. This strategic, top-down approach does not condone or employ covert monitoring, illegal data gathering, or false personas. Instead, it seeks to conserve resources and focus policing efforts by efficiently applying talent and technology where they can be most effective. Analysts generate intelligence to identify necessary policy changes and content controls to disrupt manipulation, reduce aggregate platform abuse, and increase user safety. Effective intelligence-led defense will raise the time and resource costs of nefarious activity, disincentivizing the pursuit of manipulation. Critical to this model’s success is rapid information sharing between products and platforms, allowing defenders to more quickly address threat activity before it becomes an epidemic eroding trust in social media platforms.

Key Principles of Intelligence-led Social Media Defense

Intelligence-led social media defense relies on a few key principles to guide operations. To begin with, it sees everyone as a potential provider of critical information—content moderators, intelligence vendors, information security professionals, government regulators, and even users can provide insight into bad-actor behavior across platforms and the industry. Producing insightful intelligence requires utilizing many diverse information sources, internal and external. In the case of Russian disinformation during the 2016 U.S. presidential election, social media platforms either did not have an effective way to process warnings from outside researchers or were dismissive of them. This must change.

Information sharing within and among social media companies is just as crucial. After the 2016 election, companies slowly gained insights from peers that led them to shut down accounts on their own platforms. Facebook significantly increased its internal communication and coordination, for example. But companies will continue to struggle if they cannot learn from each other. The Global Internet Forum to Counter Terrorism, launched in 2017, was a step in the right direction, but it is limited in scope to disrupting terrorists and has been anemic in execution.

Legislators have seized on content-removal time limits or the complete elimination of hate speech and extremist content as the appropriate method for countering the manipulation of social media. Not only will this approach fail, however; it is also counterproductive for thwarting APMs. Complete, instantaneous detection and removal of disallowed content is excessively costly and unattainable, particularly when users can upload content in near-real time and there is no global agreement on what constitutes prohibited speech. That was already the case with extremist content, and the challenge is even greater with disinformation. Social media platforms, as neutral content providers, will never be able to effectively police content, but they can police inauthentic and manipulative behavior. Intelligence-led social media defense, through deliberate planning, focuses on the most prolific offenders and the most exploited vulnerabilities in each platform. Companies, in coordination with governments, should instead seek to establish thresholds for policing the most dangerous manipulative behavior, allowing defenders to focus on the APM methods and actors doing the most harm. Facebook has already moved in this direction.

A Model for Intelligence-led Social Media Defense

A model for facilitating an intelligence-led defense would consist of two organizational components: a global task force and central hubs or fusion centers (to borrow the term from counterterrorism) that would integrate intelligence into a standardized process and decision cycle. There should be fusion centers within individual social media companies and one public-private fusion center across the industry, all fluidly communicating insights to each other and to content-control managers and incident-response personnel.

The global task force would consist of information collectors and incident-response personnel performing several tasks that are critical to effective intelligence-led defense. They would provide internal, corporate information in response to requests from analysts who synthesize threat activity. Task force members would provide early warning of previously seen nefarious behavior and emerging shifts in threat activity. As a global network, they would communicate with partners locally to relay insights from social media companies, information-security professionals, and local, regional, and state agencies. Above all, task force members would be empowered by platforms to take action by rapidly implementing controls and making product changes. They would be responsible for consuming finished intelligence from the fusion centers and implementing their insights to ensure the safety and soundness of social media platforms and their users. The global task force would rapidly implement needed controls, incorporate threat insights into social media product design, and relay warnings to social media company employees and partners to further prevent nefarious activity.

The fusion centers would administer the business process and support the workflow of intelligence-led defense and incident response. They would ensure the rapid exchange of information by placing collectors, analysts, and incident-response personnel in a shared physical space. The fusion centers would ingest reporting from the global task force, incorporate lessons learned from outside researchers, collaborate with industry partners and government representatives, rapidly disseminate guidance to social media incident-response divisions, aggregate and assemble incident data, and push finished intelligence products and their insights to the products, services, subsidiaries, and regions within the company’s orbit.
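To illustrate the kind of structured reporting that might flow from the global task force into a fusion center, the sketch below defines a hypothetical threat-report record and a toy triage rule. Field names, values, and the routing threshold are illustrative assumptions, not an existing industry schema.

```python
# Hypothetical fusion-center intake record and triage rule (illustrative only).
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class ThreatReport:
    source: str            # e.g., "task-force", "vendor", "external-researcher"
    platform: str          # product or service where the activity was observed
    threat_method: str     # e.g., "coordinated-inauthentic-behavior"
    indicators: List[str]  # account IDs, URLs, or content hashes
    reliability: str       # collector's assessment of the source's reliability
    observed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def route(report: ThreatReport) -> str:
    """Toy rule: send high-reliability, multi-indicator reports straight to analysts."""
    if report.reliability == "high" and len(report.indicators) >= 3:
        return "analyst-queue"
    return "collection-manager-review"

example = ThreatReport(
    source="task-force",
    platform="video-sharing",
    threat_method="coordinated-inauthentic-behavior",
    indicators=["acct:123", "acct:456", "hash:ab12"],
    reliability="high",
)
print(route(example))  # "analyst-queue"
```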

Even though remote and virtual working arrangements are popular among individuals and companies in the tech sector, a physical center bringing together intelligence and incident response remains a necessity. Virtual fusion centers do not accelerate information sharing enough to keep pace with threat-actor behavior, and they may not facilitate sufficiently innovative analysis in support of threat mitigation. In creating fusion centers, social media companies also should not obsess over corporate structures and titles but should ensure that key responsibilities for intelligence-led defense have been designated to specific individuals and entities.

The Process of Intelligence-led Social Media Defense

An intelligence-led process for social media defense would move away from the current siloed, threat-of-the-day approach and toward holistic platform defense. The model of a global task force and fusion centers would accomplish this by enabling an intelligence cycle similar to the one employed by government intelligence agencies, with a couple of additional steps.

The process would begin with intelligence planning. Key platform security leaders and incident-response and intelligence personnel should meet at regular intervals (at least quarterly, but preferably monthly) to identify the most prolific offenders on their platforms, the most common and dangerous threat methods, and gaps in their understanding of nefarious-actor behavior and platform vulnerabilities. The essential output of these sessions should be intelligence questions (or requirements) that need to be answered and designated intelligence reports (or products) that need to be created.

The social media platforms’ respective intelligence teams and fusion centers should then initiate the collection of critical information to answer the drafted intelligence questions. A central information-collection manager—normally based in each fusion center—would gather responses and internal platform data via the global task force, collect related lessons learned from incident-response teams, request information from industry partners and government representatives, contact cybersecurity and social media vendors that track threat activity, and initiate outreach with academic partners and think tanks that may have a perspective on the questions raised.

Each fusion center would then process all responses to the intelligence questions from all available sources, creating a central repository for analysis. The collection manager and supporting team would check the quality of gathered data and responses (for veracity and reliability) before routing needed information to analysts. An analyst or team would then analyze the material and produce finished intelligence reports, offering answers and insights for social media platform leaders and those implementing platform controls. Each intelligence report would increase situational awareness with regard to key vulnerabilities and prolific offenders identified during planning. Each report should conclude with recommendations for improving platform terms of service to mitigate manipulation, technical controls for stopping inauthentic manipulation, or policies for improving the safety and soundness of products and services.

The dissemination of intelligence reports would begin at the fusion centers and spread out to the global task force. Shortly after this phase, the fusion centers would ensure the implementation of recommended controls and policy changes informed by analysis. Company executives would approve the changes, and each instance of service or product implementation would then be verified and monitored to evaluate its effectiveness in protecting the platform.

Finally, the entire intelligence process would undergo a performance-management review where leaders, intelligence-led personnel, and incident responders measure the effectiveness of their intelligence production in thwarting bad behavior on their platform. They also should collectively assess emerging threats to their company and to the industry, and identify improvements to the intelligence-led processes and products being produced. The goal each time should be to rapidly adapt to threat actors and methods while maintaining user safety and company innovation and competitiveness.
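As a rough illustration, the sketch below renders the cycle just described as an ordered pipeline. The step names follow the text; the handlers and data are placeholders, not a prescribed implementation.

```python
# Illustrative rendering of the intelligence cycle as an ordered pipeline.
from typing import Callable, Dict, List

STEPS: List[str] = [
    "planning",        # set intelligence requirements and needed reports
    "collection",      # gather internal and external information
    "processing",      # vet and centralize responses for analysts
    "analysis",        # produce finished intelligence with recommendations
    "dissemination",   # push products to the task force and decision-makers
    "implementation",  # apply controls and policy changes, then verify them
    "review",          # measure effectiveness and feed the next planning round
]

def run_cycle(handlers: Dict[str, Callable[[dict], dict]], state: dict) -> dict:
    """Run one pass of the cycle; each handler enriches the shared state."""
    for step in STEPS:
        state = handlers.get(step, lambda s: s)(state)
    return state

# Placeholder handlers to show the flow; a real system does far more at each step.
handlers = {
    "planning": lambda s: {**s, "requirements": ["top APMs", "exploited vulnerabilities"]},
    "analysis": lambda s: {**s, "recommendations": ["tighten controls on inauthentic personas"]},
}
print(run_cycle(handlers, {}))
```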

How Government and Industry can Interface

Social media companies are not very good at detecting threat actors, and when they do catch them, it is often too late. Intelligence and law-enforcement agencies are equally poor at spotting and understanding threat methods on social media. The latter—appropriately—cannot see the big-data “signatures” residing inside social media companies, and they lack the technological understanding of platforms to fully comprehend methods like computational propaganda or the digital forgeries known as “Deep Fakes” or “Synthetic Media.” Simultaneously, legislators who now feel compelled to regulate social media companies usually lack the skills to craft effective legislation for technology platforms they do not entirely understand. Their calls for regulating the social media sector also feel hypocritical when their or their colleagues’ political campaigns use platforms to micro-target voters or troll political opponents.

Intelligence-led social media defense would push governments and companies to play to their respective strengths and complement each other to protect citizens and users. Rather than maintaining internal policing groups tasked to specific threats or threat actors, social media companies would do better to focus on the threat methods employed by APMs. They could then also avoid the pitfalls of conflicting regulatory regimes by focusing on consistently removing inauthentic social media personas, inauthentic coordinated activity, and digital forgeries regardless of actor.

Social media companies should also warn and then eventually suspend or remove authentic users who knowingly upload or share extremist content, false information, or stolen information on their platforms. An adjudication mechanism should be included in this process, allowing affected users to appeal decisions. Combatting computational propaganda should also be a major focus and could be assisted through the development of social-bot registration processes. Finally, thornier issues such as mitigating fake news and quelling erroneous science and expertise could be addressed by offering add-ons from third-party rating agencies within their products. The rating of information outlets from outside the social media companies would allow the industry to remain a neutral platform while still helping users assess the quality of information they encounter. Facebook and Google/YouTube have already moved slowly in this direction, but they must now move quickly and decisively. Content uploads peddling falsehoods that incite violence and threaten public safety are not protected speech, but rather the online equivalent of yelling “fire” in a movie theater when no such fire exists. This will also send a message to users that they have to be more discerning in how they use social media platforms and that they have a responsibility not to spread falsehoods or harmful content.
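As a rough sketch of how a social-bot registration process might work, the hypothetical check below allows and labels declared automation while flagging undeclared high-volume posting for review. The threshold and field names are assumptions for illustration only.

```python
# Hypothetical social-bot registration check (illustrative assumptions throughout).
from dataclasses import dataclass

@dataclass
class Account:
    account_id: str
    registered_bot: bool   # did the operator declare this account as automated?
    posts_last_hour: int

AUTOMATION_THRESHOLD = 60  # illustrative posting rate no human is likely to sustain

def evaluate(account: Account) -> str:
    if account.registered_bot:
        return "allow-and-label"             # transparent, declared automation
    if account.posts_last_hour > AUTOMATION_THRESHOLD:
        return "flag-undeclared-automation"  # possible computational propaganda
    return "allow"

print(evaluate(Account("a1", registered_bot=True, posts_last_hour=200)))   # allow-and-label
print(evaluate(Account("a2", registered_bot=False, posts_last_hour=200)))  # flag-undeclared-automation
```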

Facebook has committed enormous resources over the last two years to policing its platform, and Twitter and YouTube have made significant progress in removing bad content and squelching the worst offenders on their platforms. But the biggest impediment to success is that companies do not work together. The rise of Islamic State led to a limited collective arrangement in the form of the Global Internet Forum to Counter Terrorism where signatures of extremist content are shared in a central database. Whatever its successes, this effort may also have accelerated the migration of extremists to smaller social media companies that lack sufficient resources to police their platforms.

Social media companies have largely chosen to “go it alone” with regard to disinformation, hoping to quell user angst and stave off regulation by working internally rather than across the industry. That is a mistake. They should take a lesson from other industries that have faced similar existential challenges. Cybersecurity threats led the financial services industry to form an information security association focused on sharing threat analysis and developing legislative recommendations, even though individual institutions are otherwise competitors. Financial institutions and their customers benefit from the detection and mitigation of criminals and states infiltrating or seeking to destroy their systems. The same goes for social media companies that compete with each other in certain product lanes.

As well as individual company fusion centers, a public-private Social Media Intelligence Fusion Center (SMIFC) should be created to bridge the divide between governments, regulators, social media companies, and the research community. This could be based on the model of the National Cyber Forensics Training Alliance based in Pittsburgh, which offers a central hub for the aggregation of malware signatures and data exchanges between governments and the private sector. A SMIFC would seek to raise the operational costs for the most prolific APMs, not only driving them from the biggest platforms but also slowing their migration to smaller, less-policed social media apps. Information sharing would need to be robust and aggressive. Companies would need to offer data on their account closures as well as the threat activity and manipulative methods they witness on their platforms. Rather than the ad hoc patchwork of social media signature-sharing currently in place, a SMIFC would speed up detection and mitigation through information transfers in a shared physical space.
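The sketch below illustrates, under assumed names and fields, the kind of shared indicator exchange a SMIFC could host: member companies submit indicators they have actioned, and peers can query them before acting on their own platforms.

```python
# Hypothetical SMIFC indicator exchange (names and structure are assumptions).
from collections import defaultdict
from typing import Dict, Set, Tuple

class SharedIndicatorExchange:
    def __init__(self) -> None:
        # indicator value -> set of (submitting company, threat method) pairs
        self._index: Dict[str, Set[Tuple[str, str]]] = defaultdict(set)

    def submit(self, company: str, indicator: str, threat_method: str) -> None:
        """A member shares an indicator (hash, account handle, domain) it has actioned."""
        self._index[indicator].add((company, threat_method))

    def lookup(self, indicator: str) -> Set[Tuple[str, str]]:
        """Any member checks whether peers have already seen and actioned this indicator."""
        return self._index.get(indicator, set())

exchange = SharedIndicatorExchange()
exchange.submit("PlatformA", "hash:ab12cd34", "inauthentic-coordinated-activity")
print(exchange.lookup("hash:ab12cd34"))  # PlatformA's prior action is visible to peers
print(exchange.lookup("hash:unknown"))   # empty set: no prior reporting
```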

Representatives from social media companies would staff the SMIFC and provide linkage with the intelligence and incident-response elements of their businesses. Liaison staff from the FBI and the Department of Homeland Security posted to the center would rapidly transmit known threat-actor social media identifiers to all the companies for appropriate action. A SMIFC would also be well positioned to commission research and share data with researchers, helping them move beyond, for example, the repeated investigations into Russia’s Internet Research Agency tweets from many years past.

Above all, a SMIFC could help educate and conduct trusted outreach with the public. Its central location in the social media ecosystem would allow it to be a fair arbiter for alerting media to account closures in a unified way. The center could offer optional add-on services helping users evaluate information sources and report nefarious activity. It could also open pathways for the academic community and advocacy groups to conduct “red-teaming” of threat-actor behaviors and “white hat” social-engineering tests to spot platform vulnerabilities.

A SMIFC should be properly resourced, empowered, and staffed, and it should be located where the best talent and the shortest lines of communication to the social media industry reside. It should not be bogged down by the bureaucratic, politically charged environment of Washington, but instead based in Silicon Valley, where insights can be rapidly distributed and controls quickly implemented.

Conclusion

Shifting to an intelligence-led approach for defending social media companies will open opportunities for responsibly collecting and sharing threat data and streamlining policing efforts. Beyond the macro-level benefits of such an approach, a strategic industry-wide shift would also allow for greater focus at the micro-level of defense by eliminating redundant efforts and investing in talent and technology to address platform weaknesses and identify un- or under-policed pockets of the industry.

  1. The concepts of intelligence-led strategies advanced here were taught, refined, and developed by Thomas Harrington, the former associate deputy director of the FBI, who implemented these approaches in the U.S. government and the private sector, and Jerry Ratcliffe, professor of criminal justice at Temple University and author of Intelligence-led Policing.
  2. “Zero Day” describes a cyber-attack exploiting a vulnerability in software or hardware that has not been seen before by cybersecurity specialists and is thus undetectable by technological protections such as anti-virus software. See, for example, FireEye, “What is a Zero-Day Exploit?”