Two years after the Russian government manipulated social media to interfere in the 2016 U.S. presidential election, online information platforms continue to serve as mediums for such operations, including the 2018 midterm elections. Under intense public criticism and congressional scrutiny, the three most prominent online information platforms – Facebook, Twitter, and Google – have taken steps to address vulnerabilities and to protect their users against information operations by actors linked to authoritarian regimes. However, given the ongoing nature of online authoritarian interference, the steps taken by these companies continue to fall short.
This report reviews and analyzes the steps taken by online information platforms to better defend against foreign interference since 2016, adopting the framing of the Senate Intelligence Committee by focusing on the largest and most influential online information platforms of Facebook, Twitter, and Google.
The platforms’ efforts to combat foreign interference have focused primarily on three key lines of effort: preventing or suppressing inauthentic behavior, improving political advertising transparency, and investing in forward-looking partnerships. Measures to limit user interaction with inauthentic behavior include content removal, labeling, and algorithmic changes. The platforms have also taken steps to improve advertising through policies to publicize advertiser information and improve verification standards for those hoping to publish political advertisements. Investments in forward-looking measures have included internal initiatives to critically assess vulnerabilities and external partnerships with civil society, academia, and fact-checking organizations. They have also led to increased transparency about the behavior and content of accounts linked to the Russian operation against the 2016 and 2018 elections, as well as other nation-state operations targeting Americans.
Though all of these steps are important, ongoing vulnerabilities demand more urgent action by the platforms to secure the online information space against foreign manipulation, while ensuring Americans’ ability to engage freely in robust speech and debate. Six areas where Facebook, Twitter, and Google must take further steps include:
- Focusing on behavior: Online information platforms have unique insight into the computational tools used by bad actors on their respective platforms, allowing them to identify and eradicate coordinated inauthentic behavior, even when attribution is impossible. Although they have made recent progress in targeting behavior rather than content, a more aggressive focus on detecting and tackling networks will be key to counter evolving influence operations.
- Increasing transparency and information sharing: Recent efforts to expose foreign interference operations have demonstrated greater transparency and information sharing by online information platforms. But these efforts remain largely ad hoc, and robust sharing that includes privacy protections requires the development of standing information sharing mechanisms with industry peers, government agencies, and the greater public.
- Establishing standardization and effective coordination: Despite numerous actions to counter disinformation and inauthentic behavior, platforms still lack a unified understanding of the threats they face. Standardizing terminology and constructing institutionalized communication mechanisms will foster better cross-platform cooperation to tackle interference operations.
- Improving policies and enforcing rules clearly and consistently: Platforms need to ensure that current policies go past window-dressing to achieve stated goals. And companies should work to more clearly articulate their terms of service, and should consistently and transparently apply those rules.
- Thinking critically about future technologies: As the threat of foreign interference continues to evolve and change, tech companies will need to think proactively about how to protect users against manipulation, and about how future technologies may be exploited by hostile foreign actors.
- Making user protection the bottom line: Platforms need to improve efforts to inform users about the threats that target them and to empower them with tools they can use to protect themselves. Further, platforms will need to change the ways that they design new features to emphasize user protection over ad revenue or convenience.
Online Information Platforms and Foreign Interference
Following a series of revelations throughout 2017 that Russia had exploited social media platforms to influence the 2016 presidential election, executives from Facebook, Twitter, and Google testified before the Senate Judiciary Committee on October 31, 2017, to discuss foreign interference on their platforms. Lawmakers chastised the platforms for failing, for almost a year, to report disinformation campaigns waged by the Russian government and its proxies. As described by the New York Times, the executives expressed remorse and regret for their companies’ failures during the 2016 election and promised to prevent future information operations from manipulating their users.
Over ten months later, on September 5, 2018, representatives from tech giants were again called to Capitol Hill to update lawmakers on their efforts in the lead-up to the midterm elections. In their written testimonies, all three companies reported numerous changes and policies to help improve transparency and protect users from foreign interference. However, questions from lawmakers elicited more apologies and promises than concrete solutions. And, in contrast to seemingly improved dialogue between policymakers and the witnesses from Facebook and Twitter, Google’s chair sat empty for the duration of the hearing, a symbolic reminder that cooperation between the public and private sector on technological threats to democracy remains insufficient.
This report reviews and analyzes the steps taken by online information platforms to better defend against foreign interference since 2016, specifically focusing on three lines of effort: policies to address inauthentic behavior, measures to improve advertising transparency, and forward-looking investments and external partnerships.
The analysis of this report adopts the framing of the Senate Intelligence Committee by focusing specifically on the online information platforms of Facebook, Twitter, and Google. Though interference operations are not limited to these platforms, these companies serve as leaders and trendsetters in the wider tech community, operate the largest and most influential social networks in the U.S., and function as important mediums for the spread and consumption of information. The report concludes with six broad recommendations for online information platforms to better protect the American people from foreign interference.
Reviewing Online Information Platforms’ Efforts to Counter Foreign Interference
Inauthentic Behavior and Inaccurate Information
In the years leading up to the 2016 election, the Russian Internet Research Agency employed inauthentic and automated accounts, often posing as American citizens, to spread false or divisive content, organize demonstrations and protests, and manipulate algorithms. Russian intelligence services similarly utilized inauthentic personas to help dispense and spread stolen information. Alongside these efforts, Russian government-linked media outlets and proxies spread disinformation across information platforms to disrupt and distract discussions surrounding key geopolitical events. Facebook, Twitter, and Google have sought to address inauthentic behavior and the spread of inaccurate information by targeting and removing inauthentic accounts, fact-checking and providing contextual information to users, and adjusting algorithms to reduce user interaction with misleading or harmful content.
Targeting Inauthentic Behavior
In the wake of revelations regarding foreign interference, online information platforms scrambled to improve their ability to target and remove malign content. In recent months, the platforms have begun to focus on tracking networks of inauthentic behavior in identifying and dismantling influence operations. In 2018, Facebook identified and removed five major nation-state information operations targeting American audiences, including efforts originating in Iran and Russia. The company has also worked with governments around the world to take down inauthentic accounts seeking to manipulate information on elections. Twitter has similarly attempted to crack down on inauthentic accounts, particularly bot networks and accounts attempting to manipulate trending lists. In recent months, Twitter has suspended tens of millions of accounts (often unattributed bot networks), and has also joined Facebook in tackling nation-state information operations. For its part, Google has removed inauthentic accounts from its video-sharing platform YouTube, though on a smaller scale. Google’s platforms are more often targeted by overt propaganda efforts via Russian government-controlled media outlets, which operate with a large presence on YouTube and effectively dominate search results on issues key to Russia’s geopolitical interests.
While the three companies have shown better coordination and capacity in tackling inauthentic behavior in recent months, they need to demonstrate greater transparency and commitment to consistently enforcing their policies. For example, while Facebook released a detailed report on its August takedown of foreign interference campaigns, the company still has not revealed the names of all of the accounts involved, and only released select information and content samples, preventing researchers from learning more about the operation. Additionally, Twitter and Google – both of which participated in the coordinated takedown – failed to release any specifics on the operations removed from their platforms, leaving users unaware of whether they had interacted with inauthentic content. More recently, Twitter set a good example by releasing a trove of data from accounts linked to the Russian Internet Research Agency and issuing a commitment to expose future information operations. This should become the new standard for disclosures of information operations, as greater transparency will prove key in inoculating users against the tactics of foreign interference campaigns and empowering researchers to find solutions to future threats.
Looking to the future, it is unclear whether the platforms have the capacity to keep up with the rapid speed and evolving threat of information manipulation. A recent report by the Knight Foundation revealed that despite Twitter’s large-scale purges of inauthentic accounts, over 80 percent of accounts linked to disinformation campaigns during the 2016 elections are still active. Additionally, Jonathan Albright’s research has revealed the way that coordinated influence operations on Facebook have adapted to use the platform’s “groups” to more covertly organize and execute information operations. While Facebook and Twitter have committed to hiring more personnel to help moderate content on their sites, the companies remain confident that artificial intelligence will solve their problems. However, researchers and experts remain skeptical of AI as a blanket solution to online information platforms’ challenges, and many have criticized social media companies for overstating its effectiveness and capabilities. Even if AI does become an effective tool for combating inauthentic behavior, malign actors will also be able to make use of developing technologies to adapt and improve their efforts. In order to meaningfully reduce foreign interference on their sites, online information platforms will need to outsmart malign actors in a constant digital arms race.
Revelations since 2016 have also indicated a startling lack of coordination between the platforms in tackling information operations that exploit all of their platforms concurrently. For example, an inauthentic persona managed by Russian intelligence officers, Alice Donovan, was removed from Facebook in September 2017 following revelations of Russian interference. However, despite Facebook’s takedown, Alice Donovan’s Twitter account remained active until June 2018, after the persona was exposed in Special Counsel Mueller’s indictment of the Russian officers. A New York Times investigation into Facebook’s management of its recent scandals also reveals that the company has actively attempted to deflect criticism toward its competitors rather than working with them to tackle issues. If online platforms hope to effectively combat malign actors – who operate with ease across various platforms – more open and institutionalized coordination will be key.
Finally, online information platforms have struggled to clearly state and consistently enforce their terms of service, leaving significant room for manipulation. According to data journalist Jonathan Albright, though Facebook has shown recent success in tackling inauthentic behavior, “a longstanding pattern of ineffective rules paired with inconsistent enforcement” undermines the company’s efforts and opens up “many loopholes and workarounds” for malign actors. Twitter has received similar criticism for the way it “haphazardly” enforces its terms of service (often in response to public criticism). For example, despite Facebook touting improved abilities to remove content that violates its rules, investigations have revealed that pages and accounts banned on the platform have been easily reestablished under new names and with little loss to engagement metrics. Research indicates that Twitter has similarly struggled to uphold its standards, often failing to adequately respond or act on user reports of rule violations involving hate speech, abuse, and impersonation. If online platforms hope to successfully counter information manipulation, they will need to more clearly define and consistently uphold their own rules.
Fact-checking and Labeling
Online platforms have also sought to combat the spread of inauthentic behavior by instituting various fact-checking features, and by labeling content and search results to provide important contextual information to users. Facebook has launched fact-checking partnerships with organizations in 16 countries, including an agreement to partner with the Associated Press in the lead-up to the 2018 midterm elections. In 2017, Google instituted a label to help readers identify fact-checking articles in search results. Additionally, Google temporarily introduced a feature to highlight fact-checked content when users searched for publishers, though the company quickly removed the feature following backlash alleging that it was biased against conservative sources.
Online platforms have implemented various labeling features to provide contextual information for content and search results. Since the 2016 election, Facebook has introduced several new labels to provide users with important background information on publishers and articles shared on its site. In May 2018, Twitter similarly instituted labels for election candidates in the 2018 U.S. midterms to help users identify authentic accounts for candidates. Google has also implemented labeling features through YouTube. In July 2018, YouTube launched a new tool to provide users with context surrounding certain issues prone to misinformation. The tool displays “fact-confirming text” below videos and at the top of search results to help users separate fact from fiction on key subjects that attract conspiracy theories such as the moon landing, the JFK assassination, and the downing of flight MH17. YouTube also took steps to limit the impact of state-backed propaganda on its platform by labeling all content from state-sponsored news outlets, such as RT and Sputnik.
While fact-checking efforts have increased substantially in recent years, they remain incomplete and likely ineffective. Though Facebook has claimed that its fact-checking efforts have produced positive results, partners of the program have argued that it is far too limited to effectively keep pace with the spread of false information on the platform. Partners have also alleged that the program is more about window-dressing than about fixing the problem, with one former partner explaining, “They’ve essentially used us for crisis PR … They clearly don’t care.” Google’s fact-checking label may be similarly ineffective, as it appears infrequently in search results. Unfortunately, even more robust programs may prove equally flawed, as researchers have questioned whether fact-checking is even an effective way to change readers’ opinions. Additionally, fact-checking often misses the point of information operations, which is not to establish a specific falsehood as the truth, but rather to flood the information space with so many competing narratives that there seems to be no truth at all.
Labeling efforts, although promising, are plagued by similar challenges. Twitter’s labeling of election candidates represents a small step forward, but much more must be done to empower users with more contextual information about content, such as why that content is being presented to them and whether it is being shared via an automated account. Additionally, while the recent inclusion of “fact-confirming” labels on YouTube presents a positive model for future efforts, the program needs to be significantly expanded to include additional searches. Further, YouTube’s current program does not include fact-confirming labels on state-sponsored videos, such as those from RT and Sputnik, even when those videos present misleading or inaccurate information. Finally, YouTube’s disclaimers for state-sponsored videos remain misleading, as videos produced by BBC, NPR, and RFE/RL include the same disclaimers as those accompanying RT and Sputnik videos. While it is true that all of these outlets are state-supported, only RT and Sputnik are used by their host government to publish false or deliberately misleading information. Conflating RT and Sputnik with institutions that have full, independent editorial control and high journalistic standards undermines legitimate news organizations and may mislead users into trusting inaccurate content.
Adjusting Algorithms
Another significant line of effort for online platforms has been the adjustment and improvement of algorithms to reduce the spread of spam and inaccurate or unverified content. Facebook announced several changes to reduce the spread of inaccurate content, most notably making significant adjustments to reduce the prevalence of news and advertising content in Newsfeeds in January 2018 (and promising to promote content from sources deemed “trustworthy” by users). Twitter has similarly reported that it is improving algorithms to reduce the visibility of suspicious accounts. In the lead-up to the 2018 midterms, Twitter also launched a temporary feature that algorithmically generated a tweet feed to help users follow commentary on the upcoming elections. However, almost immediately after the feature launched, it surfaced tweets from accounts that are known promoters of conspiracy theories and disinformation campaigns.
Due to constant attempts by various actors to manipulate Google search results, Google is continuously working to refine and improve its algorithms. Malign actors, including the Russian Internet Research Agency, have been known to employ Search Engine Optimization teams to improve their visibility in search results. And in response to criticism that its algorithms inadvertently promote misleading information and state-sponsored propaganda, Google has announced on several occasions that it is working to specifically surface “more authoritative content” and to “improve search quality.”
Google’s efforts to promote authoritative content are important, but, so far, inadequate. Despite claims that the company is reducing the prominence of misleading content in search results, specifically citing RT and Sputnik, Russian state-sponsored propaganda continues to dominate Google’s search results on issues key to the Kremlin. It appears that, by constantly reporting on these subjects, RT and Sputnik are able to game Google’s algorithm and dominate the search results for these events due to Google’s focus on surfacing recent reporting. Google’s News function is more successful at weeding out state-sponsored propaganda, but even News often promotes unlabeled Russian state-sponsored media in its results for certain geopolitical topics. To better protect and inform its users, Google should adjust its algorithms to value authoritative and trustworthy articles over “fresh” content in search results.
Additionally, recent revelations regarding the targeting of Google’s search results present a substantial vulnerability for foreign interference. Results of a new research study indicate that Google actively tailors its search results to specific users based on data collected from them, even when users are logged out or in private browsing mode. If accurate, this is extremely problematic: it means Google’s search algorithm filters information to users in ways that reinforce their preconceived notions, potentially promoting misinformation and propaganda over the truth.
Facebook’s algorithm changes have also drawn criticism, as the reduction of news in users’ newsfeeds significantly constricted online traffic to legitimate outlets. A more productive policy would be to promote the prevalence of verified content and sources, rather than to shun all news. Twitter should similarly promote the prevalence of verified accounts or content, and both platforms should ensure that their verification processes are stringent enough to weed out inauthentic actors.
Advertising Transparency
On September 6, 2017, Facebook announced that the Russian government-linked Internet Research Agency (IRA) had spent $100,000 on advertising to influence the 2016 U.S. election. The ads, which were later released to the public, targeted specific U.S. demographics, seizing on and inflaming discussions of hot-button issues to amplify divisions between Americans and influence public opinion. In the months following, Facebook, Twitter, and Google, facing increased pressure from policymakers and the public, implemented numerous changes to increase transparency and security for political advertisements on their platforms.
One of the main policy changes introduced by the platforms is the institution of labeling for political advertisements to help users identify political ads and understand who is funding them. On Facebook, new features include “paid for by” tags for political and issue ads (defined as ads on “national issues of public importance”) and a tool allowing users to see all of the different active ads run by a Page. Twitter has similarly implemented labels for election-related and issue ads, and now requires disclaimers for promoted content to help users identify political campaigns. Finally, Google has also launched a new measure requiring election advertisers to include information on ads’ funding sources within the ads themselves.
The platforms also took measures to try to weed out foreign ads before they could go live. In May 2018, Facebook announced a new policy requiring political advertisers in the U.S. to verify their identity and location. Twitter instituted a similar verification standard for political and issue advertisers in the U.S., while Google now requires a government-issued ID or proof of lawful permanent residence to run election ads in the country.
A final component of advertising transparency reform has been the establishment of publicly-accessible archives for advertisements. In recent months, Facebook, Twitter, and Google have all launched online archives for ads, which include information about political advertisers in the U.S. Facebook’s and Twitter’s archives include both political and issue ads, while Google’s archive is currently limited to election ads (with stated plans to expand in the future). Facebook’s archive also initially included advertisements from news organizations, a policy the company reversed after intense backlash pointed out that the inclusion inaccurately conflated marketing for news organizations with lobbying for a political agenda or candidate.
The most glaring deficiency in the measures adopted by the platforms to eliminate foreign political ads is the gap between policy and enforcement. Since the policies’ adoption, researchers and media organizations have exposed numerous loopholes in them. For example, in August 2018, Facebook took down a foreign influence operation that included over $10,000 in ads. The takedown was initiated via a tip from cybersecurity firm FireEye, not from Facebook’s internal mechanisms.
Additionally, over the summer of 2018, researchers successfully purchased ads through Google while impersonating the Kremlin-linked Internet Research Agency (IRA). The researchers, who used the name and identifying details of the IRA to purchase ads that included known IRA content, were able to purchase ads on the YouTube channels and websites of CNN, CBS This Morning, HuffPost, and the Daily Beast, despite Google’s ad reforms. Google responded to the revelation by stating that it had “taken further appropriate action to upgrade our systems and processes.”
A similar experiment conducted by Vice News in the weeks before the midterm elections revealed a glaring vulnerability in Facebook’s “paid for by” label. Vice successfully purchased, and Facebook approved, ads that Vice inauthentically claimed were “paid for by” Vice President Mike Pence, the Islamic State, and all 100 sitting U.S. senators. According to Jonathan Albright, Facebook’s ad policies are plagued by structural “loopholes” that allow for exploitation, such as Facebook’s failure to adequately monitor pages running political ad campaigns after their initial verification.
In their current form, online information platforms’ ad policies also lack sufficient scope to prevent potential manipulation. Google’s focus on just election ads is inadequate and unrepresentative of the type of advertising used by foreign actors to interfere in elections and political debate in the past few years. Additionally, Twitter’s ad archive only includes ads from the past seven days, limiting the information provided to users. Finally, all three companies need to extend these features outside of the United States. Facebook’s recent expansion of transparency requirements for political advertisers in the U.K. is a positive step, but further implementation is necessary to protect users around the world.
A final issue with online information platforms’ focus on improving advertising policies is that reforming ad transparency constitutes a disproportionately large portion of the platforms’ efforts despite its limited role in foreign interference campaigns. In the case of the Russian Internet Research Agency, unpaid-for activity played a much larger part in spreading Russian narratives than advertisements. While Facebook, Twitter, and Google have proudly paraded their ad reforms on Capitol Hill, the companies should also acknowledge the limitations of focusing on content as a solution to foreign interference. Closing off these vulnerabilities remains important, but online information platforms should concentrate their efforts on targeting coordinated inauthentic behavior rather than the results of that behavior.
Forward-Looking Investments and External Partnerships
A final key effort that online information platforms have embraced to counter foreign interference is investing in forward-looking internal resources and external partnerships to build capacity. These efforts can be divided into two main categories: partnerships with researchers and experts, and partnerships with media institutions and civil society.
Partnerships with Researchers and Experts
By employing experts and sharing data with external researchers, online information platforms can build greater capacity to identify impending threats and empower analysts to identify potential solutions. Investment in these partnerships varies significantly between platforms, and greater commitment and information sharing will be necessary to secure online platforms from foreign interference.
Partnerships with external researchers allow the academic and policy communities to analyze the tactics and impact of online information operations and offer potential solutions. In April 2018, Facebook launched one such partnership, announcing plans to form a commission of academic experts to develop a research agenda about the impact of social media on elections. According to Facebook, the commission will solicit research and produce reports on the subject, although no such reports appear to have been released to date. Facebook also established a similar partnership with the Atlantic Council’s Digital Forensics Research Lab in May to help the company get “real-time insights and updates on emerging threats and disinformation campaigns.” Although it lacks formal partnerships with outside organizations, Twitter’s recent release of IRA data represents an important step towards greater information sharing with users and external researchers.
Online platforms have also constructed internal mechanisms to increase their capacity to identify and counter potential foreign interference. Facebook recently launched its “Investigative Operations Team,” a group of “ex-intelligence officers and media experts” who will help test the company’s systems, pages, and apps to identify potential areas of misuse. To complement this effort, the company also built a physical “war room” to track potential interference surrounding the 2018 midterm elections (though it shuttered the war room shortly after the elections), and is working towards doubling its 10,000-person security staff. Google has invested in internal research on future threats through its think tank “Jigsaw,” which is tasked with “invest[ing] in and develop[ing] tech solutions to geopolitical problems and digital attacks,” though efforts thus far have focused mostly on cybersecurity.
While external partnerships have enabled researchers and experts to assist online platforms in their battle against foreign interference, they have not gone far enough. For example, in July, Facebook shared only a fraction of the names of the pages and accounts associated with the Iranian and Russian interference operations that the company ultimately removed (and shared them only after they had been removed from the site). Even Facebook’s partners were not allowed to review much of the content, including one page that had reportedly organized several protests in the United States. Google has similarly failed to publish the names of inauthentic accounts during its takedowns, inhibiting users and researchers from learning from the campaigns. Twitter’s recent mass release of IRA data sets an important precedent for greater information sharing, although even this data dump lacked information on inauthentic accounts that investigations have revealed were created by Russian intelligence officers in the run-up to the 2016 election.
Partnerships with Media and Civil Society
Through partnerships with journalists, publishers, and civil society organizations, online information platforms have sought to build resilience to disinformation and foreign information operations throughout society. Since the 2016 election, Facebook has partnered with outside organizations to support research on news literacy, publishing public service announcements on spotting false information, and investing in and collaborating with newsrooms to support journalists and local news outlets. Facebook also recently announced its “Digital Literacy Library,” which offers lesson plans for educators “to help young people think critically and share thoughtfully online.” Twitter launched similar initiatives, including investments in media literacy programs, partnerships to support digital literacy amongst educators and civil society, and training programs for journalists.
Even before the 2016 election, Google invested in funding, training, and support for journalists through several programs that now focus on combating misinformation in elections, helping to promote trustworthy content, and supporting newsrooms and journalists. In March 2018, the tech company expanded its efforts by launching the Google News Initiative (GNI). The GNI includes an array of efforts, including: training for journalists; software to recognize breaking news and direct searches to authoritative content; work with research institutions to improve media literacy; a Disinfo Lab aimed at fighting disinformation; open source tools to secure safer internet access for journalists; and initiatives to help media companies improve their revenue. Google has promised to commit $300 million to the GNI over the next three years.
Online information platforms’ investments in media organizations and civil society indicate an important recognition of the need to protect the information ecosystem. While these partnerships and initiatives are impressive, platforms must do more to inform and empower their users by highlighting the threats they are working to address and the programs they have created to address them. Even the best tools are useless if no one knows how to access and apply them, and, as of yet, platforms have done a poor job communicating with users about their counter-interference efforts.
Overall, platforms’ partnerships with external organizations should be cooperative, rather than competitive. Information operations cut across all online platforms, and establishing coordinated cross-platform partnerships with researchers and experts, along with structured lines of communication with media, users, and government agencies, will allow for more effective identification of and response to foreign interference. Partnerships should also constitute a commitment to information sharing on a holistic level. Steps towards transparency are welcome, but without more complete commitment, are little more than window-dressing.
Efforts to combat foreign interference by Facebook, Twitter, and Google have resulted in new initiatives to improve advertising transparency, address inauthentic behavior, and establish forward-looking investments and partnerships to build resilience to information manipulation. While these steps have yielded progress in understanding and addressing the threat of foreign interference, gaps and vulnerabilities persist. Most notably, Facebook, Twitter, and Google must make significant strides in six main areas:
- Focusing on behavior: Online information platforms have unique insight into the computational tools used by bad actors on their respective platforms. By focusing on their structural vulnerabilities, platforms can limit or quarantine malicious activity without regulating content. Identifying and eradicating coordinated inauthentic behavior does not require attribution and can be executed regardless of the motivation(s) of the actors involved. Online platforms are the only ones positioned to police this activity and, though they have made recent progress, more aggressive efforts to reduce the space for inauthentic behavior can minimize the scale and scope of evolving operations.
- Increasing transparency and information sharing: Since 2016, online information platforms have proved increasingly willing to share information on foreign interference with the public and with government agencies. However, these efforts remain largely ad hoc, and the platforms should act to better institutionalize information sharing between their threat analysis teams and the appropriate government authorities, as well as with users and researchers. Public exposure of operations, while protecting user privacy, will be key to inoculating society against the effects of foreign interference.
- Establishing standardization and effective coordination: Facebook, Twitter, and Google continue to engage in counter-interference efforts without a unified understanding of the threats that face their community. Platforms should work to establish a more coordinated threat picture to encourage effective cross-platform cooperation. Platforms should also institutionalize community-wide communication mechanisms to encourage consistent information-sharing regarding emerging threats. Efforts to combat interference should be cooperative, not competitive, and this coordination will be key to tackle operations, which are often cross-platform, in a holistic and thorough manner.
- Improving policies and enforcing rules clearly and consistently: Though online information platforms are tackling inauthentic behavior and content at an increased rate, current efforts are plagued by vulnerabilities, inconsistencies, and a lack of clarity. Platforms should close the gaps in their current counter-interference efforts to ensure that new policies go beyond window-dressing to achieve intended outcomes. Additionally, platforms should more clearly articulate their terms of service and their responses to violations, and should consistently and transparently apply those rules. Clear communication and consistent enforcement will build credibility with users and civil society, and will demonstrate a stronger commitment to combating future interference efforts.
- Thinking critically about future technologies: Many of the policy updates and initiatives launched by online platforms are intended to correct the failures of the past, namely addressing interference tactics employed during the 2016 election. However, as the threat of foreign interference continues to evolve, tech companies will need to build their capacity to think more proactively about how to protect users against manipulation, and about how future technologies may be exploited by hostile foreign actors. Companies should act to institutionalize this type of thinking, and should prepare to take the initiative in recognizing, countering, and publicizing new forms of interference in the future.
- Making user protection the bottom line: Facebook, Twitter, and Google need to improve their efforts to inform and train users regarding the threats they face, and the tools and tactics they can employ in response. Further, companies need to provide users with more contextual information to evaluate content and should also explain to users why this context is important. Finally, platforms will need to change the ways that they design features to emphasize user protection over ad revenue or convenience. In the past, companies have created products with the intention of retaining user attention or manipulating human tendencies, creating a significant vulnerability for exploitation. Future technologies and platform features should hold user protection as their bottom line, rather than profit.