Since the launch of the Hamilton 68 dashboard in August 2017, data from the dashboard has been cited regularly by both the media and government to inform the public about the efforts of Russian-linked Twitter accounts to influence Americans via social media. We believe it is important to reiterate what data on the dashboard does and does not represent, and to clarify points of mischaracterization, especially regarding the network we monitor. This paper addresses questions that have arisen over the past six months, but does not replace (and, in fact, largely draws from) our original methodology paper, which remains the most comprehensive resource for information on Hamilton 68.

What is the Purpose of Hamilton 68?

Hamilton 68 tracks a network of Russian-linked accounts that have been connected to influence operations in the United States. The dashboard does not “track bots”; rather, it monitors the output of Kremlin-oriented influence operations, whether orchestrated by humans or by automation. The objective of the dashboard is to increase understanding of the focus, spread, and effectiveness of these influence operations by dissecting and assessing the message, messengers, and methods the Kremlin and its proxies employ on social media. This is what Hamilton 68 does.

What Accounts Are Monitored on Hamilton 68?

As discussed on our FAQ and methodology pages, the accounts monitored on the dashboard include three user types (a minimal code sketch follows the list):

  • Accounts likely controlled by Russian government influence operations.
  • Accounts for “patriotic” pro-Russia users who are loosely connected or unconnected to the Russian government, but who amplify themes promoted by Russian government media.
  • Accounts for users who have been influenced by the first two groups and who are extremely active in amplifying Russian media themes. These users may or may not understand themselves to be part of a pro-Russian social network. 
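
For readers who think about the methodology in computational terms, the taxonomy above can be pictured as a simple tagging scheme on monitored accounts. The following is a minimal, hypothetical sketch of that structure; the type names, fields, and example handle are our own illustration and are not drawn from the dashboard’s actual tooling.

```python
from dataclasses import dataclass
from enum import Enum, auto

class AccountType(Enum):
    """The three user types described above."""
    STATE_LINKED = auto()  # likely controlled by Russian government influence operations
    PRO_RUSSIA = auto()    # "patriotic" users loosely connected or unconnected to the government
    AMPLIFIER = auto()     # highly active users influenced by the first two groups

@dataclass
class MonitoredAccount:
    handle: str               # never published (see "Why Are Accounts Not Publicly Identified?")
    account_type: AccountType
    automated: bool           # bots and cyborgs appear alongside real users in every category

# Hypothetical example: an amplifier who may not know they are part of a pro-Russian network.
example = MonitoredAccount("@hypothetical_user", AccountType.AMPLIFIER, automated=False)
```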

While there is a great deal of consistency in the themes and messages these accounts seek to amplify, it is neither our assertion nor our belief that all messaging on the dashboard is created or approved by the Russian government. It would therefore be inaccurate to describe the network as a hierarchical, centrally controlled operation, and it would be equally incorrect to assume that all activity within the network is the product of a synchronized disinformation campaign. That said, there is a degree of coordination within the network: influential accounts “seed” the network with talking points, which are then amplified by follower accounts, some real and some automated (a toy illustration of this pattern follows below). Accounts in this network should therefore be considered interconnected, but the network as a whole should not be misunderstood to be controlled by a single entity.
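
To make the seeding-and-amplification pattern concrete, here is a toy simulation. The account counts and amplification probability are invented for illustration; the sketch shows only the shape of the behavior described above, not how the real network is measured or operated.

```python
import random

random.seed(0)  # deterministic output for this example

# Hypothetical two-tier network: a few influential "seed" accounts and many followers.
NUM_SEEDS = 3
NUM_FOLLOWERS = 200
AMPLIFY_PROBABILITY = 0.4  # assumed chance that a follower repeats a seeded talking point

# Every seed posts the talking point; each follower independently amplifies it at random.
amplifiers = sum(1 for _ in range(NUM_FOLLOWERS) if random.random() < AMPLIFY_PROBABILITY)
total_posts = NUM_SEEDS + amplifiers

# With these assumed numbers, a handful of seeds yields on the order of 80 posts.
print(f"{total_posts} posts, of which only {NUM_SEEDS} came from seed accounts")
```

The point of the sketch is that aggregate post counts alone cannot distinguish centrally directed messaging from loose, voluntary amplification, which is why we describe the network as interconnected rather than centrally controlled.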

Some accounts we track are automated bots, some are trolls, and some are real users. Some are in Russia, but many are not. Some of the accounts we monitor promoted content now known to have originated with the Internet Research Agency, the best-known Russian “troll factory,” but the dashboard did not specifically target that program. The Internet Research Agency is not, by any means, the only source or amplifier of Russian influence operations. We also do not believe the influence network monitored by the dashboard is the only Russian-linked influence network employed by the Kremlin and its proxies. This is but one sample in a wide-ranging population of Kremlin-oriented accounts that pursue audience infiltration and manipulation in many countries, regions, and languages via several social media platforms.

How to Interpret the Data

As we note on the dashboard’s front page: “Just because the Russia-aligned network monitored here tweets something, that doesn’t mean everyone who tweets the same content is aligned with Russia.” The presence of a hashtag on the Hamilton 68 dashboard does not necessarily mean that its success on Twitter is the result of Russian influence operations, nor does it mean that the hashtag originated within the network we monitor. These networks often participate in trending hashtags, topics, and URLs that they did not originate, and that would likely propagate without their support. For these reasons, we emphasize that it is INCORRECT to describe content linked to by this network as Russian propaganda. Rather, content linked to by this network is RELEVANT to Russian messaging themes, and is used for purposes of both insinuation and influence.

Importantly, the dashboard also does not analyze the spread of a hashtag across the entire Twitter platform; it only tracks the frequency of a hashtag’s use by the accounts monitored on the dashboard. As mentioned, these accounts are a sample of one network of Twitter accounts noted for their promotion of pro-Kremlin narratives. Data from the dashboard should therefore not be viewed as a representative sample of all Russian-linked accounts online. This means that the preponderance of themes on the dashboard that appear to be aimed at Americans on the right or far-right of the political spectrum should not necessarily be viewed as evidence that the Kremlin has disproportionately targeted that segment of the population. Rather, it simply serves as evidence of the network’s attempts to amplify and exploit themes that resonate with a specific audience of Americans.
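
In computational terms, the dashboard’s hashtag metric is a within-sample frequency count, not a platform-wide trend measure. The sketch below illustrates the distinction using a hypothetical set of tweet records and account handles; it is not the dashboard’s actual implementation.

```python
from collections import Counter

# Hypothetical tweet records: (author_handle, hashtags_in_tweet)
monitored_handles = {"@account_a", "@account_b"}  # the fixed sample of monitored accounts
tweets = [
    ("@account_a", ["#topic1", "#topic2"]),
    ("@account_b", ["#topic1"]),
    ("@unrelated_user", ["#topic1"]),  # outside the sample: ignored by the dashboard
]

# Count hashtag use ONLY within the monitored sample, as the dashboard does.
sample_counts = Counter(
    tag
    for author, tags in tweets
    if author in monitored_handles
    for tag in tags
)
print(sample_counts.most_common())  # [('#topic1', 2), ('#topic2', 1)]

# A platform-wide count of "#topic1" would be 3 here; the dashboard would report 2.
```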

Why Are Accounts Not Publicly Identified?

Since the launch of the dashboard, we have been repeatedly asked why we do not publicly identify the accounts we monitor. We have been transparent about our methodology and have stressed that not all of our accounts are based in Russia or directly controlled by the Kremlin. But it is worth reiterating the reasons why we believe it is not only prudent but necessary to refrain from publishing the identities of the monitored accounts.

First, “exposing” an account actively engaged in a disinformation campaign would almost certainly result in a change of behavior, essentially rendering the dashboard useless. It could also set off an endless game of whack-a-mole, in which operators of exposed accounts (particularly bot and cyborg accounts) abandon those accounts only to reappear quickly under new handles. Although exposure might temporarily disrupt activity within a network, the ease with which one can create a new pseudonym and open a new account would ultimately make the effort fruitless. It is also far easier to set up accounts for the purpose of spreading disinformation than it is to identify and track those accounts.

Second, the purpose of the dashboard is to understand how a pro-Kremlin network operates through the collection of aggregate data. Revealing accounts believed to be linked to influence operations would almost certainly shift focus to those individual accounts, which we believe is less informative and, for the reasons mentioned above, counterproductive.

Finally, as previously noted, we monitor accounts whose owners are extremely active in promoting Russian disinformation or Kremlin talking points, but who may or may not understand their role within the network. Although these accounts play an important role in the dissemination of disinformation and are thus worth studying, we do not believe it is ethical to publicly connect those accounts to disinformation campaigns. Additionally, while our metrics and subsequent manual review are extremely accurate in identifying accounts that are linked to or participate in influence operations, we have stated that as many as 2 percent of our accounts may not be relevant to such operations. We are not willing to publicly attribute even one account incorrectly.