What we do know is that digital advertising provides far less revenue to news organizations than traditional print and TV ads — with serious consequences. News organizations, particularly local outlets, have shed reporters at a precipitous rate. Newspaper publishers employed 455,000 people in the news business in 1990; by early 2017, that figure had dropped by more than 60 percent to 173,900 employees. Most remaining media organizations and new Internet media companies are firmly based in major metropolitan areas, though even they are struggling to stay in the black.
Media outlets are shifting to new business models, whether relying more on revenue from subscribers, organizing events, or being bought by a billionaire. BuzzFeed has, for example, developed a nine-box model that combines multiple revenue streams from advertising, commerce, and studio development. But there is, as yet, no sure solution. The business model crisis makes news organizations more susceptible to disinformation: outlets chase eyeballs to secure ad dollars; there is a push for quantity of articles over quality; cost-cutting in the newsroom has reduced the number of editors who check copy before it is posted.
Revenue issues have intertwined with the second crisis of norms in journalism. Older journalistic standards like neutrality, balance, or objectivity — what media critic Jay Rosen in 2003 called “the view from nowhere” — developed over the 20th century and seemed so natural that we often forget their relatively recent origins. The rise of social media and online media outlets challenged these older norms in several fundamental ways. Rather than news organizations setting the agendas for readers, organizations increasingly pull their content from social media, writing articles around whatever post, photo, or tweet has just gone viral. Ordinary people can now easily publish on platforms like LinkedIn or on sites like Huffington Post that rely on citizen journalists (though Huffington Post itself has just moved to a more curatorial model). Readers can also push back against journalists in comment sections and on social media. News articles no longer exist in a vacuum without reader responses. This can create vigorous and meaningful debate; it can also give oxygen to extreme views as well as create space for abuse and misuse by bots and trolls.
The convergence of these two crises has made journalism particularly vulnerable to foreign attempts to spread disinformation. The systemic issues underlying these two crises deserve urgent attention. No one policy brief can overturn the economics of news or upend the culture of journalism. We recognize the constraints of media organizations and the difficulties of making sweeping changes. However, we can implement immediate incremental shifts to foster more responsible journalism.
Here, we focus on tactics over strategy and on solutions that can be applied tomorrow. We suggest some simple best practices to help journalists and editors avoid playing an unintentional role in information warfare and to increase trust in journalism. Our recommendations fall into three categories: how to detect disinformation, how to increase literacy about foreign interference, and how to anticipate future problems today.
1. How to Detect Disinformation
There are many types of deliberate disinformation. Some are domestic and some are foreign. The two most successful types of coordinated and deliberate foreign disinformation have been weaponized information and fake personae.
Weaponized information includes both leaking information and amplifying (dis)information designed to sow distrust or create discord. Journalists are key to foreign (dis)information campaigns based on leaked or hacked information dumps. These campaigns only succeed if journalists amplify the dumps by reporting on their contents. While some people will pick through everything released in a data dump, the vast majority only hear about leaked or hacked information through traditional media organizations’ reporting.
Journalists have of course always relied on leaked information and are well equipped to assess their sources’ reliability. Yet new forms of leaks through hacking create new dilemmas. Journalists can seize on the information disclosed in a hack, but it is harder to assess whether the source is reliable or the hacked information authentic when they have no direct contact with the leakers.
In many ways, journalists would do well to go back to basics with weaponized information. Remember that a source of information has an agenda, and that agenda matters. Journalists are used to dealing with sources who have an axe to grind. We are not asking journalists to stop being journalists. But journalists may be pawns in a bigger chess game. Hacking operations by states or non-state actors can only successfully weaponize information if journalists publicize the contents of the hacks. When journalists report on leaked information posted online by outlets like WikiLeaks, it is important to act with extreme caution. Consider whether stories on that type of material deserve the prominence they received during the 2016 election campaign. Many readers may see stories online, but page one still sets the agenda.
Remember also that a leaked data dump may contain falsified information. Emmanuel Macron’s campaign even deliberately included erroneous information to invalidate any hacks of its materials. We also know that at least one hacked DNC email released by WikiLeaks was altered prior to release. As with any other information journalists use, verifying and confirming the veracity of any leaked information is key. If they do report on the content of hacks, journalists might contextualize the information further by noting possible reasons for the hack, such as influencing voters.
One possible approach is to cover the story of the data dump itself, rather than its contents. French journalists took this approach when data from Macron’s campaign was hacked and released just before the election. This informed the public but did not amplify potentially falsified content. This approach may also help to deter future hacks because it defeats their purpose.
Fake personae are the second key tool in foreign disinformation campaigns. We now have ample evidence of fake social media accounts and fake freelance journalists publishing articles in online outlets. These fake personae have duped major media organizations as well as users. Twitter estimates that there were over 50,000 Russian-linked social media accounts active on its platform during the 2016 election campaign. Twitter is now notifying users who followed, favorited, retweeted, or replied to these accounts. Those notified range from Senator John Cornyn (R-TX) to major media outlets that featured tweets from these fake figures. (Note that this does not necessarily include everyone who saw or engaged with this content.)
Media outlets featured these tweets in multiple ways: they embedded them in articles as examples of how “ordinary people” reacted to news; they quoted them to showcase myriad opinions on the election; they embedded the particularly funny or pithy tweets to increase hits. To take one example, journalists often have to produce a reax story (reaction story) on an event within two hours, so they pull some pithy tweets. It’s easy; it generates thousands of clicks; it probably fulfills the journalists’ quota for stories that day. But it’s also dangerous. Social media are key sites for information laundering, where Russian-linked groups post and amplify information but maintain plausible deniability about Russian involvement. An account might seem to be real, like that of Jenna Abrams, who seemed like a Trump-supporting all-American woman. She later turned out to be a persona fabricated by the Internet Research Agency, the St. Petersburg-based Russian troll farm. One study found that 32 out of 33 major American news outlets featured tweets from the Internet Research Agency in their stories as evidence of American partisan opinion. When news articles embed these types of tweets, they may make readers think that certain viewpoints enjoy more widespread support than actually exists.
The incentives — speed and clicks — make it hard to change the behavior that led media outlets to feature those tweets.
Reax stories are not going away, but there are simple ways to make them more reliable. Embedding tweets from “ordinary people” is really an updated version of the “vox pop” or “man on the street” quotation that we have seen in articles for decades. Journalists often use “vox pop” to display different or opposing points of view on issues. There were informal guidelines and norms about whom to interview on the street and how. We suggest creating equivalent guidelines to avoid being duped by fake social media accounts again.
These guidelines need not be complicated. Instead, media organizations can establish simple procedures for verifying which social media posts to feature or which freelance journalists to publish. This could be a checklist or it might be one simple step. For example, outlets might commit to only embedding tweets from people whom journalists have contacted on the phone. Verification on Twitter is not enough to ensure a user’s reliability. Nor is a written reply to a direct message. Instead, it is important to contact Twitter users briefly on the phone to check their identity before posting their tweets. It is better to feature fewer tweets because journalists cannot reach a user by phone than to amplify fake figures.
There are also ways to embed verified social media posts more responsibly. Organizations could simply quote tweets rather than embedding them. If news outlets do embed tweets, they might consider cutting out the portion showing replies/retweets/favorites. The Financial Times has started to do this. It avoids providing a potentially inaccurate snapshot in time; it also avoids over-legitimizing tweets by providing social proof of the number of retweets or favorites; it avoids reflecting misleading numbers because many retweets or favorites may have come from bots.
Still, news organizations often need to report on extreme or suspicious figures, such as neo-Nazis. In those cases, organizations might consider simple techniques to avoid further amplification. Don’t link back to their tweets or posts; use a screenshot or quote them instead. Don’t include their Twitter handle. Ensure that any links which must remain carry the rel="nofollow" attribute, so that search engines do not treat them as endorsements or pass ranking authority to those sites.
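For readers who maintain a news site’s templates, the nofollow hint mentioned above is a standard HTML link attribute. A minimal sketch of a template helper that emits such links might look like the following (the function name is our own illustration, not an established API):

```python
from html import escape

def nofollow_link(url: str, text: str) -> str:
    # Build an anchor tag marked rel="nofollow", which asks search
    # engines not to treat the link as an endorsement of its target.
    # "noopener" is a common companion value for external links.
    return (
        f'<a href="{escape(url, quote=True)}" '
        f'rel="nofollow noopener">{escape(text)}</a>'
    )

# Example: linking to a post without endorsing its author
print(nofollow_link("https://example.com/post/1", "the original post"))
# → <a href="https://example.com/post/1" rel="nofollow noopener">the original post</a>
```

In practice this logic usually lives in the content management system’s link filter, so individual journalists never have to think about it.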
The same guidelines might apply to publishing freelance articles. It takes only a few moments to speak with an author on the phone to verify their identity, and those few moments can protect years of credibility by keeping fake freelancers out of print.
Journalists and editors can avoid amplifying disinformation. They can also go beyond avoidance by taking positive steps to engage their audiences and educate them about Russian attempts to interfere in American democracy.
2. How to Increase Literacy about Foreign Interference
Many discussions about combatting Russian government attempts to undermine U.S. democracy call for greater media literacy. This is meant to build resiliency against manipulated social media or falsified information by teaching citizens how the media function and how to identify fake news. Often, these discussions imply that media organizations should take on the task of media literacy. The evidence suggests, however, that this kind of education needs to be undertaken by governments or civil society, for example by including a media module in high school social studies.
Media organizations, however, should focus on story literacy rather than media literacy. Story literacy focuses on how to help users understand a particular story, rather than how to understand media as a whole. Story literacy means that media organizations take responsibility for helping their consumers understand complex and developing stories, such as Russian attempts to undermine American democracy.
Russian malign influence operations are complicated, murky, and often hard to understand because they take on multiple forms. Story literacy on these issues is not just a matter of civic responsibility for news outlets. It can also generate greater user engagement, because users will turn to an outlet for digestible, comprehensible, and compelling information.
Story literacy can take many forms. It means repeating, summarizing, and reminding. Remember that most readers dip in and out of following stories. As media critic Jay Rosen has put it, journalists need to remember that most people are walking into the movie halfway through and need a plot summary to get them up to speed. It may be less exciting than a scoop, but a 200-word article summarizing the Trump-Russia story may be more useful.
Outlets can also create more frameworks and timelines for complex stories like financial transactions and potential money laundering by figures linked to the Russian government. The Washington Post creates good network diagrams of key figures, showing their photographs and the links between them. Or organizations might create a dedicated vertical for Russian activities, as NewsDeeply does for topics like Syria. This would create space for more stories around this complicated topic and make it easier for users to find background information and get up to speed swiftly. The Guardian now embeds explanations in the form of Q&A within complicated stories. Vox’s explainer cards are another example of easy-to-read and simple ways to break down complicated topics.
Part of story literacy can also be debunking. Debunking is an uphill battle. It might not even be winnable: the debunkings of the top 50 fake stories on Facebook in 2017 generated about 0.5 percent of the engagement of the fake stories themselves. But fact checks do still change minds. Leticia Bode and Emily Vraga’s study on changing misperceptions about Zika found that fact checking resulted in a 10 percent decrease in overall misperceptions. Outlets could consider creating regular debunking stories and easily shareable debunking memes. If the fact checks and debunkings change a few people’s minds, they are still worth it.
Finally, story literacy also means greater transparency. Nearly a decade ago, David Weinberger suggested that journalists take transparency as their highest value rather than objectivity. This suggestion rings even truer today. Transparency can increase trust in reporting; it can also guard against manipulation. Transparency is not simply a service; it can drive readership, viewership, and listenership too.
We suggest doubling down on two types of transparency: in reporting practice and in reporting procedure.
There are many swift and simple ways to bolster transparency in reporting practice. One is to participate in the Trust Project, a new initiative that pushes for standardized disclosures in news articles, such as information on the journalist’s expertise and the sources used. Consider moving to these standards, such as better story labeling, sooner rather than later. It is also possible to introduce better disclosure practices for freelancers. The Conversation is a site where academics publish their research in op-ed format. The academics are required to disclose their sources of funding and any possible conflicts of interest. Other news organizations might adopt those disclosures for freelancers to guard against fake freelancers or other manipulation.
Individual journalists can also say more within stories about how they decide whether sources are trustworthy or how they found a source (within the boundaries of what journalists can reveal). Some mention of sources has of course long been standard practice. But journalists might remember that readers do not necessarily understand phrases like “off the record” or “a source close to X.”
Journalists can match more transparency within stories with more transparency in overall procedure. Transparency in procedure means explaining to readers more about how journalists do what they do. This is particularly important for stories on Russia where many of the sources will only speak on deep background or off-the-record. Readers might love a story about how journalists decide whether to trust a source or a story about an article that journalists could not write because they could not verify source material.