Reshaping Public Opinion Through Computational Propaganda

Did you know that approximately 60% of the Twitter conversation surrounding major news events is now steered by bots? As a case in point, the Anti-Defamation League reported earlier this week that, after analyzing 7.5 million Twitter messages, it found that nearly 30% of the anti-Semitic attacks online are driven by bots. Relatedly, the Oxford Internet Institute recently analyzed 2.5 million tweets and 6,986 Facebook pages surrounding the midterm elections, and what did it find? That the amount of biased, hyperbolic, and conspiratorial “junk news” in circulation is actually greater than it was in 2016. In other words, not only is the quality of content on our digital platforms becoming increasingly murky, but it’s also becoming increasingly driven by automated accounts.

What’s worse, as Anne Applebaum points out in a Washington Post article published just yesterday, the real problem is that the online disinformation virus is spreading from niche pockets and beginning to seep into the mainstream. In the article “We have learned a lot about online disinformation — and we are doing nothing,” she writes that messages of this type are “no longer seen just by a small fringe, but they are much more likely to be consumed by mainstream users of social media.” (More later on the research into how these messages are shaping mainstream behavior.)

In marketing we often talk about “traditional” vs. “digital” strategies. In this post I will address the dark side of digital marketing, and how digital strategies have been co-opted to undermine authentic marketing efforts. Specifically, I’ll be diving into the topics of misinformation, disinformation, and computational propaganda. From there we’ll look at the latest research to see what the data science reveals, and how the conversations taking place online often differ drastically from the actual conversations happening offline. While much of the research I’ve come across relates to this particular election cycle, it serves as a good baseline for understanding how these tactics can be used against authentic marketing efforts in any industry.

Misinformation, Disinformation, and Computational Propaganda: A Primer

According to Renee DiResta, who investigates the spread of misinformation across social networks, the term “fake news” has lost all meaning, as it’s come to simply mean “news we don’t like” on the Internet. A better way of looking at the fake news problem is to understand the sources and methods of information sharing on the web.

In the Yale Review article “Computational Propaganda: If You Make It Trend, You Make It True,” DiResta explains the difference between misinformation, disinformation, and computational propaganda, which is really just a fancy phrase for gaming the platforms and tactics of marketing to spread disinformation online. Formally defined, computational propaganda is “the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks.” She explains:

Misinformation and disinformation are both, at their core, misleading or inaccurate information; what separates them is intent. Misinformation is the inadvertent sharing of false information; the sharer didn’t intend to mislead people and genuinely believed the story. Disinformation, by contrast, is the deliberate creation and sharing of information known to be false. It’s a malign narrative that is spread deliberately, with the explicit aim of causing confusion or leading the recipient to believe a lie.

In other words, amplifying disinformation is deliberately meant to sow chaos in a system, confuse people, and mislead.

As a former venture capitalist and someone who has advised Congress and the State Department about how to respond to attacks online, DiResta initially became interested in this line of work while looking at preschools in San Francisco for her first child back in 2013. Noticing that many preschools listed their vaccination rate percentages, she started looking into the anti-vaccination conversation online, and soon detected a heightened level of chatter surrounding a piece of California legislation having to do with the elimination of vaccine opt outs.

While the bill was polling at 85% positive, nearly 99% of the online conversation was negative. It was at that point that she mapped out the Twitter conversation and realized that the majority of the accounts leading the conversation were not based in California, and that the primary accounts participating in the conversation were fake and/or running automated strategies. There was also a great deal of harassment aimed at silencing real people whose opinions didn’t fit the desired agenda. She took this information to the politicians to show them that the conversation they were hearing online didn’t quite reflect their constituent base. Ultimately, the bill passed. DiResta cites this as one of the first real examples of a battle for public opinion fought online.
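To give a rough sense of how that kind of mapping might look in practice, here is a minimal Python sketch that flags accounts in a topic conversation which either report no in-state location or post at a rate that suggests automation. The account records, field names, and thresholds are all invented for illustration; this is not DiResta's actual methodology.

```python
# Hypothetical sketch: flagging accounts in a topic conversation that look
# automated or out-of-region. Records, field names, and thresholds are
# illustrative assumptions, not real data or a published method.
from collections import Counter

accounts = [
    {"handle": "parent_sf",       "location": "San Francisco, CA", "tweets_per_day": 4,   "account_age_days": 2100},
    {"handle": "health_freedom1", "location": "unknown",           "tweets_per_day": 310, "account_age_days": 45},
    {"handle": "vaxfacts_bot",    "location": "unknown",           "tweets_per_day": 520, "account_age_days": 12},
]

def looks_automated(acct, max_daily=144):
    """Crude heuristic: a very high posting rate on a very young account."""
    return acct["tweets_per_day"] > max_daily and acct["account_age_days"] < 90

in_state = sum(1 for a in accounts if "CA" in a["location"])
automated = [a["handle"] for a in accounts if looks_automated(a)]

print(f"{in_state}/{len(accounts)} accounts self-report a California location")
print("Possible automation:", automated)
print("Location breakdown:", Counter(a["location"] for a in accounts))
```

Real analyses layer many more signals (posting cadence, content similarity, network position), but even crude checks like these can show how different the loudest voices are from the constituents a lawmaker actually represents.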

SoundCloud (Image Source)

From a recent Aspen Institute podcast entitled “The Menace of Disinformation,” DiResta further elaborates on the ins and outs of this complex problem. She describes her work as follows:

I look at inauthentic narratives on the Internet. Most of what we’re looking at is people who have an agenda, and the actors who have the agenda can be state actors, as we saw with Russia; it can be terrorist organizations, pushing narratives of violent extremism; it can be domestic ideologues, many of whom have a very legitimate position, but choose to use artificial amplification to spread their message; and sometimes it’s just economic actors who are interested in pushing a particular narrative through ad fraud or spam or merchandising reasons.

In this context, the most common inauthentic narratives we see in marketing play out as ad fraud or spam. One example is purchasing fake followers, or buying fake clicks on social media, to inflate engagement numbers.
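As a simple illustration of how inflated engagement can stand out in the numbers, here is a small sketch that compares each account's average engagement to its audience size. The profiles and thresholds below are made up; real fraud detection is considerably more involved.

```python
# Illustrative sketch only: spotting engagement numbers that look inflated
# relative to audience size. The accounts and thresholds are invented.
profiles = [
    {"handle": "organic_brand",  "followers": 12_000,  "avg_likes_per_post": 300},
    {"handle": "bought_reach",   "followers": 480_000, "avg_likes_per_post": 150},
    {"handle": "click_farm_fan", "followers": 900,     "avg_likes_per_post": 4_500},
]

for p in profiles:
    rate = p["avg_likes_per_post"] / p["followers"]
    # Engagement far below the audience size can suggest purchased followers;
    # engagement far above it can suggest purchased clicks or likes.
    if rate < 0.001:
        verdict = "suspiciously low engagement (possible fake followers)"
    elif rate > 1.0:
        verdict = "engagement exceeds audience (possible fake clicks)"
    else:
        verdict = "within a plausible range"
    print(f"{p['handle']}: {rate:.2%} per post -> {verdict}")
```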

What the Data Science Reveals

In the digital ecosystem, we know there are forces at work every single day to manipulate search results, steer conversations, reframe issues, and expose people to specific views and ideas. We’re in a competition for ideas, a battle for public opinion, and engaging with social media can sometimes feel like living in the Wild Wild West. But what do Americans really believe, and where do online conversation patterns diverge from the real-life discussions and beliefs of “average” Americans? Are we really as divided and polarized as we’ve come to believe?

Hidden Tribes of America Project

Earlier this month I came across “Hidden Tribes of America,” a year-long research project by More in Common that surveyed 8,000 people statistically representative of the U.S. population based on Census data. Using hierarchical clustering to group people according to their core beliefs, the researchers found that we may not be living in the polarized America we think we’re in.

Hidden Tribes
(Image Source)

Only 8 percent of the population was classified as “far left,” and 6 percent as “far right.” The findings indicated that the vast majority of Americans actually fall into the “exhausted majority.” Additional focus groups revealed that “most Americans are tired of this ‘us-versus-them’ mindset and are eager to find common ground.” As it turns out, findings from various data scientists studying conversation patterns online generally line up with the Hidden Tribes research. In the first example below, polarizing viewpoints shared on Twitter are less common than they appear; they just tend to get amplified more. And in the second example, many of the most polarizing accounts turned out to be fabricated personas based outside of the United States.
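For readers curious about the method, here is a minimal sketch of the kind of hierarchical clustering the Hidden Tribes researchers describe, run on made-up survey responses rather than their data. It simply groups respondents whose answer patterns are similar.

```python
# A minimal sketch of hierarchical clustering on survey data, in the spirit of
# the Hidden Tribes approach. Rows are respondents, columns are agreement with
# core-belief statements on a 1-5 scale. All data here is synthetic; this is an
# illustration of the general technique, not the study's actual pipeline.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Three synthetic "belief profiles" plus noise, 30 respondents each
profiles = np.array([[1, 1, 5, 5], [3, 3, 3, 3], [5, 5, 1, 1]])
responses = np.vstack([
    np.clip(p + rng.normal(0, 0.6, size=(30, 4)), 1, 5) for p in profiles
])

# Ward linkage groups respondents with similar answer patterns
tree = linkage(responses, method="ward")
labels = fcluster(tree, t=3, criterion="maxclust")

for cluster_id in sorted(set(labels)):
    size = int((labels == cluster_id).sum())
    print(f"cluster {cluster_id}: {size} respondents ({size / len(labels):.0%})")
```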

Twitter May Not Be the Echo Chamber We Think It Is

In the first example, MIT Sloan Management Review reports that Twitter may not be the echo chamber that we think it is, noting that the majority of Americans tend to share politically moderate content, with a tiny network of highly circulated accounts sharing more polarizing views.

The study, published in Management Information Systems Quarterly, started from the common assumption that social media fuels polarization through an echo chamber effect. But after Jesse Shore and Chrysanthos Dellarocas of Boston University and Jiye Baek of the Hong Kong University of Science and Technology analyzed a complete cross-section of tweets on Twitter, they found that politically charged tweet activity was much more limited in scope than they expected. They write:

Contrary to prediction, we find that the average account posts links to more politically moderate news sources than the ones they receive in their own feed. However, members of a tiny network core do exhibit cross-sectional evidence of polarization and are responsible for the majority of tweets received overall due to their popularity and activity, which could explain the widespread perception of polarization on social media.

In short, the study found localized evidence of polarization, but no widespread evidence of echo chambers. You can read their full research here.
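The core comparison in the study, whether an account posts links that are more politically moderate than the links arriving in its feed, can be sketched in a few lines. The slant scores below are invented for illustration and are not the paper's data or method.

```python
# Hedged sketch of the posted-vs-received comparison described above.
# Slant scores are invented: -1 = far left, 0 = center, +1 = far right.
accounts = {
    "user_a":         {"received": [-0.8, -0.7, -0.9, -0.6], "posted": [-0.3, -0.2]},
    "user_b":         {"received": [0.9, 0.7, 0.8],          "posted": [0.4, 0.1, 0.2]},
    "core_amplifier": {"received": [0.6, 0.7],               "posted": [0.9, 0.95, 0.9]},
}

def mean(xs):
    return sum(xs) / len(xs)

more_moderate = 0
for name, links in accounts.items():
    received, posted = mean(links["received"]), mean(links["posted"])
    trend = "more moderate" if abs(posted) < abs(received) else "more extreme"
    print(f"{name}: feed slant {received:+.2f}, posted slant {posted:+.2f} ({trend})")
    more_moderate += abs(posted) < abs(received)

print(f"{more_moderate}/{len(accounts)} accounts post more moderately than their feed")
```

In this toy example, most accounts post more moderately than their feeds while a single highly active "core" account skews extreme, which mirrors the pattern the researchers describe.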

Black Lives Matter and the IRA

In a second example, Kate Starbird, a University of Washington professor whose research focuses on human-computer interaction (HCI) and the emerging field of crisis informatics, recently wrote a blog post summarizing her team’s research on the Black Lives Matter conversation online. In studying the internet discourse around the #BlackLivesMatter movement, her team drew some interesting parallels.

In 2016 they took a meta-level view of more than 66,000 tweets and 8,500 accounts that were highly active in that conversation, creating a network graph. The graph revealed two distinct clusters of highly polarized activity. However, it wasn’t until November 2017, when the House Intelligence Committee released a list of Twitter accounts found to be associated with Russia’s Internet Research Agency (IRA), that they began to make another connection: many of the same accounts posting polarizing content in the #BlackLivesMatter conversations were also IRA accounts. Starbird writes:

Looking over the list, we recognized several account names. We decided to cross-check the list of accounts with the accounts in our #BlackLivesMatter dataset. Indeed, dozens of the accounts in the list appeared in our data. Some—like @Crystal1Johnson and @TEN_GOP—were among the most retweeted accounts in our analysis. And some of the tweet examples we featured in our earlier paper, including some of the most problematic tweets, were not posted by “real” #BlackLivesMatter or #BlueLivesMatter activists, but by IRA accounts.

Once they overlaid the tweet activity from the #BlackLivesMatter dataset with the Russian IRA accounts, they noticed an interesting pattern. Rather than being heavily skewed to one side, the conversation was punctuated by strong activity on both sides. In other words, the pattern looked less like an attempt to sway people in one direction and more like an attempt to infiltrate and usurp opposing narratives.

Graph Data
(Image Source)

Citing the graph data, she continues:

As you can see, the IRA accounts impersonated activists on both sides of the conversation. On the left were IRA accounts like @Crystal1Johnson, @gloed_up, and @BleepThePolice that enacted the personas of African-American activists supporting #BlackLivesMatter. On the right were IRA accounts like @TEN_GOP, @USA_Gunslinger, and @SouthLoneStar that pretended to be conservative U.S. citizens or political groups critical of the #BlackLivesMatter movement.

You can read the full blog post here.
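The cross-check Starbird's team describes, intersecting a conversation dataset with a released list of IRA handles and seeing how the matches split across the two clusters, is conceptually simple. The handles below come from her post, but the dataset rows and cluster labels are invented for illustration.

```python
# Rough illustration of the cross-check described above: intersect a
# conversation dataset with a list of known IRA handles, then count how the
# matches fall across the two polarized clusters. Dataset rows are invented.
ira_handles = {"@Crystal1Johnson", "@TEN_GOP", "@gloed_up", "@USA_Gunslinger"}

dataset = [
    {"handle": "@Crystal1Johnson", "cluster": "pro-#BlackLivesMatter"},
    {"handle": "@local_activist",  "cluster": "pro-#BlackLivesMatter"},
    {"handle": "@TEN_GOP",         "cluster": "anti-#BlackLivesMatter"},
    {"handle": "@USA_Gunslinger",  "cluster": "anti-#BlackLivesMatter"},
    {"handle": "@genuine_user",    "cluster": "anti-#BlackLivesMatter"},
]

matches = [row for row in dataset if row["handle"] in ira_handles]
by_side = {}
for row in matches:
    by_side[row["cluster"]] = by_side.get(row["cluster"], 0) + 1

print(f"{len(matches)} of {len(dataset)} accounts appear on the IRA list")
for side, count in by_side.items():
    print(f"  {side}: {count} IRA-linked account(s)")
```

If IRA-linked accounts show up prominently in both clusters, as they did in Starbird's data, the goal was evidently not persuasion toward one side but amplification of the conflict itself.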

A Path Forward

In short, whether it’s a foreign agenda, a terrorist push, a blindly partisan advocate promoting a particular ideology, or, in the case of marketing, most often a fierce economic interest looking to gain traction for a new product or service, the fact is that conversations and communities are continually being exploited online for nefarious purposes. And it’s being accomplished, in part, through a method called computational propaganda.

As marketing and public relations professionals, we are often the gatekeepers, framers, amplifiers, and distributors of information. As computational propaganda efforts grow ever more sophisticated, it’s up to marketing professionals to stay ahead of the curve and learn how to both spot and combat these efforts, especially when it comes to defending their brands. It’s ultimately up to us to know the difference between real customers and inauthentic actors, to minimize the noise, and to support and promote authentic discourse.

One good place to start is by following startups like the Berkeley-based RoBhat Labs. Want to find out which accounts in your Twitter timeline are most likely bots? Check out the company’s first product at Botcheck.me. They have also recently launched a second tool aimed at news organizations, called FactCheck.me, which allows journalists to see how much bot activity there is across an entire topic or hashtag.

Defending your company’s brand against these next-level strategies will require new skills and a new level of understanding about how to operate effectively in the digital realm. If you’re interested in staying up to date on this topic, feel free to follow the new Twitter list I’ve created here, which includes many of the researchers and organizations mentioned in this post.
