When I was in college, I had a friend who was a stand-up comedian. Because he was into improv comedy, we often played a game in public: building on each other’s imaginary tales with increasing wit and sass, even as the stories grew less and less likely. For me it was entertainment; for him, it was also practice with a live audience. (Worth noting: this friend later went on to create an underground satire publication at the university we attended.)
Whenever we played this game, I noticed that people would begin listening with quiet curiosity and awe, wondering if our stories could possibly be true. Before long, other friends would chime in, and eventually we could get most people to believe nearly anything. What started out as a fun game became an important lesson about truth-telling and persuasion.
Now, we’ve all heard about the concept of “willing suspension of disbelief” in theater. But what happens when an entire society puts its suspicions (or inner knowing) about the world on hold for the sake of wanting to believe, or at least trust, the stories it is being given? And what happens when everywhere you look, the stories you already believe about the world are being reinforced: by the news shared in your social media feeds, by Google search results shaped by your previous search history, and by the friends’ opinions showing up with ever more frequency in your digital space, thanks to algorithms designed to give you more of what you already like?
Welcome to the echo chamber, folks: a phrase that has become popular of late, and which Wikipedia defines as follows:
In news media, the term echo chamber is analogous with an acoustic echo chamber, where sounds reverberate in a hollow enclosure. An echo chamber is a metaphorical description of a situation in which information, ideas, or beliefs are amplified or reinforced by communication and repetition inside a defined system.
Put another way: what happens when your Google searches and social media news feed algorithms reflect not only an increasingly narrow slice of reality, but one supported and reinforced by advertiser, corporate, and political agendas, each craftily using digital messaging strategies that play right into and build upon your pre-existing beliefs? Of course, this is already happening. And this is where marketing steps in.
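To make the feedback loop concrete, here’s a purely illustrative toy model (not any real platform’s algorithm; all names and numbers are my own assumptions) of how a recommender that learns from your past engagement can narrow what you see:

```python
import random

random.seed(42)

def recommend(history, catalog, bias_strength=0.8):
    """Toy recommender: with probability `bias_strength`, serve the item
    closest to the user's average past engagement; otherwise serve at random."""
    if history and random.random() < bias_strength:
        center = sum(history) / len(history)
        return min(catalog, key=lambda item: abs(item - center))
    return random.choice(catalog)

# Catalog of items spread across a -1.0 .. +1.0 "viewpoint" spectrum.
catalog = [i / 10 for i in range(-10, 11)]

history = [0.3]  # the user starts with a mild lean
for _ in range(50):
    item = recommend(history, catalog)
    # The user engages only with items near their current average view,
    # so the history the recommender learns from keeps narrowing.
    if abs(item - sum(history) / len(history)) < 0.3:
        history.append(item)

spread = max(history) - min(history)
print(f"items engaged: {len(history)}, viewpoint spread: {spread:.2f}")
```

Even in this crude sketch, the engaged items cluster in a narrow band around the user’s starting lean, while the full catalog spans the whole spectrum.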
Post-Truth vs. No Truth
What I’ve just described makes up the second half of one huge problem I see unfolding across this “post-truth” nation (a follow-up to my previous post “Ruminations on the Post-Truth Era and its Implications for Marketing”): the inability of individuals to separate what they’re being told from what they actually know, don’t want to believe, or won’t take the time to fact-check, whenever they face choices that lead them down a particular decision-making path.
In marketing, this concept of moving an individual from point A to point B is known as the buyer’s journey. It’s a marketer’s job to understand the steps a customer takes toward a decision about a product or service, in both the physical and digital landscape, and to intervene in ways that shift opinions and beliefs in the marketer’s favor. Once the journey is understood, the marketer injects strategies along the customer’s path to purchase, each aimed at moving them toward a particular idea or decision. And marketers are getting better at this every single day.
Below is a simple example of the stages along the buyer’s journey, with examples of how to reach people through marketing tactics at each stage:
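The stage-to-tactic pairing can also be sketched as a simple mapping. The stage names below (awareness, consideration, decision) are one common framing of the journey, and the tactics listed are illustrative examples of my own choosing, not a definitive framework:

```python
# Hypothetical sketch: common buyer's-journey stages mapped to example tactics.
BUYERS_JOURNEY = {
    "awareness": ["blog posts", "social media ads", "SEO content"],
    "consideration": ["email nurture sequences", "webinars", "comparison guides"],
    "decision": ["free trials", "case studies", "retargeting ads"],
}

def tactics_for(stage):
    """Return the example tactics for a given journey stage (empty if unknown)."""
    return BUYERS_JOURNEY.get(stage.lower(), [])

print(tactics_for("Consideration"))
```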
In fact, not too long ago I wrote about how marketers use media mix optimization to create engagement and shift beliefs in a post entitled “How Targeted Content & Media Mix Optimization Combine to Create Effective Customer Engagement.” I urge you to check it out if you want to learn more about this process.
On a related note, you may also enjoy this TEDxUniversityofNevada talk from veteran journalist Sharyl Attkisson about the concept of astroturf and the manipulation of media messages:
The Truth About Internet Search
To better grasp the scale of this post-truth vs. no-truth problem, here’s an example best told through another author’s personal experience with Google’s predictive search results. In an article from The Guardian entitled “Google, democracy and the truth about internet search,” the author shares what she found while using Google predictive search. She writes:
I feel like I’ve fallen down a wormhole, entered some parallel universe where black is white, and good is bad. Though later, I think that perhaps what I’ve actually done is scraped the topsoil off the surface of 2016 and found one of the underground springs that has been quietly nurturing it. It’s been there all the time, of course. Just a few keystrokes away… on our laptops, our tablets, our phones. This isn’t a secret Nazi cell lurking in the shadows. It’s hiding in plain sight.
What she’s describing in the article is this: everyone in business wants to know how their company, product, or service can appear on page one of Google’s search results. That drive for relevance creates a competition in which some will do whatever it takes to stand out above the noise, and of course it’s not always the most relevant voices in the crowd that actually get heard. This becomes a problem when certain forces work ad nauseam to gain relevance in the marketplace, when in essence their ideologies are, at best, only partly relevant.
Consider the fact that humans account for barely half of all web traffic. From a study quoted by the Atlantic: “In 2015, humans were responsible for nearly 52 percent of all online traffic. Two years ago, humans drove less than 39 percent of overall web traffic.” The rise of bots compounds the problem. Did you know?
A social bot is a computer algorithm that automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior. Social bots have inhabited social media platforms for the past few years.
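Real bot detection, as the Ferrara article discusses, relies on rich behavioral and network features. Purely as an illustration of the idea, here’s a minimal sketch of one crude signal (posting frequency); the threshold and data are my own invented assumptions:

```python
from datetime import datetime, timedelta

def looks_automated(timestamps, window_hours=24, threshold=200):
    """Crude heuristic: flag an account that posts more than `threshold`
    times within any rolling `window_hours` window. Real detectors use
    far richer feature sets than raw frequency."""
    timestamps = sorted(timestamps)
    window = timedelta(hours=window_hours)
    for i, start in enumerate(timestamps):
        # Count posts falling inside the window that opens at `start`.
        count = sum(1 for t in timestamps[i:] if t - start <= window)
        if count > threshold:
            return True
    return False

base = datetime(2017, 1, 1)
human = [base + timedelta(hours=3 * i) for i in range(40)]    # ~8 posts/day
bot = [base + timedelta(minutes=5 * i) for i in range(300)]   # ~288 posts/day
print(looks_automated(human), looks_automated(bot))  # False True
```

A frequency check alone is easy for bot authors to evade, which is precisely why the research literature combines many such signals.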
If enough of this happens, pretty soon you have a marketplace filled with false ideologies and bad stories that lead to worse outcomes. Check out this video (with a link to the scholarly journal article by Emilio Ferrara) from the Association for Computing Machinery:
It’s a great journal article, and I highly recommend reading it to learn more about the nuances of what’s happening with bots in the digital marketplace.
When Society Just Can’t
Adding to the ideas I’ve already laid out, I also came across a recent NPR article, shared by Elon Musk on Twitter, entitled “Study finds students have dismaying inability to tell fake news from real.” Worth noting: Musk aptly introduced the article with the question, “how do we know THIS news isn’t fake?” Good point, Elon. The article states:
“Many assume that because young people are fluent in social media they are equally savvy about what they find there,” the researchers wrote. “Our work shows the opposite. What we see is a rash of fake news going on that people pass on without thinking,” he said. “And we really can’t blame young people because we’ve never taught them to do otherwise.”
Do a little research and you’ll find multitudes of stories like this being written across the journalism community, sounding alarms in virtually every major publication: from NPR (case above) to the Atlantic, the Guardian, the Huffington Post, PBS, Wired and more. Collectively, these articles are working to define the problem and help us understand the reasoning behind our conscious and unconscious decision-making processes. The question is: where do we go from here? Perhaps that shall be a separate post. In the meantime, I would love to hear YOUR ideas!