Whenever I hear the term “fake followers,” I can’t help but think of this scene from a certain Taylor Swift music video where all of the girls are covered in plastic (and hence, fake):
The question is: does your social media audience look like this? And would you even care if it did? What if you could put this army of followers to work, fulfilling specific commands, executing a master agenda, and generating targeted activity online, not only mimicking authentic human interaction but also inflating analytics numbers to show higher levels of engagement for your brand? News flash: it's already been happening for years.
If you've ever wondered about fake followers, where they come from, and what purpose they serve, then this is the post for you! My focus will be on the implications of a fake follower audience for advertising fraud and disinformation, and why marketers should care. I'll explore what fake followers are, why they have become a problem, how their ecosystem thrives, and what marketers can do about them.
What is Ad Fraud and Bot Fraud?
According to research from WhiteOps—a cybersecurity company that protects digital advertisers and web app owners from ad fraud and other forms of automated threats—brands lost an estimated $7.5 billion due to ad fraud in 2016.
According to Wikipedia, ad fraud (also referred to as invalid traffic) is the practice of fraudulently representing online advertisement impressions, clicks, conversions, or data events in order to generate revenue. Bot fraud is a type of ad fraud in which bots, automated entities capable of consuming any digital content, including text, video, images, audio, and other data, do the work.
Because advertising spend usually makes up 20–50% of a marketing budget, when it comes to ad fraud, most marketers are concerned with:
- Wasted ad spend: paying for views and clicks that do not originate from an actual human
- Bad advertising placement: i.e. having your product’s ad show up just before an ISIS recruitment video
- Weakened reach: for example, missing the mark on connecting with your target audience by utilizing influencers with mostly fake followings
- Diminished ROI: with bot blocking technology in place on a campaign, it’s been found that engagement rates can increase by as much as 22%
Who or What Are Fake Followers?
Fake followers can either be real people operating fabricated social media accounts to support a particular purpose or agenda, or bots, which are automated social media accounts that pose as real people. According to WhiteOps, the most pervasive form of ad fraud today is the use of bots, or non-human traffic.
Want to know how to spot a bot? Check out this Medium article from the Atlantic Council’s Digital Forensic Research Lab, which offers some great tips.
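Many of the commonly published tips for spotting bots boil down to simple red flags: implausibly high posting frequency, a default avatar, a lopsided follower-to-following ratio, and a very young account. Here's a minimal sketch of that idea as a scoring heuristic; the `Account` fields and all of the thresholds are illustrative assumptions of mine, not any lab's actual methodology:

```python
# A toy bot-likeness score: one point per red flag.
# All field names and thresholds are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float      # average posting frequency
    followers: int
    following: int
    has_default_avatar: bool
    account_age_days: int

def bot_score(acct: Account) -> int:
    """Return a rough 0-4 suspicion score; higher means more bot-like."""
    score = 0
    if acct.posts_per_day > 72:               # dozens of posts per day, every day
        score += 1
    if acct.has_default_avatar:               # never bothered to set a photo
        score += 1
    if acct.followers < acct.following / 10:  # follows far more than it attracts
        score += 1
    if acct.account_age_days < 30:            # brand-new account
        score += 1
    return score

suspicious = Account(posts_per_day=150, followers=12, following=2000,
                     has_default_avatar=True, account_age_days=5)
print(bot_score(suspicious))  # → 4: every red flag fires
```

A high score doesn't prove an account is a bot, and a low score doesn't prove it's human; as discussed below, sophisticated bots actively mimic human behavior. A heuristic like this is only a starting point for a closer manual look.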
Exploring the Fake Follower Ecosystem
The ecosystem of ad fraud is extremely complex, but with the help of WhiteOps co-founder Michael Tiffany, a technical understanding is attainable. In this recent video, Tiffany covers the bot landscape, the money behind it, and how this type of fraud affects advertising specifically. This is hands down the best presentation I’ve ever seen on this topic, and I encourage you to watch it:
In this presentation Tiffany explains that all of this is happening right now because everyone is interested in an engaged audience and they will get it any way they can. According to Tiffany, “no one can deliver a firehose feed of real human attention.”
When real sites need more traffic, they buy it from third parties. Publishers do this to maintain an edge, so they turn to suppliers who can deliver an audience. Fake views are sourced through the very same ecosystem everyone else is using: bots often overlay their activity on top of real human interaction (this is the real reason criminals want to hack your grandma’s computer). Hence, there is no easy way to carve them out. And the game is constantly changing.
To make money, the “bad guys” simply set up fake websites to create fake traffic that looks like real human activity in the numbers. Tiffany notes that botnets today are wickedly good at hiding and faking engagement. Bots disguise themselves as humans using human computers: they rely on regular people’s computers to execute. Tiffany says that most people vastly underestimate the sophistication of these adversaries. Because of their hidden nature, but also because of cognitive dissonance, this problem has largely been able to thrive.
What’s the Big Deal Anyway?
Not only are brands losing billions of dollars each year that are being siphoned off to criminals, and not only are marketers seeing lower engagement and reach on their campaigns because of it, but the prevalence of ad fraud is also enabling a much bigger problem: the spread of disinformation.
In case you missed it, Facebook, Twitter and Google executives recently testified at a congressional hearing regarding Russian influence in the 2016 elections, specifically as it relates to the advertisements and disinformation being circulated on those platforms. You may have heard about some of the fabricated Twitter accounts that were used to gain influence. Or some of the fake Facebook pages that were used to spread disinformation. Here are some of the latest stats:
- Facebook: Facebook estimates roughly 270 million fake accounts on their platform. Facebook also estimates that as many as 126 million Americans were exposed to Russian-backed election content.
- Twitter: In March, a study identified some 48 million fake accounts on Twitter. Last week, Twitter announced that Russia-linked accounts “generated approximately 1.4 million automated, election-related tweets, which collectively received approximately 288 million impressions” last year from September 1 to November 15.
- Instagram: Recent estimates indicate that approximately 8% of Instagram is made up of fake accounts. According to this TechCrunch article, 120,000 Instagram posts by Russian election attackers reached 20 million Americans.
In short, the spread of disinformation relies on the very tactics that marketers use to promote their ideas and products. But the system is being reverse engineered by criminals who have a bigger economic incentive to turn those tools against us.
What Marketers Can Do About Fake Followers
Because bots are using the mechanisms of targeting against us, actively playing back against us with our own tools, fraud is becoming harder to detect. But there are many things you can do as a marketer to protect your advertising spend. WhiteOps research generally recommends the following tips:
- Demand transparency from third party vendors and data providers
- Learn how to spot fake accounts and disinformation by becoming more digitally literate
- Maintain strong ties with your IT department or professionals to better detect and respond to cyber crime
- Monitor campaigns and sourced traffic for suspicious activity
- Support the Trustworthy Accountability Group (TAG), a joint marketing-media industry program designed to eradicate digital advertising fraud
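To make the "monitor campaigns and sourced traffic" tip concrete, here is a minimal sketch of one basic screen: flagging IP addresses whose click volume on a single campaign is implausibly high. The log format and threshold are assumptions for illustration; real fraud detection (as WhiteOps' work makes clear) is far more sophisticated, precisely because bots piggyback on real human machines:

```python
# A toy campaign-log screen: flag any IP with an implausible click count.
# The log structure and the threshold are hypothetical, for illustration only.
from collections import Counter

def flag_suspicious_ips(click_log, max_clicks_per_ip=20):
    """Return the set of IPs whose click volume exceeds the threshold."""
    counts = Counter(entry["ip"] for entry in click_log)
    return {ip for ip, n in counts.items() if n > max_clicks_per_ip}

log = (
    [{"ip": "203.0.113.7"}] * 50                       # one address hammering the ad
    + [{"ip": f"198.51.100.{i}"} for i in range(30)]   # organic-looking spread
)
print(flag_suspicious_ips(log))  # → {'203.0.113.7'}
```

A check like this only catches the clumsiest traffic; distributed botnets spread their clicks across thousands of residential machines, which is why the transparency and vendor-vetting tips above matter just as much as in-house monitoring.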
Luckily, there are many resources available for how you can get involved in fighting this crime.
- Botlab – Botlab is a non-profit, volunteer-based research foundation focused on research, publication, and open-source development related to ad fraud, malvertising, privacy violations, and other malicious practices on the advertising-supported internet.
- Trustworthy Accountability Group – Trustworthy Accountability Group (TAG) is a first-of-its-kind cross-industry accountability program to create transparency in the business relationships and transactions that undergird the digital ad industry, while continuing to enable innovation.
- WhiteOps – A cybersecurity company that protects digital advertisers and web app owners from automated threats like ad fraud, account stuffing, and fake engagement. Its customers include some of the largest and most forward-thinking companies on the web.
I encourage you to check out the above-listed resources for more information on this topic.