You have probably heard that bots account for approximately 45% of all web traffic. Many are “good bots” helping the web function: industrious worker-bees automating tasks such as updating sports scores and weather.
But increasingly, many of those bots are bad bots.
They are part of a global ad-fraud economy costing $23 billion a year, draining advertising budgets and distorting analytics data. Fraudsters use bad bots to carry out account hijacking, web scraping, financial data theft, and distributed denial-of-service (DDoS) attacks.
Marketing spend on social media platforms reached $107 billion in 2019, accounting for 17% of global ad spend. This is a huge surface for bot-runners.
In this post we look at the growing challenge of bots on social media: how are they affecting the ROI of your campaigns, and what are the big ad networks doing to combat the problem?
In cost-per-click (CPC) or pay-per-click (PPC) advertising, click fraud occurs when an algorithm imitates a normal user and clicks on an ad with no intent to buy and no interest in what the ad is actually about.
The first step to protecting your business from bot-driven click fraud is learning to recognize it. Click fraud varies in nature and method of execution, but two common warning signs are unusually high click-through rates that never convert, and repeated clicks arriving from the same IP addresses.
There are a number of ways to monitor and prevent bots from clicking on your ads.
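One simple monitoring approach is to scan your own click logs for repetitive patterns. The sketch below is a minimal, illustrative heuristic, assuming a hypothetical log of `(timestamp, ip, ad_id)` records; real detection systems use many more signals (user agents, conversion data, device fingerprints).

```python
from collections import Counter

# Hypothetical click-log records: (timestamp, source IP, ad ID).
clicks = [
    ("2024-05-01T10:00:01", "203.0.113.7", "ad-42"),
    ("2024-05-01T10:00:03", "203.0.113.7", "ad-42"),
    ("2024-05-01T10:00:05", "203.0.113.7", "ad-42"),
    ("2024-05-01T10:02:11", "198.51.100.9", "ad-42"),
]

def flag_suspicious_ips(clicks, threshold=3):
    """Flag IPs whose click count on any single ad meets the threshold.

    A crude heuristic: real users rarely click the same ad many times,
    so heavily repeated (ip, ad) pairs are worth investigating.
    """
    counts = Counter((ip, ad) for _, ip, ad in clicks)
    return sorted({ip for (ip, ad), n in counts.items() if n >= threshold})

print(flag_suspicious_ips(clicks))  # ['203.0.113.7']
```

Flagged IPs can then be added to your ad platform's exclusion lists or investigated further before you adjust spend.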
Of all the social media platforms, Facebook is the big beast, with marketing spend on the social media giant reaching $70 billion each year.
In the past year alone, Facebook has been busy against bots: taking enforcement action against app developers accused of generating fraudulent ad revenue, against fraudsters who tricked people into installing ad-manipulating malware, and against services running deceptive ads.
There is also the problem of marketers who see bots click on their ads, and then pour more money into chasing those same bots.
Facebook lookalike audiences let you reach large numbers of people who share characteristics with your existing customers (33% of marketers use retargeting). In many cases, however, retargeting amounts to throwing good money after bad: re-serving ads to the bots that engaged with your Facebook ads the first time.
In one case, in 2019, Twitter’s ad platform was exploited for months by app fraudsters. Twitter’s platform manipulation policy, revised in September 2019, sets out prohibited behavior including “coordinated activity, that attempts to artificially influence conversations through the use of multiple accounts, fake accounts, automation and/or scripting.”
Even LinkedIn, which is far less associated with political controversy when it comes to bots, has its share of automated activity, bots, and fake profiles.
In one case, a fake LinkedIn profile claiming to be on the board of Sequoia Capital attempted to carry out phishing attacks, and automation tools are in widespread use on the platform. In another example, Ryan Gellis of global digital ad agency RMG said that when trying to promote a webinar to a C-suite audience, he saw a number of bot-looking clicks that essentially wasted the campaign’s $500 daily spend.
“Nobody exhibited what I believe is real user behavior on the site after clicking the ad,” Gellis said.
Reviewing the activity logs, he found that bot activity or misattributed clicks had been the cause: visitors bounced away from the webinar’s landing page in under 1.3 seconds, before it even had a chance to render. Like the other platforms, LinkedIn is seeking to mitigate the problem, stating that they, “prohibit the use of bots or other automated fraudulent methods to access our services, as they are in violation of the User Agreement.”
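Gellis’s observation suggests a simple heuristic you can apply to your own analytics exports: flag ad-driven sessions that leave the landing page faster than it could plausibly render. The sketch below is illustrative only, using hypothetical session records and the 1.3-second bounce time cited in his account as the cutoff.

```python
# Hypothetical session records: (session_id, seconds_on_page, came_from_ad).
sessions = [
    ("s1", 0.8, True),   # bounced almost instantly after an ad click
    ("s2", 45.0, True),  # plausible human engagement
    ("s3", 1.1, True),   # bounced before the page could render
    ("s4", 30.2, False), # organic visit, not from an ad
]

# Cutoff drawn from the sub-1.3-second bounces Gellis observed in his logs.
BOUNCE_THRESHOLD_SECONDS = 1.3

def likely_bot_sessions(sessions):
    """Return ad-click sessions that bounced before the page could render."""
    return [sid for sid, secs, from_ad in sessions
            if from_ad and secs < BOUNCE_THRESHOLD_SECONDS]

print(likely_bot_sessions(sessions))  # ['s1', 's3']
```

On its own, a fast bounce proves nothing about any single visitor, but a high proportion of such sessions across an ad campaign is a strong signal that spend is going to bots rather than people.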
Socialbakers CEO Yuval Ben-Itzhak is among those who have talked about the challenges of potential fakery on platforms such as Instagram, and particularly the issues of influencer fraud.
“Authenticity is increasingly pervasive, and fake accounts which lead to inflated reach and performance numbers aren’t helping brands or influencers to engage in real conversations with their audiences,” Ben-Itzhak said.
Socialbakers’ platform features instant fraud detection on influencer accounts, and Instagram has ramped up its own efforts against bots, asking users to confirm their identities when it detects patterns of potentially inauthentic behavior.
Platforms are taking significant action, including revising guidance and taking fraudsters to court, in a bid to eliminate the harm that bots can wreak on campaigns. This is even more pressing at a time when social media-fueled quality leads are vital for revenue growth.
Unfortunately, the motivation, sophistication, ferocity, and returns of bot-driven fraud have only increased during the pandemic. Former Unilever CMO Keith Weed, who was responsible for the second largest advertising budget in the world, put it bluntly in 2019: “We want to know that real people, not robots, are enjoying our ads – bots don’t eat a lot of Ben & Jerry’s”.
Marketers need to be constantly vigilant against the bots we now share the internet with.