If you spend any amount of time on the internet, you might’ve seen or heard about bots. Perhaps you’ve even interacted with one.
Short for robots, bots are pieces of software designed to create content and interact with people on social media. They are either partially or fully automated, and in recent years, they’ve risen to public consciousness as part of our growing conversations about misinformation and disinformation.
Unlike the chatbots we encounter more often in customer service roles, bots on social media are often cheaper and less complicated to manage. Where a single chatbot might need a dedicated person or team to develop and maintain it, social media bots can be managed in the hundreds or thousands by just one person, who can then deploy them across various issues — whether that’s climate change or Covid-19.
People have sounded the alarm on bots and their harmful effects on political discourse, but just how many bots are out there? The answer depends on who you ask.
Twitter estimates that around 5% of its 300 million user base are bot accounts. But other studies, like a 2017 report by the University of Southern California and Indiana University, place this figure between 9% and 15%. That translates to up to 48 million accounts — and the researchers describe this as a conservative estimate.
Not All Bots Are Made Equal
For starters, it’s important to note that not all bots are bad. Some are actually quite handy, like @WhatTheFare, a Twitter bot that helps users look up the Uber fare between specific pick-up and drop-off points, and @EarthquakeBot, which alerts people in real time to earthquakes of at least 5.0 on the Richter scale.
Meanwhile, @NYPDedits logs Wikipedia edits made by users with IP addresses in the New York Police Department. It was created in response to reports that edits to pages about police brutality and its victims, like Eric Garner and Amadou Diallo, were coming from 1 Police Plaza. These edits included the erasure of information about police misconduct, as well as insidious rewording of real events. For example, “Garner raised both his arms in the air” was edited from an NYPD IP address to read, “Garner flailed his arms about as he spoke.”
Still others can be quite refreshing to have on your feed, like @MuseumBot, which posts images from the Metropolitan Museum of Art four times a day, and @tinycarebot, which reminds users to take small breaks for self-care every now and then.
However, it’s the bots that pretend to be human — often called “bad bots” — that we should worry about. These are frequently used for political ends: spreading propaganda, inflating politicians’ follower counts, attacking political rivals, and hijacking their conversations.
The use of bad bots for politics has been reported across the globe — from the Peñabots in Mexico (named after former president Enrique Peña Nieto) to the StrongerIn-Brexit conversations, where just 1% of accounts generated one-third of all tweets related to Brexit. In the US, around one in four tweets about the first presidential debate in 2016 were made by bots.
Outside of politics, bots have also been used as part of coordinated disinformation campaigns surrounding issues like the anti-vax movement. They’ve also been used to influence stock and financial markets.
Interestingly, there are some areas of the world where bots are not very popular among those looking to control online spaces. For example, chief architects of networked disinformation in the Philippines are wary of using them — relying instead on real-life writers who are more knowledgeable about local vernacular and are more creative.
How Social Media Bots Work
When used in high numbers, bots can generate buzz around a person, product, or issue, and push a specific point of view.
Bots as Message Amplifiers
Bots are often programmed to retweet questionable, or low-credibility, articles within seconds of these articles being posted. This was common during the 2016 US presidential election, as social bots worked to make certain pieces of content appear more popular. Among low-credibility content sources, roughly one in three of the top sharers are bots, a far higher proportion than among sharers of fact-checked content.
By creating the illusion that a particular story or source is popular, bots and those who wield them encourage actual humans to trust the source and share the post. It’s this momentum-building effect that helps make fake news so compelling to real people: The more often we see a message, the more likely we are to think it’s true.
University of Southern California’s Dr. Emilio Ferrara, however, argues that this tendency can also be harnessed to spread positive messages and behavior. In his team’s study of Twitter bots for good, people exposed more often to positive hashtags about health tips and fun activities were more likely to adopt those positive behaviors.
Either way, a key function fulfilled by bots is to provide a baseline buzz from which messages can go viral. “Once enough accounts are tweeting about the same thing, that creates buzz,” says Terry College of Business’s Carolina Salge. “And organizations really respond to buzz.”
Aside from sharing content on their own accounts, bots also tend to target accounts with many followers — either by mentioning them in their own tweets about low-credibility content or replying to that person’s tweets with links to that article. This way, followers of verified or popular accounts might see a bot’s tweet, or the accounts themselves might retweet the bot.
Bots as Content Polluters
Aside from promoting content from low-credibility sources, bots also make a lot of noise on their own to create new divides, worsen existing divides, or hijack movements from the opposite side of a divide.
One example, explored by researchers from George Washington University, the University of Maryland, and Johns Hopkins University, is the issue of vaccines. “The vast majority of Americans believe vaccines are safe and effective,” George Washington University’s David Broniatowski explained back in 2018. “But looking at Twitter gives the impression that there is a lot of debate.”
Their study found that bots posted anti-vaccination messages as much as 75% more than the average Twitter user, making up a huge chunk of online discourse at the time. Though the long-term effects of this campaign — especially in today’s pandemic — are yet to be explored, it’s clear that bots often use topics like vaccination as a wedge to erode public trust in key institutions.
Similarly, in the weeks leading up to the Catalan independence referendum in 2017, bots were used to bombard influential Twitter users on both sides of the debate with violent, inflammatory content. The goal, it seemed, was to worsen existing political divides and boost feelings of alarmism and fear both during and after the referendum.
Another common way that bots “pollute” our information ecosystem with inauthentic behavior is through hashtag hijacking, or when bots hired by an individual or organization co-opt their opponents’ hashtags with spam. They then also report their opponents’ legitimate content so that their posts might get removed from the platform. Through this, the original message of the hashtag gets lost in the noise, and the opponents of a bot’s client would have a harder time organizing online.
It’s worth noting, however, that this technique isn’t just used by bots. Large groups of people can hijack hashtags too — and sometimes, for good reason, as in the case of K-Pop fans who fought against racism and drowned out the #WhiteLivesMatter hashtag.
Fake News, Real World
Because bots are designed to mimic people, it can be hard to tell which account is a bot and which one is not. And just like the people who make them, bots can also be good or bad.
Factor in the sheer number of tweets and posts made every minute, as well as bot makers’ ability to react to the measures platforms take against them, and you can see why University College London’s Juan Guzman describes bot detection as a “cat-and-mouse game.”
“Every time we identify a characteristic we think is prerogative of human behavior, such as sentiment or topics of interest, we soon discover that newly-developed open-source bots can now capture those aspects,” adds Dr. Ferrara.
To help keep people from falling for common bot tactics, studies have pointed to the effectiveness of flagging tweets from suspicious accounts. Meanwhile, organizations like Quartz have put up bots like @probabot_, developed specifically to identify other bots masquerading as humans.
For its part, Twitter encourages users to report suspicious behavior so it can better improve its measures against platform manipulation.
So How Can You Tell?
Though most Americans are aware of bots and the threat they present, less than half of those who know about them are confident that they can spot them. But there are some tell-tale signs you can watch out for across different social media platforms.
Look at Their Profile
If the profile was created very recently and has a long username containing numbers plus an empty bio, it’s very likely a bot. Having no picture, or a picture that doesn’t show a face, is another red flag.
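These profile signals can be sketched as a simple scoring heuristic. This is purely illustrative: the thresholds (90 days, 12 characters) and the field names are assumptions, not values from any real detection system.

```python
from datetime import datetime, timezone
import re

def profile_red_flags(created_at: datetime, username: str,
                      bio: str, has_face_photo: bool) -> int:
    """Count simple profile-based red flags; higher = more bot-like.
    All thresholds are illustrative, not validated."""
    flags = 0
    # Very new account (created within the last 90 days)
    if (datetime.now(timezone.utc) - created_at).days < 90:
        flags += 1
    # Long username that contains digits, e.g. "jane84729301"
    if len(username) > 12 and re.search(r"\d", username):
        flags += 1
    # Empty bio
    if not bio.strip():
        flags += 1
    # No picture, or a picture without a face
    if not has_face_photo:
        flags += 1
    return flags
```

A real classifier would weight such signals against labeled data rather than count them equally, but the idea is the same: no single trait proves anything, while several together raise suspicion.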
Look at Their Network
For less sophisticated bots, one tell-tale sign would be their friend or follower network. Bots tend to follow other bots, while humans tend to follow other humans. Often, bots also have high following counts and very low follower counts.
As bots grow, however, this technique may not be as useful. Older and more sophisticated bots have been found to build entire social networks that closely resemble real humans’ networks.
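The follower-to-following imbalance described above can likewise be expressed as a one-line check. The ratio and the minimum-activity floor here are hypothetical choices for illustration only; as the paragraph above notes, sophisticated bots defeat this kind of test.

```python
def network_red_flag(following: int, followers: int,
                     min_following: int = 1000) -> bool:
    """Flag accounts that follow many users but are followed by few.
    The 20:1 ratio and the 1,000-account floor are assumptions."""
    if following < min_following:
        return False  # too small a network to judge
    # Bot-like: followers far below what the following count would suggest
    return followers < following / 20
```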
Look at Their Account Activity
When thinking about whether an account you’re interacting with is a bot or not, look at how often they tweet or post. A lot of posts or retweets in a short amount of time is one clue, especially if all their tweets are about the same thing, with the same hashtags, over and over. Moreover, humans tend to tweet less and scroll more towards the end of their online sessions, a behavior that bots don’t tend to have.
Other studies have also found that humans tend to write more positive messages than bots, and tend to change their sentiments about topics over time. Bots, which are deployed to push particular messages, don’t.
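The activity signals described above — bursts of posts in a short window and the same hashtags repeated over and over — can be sketched the same way. The inputs (post timestamps in hours, hashtags per post) and both thresholds are hypothetical.

```python
from collections import Counter

def activity_red_flags(timestamps_hours: list[float],
                       hashtags_per_post: list[list[str]]) -> int:
    """Count activity-based red flags from a sample of recent posts.
    Thresholds (20 posts/hour, 80% hashtag repetition) are assumptions."""
    flags = 0
    # Burst posting: more than 20 posts within a single hour
    span = max(timestamps_hours) - min(timestamps_hours)
    if len(timestamps_hours) > 20 and span <= 1.0:
        flags += 1
    # Repetitive content: one hashtag appearing in most posts
    counts = Counter(tag for tags in hashtags_per_post for tag in tags)
    if counts and counts.most_common(1)[0][1] >= 0.8 * len(hashtags_per_post):
        flags += 1
    return flags
```

Real research systems such as Botometer combine hundreds of features like these with machine learning; this sketch only shows the shape of the reasoning.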
A Word of Caution
Bad bots can be, well, very bad, and the tips above can help you discern whether you’re talking to one and whether it’s time to report it. But researchers also caution against assuming that anyone whose political views clash with yours must be a bot.
For instance, in the aftermath of the 2016 elections, Twitter saw an uptick in people accusing each other of being bots — when they were, in fact, real people. These false accusations are not only symptomatic of a larger problem of hostility on social media, where it’s becoming normal to insult and dehumanize others. Crucially, they also help actual bots hide in plain sight.