“All war is based on deception,” wrote Sun Tzu, the ancient Chinese military strategist, back in the 5th century BC. Though buzzwords like “fake news” are fairly new, the phenomenon they describe isn’t.
In fact, it can be traced as far back as ancient Rome, when military leader Octavian spread misleading information about his political rival, Mark Antony. Through poetry and short slogans printed on coins, as well as a questionable last will and testament supposedly penned by Antony, Octavian convinced people that Antony was a traitor to Rome on top of being a womanizer and a drunk, and therefore unfit to rule. By winning people over this way, Octavian went on to win the civil war and become Rome's first emperor.
History is rife with similar stories of fake news, and as modern communication technologies continue to transform the manner and scale at which we send and receive information, it's easy to see why. The right or wrong information can influence elections, pandemic management, and our collective sanity and mental health, which makes one thing clear: We are in an information war.
Much of the conversation on the topic revolves around two terms, misinformation and disinformation, which are often used interchangeably. But the first step to arming ourselves for the information war is knowing exactly what we are up against: It's important to start with studying what's being shared, why it's being shared, and how.
Misinformation and Disinformation, Defined
Misinformation was Dictionary.com’s 2018 word of the year, and it’s defined as misleading information that is created and shared without the intent to deceive. This last point is important, as it makes it distinct from disinformation, which involves deliberately creating and sharing false information to confuse and manipulate others.
Though the two are different in this crucial way, they can amplify one another. For example, if you've come across questionable posts claiming the coronavirus is a Chinese bioweapon on your newsfeed, a friend or family member of yours may have unknowingly fallen prey to a sophisticated disinformation campaign. One study of a coronavirus-related disinformation campaign on Twitter identified a network of at least 2,903 accounts, retweeting each other 4,125 times. Disinformation easily turns into misinformation when real people believe and share false material spread by these automated accounts, more commonly known as bots.
Different Levels of Intent to Deceive
But even with this key distinction between mis- and disinformation, the practices related to them can still be a bit confusing. This is why it’s helpful to think of problematic information as a range of different practices, explains researcher Claire Wardle in collaboration with Global Voices. The typology she developed places seven key practices on a scale, arranged below according to how strong the intent to deceive or manipulate is:
- Satire or parody posts can get lost in translation and accidentally fool readers. Sure, it can be funny (there's even an entire subreddit devoted to people who've fallen victim to The Onion's specific brand of satire!), but it can also be a little worrying. A 2017 study by researchers at Ohio State University found that satirical news reinforces people's existing beliefs, regardless of the truth.
- False connection is when headlines, images, or captions don’t match or support a piece of content online. Like satire and parody, this can result in a few chuckles, but it can also be downright dangerous. For instance, childcare professionals have raised the alarm on popular images of babies asleep next to stuffed animals and blankets. These are often used as feature images in parenting articles, but replicating this image in real life can actually be deadly for infants, who might get strangled, trapped, or suffocated by the picturesque props — something that parenting websites definitely do not want to promote.
- Misleading content is one of the trickier forms of problematic information, because its creators take a grain of truth and mix it with lies in order to frame an issue or individual. For example, when conservatives accused Black Lives Matter co-founder Patrisse Khan-Cullors of using BLM funds to buy a $1.4 million home, they jumped on a post about her recent home purchase. They completely ignored the fact that she has her own separate career, and that the organization doesn't pay her nearly enough to afford the home.
- False context is when people share real photos or stories, but with wrong contextual information attached. A common example is photos of storms from years ago being recirculated whenever a new hurricane hits a big city.
- Impostor content involves people impersonating genuine sources, whether it’s accounts posing as celebrities and politicians, or very convincing duplicates of real news websites.
- Manipulated content is when images or data are doctored or amended to convey an entirely different meaning. Throughout history, there have been plenty of famous doctored or photoshopped images. Though many people might think they'd know a fake image when they see one, a 2017 study by the University of Warwick found that humans, in general, aren't very good at spotting them.
- Fabricated content involves information that is completely false, with no basis in reality. This is created purely to deceive and harm others. For example, false information about COVID-19 has led to hundreds dead and thousands hospitalized.
The Why and the How
Now that we have a clearer idea of what we’re up against, the next questions to ask are why and how.
Examining the Why
Given the political nature of the most notorious instances of mis- and dis-information, two strong motivators for their creation and dissemination are political influence and propaganda. From the example of Octavian all the way to today’s governments, it’s easy to see why political gain plays a role in problematic information.
But politics isn’t all there is to fake news, as some turn to mis- and disinformation practices for economic gain. For instance, Gwyneth Paltrow’s infamous lifestyle brand Goop has come under fire time and again for dangerous and dubious medical claims made to sell its products.
Others, meanwhile, may simply make errors when reporting events and facts. These can come in the form of factual mistakes, or writers may feel pressure to dumb down their stories or create clickbait articles that distort the truth.
Still others might engage in these practices to provoke and punk others for a good laugh.
And Finally, the How
You’ve probably seen an uncle, an old classmate, or a random acquaintance fall victim to disinformation and share fabricated content. Or, you may have made a mistake when posting something yourself. But there are other ways mis- and disinformation can thrive in our public spaces.
Behind the scenes, overworked journalists and writers may also be under pressure to churn out content to match the relentless pace of social media reporting. Making sense of complex issues and events takes time, and contexts and details may sometimes be sacrificed.
In other cases, small groups may be pushing certain messages through problematic information to sway public opinion. And most worryingly, some cases may be part of a more sophisticated disinformation campaign, built on troll factories and bot networks. When false and problematic information is repeated over and over, it becomes easy to accept it as true.
The Bottom Line
Given all this, it’s important to recognize that the problem of fake news is more than just a technologically challenged uncle or a mysterious foreign organization with a questionable web presence. It’s easy to blame them and call it a day. But the truth is, the problem of fake news is a complex issue that’s infected our information ecosystems, and it calls for more than just campaigns for media literacy.
This isn’t to say, of course, that campaigns to help people recognize mis- and disinformation are not important. They are. But we have to have more difficult conversations, too.
We need to talk about how news and features are produced, both in terms of ownership (and, therefore, implicit motivations), as well as the precarious working conditions of those whose vital job it is to tell us the truth.
We have to talk about the responsibility (and complicity) of private platforms that control public access to information and hold so much data about us and our daily lives.
And we need to talk about the role of governments in regulating these platforms — that is, when they’re not using fake news laws to silence journalists.
The fight against mis- and dis-information is complex, and the road ahead in the information war is far from easy. But it is a necessary fight and one that we must do our best to win.