What’s the difference between disinformation and misinformation?
(I get asked this question a lot, so I thought it was time to write about it).
The difference between disinformation and misinformation lies in the presence of intent.
The Difference Between Disinformation and Misinformation
Let’s unpack each term to understand the difference between disinformation and misinformation.
Misinformation encompasses a wide range of misleading content, from rumour to satire to human error. The crucial point is that the term implies no intent: people spread misinformation without meaning to deceive.
Disinformation, on the other hand, has a more specific and sinister meaning. Disinformation is created with an intention to deceive.
For example, a disinformation campaign could involve a doctored video. It could consist of a political candidate’s gaffes, spliced together to imply that he has dementia.
Malinformation and Fake News
We also have malinformation. This is information based on reality, used maliciously to harm its target.
The target could be an individual, a country or an organization. The 2016 leak of Hillary Clinton’s private emails is one such example of malinformation.
Finally, the one everyone knows best: fake news.
Donald Trump popularized the term in 2016 (although Hillary Clinton was apparently the first to use it).
However, disinformation researchers prefer not to use it, as it is both politicised and imprecise. ‘Fake news’ unhelpfully conflates the three primary types of misleading information that I’ve already mentioned.
New digital technology means that misleading online content shifts and evolves rapidly. There are other subcategories of dubious online content worth knowing about. Here’s a rundown.
Other Misleading Information
Satire has existed for thousands of years. It highlights failings in society and politics using humour, irony, sarcasm or mockery.
Shakespeare often used satire in his work. In Hamlet, for example, he suggested that royalty is not always fit to rule a country properly. In the internet age, satirical news websites such as The Onion or NewsThump have become popular.
But it’s not always clear where satire ends and disinformation begins. It’s easy for political news websites to spread divisive narratives under the guise of satire. The limited attention spans of most online news audiences make it even easier.
In fact, many social media users will share partisan political content that triggers their tribal instincts. They may not notice that it comes from a website that has a ‘satire’ disclaimer.
Images have a more immediate impact than text, making them an effective tool in disinformation. Manipulation of images is easy with freely available online tools, or Photoshop, and can look very convincing.
Genuine images can be used in misleading contexts, such as during the Westminster Bridge terrorist attack of 2017.
In this case, a Twitter account (later attributed to the Kremlin) shared an image of a woman wearing a hijab. The tweet included a hostile caption claiming that the woman was ignoring injured victims.
The tweet was designed to trigger anti-Muslim sentiment, and numerous far-right influencers shared it. It garnered thousands of retweets and likes. But although the image might have been genuine, the context was not.
As one of the newest forms of disinformation, deepfakes have attracted a lot of hype in the last couple of years. They are AI-generated synthetic media in which one person’s face and/or voice replaces the face or voice of another.
A deepfake can make it look like a person is saying something they’re not. This has many obvious use cases for disinformation. So far, deepfakes have mainly been used in porn. But, in a handful of cases, they’ve also played a role in disinformation efforts.
We may have overstated the immediate risk of deepfakes, but they do have the potential to upend the information environment. My biggest concern is that deepfakes would destroy the notion of absolute truth.
Once upon a time, a taped voice recording would hold up in court (e.g. Watergate). In later years, CCTV footage served the same role. But a world in which deepfakes are prevalent would no longer have an absolute truth. It would cast doubt on every possible form of recorded evidence.
Shallowfakes and Cheapfakes
In addition to deepfakes, we need to consider shallowfakes, also known as ‘cheapfakes’. People create these doctored videos without the help of fancy AI tools, using only simple video editing software.
Shallowfakes are far more common than their AI-generated cousins. And social media platforms seem to overlook them. Facebook, for example, only bans doctored videos made with AI, i.e. deepfakes.
In February 2020, shallowfakes caused quite a stir. A video circulated on social media showing Nancy Pelosi shredding a copy of Donald Trump’s speech during his State of the Union address.
The word ‘meme’ has become synonymous with random humorous cultural images superimposed with chunky white text. Below, a small selection of my recent COVID-19-related favourites.
In fact, the word ‘meme’ can refer to any piece of cultural content (video, image, song, article, etc) that social media users spread virally. (That typical chunky text/image combo that we all call a meme is actually known as an ‘image macro’).
Meme creators often use the image macro format to convey partisan political sentiments. Both sides of the political spectrum shared inflammatory memes during the 2016 US presidential campaign.
Far-right supporters also used the same format to spread some highly offensive views, such as racism and anti-semitism in ‘Pepe the Frog’ memes.
Image macro political memes are generally hyper-partisan in nature and play a role in perpetuating conflict between opposing groups (e.g. Democrats and Republicans).
Memes are devoid of any nuanced explanation. The viewer simply gets a quick, triggering hit of emotion, tapping into the oversimplified outrage that has become a core feature of today’s political life online.
This last category is a bit weird and random. During a recent project for YouTube, I discovered some strange videos spreading false information about COVID-19.
The voiceover didn’t sound human at all: it was robotic and monotone, as if a text-to-speech program were reading a script. I don’t know their official name, if they have one at all, but they’re perhaps something to keep an eye on.
From Disinformation to Misinformation (and back again?)
In closing, I’ve been thinking about this question: Does content shift from disinformation to misinformation as it travels across the internet?
Malicious intent defines disinformation. Could a piece of content shift between definitions according to the intention of the most recent individual who shared it?
For example, a person shares a narrative in their local Facebook group, claiming COVID-19 is curable with megadoses of bleach. It’s been debunked, of course, but (somehow) the person doesn’t know.
They innocently think they’re helping their network by passing on ‘valuable’ health information that might help cure the virus. They don’t intend to deceive. So shouldn’t we class it as misinformation?
Let’s say that same piece of content originated in a troll farm. Its creators intended it to deceive populations and compromise public health efforts. It started life as disinformation.
We could say the same for conspiracy theories. These are often spread by ‘true believers’ – genuinely invested in their mission to enlighten the ‘sheeple’ and save the world.
Are they being malicious if they believe it’s all true? Does that still count as disinformation? It would be easier to make this distinction if we could reliably trace the content back to its source. But that’s not always easy to do.
Those who create disinformation know how to take advantage of natural human biases and triggers. In many cases, it’s enough to simply ‘seed’ harmful disinformation into the social media stream. Ordinary social media users will then do the heavy lifting. And therein lies much of the danger.