
What’s the Difference Between Disinformation and Misinformation?


(I get asked this question a lot, so I thought it was time to write about it.)

The difference between disinformation and misinformation lies in the presence of intent.  

The Difference Between Disinformation and Misinformation

Let’s look at the two terms more closely to understand the difference between disinformation and misinformation.

Misinformation encompasses a wide range of misleading content, from rumour to satire to human error. Crucially, though, the term implies nothing about intent.

Disinformation, on the other hand, has a more specific and sinister meaning. Disinformation is created with an intention to deceive.

For example, a disinformation campaign could involve a doctored video: a political candidate’s gaffes spliced together to imply that he has dementia.

Malinformation and Fake News

We also have malinformation. This is information based on reality, used maliciously to harm its target.

The target could be an individual, a country or an organisation. The 2016 leak of Hillary Clinton’s private emails is one such example of malinformation.

Finally, the one everyone knows best: fake news.

Donald Trump popularised the term in 2016 (although Hillary Clinton was apparently the first to use it).

However, disinformation researchers prefer not to use it, as it is both politicised and imprecise. ‘Fake news’ unhelpfully conflates the three primary types of misleading information already mentioned.  

New digital technology means that misleading online content shifts and evolves rapidly. There are other subcategories of dubious online content worth knowing about. Here’s a rundown.

Other Misleading Information

Satire

Satire has existed for thousands of years. It highlights failings in society and politics using humour, irony, sarcasm or mockery.

Shakespeare often used satire in his work; in Hamlet, for example, he points out that royalty is not always fit to run a country properly. Now, in the internet age, satirical news websites such as The Onion and NewsThump have become popular.

But it’s not always clear where satire ends and disinformation begins. It’s easy for political news websites to spread divisive narratives under the guise of satire. The limited attention spans of most online news audiences make it even easier.

In fact, many social media users will share partisan political content that triggers their tribal instincts. They may not notice that it comes from a website that has a ‘satire’ disclaimer. 

Manipulated Images  

Images have a more immediate impact than text, making them an effective tool in disinformation. Manipulation of images is easy with freely available online tools, or Photoshop, and can look very convincing.

Genuine images can be used in misleading contexts, such as during the Westminster Bridge terrorist attack of 2017.

In this case, a Twitter account later attributed to the Kremlin shared an image of a woman wearing a hijab. The tweet included a hostile caption claiming that the woman was ignoring injured victims.

The tweet was designed to trigger anti-Muslim sentiment, and alt-right influencers duly shared it. It garnered thousands of retweets and likes. But although the image was genuine, the context was not.

Deepfakes

As a new form of disinformation, deepfakes have attracted a lot of hype in the last couple of years. These AI-generated videos are a type of synthetic media in which one person’s face and/or voice replaces that of another.

A deepfake can make it look like a person is saying something they’re not. This has many obvious use cases for disinformation. So far, porn has been the main area where deepfakes are being used. But in a handful of cases they’ve played a role in disinformation efforts.  

We may have overstated the immediate risk of deepfakes. But they do have potential to upend the information environment. My biggest concern is that deepfakes would destroy the notion of absolute truth.

Once upon a time a taped voice recording would hold up in court (e.g. Watergate). In later years, CCTV footage became the absolute truth. But a world in which deepfakes are prevalent would no longer have an absolute truth. It would cast doubt on every possible form of recorded evidence.

Shallowfakes and Cheapfakes

In addition to deepfakes, we need to consider shallowfakes, also known as ‘cheapfakes’. People create these doctored videos without the help of fancy AI tools, using simple video editing software.

Shallowfakes are far more common than their AI-generated cousins. And social media platforms seem to overlook them. Facebook, for example, only bans doctored videos made with AI, i.e. deepfakes.

In February 2020, a shallowfake caused quite a stir. A video circulated on social media showing Nancy Pelosi shredding a copy of Donald Trump’s speech during his State of the Union address, edited so that she appeared to tear it up while he was honouring audience members. In reality, she tore it up only after the speech had ended.

Memes

The word ‘meme’ has become synonymous with random humorous cultural images superimposed with chunky white text. Below, a small selection of my recent COVID-19-related favourites.

[Image: ‘distracted boyfriend’ anti-vaxxer disinformation meme]
[Image: Hannibal Lecter in a mask COVID-19 meme]
[Image: meme about failed plans in 2020]

In fact, the word ‘meme’ can refer to any piece of cultural content (video, image, song, article, etc) that social media users spread virally. (That typical chunky text/image combo that we all call a meme is actually known as an ‘image macro’).

Meme creators often use the image macro format to convey partisan political sentiments. Both sides of the political spectrum shared inflammatory memes during the 2016 US presidential campaign.

Alt-right supporters also used the same format to spread some highly offensive views, such as racism and anti-semitism in ‘Pepe the Frog’ memes.

Image macro political memes are generally hyper-partisan in nature and play a role in perpetuating conflict between opposing groups (e.g. Democrats and Republicans).

[Image: image macro meme of Hillary Clinton disinformation]

Memes are totally devoid of any nuanced explanation. The viewer gets a quick hit of emotion that has a triggering effect. This taps into an oversimplified outrage that has become a core feature of today’s political life online. 

AI-Generated Voiceovers 

These are a bit weird and random. During a recent project for YouTube, I discovered some strange videos spreading false information about COVID-19.

The voiceover didn’t sound human at all. It was robotic and monotone, as if reading from a script. I don’t know the official name for these videos, if they have one at all, but they’re perhaps something to keep an eye on.

From Disinformation to Misinformation (and back again?)

In closing, I’ve been thinking about this question: Does content shift from disinformation to misinformation as it travels across the internet? 

Malicious intent defines disinformation. Could a piece of content shift between definitions according to the intention of the most recent individual who shared it?  

For example, a person shares a narrative in their local Facebook group, claiming COVID-19 is curable with megadoses of bleach. It’s been debunked, of course, but (somehow) the person doesn’t know.

They innocently think they’re helping their network by passing on ‘valuable’ health information that might help cure the virus. They don’t intend to deceive. So shouldn’t we class it as misinformation?

Let’s say that same piece of content originated in a troll farm. Its creators intended it to deceive populations and compromise public health efforts. It started life as disinformation.

We could say the same for conspiracy theories. These are often spread by ‘true believers’ – genuinely invested in their mission to enlighten the ‘sheeple’ and save the world.

Are they being malicious if they believe it’s all true? Does that still count as disinformation? It would be easier to make this distinction if we could reliably trace the content back to its source. But that’s not always easy to do. 

Those who create disinformation know how to take advantage of natural human biases and triggers. In many cases, it’s enough to simply ‘seed’ harmful disinformation into the social media stream. Ordinary social media users will then do the heavy lifting. Therein lies much of the danger. 

4 Things I’ve Learned From Analysing Russia-Aligned COVID-19 Coverage

Much social unrest has emerged amid COVID-19, such as anti-lockdown protests, attacks on 5G masts, and violent reactions when asked to wear masks. As I write this, a murky far-right group called ‘UK Freedom Movement’ is organising a new spate of anti-lockdown protests around the UK.

This month I’ve been reviewing Russia-aligned news sites. I’ve been looking for key narratives on COVID-19 and the US election. I’ve examined two types of sites: those directly linked to the Russian state, and those with a similar political stance. Many sites share the same core group of authors.

Here are some of my findings, related to the current discussions on social unrest, conspiracy theories and the infodemic.

COVID-19 narratives are consistent across websites

Topics covered on these sites reflect COVID-19 conspiracy narratives found on social media since the pandemic began. Here are three prime examples.

Bill Gates the ‘criminal globalist’

The Microsoft boss features regularly, from the Kremlin-funded news outlet InfoRos to the Russia-aligned news site Fort Russ. Narratives unfold along similar lines.

They claim that Gates is the ‘criminal globalist’ ringleader of a cabal using coronavirus as a smokescreen to impose mandatory tracking and mandatory vaccines.

Justifications for singling out Gates usually cite his prescient 2015 talk, in which he highlighted the global risk of a pandemic, or the Gates Foundation’s funding of the WHO.

Herd immunity vs lockdown

Another key narrative centres on the benefits of herd immunity, often juxtaposed against the negatives of lockdown. Sweden is the poster child for herd immunity. Lockdown is presented as a corrupt government-led attempt to remove people’s basic freedoms.

It’s not hard to imagine how this framing could trigger people who value freedom above all else – and cause events like the anti-lockdown protests that have been cropping up across the US and UK.

The smouldering culture war of Trump and Brexit has extended into new battle lines of ‘lockdown vs herd immunity’. As a result, pandemic control efforts are at risk.

Scapegoating China

China is presented as an innocent player in the pandemic. The US is accused of waging information warfare against China in order to pin the coronavirus on it.

In some articles, the authors claim that the pandemic could create a ‘New Cold War’ between the US and China, with severe consequences for the global economy.

Other sites take it even further, claiming that COVID-19 could spark a nuclear war between the US and a newly formed Russia/China alliance.

Narratives claim that COVID-19 will reshape the world 

Another popular theme is how the outcome of the 2020 US election, combined with the effects of the coronavirus, will cause the US to lose its hegemony. The result will be a shift towards multilateralism.

Some sites claim coronavirus will cause Western governments to “face a legitimacy crisis like never before”, eventually causing so much chaos that it will reshape the global order.

To reinforce this point, they highlight the US’s failure to protect its people from the coronavirus, arguing that it can no longer be called a superpower. Multilateralism is presented as inevitable, given the unprecedented crisis the world now faces.

Anti-imperialism has been a key feature of pro-Russian media for decades. It overlaps with certain far-left lines of thinking, especially among those who critique Western military actions around the world.

They don’t support Trump

“Voters now must choose between Donald Trump, an unstable, incompetent president whose blatant narcissism has been on full display as the nation suffers from coronavirus, and the former vice-president who will diligently represent the rich and govern for their good above all others.”

American Herald Tribune

We often assume that Russia-aligned media is pro-Trump. In fact, many of these news sources criticise Trump as much as Biden. Criticisms of Trump include poor handling of the pandemic, and ‘imperialist shenanigans’ in foreign policy.

Framing of Biden often paints him as sleazy, citing the recent Tara Reade case as evidence. Some articles suggest he may have dementia. Such framing of both candidates as hopeless choices could be a subtle attempt at voter suppression. 

They frame themselves as ‘independent’ thinkers

Most of these websites present themselves as bastions of independent thought. They encourage readers to go beyond the mainstream and discover ‘new’ perspectives.

It reflects a common refrain among social media conspiracy theorists, who often talk about the need to “do your own research”. In practice, that translates as “using Google or YouTube to find content that reinforces one’s existing views”.

Pro-Russia news sites tap into this way of thinking. They use it as a defining aspect of their reporting. It’s a message likely to resonate with the exact kind of person who questions everything.

What’s the link to real life unrest? 

Looking at these websites in aggregate, it’s easy to see how their typical narratives link to social unrest during the pandemic.

I’ve noticed the same themes popping up over and over on social media. Ordinary citizens share them in mainstream Facebook groups (e.g. local news and discussion groups).

These ideas have become rooted in public consciousness. They drive a growing sense of distrust in Western governments, particularly in the UK and US, where populations are already polarised. Both countries have handled the pandemic badly, so it’s easier to create scepticism among a fearful population.

If we were to survey the beliefs of anti-lockdown protesters, 5G mast attackers, and those reacting violently to mask requirements, I bet we’d find echoes of the same narratives found across these ‘alternative’ news websites, many of them either funded by the Russian government or publishing work from the same core group of authors.

Analysing Trump’s Medical Disinformation on Facebook

US president Donald Trump shocked the world this week with his latest piece of medical disinformation.

Trump claimed that injecting disinfectant into the body could be an ‘interesting’ way to cure COVID-19.

He later tried to back-pedal, claiming he was being sarcastic. But that wasn’t how most of the world took it.

Dangers of medical disinformation

The mainstream media and the public widely lambasted this dangerous medical disinformation.

Amid the furore over Trump’s remarks, a major disinfectant firm issued a statement urging the public not to inject or drink any of their products.

However, members of pro-Trump Facebook groups dedicated to conspiracy theories displayed quite the opposite reaction. 

I examined some of these groups to provide comment for an article in CodaStory. I’d previously gathered a list of such groups because of their strong focus on various ‘corona disinformation conspiracies’.

These include 5G causing the virus, the virus being a US bioweapon, and Bill Gates having orchestrated the ‘virus hoax’ in his ambition to enforce a worldwide vaccine programme.

Many of the groups also centred on the QAnon conspiracy theory.

Pro-Trump Facebook reactions

You might expect the suggestion of injecting bleach to be a step too far even for these largely pro-Trump groups. Not so. 

In my initial observation of the groups, I noticed three distinct ways in which the members attempted to account for Trump’s bizarre medical disinformation.

First, that Trump was just ‘playing the media’, and that anyone who believed he meant what he said must be stupid.

Commenters also attributed all the negative media coverage to ‘yet another’ attempt by the MSM (mainstream media), liberals, or Democrats to smear Trump.

Secondly, some commenters claimed that the media had quoted Trump ‘out of context’. According to them, he was speaking ‘more generally’ about possible ways to treat COVID-19.

Others highlighted a fact-check article from the far-right news outlet Breitbart. But no one acknowledged the videos of Trump making these claims for everyone to see and hear.

The third claim relates more closely to another strand of COVID-19 medical disinformation: ‘miracle cures’. Some commenters claimed that Trump must have been referring to existing treatments such as UV light therapy and ozone therapy.

Things got more interesting when one commenter drew links between the medical disinformation about bleach and the popular narrative of Vitamin C as a miracle cure.

They claimed that taking Vitamin C causes hydrogen peroxide to build up in the body. Since hydrogen peroxide has a disinfectant effect, they reasoned, Trump’s comments had a basis in medical fact.

Rationalising medical disinformation

These three counter-narratives about Trump’s medical disinformation all attempt to rationalise an influential figure making a dangerous and irresponsible remark.

Tribal attitudes drive many of these rationalisations. For example, the claims that the media purposefully misinterpreted Trump’s comments in a ‘libs’ or ‘Dems’ smear attack. Once again, this reinforces the existing divide between populist pro-Trump narratives and the mainstream.

The question remains: How many of these Facebook group members are genuine American citizens? Facebook itself is the only entity that could properly attribute the accounts. And it doesn’t seem to be giving much away.

I suspect group members are a mix of genuine Trump supporters and astroturfers working to stir up tribal hatred of the ‘other side’.

Tribal attitudes can be dangerous, particularly in relation to public health. People in the pro-Trump tribe are more likely to challenge messages from the perceived ‘outgroup’ (‘experts’ and the ‘MSM’), such as critical public health advice from the WHO.

A similar dynamic has fuelled recent anti-lockdown protests across the US, which may already have spread the virus further and compromised the entire country. Astroturfing was certainly a factor there; there’s no reason why it couldn’t be influencing these groups too.