
Category: Social Media


9 Common Online Trolling Tactics (And How to Handle Them)

In this post, I’ll discuss the all-too-common problem of online trolling and explain some troll tactics. I’ll also look at the important links between online trolling, disinformation and cyber warfare campaigns. 

What is online trolling?

Anyone who has spent much time on social media has likely experienced their fair share of online trolling. Unfortunately, it’s become the norm these days, now that we spend so much time online, hidden behind our screens. 

Trolling is especially common in discussions about contentious topics such as politics, abortion, race, gender identity, and so on. But what exactly is trolling? And how can you tell the difference between an actual troll and a real person who is just being nasty?

A troll is a social media user who posts deliberately provocative messages to social media accounts, online forums or comment threads. Their intention is to disrupt the discussion, cause arguments and upset people. 

But trolls are more than just unpleasant individuals who enjoy harassing others. They actively seek to draw their opponent into a never-ending argument. That way, they can waste as much of the opponent’s time as possible, while also making them angry and frustrated. 

Why is it called trolling?

The word ‘troll’ might conjure up images of an ugly mythical creature that lives under a bridge, or perhaps something like the Orcs in Lord of the Rings. That’s fairly appropriate. 

But the real origins of the term for this particular type of online behavior are quite different. Before the world moved online, trolling had nothing to do with monsters; it was something fishermen did. 

The Merriam-Webster dictionary defines ‘troll’ as follows:

Troll

verb \ˈtrōl\

  1. to fish with a hook and line that you pull through the water
  2. to search for or try to get (something)
  3. to search through (something)

Basically, online trolls are ‘fishing’ for responses. They’re searching for engagement because that gives them satisfaction and dopamine hits (increasingly, trolling is also part of campaigns to achieve certain political agendas, which we’ll discuss later on). 

In short, trolls show up to generate arguments. In many cases, they don’t really give a damn about the actual topic being discussed. 

What are the most common online trolling tactics? 

As with any defensive response, knowing your enemy is vital. So here’s a speedy primer on the most common online trolling tactics.

Ad Hominem Attacks 

Short for argumentum ad hominem, ad hominem is a Latin phrase that translates as ‘to the person’. It describes a rhetorical strategy where the speaker attacks something personal about their opponent, such as character, motive, or some other attribute, instead of attacking the substance of the argument itself. Online trolling often relies on ad hominem attacks to derail the discussion. But it’s an unsophisticated technique. Social media users commonly notice it and call the trolls out. 

Whataboutism

Also known as whataboutery, whataboutism is the tactic of responding to a statement or argument by accusing the opponent (or something the opponent supports) of even worse faults or crimes. Trolls often leverage whataboutism in political discussions. For example, a pro-Trump troll might respond to a valid critique of Trump by pointing to something supposedly even worse about a figure the opponent supports (such as Hillary Clinton or Joe Biden). In many instances, these ‘even worse’ examples are pure disinformation. 


Sealioning

This tactic, named after an online comic, involves the troll asking relentless questions about the topic of discussion, even after the opponent has explained it in detail numerous times. When called out, the troll will insist that they’re being civil and just want to learn. But in reality, their game is to keep the opponent online as long as possible, derailing the original discussion and making the opponent frustrated and angry. Trolls often use sealioning to derail discussions of important but contentious issues, such as climate change. 

Debate Obsessives

Some trolls will hold forth on the great value of debate, claiming that it’s all that they seek. They make debating seem like a wholesome and healthy pastime, luring unsuspecting opponents into never-ending and fruitless interactions. When challenged, trolls will often bring up their ‘freedom of speech’, and how they’re supposedly being deprived of it. Don’t fall for this tactic; it’s not proper debate. They just want to waste your time. 

Sockpuppetry

Not everyone you encounter online is who they claim to be, or even who they appear to be. Sockpuppetry is a technique where the same person controls a number of fake accounts. It’s often used in cyberwarfare and astroturfing campaigns, where the objective is to create a false perception of public opinion. In many cases, sockpuppets also engage in trolling, because it helps their operators drive division between different identity groups online. This is a key objective of cyberwarfare campaigns, such as the Russian one that helped bring Trump to power. 

Ardent Researchers

These trolls like to harp on about ‘doing your own research’, which has become a common catchphrase among conspiracy theorists. They appear on discussion threads with inflammatory remarks like “COVID-19 is a hoax”, or “Bill Gates is responsible for the pandemic”. When people ask them to provide evidence of these claims, the trolls just tell them to go “do your own research”. For good measure, they may also accuse others of being ‘sheeple’ (i.e. brainwashed by the mainstream). 


The Strawman Fallacy 

This is another logical fallacy that trolls often use to counter their opponent’s argument. They put forth a ridiculously exaggerated or distorted version of that argument, which is then much easier to defeat. That distortion is where the name ‘straw man’ comes from: a flimsy stand-in for the original argument. Trolls often use the strawman fallacy in political arguments, or arguments about emotive topics like abortion or the death penalty. 

Tribalism

These trolls take advantage of our natural tendency towards tribalism by stoking conflict between groups. Tribalism as an online trolling tactic is most often seen in political discussions involving voter groups, such as Democrats vs Republicans or Leavers vs Remainers. It can also show up in discussions about seemingly unrelated issues, such as whether or not to wear a mask during the COVID-19 pandemic. Tribalism in online trolling involves making derogatory remarks about the perceived outgroup. Trolls who create intergroup conflict often have a vested interest, such as being part of a political influence operation. 

Hate Mongering 

Some of the online trolling tactics I’ve mentioned above can be quite subtle. But there’s nothing subtle about trolls who use hate mongering. They may spew racist and/or misogynistic attacks, use hate speech or even issue death threats. The good thing is: all these behaviors go against social media platforms’ terms of use, so it’s worth reporting them. Twitter is normally more responsive to these reports than Facebook is.

Online trolling and influence operations 

Online trolling has become more sinister in recent years. In particular, investigators have revealed that it played a role in influence operations around both the 2016 US election and Brexit. The Russia-based Internet Research Agency (IRA) used tactics such as meme warfare to create identity groups and then set them against one another.  

Now, with the 2020 US election coming up, online trolling tactics have evolved to become more subtle, according to new research. That’s why it’s so important to be familiar with online trolling tactics. What’s more, it’s also useful to know the key signs that indicate fake accounts. When armed with this information, ordinary social media users have a fighting chance of defending themselves against coordinated attacks on democracies.  

How to deal with online trolling 

Unfortunately, there are no magic solutions to the troll problem. There will always be trolls, whether those who are being paid to push an agenda, or those who troll simply because it amuses them. But the social media platforms provide at least a couple of options to help lessen the effects of trolling.

  • Ignore the troll’s posts (or their whole account) using Twitter’s mute function. On Facebook, turn off your notifications for the relevant comment thread. 
  • Muting is better than blocking, because it denies the troll the satisfaction of a response. But if a troll is really bothering you, you can also use the block function, available on any platform.
  • If you absolutely must reply, do it once, then mute the troll or switch off your notifications.
  • If the troll is making comments as described in the ‘hate mongering’ section above (i.e. threatening, racist etc), then report them to the platform (this works better on Twitter than Facebook). 

Summary

  • I discussed what online trolling is and how it got its name.
  • Next, I presented nine commonly used troll tactics, then explained how cyber warfare campaigns leverage these to achieve their divisive agendas.
  • I concluded with a quick note on how to deal with trolls.

Have you experienced online trolling? If so, how did you handle it? Tell us in the comments.

What’s the Difference Between Disinformation and Misinformation?

What’s the difference between disinformation and misinformation?

(I get asked this question a lot, so I thought it was time to write about it).

The difference between disinformation and misinformation lies in the presence of intent.  

The Difference Between Disinformation and Misinformation

Let’s define each term, and look at an example, to understand the difference between disinformation and misinformation.

Misinformation encompasses a wide range of misleading content, from rumour to satire to human error. The crucial point is that misinformation is spread without any intent to deceive.  

Disinformation, on the other hand, has a more specific and sinister meaning. Disinformation is created with an intention to deceive.

For example, a disinformation campaign could involve a doctored video: a political candidate’s gaffes spliced together to imply that he has dementia.

Malinformation and Fake News

We also have malinformation. This is information based on reality, used maliciously to harm its target.

The target could be an individual, a country or an organization. The 2016 leak of Hillary Clinton’s private emails is one such example of malinformation.

Finally, the one everyone knows best: fake news.

Donald Trump popularized the term in 2016 (although Hillary Clinton was apparently the first to use it).

However, disinformation researchers prefer not to use it, as it is both politicised and imprecise. ‘Fake news’ unhelpfully conflates the three primary types of misleading information that I’ve already mentioned.  

New digital technology means that misleading online content shifts and evolves rapidly. There are other subcategories of dubious online content worth knowing about. Here’s a rundown.

Other Misleading Information

Satire

Satire has existed for thousands of years. It highlights failings in society and politics using humour, irony, sarcasm or mockery.

Shakespeare often used satire in his work. For example, in Hamlet, he pointed out that royalty is not always fit to run a country properly. Now in the internet age, satirical news websites such as The Onion or NewsThump have become popular.

But it’s not always clear where satire ends and disinformation begins. It’s easy for political news websites to spread divisive narratives under the guise of satire. The limited attention spans of most online news audiences make it even easier.

In fact, many social media users will share partisan political content that triggers their tribal instincts. They may not notice that it comes from a website that has a ‘satire’ disclaimer. 

Manipulated Images  

Images have a more immediate impact than text, making them an effective tool in disinformation. Manipulation of images is easy with freely available online tools, or Photoshop, and can look very convincing.

Genuine images can be used in misleading contexts, such as during the Westminster Bridge terrorist attack of 2017.

In this case, a Twitter account (later attributed to the Kremlin) shared an image of a woman wearing a hijab. The tweet included a hostile caption claiming that the woman was ignoring injured victims.

The tweet was designed to trigger anti-Muslim sentiment, and numerous far-right influencers shared it. It garnered thousands of retweets and likes. But although the image itself may have been genuine, the context was not.  

Deepfakes

As one of the newest forms of disinformation, deepfakes have attracted a lot of hype in the last couple of years. This AI-generated content is a type of synthetic media in which one person’s face and/or voice replaces that of another.

A deepfake can make it look like a person is saying something they’re not. This has many obvious use cases for disinformation. So far, deepfakes have mainly been used in porn. But, in a handful of cases, they’ve also played a role in disinformation efforts.  

The immediate risk of deepfakes may have been overstated, but they do have the potential to upend the information environment. My biggest concern is that deepfakes could destroy the notion of absolute truth.

Once upon a time a taped voice recording would hold up in court (e.g. Watergate). In later years, CCTV footage became the absolute truth. But a world in which deepfakes are prevalent would no longer have an absolute truth. It would cast doubt on every possible form of recorded evidence.

Shallowfakes and Cheapfakes

In addition to deepfakes, we need to consider shallowfakes, also known as ‘cheapfakes’. People create these doctored videos without the help of fancy AI tools, using only simple video editing software.

Shallowfakes are far more common than their AI-generated cousins. And social media platforms seem to overlook them. Facebook, for example, only bans doctored videos made with AI, i.e. deepfakes.

In February 2020, a shallowfake caused quite a stir: a doctored video circulated on social media showing Nancy Pelosi shredding a copy of Donald Trump’s speech during his State of the Union address.

Memes

The word ‘meme’ has become synonymous with random humorous cultural images superimposed with chunky white text. Below, a small selection of my recent COVID-19-related favourites.

[Images: a ‘distracted boyfriend’ anti-vaxxer disinformation meme; a Hannibal Lecter face-mask COVID-19 meme; a meme about failed plans in 2020]

In fact, the word ‘meme’ can refer to any piece of cultural content (video, image, song, article, etc) that social media users spread virally. (That typical chunky text/image combo that we all call a meme is actually known as an ‘image macro’).

Meme creators often use the image macro format to convey partisan political sentiments. Both sides of the political spectrum shared inflammatory memes during the 2016 US presidential campaign.

Far-right supporters also used the same format to spread some highly offensive views, such as racism and anti-Semitism in ‘Pepe the Frog’ memes.

Image macro political memes are generally hyper-partisan in nature and play a role in perpetuating conflict between opposing groups (e.g. Democrats and Republicans).

[Image: an image-macro meme spreading disinformation about Hillary Clinton]

Memes are devoid of any nuanced explanation. The viewer simply gets a quick hit of emotion that has a triggering effect. That taps into an oversimplified outrage that has become a core feature of today’s political life online. 

AI-Generated Voiceovers 

These are a bit weird and random. During a recent project for YouTube, I discovered some strange videos spreading false information about COVID-19.

The voiceover didn’t sound human at all. It was robotic and monotone, as if reading from a text. I don’t know their official name, if they have one at all, but perhaps they’re something to keep an eye on.

From Disinformation to Misinformation (and back again?)

In closing, I’ve been thinking about this question: Does content shift from disinformation to misinformation as it travels across the internet? 

Malicious intent defines disinformation. Could a piece of content shift between definitions according to the intention of the most recent individual who shared it?  

For example, a person shares a narrative in their local Facebook group, claiming COVID-19 is curable with megadoses of bleach. It’s been debunked, of course, but (somehow) the person doesn’t know.

They innocently think they’re helping their network by passing on ‘valuable’ health information that might help cure the virus. They don’t intend to deceive. So shouldn’t we class it as misinformation?

Let’s say that same piece of content originated in a troll farm. Its creators intended it to deceive populations and compromise public health efforts. It started life as disinformation.

We could say the same for conspiracy theories. These are often spread by ‘true believers’ – genuinely invested in their mission to enlighten the ‘sheeple’ and save the world.

Are they being malicious if they believe it’s all true? Does that still count as disinformation? It would be easier to make this distinction if we could reliably trace the content back to its source. But that’s not always easy to do. 

Those who create disinformation know how to take advantage of natural human biases and triggers. In many cases, it’s enough to simply ‘seed’ harmful disinformation into the social media stream. Ordinary social media users will then do the heavy lifting. And therein lies much of the danger. 

Dopamine and Disinformation: How To Quit Feeding The Trolls

Dopamine and disinformation are intrinsically linked. In this article, I’ll explain how this works, and look at some ways to lessen the effects.

I used to spend a lot of time arguing with strangers on the Internet.

It normally went like this: I would post something political and the trolls would arrive. I’d get stuck in endless back-and-forths with them. It was a futile endeavor. But somehow I felt compelled to keep on engaging.

On Twitter or Facebook, new notifications would keep coming in, alerting me to another new comment. Then I’d engage again. It wasted my time, raised my anxiety levels, and made me feel more aggressive.

I’d gotten stuck in a dopamine loop.

This happens to us a lot, for example when we can’t stop refreshing our social media feeds. But what exactly is dopamine and what does it do?

How dopamine creates addiction

Dopamine is a chemical created in the brain. It affects many functions, such as thinking, mood, attention and motivation. It also plays a critical role in triggering our desire to seek out and search. In short, dopamine makes us want to search out information.

Two parts are at play in the dopamine loop. There’s the ‘wanting’ part and the ‘liking’ part. The wanting part makes you take action (i.e. searching), while the liking part satisfies you and makes you stop searching. But the wanting part is stronger than the liking part. And that’s how we get trapped in endless scroll mode.

Another important part of the dopamine system relates directly to the issue of engaging with troll comments.

The dopamine system is very sensitive to cues that indicate the possibility of a reward. On social media, notifications are the cues. They make the addiction even stronger. Anticipation is key. If we got a reward every time we logged in, it would weaken the addiction. It’s the uncertainty that gets us hooked.

So how did these behavioural psychology elements get incorporated into our Internet experience in the first place?

Engineered addiction and trolling

It began with a researcher called BJ Fogg and his Persuasive Technology Lab at Stanford University. In September 2007, Fogg and his students began work on a formula that would change the world — and all of our daily lives.

They built Facebook apps using techniques from behavioral psychology, techniques that could engineer addiction, such as the dopamine loop that I described above. Fogg developed a formula, suggesting that people will act when three forces converge: motivation, trigger, and ability.

Let’s apply this formula to a hostile online interaction. The motivation is your desire to convince the other person that your opinion is right, or to get ‘one up’ on them; the trigger is seeing another notification on your social media app, indicating that your opponent has responded; and the ability is having your phone to hand, which lets you check right away and respond.
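To make Fogg’s three-force idea concrete, here is a minimal sketch (my own illustration in Python, not Fogg’s notation): the behaviour fires only when a trigger arrives while motivation and ability together clear a threshold. The function name, scores and threshold below are hypothetical.

    # A toy sketch of the 'motivation + trigger + ability' idea.
    # The scoring and threshold are invented for illustration; Fogg's model is
    # conceptual, not a literal formula with these numbers.
    def behaviour_occurs(motivation: float, ability: float, trigger: bool,
                         activation_threshold: float = 1.0) -> bool:
        """Return True if the person acts: a trigger arrives while combined
        motivation and ability are high enough to cross the threshold."""
        if not trigger:
            return False  # no prompt, no action, however motivated we are
        return motivation * ability >= activation_threshold

    # A heated thread: a strong urge to 'win' (motivation), phone in hand (ability),
    # and a push notification as the trigger.
    print(behaviour_occurs(motivation=0.9, ability=1.5, trigger=True))   # True: we fire off a reply
    print(behaviour_occurs(motivation=0.9, ability=1.5, trigger=False))  # False: notifications off, no reply

Remove any one of the three forces, most easily the trigger, and the behaviour simply doesn’t happen; that is exactly what the advice about muting notifications later in this piece relies on.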

These techniques tap directly into the workings of the human brain, making them extremely powerful.

So why do social media platforms leverage such addictive and manipulative design practices?

Simple: their profitability depends on it.

The ad-supported business model means that users spending more time on the app leads to more profit for the company. All that time you spend arguing with trolls on Facebook is making the platform more valuable to potential advertisers.

Dopamine and disinformation

Arguing online also ties dopamine to disinformation, because it can make us more susceptible to the latter. The dopamine loop gives a powerful tool to those who seek to divide us. It perpetuates a sense of an adversarial environment and keeps us always on the attack.

When we divide ourselves into tribes and adopt a hostile attitude towards a perceived outgroup, we risk becoming more susceptible to harmful disinformation. We are more likely to share content that resonates with our existing beliefs and reflects the wider attitudes of our tribe.

The dopamine loop also affects our interactions with agreeable content. When we post something that our tribe members approve of, we’ll receive dopamine hits via our notifications of likes and encouraging comments. That boosts our self-esteem and keeps us coming back for more.

So what can we do to fix the troll problem and reduce our susceptibility to disinformation?

Breaking out of the loop

Short-term practical solutions mainly involve adapting our devices to help break the dopamine loop. For example, we could make our phones less appealing by changing the screen to grayscale mode, or switching off all notifications.

But we can also tackle adversarial online behavior in another way.

‘Don’t feed the trolls’ has become an Internet cliché. But it starts to make sense when thinking about the dopamine loop and anticipatory reward.

Everyone who posts online is looking for a response. They want to perpetuate the dopamine loop. If we can maintain our self control by denying them that response, then we can break the cycle.

I’ve managed to break my own addiction to engaging with trolls. On Twitter, I simply mute the conversation. I can’t see the person anymore, and I receive no notifications of any of their comments. It makes the troll invisible to me, breaking the dopamine loop and allowing my brain to focus on other things.

On Facebook, I simply turn off notifications for a particular post. This has the same effect as muting on Twitter. Both platforms also offer a blocking option. I don’t normally use this because it gives trolls the satisfaction of knowing they’ve got a response. Muting is better, because it means they’re left wondering if I’m ignoring them. They just keep on yelling into the void.

Battleground or community

If we could all break the cycle and quit feeding the trolls, then adversarial disinformation and influence ops could lose much of their power. The online environment would feel like more of a community instead of a battleground. In turn, this may help reduce polarization.

But it has to be done en masse. A handful of people breaking the cycle won’t be enough to change the overall environment. Social media is designed to be addictive, so a wider intervention is necessary to drive change.

Of course, the social media platforms have all the power. They could redesign their structures to destroy these dopamine loops and disincentivize disinformation. But their ad-driven business model means they lack an incentive to do so.

Nevertheless, we can still improve our individual online experiences by taking steps to break the dopamine and disinformation cycle. A key part of doing so is to disengage with users who aim to trap us into never-ending adversarial debates based around polarizing topics.

Analyzing Trump’s Medical Disinformation on Facebook

US President Donald Trump shocked the world with his latest piece of medical disinformation.

Trump claimed that injecting disinfectant into the body could be an ‘interesting’ way to cure COVID-19.

He later tried to back-pedal, claiming he was being sarcastic. But that wasn’t how most of the world took it.

Dangers of medical disinformation

The mainstream media and the public widely lambasted this dangerous medical disinformation.

Amid the furore over Trump’s remarks, a major disinfectant firm issued a statement urging the public not to inject or drink any of their products.

However, members of pro-Trump Facebook groups dedicated to conspiracy theories displayed quite the opposite reaction. 

I examined some of these groups to provide comment for an article in CodaStory. I’d previously compiled this list of groups because of their strong focus on various ‘corona disinformation conspiracies’.

These include 5G causing the virus, the virus being a US bioweapon, and Bill Gates having orchestrated the ‘virus hoax’ as part of his ambition to enforce a worldwide vaccine programme. 

Many of the groups also centred around the QAnon conspiracy theory.

Pro-Trump Facebook reactions

You might expect the suggestion of injecting bleach to be a step too far even for these largely pro-Trump groups. Not so. 

In my initial observation of the groups, I noticed three distinct ways in which the members attempted to account for Trump’s bizarre medical disinformation.

First, that Trump was just ‘playing the media’; anyone who believed he meant what he said must be stupid.

Commenters also attributed all the negative media coverage to ‘yet another’ MSM (mainstream media), liberal, or Democrat attempt to smear Trump.

Secondly, some commenters claimed that the media had quoted Trump ‘out of context’. According to them, he was speaking ‘more generally’ about possible ways to treat COVID-19.

Others highlighted a fact check article from far-right news outlet Breitbart. But no-one acknowledged the videos of Trump making these claims for everyone to see and hear. 

The third claim relates more closely to another strand of COVID-19 medical disinformation: ‘miracle cures’. One commenter argued that Trump must have been referring to existing treatments such as UV light therapy and ozone therapy.

Things got more interesting when the commenter drew links between the medical disinformation about bleach and the popular narrative of ‘Vitamin C as miracle cure’.

They claimed that taking Vitamin C causes hydrogen peroxide to build up in the body, and since hydrogen peroxide has a disinfectant effect, Trump’s comments supposedly had a basis in medical fact.

Rationalizing medical disinformation

These three counter-narratives about Trump’s medical disinformation all attempt to rationalize the actions of an influential figure, who has made a dangerous and irresponsible remark.

Tribal attitudes drive many of these rationalizations: for example, the claims that the media purposefully misinterpreted Trump’s comments as part of a ‘libs’ or ‘Dems’ smear attack. Once again, this reinforces the existing divide between populist pro-Trump narratives and the mainstream.

The question remains: How many of these Facebook group members are genuine American citizens? Facebook itself is the only entity that could properly attribute the accounts, but it doesn’t give much away.

I suspect group members are a mix of genuine Trump supporters and astroturfers working to stir up tribal hatred of the ‘other side’.

Tribal attitudes can be dangerous, particularly in relation to public health. People in the pro-Trump tribe are more likely to challenge messages from the perceived ‘outgroup’ (‘experts’ and the ‘MSM’) such as critical public health advice from the WHO.

A similar dynamic has fuelled recent anti-lockdown protests across the US, which may already have spread the virus further and compromised the entire country. Astroturfing was certainly a factor there; so there’s no reason why it couldn’t influence these groups too.

Tribalism In The Time Of Coronavirus

As I write this, the world has descended into a major crisis, with effects more far-reaching than anything I’ve experienced in my lifetime. A powerful virus has swept onto the scene and is now ripping its way through the world. Barely any country has been spared.

Here in the UK, the coronavirus crisis is getting worse by the day. But merely observing the city streets on this sunny spring Sunday would give no indication of the gravity of the situation. Indeed, some UK tourist spots, notably Snowdon, experienced their ‘busiest day in living memory’. That’s quite something at a time when a highly contagious virus is on the loose.

In contrast, the streets of Paris, Lisbon and Barcelona are deserted. Most EU countries have issued a decisive response, putting their populations under strict lockdown to try and curb the spread of the virus. The UK government hasn’t followed suit.

Britain is saddled with unfortunate leadership in such a time of crisis. Messages from central government have been unclear and have arrived far too late. Many people have died. Amid the frenzied warnings from other countries, tribalism rooted in the impulses that drove Brexit still bisects British society, even influencing how we perceive choices that may mean life or death. 

Brexit tribalism could be seen as a barometer for who will approve or disapprove of Boris Johnson’s handling of the coronavirus situation. No scientific study has yet been conducted to prove or disprove this, but research from Cambridge has shown that Leave (and Trump) voters have a strong tendency to believe conspiracy theories.

So if I may hypothesise for a moment, it would go as follows.

Those who believe Johnson is doing well, and who see no need for self-isolation, are more likely to be Leave voters. Those who believe Johnson is doing the wrong thing, and that we should follow the majority of the EU (and the world) into lockdown, are more likely to be Remain voters. 

I can’t help but wonder if these divided attitudes are linked to the government’s aggressively anti-EU narrative. Could it possibly be that our leaders are reluctant to implement lockdown because it would mean them falling into line with the EU? The British government can’t possibly be seen to do that. On the contrary, it must do the exact opposite. After all, there’s a voter base to keep happy.

This tribal stance has filtered down to the population. People’s cavalier real-life behaviour at a critical juncture risks the health and safety of us all.

We’ve gone beyond Brexit concerns now. Freedom of movement is no longer the most important thing at stake. Continued tribal attitudes in the UK could now lead to significant numbers of deaths. The reckoning has arrived. No matter what side of the political spectrum we’re on, we must ensure that tribalism does not cloud our actions on tackling the virus, as the New European so rightly points out.

There’s another factor influencing public opinion around coronavirus: online disinformation. It’s been a key part of turbocharging existing tribal divisions.

Based on my research so far, I’ve seen the following positions solidifying into recurring narratives. Many are from sources that originate in the United States, but the shared language and overlapping ideologies mean they can mostly be considered as UK-relevant too.  

Narratives primarily from conservative/right-wing/pro-Leave sources:

  • The coronavirus is a hoax used as a smokescreen for elites to take control of society
  • It’s no worse than the flu, so there’s no need to believe WHO or UN advice (in fact we shouldn’t trust them because they may be part of the elite conspiracy)
  • Social distancing is unnecessary / too extreme
  • China is to blame for all this. To quote Trump, coronavirus is ‘the Chinese virus’ 

Narratives primarily from liberal/left-wing/centrist/pro-Remain sources:

  • The coronavirus is real, serious, and affects everyone 
  • It can’t be compared to flu
  • We should trust advice from WHO/UN and other legitimate experts
  • Social distancing, and possibly a lockdown, is necessary to save lives across the wider population. 

Most of the disinformation that I’ve observed so far plays on the core narrative strands in the first group. People targeted by these narratives might well be less likely to take the virus seriously and more likely to carry on with a semblance of normal life, thus prolonging the pandemic. This unhelpful behaviour is exacerbated by the population spending more time at home, and hence online, seeking out constant updates on this critical global threat.

In the next post, I will unravel the coronavirus disinformation narratives in more detail, providing data-driven examples. It’s critical to understand the why behind the seeding of this disinformation, so I’ll also discuss the various incentives that are driving it.