Voter Suppression Online: How To Fight Back

Introduction

In this week’s article, I’m going to delve into the important issue of online voter suppression. I’ll give some examples of what voter suppression attempts look like on social media and explain specifically how disinformation is involved. I’ll also discuss the psychological drivers that make online voter suppression effective. In closing, I’ll suggest some ways that we can all fight back. 

What is Voter Suppression?

Voter suppression is defined as “the use of specific tactics with the intent of manipulating the outcome of an election.” Historically, these tactics have included introducing restrictive voter ID laws, limiting early voting, purging voter rolls, and other forms of systematic disenfranchisement.

Voter suppression commonly affects minority and vulnerable groups, such as the elderly or disabled. In the United States, suppression attempts have existed ever since voting began, despite the original intention for voting to be open to all. The Jim Crow laws introduced from 1877 onwards are a prime example of historical voter suppression targeting Black voters.

Voter suppression happens in other countries too. For example, in 2019, the UK’s ruling Conservative Party introduced pilot schemes requiring voters to produce photo identification at polling stations. The UK has no compulsory photo ID. Activists argued that the move would disenfranchise those sectors of society who do not hold a driving licence or passport, often the poorest.

Voter Suppression in 2020

We live in a technologically advanced age, but voter suppression hasn’t stopped. If anything, it has become even more of a risk to democratic processes worldwide. The internet, and social media in particular, has given new scope to voter suppression attempts. Now, it’s easy for hostile foreign powers to conduct voter suppression campaigns targeting the citizens of another nation.

Online voter suppression is highly likely in 2020. Firstly, since 2016, falsehoods, conspiracy theories and uncertainty have characterized our information environment. It’s become the new normal. Add to that a pandemic, during which people are constantly online while feeling fearful and paranoid. Then top things off with a critical presidential election in a deeply polarized United States. 

Foreign powers are also weighing in on the upcoming US election. China and Russia both have stakes in the outcome, but they’re pulling in opposite directions. According to a recent intelligence report, China is supporting Joe Biden, while Russia prefers a Trump win. We can therefore expect online voter suppression activity to fall along those lines.

Online Voter Suppression Examples

Let’s examine some examples of how voter suppression works in a social media environment. 

Facebook groups, especially private locally-focused ones, are likely to become key targets for voter suppression attempts in the run-up to the 2020 US election. Tactics may include the following: 

  • Using fear, e.g. claims that COVID-19 is circulating and people should not go out to vote; claims about violent street protests near polling stations  
  • Spreading disinformation about the voting process, e.g. giving false polling station opening hours; claiming that only pencils can be used to mark the ballot; claiming that photo ID is essential. 
  • Creating apathy, e.g. earlier in 2020, Russian media sources were spreading a narrative that both Biden and Trump are bad options for president, because they both ‘have mental health issues’. The goal here is to discourage people from voting for either candidate.  
  • Targeting a candidate with malinformation, disinformation or conspiracy theories, e.g. the Clinton email leak of 2016 was the starting point for a lot of long-running activity intended to harm Hillary Clinton’s candidacy. In 2020, we’ve seen narratives circulating that question Joe Biden’s mental health, e.g. accusing him of having dementia or Alzheimer’s. 
  • Using Facebook’s micro-targeting capabilities to reach specific parts of the voting population with ads only visible to them. This provides endless scope for targeting vulnerable subsections with suppressive messages that other voters (and the press) will probably never see.
  • Creating mistrust in the legitimacy of the voting process, e.g. by spreading online narratives about fraudulent behavior at the ballot box (e.g. when counting votes). NB: Authoritarian leaders often use this tactic to discredit their opponents. For example, in 2019, Turkey’s president, Recep Tayyip Erdogan, ordered a recount of the Istanbul mayoral election vote after his party lost.
  • Combinations of any of the above, for example using Facebook micro-targeted ads to deliver messaging about localized street violence or lockdowns, to discourage particular groups from going out to vote.

Voter Manipulation Example

There are other ways to influence a vote which don’t quite fall into the category of voter suppression, but are worth mentioning as they may overlap. For example, it’s possible to astroturf the perception of divisions between subgroups on one political side.

Here’s an example from the UK. In the general election of 2019, the pro-Remain side (primarily Labour and Liberal Democrats) may have been subjected to online efforts aimed at turning groups against one another. 

Creating this division would have been high stakes, because coordinated strategic voting for pro-Remain parties was considered the only way to oust sitting Conservative MPs. Done successfully, this could have prevented a Conservative (and pro-Brexit) majority in government. 

I observed frequent activity in local Facebook groups that was in line with this goal, e.g. lots of supposedly pro-Remain users stating they ‘could never vote for Jeremy Corbyn’, claims that Labour was ‘communist’, claims that the Liberal Democrats were simply Conservatives under another moniker (‘yellow Tories’), and so on. (NB: Astroturfing has not yet been proven in this case. This example represents an educated assessment based on first-hand observation and background knowledge).

Instead of cooperating, Labour and Liberal Democrat voters turned against one another. Many refused to vote tactically, citing ideological reasons. 

Those sentiments likely undermined the tactical voting initiative. They may have helped tilt the election outcome in favor of the Conservative Party/pro-Brexit side, which went on to win the election.  

The Psychology of Online Voter Suppression

Online voter suppression works by tapping into many of the same psychological biases that make social media users vulnerable to disinformation in general, especially within a deeply polarized political environment. 

In such an environment, people are highly likely to interact online only with members of their own tribe, i.e. those who share their views. Interactions outside the tribe tend to be hostile, aimed at getting one up on the opponent.

A range of cognitive biases help our brains make sense of the world and avoid information overload. But some of them make us more susceptible to manipulation. For example, thanks to confirmation bias, we’re far more likely to believe information that fits our existing worldview. On the other hand, we’re quick to doubt that which does not. 

Furthermore, we tend to engage positively with information shared by people we know, or who appear to be in our tribe. The problem is, badges of tribe membership (e.g. use of hashtags, flag emojis, or ideological statements in the bio) can easily be faked on social media. It’s not always easy to be sure if we’re interacting with a genuine person or someone with a political agenda. 

How You Can Fight Back 

  • Question the incentives of all voting-related content that you see on social media. 
  • Know the tell-tale signs of inauthentic social media users (hint: examine the source carefully).
  • Look for repetitive user behavior and single issue fixation. 
  • Be suspicious of users who try to stir up intergroup conflict over voting related issues. 
  • Think about tribalism and look out for those who use hyper-partisan language (e.g. lefty, commie, woke, libtard, MAGAT).  
  • Pause before sharing voting-related posts and think about what’s compelling you to share them. Is your need to share an emotion-driven knee-jerk response, or are you doing it in a more measured, rational way? 
  • Only pay attention to voting-related messages from trustworthy official sources (i.e. government websites or official social media accounts). 
  • Register to vote and go to your local polling station on voting day, no matter what happens. It’s NEVER a wasted exercise, no matter what some manipulators would like you to believe. 

Conclusion

In this post, I briefly presented the history of voter suppression and how it has evolved in the 21st century. I also gave examples of how it manifests on social media and touched on the psychological factors that make us susceptible to it. I ended with some suggestions for how ordinary social media users can fight back against potential voter suppression attempts.


4 Simple Ways To Stop Sharing Fake News and Disinformation

Introduction

Fake news, more specifically known as disinformation, is a major problem that shows no sign of going away. If anything, it’s evolving in new ways to become more nefarious than before. Elections are always major flashpoints for fake news, and the US presidential election of 2020 is no exception. 

Many individuals and organizations are working hard to come up with ways to fight fake news and disinformation. In the meantime, ordinary internet users can also do their part to help.

In this post, I’ll discuss four simple ways that you can stop sharing fake news and disinformation.

4 Simple Ways To Stop Sharing Fake News and Disinformation

Break Out Of Dopamine Loops

What is a dopamine loop and how does it relate to fake news and disinformation?

Dopamine is a chemical in the brain that affects functions such as mood, attention and motivation. It also plays a key role in affecting our desire to seek out new things – like information. 

Dopamine loops consist of two parts: wanting and liking. ‘Wanting’ compels you to keep searching for new information, while ‘liking’ is the part that makes you feel satisfied once you find it.

The trouble is, in the dopamine loop, wanting is stronger than liking. This leads to us getting trapped, constantly compelled to seek new information. 

The original designers of social media knew all about dopamine loops. They designed the platforms with them in mind, knowing that the loops would keep users hooked on the platform. That would increase the likelihood of users viewing online ads.  

So how does the dopamine loop relate to fake news and disinformation? One major way that we get dopamine hits online is through receiving notifications on social media.

You know, those little red numbers in the top corner of Facebook. Think about how you feel when you open your profile and see a bunch of them waiting for you. You feel good, right? This is dopamine firing in your brain. 

Sharing content with your friends and followers is a great way to get notifications, which gives you even more dopamine. But this is where we find the danger from fake news and disinformation.

When we share to get dopamine hits, we’re less likely to take the time to check whether the information we’re sharing is actually true. After all, we’re constantly in a state of information overload. 

One way to stop spreading fake news and disinformation is to break our addiction to dopamine. It makes us vulnerable. We need to avoid getting stuck in dopamine loops, constantly refreshing our social media apps in the hunt for fresh information and notifications. 

Quick ways to break the dopamine loop:

  • Turning off your social media notifications altogether
  • Switching your device to grayscale mode (making it less likely to produce a dopamine hit)
  • Pausing to take a few deep breaths before sharing any content 

But there’s another critical way to stop sharing fake news and disinformation…

Avoid Heated Arguments Online

The internet is full of trolls. No matter what you say online, it often feels like someone is always ready to attack you for it. Your natural instinct is to strike back. That’s a very human response. But it risks making you more likely to share fake news and disinformation.  

Why? Because arguing online is another way to get trapped in a dopamine loop. Your antagonist keeps responding, you keep getting more notifications. You keep arguing back, and the cycle continues.

Often, you’ll share a piece of online content, perhaps a news article, to prove your point and get ‘one up’ on your opponent. When doing so, you probably don’t take the time to fact-check the article. That’s where the danger is. 

What’s more, some online trolls are not just random individuals. They’re part of coordinated inauthentic behavior campaigns designed to sow division and hostility around certain topics (usually political ones).

These campaigns usually involve fake news and disinformation too. By arguing with these political trolls, you’re giving them exactly what they want. 

Luckily, there’s an easy way to avoid being drawn into online political arguments. On Twitter, it’s the mute function (either mute conversation, or mute user). On Facebook, you can turn off notifications about a specific post.

These features are great, because they allow you to break out of the dopamine loop and the troll has no idea. They just carry on yelling into the void. Meanwhile, you carry on with your day and remain blissfully unaware.

Check Your Confirmation Biases

Confirmation bias plays a key role in increasing our likelihood of sharing fake news and disinformation. But what exactly is it?

Confirmation bias is our natural tendency to search for, favor and easily believe information that fits with our existing worldview. 

Let’s look at how confirmation bias works in practice. Say you see a tweet (falsely) claiming that US presidential candidate Joe Biden has dementia.

You’re a Trump supporter and you don’t like Biden. Thanks to confirmation bias, you’re very likely to hit retweet on this tweet without even stopping to question if it’s really true. 

You also know that your Twitter followers (who have similar worldviews) will appreciate your sharing this tweet. They’re likely to give it lots of attention, including retweets and favorites – i.e. plenty of extra dopamine for you. 

However, if you saw a similar tweet questioning Trump’s mental health, it’s far more likely that you’d be skeptical of it. Of course, this works in the other direction too. Confirmation bias is not unique to either end of the political spectrum. 

It’s dangerous, because it makes people automatically believe (and probably share) content that fits their worldviews, without stopping to check its veracity. 

If you really want to stop sharing fake news and disinformation, you have to approach your social media use knowing that you have confirmation bias. You have to consciously remind yourself what exactly it is that compels you to share a certain post.

It’s not easy, but it’s a necessary step to help stop sharing fake news and disinformation.

Consider Content Incentives For Fake News

Finally, I want to discuss the incentives of social media content. Every post and article on the internet has a certain incentive behind it. For many content creators, publishing articles is a way to drive traffic to their websites, to earn money from online ads. This is their main incentive.

But the social media space is noisy, so those articles need to stand out. That’s why you’ll see so many overblown clickbait titles that often bear little relevance to the actual content of the article.

In particular, politics is a highly charged and emotive topic, so it’s often used to catch attention and drive site traffic. That’s how the infamous Macedonian teenagers made money from pushing pro-Trump fake news in 2016.

Another incentive in content creation is to push a specific worldview, perhaps on behalf of a foreign government. The Kremlin uses this technique a lot.

In the early days of the 2020 pandemic, I found that Russia-linked news sites were pushing conspiracy theory narratives (e.g. the dangers of 5G, Bill Gates being responsible for the coronavirus, the coronavirus being a hoax). These showed up consistently on social media, for example in US- and UK-based Facebook groups.  

Before sharing something on social media, consider the incentives of its creator. Are you truly happy to help that clickbait website make more ad money, or to help a hostile foreign government promote its worldview to your fellow countrymen?

Summary

In this article, I presented four simple ways to stop sharing fake news and disinformation. I talked about the following points:

  • How to break out of dopamine loops 
  • How to avoid heated arguments online 
  • Why you should check your confirmation biases
  • Why you should consider the incentives of content

Are you doing any of these already? Let us know in the comments.

Three Easy Ways to Spot Coordinated Inauthentic Behavior Online

After spending almost a year tracking coordinated inauthentic behavior on behalf of Facebook and Google, I’ve developed a good sense of how an inauthentic social media user looks and acts online. 

Each platform has its own nuances. But many universal patterns indicate the likelihood of coordinated activity. Here I’ll discuss three common indicators – and how you can effectively spot them across any social media platform. 

But first, let’s take a look at what coordinated inauthentic behavior actually means. We’ll also briefly explore some controversy around its definition.

What is coordinated inauthentic behavior?

Two years ago, Facebook first coined the phrase ‘coordinated inauthentic behavior’ (known in the industry as CIB).

Facebook defines CIB as follows:  “When groups of pages or people work together to mislead others about who they are or what they’re doing.”

Facebook (and other platforms) are keen to highlight the ‘behavior’ side of the phrase. This helps to shield the platform from accusations of being biased against any particular political ideology.

People would be quick to make those accusations if Facebook simply focused on removing inauthentic content. It would raise the question of which content should get removed and which should stay. This would trigger wider concerns about freedom of speech and the First Amendment.  

The double standards controversy

Writing for Slate, Harvard researcher Evelyn Douek argues that Facebook’s definition of coordinated inauthentic behavior lacks clarity.

She argues that certain groups will consider certain kinds of CIB acceptable, whereas others will not. Douek draws on the example of the TikTok video that caused hundreds of young people to artificially inflate attendance numbers at a Donald Trump rally by reserving tickets en masse.

Douek contrasts that real-life scenario with a hypothetical example of Qanon supporters doing the same to Joe Biden. She highlights the risk of applying double standards to CIB, as well as to disinformation.

That’s a real concern, especially in deeply polarized times. Polarization is the key driving force behind this issue. We assume that ‘our’ side is doing good, while ‘their’ side is doing bad. That view influences how we judge the motives of coordinated inauthentic behavior. 

For the purpose of this post, we’ll use the official CIB definition. It’s still the standard that most social media platforms use. But it’s important to know that the term is not perfect, and has attracted controversy.

Is coordinated inauthentic behavior the same as misinformation or disinformation?

No. But they certainly play a role in it. For example, members of a Twitter botnet might work together to constantly pump out and amplify misleading tweets about a political figure.

Or groups of paid operatives might enter Facebook groups and astroturf the discussion about coronavirus by posting lots of comments about the dangers of vaccines. Astroturfing is a common technique of CIB, i.e. to create an appearance of legitimate ‘grassroots’ consensus on certain topics. 

OK, I’ve answered some key questions about coordinated inauthentic behavior. Now let’s look at three ways to spot it.

What are some key indicators of coordinated inauthentic behavior?

Identity Signaling

The concept of identity is at the heart of many coordinated inauthentic behavior and disinformation efforts. CIB campaigns often play on existing social and political divisions within their target audience.

For example, they might astroturf a widespread sense of approval for a certain government policy, such as a tougher stance on immigration. Immigration is an emotive issue for many people, and has the potential to drive ingroup vs outgroup sentiments.

When examining accounts for signs of inauthenticity, I consider overt identity signals, especially political ones, to be a red flag. These could include national flags, divisive political hashtags (e.g. #MAGA (make America great again) or #FBPE (follow back pro-Europe)), or a bio stuffed with identity-promoting keywords like “Army vet, patriot, Trump supporter, family man, God lover”.

Taken together, those signs indicate that the profile primarily exists to promote a certain political identity – a common giveaway of astroturfing or coordinated inauthentic behavior.
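
To give a rough sense of how this kind of triage might be automated, here’s a minimal Python sketch that counts overt identity signals in a profile bio. The hashtag and keyword lists are illustrative assumptions drawn from the examples above, not a validated detection model, and a high score should only ever prompt a closer manual look.

```python
# Minimal sketch: score a profile bio for overt identity signals.
# Marker lists are illustrative assumptions, not a validated model.
SIGNAL_HASHTAGS = {"#maga", "#fbpe"}
SIGNAL_KEYWORDS = {"patriot", "army vet", "trump supporter", "family man", "god lover"}
FLAG_EMOJIS = {"\U0001F1FA\U0001F1F8", "\U0001F1EC\U0001F1E7"}  # US and UK flag emoji

def identity_signal_score(bio: str) -> int:
    """Count identity-signal markers in a bio; higher means heavier signaling."""
    text = bio.lower()
    score = sum(tag in text for tag in SIGNAL_HASHTAGS)
    score += sum(keyword in text for keyword in SIGNAL_KEYWORDS)
    score += sum(flag in bio for flag in FLAG_EMOJIS)
    return score

example_bio = "Army vet, patriot, Trump supporter, family man, God lover #MAGA"
print(identity_signal_score(example_bio))  # 6 -> worth a closer manual look
```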

Copy Paste Sharing

It’s common to find groups of accounts sharing links or posts accompanied by the exact same text (e.g. in a quoted tweet or a Facebook share). This isn’t normal behavior for an ‘organic’ social media user, so it’s a suspicious sign.

Copy paste sharing usually indicates a campaign designed to amplify a certain message. It is likely that Twitter accounts constantly tweeting the same messages in tandem are automated (i.e. bots).
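
As a concrete illustration, here’s a short Python sketch of how copy paste clusters might be surfaced from a set of shares. It assumes you already have (account, share text) pairs exported from a platform, and the account threshold is an arbitrary assumption rather than an established cutoff.

```python
from collections import defaultdict

def find_copy_paste_clusters(shares, min_accounts=5):
    """Group shares by exact text and return texts repeated verbatim by
    many distinct accounts -- a possible sign of coordinated amplification."""
    accounts_by_text = defaultdict(set)
    for account_id, text in shares:
        accounts_by_text[text.strip()].add(account_id)
    return {text: accounts
            for text, accounts in accounts_by_text.items()
            if len(accounts) >= min_accounts}

# Hypothetical example: one message shared word-for-word by several accounts.
sample = [("a1", "RT before they delete this!"),
          ("a2", "RT before they delete this!"),
          ("a3", "RT before they delete this!"),
          ("a4", "Lovely weather today")]
print(find_copy_paste_clusters(sample, min_accounts=3))
```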

Aggressive Political Agenda

When I’m seeking signs of coordinated inauthentic behavior, I always examine the posting history of a social media account. I check whether all its posts support a specific political agenda (usually in an aggressive and antagonistic way). If so, that’s another red flag.

Sure, regular people can also post aggressively in support of a political agenda. But it’s less likely that those posts will make up the whole of their posting history. A one-topic account is a key sign of coordinated inauthentic behavior. 
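
Here’s a minimal sketch of that last check, assuming an account’s recent posts are available as plain text: measure how much of the posting history touches a single topic. The keyword list is hypothetical, and a value near 1.0 is a red flag to investigate further, not proof of coordination.

```python
def topic_concentration(posts, topic_keywords):
    """Return the fraction of posts mentioning at least one topic keyword.
    A value near 1.0 indicates a one-topic account -- a red flag, not proof."""
    if not posts:
        return 0.0
    keywords = [keyword.lower() for keyword in topic_keywords]
    hits = sum(any(keyword in post.lower() for keyword in keywords)
               for post in posts)
    return hits / len(posts)

# Hypothetical posting history, purely for illustration.
history = ["The election is being stolen, wake up",
           "Voter fraud everywhere, share before they hide it",
           "Crooked politicians rigging the vote AGAIN"]
print(topic_concentration(history, ["election", "vote", "fraud"]))  # 1.0
```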

Summary

In this post we examined the origins of the term ‘coordinated inauthentic behavior’ and explored one of the key debates around the validity of its definition. Finally, we looked at three simple ways to spot coordinated inauthentic behavior on social media platforms.

  • First, I looked at identity signaling, where accounts project a strong sense of a certain identity (usually political) via profile hashtags, profile imagery, bio information, or posting history. 
  • Second, I discussed copy paste posting, where multiple accounts share something with the exact same accompanying text. This is often a sign of automated coordinated inauthentic behavior. 
  • Finally, I highlighted the significance of one-topic accounts that support a certain political agenda, usually in an aggressive way.

Coronavirus Conspiracy Theories, Tribalism And Public Health

During the pandemic, large crowds of Trump supporters took to the streets of US cities, demanding an end to coronavirus restrictions, such as lockdown and the wearing of masks. Britain saw similar issues, albeit on a smaller scale.

Why are some people so determined to ignore public health advice? Part of the answer may be found by examining political tribalism and coronavirus conspiracy theories.

In this post, I’ll explain how coronavirus conspiracy theories and disinformation leverage tribalism to influence people’s behaviour.

Divided societies, universal threat

When the pandemic first hit, some hoped that the shared experience of facing universal threat would bring warring political tribes together. But it seems the opposite is happening. This is partly driven by an organised and sustained campaign of disinformation and coronavirus conspiracy theories.

In the UK and US, government responses to the virus have been unlike those of many other countries. Portugal, Germany, New Zealand, Canada and South Korea have already managed to regain some level of control over its spread.

In contrast, both the UK and the US were slow to implement lockdown measures. Both gave their populations mixed messages about how to handle the pandemic. Both countries’ leaders have displayed a cavalier attitude towards the virus.

Political tribalism in the UK and the US is now affecting their populations’ response to the coronavirus crisis. This tribalism is a hangover from 2016, the same force that played a role in the election of Trump and the vote for Brexit – polarising the populations in the process.

Coronavirus conspiracy theories demonise groups

A sustained torrent of coronavirus disinformation has compounded these issues. In particular, numerous coronavirus conspiracy theories have eroded trust in public institutions among some segments of the population. Distrust of experts is nothing new. It’s been a central feature of tribal politics since 2016 and shows no sign of dissipating in this pandemic.

Common coronavirus conspiracy theories include:

  • 5G causing or spreading the virus
  • The virus being a hoax, or a deliberately engineered bioweapon
  • Elites (e.g. Bill Gates) using the virus as cover to enforce mandatory vaccinations

Tribalism means viewing the world as ‘us vs them’, with ‘us’ being superior and ‘them’ being threatening. This perspective is inherent in these coronavirus conspiracy theories.

Many revolve around the demonisation of a particular group (e.g. elites, the establishment, experts, the WHO, China, and so on). True believers view anyone who supports the demonised group as being part of it. And so the tribal divisions persist.

These coronavirus conspiracy theories cast doubt on the public health situation. They promote distrust of expert advice and official organisations. The result is shifts in population behaviour, e.g. people refusing to follow lockdown, wear masks or practise social distancing.

From Facebook to the streets

The situation has become particularly dangerous in the US, with its current protests. Here the role of social media comes under the spotlight.

Private Facebook groups have been key sites for inciting and organising these protests. Some groups are large, such as ‘Michiganders Against Excessive Quarantine’ or ‘Reopen Virginia’ (the latter with over 18,000 members).

Both groups are full of talk of coronavirus conspiracy theories, such as one example from the Michigan group (source: https://twitter.com/willsommer/status/1250838111992647680).

Another example comes from the ‘Reopen Virginia’ group, where a user calls for civil unrest while also demonising the outgroup (‘leftist Democrats’). The post has attracted significant engagement, both comments and likes (source: https://twitter.com/jaredlholt/status/1250842215435337728/photo/3).

These posts show how belief in tribal coronavirus conspiracy theories can lead to virus scepticism and denial. Such beliefs can also trigger people to take real-life protest action, which risks turning violent.

Furthermore, it’s not easy to know who is producing these comments. Do they reflect the views of genuine American citizens? Or are some of the comments being astroturfed by those who seek to create social unrest?

Coronavirus conspiracy theories are a problem for other social media platforms too. YouTube hosts thousands of videos discussing all kinds of conspiracy theories in great detail. The platform recently changed its policies in an attempt to crack down on coronavirus conspiracy theories and 5G content. But it’s likely too little, too late.

The trouble is, platform takedowns are viewed as a sign of elite censorship in the minds of people already suspicious of experts and official organisations. This adds even more fuel to the fire of coronavirus conspiracy theories.

Local groups are key targets

Private local Facebook groups are a prime target for influence operations. They have already been identified as key battle sites for the US 2020 election, where influence operatives aim to manipulate the political narratives in key swing states.

Targeting local Facebook groups is an effective way to do this. As well as activity such as voter suppression in these groups, influence operations can also compel populations to protest on the streets.

It’s difficult for researchers and analysts to study private Facebook groups in aggregate, as tools such as CrowdTangle don’t allow access to private groups.

These groups are hotspots for US 2020 manipulation activities. Facebook should monitor them carefully. Its moderators should look out not only for signs of voter suppression, but also for coordinated attempts to incite populations to violence.

We must take coronavirus conspiracy theories seriously

These times of heightened fear offer a prime opportunity for disinformation purveyors to influence the outcome of the US 2020 election.

When political tribalism is so entrenched, fact checking and counter disinformation messaging campaigns may be less effective on a large scale. Instead, they risk exacerbating people’s existing suspicions of the establishment and ‘elites’.

Coronavirus conspiracy theories are not trivial. They risk causing harm on a massive scale by encouraging populations to ignore public health advice and by instigating real-life violence.

It’s essential that social media companies take coronavirus conspiracy theories seriously, particularly within private groups. Whether or not they do so may end up as a key deciding factor of the US 2020 election. 

6 Things I’ve Learned About COVID-19 Disinformation

Disinformation thrives on chaos. A global pandemic is about as chaotic as it gets.

For those who seek to spread disinformation, COVID-19 presents a far grander opportunity than either the 2016 US election or the vote on Brexit. The upcoming 2020 US presidential election further fans the flames.

That’s why it’s important to stop and take stock of lessons learned from the front lines of disinformation tracking.

I’ve been studying cross-platform coronavirus narratives for the last month or so. Here are a few of the things I’ve found.

What I’ve learned about COVID-19 disinformation

1. Q is a key player in disinformation efforts

Qanon is a mega conspiracy narrative that encompasses a whole range of smaller ones. Its basic premise is that Donald Trump is in league with a shadowy figure called Q.

Together, Trump and Q fight against a group of elite paedophiles entrenched within the mainstream media and the Democratic Party.

Former presidential candidate Hillary Clinton and current candidate Joe Biden have both been major targets for Q’s accusations.

Every so often, Q releases tantalising nuggets of new information (called ‘Q drops’) for his followers to chew over.

These have sparked a whole ecosystem of pervasive social media content, from Twitter threads to entire YouTube channels.

Q and his followers have leveraged coronavirus disinformation extensively. Q-related themes and activity underpin many of the most widely spread coronavirus conspiracy theories.

Those include coronavirus being either a hoax or a bioweapon, 5G causing the virus, a supposed plan to enforce mandatory vaccinations, and the imminent arrival of military martial law.

2. Mainstream media is pushing disinformation narratives

Conservative media sources in the US, such as Fox News, play a significant role in promoting narratives that draw on conspiracies, including around coronavirus disinformation. They claim it’s ‘not a big deal’, or it’s ‘just like the flu’, or, ‘it’s all a big hoax’.

Although these stories may be less colourful than those of the average Q acolyte, they are still risky.

Coming from established media sources gives these narratives the social proof needed to make them more credible in the minds of their audiences.

What’s more, this scenario means less work for those who intend to manipulate public opinion around the coronavirus.

They no longer have to waste time crafting convincing content, but can simply engage with organic content that already exists. And that’s exactly what they’re doing, with a firm eye on the US 2020 election.

3. Coronavirus tribalism is prevalent

Pitting ‘us’ against ‘them’ is at the core of most disinformation, including conspiracy theories. The narratives can take many forms, but always come down to one group (the ingroup) facing off against a predefined opposing group (the outgroup).

For Qanon, it’s Q’s followers who are the ‘enlightened’ ingroup, joining forces with him and Trump to battle the predatory elites. In British politics, we see ‘patriotic’ supporters of Brexit setting themselves against ‘treacherous’ Remainers (and vice versa).

Tribalism even filters down to matters of life or death, i.e. the coronavirus. On social media, I’ve noticed a recurring adversarial narrative emerging around how best to respond to the pandemic.

One camp downplays the severity of the virus, claiming measures such as the lockdown are an overreaction. The other camp is strongly in favour of lockdown and promotes WHO advice to Stay At Home. Each camp supports their own and attacks the other, often in derogatory and aggressive ways.

It’s dangerous when people are already suspicious of ‘elites’ and experts. They have a tendency to dismiss guidance from governments and public health organisations, which can lead to the flouting of virus mitigation measures. Real world harms can result.

4. Virus fears are being monetized 

The chaos and fear of a global pandemic has spawned many opportunities for leveraging the attention economy.

As well as conspiracy theories, there are many examples of people making money via coronavirus disinformation, by tapping into people’s fear, boredom, and increased need for answers.

I’ve identified two main ways of doing this. The first is through creating highly clickable content about the virus.

This content may or may not be factual; it doesn’t matter to the creator, as long as it brings in the clicks.  Content is published on websites festooned with online ads, where each click brings extra ad dollars to the site owner.

The second way is to create content on topics such as ‘miracle cures’, which then feeds into attempts to sell products. Vitamin C is a prime example.

It’s a cynical exploitation of people’s fears about the virus and their need to regain a sense of control.

These ‘miracle cures’ are not scientifically proven. They provide a false sense of security, which may lead to individuals choosing not to self isolate and spreading the virus as a result.

5. Takedowns have a ‘backfire effect’ 

Takedowns are a necessary part of tackling the coronavirus disinformation problem.

However, denying bad actors freedom of reach can also strengthen the impetus behind conspiracy theories by feeding into an existing sense of elite suppression.

Here, conspiracy theorists view the platforms as part of the elite, keeping the ‘truth’ hidden from the people.

Conspiracy theorists are quick to react to takedowns, working them into their coronavirus disinformation narratives.

With 5G, a trend has sprung up of referring to it as ‘5gee’ or similar permutations. This is an attempt to avoid the keyword being picked up by moderators or analysts who are tracking it.

For conspiracy adherents, this sense of persecution further reinforces their existing worldview. It makes them more likely to cling to it. In this way, a ‘backfire effect’ has occurred. 

6. Platform responses are shifting 

Social media companies are frequently accused of not doing enough to reduce the flood of misleading content that overwhelms their platforms.

I don’t think they’re reluctant to act, but they have to balance enforcement with being seen as supportive of free speech.

Finding that balance can be challenging when addressing conspiracy theories, as opposed to purely false information.

Most conspiracy theories are spun up like candy floss around a small kernel of truth.

A typical post will build a whole story around how some real life event is of possible significance to the wider narrative arc.

The difference between opinion and actual false information is not always clear-cut. This creates murky territory for the platforms. 

But things have shifted after some conspiracy theories, such as the one about 5G causing coronavirus, triggered real life harms.

A recent video by notorious conspiracy theorist David Icke was pulled from YouTube just days after it was released, heralding a change in approach.

A growing amount of research indicates that coronavirus conspiracy theories form a central part of coordinated influence operations.  

We can no longer afford to overlook the role of conspiracy theories and disinformation in influence operations.