
Category: Influence Operations


Three Easy Ways to Spot Coordinated Inauthentic Behavior Online

After spending almost a year tracking coordinated inauthentic behavior on behalf of Facebook and Google, I’ve developed a good sense of how an inauthentic social media user looks and acts online. 

Each platform has its own nuances. But many universal patterns indicate the likelihood of coordinated activity. Here I’ll discuss three common indicators – and how you can effectively spot them across any social media platform. 

But first, let’s take a look at what coordinated inauthentic behavior actually means. We’ll also briefly explore some controversy around its definition.

What is coordinated inauthentic behavior?

Two years ago, Facebook coined the phrase ‘coordinated inauthentic behavior’ (known in the industry as CIB).

Facebook defines CIB as follows:  “When groups of pages or people work together to mislead others about who they are or what they’re doing.”

Facebook, like other platforms, is keen to highlight the ‘behavior’ side of the phrase. This helps to shield the platform from accusations of being biased against any particular political ideology.

People would be quick to make those accusations if Facebook simply focused on removing inauthentic content. It would raise the question of which content should get removed and which should stay. This would trigger wider concerns about freedom of speech and the First Amendment.  

The double standards controversy

Writing for Slate, Harvard researcher Evelyn Douek has expressed concern that Facebook’s definition of coordinated inauthentic behavior lacks clarity.

She argues that certain groups will consider certain kinds of CIB acceptable, whereas others will not. Douek draws on the example of the TikTok video that caused hundreds of young people to artificially inflate attendance numbers at a Donald Trump rally by reserving tickets en masse.

Douek contrasts that real-life scenario with a hypothetical example of Qanon supporters doing the same to Joe Biden. She highlights the risk of applying double standards to CIB, as well as to disinformation.

That’s a real concern, especially in deeply polarized times. Polarization is the key driving force behind this issue. We assume that ‘our’ side is doing good, while ‘their’ side is doing bad. That view influences how we judge the motives of coordinated inauthentic behavior. 

For the purpose of this post, we’ll use the official CIB definition. It’s still the standard that most social media platforms use. But it’s important to know that the term is not perfect, and has attracted controversy.

Is coordinated inauthentic behavior the same as misinformation or disinformation?

No, but both often play a role in CIB. For example, members of a Twitter botnet might work together to constantly pump out and amplify misleading tweets about a political figure.

Or groups of paid operatives might enter Facebook groups and astroturf the discussion about coronavirus by posting lots of comments about the dangers of vaccines. Astroturfing, i.e. creating the appearance of legitimate ‘grassroots’ consensus on certain topics, is a common CIB technique.

OK, I’ve answered some key questions about coordinated inauthentic behavior. Now let’s look at three ways to spot it.

What are some key indicators of coordinated inauthentic behavior?

Identity Signaling

The concept of identity is at the heart of many coordinated inauthentic behavior and disinformation efforts. CIB campaigns often play on existing social and political divisions within their target audience.

For example, they might astroturf a widespread sense of approval for a certain government policy, such as a tougher stance on immigration. Immigration is an emotive issue for many people, and has the potential to drive ingroup vs outgroup sentiments.

When examining accounts for signs of inauthenticity, I consider overt identity signals, especially political ones, to be a red flag. These could include national flags, divisive political hashtags (e.g. #MAGA (make America great again) or #FBPE (follow back pro-Europe)), or a bio stuffed with identity-promoting keywords like “Army vet, patriot, Trump supporter, family man, God lover”.

Taken together, those signs indicate that the profile primarily exists to promote a certain political identity – a common giveaway of astroturfing or coordinated inauthentic behavior.
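To make the heuristic concrete, here’s a minimal sketch of how an analyst might score a profile bio for overt identity signals. It’s illustrative only: the hashtags and keywords are simply the examples from this post, and the function name and scoring scheme are my own invented assumptions, not any platform’s method.

    import re

    # Hypothetical scorer for overt identity signals in a profile bio.
    # The hashtag and keyword lists are just the examples from this post,
    # not a validated lexicon.
    IDENTITY_HASHTAGS = {"#maga", "#fbpe"}
    IDENTITY_KEYWORDS = {"patriot", "army vet", "trump supporter", "family man", "god lover"}
    FLAGS = re.compile("[\U0001F1E6-\U0001F1FF]")  # regional indicator symbols

    def identity_signal_score(bio: str) -> int:
        """Count crude identity signals in a bio; higher means more signals."""
        text = bio.lower()
        score = sum(tag in text for tag in IDENTITY_HASHTAGS)
        score += sum(kw in text for kw in IDENTITY_KEYWORDS)
        score += len(FLAGS.findall(bio)) // 2  # two regional indicators form one flag emoji
        return score

    print(identity_signal_score("Army vet, patriot, Trump supporter 🇺🇸 #MAGA"))  # prints 5

A high score alone proves nothing; it’s one signal to weigh alongside the others below.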

Copy Paste Sharing

It’s common to find groups of accounts sharing links or posts accompanied by the exact same text (e.g. in a quoted tweet or a Facebook share). This isn’t normal behavior for an ‘organic’ social media user, so it’s a suspicious sign.

Copy paste sharing usually indicates a campaign designed to amplify a certain message. It is likely that Twitter accounts constantly tweeting the same messages in tandem are automated (i.e. bots).
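As a rough illustration, here’s a hypothetical sketch of how you might surface copy paste sharing in a batch of posts. The ‘posts’ list stands in for data pulled from a platform API, and the three-account threshold is an arbitrary assumption to tune against real data.

    from collections import defaultdict

    # Hypothetical sketch: group posts by lightly normalised text and flag
    # any message shared verbatim by several different accounts.
    posts = [
        {"account": "user_a", "text": "Wow, this changes everything! Read now"},
        {"account": "user_b", "text": "Wow, this changes everything! Read now"},
        {"account": "user_c", "text": "Interesting piece, worth a look"},
        {"account": "user_d", "text": "Wow, this changes everything! Read now"},
    ]

    groups = defaultdict(set)
    for post in posts:
        key = " ".join(post["text"].split()).lower()  # collapse whitespace, ignore case
        groups[key].add(post["account"])

    for text, accounts in groups.items():
        if len(accounts) >= 3:  # arbitrary threshold
            print(f"{len(accounts)} accounts shared identical text: {text!r}")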

Aggressive Political Agenda

When I’m seeking signs of coordinated inauthentic behavior, I always examine the posting history of a social media account. I check whether all its posts support a specific political agenda (usually in an aggressive and antagonistic way). If so, that’s another red flag.

Sure, regular people can also post aggressively in support of a political agenda. But it’s less likely that those posts will make up the whole of their posting history. A one-topic account is a key sign of coordinated inauthentic behavior. 
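Here’s a minimal sketch of that one-topic check, assuming you already have an account’s posting history as plain text. The agenda keywords are invented for illustration; serious analysis would use proper topic modelling rather than keyword matching.

    # Hypothetical sketch: estimate how 'one-topic' an account is as the share
    # of its posts mentioning any keyword from a single agenda. The keyword set
    # is invented for illustration.
    AGENDA_KEYWORDS = {"election", "rigged", "deep state", "traitor"}

    def one_topic_ratio(posts):
        """Fraction of posts mentioning at least one agenda keyword."""
        if not posts:
            return 0.0
        hits = sum(any(kw in p.lower() for kw in AGENDA_KEYWORDS) for p in posts)
        return hits / len(posts)

    history = [
        "The election was RIGGED and everyone knows it",
        "Deep state traitors are everywhere",
        "Another rigged poll from the fake media",
    ]
    print(f"{one_topic_ratio(history):.0%} of posts push the same agenda")  # prints 100%

An account scoring near 100% across hundreds of posts deserves a much closer look.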

Summary

In this post we examined the origins of the term ‘coordinated inauthentic behavior’. Also, we explored one of the key debates around the validity of its definition. Finally, we looked at three simple ways to spot coordinated inauthentic behavior on social media platforms.

  • First, I looked at identity signaling, where accounts project a strong sense of a certain identity (usually political) via profile hashtags, profile imagery, bio information, or posting history. 
  • Second, I discussed copy paste posting, where multiple accounts share something with the exact same accompanying text. This is often a sign of automated coordinated inauthentic behavior. 
  • Finally, I highlighted the significance of one-topic accounts that support a certain political agenda, usually in an aggressive way.

What’s the Difference Between Disinformation and Misinformation?

What’s the difference between disinformation and misinformation?

(I get asked this question a lot, so I thought it was time to write about it).

The difference between disinformation and misinformation lies in the presence of intent.  

The Difference Between Disinformation and Misinformation

Let’s look at a couple of examples to understand the difference between disinformation and misinformation.

Misinformation encompasses a wide range of misleading content, from rumour to satire to human error. The crucial point is that misinformation involves no intent to deceive.

Disinformation, on the other hand, has a more specific and sinister meaning. Disinformation is created with an intention to deceive.

For example, a disinformation campaign could involve a doctored video. It could consist of a political candidate’s gaffes, spliced together to imply that he has dementia.

Malinformation and Fake News

We also have malinformation. This is information based on reality, used maliciously to harm its target.

The target could be an individual, a country or an organisation. The 2016 leak of emails from Hillary Clinton’s campaign is one such example of malinformation.

Finally, the one everyone knows best: fake news.

Donald Trump popularised the term in 2016 (although Hillary Clinton was apparently the first to use it).

However, disinformation researchers prefer not to use it, as it is both politicised and imprecise. ‘Fake news’ unhelpfully conflates the three primary types of misleading information already mentioned.  

New digital technology means that misleading online content shifts and evolves rapidly. There are other subcategories of dubious online content worth knowing about. Here’s a rundown.

Other Misleading Information

Satire

Satire has existed for thousands of years. It highlights failings in society and politics using humour, irony, sarcasm or mockery.

Shakespeare often used satire in his work. For example, in Hamlet, he pointed out that royalty is not always fit to run a country properly. Now in the internet age, satirical news websites have become popular, such as The Onion and NewsThump.

But it’s not always clear where satire ends and disinformation begins. It’s easy for political news websites to spread divisive narratives under the guise of satire. The limited attention spans of most online news audiences make it even easier.

In fact, many social media users will share partisan political content that triggers their tribal instincts. They may not notice that it comes from a website that has a ‘satire’ disclaimer. 

Manipulated Images  

Images have a more immediate impact than text, making them an effective tool in disinformation. Manipulation of images is easy with freely available online tools, or Photoshop, and can look very convincing.

Genuine images can be used in misleading contexts, such as during the Westminster Bridge terrorist attack of 2017.

In this case, a Twitter account later attributed to the Kremlin shared an image of a woman wearing a hijab. The tweet included a hostile caption claiming that the woman was ignoring injured victims.

The tweet was designed to trigger anti-Muslim sentiment, and alt-right influencers shared it widely. It garnered thousands of retweets and likes. But although the image was genuine, the context was not.

Deepfakes

As a new form of disinformation, deepfakes have attracted a lot of hype in the last couple of years. They are AI-generated synthetic media in which one person’s face and/or voice replaces that of another.

A deepfake can make it look like a person is saying something they’re not. This has many obvious use cases for disinformation. So far, porn has been the main area where deepfakes are being used. But in a handful of cases they’ve played a role in disinformation efforts.  

We may have overstated the immediate risk of deepfakes. But they do have potential to upend the information environment. My biggest concern is that deepfakes would destroy the notion of absolute truth.

Once upon a time a taped voice recording would hold up in court (e.g. Watergate). In later years, CCTV footage became the absolute truth. But a world in which deepfakes are prevalent would no longer have an absolute truth. It would cast doubt on every possible form of recorded evidence.

Shallowfakes and Cheapfakes

In addition to deepfakes, we need to consider shallowfakes, also known as ‘cheapfakes’. People create these doctored videos without the help of fancy AI tools, using simple video editing software.

Shallowfakes are far more common than their AI-generated cousins. And social media platforms seem to overlook them. Facebook, for example, only bans doctored videos made with AI, i.e. deepfakes.

In February 2020, shallowfakes caused quite a stir. A video edited from genuine footage circulated on social media, making it appear that Nancy Pelosi tore up a copy of Donald Trump’s State of the Union speech while he honoured guests; in reality she tore it up only after the address had ended.

Memes

The word ‘meme’ has become synonymous with random humorous cultural images superimposed with chunky white text. Below, a small selection of my recent COVID-19-related favourites.

[Images: a ‘distracted boyfriend’ anti-vaxxer disinformation meme, a Hannibal Lecter face-mask COVID-19 meme, and a meme about failed plans in 2020.]

In fact, the word ‘meme’ can refer to any piece of cultural content (video, image, song, article, etc) that social media users spread virally. (That typical chunky text/image combo that we all call a meme is actually known as an ‘image macro’).

Meme creators often use the image macro format to convey partisan political sentiments. Both sides of the political spectrum shared inflammatory memes during the 2016 US presidential campaign.

Alt-right supporters also used the same format to spread some highly offensive views, such as racism and anti-semitism in ‘Pepe the Frog’ memes.

Image macro political memes are generally hyper-partisan in nature and play a role in perpetuating conflict between opposing groups (e.g. Democrats and Republicans).

[Image: an image macro meme spreading disinformation about Hillary Clinton.]

Memes are totally devoid of any nuanced explanation. The viewer gets a quick hit of emotion that has a triggering effect. This taps into an oversimplified outrage that has become a core feature of today’s political life online. 

AI-Generated Voiceovers 

These are a bit weird and random. During a recent project for YouTube, I discovered some strange videos spreading false information about COVID-19.

The voiceover didn’t sound human at all. It was robotic and monotone, as if reading from a script. I don’t know the official name for these videos, if they have one at all, but they’re perhaps something to keep an eye on.

From Disinformation to Misinformation (and back again?)

In closing, I’ve been thinking about this question: Does content shift from disinformation to misinformation as it travels across the internet? 

Malicious intent defines disinformation. Could a piece of content shift between definitions according to the intention of the most recent individual who shared it?  

For example, a person shares a narrative in their local Facebook group, claiming COVID-19 is curable with megadoses of bleach. It’s been debunked, of course, but (somehow) the person doesn’t know.

They innocently think they’re helping their network by passing on ‘valuable’ health information that might help cure the virus. They don’t intend to deceive. So shouldn’t we class it as misinformation?

Let’s say that same piece of content originated in a troll farm. Its creators intended it to deceive populations and compromise public health efforts. It started life as disinformation.

We could say the same for conspiracy theories. These are often spread by ‘true believers’ – genuinely invested in their mission to enlighten the ‘sheeple’ and save the world.

Are they being malicious if they believe it’s all true? Does that still count as disinformation? It would be easier to make this distinction if we could reliably trace the content back to its source. But that’s not always easy to do. 

Those who create disinformation know how to take advantage of natural human biases and triggers. In many cases, it’s enough to simply ‘seed’ harmful disinformation into the social media stream. Ordinary social media users will then do the heavy lifting. Therein lies much of the danger. 

4 Things I’ve Learned From Analysing Russia-Aligned COVID-19 Coverage

Much social unrest has emerged amid COVID-19, such as anti-lockdown protests, attacks on 5G masts, and violent reactions when asked to wear masks. As I write this, a murky far-right group called ‘UK Freedom Movement’ is organising a new spate of anti-lockdown protests around the UK.

This month I’ve been reviewing Russia-aligned news sites. I’ve been looking for key narratives on COVID-19 and the US election. I’ve examined two types of sites: those directly linked to the Russian state, and those with a similar political stance. Many sites share the same core group of authors.

Here are some of my findings, related to the current discussions on social unrest, conspiracy theories and the infodemic.

COVID-19 narratives are consistent across websites

Topics covered on these sites reflect COVID-19 conspiracy narratives found on social media since the pandemic began. Here are three prime examples.

Bill Gates the ‘criminal globalist’

The Microsoft founder features regularly, from the Kremlin-funded news outlet InfoRos to the Russia-aligned news site Fort Russ. Narratives unfold along similar lines.

They claim that Gates is the ‘criminal globalist’ ringleader of a cabal using coronavirus as a smokescreen to impose mandatory tracking and mandatory vaccines.

Justifications for singling out Gates usually cite his prescient 2015 talk, in which he highlighted the global risk of a pandemic, or the Gates Foundation’s funding of the WHO.

Herd immunity vs lockdown

Another key narrative centres on the benefits of herd immunity, often juxtaposed against the negatives of lockdown. Sweden is the poster child for herd immunity. Lockdown is presented as a corrupt government-led attempt to remove people’s basic freedoms.

It’s not hard to imagine how this framing could trigger people who value freedom above all else – and cause events like the anti-lockdown protests that have been cropping up across the US and UK.

The smouldering culture war of Trump and Brexit has extended into new battle lines of ‘lockdown vs herd immunity’. As a result, pandemic control efforts are at risk.

Scapegoating China

China is presented as an innocent player in the pandemic. The US is accused of waging information warfare against China in order to pin the blame for the coronavirus on it.

In some articles, the authors claim that the pandemic could create a ‘New Cold War’ between the US and China, with severe consequences for the global economy.

Other sites take it even further, claiming that COVID-19 could spark a nuclear war between the US and a newly formed Russia/China alliance.

Narratives claim that COVID-19 will reshape the world 

Another popular theme is how the outcome of the 2020 US election, combined with the effects of coronavirus, will cause the US to lose its hegemony. The result will be a shift towards multilateralism.

Some sites claim coronavirus will cause Western governments to “face a legitimacy crisis like never before”, eventually causing so much chaos that it will reshape the global order.

To reinforce this point they highlight how the US has failed to protect its people from coronavirus, so it can no longer be called a superpower. Multilateralism is presented as inevitable, due to the unprecedented crisis the world now faces.

Anti-imperialism has been a key feature of pro-Russian media for decades. It overlaps with certain far-left lines of thinking, especially among those who critique Western military actions around the world.

They don’t support Trump

“Voters now must choose between Donald Trump, an unstable, incompetent president whose blatant narcissism has been on full display as the nation suffers from coronavirus, and the former vice-president who will diligently represent the rich and govern for their good above all others.”

American Herald Tribune

We often assume that Russia-aligned media is pro-Trump. In fact, many of these news sources criticise Trump as much as Biden. Criticisms of Trump include poor handling of the pandemic, and ‘imperialist shenanigans’ in foreign policy.

Framing of Biden often paints him as sleazy, citing the recent Tara Reade case as evidence. Some articles suggest he may have dementia. Such framing of both candidates as hopeless choices could be a subtle attempt at voter suppression. 

They frame themselves as ‘independent’ thinkers

Most of these websites present themselves as bastions of independent thought. They encourage readers to go beyond the mainstream and discover ‘new’ perspectives.

It reflects a common refrain among social media conspiracy theorists, who often talk about the need to “do your own research”. Often, that translates as “using Google or YouTube to find content that reinforces one’s existing views”.

Pro-Russia news sites tap into this way of thinking. They use it as a defining aspect of their reporting. It’s a message likely to resonate with the exact kind of person who questions everything.

What’s the link to real life unrest? 

Looking at these websites in aggregate, it’s easy to see how their typical narratives link to social unrest during the pandemic.

I’ve noticed the same themes popping up over and over on social media. Ordinary citizens share them in mainstream Facebook groups (e.g. local news and discussion groups).

These ideas have become rooted in public consciousness. They drive a growing sense of distrust in Western governments, particularly in the UK and US, where populations are already polarised. Both countries have handled the pandemic badly, so it’s easier to create scepticism among a fearful population.

If we were to survey the beliefs of anti-lockdown protesters, 5G mast attackers, and people who react violently to mask requirements, I’d bet we’d find echoes of the same narratives found across these ‘alternative’ news websites, many of which are either funded by the Russian government or publish work from the same core group of authors.

Coronavirus Conspiracy Theories, Tribalism And Public Health

During the pandemic, large crowds of Trump supporters took to the streets of US cities, demanding an end to coronavirus restrictions, such as lockdown and the wearing of masks. Britain saw similar issues, albeit on a smaller scale.

Why are some people so determined to ignore public health advice? Part of the answer may be found by examining political tribalism and coronavirus conspiracy theories.

In this post, I’ll explain how coronavirus conspiracy theories and disinformation leverage tribalism to influence people’s behaviour.

Divided societies, universal threat

When the pandemic first hit, some hoped that the shared experience of facing universal threat would bring warring political tribes together. But it seems the opposite is happening. This is partly driven by an organised and sustained campaign of disinformation and coronavirus conspiracy theories.

In the UK and US, government responses to the virus have been unlike those of many other countries. Portugal, Germany, New Zealand, Canada and South Korea have already managed to regain some level of control over the virus’s spread.

In contrast, both the UK and the US were slow to implement lockdown measures. Both gave their populations mixed messages about how to handle the pandemic. Both countries’ leaders have displayed a cavalier attitude towards the virus.

Political tribalism in the UK and the US is now affecting their populations’ response to the coronavirus crisis. This tribalism is a hangover from 2016, the same force that played a role in the election of Trump and the vote for Brexit – polarising the populations in the process.

Coronavirus conspiracy theories demonise groups

A sustained torrent of coronavirus disinformation has compounded these issues. In particular, numerous coronavirus conspiracy theories have eroded trust in public institutions among some segments of the population. Distrust of experts is nothing new. It’s been a central feature of tribal politics since 2016 and shows no sign of dissipating in this pandemic.

Common coronavirus conspiracy theories include claims that the virus is a hoax or a bioweapon, that 5G masts cause or spread it, and that elites such as Bill Gates plan to use mandatory vaccination as a vehicle for tracking and control.

Tribalism means viewing the world as ‘us vs them’, with ‘us’ being superior and ‘them’ being threatening. This perspective is inherent in these coronavirus conspiracy theories.

Many revolve around the demonisation of a particular group (e.g. elites, the establishment, experts, the WHO, China, and so on). True believers view anyone who supports the demonised group as being part of it. And so the tribal divisions persist.

These coronavirus conspiracy theories cast doubt on the public health situation. They promote distrust of expert advice and official organisations. The result is shifts in population behaviour, e.g. people refusing to follow lockdown, wear masks or practise social distancing.

From Facebook to the streets

The situation has become particularly dangerous in the US, with its current protests. Here the role of social media comes under the spotlight.

Private Facebook groups have been key sites for inciting and organising these protests. Some groups are large, such as ‘Michiganders Against Excessive Quarantine’ or ‘Reopen Virginia’ (the latter with over 18,000 members).

Both groups are full of talk of coronavirus conspiracy theories, such as the example below from the Michigan group.

Source: https://twitter.com/willsommer/status/1250838111992647680

Below is an example comment from the ‘Reopen Virginia’ group. This user is calling for civil unrest, while also demonising the outgroup (‘leftist Democrats’). The post has attracted significant engagement, both comments and likes.

Source: https://twitter.com/jaredlholt/status/1250842215435337728/photo/3

These posts show how belief in tribal coronavirus conspiracy theories can lead to virus scepticism and denial. It can also trigger people to take real-life protest action, which risks turning violent.

Furthermore, it’s not easy to know who is producing these comments. Do they reflect the views of genuine American citizens? Or are some of the comments being astroturfed by those who seek to create social unrest?

Coronavirus conspiracy theories are a problem for other social media platforms too. YouTube hosts thousands of videos discussing all kinds of conspiracy theories in great detail. The platform recently changed its policies in an attempt to crack down on coronavirus conspiracy theories and 5G content. But it’s likely too little, too late.

The trouble is, platform takedowns are viewed as a sign of elite censorship in the minds of people already suspicious of experts and official organisations. This adds even more fuel to the fire of coronavirus conspiracy theories.

Local groups are key targets

Private local Facebook groups are a prime target for influence operations. They have already been identified as key battle sites for the US 2020 election, where influence operatives aim to manipulate the political narratives in key swing states.

Targeting local Facebook groups is an effective way to do this. As well as activity such as voter suppression in these groups, influence operations can also compel populations to protest on the streets.

It’s difficult for researchers and analysts to study private Facebook groups in aggregate, as tools such as CrowdTangle don’t allow access to private groups.

These groups are hotspots for US 2020 manipulation activities. Facebook should monitor them carefully. Its moderators should look out not only for signs of voter suppression, but also for coordinated attempts to incite populations to violence.

We must take coronavirus conspiracy theories seriously

These times of heightened fear offer a prime opportunity for disinformation purveyors to influence the outcome of the 2020 US election.

When political tribalism is so entrenched, fact-checking and counter-disinformation messaging campaigns may be less effective on a large scale. Instead, they risk exacerbating people’s existing suspicions of the establishment and ‘elites’.

Coronavirus conspiracy theories are not trivial. They risk causing harm on a massive scale, by encouraging populations to ignore public health advice and instigate real life violence.

It’s essential that social media companies take coronavirus conspiracy theories seriously, particularly within private groups. Whether or not they do so may end up as a key deciding factor of the US 2020 election. 


6 Things I’ve Learned About COVID-19 Disinformation

Disinformation thrives on chaos. A global pandemic is about as chaotic as it gets.

For those who seek to spread disinformation, COVID-19 presents a far grander opportunity than either the 2016 US election or the vote on Brexit. The upcoming 2020 US presidential election further fans the flames.

That’s why it’s important to stop and take stock of lessons learned from the front lines of disinformation tracking.

I’ve been studying cross-platform coronavirus narratives for the last month or so. Here are a few of the things I’ve found.

What I’ve learned about COVID-19 disinformation


1. Q is a key player in disinformation efforts

Qanon is a mega conspiracy narrative that encompasses a whole range of smaller ones. Its basic premise is that Donald Trump is in league with a shadowy figure called Q.

Together, Trump and Q fight against a group of elite paedophiles entrenched within the mainstream media and the Democratic Party.

Former presidential candidate Hillary Clinton and current candidate Joe Biden have both been major targets of Q’s accusations.

Every so often, Q releases tantalising nuggets of new information (called ‘Q drops’) for his followers to chew over.

These have sparked a whole ecosystem of pervasive social media content, from Twitter threads to entire YouTube channels.

Q and his followers have leveraged coronavirus disinformation extensively. Q-related themes and activity underpin many of the most widely spread corona conspiracies.

Those include coronavirus being either a hoax or a bioweapon, 5G causing the virus, a supposed plan to enforce mandatory vaccinations, and the imminent arrival of military martial law.

2. Mainstream media is pushing disinformation narratives

Conservative media sources in the US, such as Fox News, play a significant role in promoting narratives that draw on conspiracies, including coronavirus disinformation. These outlets claim the virus is ‘not a big deal’, or ‘just like the flu’, or ‘all a big hoax’.

Although these stories may be less colourful than those of the average Q acolyte, they are still risky.

Coming from established media sources gives these narratives the social proof they need to appear credible in the minds of their audiences.

What’s more, this scenario means less work for those who intend to manipulate public opinion around the coronavirus.

They no longer have to waste time crafting convincing content, but can simply engage with organic content that already exists. And that’s exactly what they’re doing, with a firm eye on the US 2020 election.

3. Coronavirus tribalism is prevalent

Pitting ‘us’ against ‘them’ is at the core of most disinformation, including conspiracy theories. The narratives can take many forms, but always come down to one group (the ingroup) facing off against a predefined opposing group (the outgroup).

For Qanon, it’s Q’s followers who are the ‘enlightened’ ingroup, joining forces with him and Trump to battle the predatory elites. In British politics, we see ‘patriotic’ supporters of Brexit setting themselves against ‘treacherous’ Remainers (and vice versa).

Tribalism even filters down to matters of life or death, i.e. the coronavirus. On social media, I’ve noticed a recurring adversarial narrative emerging around how best to respond to the pandemic.

One camp downplays the severity of the virus, claiming measures such as the lockdown are an overreaction. The other camp is strongly in favour of lockdown and promotes WHO advice to Stay At Home. Each camp supports their own and attacks the other, often in derogatory and aggressive ways.

It’s dangerous when people are already suspicious of ‘elites’ and experts. They have a tendency to dismiss guidance from governments and public health organisations, which can lead to the flouting of virus mitigation measures. Real world harms can result.

4. Virus fears are being monetised 

The chaos and fear of a global pandemic has spawned many opportunities for leveraging the attention economy.

As well as conspiracy theories, there are many examples of people making money via coronavirus disinformation, by tapping into people’s fear, boredom, and increased need for answers.

I’ve identified two main ways of doing this. The first is through creating highly clickable content about the virus.

This content may or may not be factual; it doesn’t matter to the creator, as long as it brings in the clicks. The content is published on websites festooned with online ads, where each click brings extra ad dollars to the site owner.

The second way is to create content on topics such as ‘miracle cures’, which then feeds into attempts to sell products. Vitamin C is a prime example.

It’s a cynical exploitation of people’s fears about the virus and their need to regain a sense of control.

These ‘miracle cures’ are not scientifically proven. They provide a false sense of security, which may lead to individuals choosing not to self isolate and spreading the virus as a result.

5. Takedowns have a ‘backfire effect’ 

Takedowns are a necessary part of tackling the coronavirus disinformation problem.

However, denying bad actors freedom of reach can also strengthen the impetus behind conspiracy theories by feeding into an existing sense of elite suppression.

Here, conspiracy theorists view the platforms as part of the elite, keeping the ‘truth’ hidden from the people.

Conspiracy theorists are quick to react to takedowns, working them into their coronavirus disinformation narratives.

With 5G, a trend has sprung up of referring to it as ‘5gee’ or similar permutations. This is an attempt to avoid the keyword being picked up by moderators or analysts who are tracking it.
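To see why such permutations defeat naive filters, here’s a hypothetical sketch of a slightly looser pattern that catches common ‘5G’ obfuscations. It illustrates the cat-and-mouse dynamic only; it is not how any platform’s moderation actually works.

    import re

    # Hypothetical sketch: match '5G' plus simple evasions like '5gee',
    # '5 gee' or '5-g'. Determined actors will still invent new spellings.
    OBFUSCATED_5G = re.compile(r"\b5\s*[-._]?\s*g(?:ee)?\b", re.IGNORECASE)

    for text in ["5G causes it!", "wake up, 5gee towers", "the 5 gee agenda", "g5 nonsense"]:
        print(text, "->", bool(OBFUSCATED_5G.search(text)))
    # The first three match; 'g5 nonsense' does not.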

For conspiracy adherents, this sense of persecution further reinforces their existing worldview. It makes them more likely to cling to it. In this way, a ‘backfire effect’ has occurred. 

6. Platform responses are shifting 

Social media companies are frequently accused of not doing enough to reduce the flood of misleading content that overwhelms their platforms.

I don’t think they’re reluctant to act; rather, they have to balance any crackdown against being seen to support free speech.

Finding that balance can be challenging when addressing conspiracy theories, as opposed to purely false information.

Most conspiracy theories are spun up like candy floss around a small kernel of truth.

A typical post will build a whole story around how some real life event is of possible significance to the wider narrative arc.

The difference between opinion and actual false information is not always clear-cut. This creates murky territory for the platforms. 

But things have shifted after some conspiracy theories, such as the one about 5G causing coronavirus, triggered real life harms.

A recent video by notorious conspiracy theorist David Icke was pulled from YouTube just days after it was released, heralding a change in approach.

A growing amount of research indicates that coronavirus conspiracy theories form a central part of coordinated influence operations.  

We can no longer afford to overlook the role of conspiracy theories and disinformation in influence operations.