4 Things I’ve Learned From Analysing Russia-Aligned COVID-19 Coverage

Much social unrest has emerged amid COVID-19, such as anti-lockdown protests, attacks on 5G masts, and violent reactions when asked to wear masks. As I write this, a murky far-right group called ‘UK Freedom Movement’ is organising a new spate of anti-lockdown protests around the UK.

This month I’ve been reviewing Russia-aligned news sites. I’ve been looking for key narratives on COVID-19 and the US election. I’ve examined two types of sites: those directly linked to the Russian state, and those with a similar political stance. Many sites share the same core group of authors.

Here are some of my findings, related to the current discussions on social unrest, conspiracy theories and the infodemic.

COVID-19 narratives are consistent across websites

Topics covered on these sites reflect COVID-19 conspiracy narratives found on social media since the pandemic began. Here are three prime examples.

Bill Gates the ‘criminal globalist’

The Microsoft co-founder features regularly, from the Kremlin-funded news outlet InfoRos to the Russia-aligned news site Fort Russ. Narratives unfold along similar lines.

They claim that Gates is the ‘criminal globalist’ ringleader of a cabal using coronavirus as a smokescreen to impose mandatory tracking and mandatory vaccines.

Justifications for singling out Gates usually cite his prescient 2015 talk, in which he highlighted the global risk of a pandemic, or the Gates Foundation’s funding of the WHO.

Herd immunity vs lockdown

Another key narrative centres on the benefits of herd immunity, often juxtaposed against the negatives of lockdown. Sweden is the poster child for herd immunity. Lockdown is presented as a corrupt government-led attempt to remove people’s basic freedoms.

It’s not hard to imagine how this framing could trigger people who value freedom above all else – and cause events like the anti-lockdown protests that have been cropping up across the US and UK.

The smouldering culture war of Trump and Brexit has extended into new battle lines of ‘lockdown vs herd immunity’. As a result, pandemic control efforts are at risk.

Scapegoating China

China is presented as an innocent player in the pandemic. The US is blamed for targeting China with information warfare in order to blame it for the coronavirus.

In some articles, the authors claim that the pandemic could create a ‘New Cold War’ between the US and China, with severe consequences for the global economy.

Other sites take it even further, claiming that COVID-19 could spark a nuclear war between the US and a newly formed Russia/China alliance.

Narratives claim that COVID-19 will reshape the world 

Another popular theme is how the outcome of the 2020 US election, combined with the effects of coronavirus, will cause the US to lose hegemony. The result will be a shift towards multilateralism.

Some sites claim coronavirus will cause Western governments to “face a legitimacy crisis like never before”, eventually causing so much chaos that it will reshape the global order.

To reinforce this point they highlight how the US has failed to protect its people from coronavirus, so it can no longer be called a superpower. Multilateralism is presented as inevitable, due to the unprecedented crisis the world now faces.

Anti-imperialism has been a key feature of pro-Russian media for decades. It overlaps with certain far-left lines of thinking, especially among those who critique Western military actions around the world.

They don’t support Trump

“Voters now must choose between Donald Trump, an unstable, incompetent president whose blatant narcissism has been on full display as the nation suffers from coronavirus, and the former vice-president who will diligently represent the rich and govern for their good above all others.”

American Herald Tribune

We often assume that Russia-aligned media is pro-Trump. In fact, many of these news sources criticise Trump as much as Biden. Criticisms of Trump include poor handling of the pandemic, and ‘imperialist shenanigans’ in foreign policy.

Framing of Biden often paints him as sleazy, citing the recent Tara Reade case as evidence. Some articles suggest he may have dementia. Such framing of both candidates as hopeless choices could be a subtle attempt at voter suppression. 

They frame themselves as ‘independent’ thinkers

Most of these websites present themselves as bastions of independent thought. They encourage readers to go beyond the mainstream and discover ‘new’ perspectives.

This reflects a common refrain among social media conspiracy theorists, who often talk about the need to “do your own research”. Often, that translates as “using Google or YouTube to find content that reinforces one’s existing views”.

Pro-Russia news sites tap into this way of thinking. They use it as a defining aspect of their reporting. It’s a message likely to resonate with the exact kind of person who questions everything.

What’s the link to real-life unrest?

Looking at these websites in aggregate, it’s easy to see how their typical narratives link to social unrest during the pandemic.

I’ve noticed the same themes popping up over and over on social media. Ordinary citizens share them in mainstream Facebook groups (e.g. local news and discussion groups).

These ideas have become rooted in public consciousness. They drive a growing sense of distrust in Western governments, particularly in the UK and US, where populations are already polarised. Both countries have handled the pandemic badly, so it’s easier to create scepticism among a fearful population.

If we were to survey the beliefs of anti-lockdown protesters, 5G mast attackers, and those reacting violently to mask requests, I bet we’d find echoes of the same narratives found across these ‘alternative’ news websites, many of which are either funded by the Russian government or publish work from the same core group of authors.

6 Things I’ve Learned From Tracking Coronavirus Disinformation

Disinformation thrives on chaos, and a global pandemic is about as chaotic as it gets. For those who seek to disinform, the coronavirus presents a far grander opportunity than either the 2016 US election or the vote on Brexit. The upcoming 2020 US presidential election further fans the flames.

With that in mind, it’s important to regularly stop and take stock of lessons learned from the front lines of disinformation tracking. I’ve been studying cross-platform coronavirus narratives for the last month or so. Here are a few of the things I’ve found.

Main themes in coronavirus disinformation

1. Q is a key player in coronavirus disinformation

Qanon is a mega-conspiracy narrative that encompasses a whole range of smaller ones. The basic premise of Qanon has Donald Trump in league with a shadowy figure called Q. Together, Trump and Q are fighting against a group of elite paedophiles entrenched within the mainstream media and the Democratic Party.

Previous presidential candidate Hillary Clinton and current one Joe Biden have both been major targets for Q’s accusations. Every so often, Q releases tantalising nuggets of new information (called ‘Q drops’) for his followers to chew over. These have sparked a whole ecosystem of pervasive social media content, from Twitter threads to entire YouTube channels.

Q and his followers have leveraged coronavirus disinformation extensively. Q-related themes and activity underpin many of the most widely spread corona conspiracies, including claims that the virus is a hoax or a bioweapon, that 5G causes it, that mandatory vaccinations are planned, and that military martial law is imminent.

2. Mainstream media is pushing conspiracy narratives

Conservative media sources in the US, such as Fox News, play a significant role in promoting conspiracy-laden narratives about the coronavirus. They claim it’s ‘not a big deal’, or ‘it’s just like the flu’, or ‘it’s all a big hoax’.

Although these stories may be less colourful than those of the average Q acolyte, they are still risky. Provenance in established media sources provides the necessary social proof to make the narratives more credible in the minds of their audiences.

What’s more, this scenario means less work for those who intend to manipulate public opinion around the coronavirus. They no longer have to waste time crafting convincing content, but can simply engage with organic content that already exists. And that’s exactly what they’re doing, with a firm eye on the US 2020 election.

3. Coronavirus tribalism is prevalent

Pitting ‘us’ against ‘them’ is at the core of most disinformation, including conspiracy theories. The narratives can take many forms, but always come down to one group (the ingroup) facing off against a predefined opposing group (the outgroup).

For Qanon, it’s Q’s followers who are the ‘enlightened’ ingroup, joining forces with him and Trump to battle the predatory elites. In British politics, we see ‘patriotic’ supporters of Brexit setting themselves against ‘treacherous’ Remainers (and vice versa).

Tribalism even filters down to matters of life or death, i.e. the coronavirus. On social media, I’ve noticed a recurring adversarial narrative emerging around how best to respond to the pandemic.

One camp downplays the severity of the virus, claiming measures such as the lockdown are an overreaction, while the other camp is strongly in favour of lockdown and promotes WHO advice to Stay At Home. Each camp supports their own and attacks the other, often in derogatory and aggressive ways.

When people are already suspicious of ‘elites’ and experts, there’s a real tendency to dismiss guidance from governments and public health organisations, which can lead to the flouting of virus mitigation measures. Real world harms can result.

4. Virus fears are being monetised 

The chaos and fear of a global pandemic have spawned many opportunities for leveraging the attention economy. As well as conspiracy theories, there are many examples of people making money via coronavirus disinformation, by tapping into people’s fear, boredom, and increased need for answers.

I’ve identified two main ways of doing this. The first is through creating highly clickable content about the virus. This content may or may not be factual; it doesn’t matter to the creator, as long as it brings in the clicks.  The content is published on websites festooned with online ads, where each click brings extra ad dollars to the site owner.

The second way is to create content on topics such as ‘miracle cures’, which then feeds into attempts to sell products. Vitamin C is a prime example. It’s a cynical exploitation of people’s fearfulness about the virus and their need to somehow regain a sense of control.

These ‘miracle cures’ are not scientifically proven. They provide a false sense of security, which may lead to individuals choosing not to self-isolate and spreading the virus as a result.

5. Takedowns have a ‘backfire effect’ 

Takedowns are a necessary part of tackling the coronavirus disinformation problem. However, denying bad actors freedom of reach can also strengthen the impetus behind conspiracy theories by feeding into an existing sense of elite suppression. Here, the platforms are viewed as part of the elite, working together to keep the ‘truth’ hidden from the people.

Conspiracy theorists are quick to react to takedowns by working them into their coronavirus disinformation narratives. With 5G, a trend has sprung up of referring to it as ‘5gee’ or similar permutations, in an attempt to avoid the keyword being picked up by moderators or analysts who are tracking it.
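As a rough illustration of what that cat-and-mouse looks like from the tracking side, here is a minimal Python sketch that catches a few spelling permutations of ‘5G’. The pattern and the sample posts are hypothetical; real moderation and monitoring systems are considerably more sophisticated.

```python
import re

# Hypothetical pattern covering simple permutations of '5G' such as
# '5gee', '5-gee' or '5 g', used here purely for illustration.
VARIANT_PATTERN = re.compile(r"\b5\s*[-_.]?\s*g(?:ee)?\b", re.IGNORECASE)

def mentions_5g(text: str) -> bool:
    """Return True if the text contains a recognised variant of '5G'."""
    return bool(VARIANT_PATTERN.search(text))

samples = [
    "They don't want you to know the truth about 5gee towers",
    "Another 5-Gee mast went up near the school",
    "Nothing to see here",
]
for post in samples:
    print(mentions_5g(post), "|", post)
```

Of course, every new permutation forces the pattern to be updated, which is exactly the point of the evasion.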

For conspiracy adherents, this sense of persecution further reinforces their existing worldview, making them more likely to cling onto it. In this way, a ‘backfire effect’ has occurred. 

6. Platform responses are shifting 

Social media companies are frequently accused of not doing enough to reduce the flood of misleading content that overwhelms their platforms. I don’t believe they’re reluctant to do so, but they have to balance it with being seen as supportive of free speech. Finding that balance can be challenging when addressing conspiracy theories, as opposed to purely false information.

Most conspiracy theories are spun up like candy floss around a small kernel of truth. A typical post will build a whole story around how some real life event is of possible significance to the wider narrative arc. This creates murky territory for the platforms because the difference between opinion and actual false information is not always clear-cut.

But things have shifted after some conspiracy theories, such as the one about 5G causing coronavirus, triggered real life harms. A recent video by notorious conspiracy theorist David Icke was pulled from YouTube just days after it was released, heralding a change in approach.

A growing body of research indicates that coronavirus conspiracy theories form a central part of coordinated influence operations. We can no longer afford to overlook their role.

What’s the Difference Between Disinformation and Misinformation?

What’s the difference between disinformation and misinformation?

(I get asked this question a lot, so I thought it was time to write about it).

The difference between disinformation and misinformation lies in the presence of intent.  

The Difference Between Disinformation and Misinformation

Let’s look at a couple of examples to understand the difference between disinformation and misinformation.

Misinformation encompasses a wide range of misleading content, from rumour to satire to human error. The crucial point is that misinformation involves no intent to deceive.

Disinformation, on the other hand, has a more specific and sinister meaning. Disinformation is created with an intention to deceive.

For example, a disinformation campaign could involve a doctored video of a political candidate’s gaffes, spliced together to imply that he has dementia.

Malinformation and Fake News

We also have malinformation. This is information based on reality, used maliciously to harm its target.

The target could be an individual, a country or an organisation. The 2016 leak of Hillary Clinton’s private emails is one such example of malinformation.

Finally, the one everyone knows best: fake news.

Donald Trump popularised the term in 2016 (although Hillary Clinton was apparently the first to use it).

However, disinformation researchers prefer not to use it, as it is both politicised and imprecise. ‘Fake news’ unhelpfully conflates the three primary types of misleading information already mentioned.  
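One rough way to keep the three terms straight is to think of them along two axes: whether the content is false, and whether it is shared with intent to deceive or harm. Here is a minimal, illustrative Python sketch of that idea; the attribute names and the clean true/false split are my own simplification, since real content rarely fits so neatly.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    is_false: bool        # is the core claim false or misleading?
    intent_to_harm: bool  # was it created or shared to deceive or damage a target?

def classify(item: ContentItem) -> str:
    """Map a piece of content to one of the terms discussed above."""
    if item.is_false and item.intent_to_harm:
        return "disinformation"   # false, spread deliberately
    if item.is_false:
        return "misinformation"   # false, but shared without intent to deceive
    if item.intent_to_harm:
        return "malinformation"   # based on reality, used maliciously
    return "ordinary information"

# A doctored video pushed by a campaign, an honest mistake, and a malicious leak:
print(classify(ContentItem(is_false=True, intent_to_harm=True)))    # disinformation
print(classify(ContentItem(is_false=True, intent_to_harm=False)))   # misinformation
print(classify(ContentItem(is_false=False, intent_to_harm=True)))   # malinformation
```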

New digital technology means that misleading online content shifts and evolves rapidly. There are other subcategories of dubious online content worth knowing about. Here’s a rundown.

Other Misleading Information

Satire

Satire has existed for thousands of years. It highlights failings in society and politics using humour, irony, sarcasm or mockery.

Shakespeare often used satire in his work. In Hamlet, for example, he pointed out that royalty is not always fit to run a country properly. Now, in the internet age, satirical news websites such as The Onion and NewsThump have become popular.

But it’s not always clear where satire ends and disinformation begins. It’s easy for political news websites to spread divisive narratives under the guise of satire. The limited attention spans of most online news audiences make it even easier.

In fact, many social media users will share partisan political content that triggers their tribal instincts. They may not notice that it comes from a website that has a ‘satire’ disclaimer. 

Manipulated Images  

Images have a more immediate impact than text, making them an effective tool in disinformation. Manipulation of images is easy with freely available online tools, or Photoshop, and can look very convincing.

Genuine images can be used in misleading contexts, such as during the Westminster Bridge terrorist attack of 2017.

In this case, a Twitter account later attributed to the Kremlin shared an image of a woman wearing a hijab. The tweet included a hostile caption claiming that the woman was ignoring injured victims.

The tweet was designed to trigger anti-Muslim sentiment, and alt-right influencers shared it widely. It garnered thousands of retweets and likes. But although the image was genuine, the context was not.

Deepfakes

As a new form of disinformation, deepfakes have attracted a lot of hype in the last couple of years. These AI-generated videos and images are a type of synthetic media in which one person’s face and/or voice replaces that of another.

A deepfake can make it look like a person is saying something they’re not. This has many obvious use cases for disinformation. So far, porn has been the main area where deepfakes are being used. But in a handful of cases they’ve played a role in disinformation efforts.  

We may have overstated the immediate risk of deepfakes. But they do have potential to upend the information environment. My biggest concern is that deepfakes would destroy the notion of absolute truth.

Once upon a time a taped voice recording would hold up in court (e.g. Watergate). In later years, CCTV footage became the absolute truth. But a world in which deepfakes are prevalent would no longer have an absolute truth. It would cast doubt on every possible form of recorded evidence.

Shallowfakes and Cheapfakes

In addition to deepfakes, we need to consider shallowfakes, also known as ‘cheapfakes’. People create these doctored videos without the help of fancy AI tools, using simple video editing software.

Shallowfakes are far more common than their AI-generated cousins. And social media platforms seem to overlook them. Facebook, for example, only bans doctored videos made with AI, i.e. deepfakes.

In February 2020, shallowfakes caused quite a stir. A video circulated on social media, edited to make it appear that Nancy Pelosi was shredding a copy of Donald Trump’s speech while he honoured guests during his State of the Union address; in reality, she tore up the speech only after the address had ended.

Memes

The word ‘meme’ has become synonymous with random humorous cultural images superimposed with chunky white text. Below, a small selection of my recent COVID-19-related favourites.

[Images: the ‘distracted boyfriend’ anti-vaxxer meme; Hannibal Lecter in a mask COVID-19 meme; a meme about failed plans in 2020]

In fact, the word ‘meme’ can refer to any piece of cultural content (video, image, song, article, etc) that social media users spread virally. (That typical chunky text/image combo that we all call a meme is actually known as an ‘image macro’).

Meme creators often use the image macro format to convey partisan political sentiments. Both sides of the political spectrum shared inflammatory memes during the 2016 US presidential campaign.

Alt-right supporters also used the same format to spread highly offensive views, such as the racism and antisemitism of ‘Pepe the Frog’ memes.

Image macro political memes are generally hyper-partisan in nature and play a role in perpetuating conflict between opposing groups (e.g. Democrats and Republicans).

[Image: an image macro meme spreading disinformation about Hillary Clinton]

Memes are totally devoid of any nuanced explanation. The viewer gets a quick hit of emotion that has a triggering effect. This taps into an oversimplified outrage that has become a core feature of today’s political life online. 

AI-Generated Voiceovers 

These are a bit weird and random. During a recent project for YouTube, I discovered some strange videos spreading false information about COVID-19.

The voiceovers didn’t sound human at all. They were robotic and monotone, as if reading from a script. I don’t know the official name for these videos, if they have one at all, but they’re perhaps something to keep an eye on.

From Disinformation to Misinformation (and back again?)

In closing, I’ve been thinking about this question: Does content shift from disinformation to misinformation as it travels across the internet? 

Malicious intent defines disinformation. Could a piece of content shift between definitions according to the intention of the most recent individual who shared it?  

For example, a person shares a narrative in their local Facebook group, claiming COVID-19 is curable with megadoses of bleach. It’s been debunked, of course, but (somehow) the person doesn’t know.

They innocently think they’re helping their network by passing on ‘valuable’ health information that might help cure the virus. They don’t intend to deceive. So shouldn’t we class it as misinformation?

Let’s say that same piece of content originated in a troll farm. Its creators intended it to deceive populations and compromise public health efforts. It started life as disinformation.

We could say the same for conspiracy theories. These are often spread by ‘true believers’ – genuinely invested in their mission to enlighten the ‘sheeple’ and save the world.

Are they being malicious if they believe it’s all true? Does that still count as disinformation? It would be easier to make this distinction if we could reliably trace the content back to its source. But that’s not always easy to do. 

Those who create disinformation know how to take advantage of natural human biases and triggers. In many cases, it’s enough to simply ‘seed’ harmful disinformation into the social media stream. Ordinary social media users will then do the heavy lifting. Therein lies much of the danger. 

Dopamine and Disinformation: How To Quit Feeding The Trolls

Dopamine and disinformation are intrinsically linked. In this article, I’ll explain how this works, and look at some ways to lessen the effects.

I used to spend a lot of time arguing with strangers on the Internet.

It normally went like this: I would post something political and the trolls would arrive. I’d get stuck in endless back-and-forths with them. It was a futile endeavour. But somehow I felt compelled to keep on engaging.

On Twitter or Facebook, new notifications would keep coming in, alerting me of another new comment. Then I’d engage again. It wasted my time, raised my anxiety levels, and made me feel more aggressive.

I’d gotten stuck in a dopamine loop.

This happens to us a lot, for example when we can’t stop refreshing our social media feeds. But what exactly is dopamine and what does it do?

How dopamine creates addiction

Dopamine is a chemical created in the brain. It affects many functions, such as thinking, mood, attention and motivation. It also plays a critical role in triggering our desire to seek out and search. In short, dopamine makes us want to search out information.

Two parts are at play in the dopamine loop. There’s the ‘wanting’ part and the ‘liking’ part. The wanting part makes you take action (i.e. searching), while the liking part satisfies you and makes you stop searching. But the wanting part is stronger than the liking part. And that’s how we get trapped in endless scroll mode.

Another important part of the dopamine system relates directly to the issue of engagement with trolling comments.

The system is very sensitive to cues that indicate the possibility of a reward. On social media, notifications are the cues. They make the addiction even stronger. Anticipation is key. If we got a reward every time we logged in, it would weaken the addiction. It’s the uncertainty that gets us hooked.

So how did these behavioural psychology elements get incorporated into our Internet experience in the first place?

Engineered addiction and trolling

It all started with a researcher called BJ Fogg and his Persuasive Technology Lab at Stanford University. In September 2007, Fogg and his students began work on a formula that would change the world — and all of our daily lives.

They built Facebook apps using techniques from behavioural psychology, techniques that could engineer addiction, such as the dopamine loop that I described above. Fogg developed a formula, suggesting that people will act when three forces converge: motivation, trigger, and ability.

Let’s apply this formula to a hostile online interaction. The motivation is your desire to convince the other person that your opinion is right, or to get ‘one up’ on them; the trigger is seeing another notification on your social media app, indicating that your opponent has responded; and the ability is having your phone to hand, which lets you check right away and respond.
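Here is a minimal Python sketch of that convergence. The 0-to-1 scales and the threshold are invented purely for illustration; Fogg’s model is a conceptual framework, not a piece of code.

```python
def likely_to_engage(motivation: float, ability: float, trigger: bool) -> bool:
    """Toy version of the 'motivation + ability + trigger' idea.

    motivation and ability are on an arbitrary 0-1 scale; the 0.5 threshold
    is made up for illustration and is not part of Fogg's model.
    """
    return trigger and (motivation * ability) > 0.5

# A heated thread: you badly want the last word, your phone is in your hand,
# and a notification has just arrived.
print(likely_to_engage(motivation=0.9, ability=0.8, trigger=True))   # True
# Same argument, but notifications are switched off: no trigger, no engagement.
print(likely_to_engage(motivation=0.9, ability=0.8, trigger=False))  # False
```

Note that removing the trigger is enough to stop the behaviour regardless of how strong the motivation is, which is exactly why switching off notifications (discussed below) works.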

These techniques tap directly into the workings of the human brain, making them extremely powerful.

So why do social media platforms leverage such addictive and manipulative design practices?

Simple: their profitability depends on it.

The ad-supported business model means that users spending more time on the app leads to more profit for the company. All that time you spend arguing with trolls on Facebook is making the platform more valuable to potential advertisers.

Dopamine and disinformation

Arguing online also relates to dopamine and disinformation. It can make us more susceptible to the latter. The dopamine loop gives a powerful tool to those who seek to divide us. It perpetuates a sense of an adversarial environment and keeps us always on the attack.

When we divide ourselves into tribes and adopt a hostile attitude towards a perceived outgroup, we risk becoming more susceptible to harmful disinformation. We are more likely to share content that is cognitively resonant with our existing beliefs and which reflects the attitudes of our tribe.

The dopamine loop also affects our interactions with agreeable content. When we post something that our tribe members approve of, we’ll receive dopamine hits via our notifications of likes and encouraging comments. That boosts our self-esteem and keeps us coming back for more.

So what can we do to fix the trolling problem and reduce our susceptibility to disinformation?

Breaking out of the loop

Short-term practical solutions mainly involve adapting our devices to help break the dopamine loop. For example, we could make our phones less appealing by changing the screen to grayscale mode, or switching off all notifications.

But we can also tackle adversarial online behaviour in another way.

‘Don’t feed the trolls’ has become an Internet cliché. But it starts to make sense when thinking about the dopamine loop and anticipatory reward.

Everyone who posts online is looking for a response. They want to perpetuate the dopamine loop. If we can maintain our self control by denying them that response, then we can break the cycle.

I’ve managed to break my own addiction to engaging with trolls. On Twitter, I simply mute the conversation. I can’t see the person anymore, and I receive no notifications of any of their comments. It makes the troll invisible to me, breaking the dopamine loop and allowing my brain to focus on other things.

On Facebook, I simply turn off notifications for a particular post. This has the same effect as muting on Twitter. Both platforms also offer a blocking option. I don’t normally use this because it gives trolls the satisfaction of knowing they’ve got a response. Muting is better, because it means they’re left wondering if I’m ignoring them. They just keep on yelling into the void.

Battleground or community

If we could all break the cycle and quit feeding the trolls, then adversarial disinformation and influence ops could lose much of their power. The online environment would feel like more of a community instead of a battleground. In turn, this may help reduce polarisation.

But it has to be done en masse. A handful of people breaking the cycle won’t be enough to change the overall environment. As social media is designed to be addictive, a wider intervention would be necessary to encourage people to do this.

Of course, the social media platforms have all the power. They could redesign their structures to destroy these dopamine loops and disincentivise disinformation. But their ad driven business model means they don’t have an incentive to do so.

Nevertheless, we can still improve our individual online experiences by taking steps to break the dopamine and disinformation cycle. Part of doing this is disengaging from users who aim to trap us in never-ending adversarial debates around polarising topics.

Analysing Trump’s Medical Disinformation on Facebook

US president Donald Trump shocked the world this week with his latest piece of medical disinformation.

Trump claimed that injecting disinfectant into the body could be an ‘interesting’ way to cure COVID-19.

He later tried to back-pedal, claiming he was being sarcastic. But that wasn’t how most of the world took it.

Dangers of medical disinformation

The mainstream media and the public widely lambasted this dangerous medical disinformation.

Amid the furore over Trump’s remarks, a major disinfectant firm issued a statement urging the public not to inject or drink any of their products.

However, members of pro-Trump Facebook groups dedicated to conspiracy theories displayed quite the opposite reaction. 

I examined some of these groups to provide comment for an article in CodaStory. I’d previously gathered a list of these groups because of their strong focus on various ‘corona disinformation conspiracies’.

These include 5G causing the virus, the virus being a US bioweapon, and Bill Gates having orchestrated the ‘virus hoax’ as part of his ambition to enforce a worldwide vaccine programme.

Many of the groups also centred around the Qanon conspiracy theory.

Pro-Trump Facebook reactions

You might expect the suggestion of injecting bleach to be a step too far even for these largely pro-Trump groups. Not so. 

In my initial observation of the groups, I noticed three distinct ways in which the members attempted to account for Trump’s bizarre medical disinformation.

First, some claimed that Trump was just ‘playing the media’, and that people must be stupid if they believed he meant what he said.

Commenters also attributed all the negative media coverage to ‘yet another’ MSM (mainstream media), liberal, or Democrat attempt to smear Trump.

Second, some commenters claimed that the media had quoted Trump ‘out of context’. According to them, he was speaking ‘more generally’ about possible ways to treat COVID-19.

Others highlighted a fact-check article from far-right news outlet Breitbart. But no one acknowledged the video footage of Trump making these claims, there for everyone to see and hear.

The third claim relates more closely to another strand of COVID-19 medical disinformation: ‘miracle cures’. One commenter claimed that Trump must have been referring to UV light therapy and ozone therapy, which already exist.

Things got more interesting when the commenter drew links between the medical disinformation about bleach and the popular narrative of ‘Vitamin C as miracle cure’.

They claimed that taking Vitamin C causes hydrogen peroxide to build up in the body. Since hydrogen peroxide has a disinfectant effect, the argument went, Trump’s comments had a basis in medical fact.

Rationalising medical disinformation

These three counter-narratives about Trump’s medical disinformation all attempt to rationalise an influential figure making a dangerous and irresponsible remark.

Tribal attitudes drive many of these rationalisations. For example, the claims that the media purposefully misinterpreted Trump’s comments in a ‘libs’ or ‘Dems’ smear attack. Once again, this reinforces the existing divide between populist pro-Trump narratives and the mainstream.

The question remains: How many of these Facebook group members are genuine American citizens? Facebook itself is the only entity that could properly attribute the accounts. And it doesn’t seem to be giving much away.

I suspect group members are a mix of genuine Trump supporters and astroturfers working to stir up tribal hatred of the ‘other side’.

Tribal attitudes can be dangerous, particularly in relation to public health. People in the pro-Trump tribe are more likely to challenge messages from the perceived ‘outgroup’ (‘experts’ and the ‘MSM’) such as critical public health advice from the WHO.

A similar dynamic has fuelled recent anti-lockdown protests across the US, which may already have spread the virus further and compromised the entire country. Astroturfing was certainly a factor there; there’s no reason why it couldn’t be influencing these groups too.