
Category: Disinformation In Politics


Voter Suppression Online: How To Fight Back

Introduction

In this week’s article, I’m going to delve into the important issue of online voter suppression. I’ll give some examples of what voter suppression attempts look like on social media and explain specifically how disinformation is involved. I’ll also discuss the psychological drivers that make online voter suppression effective. In closing, I’ll suggest some ways that we can all fight back. 

What is Voter Suppression?

Voter suppression is defined as “the use of specific tactics with the intent of manipulating the outcome of an election.” Historically, these tactics have included introducing restrictive voter ID laws, preventing early voting, purging voter rolls, and systematically disenfranchising certain groups.

Voter suppression commonly affects minority and vulnerable groups, such as the elderly and disabled. In the United States, suppression attempts have existed for as long as voting itself. The Jim Crow voting laws that followed the end of Reconstruction in 1877 are a prime example of historical voter suppression targeting Black voters.

Voter suppression happens in other countries too. For example, in 2019 the UK’s ruling Conservative Party introduced pilot schemes requiring voters to produce photo identification at polling stations. The UK has no compulsory photo ID, and activists argued that the move would disenfranchise those sections of society, often the poorest, who hold neither a driving licence nor a passport.

Voter Suppression in 2020

We live in a technologically advanced age, but voter suppression hasn’t stopped. If anything, it has become even more of a risk to democratic processes worldwide. The internet, and social media in particular, has given new scope to voter suppression attempts. Now, it’s easy for hostile foreign powers to conduct voter suppression campaigns targeting the citizens of another nation.

Online voter suppression is highly likely in 2020. Firstly, since 2016, falsehoods, conspiracy theories and uncertainty have characterized our information environment. It’s become the new normal. Add to that a pandemic, during which people are constantly online while feeling fearful and paranoid. Then top things off with a critical presidential election in a deeply polarized United States. 

Foreign powers are also weighing in on the upcoming US election. China and Russia both have stakes in the outcome, but they’re pulling in opposite directions. According to a recent intelligence report, China is supporting Joe Biden, while Russia prefers a Trump win. We can therefore expect online voter suppression activity to reflect those competing interests.

Online Voter Suppression Examples

Let’s examine some examples of how voter suppression works in a social media environment. 

Facebook groups, especially private locally-focused ones, are likely to become key targets for voter suppression attempts in the run-up to the 2020 US election. Tactics may include the following:

  • Using fear, e.g. claims that COVID-19 is circulating and people should not go out to vote; claims about violent street protests near polling stations  
  • Spreading disinformation about the voting process, e.g. giving false polling station opening hours; claiming that only pencils can be used to mark the ballot; claiming that photo ID is essential. 
  • Creating apathy, e.g. earlier in 2020, Russian media sources were spreading a narrative that both Biden and Trump are bad options for president, because they both ‘have mental health issues’. The goal here is to discourage people from voting for either candidate.  
  • Targeting a candidate with malinformation, disinformation or conspiracy theories, e.g. the Clinton email leak of 2016 was the starting point for a lot of long-running activity intended to harm Hillary Clinton’s candidacy. In 2020, we’ve seen narratives circulating that question Joe Biden’s mental health, e.g. accusing him of having dementia or Alzheimer’s. 
  • Using Facebook’s micro-targeting capabilities to reach specific parts of the voting population with ads only visible to them. This provides endless scope for targeting vulnerable subsections with suppressive messages that other voters (and the press) will probably never see.
  • Creating mistrust in the legitimacy of the voting process, e.g. by spreading online narratives about fraudulent behavior at the ballot box (e.g. when counting votes). NB: Dictators often use this tactic to discredit their opponents. For example, in 2019, Turkey’s president, Recep Tayyip Erdogan, ordered a recount of the Istanbul mayoral election vote after his party lost.
  • Combinations of any of the above, for example using micro-targeted Facebook ads to deliver messaging about localized street violence or lockdowns to discourage particular groups from going out to vote.

Voter Manipulation Example

There are other ways to direct a vote, which don’t quite fall into the category of voter suppression, but are worth mentioning as they may overlap. For example, it’s possible to astroturf the perception of divisions between subgroups on one political side.

Here’s an example from the UK. In the general election of 2019, the pro-Remain side (primarily Labour and Liberal Democrats) may have been subjected to online efforts aimed at turning groups against one another. 

Stoking this division would have carried high stakes, because coordinated tactical voting for pro-Remain parties was considered the only way to oust sitting Conservative MPs. Done successfully, it could have prevented a Conservative (and pro-Brexit) majority in government.

I observed frequent activity in local Facebook groups that was in line with this goal, e.g. lots of supposedly pro-Remain users stating they ‘could never vote for Jeremy Corbyn’, claims that Labour was ‘communist’, claims that the Liberal Democrats were simply Conservatives under another moniker (‘yellow Tories’), and so on. (NB: Astroturfing has not yet been proven in this case. This example represents an educated assessment based on first-hand observation and background knowledge).

Instead of cooperating, Labour and Liberal Democrat voters turned against one another. Many refused to vote tactically, citing ideological reasons. 

Those sentiments likely undermined the tactical voting initiative. They may have helped tilt the election outcome in favor of the Conservative Party/pro-Brexit side, which went on to win the election.  

The Psychology of Online Voter Suppression

Online voter suppression works by tapping into many of the same psychological biases that make social media users vulnerable to disinformation in general, especially within a deeply polarized political environment. 

Here, people are highly likely to interact online only with members of their own tribe, i.e. those who share their views. Interactions outside the tribe are likely to be hostile in nature, aimed at getting one up on the opponent. 

A range of cognitive biases help our brains make sense of the world and avoid information overload. But some of them make us more susceptible to manipulation. For example, thanks to confirmation bias, we’re far more likely to believe information that fits our existing worldview. On the other hand, we’re quick to doubt that which does not. 

Furthermore, we tend to engage positively with information shared by people we know, or who appear to be in our tribe. The problem is, badges of tribe membership (e.g. use of hashtags, flag emojis, or ideological statements in the bio) can easily be faked on social media. It’s not always easy to be sure if we’re interacting with a genuine person or someone with a political agenda. 

How You Can Fight Back 

  • Question the incentives of all voting-related content that you see on social media. 
  • Know the tell-tale signs of inauthentic social media users (hint: examine the source carefully).
  • Look for repetitive user behavior and single issue fixation. 
  • Be suspicious of users who try to stir up intergroup conflict over voting related issues. 
  • Think about tribalism and look out for those who use hyper-partisan language (e.g. lefty, commie, woke, libtard, MAGAT).
  • Pause before sharing voting-related posts and think about what’s compelling you to share them. Is your need to share an emotion-driven knee-jerk response, or are you doing it in a more measured, rational way? 
  • Only pay attention to voting-related messages from trustworthy official sources (i.e. government websites or official social media accounts). 
  • Register to vote and go to your local polling station on voting day, no matter what happens. Voting is never a wasted exercise, whatever the manipulators would have you believe.

Conclusion

In this post, I briefly presented the history of voter suppression and how it has evolved in the 21st century. I also gave examples of how it manifests on social media and touched on the psychological factors that make us susceptible to it. I ended with some suggestions for how ordinary social media users can fight back against potential voter suppression attempts.


4 Simple Ways To Stop Sharing Fake News and Disinformation

Introduction

Fake news, more specifically known as disinformation, is a major problem that shows no sign of going away. If anything, it’s evolving in new ways to become more nefarious than before. Elections are always major flashpoints for fake news, and the US presidential election of 2020 is no exception. 

Many individuals and organizations are working hard to come up with ways to fight fake news and disinformation. In the meantime, ordinary internet users can also do their part to help.

In this post, I’ll discuss four simple ways that you can stop sharing fake news and disinformation.

4 Simple Ways To Stop Sharing Fake News and Disinformation

Break Out Of Dopamine Loops


What is a dopamine loop and how does it relate to fake news and disinformation?

Dopamine is a chemical in the brain that affects functions such as mood, attention and motivation. It also plays a key role in affecting our desire to seek out new things – like information. 

Dopamine loops consist of two parts: wanting and liking. ‘Wanting’ compels you to keep searching for new information, while ‘liking’ is the part that makes you feel satisfied once you find it.

The trouble is, in the dopamine loop, wanting is stronger than liking. This leads to us getting trapped, constantly compelled to seek new information. 

The original designers of social media knew all about dopamine loops. They designed the platforms with them in mind, knowing that the loops would keep users hooked on the platform. That would increase the likelihood of users viewing online ads.  

So how does the dopamine loop relate to fake news and disinformation? One major way that we get dopamine hits online is through receiving notifications on social media.

You know, those little red numbers in the top corner of Facebook. Think about how you feel when you open your profile and see a bunch of them waiting for you. You feel good, right? This is dopamine firing in your brain. 

Sharing content with your friends and followers is a great way to get notifications, which gives you even more dopamine. But this is where we find the danger from fake news and disinformation.

When we share to get dopamine hits, we’re less likely to take the time to check whether the information we’re sharing is actually true. After all, we’re constantly in a state of information overload. 

One way to stop spreading fake news and disinformation is to break our addiction to dopamine. It makes us vulnerable. We need to avoid getting stuck in dopamine loops, constantly refreshing our social media apps in the hunt for fresh information and notifications. 

Quick ways to break the dopamine loop:

  • Turning off your social media notifications altogether
  • Switching your device to grayscale mode (making it less likely to produce a dopamine hit)
  • Pausing to take a few deep breaths before sharing any content 

But there’s another critical way to stop sharing fake news and disinformation…

Avoid Heated Arguments Online


The internet is full of trolls. No matter what you say online, it often feels like someone is always ready to attack you for it. Your natural instinct is to strike back. That’s a very human response. But it risks making you more likely to share fake news and disinformation.  

Why? Because arguing online is another way to get trapped in a dopamine loop. Your antagonist keeps responding, you keep getting more notifications. You keep arguing back, and the cycle continues.

Often, you’ll share a piece of online content, perhaps a news article, to prove your point and get ‘one up’ on your opponent. When doing so, you probably don’t take the time to fact-check the article. That’s where the danger is. 

What’s more, some online trolls aren’t just random angry individuals. They’re part of coordinated inauthentic behavior campaigns designed to sow division and hostility around certain topics (usually political ones).

These campaigns usually involve fake news and disinformation too. By arguing with these political trolls, you’re giving them exactly what they want. 

Luckily, there’s an easy way to avoid being drawn into online political arguments. On Twitter, it’s the mute function (either mute conversation, or mute user). On Facebook, you can turn off notifications about a specific post.

These features are great, because they allow you to break out of the dopamine loop and the troll has no idea. They just carry on yelling into the void. Meanwhile, you carry on with your day and remain blissfully unaware.

Check Your Confirmation Biases


Confirmation bias plays a key role in increasing our likelihood of sharing fake news and disinformation. But what exactly is it?

Confirmation bias is our natural tendency to search for, favor and easily believe information that fits with our existing worldview. 

Let’s look at how confirmation bias works in practice. For example, you see a tweet (falsely) claiming that US presidential candidate Joe Biden has dementia.

You’re a Trump supporter and you don’t like Biden. Thanks to confirmation bias, you’re very likely to hit retweet on this tweet without even stopping to question if it’s really true. 

You also know that your Twitter followers (who have similar worldviews) will appreciate your sharing this tweet. They’re likely to give it lots of attention, including retweets and favorites – i.e. plenty of extra dopamine for you. 

However, if you saw a similar tweet questioning Trump’s mental health, it’s far more likely that you’d be skeptical of it. Of course, this works in the other direction too. Confirmation bias is not unique to either end of the political spectrum. 

It’s dangerous, because it makes people automatically believe (and probably share) content that fits their worldviews, without stopping to check its veracity. 

If you really want to stop sharing fake news and disinformation, you have to approach your social media use knowing that you have confirmation bias. You have to consciously remind yourself what exactly it is that compels you to share a certain post.

It’s not easy, but it’s a necessary step to help stop sharing fake news and disinformation.

Consider Content Incentives For Fake News


Finally, I want to discuss the incentives of social media content. Every post and article on the internet has a certain incentive behind it. For many content creators, publishing articles is a way to drive traffic to their websites, to earn money from online ads. This is their main incentive.

But the social media space is noisy, so those articles need to stand out. That’s why you’ll see so many overblown clickbait titles that often bear little relevance to the actual content of the article.

In particular, politics is a highly charged and emotive topic, so it’s often used to catch attention and drive site traffic. That’s how the infamous Macedonian teenagers made money from pushing pro-Trump fake news in 2016.

Another incentive in content creation is to push a specific worldview, perhaps on behalf of a foreign government. The Kremlin uses this technique a lot.

In the early days of the 2020 pandemic, I found Russia-linked news sites pushing conspiracy theory narratives (e.g. that 5G causes the virus, that Bill Gates is responsible for the coronavirus, or that the virus is a hoax). These showed up consistently on social media, for example in US- and UK-based Facebook groups.

Before sharing something on social media, consider the incentives of its creator. Are you truly happy to help that clickbait website make more ad money, or to help a hostile foreign government promote its worldview to your fellow countrymen?

Summary

In this article, I presented four simple ways to stop sharing fake news and disinformation. I talked about the following points:

  • How to break out of dopamine loops 
  • How to avoid heated arguments online 
  • Why you should check your confirmation biases
  • Why you should consider the incentives of content

Are you doing any of these already? Let us know in the comments.


Three Easy Ways to Spot Coordinated Inauthentic Behavior Online

After spending almost a year tracking coordinated inauthentic behavior on behalf of Facebook and Google, I’ve developed a good sense of how an inauthentic social media user looks and acts online. 

Each platform has its own nuances. But many universal patterns indicate the likelihood of coordinated activity. Here I’ll discuss three common indicators – and how you can effectively spot them across any social media platform. 

But first, let’s take a look at what coordinated inauthentic behavior actually means. We’ll also briefly explore some controversy around its definition.

What is coordinated inauthentic behavior?

Two years ago, Facebook first coined the phrase ‘coordinated inauthentic behavior’ (known in the industry as CIB).

Facebook defines CIB as follows:  “When groups of pages or people work together to mislead others about who they are or what they’re doing.”

Facebook, like other platforms, is keen to highlight the ‘behavior’ side of the phrase. This helps to shield the platform from accusations of being biased against any particular political ideology.

People would be quick to make those accusations if Facebook simply focused on removing inauthentic content. It would raise the question of which content should get removed and which should stay. This would trigger wider concerns about freedom of speech and the First Amendment.  

The double standards controversy

Writing in Slate, Harvard researcher Evelyn Douek argues that Facebook’s definition of coordinated inauthentic behavior lacks clarity.

She argues that certain groups will consider certain kinds of CIB acceptable, whereas others will not. Douek draws on the example of the TikTok video that prompted hundreds of young people to artificially inflate expected attendance at a Donald Trump rally by reserving tickets en masse.

Douek contrasts that real-life scenario with a hypothetical example of QAnon supporters doing the same to a Joe Biden rally. She highlights the risk of applying double standards to CIB, as well as to disinformation.

That’s a real concern, especially in deeply polarized times. Polarization is the key driving force behind this issue. We assume that ‘our’ side is doing good, while ‘their’ side is doing bad. That view influences how we judge the motives of coordinated inauthentic behavior. 

For the purpose of this post, we’ll use the official CIB definition. It’s still the standard that most social media platforms use. But it’s important to know that the term is not perfect, and has attracted controversy.

Is coordinated inauthentic behavior the same as misinformation or disinformation?

No, but misinformation and disinformation often play a role in it. For example, members of a Twitter botnet might work together to constantly pump out and amplify misleading tweets about a political figure.

Or groups of paid operatives might enter Facebook groups and astroturf the discussion about coronavirus by posting lots of comments about the dangers of vaccines. Astroturfing is a common technique of CIB, i.e. to create an appearance of legitimate ‘grassroots’ consensus on certain topics. 

OK, I’ve answered some key questions about coordinated inauthentic behavior. Now let’s look at three ways to spot it.

What are some key indicators of coordinated inauthentic behavior?

Identity Signaling

The concept of identity is at the heart of many coordinated inauthentic behavior and disinformation efforts. CIB campaigns often play on existing social and political divisions within their target audience.

For example, they might astroturf a widespread sense of approval for a certain government policy, such as a tougher stance on immigration. Immigration is an emotive issue for many people, and has the potential to drive ingroup vs outgroup sentiments.

When examining accounts for signs of inauthenticity, I consider overt identity signals, especially political ones, to be a red flag. These could include national flags, divisive political hashtags (e.g. #MAGA (make America great again) or #FBPE (follow back pro-Europe)), or a bio stuffed with identity-promoting keywords like “Army vet, patriot, Trump supporter, family man, God lover”.

Taken together, those signs indicate that the profile primarily exists to promote a certain political identity – a common giveaway of astroturfing or coordinated inauthentic behavior.
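
To make the idea concrete, here is a minimal sketch (in Python) of how an analyst might score a profile bio for overt identity signals. The marker list, example bio, and scoring approach are purely illustrative assumptions, not a real detection model; a high score is a red flag worth a closer look, never proof of inauthenticity.

```python
# A rough heuristic, not a classifier: count overt identity-signalling markers
# in a profile bio. The marker list and example bio are illustrative assumptions.

IDENTITY_MARKERS = {"#maga", "#fbpe", "patriot", "army vet",
                    "trump supporter", "family man", "god lover"}

def identity_signal_score(bio: str) -> int:
    """Count how many known identity-signalling markers appear in the bio."""
    bio_lower = bio.lower()
    return sum(1 for marker in IDENTITY_MARKERS if marker in bio_lower)

example_bio = "Army vet, patriot, Trump supporter, family man, God lover #MAGA"
print(identity_signal_score(example_bio))  # 6 markers: a red flag, not proof
```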

Copy Paste Sharing

It’s common to find groups of accounts sharing links or posts accompanied by the exact same text (e.g. in a quoted tweet or a Facebook share). This isn’t normal behavior for an ‘organic’ social media user, so it’s a suspicious sign.

Copy paste sharing usually indicates a campaign designed to amplify a certain message. It is likely that Twitter accounts constantly tweeting the same messages in tandem are automated (i.e. bots).
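
As an illustration, here is a minimal sketch of how copy-paste sharing might be flagged programmatically, assuming you have a simple list of (account, post text) pairs. The sample posts and the three-account threshold are invented for the example.

```python
# A minimal sketch of flagging copy-paste amplification: group posts by their
# normalised text and surface any text shared verbatim by several distinct
# accounts. Sample data and threshold are invented for illustration.

from collections import defaultdict

def flag_copy_paste(posts, min_accounts=3):
    """posts: iterable of (account_id, text) pairs.
    Returns texts shared by at least `min_accounts` different accounts."""
    accounts_by_text = defaultdict(set)
    for account_id, text in posts:
        normalised = " ".join(text.lower().split())  # collapse case and whitespace
        accounts_by_text[normalised].add(account_id)
    return {text: accounts for text, accounts in accounts_by_text.items()
            if len(accounts) >= min_accounts}

sample = [
    ("user_a", "This candidate will DESTROY our country! Read this:"),
    ("user_b", "This candidate will destroy our country!  Read this:"),
    ("user_c", "This candidate will destroy our country! read this:"),
    ("user_d", "Lovely weather on the seafront today."),
]
print(flag_copy_paste(sample))  # the first three accounts share identical text
```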

Aggressive Political Agenda

When I’m seeking signs of coordinated inauthentic behavior, I always examine the posting history of a social media account. I check whether all its posts support a specific political agenda (usually in an aggressive and antagonistic way). If so, that’s another red flag.

Sure, regular people can also post aggressively in support of a political agenda. But it’s less likely that those posts will make up the whole of their posting history. A one-topic account is a key sign of coordinated inauthentic behavior. 
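
Here is a similarly hedged sketch of the one-topic check: measuring what share of an account's posting history touches a single theme. The keyword list, sample history, and 90% cut-off are illustrative assumptions only.

```python
# A minimal sketch of the 'one-topic account' check: what share of an account's
# posting history touches a single theme? Keywords, sample posts and the
# threshold are illustrative assumptions, not calibrated values.

POLITICAL_KEYWORDS = {"election", "biden", "trump", "maga", "vote", "ballot"}

def topic_concentration(posts):
    """Fraction of posts mentioning at least one keyword from the theme."""
    if not posts:
        return 0.0
    hits = sum(any(kw in post.lower() for kw in POLITICAL_KEYWORDS) for post in posts)
    return hits / len(posts)

history = [
    "Trump 2020!!! #MAGA",
    "The election is being stolen, wake up people",
    "Vote them ALL out",
    "Yet another rigged ballot story the MSM won't cover",
]
concentration = topic_concentration(history)
print(f"{concentration:.0%} of posts on one theme")
if concentration > 0.9:  # arbitrary cut-off for the sketch
    print("Red flag: possible single-agenda account")
```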

Summary

In this post we examined the origins of the term ‘coordinated inauthentic behavior’. Also, we explored one of the key debates around the validity of its definition. Finally, we looked at three simple ways to spot coordinated inauthentic behavior on social media platforms.

  • First, I looked at identity signaling, where accounts project a strong sense of a certain identity (usually political) via profile hashtags, profile imagery, bio information, or posting history. 
  • Second, I discussed copy paste posting, where multiple accounts share something with the exact same accompanying text. This is often a sign of automated coordinated inauthentic behavior. 
  • Finally, I highlighted the significance of one-topic accounts that support a certain political agenda, usually in an aggressive way.

What’s the Difference Between Disinformation and Misinformation?

What’s the difference between disinformation and misinformation?

(I get asked this question a lot, so I thought it was time to write about it).

The difference between disinformation and misinformation lies in the presence of intent.  

The Difference Between Disinformation and Misinformation

Let’s look at a couple of examples to understand the difference between disinformation and misinformation.

Misinformation encompasses a wide range of misleading content, from rumour to satire to honest error. The crucial point is that misinformation implies no intent to deceive.

Disinformation, on the other hand, has a more specific and sinister meaning. Disinformation is created with an intention to deceive.

For example, a disinformation campaign could involve a doctored video. It could consist of a political candidate’s gaffes, spliced together to imply that he has dementia.

Malinformation and Fake News

We also have malinformation. This is information based on reality, used maliciously to harm its target.

The target could be an individual, a country or an organisation. The 2016 leak of Hillary Clinton’s private emails is one such example of malinformation.

Finally, the one everyone knows best: fake news.

Donald Trump popularised the term in 2016 (although Hillary Clinton was apparently the first to use it).

However, disinformation researchers prefer not to use it, as it is both politicised and imprecise. ‘Fake news’ unhelpfully conflates the three primary types of misleading information already mentioned.  

New digital technology means that misleading online content shifts and evolves rapidly. There are other subcategories of dubious online content worth knowing about. Here’s a rundown.

Other Misleading Information

Satire

Satire has existed for thousands of years. It highlights failings in society and politics using humour, irony, sarcasm or mockery.

Shakespeare often used satire in his work. For example, in Hamlet, he pointed out that royalty is not always fit to run a country properly. Now in the internet age, satirical news websites have become popular, such as The Onion and NewsThump.

But it’s not always clear where satire ends and disinformation begins. It’s easy for political news websites to spread divisive narratives under the guise of satire. The limited attention spans of most online news audiences make it even easier.

In fact, many social media users will share partisan political content that triggers their tribal instincts. They may not notice that it comes from a website that has a ‘satire’ disclaimer. 

Manipulated Images  

Images have a more immediate impact than text, making them an effective tool in disinformation. Manipulation of images is easy with freely available online tools, or Photoshop, and can look very convincing.

Genuine images can be used in misleading contexts, such as during the Westminster Bridge terrorist attack of 2017.

In this case, a Twitter account later attributed to the Kremlin shared an image of a woman wearing a hijab. The tweet included a hostile caption claiming that the woman was ignoring injured victims.

The tweet was designed to trigger anti-Muslim sentiment, and alt-right influencers shared it widely. It garnered thousands of retweets and likes. But although the image was genuine, the context was not.

Deepfakes

As a new form of disinformation, deepfakes have attracted a lot of hype in the last couple of years. They are a type of AI-generated synthetic media in which one person’s face and/or voice replaces the face or voice of another.

A deepfake can make it look like a person is saying something they’re not. This has many obvious use cases for disinformation. So far, porn has been the main area where deepfakes are being used. But in a handful of cases they’ve played a role in disinformation efforts.  

We may have overstated the immediate risk of deepfakes. But they do have potential to upend the information environment. My biggest concern is that deepfakes would destroy the notion of absolute truth.

Once upon a time a taped voice recording would hold up in court (e.g. Watergate). In later years, CCTV footage became the absolute truth. But a world in which deepfakes are prevalent would no longer have an absolute truth. It would cast doubt on every possible form of recorded evidence.

Shallowfakes and Cheapfakes

In addition to deepfakes, we need to consider shallowfakes, also known as ‘cheapfakes’. People create these doctored videos without the help of fancy AI tools, using simple video editing software.

Shallowfakes are far more common than their AI-generated cousins. And social media platforms seem to overlook them. Facebook, for example, only bans doctored videos made with AI, i.e. deepfakes.

In February 2020, a shallowfake caused quite a stir. A video edited to make it appear that Nancy Pelosi tore up a copy of Donald Trump’s State of the Union speech at the very moments he was honouring audience members circulated widely on social media.

Memes

The word ‘meme’ has become synonymous with random humorous cultural images superimposed with chunky white text. Below, a small selection of my recent COVID-19-related favourites.

[Meme examples: the ‘distracted boyfriend’ anti-vaxxer meme; Hannibal Lecter in a mask; a meme about failed plans in 2020]

In fact, the word ‘meme’ can refer to any piece of cultural content (video, image, song, article, etc) that social media users spread virally. (That typical chunky text/image combo that we all call a meme is actually known as an ‘image macro’).

Meme creators often use the image macro format to convey partisan political sentiments. Both sides of the political spectrum shared inflammatory memes during the 2016 US presidential campaign.

Alt-right supporters also used the same format to spread some highly offensive views, such as racism and anti-semitism in ‘Pepe the Frog’ memes.

Image macro political memes are generally hyper-partisan in nature and play a role in perpetuating conflict between opposing groups (e.g. Democrats and Republicans).

[Image: an image macro meme targeting Hillary Clinton]

Memes are totally devoid of any nuanced explanation. The viewer gets a quick hit of emotion that has a triggering effect. This taps into an oversimplified outrage that has become a core feature of today’s political life online. 

AI-Generated Voiceovers 

These are a bit weird and random. During a recent project for YouTube, I discovered some strange videos spreading false information about COVID-19.

The voiceover didn’t sound human at all. It was robotic and monotone, as if reading from a script. I don’t know the official name for these videos, if they have one at all, but they’re perhaps something to keep an eye on.

From Disinformation to Misinformation (and back again?)

In closing, I’ve been thinking about this question: Does content shift from disinformation to misinformation as it travels across the internet? 

Malicious intent defines disinformation. Could a piece of content shift between definitions according to the intention of the most recent individual who shared it?  

For example, a person shares a narrative in their local Facebook group, claiming COVID-19 is curable with megadoses of bleach. It’s been debunked, of course, but (somehow) the person doesn’t know.

They innocently think they’re helping their network by passing on ‘valuable’ health information that might help cure the virus. They don’t intend to deceive. So shouldn’t we class it as misinformation?

Let’s say that same piece of content originated in a troll farm. Its creators intended it to deceive populations and compromise public health efforts. It started life as disinformation.

We could say the same for conspiracy theories. These are often spread by ‘true believers’ – genuinely invested in their mission to enlighten the ‘sheeple’ and save the world.

Are they being malicious if they believe it’s all true? Does that still count as disinformation? It would be easier to make this distinction if we could reliably trace the content back to its source. But that’s not always easy to do. 

Those who create disinformation know how to take advantage of natural human biases and triggers. In many cases, it’s enough to simply ‘seed’ harmful disinformation into the social media stream. Ordinary social media users will then do the heavy lifting. Therein lies much of the danger. 

Analysing Trump’s Medical Disinformation on Facebook

US president Donald Trump shocked the world this week with his latest piece of medical disinformation.

Trump claimed that injecting disinfectant into the body could be an ‘interesting’ way to cure COVID-19.

He later tried to back-pedal, claiming he was being sarcastic. But that wasn’t how most of the world took it.

Dangers of medical disinformation

The mainstream media and the public widely lambasted this dangerous medical disinformation.

Amid the furore over Trump’s remarks, a major disinfectant firm issued a statement urging the public not to inject or drink any of their products.

However, members of pro-Trump Facebook groups dedicated to conspiracy theories displayed quite the opposite reaction. 

I examined some of these groups to provide comment for an article in CodaStory. I’d previously compiled this list of groups because of their strong focus on various ‘corona disinformation conspiracies’.

These include claims that 5G causes the virus, that the virus is a US bioweapon, and that Bill Gates orchestrated the ‘virus hoax’ as part of his ambition to enforce a worldwide vaccine programme.

Many of the groups also centred around the QAnon conspiracy theory.

Pro-Trump Facebook reactions

You might expect the suggestion of injecting bleach to be a step too far even for these largely pro-Trump groups. Not so. 

In my initial observation of the groups, I noticed three distinct ways in which the members attempted to account for Trump’s bizarre medical disinformation.

First, that Trump was just ‘playing the media’, and that anyone who believed he meant what he said must be stupid.

Commenters also attributed all the negative media coverage to ‘yet another’ MSM (mainstream media), liberal, or Democrat attempt to smear Trump.

Second, some commenters claimed that the media had quoted Trump ‘out of context’. According to them, he was speaking ‘more generally’ about possible ways to treat COVID-19.

Others highlighted a fact-check article from the far-right news outlet Breitbart. But no one acknowledged the widely available videos of Trump making these claims.

The third claim relates more closely to another strand of COVID-19 medical disinformation: ‘miracle cures’. One commenter argued that Trump must have been referring to existing treatments such as UV light therapy and ozone therapy.

Things got more interesting when the commenter drew links between the medical disinformation about bleach and the popular narrative of ‘Vitamin C as miracle cure’.

They claimed that taking Vitamin C causes hydrogen peroxide to build up in the body, and that because hydrogen peroxide has a disinfectant effect, Trump’s comments had a basis in medical fact.

Rationalising medical disinformation

These three counter-narratives about Trump’s medical disinformation all attempt to rationalise an influential figure making a dangerous and irresponsible remark.

Tribal attitudes drive many of these rationalisations, for example the claims that the media purposefully misinterpreted Trump’s comments as part of a ‘libs’ or ‘Dems’ smear attack. Once again, this reinforces the existing divide between populist pro-Trump narratives and the mainstream.

The question remains: How many of these Facebook group members are genuine American citizens? Facebook itself is the only entity that could properly attribute the accounts. And it doesn’t seem to be giving much away.

I suspect group members are a mix of genuine Trump supporters and astroturfers working to stir up tribal hatred of the ‘other side’.

Tribal attitudes can be dangerous, particularly in relation to public health. People in the pro-Trump tribe are more likely to challenge messages from the perceived ‘outgroup’ (‘experts’ and the ‘MSM’) such as critical public health advice from the WHO.

A similar dynamic has fuelled recent anti-lockdown protests across the US, which may already have spread the virus further and undermined public health efforts across the country. Astroturfing was certainly a factor there, and there’s no reason why it couldn’t be influencing these groups too.