
Category: Election Monitoring

Coronavirus Conspiracy Theories, Tribalism And Public Health

During the pandemic, large crowds of Trump supporters took to the streets of US cities, demanding an end to coronavirus restrictions such as lockdowns and mask-wearing. Britain saw similar protests, albeit on a smaller scale.

Why are some people so determined to ignore public health advice? Part of the answer may be found by examining political tribalism and coronavirus conspiracy theories.

In this post, I’ll explain how coronavirus conspiracy theories and disinformation leverage tribalism to influence people’s behaviour.

Divided societies, universal threat

When the pandemic first hit, some hoped that the shared experience of facing universal threat would bring warring political tribes together. But it seems the opposite is happening. This is partly driven by an organised and sustained campaign of disinformation and coronavirus conspiracy theories.

In the UK and US, government responses to the virus have been unlike those of many other countries. Portugal, Germany, New Zealand, Canada and South Korea have already managed to regain some level of control over the virus's spread.

In contrast, both the UK and the US were slow to implement lockdown measures. Both gave their populations mixed messages about how to handle the pandemic. Both countries’ leaders have displayed a cavalier attitude towards the virus.

Political tribalism in the UK and the US is now affecting their populations’ response to the coronavirus crisis. This tribalism is a hangover from 2016, the same force that played a role in the election of Trump and the vote for Brexit – polarising the populations in the process.

Coronavirus conspiracy theories demonise groups

A sustained torrent of coronavirus disinformation has compounded these issues. In particular, numerous coronavirus conspiracy theories have eroded trust in public institutions among some segments of the population. Distrust of experts is nothing new. It’s been a central feature of tribal politics since 2016 and shows no sign of dissipating in this pandemic.

Common coronavirus conspiracy theories include claims that the virus is a hoax or a bioweapon, that 5G causes it, that mandatory vaccinations are planned, and that military martial law is imminent.

Tribalism means viewing the world as ‘us vs them’, with ‘us’ being superior and ‘them’ being threatening. This perspective is inherent in these coronavirus conspiracy theories.

Many revolve around the demonisation of a particular group (e.g. elites, the establishment, experts, the WHO, China, and so on). True believers view anyone who supports the demonised group as being part of it. And so the tribal divisions persist.

These coronavirus conspiracy theories cast doubt on the public health situation. They promote distrust of expert advice and official organisations. The result is shifts in population behaviour, e.g. people refusing to follow lockdown, wear masks or practise social distancing.

From Facebook to the streets

The situation has become particularly dangerous in the US, with its current protests. Here the role of social media comes under the spotlight.

Private Facebook groups have been key sites for inciting and organising these protests. Some groups are large, such as 'Michiganders Against Excessive Quarantine' or 'Reopen Virginia' (the latter with over 18,000 members).

Both groups are full of talk of coronavirus conspiracy theories, such as the example below from the Michigan group.

Source: https://twitter.com/willsommer/status/1250838111992647680

Below is an example comment from the ‘Reopen Virginia’ group. This user is calling for civil unrest, while also demonising the outgroup (‘leftist Democrats’). The post has attracted significant engagement, both comments and likes.

Source: https://twitter.com/jaredlholt/status/1250842215435337728/photo/3

These posts show how belief in tribal coronavirus conspiracy theories can lead to virus scepticism and denial. Such belief can also trigger people to take real-life protest action, which risks turning violent.

Furthermore, it’s not easy to know who is producing these comments. Do they reflect the views of genuine American citizens? Or are some of the comments being astroturfed by those who seek to create social unrest?

Coronavirus conspiracy theories are a problem for other social media platforms too. YouTube hosts thousands of videos discussing all kinds of conspiracy theories in great detail. The platform recently changed its policies in an attempt to crack down on coronavirus conspiracy theories and 5G content. But it’s likely too little, too late.

The trouble is, platform takedowns look like elite censorship to people already suspicious of experts and official organisations. This adds even more fuel to the fire of coronavirus conspiracy theories.

Local groups are key targets

Private local Facebook groups are a prime target for influence operations. They have already been identified as key battle sites for the US 2020 election, where influence operatives aim to manipulate the political narratives in key swing states.

Targeting local Facebook groups is an effective way to do this. As well as activity such as voter suppression in these groups, influence operations can also compel populations to protest on the streets.

It’s difficult for researchers and analysts to study private Facebook groups in aggregate, as tools such as CrowdTangle don’t allow access to private groups.

These groups are hotspots for US 2020 manipulation activities. Facebook should monitor them carefully. Its moderators should look out not only for signs of voter suppression, but also for coordinated attempts to incite populations to violence.

We must take coronavirus conspiracy theories seriously

These times of heightened fear offer a prime opportunity for disinformation purveyors to influence the outcome of the US 2020 election.

When political tribalism is so entrenched, fact checking and counter disinformation messaging campaigns may be less effective on a large scale. Instead, they risk exacerbating people’s existing suspicions of the establishment and ‘elites’.

Coronavirus conspiracy theories are not trivial. They risk causing harm on a massive scale, by encouraging populations to ignore public health advice and instigate real life violence.

It’s essential that social media companies take coronavirus conspiracy theories seriously, particularly within private groups. Whether or not they do so may end up as a key deciding factor in the US 2020 election.

6 Things I’ve Learned From Tracking Coronavirus Disinformation

Disinformation thrives on chaos, and a global pandemic is about as chaotic as it gets. For those who seek to disinform, the coronavirus presents a far grander opportunity than either the 2016 US election or the vote on Brexit. The upcoming 2020 US presidential election further fans the flames.

With that in mind, it’s important to regularly stop and take stock of lessons learned from the front lines of disinformation tracking. I’ve been studying cross-platform coronavirus narratives for the last month or so. Here are a few of the things I’ve found.

Main themes in coronavirus disinformation

1. Q is a key player in coronavirus disinformation

Qanon is a mega conspiracy narrative that encompasses a whole range of smaller ones. Its basic premise has Donald Trump in league with a shadowy figure called Q. Together, Trump and Q are fighting a group of elite paedophiles entrenched within the mainstream media and the Democratic Party.

Former presidential candidate Hillary Clinton and current candidate Joe Biden have both been major targets of Q’s accusations. Every so often, Q releases tantalising nuggets of new information (called ‘Q drops’) for his followers to chew over. These have sparked a whole ecosystem of pervasive social media content, from Twitter threads to entire YouTube channels.

Q and his followers have leveraged coronavirus disinformation to great effect. Q-related themes and activity underpin many of the most widely spread coronavirus conspiracy theories, including the virus being either a hoax or a bioweapon, 5G causing the virus, a supposed plan to enforce mandatory vaccinations, and the imminent arrival of military martial law.

2. Mainstream media is pushing conspiracy narratives

Conservative media sources in the US, such as Fox News, play a significant role in promoting narratives that draw on conspiracies, including around coronavirus disinformation. They claim the virus is ‘not a big deal’, or ‘just like the flu’, or ‘all a big hoax’.

Although these stories may be less colourful than those of the average Q acolyte, they are still risky. Coming from established media sources gives the narratives the social proof they need to appear more credible in the minds of their audiences.

What’s more, this scenario means less work for those who intend to manipulate public opinion around the coronavirus. They no longer have to waste time crafting convincing content, but can simply engage with organic content that already exists. And that’s exactly what they’re doing, with a firm eye on the US 2020 election.

3. Coronavirus tribalism is prevalent

Pitting ‘us’ against ‘them’ is at the core of most disinformation, including conspiracy theories. The narratives can take many forms, but always come down to one group (the ingroup) facing off against a predefined opposing group (the outgroup).

For Qanon, it’s Q’s followers who are the ‘enlightened’ ingroup, joining forces with him and Trump to battle the predatory elites. In British politics, we see ‘patriotic’ supporters of Brexit setting themselves against ‘treacherous’ Remainers (and vice versa).

Tribalism even filters down to matters of life or death, i.e. the coronavirus. On social media, I’ve noticed a recurring adversarial narrative emerging around how best to respond to the pandemic.

One camp downplays the severity of the virus, claiming measures such as the lockdown are an overreaction, while the other camp is strongly in favour of lockdown and promotes WHO advice to Stay At Home. Each camp supports their own and attacks the other, often in derogatory and aggressive ways.

When people are already suspicious of ‘elites’ and experts, there’s a real tendency to dismiss guidance from governments and public health organisations, which can lead to the flouting of virus mitigation measures. Real world harms can result.

4. Virus fears are being monetised 

The chaos and fear of a global pandemic have spawned many opportunities for leveraging the attention economy. As well as conspiracy theories, there are many examples of people making money via coronavirus disinformation, by tapping into people’s fear, boredom, and increased need for answers.

I’ve identified two main ways of doing this. The first is through creating highly clickable content about the virus. This content may or may not be factual; it doesn’t matter to the creator, as long as it brings in the clicks.  The content is published on websites festooned with online ads, where each click brings extra ad dollars to the site owner.

The second way is to create content on topics such as ‘miracle cures’, which then feeds into attempts to sell products. Vitamin C is a prime example. It’s a cynical exploitation of people’s fearfulness about the virus and their need to somehow regain a sense of control.

These ‘miracle cures’ are not scientifically proven. They provide a false sense of security, which may lead to individuals choosing not to self-isolate and spreading the virus as a result.

5. Takedowns have a ‘backfire effect’ 

Takedowns are a necessary part of tackling the coronavirus disinformation problem. However, denying bad actors freedom of reach can also strengthen the impetus behind conspiracy theories by feeding into an existing sense of elite suppression. Here, the platforms are viewed as part of the elite, working together to keep the ‘truth’ hidden from the people.

Conspiracy theorists are quick to react to takedowns by working them into their coronavirus disinformation narratives. With 5G, for example, a trend has sprung up of writing it as ‘5gee’ or similar permutations, in an attempt to stop the keyword being picked up by moderators or analysts tracking it.
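For analysts tracking this kind of evasion, here’s a minimal Python sketch of how obfuscated keyword variants might be caught (the pattern and the variants it covers are illustrative assumptions, not an exhaustive list):

```python
import re

# Loose pattern for common '5G' obfuscations: '5G', '5g', '5gee',
# '5-g', '5 gee', 'five g', etc. (illustrative, not exhaustive)
FIVE_G_PATTERN = re.compile(
    r"\b(?:5|five)\s*[-_.]?\s*g(?:ee)?\b",
    re.IGNORECASE,
)

def mentions_5g(text: str) -> bool:
    """Return True if the text appears to reference 5G, including obfuscated spellings."""
    return bool(FIVE_G_PATTERN.search(text))

# Example usage
posts = ["The 5gee towers did it!", "Upgrade to 5-G today", "Nothing to see here"]
print([p for p in posts if mentions_5g(p)])  # the first two posts match
```

In practice, word lists like this need constant upkeep, since adherents keep inventing new spellings precisely to stay ahead of them.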

For conspiracy adherents, this sense of persecution further reinforces their existing worldview, making them more likely to cling onto it. In this way, a ‘backfire effect’ has occurred. 

6. Platform responses are shifting 

Social media companies are frequently accused of not doing enough to reduce the flood of misleading content that overwhelms their platforms. I don’t believe they’re reluctant to do so, but they have to balance it with being seen as supportive of free speech. Finding that balance can be challenging when addressing conspiracy theories, as opposed to purely false information.

Most conspiracy theories are spun up like candy floss around a small kernel of truth. A typical post will build a whole story around how some real life event is of possible significance to the wider narrative arc. This creates murky territory for the platforms because the difference between opinion and actual false information is not always clear-cut.

But things have shifted after some conspiracy theories, such as the one about 5G causing coronavirus, triggered real life harms. A recent video by notorious conspiracy theorist David Icke was pulled from YouTube just days after it was released, heralding a change in approach.

A growing amount of research indicates that coronavirus conspiracy theories form a central part of coordinated influence operations.  We can no longer afford to overlook the role of conspiracy theories in influence operations. 

Behind the hashtag: Who’s tweeting about #SurrenderAct?

If you’ve been following the latest news about Brexit, then you’ve probably heard about the so-called ‘Surrender Act’.

It’s Boris Johnson’s way of describing the Benn Act, passed by Parliament earlier this month to prevent a no-deal Brexit. The Act compels Johnson to seek an extension to Article 50 if he can’t reach a deal with the EU by October 19, 2019.

Johnson’s supporters didn’t approve of this legislation. They claimed that the Act would ‘undermine’ Britain’s negotiating power with the EU.

#SurrenderAct immediately started trending on Twitter. But who exactly was tweeting it? I jumped into the analytics to find out.

When did the hashtag start?

When analysing a hashtag, I usually begin by checking when it was first tweeted, and by whom. #SurrenderAct was first used by an account that really didn’t want to look like a bot…

Below we see a sharp spike in activity around the hashtag: it was tweeted more than 3,000 times in 12 hours (mainly during the UK night time).

So who else is tweeting about #SurrenderAct? Below are the top 10 most active hashtag users. In the rest of this post, I’ll put these accounts under the microscope.
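For the curious, here’s a minimal sketch of this kind of hashtag analysis in Python, assuming the tweets have been exported to a CSV file (the filename and column names are illustrative assumptions):

```python
import pandas as pd

# Assumed export: one row per tweet, with 'username' and 'timestamp' columns
tweets = pd.read_csv("surrenderact_tweets.csv", parse_dates=["timestamp"])

# When was the hashtag first used, and by whom?
first = tweets.sort_values("timestamp").iloc[0]
print(f"First tweet: {first['timestamp']} by @{first['username']}")

# Hourly volume, to spot spikes in activity
hourly = tweets.set_index("timestamp").resample("1h").size()
print(hourly.sort_values(ascending=False).head())

# The top 10 most active accounts on the hashtag
print(tweets["username"].value_counts().head(10))
```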

Bot, cyborg, or organic human?

You’re probably wondering how many of these accounts are bots. Time for a quick reminder about what bots can (and can’t) do on Twitter. They’re pieces of code designed to amplify a particular hashtag, user or keyword. DFR Lab has a useful guide for spotting automated accounts.

The most obvious indicator of ‘bot-ness’ is a high level of activity, i.e. non-human tweeting patterns. Two other top indicators are:

  • Anonymity: no photo (or a generic one), a non-specific and usually political bio, and a vague location such as ‘England’.
  • Amplification: only retweeting or liking other people’s tweets, i.e. boosting their messages in a quick and low-effort way.
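To make these indicators concrete, here’s a rough Python sketch of how they might be scored (the thresholds and field names are my own illustrative assumptions, not an established methodology):

```python
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float  # average daily activity since joining
    has_photo: bool
    bio: str
    location: str
    retweet_share: float   # fraction of activity that is retweets (0.0 to 1.0)

def bot_indicators(acc: Account) -> list[str]:
    """Flag the bot-like signals described above. Heuristic only: any single
    signal can have an innocent explanation."""
    flags = []
    if acc.tweets_per_day > 72:  # DFR Lab's 'suspicious' benchmark (see below)
        flags.append("high activity")
    if not acc.has_photo or not acc.bio.strip():
        flags.append("anonymity")
    if acc.location.strip() in {"", "England", "UK"}:  # vague locations (illustrative)
        flags.append("vague location")
    if acc.retweet_share > 0.9:  # almost pure amplification
        flags.append("amplification")
    return flags

# Example: an anonymous, hyperactive, retweet-only account
suspect = Account(tweets_per_day=1000, has_photo=False, bio="", location="", retweet_share=1.0)
print(bot_indicators(suspect))
# ['high activity', 'anonymity', 'vague location', 'amplification']
```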

Bots are less effective in human-to-human engagement, such as arguing with other Twitter users. That’s more likely to be human operators (or cyborgs, which mix bots with humans).

So, if boosting #SurrenderAct was the main purpose of these accounts, then we’d expect to find evidence of typical bot-like behaviours.

Let’s take a look at three interesting accounts within the top 10.

1. The Hyper-Prolific Tweeter

This account is new to Twitter, having joined in March this year. It has no photo (only the typical ‘egg’) and no bio. Definitely low effort.

But its rate of tweeting is impressive: in a short space of time, ‘christine’ has achieved a rate of over 1,000 tweets per day.

Researchers cite a number of different benchmarks for identifying ‘bot-ness’. The Oxford Internet Institute says it’s an average of 50 tweets per day. DFR Lab is more generous. It claims that 72 tweets per day would be suspicious, and over 144 would be ‘highly suspicious’.

Remember too, that retweeting is faster and lower effort than creating replies or original tweets.

As shown above, ‘christine’ is going full bot. 100% of the account’s activity is retweets, all from the Twitter for iPhone app.
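As a worked example of how an activity rate like this is calculated (the tweet count and dates here are illustrative assumptions):

```python
from datetime import date

def tweets_per_day(total_tweets: int, joined: date, today: date) -> float:
    """Average daily activity since the account was created."""
    days_active = max((today - joined).days, 1)
    return total_tweets / days_active

# An account that joined in March and has racked up 210,000 tweets by late September
rate = tweets_per_day(210_000, date(2019, 3, 1), date(2019, 9, 27))
print(f"{rate:.0f} tweets/day")  # 1000: roughly seven times DFR Lab's 'highly suspicious' 144
```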

2. The Latent Islamophobe

‘Sue Reap’ is at number eight among those who tweeted #SurrenderAct the most. There are some interesting things going on with this account. Its bio is peppered with Tommy Robinson references and hashtags.

The account joined Twitter over seven years ago. But a couple of quick advanced searches shows that it didn’t tweet anything for most of 2012 or 2013.

Or perhaps it did, but those tweets were later deleted. It’s not easy to know.

Suddenly, ‘Susan’ springs into action in late 2013/early 2014 with a flurry of anti-Muslim tweets.

We can see that this account has a suspiciously high activity rate, producing 126.88 tweets per day, of which 22% are replies.

This rate puts the account close to the DFR Lab’s ‘highly suspicious’ bracket of 144 tweets per day.

So has ‘Susan’ given up on Tommy?

Not in the slightest. He’s still foremost in her mind, right up there with leaving the EU. It’s practically an obsession.

3. The ‘true-blue’ Brexiteer

This account is likely to be ‘organic’, i.e. a normal human user. It’s become quite Brexity in recent years, but still within the realms of normal human behaviour.

‘Pat’ was an early adopter of Twitter, joining in 2009, possibly at the age of 55 (guessing from the handle). That would put them in their mid-60s now: the typical Brexit voter demographic.

At the beginning, ‘Pat’ tweeted everyday comments about garden parties and Michael Jackson. There was no sign of anything political.

In April 2016, when the referendum had been announced, ‘Pat’ was tweeting happily about normal things: celebrities, photography and TV shows.

But come May, as Britain inched closer to the date of the referendum, Pat’s political side suddenly became apparent. Out came the pro-Brexit tweets.

Despite this, the account is still within the realms of normal. An activity rate of 33 tweets per day is nowhere near ‘bot-ness’. What’s more, replies make up 82% of its activity, showing that this account engages a lot with other users rather than simply retweeting things blindly. This is not typical ‘bot’ behaviour.

It’s likely a typical older Brexit voter who has become somewhat radicalised by the debate’s tribal nature (and this isn’t unique to Brexit voters; it happens on both sides).

These accounts form just a tiny sample of the millions of accounts out there engaging with political content.

Key takeaway: Don’t just assume everyone is a bot; instead think critically before jumping to conclusions.

A Tale Of Two Leaders: Facebook, Astroturfing, And Social Proof

There’s something odd about the Prime Minister’s Facebook page.

Underneath every post, especially those about Brexit, are hundreds of responses. This isn’t unusual for the page of a public figure, but the style of the responses doesn’t ring true.

They are all very similar: short utterances of praise for Boris Johnson, repeating words and phrases such as ‘brilliant’, ‘fantastic’, and ‘support Boris 100%’. Each comment is festooned with Facebook’s emojis, mainly representing the positive sentiments of ‘like’, ‘love’ and ‘laugh’.

This behaviour feels odd. I’m not denying that a lot of genuine people do support Johnson, but it’s suspicious for so many to consistently comment on his posts in this distinctive and repetitive fashion.

Screenshot of Boris Johnson’s Facebook page, with a selection of comments about Brexit.

Let’s contrast this with the page of his predecessor, Theresa May, specifically her Brexit-related posts. Here we see a very different scenario.

Responses to May’s posts tend to be a lot more varied in content, tone and length. Some disagree with her. Others support her. But most are expressed with more depth and linguistic sophistication than the short, repetitive replies on Johnson’s page.

In short, the responses on May’s page look far more likely to be ‘organic’ (i.e. produced by real people behaving naturally) than the majority of those on Johnson’s. It’s possible that Johnson’s page is using artificial amplification techniques, which may include fake followers.

Screenshot of Theresa May’s Facebook page showing a sample of typical comments about Brexit. Note the contrast with Johnson’s page.

Facebook locks its data down tight, so it’s hard to run further analysis to determine for certain whether the Johnson supporters are part of an organised campaign.

But we can draw from previous examples. Donald Trump used fake Facebook followers during the 2016 US presidential campaign. Researchers discovered that over half of the followers on his page came from countries known as hubs for Facebook ‘like farms’.

These ‘farms’ are often found in developing nations such as the Philippines and India, where the dollar stretches a long way. They offer customers the opportunity to buy fake Facebook likes to create the impression of popular support.

As well as likes, customers can purchase fake engagement, usually in the form of comments. This may explain the unusual commenting activity on Johnson’s page.

For political purposes, this type of artificial campaign is an important tool, because it generates the illusion of grassroots support for a particular figure or issue. It even has a name: astroturfing.

Illusion becomes reality when the fake engagement intersects with genuine users, who are more likely to engage with seemingly popular posts thanks to the effect of ‘social proof’ – a psychological phenomenon where people tend to follow the actions of the masses.

This can be leveraged to great effect in social media environments, where user attention spans are low, knee-jerk reactions are prevalent, and ‘likes’ are an addictive form of currency.

Tracking A Brexit Influence Campaign: #BritainIndependence

The atmosphere in Britain is becoming increasingly heated as October 31st inches ever closer. This is the date when the country will leave the European Union — if all goes to plan for Boris Johnson and the Conservative Party. Right now the political stakes are higher than ever.

Parliament has tried to protect the country from the potential catastrophe that may result from leaving without a deal. In the nick of time before prorogation, they passed a last-minute bill into law, designed to compel Johnson to seek an extension, if he can’t reach a suitable deal with the EU by the end of October.

Johnson has already lost multiple times in Parliament, despite his efforts to prorogue it for an unusually long time. Last week, a Scottish court ruled that this prorogation was unlawful. This week, the case moves to the UK Supreme Court, where the ruling will be challenged.

#BritainIndependence

In this highly-charged environment, Twitter has provided a constant source of lively political debate around Brexit. Many issues are bubbling up at present, some more relevant than others, but here I want to explore a particularly interesting hashtag.

#BritainIndependence has been popping up a lot lately. The first thing to do is to find out when it was first used, and who first used it. The hashtag came fairly late to the game, on September 9, via a user called ‘Trevor’, whose screen name is stuffed with pro-Brexit hashtags.

Signalling ingroup identity

A quick glance at Trevor’s bio is revealing. First, the bio is a strong indicator of self-professed identity on Twitter. In Trevor’s case, it contains words that reflect traditional values: conservative, nationalist, family-oriented, words such as ‘Christian’, ‘white’, ‘loyal British subject’, and ‘married’.

This creates a sense of group identity, designed to give Trevor’s profile immediate appeal to others who identify with similar values – i.e. signalling that he is part of an ingroup. In this case, the ingroup is pro-Brexit Twitter users.

The references to popular British football teams (Arsenal and Rangers) are likely designed to portray the account as belonging to a genuine working-class British citizen – another effort at signalling ingroup identity.

But the cultural references feel jumbled: Arsenal is an English team, while Rangers is a Scottish team. That plus the random mention of Northern Ireland means this bio doesn’t quite ring true. In fact, it feels like someone playing at being a Brit, perhaps for nefarious reasons.

What’s more, ‘neighbor’ is spelled in US English. No genuine speaker of British English would use US spelling, especially a man who seems so deeply committed to British patriotism. The clue: Trevor is likely not the grassroots British man he pretends to be.

We could dig much deeper into Trevor’s account, especially his past tweet history. His account is devoted to tweeting about Brexit, even though it was created in September 2015, months before the EU referendum. It would be interesting to see what Trevor was tweeting about between then and June 2016, but that’s a topic for another post…

Hashtag activity

Next up, let’s take a look at how Twitter users have been interacting with the #BritainIndependence hashtag, since ‘Trevor’ coined it on September 9th (coincidentally, just in time for the prorogation of Parliament).

All of the most retweeted tweets on the #BritainIndependence hashtag come from users with heavily pro-Brexit screen names (and usernames like @Brexit4me and @Feck_the_EU), suggesting one-topic accounts that exist simply for the purpose of engaging with Brexit-related discussions.
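As a rough illustration of how such single-issue accounts might be surfaced programmatically (the keyword list and example accounts are illustrative assumptions):

```python
# Flag accounts whose handle or bio is stuffed with topic keywords,
# a crude signal of single-issue accounts
TRIBAL_KEYWORDS = {"brexit", "leave", "remainer", "patriot", "maga", "independence"}

def keyword_stuffing_score(handle: str, bio: str) -> int:
    """Count how many tribal keywords appear in the handle and bio combined."""
    text = f"{handle} {bio}".lower()
    return sum(keyword in text for keyword in TRIBAL_KEYWORDS)

accounts = [
    ("@Brexit4me", "Leave means leave! Patriot. #MAGA #BritainIndependence"),
    ("@jane_gardens", "Gardening, dogs and tea."),
]
for handle, bio in accounts:
    print(handle, keyword_stuffing_score(handle, bio))
# A high score doesn't prove inauthenticity, but it marks the account for closer review
```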

Retweets have two main functions: 1) they spread a message across Twitter, and 2) they create validation for the message via social proof (i.e. if other people have engaged with this tweet, then it must be worth engaging with).

Liking (or favouriting) tweets reinforces the sense of social proof, while also increasing the likelihood of the tweet being seen in other users’ feeds.

The tweets below contain strong evidence of typical adversarial narratives, designed to promote a sense of tribalism, i.e. “us vs them”.

Examples include:

  • ‘Unelected judges’ and ‘hijacked by extremists’ (fits into narrative of the EU/Remain supporters being ‘anti-democratic’)
  • ‘Tattooed Eastern European thug’, and ‘brutal rape’ (fits into narrative of foreigners and ‘The Other’ as being threatening, especially to women)
  • ‘Me, just a patriot’ (supports the narrative of pro-Brexit voters as being especially patriotic – a similar notion to that conveyed by Trump’s ‘Make America Great Again’ (#MAGA), which coincidentally appears in many of the bios of users tweeting the #BritainIndependence hashtag)

Clearly, the #BritainIndependence hashtag exists to stoke further divides between the two sides in the Brexit debate, while creating the illusion of widespread support for the pro-Leave side. It’s highly likely that the hashtag was initialised for that very purpose, as the nature of ‘Trevor’s’ account suggests.

Furthermore, it’s telling that this hashtag coincides with several significant real-life events in the Brexit timeline that could threaten the pro-Brexit side, including the beginning of (unlawful) prorogation and the case going to the Supreme Court.

But why are so many of the top posting accounts so similar, with their bios stuffed with tribal keywords? And why are so many of them blatantly US-centric, or marked by such obvious cultural errors (such as the spelling of ‘favorite’)?

This could indicate an organised social media manipulation campaign aiming to create the illusion of widespread support for the pro-Brexit side while deepening social and political divisions at a critical juncture for the country.

As October 31 inches closer, the discussion is certain to get even more heated – and we’re sure to see lots of interesting social media activity.

I’ll post further analyses here between now and October 31.