
How Disinformation Hacks Your Brain

Today I’m going to explain how disinformation hacks your brain.

In our fast-moving online world, even the most aware of us can be taken in by disinformation. Human minds work in certain ways that leave us vulnerable to deception – and no one is immune.

Our perceptions are not as reliable as we might like to imagine, and the online environment amplifies these flaws. In this post, I’ll discuss five important psychological traits that dictate how disinformation hacks your brain.

5 Ways Disinformation Hacks Your Brain

Confirmation Bias

Search engines give us access to all the world’s information simply by typing a few words into a search bar.

Because of confirmation bias, people tend to search only for information that reinforces their beliefs. Furthermore, even if what they find is disinformation, the effect of confirmation bias makes them less likely to question its veracity.

For example, let’s take someone who already dislikes Donald Trump. They might search Google for “why is Trump still president?”

This search produces a slew of articles critical of Trump, feeding into the person’s existing beliefs. Consequently, this becomes a vulnerable moment during which disinformation can easily find a foothold in the mind.

Social Proof

The term ‘social proof’ was coined by Robert Cialdini in his seminal book on persuasion, Influence. It’s a way of building trust in a person, a product or a message by demonstrating that many people approve of it.

The bandwagon effect is the motivating force driving social proof. It dictates that if something seems popular, people will feel compelled to join in.

Social proof is especially important in today’s environment of information overload. Because there are so many options available to us, we need a shortcut to help us cut through the noise and determine which ones to trust.

For marketers, social proof is an essential tool. It’s also a powerful weapon in the arsenal of disinformation. Devices such as bots play a major role in building social proof around certain messages, including false ones.

Liking, sharing and replying to these messages creates an illusion of widespread approval, which attracts more people to trust them. This may snowball, causing the message to go viral.

There’s a lot more to say about the role of social proof in disinformation, and I’ll explore it in more detail in a follow-up post. For now, remember that it’s easy to fake online popularity, so likes and retweets aren’t always a reliable indicator of grassroots public opinion.

The Consensus Effect

We like to think that many other people share our beliefs, preferences, values and habits, even when that’s not actually the case. Behavioural psychology calls this overestimation the false consensus effect. It stems from our self-esteem and our desire to conform within a social group: we need to feel that we fit in.

Online, the false consensus effect is amplified in two main ways:

  • By means of algorithms that show us opinions reflecting our own (filter bubble effect),
  • By our habit of engaging only with others who support our views (echo chamber effect).

Disinformation that taps into the false consensus effect can find a fertile environment to take root, grow and mutate. Social media helps this happen. No matter how convinced you are of a certain view, you should never forget that other people may well think differently.

Tribalism

Humans are social animals. Consequently, gaining the approval of a likeminded group is important for boosting our self-esteem. We reinforce this sense of self-esteem by behaving in ways that favour our own group (known as the in-group).

For example, we might post on social media about the positive traits of our in-group, which is relatively harmless in itself. However, every in-group needs an out-group.

Furthermore, where there’s in-group loyalty there may also be out-group derogation – negative attitudes and behaviour towards the out-group. This conflict between groups of all kinds is a form of tribalism. It plays a huge role in how disinformation hacks your brain.

With emotive issues like politics, which tap into aspects of people’s identities, tribalism can morph into a dangerously powerful force. Violence can easily follow. In fact, tribalism is the driving force behind many human conflicts.

Disinformation leverages the human tendency for tribalism by creating and disseminating adversarial narratives. These inflame existing divisions, creating a sense of ‘us vs them’. We can observe many cases of this in recent political events.

Examples include Trump supporters vs Clinton supporters in the US, Leavers vs Remainers in the UK, Muslims vs Buddhists in Myanmar, Han fans vs Tsai fans in Taiwan’s recent presidential election.

The Backfire Effect

You might expect people to stop believing in disinformation once they’re told it’s untrue. That seems logical; however, human psychology doesn’t always work that way. The root of the problem is found (once again) in our self-esteem.

When certain beliefs become embedded in our worldview, they also become part of our identity. If one of those beliefs is challenged, it feels as if someone is shaking the very foundations of that identity.

Challenges to our identity can be psychologically painful. In response, we may cling even tighter to the original belief, making it stronger still. The attempt at correction backfires, which is why this process is known as the backfire effect.

Summary: How Disinformation Hacks Your Brain

  • Human psychology makes us susceptible to disinformation
  • In a world of information overload, we seek shortcuts to help us navigate. However, these shortcuts, such as social proof, can be gamed.
  • Much of online behaviour has its roots in aspects of self-esteem and identity.
  • Simply ‘debunking’ disinformation may not be effective, due to the backfire effect.
  • Adversarial narratives are a common feature of disinformation, found in many situations worldwide. They can lead to tribalism, which risks real-life violence.

Behind the hashtag: Who’s tweeting about #SurrenderAct?

In the world of all things Brexit, ‘Surrender Act’ was just another catchphrase.

Coined by Boris Johnson, it was his way of describing legislation passed by UK parliament in September 2019 to prevent a no-deal Brexit. The legislation compelled Johnson to seek an extension to the process, if he hadn’t reached a deal with the EU by October of that year.

Johnson’s supporters didn’t approve of this legislation. They claimed that the Act would ‘undermine’ Britain’s negotiating power with the EU.

#SurrenderAct immediately started trending on Twitter. But who exactly was tweeting it? I jumped into the analytics to find out.

When did the hashtag start?

When analysing a hashtag, I usually begin by checking when it was first tweeted, and by whom. #SurrenderAct was first used by an account that really didn’t want to look like a bot…

Below we see a sharp spike in activity around the hashtag: it was tweeted more than 3,000 times in the space of 12 hours (mainly during the UK night time).

So who else is tweeting about #SurrenderAct? Below are the top 10 most active hashtag users. In the rest of this post, I’ll put these accounts under the microscope.
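
If you want to run this kind of check yourself, here’s a minimal sketch of those first two steps: finding a hashtag’s first use and its most active accounts. It assumes you’ve already exported the tweets to a CSV; the file name and column names are hypothetical, so adapt them to whatever your collection tool produces.

```python
import csv
from collections import Counter
from datetime import datetime

def summarise_hashtag(path):
    # Assumed CSV layout: 'timestamp' (ISO 8601) and 'username' columns.
    rows = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            row["timestamp"] = datetime.fromisoformat(row["timestamp"])
            rows.append(row)

    # The earliest tweet tells us who started (or first amplified) the hashtag.
    first = min(rows, key=lambda r: r["timestamp"])
    print(f"First tweet: {first['timestamp']} by @{first['username']}")

    # The ten accounts that used the hashtag most often.
    for user, count in Counter(r["username"] for r in rows).most_common(10):
        print(f"  @{user}: {count} tweets")

summarise_hashtag("surrenderact_tweets.csv")  # hypothetical export file
```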

Bot, cyborg, or organic human?

You’re probably wondering how many of these accounts are bots. Time for a quick reminder about what bots can (and can’t) do on Twitter. They’re pieces of code designed to amplify a particular hashtag, user or keyword. DFR Lab has a useful guide for spotting automated accounts.

The most obvious indicator of ‘bot-ness’ is a high level of activity, i.e. non-human tweeting patterns. The other top indicators are:

  • Anonymity: no photo (or a generic one), a non-specific and usually political bio, and a vague location such as ‘England’.
  • Amplification: only retweeting or liking other people’s tweets, i.e. boosting their messages in a quick, low-effort way.

Bots are less effective at human-to-human engagement, such as arguing with other Twitter users. That kind of activity is more likely to come from human operators (or cyborgs, accounts that combine automation with human control).

So, if boosting #SurrenderAct was the main purpose of these accounts, then we’d expect to find evidence of typical bot-like behaviours.
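
Before we do, here’s a rough sketch of how those three indicators – activity, anonymity and amplification – might be combined into a simple score. The one-point-per-indicator weighting and the cut-offs are my own illustrative assumptions, not an established standard (activity benchmarks are discussed in the next section).

```python
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float       # average daily activity
    has_photo: bool             # real profile photo, not the default 'egg'
    has_specific_bio: bool      # bio says something beyond generic politics
    has_precise_location: bool  # more specific than 'England'
    retweet_ratio: float        # share of activity that is pure retweets

def bot_likeness(acc: Account, activity_threshold: float = 72.0) -> int:
    """Illustrative heuristic: one point per bot indicator (0 to 3)."""
    score = 0
    if acc.tweets_per_day > activity_threshold:  # non-human activity levels
        score += 1
    if not (acc.has_photo and acc.has_specific_bio
            and acc.has_precise_location):       # anonymity
        score += 1
    if acc.retweet_ratio > 0.9:                  # amplification only
        score += 1
    return score

# A hypothetical account matching the profile described below:
christine = Account(1000.0, False, False, False, 1.0)
print(bot_likeness(christine))  # -> 3, strongly bot-like
```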

Let’s take a look at three interesting accounts within the top 10.

1. The Hyper-Prolific Tweeter

This account is new to Twitter, having joined in March this year. It has no photo (only the typical ‘egg’) and no bio. Definitely low effort.

But its rate of tweeting is impressive! In a short space of time, ‘christine’ has achieved a rate of over 1,000 tweets per day.

Researchers cite a number of different benchmarks for identifying ‘bot-ness’. The Oxford Internet Institute uses an average of 50 tweets per day; DFR Lab is more generous, suggesting that 72 tweets per day is suspicious and over 144 ‘highly suspicious’.
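
To see how these benchmarks work in practice, here’s a minimal sketch that derives an account’s average daily rate from its join date and total tweet count, then checks it against the figures cited above. The example numbers are hypothetical, not ‘christine’s’ actual statistics.

```python
from datetime import date

def tweets_per_day(total_tweets: int, joined: date, observed: date) -> float:
    # Average rate over the account's whole lifetime.
    return total_tweets / max((observed - joined).days, 1)

def classify(rate: float) -> str:
    # Benchmarks cited above: Oxford Internet Institute ~50/day;
    # DFR Lab: over 72 suspicious, over 144 highly suspicious.
    if rate > 144:
        return "highly suspicious (DFR Lab)"
    if rate > 72:
        return "suspicious (DFR Lab)"
    if rate > 50:
        return "above the Oxford Internet Institute benchmark"
    return "within normal human range"

# Hypothetical figures for illustration only.
rate = tweets_per_day(25_000, date(2019, 3, 1), date(2019, 9, 28))
print(f"{rate:.1f} tweets/day -> {classify(rate)}")  # 118.5 -> suspicious
```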

Remember too, that retweeting is faster and lower effort than creating replies or original tweets.

As shown above, ‘christine’ is going full bot. 100% of the account’s activity is retweets, all from the Twitter for iPhone app.

2. The Latent Islamophobe

‘Sue Reap’ is at number eight among those who most tweeted #SurrenderAct. There are some interesting things going on with this account. Its bio is peppered with Tommy Robinson references and hashtags.

The account joined Twitter over seven years ago. But a couple of quick advanced searches show that it didn’t tweet anything for most of 2012 or 2013.

Or perhaps it did, but those tweets were deleted… it’s not easy to know.

Suddenly, ‘Susan’ springs into action in late 2013/early 2014 with a flurry of anti-Muslim tweets.

We can see that this account has a suspiciously high activity rate, producing 126.88 tweets per day, 22% of which are replies.

This rate puts the account close to the DFR Lab’s ‘highly suspicious’ bracket of 144 tweets per day.

So has ‘Susan’ given up on Tommy?

Not in the slightest. He’s still foremost in her mind, right up there with leaving the EU. It’s practically an obsession.

3. The ‘true-blue’ Brexiteer

This account is likely to be ‘organic’, i.e. a normal human user. It’s become quite Brexity in recent years, but still within the realms of normal human behaviour.

‘Pat’ was an early adopter of Twitter, joining in 2009, possibly at the age of 55 (guessing from the handle). That would put them in their mid-60s now: the typical Brexit voter demographic.

At the beginning, ‘Pat’ tweeted everyday comments about garden parties and Michael Jackson. There was no sign of anything political.

In April 2016, when the referendum had been announced, ‘Pat’ was tweeting happily about normal things: celebrities, photography and TV shows.

But come May, as Britain inched closer to the date of the referendum, Pat’s political side suddenly became apparent. Out came the pro-Brexit tweets.

Despite this, the account is still within the realms of being normal. An activity rate of 33 tweets per day is nowhere near ‘botness’. What’s more, replies make up 82% of its activity, showing that this account engages a lot with other users rather than simply retweeting things blindly. This is not typical ‘bot’ behaviour.

It’s likely to be a typical older Brexit voter who has become somewhat radicalised by the tribal nature of the debate (and this isn’t unique to Leave voters; it happens on both sides).

These accounts form just a tiny sample of the millions of accounts out there engaging with political content.

Key takeaway: Don’t just assume everyone is a bot; instead think critically before jumping to conclusions.

Astroturfing: A Quick Example from Facebook

What is Astroturfing?

Astroturfing is not new. Its history stretches back to the days of newspapers and pamphlets. But it has become a major concern in today’s ‘post-truth’ information environment.

The Guardian defines astroturfing as “the attempt to create an impression of widespread grassroots support for a policy, individual, or product, where little such support exists.”

The ‘grassroots’ part is where the name comes from; that bright green fake grass. You might remember it from the school sports field.

Social media is a prime environment for astroturfing campaigns. User attention spans are low, knee-jerk reactions are prevalent, and ‘likes’ are an addictive form of currency.

Illusion becomes reality when fake engagement intersects with genuine social media users, who are more likely to engage with seemingly popular posts because of social proof – the psychological effect whereby people support things that already seem popular with others.

An Example of Astroturfing

Let’s take a look at an example of suspected astroturfing on Facebook. Our starting point is the official Facebook page of the UK’s current prime minister, Boris Johnson.

Underneath every post on his page, especially those about Brexit, we can see hundreds of responses. That’s not unusual on the page of a public figure. But the style of those responses seems artificial.

Screenshot of Boris Johnson’s Facebook page, with a selection of comments about Brexit.

They are all very similar: short utterances of praise for Boris Johnson, repeating words and phrases such as ‘brilliant’, ‘fantastic’, and ‘support Boris 100%’. Each comment has also attracted a large number of positive reaction emojis: ‘like’, ‘love’ and ‘laugh’.

This behaviour is odd. Genuine people do support Johnson, of course. But it’s suspicious for so many to comment on his posts in this distinctive and repetitive way. This looks very much like an astroturfing campaign.
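
One way to put a rough number on that repetitiveness is to measure how similar the comments are to one another. Below is a minimal sketch using Python’s built-in difflib; the sample comments are paraphrased stand-ins in the style described above, not quotes scraped from the page.

```python
from itertools import combinations
from difflib import SequenceMatcher

# Paraphrased examples of the comment style described above.
comments = [
    "Brilliant! Support Boris 100%",
    "Fantastic, support Boris 100%",
    "Boris is brilliant, support him 100%",
    "Fantastic. Support Boris 100%!",
]

def mean_pairwise_similarity(texts):
    """Average string similarity (0 to 1) across every pair of comments."""
    pairs = list(combinations(texts, 2))
    return sum(SequenceMatcher(None, a.lower(), b.lower()).ratio()
               for a, b in pairs) / len(pairs)

print(f"{mean_pairwise_similarity(comments):.2f}")  # close to 1 = repetitive
```

A batch of near-identical praise scores close to 1; a genuinely mixed comment thread, like the one we’ll see next, scores much lower.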

More genuine engagement

Now let’s contrast this with the page of his predecessor, Theresa May, specifically her Brexit-related posts. Here we can see a very different scenario, which immediately feels far more genuine.

Screenshot of Theresa May’s Facebook page showing a sample of typical comments about Brexit. Note the contrast with Johnson’s page.

Responses to May’s posts are more varied in content, tone and length. Some commenters disagree with her. Others support her. But most commenters use more depth and sophistication of language than the short repetitive replies to posts on Johnson’s page.

The responses on May’s page are more likely to be ‘organic’ (i.e. from real people who behave naturally). In contrast, it appears that Johnson’s page is the subject of astroturfing techniques, which may include fake comments and even fake followers.

Facebook locks its data down tight, so it’s hard to run further analysis to determine for certain whether the Johnson supporters are part of an organised campaign. But we can draw insights from recent examples.

Donald Trump used fake Facebook followers during the US presidential campaign. Researchers discovered that over half of the followers on his page came from countries known as hubs for Facebook ‘like farms’.

It is common for like farms to exist in developing countries such as the Philippines and India, where much of the population speaks English and the US dollar stretches a long way.

The farms offer customers the opportunity to buy fake Facebook likes and Twitter follows, used to astroturf an impression of popular support.

As well as likes, customers can purchase fake engagement, usually in the form of comments. This may explain the unusual commenting activity on Johnson’s page.

Why astroturfing matters

Astroturfing matters because it’s a deliberate attempt to manipulate perceptions of popular opinion, with potentially dangerous results.

Although astroturfing has been a feature of political campaigning for decades, the social media environment gives it enormous power. Social media users have become far more susceptible to its effects than newspaper readers ever were.

When combined with disinformation and conspiracy theories, astroturfing has the potential to cause all sorts of social and political chaos. Many would argue that it already has.

Tracking A Brexit Influence Campaign: #BritainIndependence

The atmosphere in Britain is becoming increasingly heated as October 31st inches ever closer. This is the date when the country will leave the European Union — if all goes to plan for Boris Johnson and the Conservative Party. Right now the political stakes are higher than ever.

Parliament has tried to protect the country from the potential catastrophe of leaving without a deal. Just before prorogation, it passed a last-minute bill into law, designed to compel Johnson to seek an extension if he can’t reach a suitable deal with the EU by the end of October.

Johnson has already lost multiple times in Parliament, despite his efforts to prorogue it for an unusually long time. Last week, a Scottish court ruled that this prorogation was unlawful. This week, the case goes to the UK Supreme Court on appeal.

#BritainIndependence

In this highly-charged environment, Twitter has provided a constant source of lively political debate around Brexit. Many issues are bubbling up at present, some more relevant than others, but here I want to explore a particularly interesting hashtag.

#BritainIndependence has been popping up a lot lately. The first thing to do is to find out when it was first used, and who first used it. The hashtag came fairly late to the game, on September 9, via a user called ‘Trevor’, whose screen name is stuffed with pro-Brexit hashtags.

Signalling ingroup identity

A quick glance at Trevor’s bio is revealing. A bio is a strong indicator of self-professed identity on Twitter, and Trevor’s is full of words reflecting traditional, conservative, nationalist, family-oriented values: ‘Christian’, ‘white’, ‘loyal British subject’, and ‘married’.

This creates a sense of group identity, designed to give Trevor’s profile immediate appeal to others who identify with similar values – i.e. signalling that he is part of an ingroup. In this case, the ingroup is pro-Brexit Twitter users.

The references to popular British football teams (Arsenal and Rangers) are likely designed to portray the account as belonging to a genuine working-class British citizen – another effort at signalling ingroup identity.

But the cultural references feel jumbled: Arsenal is an English team, while Rangers is a Scottish one. That, plus the random mention of Northern Ireland, means this bio doesn’t quite ring true. In fact, it feels like someone playing at being a Brit, perhaps for nefarious reasons.

What’s more, ‘neighbor’ is spelled in US English. A genuine speaker of British English would be unlikely to use US spellings, especially a man who seems so deeply committed to British patriotism. Clue: Trevor is probably not the grassroots British man he pretends to be.
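
Slips like this are easy to scan for at scale. Here’s a minimal sketch using a tiny, deliberately incomplete lookup table of US spellings; the sample bio is paraphrased from the description above, not quoted verbatim.

```python
import re

# US spellings and their British equivalents (a small illustrative sample).
US_TO_UK = {
    "neighbor": "neighbour",
    "favorite": "favourite",
    "color": "colour",
    "honor": "honour",
    "defense": "defence",
}

def us_spellings(text: str) -> list[str]:
    """Return any US-spelled words found in a bio."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return [us for us in US_TO_UK if us in words]

bio = "Christian, white, loyal British subject and good neighbor. Married."
print(us_spellings(bio))  # -> ['neighbor']
```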

We could dig much deeper into Trevor’s account, especially his past tweet history. The account is devoted to tweeting about Brexit, even though it was created in September 2015, months before the referendum was even called. It would be interesting to see what Trevor was tweeting about between then and June 2016, but that’s a topic for another post…

Hashtag activity

Next up, let’s take a look at how Twitter users have been interacting with the #BritainIndependence hashtag, since ‘Trevor’ coined it on September 9th (coincidentally, just in time for the prorogation of Parliament).

All of the most retweeted tweets on the #BritainIndependence hashtag come from users with heavily pro-Brexit screen names (and usernames like @Brexit4me and @Feck_the_EU), suggesting one-topic accounts that exist simply for the purpose of engaging with Brexit-related discussions.

Retweets have two main functions: 1) they spread a message across Twitter, and 2) they create validation for the message via social proof (i.e. if other people have engaged with this tweet, then it must be worth engaging with).

Liking (or favouriting) tweets reinforces the sense of social proof, while also increasing the likelihood of the tweet being seen in other users’ feeds.

The tweets below show strong evidence of typical adversarial narratives, designed to promote a sense of tribalism, i.e. ‘us vs them’.

Examples include:

  • ‘Unelected judges’ and ‘hijacked by extremists’ (fits into narrative of the EU/Remain supporters being ‘anti-democratic’)
  • ‘Tattooed Eastern European thug’, and ‘brutal rape’ (fits into narrative of foreigners and ‘The Other’ as being threatening, especially to women)
  • ‘Me, just a patriot’ (supports the narrative of pro-Brexit voters as especially patriotic. This echoes Trump’s ‘Make America Great Again’ slogan, and the #MAGA hashtag coincidentally appears in many of the bios of users tweeting #BritainIndependence; see the sketch below).
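
As flagged in that last example, one quick way to test how widespread this identity signalling is would be to count tribal keywords across the bios of accounts tweeting the hashtag. A minimal sketch, using hypothetical, paraphrased bios:

```python
import re
from collections import Counter

# Hypothetical, paraphrased bios of accounts using the hashtag.
bios = [
    "Patriot. Brexit means Brexit! #MAGA",
    "Loyal British subject. Family man. #MAGA #Brexit",
    "Just a patriot who wants Britain independent. #Brexit",
]

TRIBAL_KEYWORDS = {"patriot", "maga", "brexit", "british"}

counts = Counter(
    word
    for bio in bios
    for word in re.findall(r"[a-z0-9]+", bio.lower())
    if word in TRIBAL_KEYWORDS
)
print(counts.most_common())  # e.g. [('brexit', 4), ('maga', 2), ...]
```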

Clearly, the #BritainIndependence hashtag exists to stoke further divides between the two sides in the Brexit debate, while creating the illusion of widespread support for the pro-Leave side. It’s highly likely that the hashtag was initialised for that very purpose, as the nature of ‘Trevor’s’ account suggests.

Furthermore, it’s telling that this hashtag coincides with several significant real-life events in the Brexit timeline that could threaten the pro-Brexit side, including the beginning of (unlawful) prorogation and the case going to the Supreme Court.

But why are so many of the top posting accounts so similar, their bios stuffed with tribal keywords? And why are so many of them blatantly US-centric, or marked by such obvious cultural errors (such as the spelling of ‘favorite’)?

This could indicate an organised social media manipulation campaign aiming to create the illusion of widespread support for the pro-Brexit side while deepening social and political divisions at a critical juncture for the country.

As October 31 inches closer, the discussion is certain to get even more heated – and we’re sure to see lots of interesting social media activity.

I’ll post further analyses here between now and October 31.