

Tribalism In The Time Of Coronavirus

As I write this, the world has descended into a major crisis, with effects more far-reaching than anything I’ve experienced in my lifetime. A powerful virus has swept onto the scene and is now ripping its way through the world. Barely any country has been spared.

Here in the UK, the coronavirus crisis is getting worse by the day. But merely observing the city streets on this sunny spring Sunday would give no indication of the gravity of the situation. Indeed, some UK tourist spots, notably Snowdon, experienced their ‘busiest day in living memory’. That’s quite something at a time when a highly contagious virus is on the loose.

In contrast, the streets of Paris, Lisbon and Barcelona are deserted. Most EU countries have issued a decisive response, putting their populations under strict lockdown to try and curb the spread of the virus. The UK government hasn’t followed suit.

Britain is saddled with unfortunate leadership at a time of crisis. Messages from central government have been unclear and have arrived far too late. Many people have died. Amid the frenzied warnings from other countries, tribalism, rooted in the impulses that drove Brexit, still bisects British society, even shaping how we weigh choices about life, health and possible death.

Brexit tribalism could be seen as a barometer for who will approve or disapprove of Boris Johnson’s handling of the coronavirus situation. No scientific study has yet been conducted to prove or disprove this, but research from Cambridge has shown that Leave (and Trump) voters have a strong tendency to believe conspiracy theories.

So if I may hypothesise for a moment, it would go as follows.

Those who believe Johnson is doing well, and who doubt the need for self-isolation, are more likely to be Leave voters. Those who believe Johnson is doing the wrong thing, and that we should follow the majority of the EU (and the world) into lockdown, are more likely to be Remain voters.

I can’t help but wonder if these divided attitudes are linked to the government’s aggressively anti-EU narrative. Could it possibly be that our leaders are reluctant to implement lockdown because it would mean them falling into line with the EU? The British government can’t possibly be seen to do that. On the contrary, it must do the exact opposite. After all, there’s a voter base to keep happy.

This tribal stance has filtered down to the population. People’s cavalier real-life behaviour at a critical juncture risks the health and safety of us all.

We’ve gone beyond Brexit concerns now. Freedom of movement is no longer the most important thing at stake. Continued tribal attitudes in the UK could now lead to significant numbers of deaths. The reckoning has arrived. No matter what side of the political spectrum we’re on, we must ensure that tribalism does not cloud our actions on tackling the virus, as the New European so rightly points out.

There’s another factor influencing public opinion around coronavirus: online disinformation. It has played a key part in turbocharging existing tribal divisions.

Based on my research so far, I’ve seen the following positions solidifying into recurring narratives. Many come from sources originating in the United States, but the shared language and overlapping ideologies mean they are mostly relevant to the UK too.

Narratives primarily from conservative/right-wing/pro-Leave sources:

  • The coronavirus is a hoax used as a smokescreen for elites to take control of society
  • It’s no worse than the flu, so there’s no need to believe WHO or UN advice (in fact we shouldn’t trust them because they may be part of the elite conspiracy)
  • Social distancing is unnecessary / too extreme
  • China is to blame for all this. To quote Trump, coronavirus is ‘the Chinese virus’ 

Narratives primarily from liberal/left-wing/centrist/pro-Remain sources:

  • The coronavirus is real, serious, and affects everyone 
  • It can’t be compared to flu
  • We should trust advice from WHO/UN and other legitimate experts
  • Social distancing, and possibly a lockdown, is necessary to save lives across the wider population

Most of the disinformation that I’ve observed so far plays on the core narrative strands in the first group. People targeted by these narratives may well be less likely to take the virus seriously and more likely to carry on with a semblance of normal life, thus prolonging the pandemic. This unhelpful behaviour is exacerbated by people spending more time at home, and hence online, seeking out constant updates on this critical global threat.

In the next post, I will unravel the coronavirus disinformation narratives in more detail, providing data-driven examples. It’s critical to understand the why behind the seeding of this disinformation, so I’ll also discuss the various incentives that are driving it.

Social Proof and Online Manipulation

What is social proof?

Every day, countless online sources compete for our attention. The concept of social proof can help us cut through the noise. In this post, I’ll explain how it works, and discuss how it can be gamed.

To avoid information overload and possible burnout, it’s essential to zero in on the important parts and sort them into categories that make sense. But how do we know which parts are important?

The human brain uses many shortcuts to understand the complex world around us. In social proof, we use the approval of others as a shortcut. We evaluate the significance and quality of a piece of information based on how many other people like it.

Social proof is part of the psychology of persuasion, used to great effect by marketers and PR specialists. We rely on it all the time when making consumer decisions.

On Amazon, for example, customer reviews are a shortcut to guide us when choosing between a large number of possible products to purchase.

A quick glance at the reviews allows us to avoid wasting time and energy conducting our own research on various products.

We also rely on social proof when judging the importance of a message on social media.

Almost instinctively, we tend to evaluate a post with many likes or shares more favourably. We assume a lot of other people have already approved of it, so we’re happy to jump on the bandwagon.

But relying too heavily on these shortcuts leaves us vulnerable when they are manipulated.

Manipulating Social Proof

Social engagement

It’s easy to convey social proof on social media. Liking, favouriting or upvoting is the quickest and most basic form of online social proof.

When we see a post with lots of likes, we instinctively view that information as important.

The act of online sharing also taps into social proof. If other people believe a post is worth sharing, then it must be of value. We may decide to share it too. This can easily lead us into the trap of sharing disinformation.

Screenshot of a false tweet about the death of Britney Spears, posted by hackers who broke into Sony Music’s Twitter account. Its apparent popularity lent it social proof.

But online likes and shares are also pretty easy to game. On Twitter, for example, a few lines of code can produce a bot that automatically favourites any tweet containing a particular keyword (which can be anything).

A large network of automated accounts can then mass favourite (or mass retweet) any tweet, giving it a false appearance of significance, via artificial social proof.
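To illustrate how low that barrier is, here’s a minimal sketch of such a bot in Python, using the Tweepy library’s v4 Client. The credentials and keyword are placeholders, and actually running something like this would breach Twitter’s platform rules; it’s shown only to make the mechanics concrete.

```python
# A minimal sketch of a keyword-liking bot using Tweepy (v4 Client).
# All credentials are placeholders; the keyword could be anything.
import tweepy

client = tweepy.Client(
    consumer_key="PLACEHOLDER_KEY",
    consumer_secret="PLACEHOLDER_SECRET",
    access_token="PLACEHOLDER_TOKEN",
    access_token_secret="PLACEHOLDER_TOKEN_SECRET",
)

KEYWORD = "#ExampleHashtag"

# Find recent tweets containing the keyword and 'like' each one,
# manufacturing a sliver of artificial social proof.
results = client.search_recent_tweets(query=KEYWORD, max_results=10,
                                      user_auth=True)
for tweet in results.data or []:
    client.like(tweet.id)
```

Run the same loop from hundreds of accounts and you have the mass-favouriting network described above.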

Another way to convey social proof is via the user profile itself. Twitter is the most obvious example of this. We perceive users with more followers as being more important than those with fewer followers. The followers provide social proof, like an exclusive club.

In contrast, when an account has many friends (accounts it has followed) but few followers, it diminishes the effect of social proof. Again, you can use automated accounts to game this.

By having a network of automated accounts mass follow a target account, and then follow one another back, it is possible to create the illusion that the account is popular.
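As a rough sketch of the same idea (again using Tweepy, with a hypothetical `accounts` list of controlled accounts standing in for real credentials), a follow ring needs only two loops:

```python
# A hedged sketch of a follow ring using Tweepy (v4 Client).
# `accounts` is a hypothetical list of (client, user_id) pairs,
# one pair per controlled account.
import tweepy

def inflate_account(accounts: list[tuple[tweepy.Client, int]],
                    target_id: int) -> None:
    """Mass follow a target account, then have the bots follow each other."""
    for client, own_id in accounts:
        client.follow_user(target_user_id=target_id)  # boost the target
        for _, other_id in accounts:
            if other_id != own_id:  # an account cannot follow itself
                client.follow_user(target_user_id=other_id)
```

Every account in the ring ends up looking popular, and the target’s follower count jumps by the size of the network.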

Amazon reviews

Social media isn’t the only place where gaming social proof happens. It also happens across the web, in areas such as online shopping. Take Amazon, which has hundreds of options for anything we want to buy.

How do we know which ones are worth buying? We rely on the ratings and reviews from other buyers.

Ratings and reviews are a form of social proof for products, acting as an essential shortcut for navigating through the mass of available options.

You can even filter search results by the number of rating stars each product has gained. Ratings directly affect the seller’s bottom line. For Amazon’s third-party sellers, ratings can make or break their business.

This random product has great ratings. It looks enticing, but how many are actually genuine?

This is prime territory for gaming. And Amazon’s fake review economy is massive and growing.

Trending topics

Trending topics are another major area where social proof holds sway. The trending topics list shows whatever keywords or hashtags people are tweeting most widely at any point in time.

Whenever big news breaks – such as a terrorist attack, plane crash or celebrity death – it usually appears immediately in Twitter’s trending topics, often before the mainstream media can produce coverage.

There’s a strong incentive to game trending topics. For individual tweeters, including a keyword or hashtag from the trending list makes their tweet more visible.

It’s common to see Twitter accounts for brands ‘hashtag-jacking’ (or ‘trend-jacking’). This is where brand managers (or spammers) shoehorn the trending hashtag into the tweet, to get their brand in front of a larger audience.

Hashtag-jacking can be done skilfully, if the hashtag is relevant to the brand, but it tends to be the domain of spammers.

This is gaming trending topics on a relatively small scale. But things become more insidious when you take into account bot armies. In these cases, a large number of artificial accounts, perhaps controlled by just one person (the ‘bot herder’), tweet coordinated messages around the same hashtag.

Done properly, this can push the hashtag into the trending topics list, where human users will engage with it, giving it further mileage. It’s an effective way to mainline disinformation into the public arena.
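To make this concrete, here’s a hedged sketch of how a bot herder might coordinate such a push, again in Tweepy, with `controlled_clients` standing in for a hypothetical fleet of automated accounts:

```python
# A sketch of coordinated hashtag amplification (hypothetical setup).
import random
import tweepy

HASHTAG = "#ExampleTag"
STOCK_PHRASES = [
    "Can't believe this is happening",
    "Everyone needs to see this",
    "This changes everything",
]

def push_hashtag(controlled_clients: list[tweepy.Client]) -> None:
    """Each controlled account tweets the hashtag with slight variation."""
    for client in controlled_clients:
        # Varying the wording helps evade naive duplicate-content filters.
        client.create_tweet(text=f"{random.choice(STOCK_PHRASES)} {HASHTAG}")
```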

The ultimate goal of gaming social proof in tweets is for mainstream media to pick up the false story. With so many journalists using Twitter to find potential stories, this is not so difficult to achieve.

Google search results

Lastly, we’ll take a quick look at the effects of social proof in Google search results. When searching for something, most internet users don’t click beyond the first page of Google.

So the higher your link shows up, the more likely it is to be influential. SEO experts make a lot of money out of getting client links onto the first page of Google results. What’s more, we are more likely to trust links that show up higher in search results.

Google’s search-ranking algorithms, such as PageRank, work in mysterious ways, and the company constantly adapts them to make manipulation harder. But it’s still possible to game Google search.

For example, reputation management companies create a large amount of positive content, to help clients push negative articles off the first page of Google.

This happens in politics too. In the run-up to the UK’s general election of 2019, people speculated that Boris Johnson’s campaign team may have gamed Google by seeding bizarre stories about him to make negative coverage less prominent in searches.

In 2016, extremist websites manipulated Google search so that their hate-filled propaganda, such as Holocaust denial, ranked highly in results. Although Google later fixed this, savvy disinformation purveyors can still find ways to leverage its power to deceive vast swathes of the population.

Social proof: Key takeaways

We should not ignore social proof. It is both a cornerstone of how we navigate the online environment and a prime factor for manipulation.

Social proof is not just on social media. People use it all over the internet, from Amazon reviews to Twitter trends. Even Google search results, which many people trust more than social media, can be gamed.

The best way forward is to keep the concept of social proof in mind when engaging with online content. In particular, think about who might have an incentive to make the article or post that you’re reading look popular.


How Disinformation Hacks Your Brain

Today I’m going to explain how disinformation hacks your brain.

In our fast-moving online world, even the most aware of us can be taken in by disinformation. As humans, our minds work in certain ways that can leave us vulnerable to deception – and no-one is immune.

Our perceptions are not as reliable as we might like to imagine, and the online environment amplifies these flaws. In this post, I’ll discuss five important psychological traits that dictate how disinformation hacks your brain.

5 Ways Disinformation Hacks Your Brain

Confirmation Bias

Search engines give us access to all the world’s information: we simply type a few words into a search bar.

Because of confirmation bias, people tend to search only for information that reinforces their beliefs. Furthermore, even if what they find is disinformation, the effect of confirmation bias makes them less likely to question its veracity.

For example, let’s take someone who already dislikes Donald Trump. They might search Google for “why is Trump still president?”

This search produces a slew of articles critical of Trump, feeding into the person’s existing beliefs. Consequently, this becomes a vulnerable moment during which disinformation can easily find a foothold in the mind.

Social Proof

The term ‘social proof’ was coined by Robert Cialdini in his seminal book on persuasion, Influence. It’s a way of building trust in a person, a product or a message, by demonstrating that many people approve of it.

The bandwagon effect is the motivating force driving social proof. It dictates that if something seems popular, people will feel compelled to join in.

Social proof is especially important in today’s environment of information overload. Because there are so many options available to us, we need a shortcut to help us cut through the noise and determine which ones to trust.

For marketers, social proof is an essential tool. It’s also a powerful weapon in the arsenal of disinformation. Devices such as bots play a major role in building social proof around certain messages, including false ones.

Liking, sharing and replying to these messages creates an illusion of widespread approval, which attracts more people to trust them. This may snowball, causing the message to go viral.
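A toy simulation makes the snowball visible. This is my own illustration of the ‘rich get richer’ dynamic, not a model of any real platform’s ranking: two identical posts compete, but one starts with 50 fake likes.

```python
# A toy 'rich get richer' simulation of the bandwagon effect.
# Each arriving user likes a post with probability proportional
# to its current like count.
import random

random.seed(42)
likes = {"organic_post": 1, "seeded_post": 51}  # 50 fake likes head start

for _ in range(10_000):  # 10,000 users arrive one by one
    choice = random.choices(list(likes), weights=list(likes.values()))[0]
    likes[choice] += 1

print(likes)  # the seeded post typically ends up far ahead
```

The fake likes don’t even need to persist: the early head start is usually enough to attract a flood of genuine engagement.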

There’s a lot more to say about the role of social proof in disinformation. I’ll explore it in more detail in a follow-up post. For now, remember that it’s easy to fake online popularity, so likes and retweets aren’t always a reliable indicator of grassroots public opinion.

The False Consensus Effect

We like to think that many other people share our beliefs, preferences, values and habits, even when that’s not actually the case. Behavioural psychology calls this overestimation the false consensus effect. It stems from our self-esteem and our desire to conform and fit in as part of a social group.

Online, the false consensus effect is amplified in two main ways:

  • By means of algorithms that show us opinions reflecting our own (the filter bubble effect; a toy sketch follows below),
  • By our habit of engaging only with others who support our views (the echo chamber effect).
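As a toy sketch of the first mechanism (my own illustration, with made-up post data, not any real platform’s algorithm), a feed that ranks posts by similarity to what a user has already liked will quietly bury opposing views:

```python
# A toy filter-bubble feed: posts are ranked by how often the user
# has already liked their topic, so agreeable content floats to the top.
from collections import Counter

def rank_feed(posts: list[dict], liked_topics: list[str]) -> list[dict]:
    profile = Counter(liked_topics)
    return sorted(posts, key=lambda p: profile[p["topic"]], reverse=True)

posts = [
    {"id": 1, "topic": "pro_leave"},
    {"id": 2, "topic": "pro_remain"},
    {"id": 3, "topic": "pro_leave"},
]

# A user who has only ever liked 'pro_leave' content...
feed = rank_feed(posts, liked_topics=["pro_leave", "pro_leave"])
print([p["id"] for p in feed])  # -> [1, 3, 2]: the opposing view sinks
```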

Disinformation that taps into the false consensus effect can find a fertile environment to take root, grow and mutate. Social media helps this happen. No matter how convinced you are of a certain view, you should never forget that other people may well think differently.

Tribalism

Humans are social animals. Consequently, gaining the approval of a likeminded group is important for boosting our self-esteem. We reinforce this sense of self-esteem by behaving in ways that favour our own group (known as the in-group).

For example, we might post on social media about the positive traits of our in-group, which is relatively harmless in itself. However, every in-group needs an out-group.

Furthermore, where there’s in-group loyalty there may also be out-group derogation – negative attitudes and behaviour towards the out-group. This conflict between groups of all kinds is a form of tribalism. It plays a huge role in how disinformation hacks your brain.

On emotive issues like politics, which tap into aspects of people’s identities, tribalism can morph into a dangerously powerful force. Violence can easily follow. In fact, tribalism is the driving force behind many human conflicts.

Disinformation leverages the human tendency for tribalism by creating and disseminating adversarial narratives. These inflame existing divisions, creating a sense of ‘us vs them’. We can observe many cases of this in recent political events.

Examples include Trump supporters vs Clinton supporters in the US, Leavers vs Remainers in the UK, Muslims vs Buddhists in Myanmar, and Han fans vs Tsai fans in Taiwan’s recent presidential election.

The Backfire Effect

You might expect people to stop believing in disinformation once they’re told it’s untrue. That seems logical; however, human psychology doesn’t always work that way. The root of the problem lies (once again) in our self-esteem.

When certain beliefs become embedded in our worldview, they also become part of our identity. If one of those beliefs is challenged, it’s as if someone is shaking the very foundations of that identity.

Challenges to our identity can be psychologically painful. In response, we may cling even tighter to the original belief, making it stronger still. The attempted correction backfires, which is why this process is known as the backfire effect.

Summary: How Disinformation Hacks Your Brain

  • Human psychology makes us susceptible to disinformation
  • In a world of information overload, we seek shortcuts, such as social proof, to help us navigate. However, these shortcuts can be gamed.
  • Much of our online behaviour has its roots in self-esteem and identity.
  • Simply ‘debunking’ disinformation may not be effective, due to the backfire effect.
  • Adversarial narratives are a common feature of disinformation, found in many situations worldwide. They can lead to tribalism, which risks real-life violence.


Astroturfing: A Quick Example from Facebook

What is Astroturfing?

Astroturfing is not new. Its history stretches back to the days of newspapers and pamphlets. But astroturfing has become a major concern in today’s ‘post-truth’ information environment.

The Guardian defines astroturfing as “the attempt to create an impression of widespread grassroots support for a policy, individual, or product, where little such support exists.”

The ‘grassroots’ part is where the name comes from: AstroTurf, that bright green fake grass you might remember from the school sports field.


Social media is a prime environment for astroturfing campaigns. User attention spans are low, knee-jerk reactions are prevalent, and ‘likes’ are an addictive form of currency.

Illusion becomes reality when fake engagement intersects with genuine social media users, who are more likely to engage with seemingly popular posts because of social proof: the psychological effect in which people like or support things that already seem popular with others.

An Example of Astroturfing

Let’s take a look at an example of suspected astroturfing on Facebook. Our starting point is the official Facebook page of the UK’s current prime minister, Boris Johnson.

Underneath every post on his page, especially those about Brexit, we can see hundreds of responses. That’s not unusual on the page of a public figure. But the style of those responses seems artificial.

Screenshot of Boris Johnson’s Facebook page, with a selection of comments about Brexit.

They are all very similar: short utterances of praise for Boris Johnson, repeating words and phrases such as ‘brilliant’, ‘fantastic’, and ‘support Boris 100%’. Each comment has also attracted a large number of positive emoji reactions: ‘like’, ‘love’ and ‘laugh’.

This behaviour is odd. Genuine people do support Johnson, of course. But it’s suspicious for so many to comment on his posts in this distinctive and repetitive way. This looks very much like an astroturfing campaign.

More genuine engagement

Now let’s contrast this with the page of his predecessor, Theresa May, specifically her Brexit-related posts. Here we can see a very different scenario, which immediately feels far more genuine.

Screenshot of Theresa May’s Facebook page showing a sample of typical comments about Brexit. Note the contrast with Johnson’s page.

Responses to May’s posts are more varied in content, tone and length. Some commenters disagree with her. Others support her. But most write with more depth and sophistication than the short, repetitive replies found on Johnson’s page.

The responses on May’s page are more likely to be ‘organic’ (i.e. from real people who behave naturally). In contrast, it appears that Johnson’s page is the subject of astroturfing techniques, which may include fake comments and even fake followers.

Facebook locks its data down tight, so it’s hard to run further analysis to determine for certain whether the Johnson supporters are part of an organised campaign. But we can draw insights from recent examples.

Donald Trump used fake Facebook followers during the US presidential campaign. Researchers discovered that over half of the followers on his page came from countries known as hubs for Facebook ‘like farms’.

It is common for like farms to exist in developing countries such as the Philippines and India, where much of the population speaks English and the US dollar stretches a long way.

The farms offer customers the opportunity to buy fake Facebook likes and Twitter follows, which can then be used to astroturf the impression of popular support.

As well as likes, customers can purchase fake engagement, usually in the form of comments. This may explain the unusual commenting activity on Johnson’s page.

Why astroturfing matters

Astroturfing matters because it’s a deliberate attempt to manipulate perceptions of popular opinion, with potentially dangerous results.

Although astroturfing has been a feature of political campaigning for decades, the social media environment gives it enormous power. Social media users have become far more susceptible to its effects than newspaper readers ever were.

When combined with disinformation and conspiracy theories, astroturfing has the potential to cause all sorts of social and political chaos. Many would argue that it already has.