
Category: Disinformation In Politics


How Disinformation Hacks Your Brain

Today I’m going to explain how disinformation hacks your brain.

In our fast-moving online world, even the most aware of us can be taken in by disinformation. As humans, our minds work in certain ways that can leave us vulnerable to deception – and no one is immune.

Our perceptions are not as reliable as we might like to imagine, and the online environment amplifies these flaws. In this post, I’ll discuss five important psychological traits that dictate how disinformation hacks your brain.

5 Ways Disinformation Hacks Your Brain

Confirmation Bias

Search engines give us access to all the world’s information simply by typing a few words into a search bar.

Because of confirmation bias, people tend to search only for information that reinforces their beliefs. Furthermore, even if what they find is disinformation, the effect of confirmation bias makes them less likely to question its veracity.

For example, let’s take someone who already dislikes Donald Trump. Perhaps they might search Google for “why is Trump still president?”

This search produces a slew of articles critical of Trump, feeding into the person’s existing beliefs. Consequently, this becomes a vulnerable moment during which disinformation can easily find a foothold in the mind.

Social Proof

The term ‘social proof’ was first used by Robert Cialdini in his seminal book on persuasion, Influence. It’s a way of building trust in a person, a product or a message, by demonstrating that many people approve of it.

The bandwagon effect is the motivating force driving social proof. It dictates that if something seems popular, people will feel compelled to join in.

Social proof is especially important in today’s environment of information overload. Because there are so many options available to us, we need a shortcut to help us cut through the noise and determine which ones to trust.

For marketers, social proof is an essential tool. It’s also a powerful weapon in the arsenal of disinformation. Devices such as bots play a major role in building social proof around certain messages, including false ones.

Liking, sharing and replying to these messages creates an illusion of widespread approval, which attracts more people to trust them. This may snowball, causing the message to go viral.

There’s a lot more to say about the role of social proof in disinformation. I’ll explore it in more detail in a follow-up post. For now, remember that it’s easy to fake online popularity, so likes and retweets aren’t always a reliable indicator of grassroots public opinion.

The Consensus Effect

We like to think that many other people share our beliefs, preferences, values and habits, even when that’s not actually the case. Behavioural psychology calls this overestimation the false consensus effect. It stems from our self-esteem and our need to fit in as part of a social group.

Online, the false consensus effect is amplified in two main ways:

  • By means of algorithms that show us opinions reflecting our own (filter bubble effect),
  • By our habit of engaging only with others who support our views (echo chamber effect).

Disinformation that taps into the false consensus effect can find a fertile environment to take root, grow and mutate. Social media helps this happen. No matter how convinced you are of a certain view, you should never forget that other people may well think differently.

Tribalism

Humans are social animals. Consequently, gaining the approval of a likeminded group is important for boosting our self-esteem. We reinforce this sense of self-esteem by behaving in ways that favour our own group (known as the in-group).

For example, we might post on social media about the positive traits of our in-group, which is relatively harmless in itself. However, every in-group needs an out-group.

Furthermore, where there’s in-group loyalty there may also be out-group derogation – negative attitudes and behaviour towards the out-group. This conflict between groups of all kinds is a form of tribalism. It plays a huge role in how disinformation hacks your brain.

In emotive issues like politics, which tap into aspects of people’s identities, tribalism can morph into a force of dangerous power. Violence can easily follow. In fact, tribalism is the driving force behind many human conflicts.

Disinformation leverages the human tendency for tribalism by creating and disseminating adversarial narratives. These inflame existing divisions, creating a sense of ‘us vs them’. We can observe many cases of this in recent political events.

Examples include Trump supporters vs Clinton supporters in the US, Leavers vs Remainers in the UK, Muslims vs Buddhists in Myanmar, and Han fans vs Tsai fans in Taiwan’s recent presidential election.

The Backfire Effect

You might expect people to stop believing in disinformation once they’re told it’s untrue. That seems logical; however, human psychology doesn’t always work that way. The root of the problem lies (once again) in our self-esteem.

When certain beliefs become embedded in our worldview, they also become part of our identity. If one of those beliefs is challenged, it can feel as if the very foundations of that identity are being shaken.

Challenges to our identity can be psychologically painful. In response, we may cling even tighter to the original belief, making it stronger still. Because the attempted correction backfires, this process is known as the backfire effect.

How Disinformation Hacks Your Brain: Key Takeaways

  • Human psychology makes us susceptible to disinformation
  • In a world of information overload, we seek shortcuts to help us navigate. However, these shortcuts, such as social proof, can be gamed.
  • Much of online behaviour has its roots in aspects of self-esteem and identity.
  • Simply ‘debunking’ disinformation may not be effective, due to the backfire effect.
  • Adversarial narratives are a common feature of disinformation, found in many situations worldwide. They can lead to tribalism, which risks real-life violence.

A Tale Of Two Leaders: Facebook, Astroturfing, And Social Proof

There’s something odd about the Prime Minister’s Facebook page.

Underneath every post, especially those about Brexit, are hundreds of responses. This isn’t unusual for the page of a public figure, but the style of the responses doesn’t ring true.

They are all very similar: short utterances of praise for Boris Johnson, repeating words and phrases such as ‘brilliant’, ‘fantastic’, and ‘support Boris 100%’. Each comment is festooned with Facebook’s emojis, mainly expressing the positive sentiments of ‘like’, ‘love’ and ‘laugh’.

This behaviour feels odd. I’m not denying that many genuine people support Johnson, but it’s suspicious that so many consistently comment on his posts in this distinctive and repetitive fashion.
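One rough way to put a number on that impression is to measure how similar the comments are to one another. Below is a minimal sketch, assuming a hand-collected list of comment strings (the samples are invented for illustration, not scraped data): it flags short comments that closely resemble at least one other comment.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Character-level similarity ratio between two lowercased comments."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def repetitive_share(comments, length_cutoff=40, sim_threshold=0.6):
    """Fraction of comments that are both short and highly similar to at
    least one other short comment -- a rough 'repetitiveness' signal."""
    short = [c for c in comments if len(c) <= length_cutoff]
    flagged = set()
    for i in range(len(short)):
        for j in range(i + 1, len(short)):
            if similarity(short[i], short[j]) >= sim_threshold:
                flagged.update({i, j})
    return len(flagged) / len(comments) if comments else 0.0

comments = [
    "Brilliant! Support Boris 100%",
    "Fantastic! Support Boris 100%",
    "Brilliant, support Boris 100%!",
    "I disagree with the prorogation of Parliament and here is why...",
]
print(repetitive_share(comments))  # three of four comments are near-duplicates
```

A high score on post after post would support, though not prove, the suspicion of coordinated commenting.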

Screenshot of Boris Johnson’s Facebook page, with a selection of comments about Brexit.

Let’s contrast this with the page of his predecessor, Theresa May, specifically her Brexit-related posts. Here we see a very different scenario.

Responses to May’s posts tend to be far more varied in content, tone and length. Some disagree with her. Others support her. But most are expressed with more depth and linguistic sophistication than the short, repetitive replies on Johnson’s page.

In short, the responses on May’s page look far more likely to be ‘organic’ (i.e. produced by real people behaving naturally) than the majority of those on Johnson’s. It’s possible that Johnson’s page is using artificial amplification techniques, which may include fake followers.

Screenshot of Theresa May’s Facebook page showing a sample of typical comments about Brexit. Note the contrast with Johnson’s page.

Facebook locks its data down tight, so it’s hard to run further analysis to determine for certain whether the Johnson supporters are part of an organised campaign.

But we can draw from previous examples. Donald Trump used fake Facebook followers during the US presidential campaign. Researchers discovered that over half of the followers on his page came from countries known as hubs for Facebook ‘like farms’.

These ‘farms’ are often found in developing nations such as the Philippines and India, where the dollar stretches a long way. They offer customers the opportunity to buy fake Facebook likes to create the impression of popular support.

As well as likes, customers can purchase fake engagement, usually in the form of comments. This may explain the unusual commenting activity on Johnson’s page.
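As a sketch of the kind of follower-origin check those researchers ran, we can compute what share of a page’s followers sit in countries known for like-farm activity. Everything here is illustrative: the hub-country list and the follower records are invented for the example, not taken from any real audit.

```python
from collections import Counter

# Illustrative set of countries often cited as like-farm hubs; a real
# analysis would justify this list with sourced research.
LIKE_FARM_HUBS = {"PH", "IN", "BD", "ID"}

def hub_share(follower_countries):
    """Fraction of followers located in suspected like-farm hub countries."""
    counts = Counter(follower_countries)
    hub = sum(n for country, n in counts.items() if country in LIKE_FARM_HUBS)
    return hub / sum(counts.values())

# Invented follower data: 100 followers, 55 of them from hub countries.
followers = ["US"] * 40 + ["GB"] * 5 + ["PH"] * 30 + ["IN"] * 25
print(hub_share(followers))  # 0.55 -- more than half, a red flag
```

On its own this proves nothing, but a share well above what the page’s natural audience would suggest is exactly the anomaly the Trump researchers flagged.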

For political purposes, this type of artificial campaign is an important tool, because it generates the illusion of grassroots support for a particular figure or issue. It even has a name: astroturfing.

Illusion becomes reality when the fake engagement intersects with genuine users, who are more likely to engage with seemingly popular posts thanks to the effect of ‘social proof’ – a psychological phenomenon where people tend to follow the actions of the masses.

This can be leveraged to great effect in social media environments, where user attention spans are low, knee-jerk reactions are prevalent, and ‘likes’ are an addictive form of currency.

Tracking A Brexit Influence Campaign: #BritainIndependence

The atmosphere in Britain is becoming increasingly heated as October 31st inches ever closer. This is the date when the country will leave the European Union — if all goes to plan for Boris Johnson and the Conservative Party. Right now the political stakes are higher than ever.

Parliament has tried to protect the country from the potential catastrophe of leaving without a deal. Just before prorogation, it passed a last-minute bill into law designed to compel Johnson to seek an extension if he can’t reach a suitable deal with the EU by the end of October.

Johnson has already lost multiple times in Parliament, despite his efforts to prorogue it for an unusually long time. Last week, a Scottish court ruled that this prorogation was unlawful. This week, the case goes to the UK Supreme Court, where that ruling will be challenged.

#BritainIndependence

In this highly-charged environment, Twitter has provided a constant source of lively political debate around Brexit. Many issues are bubbling up at present, some more relevant than others, but here I want to explore a particularly interesting hashtag.

#BritainIndependence has been popping up a lot lately. The first thing to do is to find out when it was first used, and who first used it. The hashtag came fairly late to the game, on September 9, via a user called ‘Trevor’, whose screen name is stuffed with pro-Brexit hashtags.
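Given a collected set of tweets (here, a few invented records standing in for a real archive or API export), finding a hashtag’s earliest appearance is a simple minimum-by-timestamp query:

```python
from datetime import datetime

# Invented tweet records for illustration; a real analysis would load
# these from a Twitter archive or the search API.
tweets = [
    {"user": "Trevor", "time": "2019-09-09T08:15:00",
     "text": "Time to stand up for our country. #BritainIndependence"},
    {"user": "Brexit4me", "time": "2019-09-09T11:02:00",
     "text": "Spread the word! #BritainIndependence"},
    {"user": "Feck_the_EU", "time": "2019-09-10T09:30:00",
     "text": "#BritainIndependence #Brexit"},
]

def first_use(tweets, hashtag):
    """Return the earliest tweet mentioning the hashtag, or None."""
    matching = [t for t in tweets if hashtag.lower() in t["text"].lower()]
    return min(matching,
               key=lambda t: datetime.fromisoformat(t["time"]),
               default=None)

origin = first_use(tweets, "#BritainIndependence")
print(origin["user"], origin["time"])  # the hashtag's apparent originator
```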

Signalling ingroup identity

A quick glance at Trevor’s bio is revealing. A Twitter bio is a strong indicator of self-professed identity, and Trevor’s is packed with words reflecting traditional conservative, nationalist, family-oriented values: ‘Christian’, ‘white’, ‘loyal British subject’, and ‘married’.

This creates a sense of group identity, designed to give Trevor’s profile immediate appeal to others who identify with similar values – i.e. signalling that he is part of an ingroup. In this case, the ingroup is pro-Brexit Twitter users.

The references to popular British football teams (Arsenal and Rangers) are likely designed to portray the account as belonging to a genuine working-class British citizen – another effort at signalling ingroup identity.

But the cultural references feel jumbled: Arsenal is an English team, while Rangers is a Scottish one. That, plus the random mention of Northern Ireland, means this bio doesn’t quite ring true. In fact, it feels like someone playing at being a Brit, perhaps for nefarious reasons.

What’s more, ‘neighbor’ is spelled in US English. No genuine speaker of British English would use US spelling, especially a man who seems so deeply committed to British patriotism. Clue: Trevor is likely not the grassroots British man he pretends to be.

We could dig much deeper into Trevor’s account, especially his past tweet history. The account is devoted to tweeting about Brexit, yet it was created in September 2015, months before the referendum was even held. It would be interesting to see what Trevor was tweeting about between then and June 2016, but that’s a topic for another post…

Hashtag activity

Next up, let’s take a look at how Twitter users have been interacting with the #BritainIndependence hashtag, since ‘Trevor’ coined it on September 9th (coincidentally, just in time for the prorogation of Parliament).

All of the most retweeted tweets on the #BritainIndependence hashtag come from users with heavily pro-Brexit screen names (and usernames like @Brexit4me and @Feck_the_EU), suggesting one-topic accounts that exist simply for the purpose of engaging with Brexit-related discussions.

Retweets have two main functions: 1) they spread a message across Twitter, and 2) they create validation for the message via social proof (i.e. if other people have engaged with this tweet, it must be worth engaging with).

Liking (or favouriting) tweets reinforces the sense of social proof, while also increasing the likelihood of the tweet being seen in other users’ feeds.
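Those two signals can be folded into a crude amplification score by ranking tweets on retweets plus likes. The usernames and counts below are invented for illustration, echoing the kind of single-issue accounts seen on the hashtag:

```python
def top_amplified(tweets, n=2):
    """Rank tweets by retweets + likes, the two engagement signals
    that build social proof around a message."""
    return sorted(tweets, key=lambda t: t["retweets"] + t["likes"],
                  reverse=True)[:n]

tweets = [
    {"user": "Brexit4me", "retweets": 540, "likes": 880},
    {"user": "Feck_the_EU", "retweets": 310, "likes": 450},
    {"user": "casual_user", "retweets": 3, "likes": 12},
]
for t in top_amplified(tweets):
    print(t["user"], t["retweets"] + t["likes"])
```

If the top of this ranking is dominated by near-identical single-issue accounts, that in itself is a signal worth investigating.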

The tweets below contain strong evidence of typical adversarial narratives, designed to promote a sense of tribalism, i.e. “us vs them”.

Examples include:

  • ‘Unelected judges’ and ‘hijacked by extremists’ (fits into narrative of the EU/Remain supporters being ‘anti-democratic’)
  • ‘Tattooed Eastern European thug’, and ‘brutal rape’ (fits into narrative of foreigners and ‘The Other’ as being threatening, especially to women)
  • ‘Me, just a patriot’ (supports the narrative of pro-Brexit voters as especially patriotic, similar to the notion conveyed by Trump’s ‘Make America Great Again’ slogan; the #MAGA hashtag coincidentally appears in many of the bios of users tweeting #BritainIndependence).

Clearly, the #BritainIndependence hashtag exists to stoke further divides between the two sides in the Brexit debate, while creating the illusion of widespread support for the pro-Leave side. It’s highly likely that the hashtag was initialised for that very purpose, as the nature of ‘Trevor’s’ account suggests.

Furthermore, it’s telling that this hashtag coincides with several significant real-life events in the Brexit timeline that could threaten the pro-Brexit side, including the beginning of (unlawful) prorogation and the case going to the Supreme Court.

But why are so many of the top posting accounts so similar, with their bios stuffed with tribal keywords? And why are so many of them blatantly US-centric, or marked by such obvious cultural errors (such as the spelling of ‘favorite’)?

This could indicate an organised social media manipulation campaign aiming to create the illusion of widespread support for the pro-Brexit side while deepening social and political divisions at a critical juncture for the country.

As October 31 inches closer, the discussion is certain to get even more heated – and we’re sure to see lots of interesting social media activity.

I’ll post further analyses here between now and October 31.

How Personal Branding Paved The Way For The Post-Truth Era

Over a decade ago, an idea was born that seemed innocent at the time, even ground-breaking. It was the idea of personal branding; marketing one’s own skills like a product. In this piece, I’m going to reflect on how the personal branding mindset has played a role in creating today’s polarised and tribal online environment.

In his original Fast Company article, ‘The Brand Called You’, author Tom Peters urges his readers to develop their personal brands by delivering talks and developing word-of-mouth marketing around their unique skills. He briefly mentions the importance of showing familiarity with new technology (such as email), but as a rather minor consideration. After all, it was 1997; the digital world hadn’t yet become an inextricable part of everyone’s lives.

Fast forward a few years to the early 2000s, where people had started publishing their own content using blogs and personal websites. The social media platform MySpace was launched in 2003, followed a couple of years later by Facebook. These tools were powerful and they allowed ordinary people to broadcast their message, whatever that might be, to large audiences. It was a whole new way to build the brand called you.

Digital tribalism

The growth of social media and blogs spawned a whole generation of online content creators, some successful, many not. People could now reinvent themselves personally and professionally simply by producing relevant online content and sharing it with audiences via social media. The trick to success was finding a bunch of people with whom your message resonated, i.e. your tribe.

The idea of ‘finding your tribe’ is central in branding strategy, both for commercial marketers and personal branders. Personal branding gurus often stress the importance of being bold and even divisive in the content you choose to publish. The goal in doing so is to eliminate those who aren’t on board with your opinions, leaving only your loyal, like-minded tribe remaining.

Arguably, this tribal approach has instilled in the digital generation a habit of being strongly opinionated online. It’s all too easy to be bold and divisive when you’re safely behind a screen. You can blog, make videos and write ebooks to your heart’s content.

But creating effective content for personal branding takes up a lot of time and mental energy. Not everyone wants (or has the skills/motivation) to write original blog posts about their key career interests. Luckily, there’s another approach: content curation.

This popular and effortless alternative for building a personal brand community involves sharing other people’s content with your target audience, sometimes (but not necessarily) adding your own quote or original take.

Curation can be done quickly and with the minimum of effort; an appealing strategy in a time-pressed world. For example, content curation on Twitter could be as simple as retweeting articles and tweets relevant to the personal brand you wish to create. By doing this consistently, you can attract like-minded people, which then gives you a tribe, or brand community.

Another relevant factor in the development and solidification of personal branding is the deliberate design of online social networks. This encourages users to take actions which generate more likes, clicks and engagement from their audience.

Content curation and social network design are symbiotic processes that complement one another, creating a cycle where people create (or curate) content, gain approval from their tribe, enjoy a boost to their self-esteem, and repeat. This ongoing cycle generates more traffic for the social networks and more advertising revenue.

Personal branding meets politics

In 2008, online social networks made their big debut in politics as part of Barack Obama’s presidential campaign. Obama won the presidency, and followed up in 2012 with another win and another dose of digital political campaigning.

By then, more and more people were using social media, and the first signs of manipulation began to emerge. On top of that, attention spans were starting to erode as people became accustomed to lives lived largely online. The introduction of Apple’s iPhone, and the resulting explosion in smartphone use, exacerbated this shift, putting social media in people’s pockets at all times.

This created the beginnings of a world where everyone on a bus or train has their head down, staring at a smartphone. Once sharing content took no more than the touch of a ‘share’ button, content curation as part of maintaining a self-image soon became habitual for many. By 2016, social network use was prevalent, most people had a smartphone, and information was flowing non-stop.

Politics had firmly entered the personal branding arena, and campaign managers deployed increasingly clever strategies, such as digital profiling and social ads, to win over voting populations. This practice came to a head with the EU referendum in the UK, closely followed by the 2016 election of Donald Trump as US president.

Going tribal

To better understand what drove these events, it’s useful to first consider the innate human tendency to see the world in terms of ‘us vs them’. This is well demonstrated by the work of social psychologist Henri Tajfel on what became known as the social identity theory of intergroup conflict, in particular the ‘minimal group paradigm’.

The minimal group paradigm shows that people will define themselves in opposing groups over the most trivial of matters, such as a coin flip, grouping themselves into ‘Heads’ and ‘Tails’. Once divided into groups, people tend to favour their own ingroup while disadvantaging, and even derogating, the outgroup. If people can get tribal over a simple coin flip, imagine what they’d be like over political ideology.

This ingroup favouritism and outgroup derogation manifests in many areas of life, from harmless rivalries over cities and sports (e.g. Boston vs New York, or Manchester vs Liverpool), to more serious issues of racism, xenophobia and nationalism.

It also manifests in the digital world, exacerbated by today’s entrenched tendency for personal branding and ‘finding one’s tribe’. People receive positive reinforcement as part of the in-group whenever they broadcast their identity to their fellow brand community members. They usually do this by sharing content, whether their own, or, more commonly, curated from others.

Two infamous political examples are the behaviour of Trump supporters versus Clinton supporters, or Leave versus Remain supporters. Both sides commonly derogate the other (e.g. libtard, Brexiteer) and view their ingroup as superior.

That’s not the only way social identity theory manifests itself in contemporary digital politics. In addition to derogating each others’ perceived political outgroup, it’s become common practice to derogate, and even dehumanise, certain outgroups in wider society, normally minorities such as Muslims, refugees or immigrants.

These groups have become easy targets because of an array of social and political events over recent decades that have put them squarely in the firing line. Ever since the terrorist attacks of 9/11, the British and US mainstream media have consistently highlighted attacks committed by Muslim perpetrators while downplaying similar ones committed by non-Muslims.

What’s more, the Syrian civil war and the rise of ISIS triggered a massive influx of refugees from Syria and Iraq into Europe. Together, these events produced a climate of fear and uncertainty; fertile territory for innate ‘us and them’ attitudes to thrive in a digital sphere where online tribalism (in the personal branding sense) had long been a common practice.

Tribes before truth

This leads to a very current concern: the rise of online misinformation, often known as ‘fake news’. With such a huge flood of information now available via our smartphones, we don’t always have time to read everything in detail. So we take shortcuts and get lazy about processing information properly. We simply don’t have the time or inclination to think deeply about every piece of content we interact with online.

Nevertheless, we crave engagement and approval from our ‘tribe’. Perhaps we’ve become somewhat addicted to it, to the extent that we sometimes share articles without even reading them. Recent research found that between 50% and 70% of all URLs on Twitter are shared without being opened, suggesting that people share them based on the headline alone. This has heavy implications for the spread of misinformation, and suggests, too, that fact-checking probably won’t work.

In an online space rife with misinformation, why would someone share an article without reading it first? Arguably, broadcasting our affiliation to our digital tribe matters more to us than veracity. More critically, broadcasting this affiliation to our ingroup is likely to involve derogation of an outgroup. After all, we really want those likes and shares, and that’s often the best way to get them.

One of the key goals in sharing content on social media (especially Twitter) seems to be to signal that ‘we’ (the ingroup) are different from ‘them’ (the outgroup). This dichotomy shows up most disturbingly in stories about ‘Muslim rape gangs’, refugee ‘sex mobs’, and terrorist attacks that never happened (e.g. the fictitious Bowling Green massacre).

In this tense milieu, it’s easy for misinformation to get picked up and spread as part of the ‘tribal broadcasting’ process, or ‘content curation’ in personal branding parlance. If a certain news story fits people’s ingroup vs outgroup narratives, they’re probably going to share it on social media. Truth may come second to tribalism.

The real danger comes when this digital tribalism plays out in real world scenarios, such as an uptick in anti-immigrant hate crime, or violent events such as ‘Pizzagate’. Both have been linked to online misinformation.

You might ask what the social media giants are doing to address this issue. They’ve made various efforts to implement reporting tools so that users can report hate speech. They have also shut down particularly heinous accounts, such as InfoWars, that existed purely to peddle misinformation and hate.

But digital tribalism in fact boosts all the metrics that spell success for social media firms, creating a self-reinforcing situation. One can’t help but wonder how far they’ll actually go to rein this in.

If only we could all quit social media, en masse. Would that solve the problem? Or does it run deeper than that?

How Blogging Is Used In Disinformation Campaigns

Just a few days ago, Wired Magazine published an article revealing that the ongoing Russian disinformation campaign also involved a number of long-form blog posts published on Medium.

In this post, I want to explain why this seemingly new move from Russian info ops is actually not surprising at all, and makes perfect sense as part of an overall content strategy. Blogs sit at the heart of an effective strategy, with content posted on Twitter, Facebook and other social platforms acting as ‘outposts’ that all emanate from the blog.

Some of the Medium bloggers had tens of thousands of followers. They published content around predictable topics such as ‘race, science and politics, indoctrination on US college campuses’ – and that good old chestnut, ‘lies from Hillary Clinton’. The fake blogger identities were carefully crafted to appeal to their target audience, with bios claiming to be conservative, Christian, and family-oriented.

This approach reflects that used on Twitter by many bot and sock puppet accounts; designed to instil a sense of community between the fake accounts and their Trump-supporting human target audiences. It’s an attempt to create an in-group, define its identity in opposition to the ‘out-group’ and build trust between its members. This makes the messages seem more believable in the target audience’s minds.

The most famous and prolific disinformation blogger of all is ‘Jenna Abrams’, the Kremlin-operated account that fooled Americans for years. From 2014 onwards, the ‘freewheeling American blogger’ who just happened to push a return to segregation and decry ‘PC culture run amok’ had plenty of time to build a significant target audience. The account was perfectly placed to propel divisive messages into the mainstream media, becoming an influential voice just in time for Trump’s election in 2016.

How did this fake identity become so influential? Firstly, ‘Abrams’ touted a lot of controversial opinions. Pushing openly xenophobic and racist views riled up journalists and celebrities on social media, which boosted Abrams’ messages and helped them gain traction and visibility. Many of her tweets and posts went viral.

When something goes viral, it commonly reaches the attention of the mainstream media, and that’s what happened with Abrams. The account was featured in many of the biggest media outlets, which in turn fed back into her following, attracting more people to view, share and comment on her content.

Conventional marketing wisdom has always held that blogging is a highly effective way to build a personal or professional brand. During its early days, a whole host of blogging gurus and advice sites sprang up, such as Problogger, Neil Patel and Chris Brogan. They taught people how to harness the power of online content to define their brand and attract better professional opportunities. Indeed, many Internet celebrities such as Tim Ferriss, Gala Darling and Ash Ambirge of the Middle Finger Project came into existence thanks to the power of blogging.

I did it myself, spending around four years building up a professional identity in the field of nation brand strategy by creating content about the topic and publishing it online. Combined with a lot of social media outreach and networking, plus a bit of well-placed SEO, this brought me a modest range of opportunities: an invitation to keynote at an international conference, an interview with CNN, and a number of consulting engagements based on my supposed expertise – expertise built solely through blogging.

My nation branding blog wasn’t pushing disinformation. But it’s another case in point showing just how easy it is to build a believable personal brand and gain an audience purely through online content creation. With this in mind, it doesn’t surprise me in the least that Russia’s expert info ops strategists are using blogging as part of their disinformation campaigns. I’d wager we can expect more campaigns like these in the future.

What’s more, the line between a personal blog and a professional news source has become blurred. Anyone can set up a website and promote their own content to the world. If it attracts enough viewers and enough engagement, then the site will start to gain value and become a trusted source. With enough traction, a simple blog might even turn into something like Breitbart, with the influence to significantly affect public opinion and bring toxic debates into the mainstream.