Tribalism helps spread online disinformation. Here’s how to stop it.

Online disinformation in Europe

Despite the much-publicised role of online disinformation in bringing Donald Trump to power, disinformation is not just a US problem. It affects European countries too.

Over the past five years, disinformation campaigns on social media have exploited social fault lines and pitted identity groups against one another. We only have to look at the example of Brexit to see the damaging results of disinformation writ large.

More recently, the COVID-19 pandemic has created a prime environment for online disinformation, which has helped propel a rise in harmful anti-vaccine and anti-mask attitudes. In particular, the QAnon grand conspiracy narrative and its many sub-strands have jumped across the Atlantic and spread through European populations.

This disinformation has triggered a wave of real-life harms, including numerous arson attacks on 5G masts in the UK, the Netherlands and France. The European External Action Service (EEAS) has warned of ‘coordinated influence campaigns’ led by several foreign sources.

The threat of political tribalism

Online disinformation campaigns such as these get much of their potency from our human tendency towards tribalism. Understanding this influence is a critical step in fighting back. Tribalism is an evolutionary defence mechanism, in which being accepted by the tribe protects us from external aggressors. Getting kicked out of the tribe leaves us vulnerable to attack.

To reinforce our identity as part of the tribe (or ingroup), we need to define an opposing tribe (or outgroup). This opposite number becomes the target of hostility and derogation, which plays an important role in differentiating ‘them’ from ‘us’. On social media, we can see this tribal dynamic play out around countless political topics, such as pro- vs anti-vaccine, or pro- vs anti-EU.

Social media platforms have specific design features that encourage tribalism, primarily echo chambers and filter bubbles. In the former, groups of like-minded users gather around shared interests, such as in Facebook groups. In the latter, platform algorithms continually serve up more of what people want, creating individually tailored lenses on reality. 

What’s more, one core objective of social media platforms is to keep people using them for as long as possible, to generate more advertising revenue. It’s a simple business decision, but one with far-reaching repercussions in terms of disinformation.

Tribalism is a threat because it makes online populations more susceptible to disinformation. We tend to believe without question information shared within our own tribe, while being suspicious of information shared by opposing tribes.

Disinformation operatives understand this all too well. That’s why online disinformation campaigns have tribal dynamics at their core. They tap into and exacerbate existing social and political divides, usually around emotive issues in which people’s identities are deeply rooted.

Acting tribal on social media makes us more susceptible to sharing disinformation. This happens in two main ways. First, we share content as a way to ‘one-up’ the outgroup, to prove that our viewpoint is correct. But in these knee-jerk moments, we’re less likely to stop and check whether the content is actually true.

Second, we tend to share content that resonates with our tribe, which brings us approval in the form of likes and comments. Those reinforce self-esteem and deliver a nice dopamine hit, while also being an effective way to broadcast our tribal identity. In turn, that attracts more approval, cementing our ingroup status.

How to fight online disinformation

In an ideal scenario, social media platforms would fundamentally rework their design features to get rid of filter bubbles and stop rewarding engagement. But, as these features were baked in at the very beginning, it may be disproportionately challenging to remove them.

What’s more, removing these features would compromise the platforms’ business model of selling ads. As things stand now, they’re unlikely to take this route of their own volition, although regulation may force fundamental changes in the future. Meanwhile, as ordinary social media users, we can take steps to reduce susceptibility to tribalism and, consequently, disinformation.

Here are a few suggestions. For starters, try to avoid getting stuck in circular arguments online. Your opponent could well be a paid troll, tasked with baiting social media users into endless tribal arguments about political issues. Avoiding this sort of engagement can be challenging. The best way to shut it down is to mute or block them immediately, rather than responding in a knee-jerk way without stopping to think.

Next, whenever an item of content resonates with you, stop and think about why. Because of confirmation bias, you’re more likely to accept information that reflects your existing views without questioning it. Does that resonance stop you from caring about whether the content is actually true? Should you cross-check with reliable sources elsewhere online to make sure it is?

Finally, consider the incentives driving someone to share content online. Could it be part of a divisive political agenda? A quick check of their posting history and profile features is usually enough to confirm this. Are they simply trolling, or are they trying to drive attention to certain websites? This tactic is commonly used to make money from online ad clicks, just like the young Macedonians who leveraged tribal fake news around the 2016 US election.

As humans, we can’t change our natural inclinations towards tribalism. But we can still acknowledge and understand how the current social media environment triggers those inclinations. With that in mind, we’re better equipped to control the effect of tribalism on how we engage with information online. This is a key step in combatting the harmful effects of coordinated online disinformation campaigns.
