
4 Simple Ways To Stop Sharing Fake News and Disinformation

Introduction

Fake news – or, to use the more precise term, disinformation – is a major problem that shows no sign of going away. If anything, it's evolving in new ways and becoming more nefarious than before. Elections are always major flashpoints for fake news, and the 2020 US presidential election is no exception.

Many individuals and organizations are working hard to come up with ways to fight fake news and disinformation. In the meantime, ordinary internet users can also do their part to help.

In this post, I’ll discuss four simple ways that you can stop sharing fake news and disinformation.


Break Out Of Dopamine Loops


What is a dopamine loop and how does it relate to fake news and disinformation?

Dopamine is a chemical in the brain that affects functions such as mood, attention and motivation. It also plays a key role in affecting our desire to seek out new things – like information. 

Dopamine loops consist of two parts: wanting and liking. ‘Wanting’ compels you to keep searching for new information, while ‘liking’ is the part that makes you feel satisfied once you find it.

The trouble is, in the dopamine loop, wanting is stronger than liking. This leads to us getting trapped, constantly compelled to seek new information. 

The original designers of social media knew all about dopamine loops. They built the platforms with those loops in mind, knowing they would keep users hooked – and the longer users stayed hooked, the more online ads they were likely to view.

So how does the dopamine loop relate to fake news and disinformation? One major way that we get dopamine hits online is through receiving notifications on social media.

You know, those little red numbers in the top corner of Facebook. Think about how you feel when you open your profile and see a bunch of them waiting for you. You feel good, right? This is dopamine firing in your brain. 

Sharing content with your friends and followers is a great way to get notifications, which gives you even more dopamine. But this is where we find the danger from fake news and disinformation.

When we share to get dopamine hits, we’re less likely to take the time to check whether the information we’re sharing is actually true. After all, we’re constantly in a state of information overload. 

Our addiction to dopamine makes us vulnerable, so one way to stop spreading fake news and disinformation is to break that addiction. We need to avoid getting stuck in dopamine loops, constantly refreshing our social media apps in the hunt for fresh information and notifications.

Quick ways to break the dopamine loop:

  • Turning off your social media notifications altogether
  • Switching your device to grayscale mode (making it less likely to produce a dopamine hit)
  • Pausing to take a few deep breaths before sharing any content 

But there’s another critical way to stop sharing fake news and disinformation…

Avoid Heated Arguments Online


The internet is full of trolls. No matter what you say online, it often feels like someone is always ready to attack you for it. Your natural instinct is to strike back. That’s a very human response. But it risks making you more likely to share fake news and disinformation.  

Why? Because arguing online is another way to get trapped in a dopamine loop. Your antagonist keeps responding, so you keep getting more notifications. You keep arguing back, and the cycle continues.

Often, you’ll share a piece of online content, perhaps a news article, to prove your point and get ‘one up’ on your opponent. When doing so, you probably don’t take the time to fact-check the article. That’s where the danger is. 

What's more, some online trolls aren't just random individuals. They're part of coordinated inauthentic behavior campaigns designed to sow division and hostility around certain topics (usually political ones).

These campaigns usually involve fake news and disinformation too. By arguing with these political trolls, you’re giving them exactly what they want. 

Luckily, there’s an easy way to avoid being drawn into online political arguments. On Twitter, it’s the mute function (either mute conversation, or mute user). On Facebook, you can turn off notifications about a specific post.

These features are great, because they allow you to break out of the dopamine loop and the troll has no idea. They just carry on yelling into the void. Meanwhile, you carry on with your day and remain blissfully unaware.

Check Your Confirmation Biases


Confirmation bias plays a key role in increasing our likelihood of sharing fake news and disinformation. But what exactly is it?

Confirmation bias is our natural tendency to search for, favor and easily believe information that fits with our existing worldview. 

Let's look at how confirmation bias works in practice. Say you see a tweet (falsely) claiming that US presidential candidate Joe Biden has dementia.

You’re a Trump supporter and you don’t like Biden. Thanks to confirmation bias, you’re very likely to hit retweet on this tweet without even stopping to question if it’s really true. 

You also know that your Twitter followers (who have similar worldviews) will appreciate your sharing this tweet. They’re likely to give it lots of attention, including retweets and favorites – i.e. plenty of extra dopamine for you. 

However, if you saw a similar tweet questioning Trump’s mental health, it’s far more likely that you’d be skeptical of it. Of course, this works in the other direction too. Confirmation bias is not unique to either end of the political spectrum. 

It’s dangerous, because it makes people automatically believe (and probably share) content that fits their worldviews, without stopping to check its veracity. 

If you really want to stop sharing fake news and disinformation, you have to approach your social media use knowing that you have confirmation bias. You have to consciously remind yourself what exactly it is that compels you to share a certain post.

It’s not easy, but it’s a necessary step to help stop sharing fake news and disinformation.

Consider Content Incentives For Fake News


Finally, I want to discuss the incentives of social media content. Every post and article on the internet has a certain incentive behind it. For many content creators, publishing articles is a way to drive traffic to their websites, to earn money from online ads. This is their main incentive.

But the social media space is noisy, so those articles need to stand out. That’s why you’ll see so many overblown clickbait titles that often bear little relevance to the actual content of the article.

In particular, politics is a highly charged and emotive topic, so it’s often used to catch attention and drive site traffic. That’s how the infamous Macedonian teenagers made money from pushing pro-Trump fake news in 2016.

Another incentive in content creation is to push a specific worldview, perhaps on behalf of a foreign government. The Kremlin uses this technique a lot.

In the early days of the 2020 pandemic, I found that Russia-linked news sites were pushing conspiracy theory narratives (e.g. the dangers of 5G, Bill Gates being responsible for the coronavirus, the virus being a hoax). These showed up consistently on social media, for example in US- and UK-based Facebook groups.

Before sharing something on social media, consider the incentives of its creator. Are you truly happy to help that clickbait website make more ad money, or to help a hostile foreign government promote its worldview to your fellow countrymen?

Summary

In this article, I presented four simple ways to stop sharing fake news and disinformation. I talked about the following points:

  • How to break out of dopamine loops 
  • How to avoid heated arguments online 
  • Why you should check your confirmation biases
  • Why you should consider the incentives of content

Are you doing any of these already? Let us know in the comments.


Three Easy Ways to Spot Coordinated Inauthentic Behavior Online

After spending almost a year tracking coordinated inauthentic behavior on behalf of Facebook and Google, I’ve developed a good sense of how an inauthentic social media user looks and acts online. 

Each platform has its own nuances. But many universal patterns indicate the likelihood of coordinated activity. Here I’ll discuss three common indicators – and how you can effectively spot them across any social media platform. 

But first, let’s take a look at what coordinated inauthentic behavior actually means. We’ll also briefly explore some controversy around its definition.

What is coordinated inauthentic behavior?

Two years ago, Facebook first coined the phrase ‘coordinated inauthentic behavior’ (known in the industry as CIB).

Facebook defines CIB as follows:  “When groups of pages or people work together to mislead others about who they are or what they’re doing.”

Facebook and other platforms are keen to highlight the 'behavior' side of the phrase. This helps shield them from accusations of bias against any particular political ideology.

People would be quick to make those accusations if Facebook simply focused on removing inauthentic content. It would raise the question of which content should get removed and which should stay. This would trigger wider concerns about freedom of speech and the First Amendment.  

The double standards controversy

Writing for Slate, Harvard researcher Evelyn Douek argues that Facebook's definition of coordinated inauthentic behavior lacks clarity.

She argues that certain groups will consider certain kinds of CIB acceptable, whereas others will not. Douek draws on the example of the TikTok video that prompted hundreds of young people to artificially inflate expected attendance at a Donald Trump rally by reserving tickets en masse.

Douek contrasts that real-life scenario with a hypothetical example of QAnon supporters doing the same to a Joe Biden rally. She highlights the risk of applying double standards to CIB, as well as to disinformation.

That’s a real concern, especially in deeply polarized times. Polarization is the key driving force behind this issue. We assume that ‘our’ side is doing good, while ‘their’ side is doing bad. That view influences how we judge the motives of coordinated inauthentic behavior. 

For the purpose of this post, we’ll use the official CIB definition. It’s still the standard that most social media platforms use. But it’s important to know that the term is not perfect, and has attracted controversy.

Is coordinated inauthentic behavior the same as misinformation or disinformation?

No. But misinformation and disinformation certainly play a role in CIB. For example, members of a Twitter botnet might work together to constantly pump out and amplify misleading tweets about a political figure.

Or groups of paid operatives might enter Facebook groups and astroturf the discussion about coronavirus by posting lots of comments about the dangers of vaccines. Astroturfing is a common CIB technique, used to create the appearance of legitimate 'grassroots' consensus on certain topics.

OK, I’ve answered some key questions about coordinated inauthentic behavior. Now let’s look at three ways to spot it.

What are some key indicators of coordinated inauthentic behavior?

Identity Signaling

The concept of identity is at the heart of many coordinated inauthentic behavior and disinformation efforts. CIB campaigns often play on existing social and political divisions within their target audience.

For example, they might astroturf a widespread sense of approval for a certain government policy, such as a tougher stance on immigration. Immigration is an emotive issue for many people, and has the potential to drive ingroup vs outgroup sentiments.

When examining accounts for signs of inauthenticity, I consider overt identity signals, especially political ones, to be a red flag. These could include national flags, divisive political hashtags (e.g. #MAGA (make America great again) or #FBPE (follow back pro-Europe)), or a bio stuffed with identity-promoting keywords like “Army vet, patriot, Trump supporter, family man, God lover”.

Taken together, those signs indicate that the profile primarily exists to promote a certain political identity – a common giveaway of astroturfing or coordinated inauthentic behavior.
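For readers who like to tinker, here's a minimal Python sketch of how this kind of screening might be automated. The hashtags come from the examples above, but the keyword list, scoring and threshold are my own illustrative assumptions, not a production detection rule – a high score is only a prompt to look more closely, never proof of inauthenticity.

```python
import re

# Illustrative lists only - real investigations use richer, context-specific lexicons.
DIVISIVE_HASHTAGS = {"#maga", "#fbpe"}
IDENTITY_KEYWORDS = {"patriot", "army vet", "trump supporter", "family man", "god lover"}

def identity_signal_score(bio: str) -> int:
    """Count overt identity signals in a profile bio (a rough heuristic, not proof)."""
    bio_lower = bio.lower()
    hashtags = set(re.findall(r"#\w+", bio_lower))
    score = len(hashtags & DIVISIVE_HASHTAGS)
    score += sum(1 for keyword in IDENTITY_KEYWORDS if keyword in bio_lower)
    return score

bio = "Army vet, patriot, Trump supporter, family man, God lover #MAGA"
print(identity_signal_score(bio))  # 6 -> worth a closer look, alongside other indicators
```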

Copy Paste Sharing

It’s common to find groups of accounts sharing links or posts accompanied by the exact same text (e.g. in a quoted tweet or a Facebook share). This isn’t normal behavior for an ‘organic’ social media user, so it’s a suspicious sign.

Copy paste sharing usually indicates a campaign designed to amplify a certain message. It is likely that Twitter accounts constantly tweeting the same messages in tandem are automated (i.e. bots).
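To illustrate the idea, here's a minimal Python sketch that groups posts by their normalised share text and surfaces clusters where several distinct accounts posted exactly the same wording. The data format and the threshold of five accounts are assumptions made for the example, not an established standard.

```python
from collections import defaultdict

def copy_paste_clusters(posts, min_accounts=5):
    """Group posts by identical (normalised) text and return suspicious clusters.

    `posts` is assumed to be an iterable of (account_id, text) pairs; the
    threshold of five distinct accounts is an arbitrary illustration.
    """
    by_text = defaultdict(set)
    for account_id, text in posts:
        normalised = " ".join(text.lower().split())  # collapse case and whitespace
        by_text[normalised].add(account_id)
    return {text: accounts for text, accounts in by_text.items()
            if len(accounts) >= min_accounts}

posts = [
    ("user_a", "Candidate X is the ONLY one who can save us!"),
    ("user_b", "Candidate X is the ONLY one  who can save us!"),
]
print(copy_paste_clusters(posts, min_accounts=2))
```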

Aggressive Political Agenda

When I’m seeking signs of coordinated inauthentic behavior, I always examine the posting history of a social media account. I check whether all its posts support a specific political agenda (usually in an aggressive and antagonistic way). If so, that’s another red flag.

Sure, regular people can also post aggressively in support of a political agenda. But it’s less likely that those posts will make up the whole of their posting history. A one-topic account is a key sign of coordinated inauthentic behavior. 
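As a rough illustration of the one-topic check, the sketch below estimates how concentrated an account's posting history is on a single political topic. The keyword set and the simple ratio are illustrative assumptions; real analysis would weigh far more context than this.

```python
def topic_concentration(posts, topic_keywords):
    """Return the fraction of posts that mention at least one topic keyword."""
    if not posts:
        return 0.0
    hits = sum(1 for post in posts
               if any(keyword in post.lower() for keyword in topic_keywords))
    return hits / len(posts)

# Hypothetical posting history and keyword set, purely for illustration.
history = ["Biden is a disaster, wake up!",
           "MAGA 2020!",
           "Another corrupt Democrat exposed"]
ratio = topic_concentration(history, {"biden", "maga", "democrat", "trump"})
print(f"{ratio:.0%} of recent posts on one political topic")  # 100% -> possible one-topic account
```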

Summary

In this post we examined the origins of the term ‘coordinated inauthentic behavior’. Also, we explored one of the key debates around the validity of its definition. Finally, we looked at three simple ways to spot coordinated inauthentic behavior on social media platforms.

  • First, I looked at identity signaling, where accounts project a strong sense of a certain identity (usually political) via profile hashtags, profile imagery, bio information, or posting history. 
  • Second, I discussed copy paste posting, where multiple accounts share something with the exact same accompanying text. This is often a sign of automated coordinated inauthentic behavior. 
  • Finally, I highlighted the significance of one-topic accounts that support a certain political agenda, usually in an aggressive way.

What’s the Difference Between Disinformation and Misinformation?

What’s the difference between disinformation and misinformation?

(I get asked this question a lot, so I thought it was time to write about it).

The difference between disinformation and misinformation lies in the presence of intent.  

The Difference Between Disinformation and Misinformation

Let's look at each term in turn to understand the difference between disinformation and misinformation.

Misinformation encompasses a wide range of misleading content, from rumour to satire to human error. The crucial point is that the term implies no intent to deceive.

Disinformation, on the other hand, has a more specific and sinister meaning. Disinformation is created with an intention to deceive.

For example, a disinformation campaign could involve a doctored video: a political candidate's gaffes, spliced together to imply that he has dementia.

Malinformation and Fake News

We also have malinformation. This is information based on reality, used maliciously to harm its target.

The target could be an individual, a country or an organisation. The 2016 leak of Hillary Clinton’s private emails is one such example of malinformation.

Finally, the one everyone knows best: fake news.

Donald Trump popularised the term in 2016 (although Hillary Clinton was apparently the first to use it).

However, disinformation researchers prefer not to use it, as it is both politicised and imprecise. ‘Fake news’ unhelpfully conflates the three primary types of misleading information already mentioned.  

New digital technology means that misleading online content shifts and evolves rapidly. There are other subcategories of dubious online content worth knowing about. Here’s a rundown.

Other Misleading Information

Satire

Satire has existed for thousands of years. It highlights failings in society and politics using humour, irony, sarcasm or mockery.

Shakespeare often used satire in his work. For example, in Hamlet, he pointed out that royalty is not always fit to run a country properly. Now in the internet age, satirical news websites have become popular, such as The Onion and NewsThump.

But it’s not always clear where satire ends and disinformation begins. It’s easy for political news websites to spread divisive narratives under the guise of satire. The limited attention spans of most online news audiences make it even easier.

In fact, many social media users will share partisan political content that triggers their tribal instincts. They may not notice that it comes from a website that has a ‘satire’ disclaimer. 

Manipulated Images  

Images have a more immediate impact than text, making them an effective tool in disinformation. Manipulation of images is easy with freely available online tools, or Photoshop, and can look very convincing.

Genuine images can be used in misleading contexts, such as during the Westminster Bridge terrorist attack of 2017.

In this case, a Twitter account later attributed to the Kremlin shared an image of a woman wearing a hijab. The tweet included a hostile caption claiming that the woman was ignoring injured victims.

The tweet was designed to trigger anti-Muslim sentiment, and alt-right influencers shared it widely. It garnered thousands of retweets and likes. But although the image was genuine, the context was not.

Deepfakes

As a new form of disinformation, deepfakes have attracted a lot of hype in the last couple of years. They are a type of AI-generated synthetic media in which one person's face and/or voice replaces the face or voice of another.

A deepfake can make it look like a person is saying something they’re not. This has many obvious use cases for disinformation. So far, porn has been the main area where deepfakes are being used. But in a handful of cases they’ve played a role in disinformation efforts.  

We may have overstated the immediate risk of deepfakes. But they do have potential to upend the information environment. My biggest concern is that deepfakes would destroy the notion of absolute truth.

Once upon a time a taped voice recording would hold up in court (e.g. Watergate). In later years, CCTV footage became the absolute truth. But a world in which deepfakes are prevalent would no longer have an absolute truth. It would cast doubt on every possible form of recorded evidence.

Shallowfakes and Cheapfakes

In addition to deepfakes, we need to consider shallowfakes, also known as ‘cheapfakes’. People create these doctored videos without the help of fancy AI tools, using simple video editing software.

Shallowfakes are far more common than their AI-generated cousins. And social media platforms seem to overlook them. Facebook, for example, only bans doctored videos made with AI, i.e. deepfakes.

In February 2020, shallowfakes caused quite a stir. A video circulated on social media that had been edited to show Nancy Pelosi shredding a copy of Donald Trump's speech at moments when he was honouring guests during his State of the Union address.

Memes

The word ‘meme’ has become synonymous with random humorous cultural images superimposed with chunky white text. Below, a small selection of my recent COVID-19-related favourites.

[Images: the 'distracted boyfriend' anti-vaxxer meme; Hannibal Lecter in a mask; a meme about failed plans in 2020]

In fact, the word ‘meme’ can refer to any piece of cultural content (video, image, song, article, etc) that social media users spread virally. (That typical chunky text/image combo that we all call a meme is actually known as an ‘image macro’).

Meme creators often use the image macro format to convey partisan political sentiments. Both sides of the political spectrum shared inflammatory memes during the 2016 US presidential campaign.

Alt-right supporters also used the same format to spread some highly offensive views, such as racism and anti-semitism in ‘Pepe the Frog’ memes.

Image macro political memes are generally hyper-partisan in nature and play a role in perpetuating conflict between opposing groups (e.g. Democrats and Republicans).

[Image: an image macro meme spreading disinformation about Hillary Clinton]

Memes are totally devoid of any nuanced explanation. The viewer gets a quick hit of emotion that has a triggering effect. This taps into an oversimplified outrage that has become a core feature of today’s political life online. 

AI-Generated Voiceovers 

These are a bit weird and random. During a recent project for YouTube, I discovered some strange videos spreading false information about COVID-19.

The voiceover didn't sound human at all. It was robotic and monotone, as if reading from a script. I don't know whether these videos have an official name, but they're something to keep an eye on.

From Disinformation to Misinformation (and back again?)

In closing, I’ve been thinking about this question: Does content shift from disinformation to misinformation as it travels across the internet? 

Malicious intent defines disinformation. Could a piece of content shift between definitions according to the intention of the most recent individual who shared it?  

For example, a person shares a narrative in their local Facebook group, claiming COVID-19 is curable with megadoses of bleach. It’s been debunked, of course, but (somehow) the person doesn’t know.

They innocently think they’re helping their network by passing on ‘valuable’ health information that might help cure the virus. They don’t intend to deceive. So shouldn’t we class it as misinformation?

Let's say that same piece of content originated in a troll farm. Its creators intended it to deceive populations and compromise public health efforts. It started life as disinformation.

We could say the same for conspiracy theories. These are often spread by ‘true believers’ – genuinely invested in their mission to enlighten the ‘sheeple’ and save the world.

Are they being malicious if they believe it’s all true? Does that still count as disinformation? It would be easier to make this distinction if we could reliably trace the content back to its source. But that’s not always easy to do. 

Those who create disinformation know how to take advantage of natural human biases and triggers. In many cases, it’s enough to simply ‘seed’ harmful disinformation into the social media stream. Ordinary social media users will then do the heavy lifting. Therein lies much of the danger. 

4 Things I’ve Learned From Analysing Russia-Aligned COVID-19 Coverage

Much social unrest has emerged amid COVID-19, such as anti-lockdown protests, attacks on 5G masts, and violent reactions from people asked to wear masks. As I write this, a murky far-right group called 'UK Freedom Movement' is organising a new spate of anti-lockdown protests around the UK.

This month I’ve been reviewing Russia-aligned news sites. I’ve been looking for key narratives on COVID-19 and the US election. I’ve examined two types of sites: those directly linked to the Russian state, and those with a similar political stance. Many sites share the same core group of authors.

Here are some of my findings, related to the current discussions on social unrest, conspiracy theories and the infodemic.

COVID-19 narratives are consistent across websites

Topics covered on these sites reflect COVID-19 conspiracy narratives found on social media since the pandemic began. Here are three prime examples.

Bill Gates the ‘criminal globalist’

The Microsoft founder features regularly, from the Kremlin-funded news outlet InfoRos to the Russia-aligned news site Fort Russ. Narratives unfold along similar lines.

They claim that Gates is the ‘criminal globalist’ ringleader of a cabal using coronavirus as a smokescreen to impose mandatory tracking and mandatory vaccines.

Justifications for singling out Gates usually cite his prescient 2015 talk, in which he highlighted the global risk of a pandemic, or the Gates Foundation's funding of the WHO.

Herd immunity vs lockdown

Another key narrative centres on the benefits of herd immunity, often juxtaposed against the negatives of lockdown. Sweden is the poster child for herd immunity. Lockdown is presented as a corrupt government-led attempt to remove people’s basic freedoms.

It’s not hard to imagine how this framing could trigger people who value freedom above all else – and cause events like the anti-lockdown protests that have been cropping up across the US and UK.

The smouldering culture war of Trump and Brexit has extended into new battle lines of ‘lockdown vs herd immunity’. As a result, pandemic control efforts are at risk.

Scapegoating China

China is presented as an innocent player in the pandemic, while the US is accused of targeting China with information warfare in order to pin the blame for the coronavirus on it.

In some articles, the authors claim that the pandemic could create a ‘New Cold War’ between the US and China, with severe consequences for the global economy.

Other sites take it even further, claiming that COVID-19 could spark a nuclear war between the US and a newly formed Russia/China alliance.

Narratives claim that COVID-19 will reshape the world 

Another popular theme is how the outcome of the 2020 US election, combined with the effects of coronavirus, will cause the US to lose its hegemony. The result, these sites claim, will be a shift towards multilateralism.

Some sites claim coronavirus will cause Western governments to “face a legitimacy crisis like never before”, eventually causing so much chaos that it will reshape the global order.

To reinforce this point, they highlight how the US has failed to protect its people from coronavirus and so can no longer be called a superpower. Multilateralism is presented as inevitable, given the unprecedented crisis the world now faces.

Anti-imperialism has been a key feature of pro-Russian media for decades. It overlaps with certain far-left lines of thinking, especially among those who critique Western military actions around the world.

They don’t support Trump

“Voters now must choose between Donald Trump, an unstable, incompetent president whose blatant narcissism has been on full display as the nation suffers from coronavirus, and the former vice-president who will diligently represent the rich and govern for their good above all others.”

American Herald Tribune

We often assume that Russia-aligned media is pro-Trump. In fact, many of these news sources criticise Trump as much as Biden. Criticisms of Trump include poor handling of the pandemic, and ‘imperialist shenanigans’ in foreign policy.

Framing of Biden often paints him as sleazy, citing the recent Tara Reade case as evidence. Some articles suggest he may have dementia. Such framing of both candidates as hopeless choices could be a subtle attempt at voter suppression. 

They frame themselves as ‘independent’ thinkers

Most of these websites present themselves as bastions of independent thought. They encourage readers to go beyond the mainstream and discover ‘new’ perspectives.

It reflects a common refrain among social media conspiracy theorists, who often talk about the need to 'do your own research'. Often, that translates as 'using Google or YouTube to find content that reinforces one's existing views'.

Pro-Russia news sites tap into this way of thinking. They use it as a defining aspect of their reporting. It’s a message likely to resonate with the exact kind of person who questions everything.

What’s the link to real life unrest? 

Looking at these websites in aggregate, it’s easy to see how their typical narratives link to social unrest during the pandemic.

I’ve noticed the same themes popping up over and over on social media. Ordinary citizens share them in mainstream Facebook groups (e.g. local news and discussion groups).

These ideas have become rooted in public consciousness. They drive a growing sense of distrust in Western governments, particularly in the UK and US, where populations are already polarised. Both countries have handled the pandemic badly, so it’s easier to create scepticism among a fearful population.

If we were to survey the beliefs of anti-lockdown protesters, 5G mast attackers, and those behind mask-related violence, I bet we'd find echoes of the same narratives found across these 'alternative' news websites – many of them either funded by the Russian government or publishing work from the same core group of authors.

Dopamine and Disinformation: How To Quit Feeding The Trolls

Dopamine and disinformation are intrinsically linked. In this article, I’ll explain how this works, and look at some ways to lessen the effects.

I used to spend a lot of time arguing with strangers on the Internet.

It normally went like this: I would post something political and the trolls would arrive. I’d get stuck in endless back-and-forths with them. It was a futile endeavour. But somehow I felt compelled to keep on engaging.

On Twitter or Facebook, new notifications would keep coming in, alerting me to yet another comment. Then I'd engage again. It wasted my time, raised my anxiety levels, and made me feel more aggressive.

I’d gotten stuck in a dopamine loop.

This happens to us a lot, for example when we can’t stop refreshing our social media feeds. But what exactly is dopamine and what does it do?

How dopamine creates addiction

Dopamine is a chemical created in the brain. It affects many functions, such as thinking, mood, attention and motivation. It also plays a critical role in triggering our desire to seek out and search. In short, dopamine makes us want to search out information.

Two parts are at play in the dopamine loop. There’s the ‘wanting’ part and the ‘liking’ part. The wanting part makes you take action (i.e. searching), while the liking part satisfies you and makes you stop searching. But the wanting part is stronger than the liking part. And that’s how we get trapped in endless scroll mode.

Another important part of the dopamine system relates directly to the issue of engagement with trolling comments.

The system is very sensitive to cues that indicate the possibility of a reward. On social media, notifications are the cues. They make the addiction even stronger. Anticipation is key. If we got a reward every time we logged in, it would weaken the addiction. It’s the uncertainty that gets us hooked.

So how did these behavioural psychology elements get incorporated into our Internet experience in the first place?

Engineered addiction and trolling

It all started with a researcher called BJ Fogg and his Persuasive Technology Lab at Stanford University. In September 2007, Fogg and his students began work on a formula that would change the world — and all of our daily lives.

They built Facebook apps using techniques from behavioural psychology, techniques that could engineer addiction, such as the dopamine loop that I described above. Fogg developed a formula, suggesting that people will act when three forces converge: motivation, trigger, and ability.

Let’s apply this formula to a hostile online interaction. The motivation is your desire to convince the other person that your opinion is right, or to get ‘one up’ on them; the trigger is seeing another notification on your social media app, indicating that your opponent has responded; and the ability is having your phone to hand, which lets you check right away and respond.
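As a toy illustration of that convergence, here's a short Python sketch. Reducing the three forces to booleans is my own simplification of Fogg's model, not his formulation, but it shows why removing just one force (for example, the trigger) stops the behaviour.

```python
def will_act(motivation: bool, trigger: bool, ability: bool) -> bool:
    """Crude boolean reading of Fogg's model: all three forces must converge."""
    return motivation and trigger and ability

# The troll-reply scenario above: you want to win, a notification pings, phone in hand.
print(will_act(motivation=True, trigger=True, ability=True))   # True  -> you fire off a reply
# Remove the trigger (e.g. mute the thread) and the behaviour doesn't fire.
print(will_act(motivation=True, trigger=False, ability=True))  # False
```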

These techniques tap directly into the workings of the human brain, making them extremely powerful.

So why do social media platforms leverage such addictive and manipulative design practices?

Simple: their profitability depends on it.

The ad-supported business model means that users spending more time on the app leads to more profit for the company. All that time you spend arguing with trolls on Facebook is making the platform more valuable to potential advertisers.

Dopamine and disinformation

Arguing online also relates to dopamine and disinformation. It can make us more susceptible to the latter. The dopamine loop gives a powerful tool to those who seek to divide us. It perpetuates a sense of an adversarial environment and keeps us always on the attack.

When we divide ourselves into tribes and adopt a hostile attitude towards a perceived outgroup, we risk becoming more susceptible to harmful disinformation. We are more likely to share content that is cognitively resonant with our existing beliefs and which reflects the attitudes of our tribe.

The dopamine loop also affects our interactions with agreeable content. When we post something that our tribe members approve of, we’ll receive dopamine hits via our notifications of likes and encouraging comments. That boosts our self-esteem and keeps us coming back for more.

So what can we do to fix the trolling problem and reduce our susceptibility to disinformation?

Breaking out of the loop

Short-term practical solutions mainly involve adapting our devices to help break the dopamine loop. For example, we could make our phones less appealing by changing the screen to grayscale mode, or switching off all notifications.

But we can also tackle adversarial online behaviour in another way.

‘Don’t feed the trolls’ has become an Internet cliché. But it starts to make sense when thinking about the dopamine loop and anticipatory reward.

Everyone who posts online is looking for a response. They want to perpetuate the dopamine loop. If we can maintain our self control by denying them that response, then we can break the cycle.

I’ve managed to break my own addiction to engaging with trolls. On Twitter, I simply mute the conversation. I can’t see the person anymore, and I receive no notifications of any of their comments. It makes the troll invisible to me, breaking the dopamine loop and allowing my brain to focus on other things.

On Facebook, I simply turn off notifications for a particular post. This has the same effect as muting on Twitter. Both platforms also offer a blocking option. I don’t normally use this because it gives trolls the satisfaction of knowing they’ve got a response. Muting is better, because it means they’re left wondering if I’m ignoring them. They just keep on yelling into the void.

Battleground or community

If we could all break the cycle and quit feeding the trolls, then adversarial disinformation and influence ops could lose much of their power. The online environment would feel like more of a community instead of a battleground. In turn, this may help reduce polarisation.

But it has to be done en masse. A handful of people breaking the cycle won’t be enough to change the overall environment. As social media is designed to be addictive, a wider intervention would be necessary to encourage people to do this.

Of course, the social media platforms have all the power. They could redesign their structures to destroy these dopamine loops and disincentivise disinformation. But their ad driven business model means they don’t have an incentive to do so.

Nevertheless, we can still improve our individual online experiences by taking steps to break the dopamine and disinformation cycle. Part of doing this is to disengage from users who aim to trap us in never-ending adversarial debates around polarising topics.