
Category: Social Media

Disinformation, Dopamine, And How To Quit Feeding The Trolls

I used to spend a lot of time arguing with strangers on the Internet.

It normally went like this: I would post something political, the trolls would arrive, and I’d get stuck in endless back-and-forths with them. It was a futile endeavour. But somehow I felt compelled to keep on engaging.

On Twitter or Facebook, new notifications would keep coming in, alerting me to yet another comment. Then I’d engage again. It wasted my time, raised my anxiety levels, and made me feel more aggressive.

I’d gotten stuck in a dopamine loop.

This happens to us a lot, for example when we can’t stop refreshing our social media feeds. But what exactly is dopamine and what does it do?

How dopamine creates addiction

Dopamine is a chemical produced in the brain. It affects many functions, such as thinking, mood, attention and motivation. It also plays a critical role in triggering our drive to seek and search. In short, dopamine makes us want to hunt for information.

Two parts are at play in the dopamine loop. There’s the ‘wanting’ part and the ‘liking’ part. The wanting part makes you take action (i.e. searching), while the liking part satisfies you and makes you stop searching. But the wanting part is stronger than the liking part. And that’s how we get trapped in endless scroll mode.

Another important part of the dopamine system relates directly to the issue of engagement with trolling comments.

The system is very sensitive to cues that indicate the possibility of a reward. On social media, notifications are the cues. They make the addiction even stronger. Anticipation is key. If we got a reward every time we logged in, it would weaken the addiction. It’s the uncertainty that gets us hooked.

So how did these behavioural psychology elements get incorporated into our Internet experience in the first place?

Engineered addiction and trolling

It all started with a researcher called BJ Fogg and his Persuasive Technology Lab at Stanford University. In September 2007, Fogg and his students began work on a formula that would change the world — and all of our daily lives.

They built Facebook apps using techniques from behavioural psychology, techniques that could engineer addiction, such as the dopamine loop that I described above. Fogg developed a formula, suggesting that people will act when three forces converge: motivation, trigger, and ability.

Let’s apply this formula to a hostile online interaction. The motivation is your desire to convince the other person that your opinion is right, or to get ‘one up’ on them; the trigger is seeing another notification on your social media app, indicating that your opponent has responded; and the ability is having your phone to hand, which lets you check right away and respond.
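
To make the idea of converging forces concrete, here is a minimal sketch in Python. Fogg’s model is qualitative, so the multiplicative threshold, the numbers and the function name below are my own assumptions for illustration, not part of his formula.

```python
# Illustrative sketch only: Fogg's model is qualitative, but a crude
# multiplicative threshold version is one way to reason about it.
# The threshold value and example scores are invented for illustration.

def likely_to_act(motivation: float, ability: float, trigger_present: bool,
                  threshold: float = 0.5) -> bool:
    """Return True if behaviour is likely: a trigger arrives while
    motivation x ability is above some activation threshold."""
    if not trigger_present:
        return False  # no prompt, no action
    return motivation * ability > threshold

# The troll-feeding scenario: strong urge to win the argument,
# phone in hand, notification just arrived.
print(likely_to_act(motivation=0.9, ability=0.8, trigger_present=True))   # True
# Same urge, but notifications are switched off (no trigger) -> no action.
print(likely_to_act(motivation=0.9, ability=0.8, trigger_present=False))  # False
```

Notice that removing the trigger alone is enough to stop the behaviour in this toy model, which is one way of seeing why switching off notifications (discussed later) works so well.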

These techniques tap directly into the workings of the human brain, making them extremely powerful.

So why do social media platforms leverage such addictive and manipulative design practices?

Simple: their profitability depends on it.

The ad-supported business model means that the more time users spend on the app, the more profit the company makes. All that time you spend arguing with trolls on Facebook is making the platform more valuable to potential advertisers.

Dopamine and disinformation

Arguing online also affects our susceptibility to disinformation. The dopamine loop gives a powerful tool to those who seek to divide us. It perpetuates a sense of an adversarial environment and keeps us always on the attack.

When we divide ourselves into tribes and adopt a hostile attitude towards a perceived outgroup, we risk becoming more susceptible to harmful disinformation. We are more likely to share content that is cognitively resonant with our existing beliefs and which reflects the attitudes of our tribe.

The dopamine loop also affects our interactions with agreeable content. When we post something that our tribe members approve of, we’ll receive dopamine hits via our notifications of likes and encouraging comments. That boosts our self-esteem and keeps us coming back for more.

So what can we do to fix the trolling problem and reduce our susceptibility to disinformation?

Breaking out of the loop

Short-term practical solutions mainly involve adapting our devices to help break the dopamine loop. For example, we could make our phones less appealing by changing the screen to grayscale mode, or switching off all notifications.

But we can also tackle adversarial online behaviour in another way.

‘Don’t feed the trolls’ has become an Internet cliché. But it starts to make sense when thinking about the dopamine loop and anticipatory reward.

Everyone who posts online is looking for a response. They want to perpetuate the dopamine loop. If we can maintain our self-control and deny them that response, then we can break the cycle.

I’ve managed to break my own addiction to engaging with trolls. On Twitter, I simply mute the conversation. I can’t see the person anymore, and I receive no notifications of any of their comments. It makes the troll invisible to me, breaking the dopamine loop and allowing my brain to focus on other things.

On Facebook, I simply turn off notifications for a particular post. This has the same effect as muting on Twitter. Both platforms also offer a blocking option. I don’t normally use this because it gives trolls the satisfaction of knowing they’ve got a response. Muting is better, because it means they’re left wondering if I’m ignoring them. They just keep on yelling into the void.

Battleground or community

If we could all break the cycle and quit feeding the trolls, then adversarial disinformation and influence ops could lose much of their power. The online environment would feel like more of a community instead of a battleground. In turn, this may help reduce polarisation.

But it has to be done en masse. A handful of people breaking the cycle won’t be enough to change the overall environment. As social media is designed to be addictive, a wider intervention would be necessary to encourage people to do this.

Of course, the social media platforms have all the power. They could redesign their structures to destroy these dopamine loops and disincentivise disinformation. But their ad-driven business model means they have no incentive to do so.

Nevertheless, we can still improve our individual online experiences by taking steps to break the dopamine loop and reduce our susceptibility to disinformation. Part of doing this is disengaging from users who aim to trap us in never-ending adversarial debates around polarising topics.

Analysing ‘Bleachgate’ Responses in Pro-Trump Facebook Groups

Much of the world was shocked this week as Donald Trump claimed injecting disinfectant into the body could be an ‘interesting’ way to cure COVID-19. He later tried to back-pedal, claiming he was being sarcastic. But that wasn’t how most of the world took it.

The dangerous comments were widely lambasted across the mainstream media and among much of the ordinary public. Such was the furore over Trump’s remarks that a major disinfectant firm even issued a statement urging the public not to inject or drink any of their products.

But members of Facebook groups dedicated to conspiracy theories displayed quite the opposite reaction. 

I examined some of these groups to provide comment for an article in CodaStory. I’d previously gathered this list of groups because of their strong focus on various ‘corona conspiracies’.

These include 5G causing the virus, the virus being a US bioweapon, and Bill Gates having orchestrated the ‘virus hoax’ as part of his ambition to enforce a worldwide vaccine programme. Many of the groups also centred on the QAnon conspiracy theory.

You might expect the suggestion of injecting bleach to be a step too far even for these largely pro-Trump groups. Not so. 

In my initial observation of the groups, I noticed three distinct ways in which the members attempted to account for Trump’s bizarre statement.

First, that Trump was just ‘playing the media’, and that anyone who believed he meant what he said must be stupid. Commenters also attributed all the negative media coverage to ‘yet another’ MSM (mainstream media), liberal, or Democrat attempt to smear Trump.

Secondly, some commenters claimed Trump had been quoted ‘out of context’. According to them, he was speaking ‘more generally’ about possible ways to treat COVID-19.

Some highlighted a fact check article from far-right news outlet Breitbart. But nowhere did anyone acknowledge that Trump had been videoed making these claims for everyone to see and hear. 

The third claim relates more closely to other COVID-19 ‘miracle cures’. One commenter claimed that Trump must have been referring to treatments that already exist, such as UV light therapy and ozone therapy.

Things got more interesting when the commenter linked the injecting bleach comments to the popular ‘Vitamin C as miracle cure’ narrative.

They claimed that taking Vitamin C causes hydrogen peroxide to build up in the body. Since hydrogen peroxide has a disinfectant effect, the reasoning goes, Trump’s comments actually have a basis in medical fact.

These three counter-narratives about Trump’s comments all attempt to rationalise what would normally be seen as an influential figure making a dangerous and irresponsible remark.

Rationalisations like these are rooted in tribal attitudes: for example, the claim that Trump’s comments were purposefully misinterpreted as part of a ‘libs’ or ‘Dems’ smear attack. Once again, this reinforces the existing divide between populist pro-Trump narratives and the mainstream.

The question remains: How many of these Facebook group members are genuine American citizens? Facebook itself is the only entity that could properly attribute the accounts. And it doesn’t seem to be giving much away.

I suspect group members are a mix of genuine Trump supporters and astroturfers working to stir up tribal hatred of the ‘other side’.

Tribal attitudes can be dangerous, particularly in relation to public health. People in the pro-Trump tribe are more likely to challenge messages from the perceived ‘outgroup’ (‘experts’ and the ‘MSM’) such as critical public health advice from the WHO.

A similar dynamic has fuelled recent anti-lockdown protests across the US, which may already have spread the virus further and compromised the entire country. Astroturfing was certainly a factor there; there’s no reason why it couldn’t be influencing these groups too.

Tribalism In The Time Of Coronavirus

As I write this, the world has descended into a major crisis, with effects more far-reaching than anything I’ve experienced in my lifetime. A powerful virus has swept onto the scene and is now ripping its way through the world. Barely any country has been spared.

Here in the UK, the coronavirus crisis is getting worse by the day. But merely observing the city streets on this sunny spring Sunday would give no indication of the gravity of the situation. Indeed, some UK tourist spots, notably Snowdon, experienced their ‘busiest day in living memory’. That’s quite something at a time when a highly contagious virus is on the loose.

In contrast, the streets of Paris, Lisbon and Barcelona are deserted. Most EU countries have issued a decisive response, putting their populations under strict lockdown to try and curb the spread of the virus. The UK government hasn’t followed suit.

Britain is saddled with unfortunate leadership in such a time of crisis. Messages from central government have been unclear and have arrived far too late. Many people have died. Amid the frenzied warnings from other countries, tribalism rooted in the impulses that drove Brexit still bisects British society, even influencing how we perceive choices between health and possible death.

Brexit tribalism could be seen as a barometer for who will approve or disapprove of Boris Johnson’s handling of the coronavirus situation. No scientific study has yet been conducted to prove or disprove this, but research from Cambridge has shown that Leave (and Trump) voters have a strong tendency to believe conspiracy theories.

So if I may hypothesise for a moment, it would go as follows.

Those who believe Johnson is doing well, and who don’t believe self-isolation is necessary, are more likely to be Leave voters. Those who believe Johnson is doing the wrong thing, and that we should follow the majority of the EU (and the world) into lockdown, are more likely to be Remain voters.

I can’t help but wonder if these divided attitudes are linked to the government’s aggressively anti-EU narrative. Could it possibly be that our leaders are reluctant to implement lockdown because it would mean them falling into line with the EU? The British government can’t possibly be seen to do that. On the contrary, it must do the exact opposite. After all, there’s a voter base to keep happy.

This tribal stance has filtered down to the population. People’s cavalier real-life behaviour at a critical juncture risks the health and safety of us all.

We’ve gone beyond Brexit concerns now. Freedom of movement is no longer the most important thing at stake. Continued tribal attitudes in the UK could now lead to significant numbers of deaths. The reckoning has arrived. No matter what side of the political spectrum we’re on, we must ensure that tribalism does not cloud our actions on tackling the virus, as the New European so rightly points out.

There’s another factor influencing public opinion around coronavirus: online disinformation. It’s been a key part of turbocharging existing tribal divisions.

Based on my research so far, I’ve seen the following positions solidifying into recurring narratives. Many are from sources that originate in the United States, but the shared language and overlapping ideologies mean they can mostly be considered as UK-relevant too.  

Narratives primarily from conservative/right-wing/pro-Leave sources:

  • The coronavirus is a hoax used as a smokescreen for elites to take control of society
  • It’s no worse than the flu, so there’s no need to believe WHO or UN advice (in fact we shouldn’t trust them because they may be part of the elite conspiracy)
  • Social distancing is unnecessary / too extreme
  • China is to blame for all this. To quote Trump, coronavirus is ‘the Chinese virus’ 

Narratives primarily from liberal/left-wing/centrist/pro-Remain sources:

  • The coronavirus is real, serious, and affects everyone 
  • It can’t be compared to flu
  • We should trust advice from WHO/UN and other legitimate experts
  • Social distancing, and possibly a lockdown, is necessary to save lives across the wider population.

Most of the disinformation that I’ve observed so far plays on the core narrative strands in the first group. People targeted by these narratives might well be less likely to take the virus seriously and more likely to carry on with a semblance of normal life, thus continuing the pandemic. This unhelpful behaviour is exacerbated by the population spending more time at home and hence online, seeking out constant updates on this critical global threat.

In the next post, I will unravel the coronavirus disinformation narratives in more detail, providing data-driven examples. It’s critical to understand the why behind the seeding of this disinformation, so I’ll also discuss the various incentives that are driving it.

Social Proof And How To Game It

Every day, countless online sources compete for our attention. To avoid information overload and possible burnout, it’s essential to zero in on the important parts and sort them into categories that make sense. But how do we know which parts are important?

The human brain uses many shortcuts to understand the complex world around us. With social proof, we use the approval of others as a shortcut. We evaluate the significance and quality of a piece of information based on how many other people like it.

Social proof is part of the psychology of persuasion, used to great effect by marketers and PR specialists. We rely on it all the time when making consumer decisions. On Amazon, for example, customer reviews are a shortcut to guide us when choosing between a large number of possible products to purchase. A quick glance at the reviews allows us to avoid wasting time and energy conducting our own research on various products.

We also rely on social proof when judging the importance of a message on social media. Almost instinctively, we tend to evaluate a post with many likes or shares more favourably. We assume a lot of other people have already approved of it, so we’re happy to jump on the bandwagon.

But relying too heavily on these shortcuts may leave us vulnerable to the risk of them being manipulated.

How Social Proof is Gamed Online

Social engagement

It’s easy to convey social proof on social media. Liking, favouriting or upvoting is the quickest and most basic form of online social proof. When we see a post with lots of likes, we instinctively view that information as important. The act of sharing also taps into social proof: if other people believe a post is worth sharing, then it must be of value, so we may decide to share it too. Where disinformation is concerned, this is bad news, because the same shortcut helps false content spread.

Hackers break into Sony Music account and tweet falsely about death of Britney Spears

But online likes and shares are also pretty easy to game. On Twitter, for example, a few lines of code can produce a bot that can automatically favourite a tweet containing a particular keyword (which can be anything). A large network of automated accounts can then mass favourite (or mass retweet) any tweet, giving it a false appearance of significance, via artificial social proof.
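
To illustrate just how low the barrier is, here is a rough sketch using the Tweepy library against the Twitter/X v2 API. The credentials, query and access tier are placeholders and assumptions; current API pricing, rate limits and automation rules make running anything like this at scale both harder and a clear breach of platform policy. The point is only how little code the basic mechanic takes.

```python
# Sketch of a single-account 'auto-favourite' bot using Tweepy (Twitter/X API v2).
# Credentials are placeholders; doing this at scale violates platform rules.
import tweepy

client = tweepy.Client(
    consumer_key="...", consumer_secret="...",
    access_token="...", access_token_secret="...",
)

# Find recent tweets containing a chosen keyword and 'like' each one.
results = client.search_recent_tweets(query="some keyword", max_results=10,
                                      user_auth=True)
for tweet in results.data or []:
    client.like(tweet.id)
```

A network of accounts each running something like this is all it takes to manufacture the ‘mass favourite’ effect described above.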

Another way to convey social proof is via the user profile itself. Twitter is the most obvious example of this. We perceive users with more followers as being more important than those with fewer followers. The followers provide social proof, like an exclusive club.

On the other hand, if an account has a lot of friends (accounts it follows) but few followers, the social proof effect is diminished. Again, automated accounts can be used to game this. By mass-following a target account (and following one another to inflate their own numbers), they create the illusion that the account is popular.

Amazon reviews

Gaming social proof online isn’t only confined to social media. It happens across the web, in areas such as online shopping. Take Amazon. It has hundreds of options for anything we want to buy. So how do we know which ones are worth buying? We rely on the ratings and reviews from other buyers.

Ratings and reviews are a form of social proof for products, acting as an essential shortcut for navigating through the mass of available options. You can even filter search results by the number of rating stars each product has gained. Ratings directly affect the seller’s bottom line. For Amazon’s third-party sellers, ratings can make or break their business.

This random product has great ratings. It looks enticing, but how many of those ratings are actually genuine?

This is prime territory for gaming. And Amazon’s fake review economy is massive and growing.

Trending topics

Trending topics are another major area where social proof holds sway. The trending topics list shows whatever keywords or hashtags are being most widely tweeted at any point in time. Whenever big news breaks – such as a terrorist attack, plane crash or celebrity death – it usually appears immediately in Twitter’s trending topics, often before the mainstream media can produce coverage.

There’s a strong incentive to game trending topics. For individual tweeters, tweeting using a keyword or hashtag from the trending list makes their tweet more visible. It’s common to see Twitter accounts for brands performing ‘hashtag-jacking’ (or ‘trendjacking’), where the trending hashtag is shoehorned into the tweet to get it in front of a larger audience. Hashtag-jacking can be done skilfully, if the hashtag is relevant to the brand, but it tends to be the domain of spammers.

This is gaming trending topics on a relatively small scale. But things get more insidious when bot armies are involved. Here, a large number of artificial accounts, perhaps controlled by just one person (the ‘bot herder’), tweet coordinated messages around the same hashtag. Done properly, this can push the hashtag into the trending topics list, where human users will engage with it, giving it further mileage. It’s an effective way to mainline disinformation. The holy grail is to get the false story picked up in the mainstream media. With so many journalists using Twitter to find potential stories, this is not beyond the realms of possibility.

Google search results

Lastly, we’ll take a quick look at the effects of social proof in Google search results. When searching for something, most internet users don’t click beyond the first page of results. So the higher your link appears, the more influential it is likely to be. SEO experts make a lot of money getting client links onto the first page of Google results. What’s more, links that appear higher are considered more trustworthy.

Google’s ranking algorithms work in mysterious ways, and the company constantly adapts them to make them harder to manipulate. But it’s still possible to game Google search. For example, reputation management companies create a slew of positive content to help clients push negative articles off the first page of results.

This happens in politics too. In the run-up to the UK’s 2019 general election, people speculated that Boris Johnson’s campaign team may have gamed Google by seeding bizarre stories about him to push negative coverage down the search results. In 2016, extremist websites manipulated Google search to make hate-filled propaganda, such as Holocaust denial, rank highly. Although Google later fixed this, savvy disinformation purveyors can still find ways to leverage search to deceive vast chunks of the population.

Key takeaways

Social proof matters because it’s both a cornerstone of how we navigate the online environment and a prime target for manipulation. It’s not just confined to social media but used all over the internet, from Amazon reviews to Twitter trends. Even Google search results, which many people trust more than social media, can be gamed.

Reputation Risk: When Disinformation Attacks Brands

We hear a lot about disinformation in politics, but what happens when similar techniques are used to target brands? Coordinated disinformation campaigns can devastate a brand’s reputation. In 2019, 78% of US consumers said that disinformation would cause them to lose trust in a brand (according to research from New Knowledge).

As well as trust, disinformation can cause brands to lose business. In the same survey, 40% of consumers said they’d either boycott or stop doing business with the brand or switch over to a competitor. Combined with the risk of long-term reputation damage, these figures highlight just how vulnerable brands can be to disinformation.

Ideology and Pranks

Those who attack the reputation of brands do it for a variety of reasons. It may be an ideological crusade against a brand whose values clash with those of the attacker.

For example, in 2017 a 4chan user launched a disinformation campaign, known as ‘Dreamer Day’, which falsely offered illegal immigrants a 40% discount at their local Starbucks. The campaign had its own hashtag, #borderfreecoffee, plus a realistic-looking ad designed by other 4chan users. But the real objective was to lure immigrants out in public and then report them to the police. The 4chan trolls targeted Starbucks because of its perceived liberal brand values, which were antithetical to their own.

In the UK, an Indian restaurant lost half its revenue after it became the target of disinformation. In this case, attackers posted a fake article online claiming that the restaurant used human meat in its dishes. The story hit Twitter and Facebook where it quickly gained traction. The restaurant was subjected to abusive phone calls and online harassment, and lost many bookings as a result. The motive for the disinformation was unclear, but it was likely a prank, done just for the ‘lols’.

The Quest for Profit

Just as with political disinformation, some attackers target brands for the sake of ideology, while others do it for entertainment value. Still others do it purely for the sake of making money.

In politics, disinformation for profit has primarily been studied in the arena of online advertising. Here, people create websites full of junk political news and disinformation, then promote them on social media. They know that divisive, emotionally triggering content attracts more clicks and hence more money. By loading the sites with ads, their creators can make a big profit from disinformation.

Disinformation for profit can also be aimed at brands. In this situation, one of the most common ways to make money is via the stock market. Fake articles that gain enough traction on social media can crash a company’s stock price or send it skyrocketing, giving the attacker a prime opportunity to cash in on these artificial shifts.

And the problem is set to get worse. Many investment firms now rely on algorithms to scan news articles, press releases and social media for keywords to help them make investment decisions. Gaming this system is potentially easy to do, simply by creating fake accounts and fake content stuffed with keywords that get picked up by the algorithms.
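
As a toy illustration of why such keyword-driven systems are gameable, consider a naive scanner of the kind alluded to above. The word lists, scoring and company name below are invented purely to make the point; real systems are far more sophisticated, but the underlying exposure is similar: text the system trusts can be fabricated and stuffed with the right keywords.

```python
# Toy illustration of naive keyword-based sentiment scoring.
# Word lists and the example headline are invented for illustration only.

NEGATIVE = {"bankruptcy", "recall", "fraud", "lawsuit", "contamination"}
POSITIVE = {"record", "breakthrough", "acquisition", "beats", "growth"}

def crude_sentiment(headline: str) -> int:
    """Score a headline by counting keyword hits: >0 bullish, <0 bearish."""
    words = set(headline.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

# A fabricated press release stuffed with negative keywords scores as
# strongly bearish, even though nothing in it is true.
print(crude_sentiment("Fraud lawsuit and product recall hit ExampleCorp"))  # -3
```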

Disinformation can also be used to make profit by manipulating the online review ecosystem. This tactic could easily be leveraged by unscrupulous brands to get an unfair advantage over their competitors.

It’s easy and cheap to hire firms that specialise in writing convincing fake reviews, on sites like Amazon, TripAdvisor or even on Google itself. With so many consumers now relying on reviews to make buying decisions, a sustained campaign of fake ones can devastate a brand’s reputation.

How brands can protect themselves

In a world where disinformation is prevalent, brands don’t have to be sitting ducks. As a brand manager, you can’t stop someone launching a disinformation campaign against your brand, but you can certainly fight back.

The first step involves arming yourself with knowledge about the extent of the disinformation. That involves finding answers to key questions about what’s being said, where it’s being said, and what sort of people are saying it. Who is the campaign most likely to influence? Will it reach the eyes of your key target audiences, or is it confined to less relevant groups and/or geographic areas? Once you have the answers, you can start to craft a crisis plan or reputation management strategy.

But it’s easy to feel lost among the masses of data out there. That’s why brand managers not only need the right tools, but also the right experts to interpret the data and provide actionable insights.

In a follow-up post, I’ll present some case studies of brands targeted by disinformation and discuss in more detail how they fought back.