
Category: Social Media

What’s the Difference Between Disinformation and Misinformation?

What’s the difference between disinformation and misinformation?

(I get asked this question a lot, so I thought it was time to write about it.)

The difference between disinformation and misinformation lies in the presence of intent.  

The Difference Between Disinformation and Misinformation

Let’s unpack the two terms, with a couple of examples, to understand the difference between disinformation and misinformation.

Misinformation encompasses a wide range of misleading content, from rumour to satire to human error. The crucial point is that misinformation is spread without any intent to deceive.

Disinformation, on the other hand, has a more specific and sinister meaning: it is content created with the intention to deceive.

For example, a disinformation campaign could involve a doctored video: a political candidate’s gaffes spliced together to imply that the candidate has dementia.

Malinformation and Fake News

We also have malinformation. This is information based on reality, used maliciously to harm its target.

The target could be an individual, a country or an organisation. The 2016 leak of emails from Hillary Clinton’s presidential campaign is one example of malinformation.

Finally, the one everyone knows best: fake news.

Donald Trump popularised the term in 2016 (although Hillary Clinton was apparently the first to use it).

However, disinformation researchers prefer not to use it, as it is both politicised and imprecise. ‘Fake news’ unhelpfully conflates the three primary types of misleading information already mentioned.  

New digital technology means that misleading online content shifts and evolves rapidly. There are other subcategories of dubious online content worth knowing about. Here’s a rundown.

Other Misleading Information

Satire

Satire has existed for thousands of years. It highlights failings in society and politics using humour, irony, sarcasm or mockery.

Shakespeare often used satire in his work; in Hamlet, for example, he pointed out that royalty is not always fit to run a country. In the internet age, satirical news websites such as The Onion and NewsThump have become popular.

But it’s not always clear where satire ends and disinformation begins. It’s easy for political news websites to spread divisive narratives under the guise of satire. The limited attention spans of most online news audiences make it even easier.

In fact, many social media users will share partisan political content that triggers their tribal instincts. They may not notice that it comes from a website that has a ‘satire’ disclaimer. 

Manipulated Images  

Images have a more immediate impact than text, making them an effective tool for disinformation. Manipulating images is easy with Photoshop or freely available online tools, and the results can look very convincing.

Genuine images can be used in misleading contexts, such as during the Westminster Bridge terrorist attack of 2017.

In this case, a Twitter account later attributed to the Kremlin shared an image of a woman wearing a hijab. The tweet included a hostile caption claiming that the woman was ignoring injured victims.

The tweet was designed to trigger anti-Muslim sentiment, and alt-right influencers shared it widely. It garnered thousands of retweets and likes. But although the image was genuine, the context was not.

Deepfakes

As a new form of disinformation, deepfakes have attracted a lot of hype in the last couple of years. They are AI-generated synthetic media in which one person’s face or voice replaces that of another.

A deepfake can make it look like a person is saying something they’re not. This has many obvious use cases for disinformation. So far, porn has been the main area where deepfakes are being used. But in a handful of cases they’ve played a role in disinformation efforts.  

The immediate risk of deepfakes may have been overstated. But they do have the potential to upend the information environment. My biggest concern is that deepfakes could destroy the notion of absolute truth.

Once upon a time a taped voice recording would hold up in court (e.g. Watergate). In later years, CCTV footage became the absolute truth. But a world in which deepfakes are prevalent would no longer have an absolute truth. It would cast doubt on every possible form of recorded evidence.

Shallowfakes and Cheapfakes

In addition to deepfakes, we need to consider shallowfakes, also known as ‘cheapfakes’. People create these doctored videos without the help of fancy AI tools, using simple video editing software.

Shallowfakes are far more common than their AI-generated cousins. And social media platforms seem to overlook them. Facebook, for example, only bans doctored videos made with AI, i.e. deepfakes.

In February 2020, shallowfakes caused quite a stir. A video circulated on social media that spliced footage of Nancy Pelosi tearing up her copy of Donald Trump’s State of the Union speech (something she did only once, at the end of the address) into earlier moments of the speech, making it appear that she was reacting to individual honourees.

Memes

The word ‘meme’ has become synonymous with random humorous cultural images superimposed with chunky white text. Below, a small selection of my recent COVID-19-related favourites.

[Images: the ‘distracted boyfriend’ anti-vaxxer meme, Hannibal Lecter in a face mask, and a meme about failed plans for 2020]

In fact, the word ‘meme’ can refer to any piece of cultural content (video, image, song, article, etc) that social media users spread virally. (That typical chunky text/image combo that we all call a meme is actually known as an ‘image macro’).

Meme creators often use the image macro format to convey partisan political sentiments. Both sides of the political spectrum shared inflammatory memes during the 2016 US presidential campaign.

Alt-right supporters also used the format to spread highly offensive views, including racism and anti-Semitism, in ‘Pepe the Frog’ memes.

Image macro political memes are generally hyper-partisan in nature and play a role in perpetuating conflict between opposing groups (e.g. Democrats and Republicans).

[Image: an image macro meme spreading disinformation about Hillary Clinton]

Memes are totally devoid of any nuanced explanation. The viewer gets a quick hit of emotion that has a triggering effect. This taps into an oversimplified outrage that has become a core feature of today’s political life online. 

AI-Generated Voiceovers 

These are a bit weird and random. During a recent project for YouTube, I discovered some strange videos spreading false information about COVID-19.

The voiceover didn’t sound human at all. It was robotic and monotone, as if reading from a script. I don’t know whether these videos have an official name, but they’re perhaps something to keep an eye on.

From Disinformation to Misinformation (and back again?)

In closing, I’ve been thinking about this question: Does content shift from disinformation to misinformation as it travels across the internet? 

Malicious intent defines disinformation. Could a piece of content shift between definitions according to the intention of the most recent individual who shared it?  

For example, a person shares a narrative in their local Facebook group, claiming COVID-19 is curable with megadoses of bleach. It’s been debunked, of course, but (somehow) the person doesn’t know.

They innocently think they’re helping their network by passing on ‘valuable’ health information that might help cure the virus. They don’t intend to deceive. So shouldn’t we class it as misinformation?

Let’s say that same piece of content originated in a troll farm. Its creators intended it to deceive populations and compromise public health efforts. It started life as disinformation.

We could say the same for conspiracy theories. These are often spread by ‘true believers’ – genuinely invested in their mission to enlighten the ‘sheeple’ and save the world.

Are they being malicious if they believe it’s all true? Does that still count as disinformation? It would be easier to make this distinction if we could reliably trace the content back to its source. But that’s not always easy to do. 

Those who create disinformation know how to take advantage of natural human biases and triggers. In many cases, it’s enough to simply ‘seed’ harmful disinformation into the social media stream. Ordinary social media users will then do the heavy lifting. Therein lies much of the danger. 

Dopamine and Disinformation: How To Quit Feeding The Trolls

Dopamine and disinformation are intrinsically linked. In this article, I’ll explain how this works, and look at some ways to lessen the effects.

I used to spend a lot of time arguing with strangers on the Internet.

It normally went like this: I would post something political and the trolls would arrive. I’d get stuck in endless back-and-forths with them. It was a futile endeavour. But somehow I felt compelled to keep on engaging.

On Twitter or Facebook, new notifications would keep coming in, alerting me to yet another comment. Then I’d engage again. It wasted my time, raised my anxiety levels, and made me feel more aggressive.

I’d gotten stuck in a dopamine loop.

This happens to us a lot, for example when we can’t stop refreshing our social media feeds. But what exactly is dopamine and what does it do?

How dopamine creates addiction

Dopamine is a chemical created in the brain. It affects many functions, such as thinking, mood, attention and motivation. It also plays a critical role in triggering our desire to seek out and search. In short, dopamine makes us want to search out information.

Two parts are at play in the dopamine loop. There’s the ‘wanting’ part and the ‘liking’ part. The wanting part makes you take action (i.e. searching), while the liking part satisfies you and makes you stop searching. But the wanting part is stronger than the liking part. And that’s how we get trapped in endless scroll mode.

Another important part of the dopamine system relates directly to the issue of engagement with trolling comments.

The system is very sensitive to cues that indicate the possibility of a reward. On social media, notifications are the cues. They make the addiction even stronger. Anticipation is key. If we got a reward every time we logged in, it would weaken the addiction. It’s the uncertainty that gets us hooked.

So how did these behavioural psychology elements get incorporated into our Internet experience in the first place?

Engineered addiction and trolling

It all started with a researcher called BJ Fogg and his Persuasive Technology Lab at Stanford University. In September 2007, Fogg and his students began work on a formula that would change the world — and all of our daily lives.

They built Facebook apps using techniques from behavioural psychology, techniques that could engineer addiction, such as the dopamine loop that I described above. Fogg developed a formula, suggesting that people will act when three forces converge: motivation, trigger, and ability.

Let’s apply this formula to a hostile online interaction. The motivation is your desire to convince the other person that your opinion is right, or to get ‘one up’ on them; the trigger is seeing another notification on your social media app, indicating that your opponent has responded; and the ability is having your phone to hand, which lets you check right away and respond.
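To make the convergence idea concrete, here’s a toy sketch in Python. It’s my own illustration of the motivation, ability and trigger idea, not Fogg’s actual notation or code, and the numbers are invented:

```python
def fogg_behaviour(motivation: float, ability: float, trigger: bool,
                   threshold: float = 0.5) -> bool:
    """Toy model: a behaviour happens when motivation and ability are
    high enough AND a trigger (prompt) is present."""
    return trigger and (motivation * ability) >= threshold

# The troll scenario from above: a strong urge to win the argument,
# phone already in hand, and a fresh notification as the trigger.
replies_to_troll = fogg_behaviour(motivation=0.9, ability=0.9, trigger=True)
print(replies_to_troll)  # True, so you fire off another reply
```

Remove any one of the three, for instance by switching off notifications so the trigger never arrives, and the behaviour becomes far less likely.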

These techniques tap directly into the workings of the human brain, making them extremely powerful.

So why do social media platforms leverage such addictive and manipulative design practices?

Simple: their profitability depends on it.

The ad-supported business model means that users spending more time on the app leads to more profit for the company. All that time you spend arguing with trolls on Facebook is making the platform more valuable to potential advertisers.

Dopamine and disinformation

Arguing online also relates to dopamine and disinformation. It can make us more susceptible to the latter. The dopamine loop gives a powerful tool to those who seek to divide us. It perpetuates a sense of an adversarial environment and keeps us always on the attack.

When we divide ourselves into tribes and adopt a hostile attitude towards a perceived outgroup, we risk becoming more susceptible to harmful disinformation. We are more likely to share content that is cognitively resonant with our existing beliefs and which reflects the attitudes of our tribe.

The dopamine loop also affects our interactions with agreeable content. When we post something that our tribe members approve of, we’ll receive dopamine hits via our notifications of likes and encouraging comments. That boosts our self-esteem and keeps us coming back for more.

So what can we do to fix the trolling problem and reduce our susceptibility to disinformation?

Breaking out of the loop

Short-term practical solutions mainly involve adapting our devices to help break the dopamine loop. For example, we could make our phones less appealing by changing the screen to grayscale mode, or switching off all notifications.

But we can also tackle adversarial online behaviour in another way.

‘Don’t feed the trolls’ has become an Internet cliché. But it starts to make sense when thinking about the dopamine loop and anticipatory reward.

Everyone who posts online is looking for a response. They want to perpetuate the dopamine loop. If we can maintain our self control by denying them that response, then we can break the cycle.

I’ve managed to break my own addiction to engaging with trolls. On Twitter, I simply mute the conversation. I can’t see the person anymore, and I receive no notifications of any of their comments. It makes the troll invisible to me, breaking the dopamine loop and allowing my brain to focus on other things.

On Facebook, I simply turn off notifications for a particular post. This has the same effect as muting on Twitter. Both platforms also offer a blocking option. I don’t normally use this because it gives trolls the satisfaction of knowing they’ve got a response. Muting is better, because it means they’re left wondering if I’m ignoring them. They just keep on yelling into the void.

Battleground or community

If we could all break the cycle and quit feeding the trolls, then adversarial disinformation and influence ops could lose much of their power. The online environment would feel like more of a community instead of a battleground. In turn, this may help reduce polarisation.

But it has to be done en masse. A handful of people breaking the cycle won’t be enough to change the overall environment. As social media is designed to be addictive, a wider intervention would be necessary to encourage people to do this.

Of course, the social media platforms have all the power. They could redesign their structures to destroy these dopamine loops and disincentivise disinformation. But their ad-driven business model means they have no incentive to do so.

Nevertheless, we can still improve our individual online experiences by taking steps to break the dopamine and disinformation cycle. Part of doing this is to disengage with users who aim to trap us into never-ending adversarial debates based around polarising topics.

Analysing Trump’s Medical Disinformation on Facebook

US president Donald Trump shocked the world this week with his latest piece of medical disinformation.

Trump claimed that injecting disinfectant into the body could be an ‘interesting’ way to cure COVID-19.

He later tried to back-pedal, claiming he was being sarcastic. But that wasn’t how most of the world took it.

Dangers of medical disinformation

The mainstream media and the public widely lambasted this dangerous medical disinformation.

Amid the furore over Trump’s remarks, a major disinfectant firm issued a statement urging the public not to inject or drink any of their products.

However, members of pro-Trump Facebook groups dedicated to conspiracy theories displayed quite the opposite reaction. 

I examined some of these groups to provide comment for an article in CodaStory. I’d previously gathered this list of groups because of their strong focus on various ‘corona disinformation conspiracies’.

These include 5G causing the virus, the virus being a US bioweapon, and Bill Gates having orchestrated the ‘virus hoax’ as part of an ambition to enforce a worldwide vaccine programme.

Many of the groups also centred around the Qanon conspiracy theory.

Pro-Trump Facebook reactions

You might expect the suggestion of injecting bleach to be a step too far even for these largely pro-Trump groups. Not so. 

In my initial observation of the groups, I noticed three distinct ways in which the members attempted to account for Trump’s bizarre medical disinformation.

First, that Trump was just ‘playing the media’: anyone who believed he meant what he said must be stupid.

Commenters also attributed all the negative media coverage to ‘yet another’ MSM (mainstream media), liberal, or Democrat attempt to smear Trump.

Secondly, some commenters claimed that the media had quoted Trump ‘out of context’. According to them, he was speaking ‘more generally’ about possible ways to treat COVID-19.

Others highlighted a fact check article from far-right news outlet Breitbart. But no-one acknowledged the videos of Trump making these claims for everyone to see and hear. 

The third claim ties in more closely with another strand of COVID-19 medical disinformation: ‘miracle cures’. One commenter claimed that Trump must have been referring to UV light therapy and ozone therapy, which already exist.

Things got more interesting when the commenter drew links between the medical disinformation about bleach and the popular narrative of ‘Vitamin C as miracle cure’.

They claimed that taking Vitamin C causes hydrogen peroxide to build up in the body; because hydrogen peroxide has a disinfectant effect, Trump’s comments supposedly had a basis in medical fact.

Rationalising medical disinformation

These three counter-narratives about Trump’s medical disinformation all attempt to rationalise an influential figure making a dangerous and irresponsible remark.

Tribal attitudes drive many of these rationalisations, such as the claims that the media purposefully misinterpreted Trump’s comments as part of a ‘libs’ or ‘Dems’ smear attack. Once again, this reinforces the existing divide between populist pro-Trump narratives and the mainstream.

The question remains: How many of these Facebook group members are genuine American citizens? Facebook itself is the only entity that could properly attribute the accounts. And it doesn’t seem to be giving much away.

I suspect group members are a mix of genuine Trump supporters and astroturfers working to stir up tribal hatred of the ‘other side’.

Tribal attitudes can be dangerous, particularly in relation to public health. People in the pro-Trump tribe are more likely to challenge messages from the perceived ‘outgroup’ (‘experts’ and the ‘MSM’) such as critical public health advice from the WHO.

A similar dynamic has fuelled recent anti-lockdown protests across the US, which may already have spread the virus further and undermined containment efforts across the country. Astroturfing was certainly a factor there; there’s no reason why it couldn’t be influencing these groups too.

Tribalism In The Time Of Coronavirus

As I write this, the world has descended into a major crisis, with effects more far-reaching than anything I’ve experienced in my lifetime. A powerful virus has swept onto the scene and is now ripping its way through the world. Barely any country has been spared.

Here in the UK, the coronavirus crisis is getting worse by the day. But merely observing the city streets on this sunny spring Sunday would give no indication of the gravity of the situation. Indeed, some UK tourist spots, notably Snowdon, experienced their ‘busiest day in living memory’. That’s quite something at a time when a highly contagious virus is on the loose.

In contrast, the streets of Paris, Lisbon and Barcelona are deserted. Most EU countries have issued a decisive response, putting their populations under strict lockdown to try and curb the spread of the virus. The UK government hasn’t followed suit.

Britain is saddled with unfortunate leadership in such a time of crisis. Messages from central government have been unclear and have arrived far too late. Many people have died. Amid the frenzied warnings from other countries, tribalism, rooted in the impulses that drove Brexit, still bisects British society, even colouring how people respond to what is now a matter of life and death.

Brexit tribalism could be seen as a barometer for who will approve or disapprove of Boris Johnson’s handling of the coronavirus situation. No scientific study has yet been conducted to prove or disprove this, but research from Cambridge has shown that Leave (and Trump) voters have a strong tendency to believe conspiracy theories.

So if I may hypothesise for a moment, it would go as follows.

Those who believe Johnson is doing well and see no need for self-isolation are more likely to be Leave voters. Those who believe Johnson is getting it wrong and that we should follow most of the EU (and the world) into lockdown are more likely to be Remain voters.

I can’t help but wonder if these divided attitudes are linked to the government’s aggressively anti-EU narrative. Could it possibly be that our leaders are reluctant to implement lockdown because it would mean them falling into line with the EU? The British government can’t possibly be seen to do that. On the contrary, it must do the exact opposite. After all, there’s a voter base to keep happy.

This tribal stance has filtered down to the population. People’s cavalier real-life behaviour at a critical juncture risks the health and safety of us all.

We’ve gone beyond Brexit concerns now. Freedom of movement is no longer the most important thing at stake. Continued tribal attitudes in the UK could now lead to significant numbers of deaths. The reckoning has arrived. No matter what side of the political spectrum we’re on, we must ensure that tribalism does not cloud our actions on tackling the virus, as the New European so rightly points out.

There’s another factor influencing public opinion around coronavirus: online disinformation. It’s been a key part of turbocharging existing tribal divisions.

Based on my research so far, I’ve seen the following positions solidifying into recurring narratives. Many come from sources originating in the United States, but the shared language and overlapping ideologies mean they can mostly be considered UK-relevant too.

Narratives primarily from conservative/right-wing/pro-Leave sources:

  • The coronavirus is a hoax used as a smokescreen for elites to take control of society
  • It’s no worse than the flu, so there’s no need to believe WHO or UN advice (in fact we shouldn’t trust them because they may be part of the elite conspiracy)
  • Social distancing is unnecessary / too extreme
  • China is to blame for all this. To quote Trump, coronavirus is ‘the Chinese virus’ 

Narratives primarily from liberal/left-wing/centrist/pro-Remain sources:

  • The coronavirus is real, serious, and affects everyone 
  • It can’t be compared to flu
  • We should trust advice from WHO/UN and other legitimate experts
  • Social distancing and possibly lockdown is necessary to save lives across the wider population. 

Most of the disinformation that I’ve observed so far plays on the core narrative strands in the first group. People targeted by these narratives might well be less likely to take the virus seriously and more likely to carry on with a semblance of normal life, thus continuing the pandemic. This unhelpful behaviour is exacerbated by the population spending more time at home and hence online, seeking out constant updates on this critical global threat.

In the next post, I will unravel the coronavirus disinformation narratives in more detail, providing data-driven examples. It’s critical to understand the why behind the seeding of this disinformation, so I’ll also discuss the various incentives that are driving it.

Social Proof and Online Manipulation

What is social proof?

Every day, countless online sources compete for our attention. The concept of social proof can help us cut through the noise. In this post, I’ll explain how it works, and discuss how it can be gamed.

To avoid information overload and possible burnout, it’s essential to zero in on the important parts and sort them into categories that make sense. But how do we know which parts are important?

The human brain uses many shortcuts to understand the complex world around us. In social proof, we use the approval of others as a shortcut. We evaluate the significance and quality of a piece of information based on how many other people like it.

Social proof is part of the psychology of persuasion, used to great effect by marketers and PR specialists. We rely on it all the time when making consumer decisions.

On Amazon, for example, customer reviews are a shortcut to guide us when choosing between a large number of possible products to purchase.

A quick glance at the reviews allows us to avoid wasting time and energy conducting our own research on various products.

We also rely on social proof when judging the importance of a message on social media.

Almost instinctively, we tend to evaluate a post with many likes or shares more favourably. We assume a lot of other people have already approved of it, so we’re happy to jump on the bandwagon.

But relying too heavily on these shortcuts leaves us vulnerable when they are manipulated.

Manipulating Social Proof

Social engagement

It’s easy to convey social proof on social media. Liking, favouriting or upvoting is the quickest and most basic form of online social proof.

When we see a post with lots of likes, we instinctively view that information as important.

The act of online sharing also taps into social proof. If other people believe a post is worth sharing, then it must be of value. We may decide to share it too. This can easily lead us into the trap of sharing disinformation.

[Image: hackers break into Sony Music’s Twitter account and falsely tweet that Britney Spears has died, an example of social proof lending credibility to a false tweet]

But online likes and shares are also pretty easy to game. On Twitter, for example, a few lines of code can produce a bot that automatically favourites any tweet containing a particular keyword (which can be anything).
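To show how little effort that takes, here’s a minimal sketch using the Tweepy library. The credentials, keyword and limits are placeholders, not a real setup:

```python
import tweepy

# Placeholder credentials; real keys would come from a Twitter developer account.
client = tweepy.Client(
    bearer_token="BEARER_TOKEN",
    consumer_key="API_KEY",
    consumer_secret="API_SECRET",
    access_token="ACCESS_TOKEN",
    access_token_secret="ACCESS_SECRET",
)

# Find recent tweets containing an arbitrary keyword and 'like' each one.
results = client.search_recent_tweets(query="some keyword", max_results=10)
for tweet in results.data or []:
    client.like(tweet.id)
```

The point is the triviality of it: scaled across a network of accounts, this kind of script produces the artificial social proof described next.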

A large network of automated accounts can then mass favourite (or mass retweet) any tweet, giving it a false appearance of significance, via artificial social proof.

Another way to convey social proof is via the user profile itself. Twitter is the most obvious example of this. We perceive users with more followers as being more important than those with fewer followers. The followers provide social proof, like an exclusive club.

In contrast, when an account has many friends (accounts it has followed) but few followers, it diminishes the effect of social proof. Again, you can use automated accounts to game this.

By having a network of accounts mass-follow a target account, and follow each other back, it is possible to create the illusion that the account is popular.

Amazon reviews

Social media isn’t the only place where gaming social proof happens. It also happens across the web, in areas such as online shopping. Take Amazon, which has hundreds of options for anything we want to buy.

How do we know which ones are worth buying? We rely on the ratings and reviews from other buyers.

Ratings and reviews are a form of social proof for products, acting as an essential shortcut for navigating through the mass of available options.

You can even filter search results by the number of rating stars each product has gained. Ratings directly affect the seller’s bottom line. For Amazon’s third-party sellers, ratings can make or break their business.

[Image: a random Amazon product with glowing ratings. It looks enticing, but how many of the reviews are actually genuine?]

This is prime territory for gaming. And Amazon’s fake review economy is massive and growing.

Trending topics

Trending topics are another major area where social proof holds sway. The trending topics list shows whatever keywords or hashtags people are tweeting most widely at any point in time.

Whenever big news breaks – such as a terrorist attack, plane crash or celebrity death – it usually appears immediately in Twitter’s trending topics, often before the mainstream media can produce coverage.

There’s a strong incentive to game trending topics. For individual tweeters, tweeting using a keyword or hashtag from the trending list makes their tweet more visible.

It’s common to see Twitter accounts for brands ‘hashtag-jacking’ (or ‘trend-jacking’). This is where brand managers (or spammers) shoehorn the trending hashtag into the tweet, to get their brand in front of a larger audience.

Hashtag-jacking can be done skilfully, if the hashtag is relevant to the brand, but it tends to be the domain of spammers.

This is gaming trending topics on a relatively small scale. But things become more insidious when you take into account bot armies. In these cases, a large number of artificial accounts, perhaps controlled by just one person (the ‘bot herder’), tweet coordinated messages around the same hashtag.

Done properly, this can push the hashtag into the trending topics list, where human users will engage with it, giving it further mileage. It’s an effective way to mainline disinformation into the public arena.

The ultimate goal of gaming social proof in tweets is for mainstream media to pick up the false story. With so many journalists using Twitter to find potential stories, this is not so difficult to achieve.

Google search results

Lastly, we’ll take a quick look at the effects of social proof in Google search results. When searching for something, most internet users don’t click beyond the first page of results.

So the higher your link shows up, the more likely it is to be influential. SEO experts make a lot of money out of getting client links onto the first page of Google results. What’s more, we are more likely to trust links that show up higher in search results.

Google’s PageRank algorithms work in mysterious ways, and the company constantly adapts them to make them harder to manipulate. But it’s still possible to game Google search.
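To see why flooding the web with favourable pages can work, here’s a toy version of the original PageRank idea. It’s a simplified illustration, not Google’s production system, and the page names are made up:

```python
import numpy as np

def pagerank(links, damping=0.85, iterations=50):
    """Toy power-iteration PageRank over a dict {page: [pages it links to]}."""
    pages = list(links)
    n = len(pages)
    idx = {p: i for i, p in enumerate(pages)}
    M = np.zeros((n, n))  # M[j, i] = chance of hopping from page i to page j
    for page, outlinks in links.items():
        if outlinks:
            for target in outlinks:
                M[idx[target], idx[page]] = 1.0 / len(outlinks)
        else:
            M[:, idx[page]] = 1.0 / n  # a page with no outlinks jumps anywhere
    rank = np.full(n, 1.0 / n)
    for _ in range(iterations):
        rank = (1 - damping) / n + damping * M @ rank
    return dict(zip(pages, rank.round(3)))

# A made-up mini-web: three friendly blogs all link to the positive story,
# so it outranks the negative story even though both sit on the same site.
web = {
    "positive-story": ["home"],
    "negative-story": ["home"],
    "blog-a": ["positive-story"],
    "blog-b": ["positive-story"],
    "blog-c": ["positive-story"],
    "home": ["positive-story", "negative-story"],
}
print(pagerank(web))
```

The more pages that point at a result, the higher it climbs; that link structure is the lever being pulled in the examples below.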

For example, reputation management companies create a large amount of positive content, to help clients push negative articles off the first page of Google.

This happens in politics too. In the run-up to the UK’s general election of 2019, people speculated that Boris Johnson’s campaign team may have gamed Google by seeding bizarre stories about him to make negative coverage less prominent in searches.

In 2016, extremist websites manipulated Google search to make hate-filled propaganda, such as Holocaust denial, rank highly in results. Although Google later fixed this, savvy disinformation purveyors can still find ways to leverage its power to deceive vast swathes of the population.

Social proof: Key takeaways

We should not ignore social proof. It is both a cornerstone of how we navigate the online environment and a prime factor for manipulation.

Social proof is not confined to social media. People rely on it all over the internet, from Amazon reviews to Twitter trends. Even Google search results, which many people trust more than social media, can be gamed.

The best way forward is to keep the concept of social proof in mind when engaging with online content. In particular, think about who might have an incentive to make the article or post that you’re reading look popular.