Month: February 2020

Social Proof and How to Game It

Every day, countless online sources compete for our attention. To avoid information overload and possible burnout, it’s essential to zero in on the important parts and sort them into categories that make sense. But how do we know which parts are important?

The human brain uses many shortcuts to understand the complex world around us. With social proof, we use the approval of others as a shortcut. We evaluate the significance and quality of a piece of information based on how many other people like it.

Social proof is part of the psychology of persuasion, used to great effect by marketers and PR specialists. We rely on it all the time when making consumer decisions. On Amazon, for example, customer reviews are a shortcut to guide us when choosing between a large number of possible products to purchase. A quick glance at the reviews allows us to avoid wasting time and energy conducting our own research on various products.

We also rely on social proof when judging the importance of a message on social media. Almost instinctively, we tend to evaluate a post with many likes or shares more favourably. We assume a lot of other people have already approved of it, so we’re happy to jump on the bandwagon.

But relying too heavily on these shortcuts leaves us vulnerable when they’re manipulated.

How Social Proof is Gamed Online

Social engagement

It’s easy to convey social proof on social media. Liking, favouriting or upvoting is the quickest and most basic form of online social proof. When we see a post with lots of likes, we instinctively view that information as important. The act of online sharing also taps into social proof: if other people believe a post is worth sharing, then it must be of value, so we may decide to share it too. This dynamic plays straight into the hands of disinformation.

Hackers break into Sony Music account and tweet falsely about death of Britney Spears

But online likes and shares are also pretty easy to game. On Twitter, for example, a few lines of code can produce a bot that automatically favourites any tweet containing a particular keyword (which can be anything). A large network of automated accounts can then mass-favourite (or mass-retweet) any tweet, giving it a false appearance of significance via artificial social proof.
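As a rough illustration of how little effort this takes, here’s a minimal sketch of such a bot, assuming Python and Tweepy’s older v3-style interface. The credentials and keyword are placeholders, and automation like this violates Twitter’s rules; the point is simply how few lines it requires.

    import tweepy

    # Hypothetical credentials; assumes Tweepy's older v3-style interface.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)

    KEYWORD = "#anyhashtag"  # the keyword really can be anything

    # Favourite (and retweet) recent tweets containing the keyword.
    for tweet in api.search(q=KEYWORD, count=50):
        try:
            api.create_favorite(tweet.id)  # fake a 'like'
            api.retweet(tweet.id)          # amplify it further
        except tweepy.TweepError:
            pass  # already favourited, rate-limited, etc.

Run across a network of accounts, the same handful of lines produces the mass favouriting and retweeting described above.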

Another way to convey social proof is via the user profile itself. Twitter is the most obvious example of this. We perceive users with more followers as being more important than those with fewer followers. The followers provide social proof, like an exclusive club.

On the other hand, if an account follows a lot of other accounts (its ‘friends’) but has few followers of its own, the social proof effect is diminished. Again, automated accounts can be used to game this. By mass-following an account, and following one another in return, they create the illusion that the account is popular.
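One crude way to put a number on this, purely as an illustrative heuristic of my own rather than any established rule, is the follower-to-friend ratio:

    # Toy heuristic: followers per friend. A high ratio suggests genuine pull;
    # a very low one suggests follow-back farming. Numbers are assumptions.
    def follower_friend_ratio(followers_count, friends_count):
        return followers_count / max(friends_count, 1)

    print(follower_friend_ratio(120_000, 300))  # 400.0  - looks genuinely influential
    print(follower_friend_ratio(85, 4_900))     # ~0.02  - likely part of a follow-back ring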

Amazon reviews

Gaming social proof online isn’t only confined to social media. It happens across the web, in areas such as online shopping. Take Amazon. It has hundreds of options for anything we want to buy. So how do we know which ones are worth buying? We rely on the ratings and reviews from other buyers.

Ratings and reviews are a form of social proof for products, acting as an essential shortcut for navigating through the mass of available options. You can even filter search results by the number of rating stars each product has gained. Ratings directly affect the seller’s bottom line. For Amazon’s third-party sellers, ratings can make or break their business.

This random product has great ratings. It looks enticing, but how many are actually genuine?

This is prime territory for gaming. And Amazon’s fake review economy is massive and growing.

Trending topics

Trending topics are another major area where social proof holds sway. The trending topics list shows whatever keywords or hashtags are being most widely tweeted at any point in time. Whenever big news breaks – such as a terrorist attack, plane crash or celebrity death – it usually appears immediately in Twitter’s trending topics, often before the mainstream media can produce coverage.

There’s a strong incentive to game trending topics. For individual tweeters, tweeting using a keyword or hashtag from the trending list makes their tweet more visible. It’s common to see Twitter accounts for brands performing ‘hashtag-jacking’ (or ‘trendjacking’), where the trending hashtag is shoehorned into the tweet to get it in front of a larger audience. Hashtag-jacking can be done skilfully, if the hashtag is relevant to the brand, but it tends to be the domain of spammers.

This is gaming trending topics on a relatively small scale. But things get more insidious when bot armies are involved. Here, a large number of artificial accounts, perhaps controlled by just one person (the ‘bot herder’), tweet coordinated messages around the same hashtag. Done properly, this can push the hashtag into the trending topics list, where human users will engage with it, giving it further mileage. It’s an effective way to mainline disinformation. The holy grail is to get the false story picked up in the mainstream media. With so many journalists using Twitter to find potential stories, this is not beyond the realms of possibility.
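Twitter’s real trending algorithm is proprietary, but a toy model shows why a coordinated burst works: trends reward a sudden spike in a hashtag’s mention rate relative to its usual baseline, and a bot herder can manufacture exactly that. The scoring formula below is an illustrative assumption, not Twitter’s actual logic.

    # Toy 'trending' score: how sharply the last hour's mentions exceed the
    # hashtag's usual hourly rate. Formula and numbers are illustrative only.
    def trending_score(mentions_last_hour, baseline_per_hour):
        return mentions_last_hour / max(baseline_per_hour, 1.0)

    # An obscure hashtag normally tweeted ~5 times an hour...
    print(trending_score(8, 5))      # 1.6   - nothing unusual
    # ...suddenly tweeted 2,000 times in an hour by a coordinated bot network.
    print(trending_score(2000, 5))   # 400.0 - looks like breaking news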

Google search results

Lastly, we’ll take a quick look at the effects of social proof in Google search results. When searching for something, most internet users don’t click beyond the first page of results. So the higher your link appears, the more influence it has. SEO experts make a lot of money getting client links onto the first page of Google results. What’s more, links that appear higher are considered more trustworthy.

Google’s PageRank algorithms work in mysterious ways, and the company constantly adapts them to make them harder to manipulate. But it’s still possible to game Google search. For example, reputation management companies create a slew of positive content to help clients push negative articles off the first page of Google.
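Google’s current ranking system is far more complex and largely secret, but the original published PageRank algorithm gives a feel for what these firms are trying to influence: a page’s score depends on how many pages link to it, weighted by those pages’ own scores. Here is a minimal power-iteration sketch over a hypothetical link graph (the page names are invented for illustration):

    # Minimal power-iteration PageRank over a tiny link graph. This is the
    # classic published algorithm, not Google's current ranking system.
    def pagerank(links, damping=0.85, iters=50):
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iters):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if not outlinks:  # dangling page: spread its rank evenly
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
                else:
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += share
            rank = new_rank
        return rank

    # Hypothetical graph: a slew of friendly pages all link to the positive story,
    # lifting it above the negative one.
    graph = {
        "negative-news-story": ["brand-homepage"],
        "positive-blog-post": ["brand-homepage"],
        "brand-homepage": ["positive-blog-post"],
        "friendly-site-1": ["positive-blog-post"],
        "friendly-site-2": ["positive-blog-post"],
    }
    for page, score in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
        print(f"{score:.3f}  {page}")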

This happens in politics too. In the run-up to the UK’s general election of 2019, people speculated that Boris Johnson’s campaign team may have gamed Google by seeding bizarre stories about him to make negative coverage less prominent in searches. In 2016, extremist websites manipulated Google search to make their hate-filled propaganda, such as Holocaust denial, rank highly in search results. Although Google later fixed this, savvy disinformation purveyors can still find ways to leverage its power to deceive vast chunks of the population.

Key takeaways

Social proof matters because it’s both a cornerstone of how we navigate the online environment and a prime target for manipulation. It’s not just confined to social media but used all over the internet, from Amazon reviews to Twitter trends. Even Google search results, which many people trust more than social media, can be gamed.

Reputation Risk: When Disinformation Attacks Brands

We hear a lot about disinformation in politics, but what happens when similar techniques are used to target brands? Coordinated disinformation campaigns can devastate a brand’s reputation. In 2019, 78% of US consumers said that disinformation would cause them to lose trust in a brand (according to research from New Knowledge).

As well as trust, disinformation can cost brands business. In the same survey, 40% of consumers said they’d boycott the brand, stop doing business with it, or switch to a competitor. Combined with the risk of long-term reputational damage, these figures highlight just how vulnerable brands can be to disinformation.

Ideology and Pranks

Those who attack the reputation of brands do it for a variety of reasons. It may be an ideological crusade against a brand whose values clash with those of the attacker.

For example, in 2017, a 4chan user launched a disinformation campaign, known as ‘Dreamer Day’, which offered illegal immigrants a 40% discount at their local Starbucks. The campaign had its own hashtag, #borderfreecoffee, plus a realistic-looking ad designed by other 4chan users. But the campaign’s real objective was simply to lure immigrants out in public and then report them to the police. The 4chan trolls chose to target Starbucks because of its perceived liberal brand values, which were antithetical to their own.

In the UK, an Indian restaurant lost half its revenue after it became the target of disinformation. In this case, attackers posted a fake article online claiming that the restaurant used human meat in its dishes. The story hit Twitter and Facebook where it quickly gained traction. The restaurant was subjected to abusive phone calls and online harassment, and lost many bookings as a result. The motive for the disinformation was unclear, but it was likely a prank, done just for the ‘lols’.

The Quest for Profit

Just as with political disinformation, some attackers target brands out of ideology, while others do it for entertainment value. Still others do it purely to make money.

In politics, disinformation for profit has primarily been studied in the arena of online advertising. Here, people create websites full of junk political news and disinformation, then promote them on social media. They know that divisive, emotionally triggering content attracts more clicks and hence more money. By loading the sites with ads, their creators can make a big profit from disinformation.

Disinformation for profit can also be aimed at brands. In this situation, one of the most common ways to make money is via the stock market. Fake articles that are given enough traction on social media can crash or skyrocket the value of a company’s stocks. Manipulating the process gives the attacker a prime opportunity to cash in on these artificial shifts.

And the problem is set to get worse. Many investment firms now rely on algorithms to scan news articles, press releases and social media for keywords to help them make investment decisions. Gaming this system is potentially easy to do, simply by creating fake accounts and fake content stuffed with keywords that get picked up by the algorithms.
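As a rough sketch of why this is gameable, consider a scanner that simply counts bullish and bearish keywords. This is a deliberately naive illustration of my own; real trading systems are far more sophisticated, but the underlying weakness is the same: stuff a fake press release with the right phrases and the signal moves.

    # Naive keyword-driven news scanner (purely illustrative; not how any real
    # trading firm's system works). Phrase lists are invented for the example.
    BULLISH = {"record profits", "beats expectations", "acquisition", "breakthrough"}
    BEARISH = {"recall", "lawsuit", "fraud", "data breach", "resigns"}

    def sentiment_signal(headline):
        """+1 for each bullish phrase found, -1 for each bearish phrase."""
        text = headline.lower()
        return sum(p in text for p in BULLISH) - sum(p in text for p in BEARISH)

    # A fabricated press release stuffed with the right keywords swings the signal.
    print(sentiment_signal("ACME beats expectations with breakthrough acquisition"))  # +3
    print(sentiment_signal("ACME faces lawsuit after massive data breach"))           # -2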

Disinformation can also be used to make profit by manipulating the online review ecosystem. This tactic could easily be leveraged by unscrupulous brands to get an unfair advantage over their competitors.

It’s easy and cheap to hire firms that specialise in writing convincing fake reviews, on sites like Amazon, TripAdvisor or even on Google itself. With so many consumers now relying on reviews to make buying decisions, a sustained campaign of fake ones can devastate a brand’s reputation.

How brands can protect themselves

In a world where disinformation is prevalent, brands don’t have to be sitting ducks. As a brand manager, you can’t stop someone launching a disinformation campaign against your brand, but you can certainly fight back.

The first step involves arming yourself with knowledge about the extent of the disinformation. That involves finding answers to key questions about what’s being said, where it’s being said, and what sort of people are saying it. Who is the campaign most likely to influence? Will it reach the eyes of your key target audiences, or is it confined to less relevant groups and/or geographic areas? Once you have the answers, you can start to craft a crisis plan or reputation management strategy.

But it’s easy to feel lost among the masses of data out there. That’s why brand managers not only need the right tools, but also the right experts to interpret the data and provide actionable insights.

In a follow-up post, I’ll present case studies of brands targeted by disinformation and discuss in more detail how they fought back.

5 Ways Our Minds Make Us Susceptible to Online Disinformation

In our fast-moving online world, even the most aware of us can be taken in by disinformation. As humans, our minds work in certain ways that can leave us vulnerable to deception – and no-one is immune.

Our perceptions are not as reliable as we might like to imagine, and the online environment amplifies these flaws. In this post, I’ll discuss five important psychological traits that affect how we process information and subsequently affect our behaviour online.

Confirmation Bias

Search engines give us access to all the world’s information simply by typing a few words into a search bar.

But thanks to confirmation bias, people tend to search only for information that reinforces their beliefs. Even if what they find is disinformation, confirmation bias makes them less likely to question its veracity.

For example, take someone who already dislikes Donald Trump. They might search Google for “why is Trump still president?” This produces a slew of articles critical of Trump, feeding into the person’s existing beliefs. It’s a vulnerable moment during which disinformation can easily permeate.

Social Proof

The term ‘social proof’ was first used by Robert Cialdini in his seminal book on persuasion, Influence. It’s a way of building trust in a person, a product or a message by demonstrating that many people approve of it. The bandwagon effect is the motivating force behind social proof: if something seems popular, people feel compelled to join in.

Social proof is especially important in today’s environment of information overload. With so many options available to us, we need a shortcut to help us cut through the noise and determine which ones to trust.

For marketers, social proof is an essential tool. But it’s also a powerful weapon in the arsenal of disinformation. Bots play a major role in building social proof around certain messages, including false ones. Liking, sharing and replying to these messages creates an illusion of widespread approval, which attracts more people to trust them. This may snowball, causing the message to go viral.
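A toy simulation, my own illustrative model rather than anything from the research literature, shows how this snowball can work: if each person’s chance of sharing rises with the number of shares they can see, a post seeded with a block of fake engagement ends up far bigger than an identical post left to grow organically.

    import random

    # Toy bandwagon simulation: each user's probability of sharing grows with the
    # share count they observe. All parameters are illustrative assumptions.
    def simulate_shares(seed_shares, audience=10_000, base_p=0.001):
        shares = seed_shares
        for _ in range(audience):
            p = min(1.0, base_p * (1 + shares / 100))  # more visible proof -> more sharing
            if random.random() < p:
                shares += 1
        return shares

    random.seed(42)
    print("organic post:   ", simulate_shares(seed_shares=0))
    print("bot-seeded post:", simulate_shares(seed_shares=500))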

There’s a lot more to say about the role of social proof in disinformation. I’ll explore it in more detail in a follow-up post. For now, remember that online popularity can easily be faked, and isn’t always a reliable indicator of grassroots public opinion.

False Consensus Effect

We all like to think that our beliefs, preferences, values and habits are widely shared, even though this may not be so. This overestimation is known as the false consensus effect. It’s rooted in our self-esteem and our desire to conform, to fit in as part of a social group.

Online, the false consensus effect is amplified in two main ways: 1) by algorithms that show us opinions reflecting our own (the filter bubble effect), and 2) by our habit of engaging only with others who share our views (the echo chamber effect).

Disinformation that taps into the false consensus effect can find a fertile environment to take root, grow and mutate. Social media helps this happen. No matter how wedded you are to a certain view, always keep in mind that other people might think very differently.

Tribalism

Humans are social animals, so gaining the approval of a like-minded group is important for boosting our self-esteem. We reinforce that self-esteem by behaving in ways that favour our own group (known as the in-group).

For example, we might post on social media about the positive traits of our in-group. This is relatively harmless in itself. But every in-group needs an out-group. Where there’s in-group loyalty there may also be out-group derogation – negative attitudes and behaviour towards the out-group. This conflict between groups of all kinds can be referred to as tribalism.

In emotive issues like politics, which tap into people’s identities, tribalism can morph into a dangerously powerful force. Violence can easily follow; indeed, tribalism is at the root of many human conflicts.

Disinformation leverages the human tendency for tribalism by creating and disseminating adversarial narratives. These inflame existing divisions, creating a sense of ‘us vs them’. We can observe many cases of this in recent political events.

Examples include Trump supporters vs Clinton supporters in the US, Leavers vs Remainers in the UK, Muslims vs Buddhists in Myanmar, and Han supporters vs Tsai supporters in Taiwan’s recent presidential election.

Backfire Effect

You might expect people to stop believing disinformation once they’re told it’s untrue. That seems logical, but human psychology doesn’t always work that way. The root of the problem lies (once again) in our self-esteem.

When certain beliefs become embedded in our worldview, they also become part of our identity. If one of those beliefs is challenged, it’s as if someone is shaking up the very foundations of that identity.

Challenges to our identity can be psychologically painful. In response, we may cling even more tightly to the original belief, making it stronger. The attempted correction backfires, which is why this process is known as the backfire effect.

Key Takeaways

  • Human psychology makes us susceptible to disinformation
  • In a world of information overload, we seek shortcuts to help us navigate. But these shortcuts, such as social proof, can be gamed
  • Much of online behaviour has its roots in aspects of self-esteem and identity
  • Simply ‘debunking’ disinformation may not be effective, because of the backfire effect
  • Adversarial narratives are a common feature of disinformation, found in many situations worldwide. They can lead to tribalism, which risks real-life violence