Social Proof and Online Manipulation

What is social proof?

Every day, countless online sources compete for our attention. The concept of social proof can help us cut through the noise. In this post, I’ll explain how it works, and discuss how it can be gamed.

To avoid information overload and possible burnout, it’s essential to zero in on the important parts and sort them into categories that make sense. But how do we know which parts are important?

The human brain uses many shortcuts to understand the complex world around us. In social proof, we use the approval of others as a shortcut. We evaluate the significance and quality of a piece of information based on how many other people like it.

Social proof is part of the psychology of persuasion, used to great effect by marketers and PR specialists. We rely on it all the time when making consumer decisions.

On Amazon, for example, customer reviews are a shortcut to guide us when choosing between a large number of possible products to purchase.

A quick glance at the reviews allows us to avoid wasting time and energy conducting our own research on various products.

We also rely on social proof when judging the importance of a message on social media.

Almost instinctively, we tend to evaluate a post with many likes or shares more favourably. We assume a lot of other people have already approved of it, so we’re happy to jump on the bandwagon.

But relying too heavily on these shortcuts leaves us vulnerable when they are manipulated.

Manipulating Social Proof

Social engagement

It’s easy to convey social proof on social media. Liking, favouriting or upvoting is the quickest and most basic form of online social proof.

When we see a post with lots of likes, we instinctively view that information as important.

The act of online sharing also taps into social proof. If other people believe a post is worth sharing, then it must be of value. We may decide to share it too. This can easily lead us into the trap of sharing disinformation.

[Image: hackers broke into Sony Music's Twitter account and falsely tweeted that Britney Spears had died, an example of social proof lending weight to a false tweet]

But online likes and shares are also pretty easy to game. On Twitter, for example, a few lines of code can produce a bot that automatically favourites every tweet containing a particular keyword, whatever that keyword might be.

A large network of automated accounts can then mass favourite (or mass retweet) any tweet, giving it a false appearance of significance, via artificial social proof.
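To make the mechanics concrete, here is a minimal sketch of the selection logic such a favourite bot uses. This is a simulation only: the tweets are plain dictionaries and the bot names are invented, whereas a real bot would authenticate against the Twitter API with account credentials.

```python
# Illustrative sketch: the keyword-matching logic behind a "favourite bot".
# Tweets are simulated as dictionaries; no real API calls are made.

def select_tweets_to_favourite(tweets, keyword):
    """Return the IDs of tweets whose text contains the keyword."""
    keyword = keyword.lower()
    return [t["id"] for t in tweets if keyword in t["text"].lower()]

def mass_favourite(bot_accounts, tweets, keyword):
    """Every bot in the network favourites every matching tweet.

    Each (bot, tweet_id) pair represents one artificial 'like'.
    """
    targets = select_tweets_to_favourite(tweets, keyword)
    return [(bot, tweet_id) for bot in bot_accounts for tweet_id in targets]

tweets = [
    {"id": 1, "text": "Breaking: big news about #Election"},
    {"id": 2, "text": "Photos of my lunch"},
]
bots = ["bot_%d" % i for i in range(100)]
likes = mass_favourite(bots, tweets, "#election")
print(len(likes))  # 100 bots x 1 matching tweet = 100 artificial likes
```

Run by a network of accounts, the same trivial rule turns one keyword match into hundreds of near-simultaneous likes, which is exactly the false appearance of significance described above.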

Another way to convey social proof is via the user profile itself. Twitter is the most obvious example of this. We perceive users with more followers as being more important than those with fewer followers. The followers provide social proof, like an exclusive club.

In contrast, an account with many friends (accounts it follows) but few followers conveys much weaker social proof. Again, automated accounts can be used to game this.

By mass following an account and then following each other back, it is possible to create the illusion of an account being popular.
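A toy model makes the arithmetic of such a follow-back ring clear. The account names are hypothetical, and real platforms apply rate limits and spam detection that this sketch ignores.

```python
# Hypothetical "follow-back ring": every bot follows the target account
# and all the other bots, so every account in the ring ends up with an
# inflated follower count.

def build_follow_ring(target, bots):
    """Return a set of (follower, followed) edges for the ring."""
    edges = set()
    for bot in bots:
        edges.add((bot, target))        # every bot follows the target
        for other in bots:
            if other != bot:
                edges.add((bot, other)) # bots also follow each other back
    return edges

def follower_count(account, edges):
    return sum(1 for follower, followed in edges if followed == account)

bots = ["bot_%d" % i for i in range(50)]
edges = build_follow_ring("target_account", bots)
print(follower_count("target_account", edges))  # 50
print(follower_count("bot_0", edges))           # 49: each bot looks popular too
```

With just fifty automated accounts, both the target and every bot in the ring appear to have a respectable following, at zero cost in genuine popularity.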

Amazon reviews

Social media isn’t the only place where gaming social proof happens. It also happens across the web, in areas such as online shopping. Take Amazon, which has hundreds of options for anything we want to buy.

How do we know which ones are worth buying? We rely on the ratings and reviews from other buyers.

Ratings and reviews are a form of social proof for products, acting as an essential shortcut for navigating through the mass of available options.

You can even filter search results by the number of rating stars each product has gained. Ratings directly affect the seller’s bottom line. For Amazon’s third-party sellers, ratings can make or break their business.

[Image: an Amazon product listing with great ratings. It looks enticing, but how many of those reviews are genuine?]

This is prime territory for gaming. And Amazon’s fake review economy is massive and growing.

Trending topics

Trending topics are another major area where social proof holds sway. The trending topics list shows the keywords or hashtags being tweeted most widely at any given moment.

Whenever big news breaks – such as a terrorist attack, plane crash or celebrity death – it usually appears immediately in Twitter’s trending topics, often before the mainstream media can produce coverage.

There’s a strong incentive to game trending topics. For individual tweeters, tweeting using a keyword or hashtag from the trending list makes their tweet more visible.

It’s common to see Twitter accounts for brands ‘hashtag-jacking’ (or ‘trend-jacking’). This is where brand managers (or spammers) shoehorn the trending hashtag into the tweet, to get their brand in front of a larger audience.

Hashtag-jacking can be done skilfully, if the hashtag is relevant to the brand, but it tends to be the domain of spammers.

This is gaming trending topics on a relatively small scale. But things become more insidious when you take into account bot armies. In these cases, a large number of artificial accounts, perhaps controlled by just one person (the ‘bot herder’), tweet coordinated messages around the same hashtag.

Done properly, this can push the hashtag into the trending topics list, where human users will engage with it, giving it further mileage. It’s an effective way to mainline disinformation into the public arena.
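The dynamic above can be sketched with a toy trend model. This deliberately simplifies: it ranks hashtags purely by tweet volume, whereas real trending algorithms also weigh velocity, recency and account diversity, and the hashtags here are invented.

```python
# Toy model of how a coordinated bot army can dominate a trending list.
# Assumption: trends rank hashtags by raw tweet volume in a time window.

from collections import Counter

def trending(tweets, top_n=3):
    """Return the top_n hashtags by number of tweets mentioning them."""
    counts = Counter()
    for text in tweets:
        for word in text.split():
            if word.startswith("#"):
                counts[word.lower()] += 1
    return [tag for tag, _ in counts.most_common(top_n)]

# Organic conversation: many users, moderate volume per topic.
organic = ["great match today #football"] * 40 + ["new single out #music"] * 30

# Bot army: one herder, 200 accounts tweeting the same coordinated hashtag.
bot_army = ["everyone is talking about #fakestory"] * 200

print(trending(organic + bot_army))  # ['#fakestory', '#football', '#music']
```

Once the coordinated hashtag tops the list, real users see and engage with it, which sustains the trend even after the bots stop tweeting.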

The ultimate goal of gaming social proof in tweets is for mainstream media to pick up the false story. With so many journalists using Twitter to find potential stories, this is not so difficult to achieve.

Google search results

Lastly, we’ll take a quick look at the effects of social proof in Google search results. When searching for something, most internet users don’t click beyond the first page of Google results.

So the higher your link shows up, the more likely it is to be influential. SEO experts make a lot of money out of getting client links onto the first page of Google results. What’s more, we are more likely to trust links that show up higher in search results.

Google’s ranking algorithms are opaque, and the company constantly adapts them to make manipulation harder. But it’s still possible to game Google search.

For example, reputation management companies create a large amount of positive content, to help clients push negative articles off the first page of Google.

This happens in politics too. In the run-up to the UK’s general election of 2019, people speculated that Boris Johnson’s campaign team may have gamed Google by seeding bizarre stories about him to make negative coverage less prominent in searches.

In 2016, extremist websites manipulated Google search to make their hate-filled propaganda, such as Holocaust denial, rank highly in results. Although Google later fixed this, savvy disinformation purveyors can still find ways to leverage its power to deceive vast swathes of the population.

Social proof: Key takeaways

We should not ignore social proof. It is both a cornerstone of how we navigate the online environment and a prime factor for manipulation.

Social proof is not just on social media. People use it all over the internet, from Amazon reviews to Twitter trends. Even Google search results, which many people trust more than social media, can be gamed.

The best way forward is to keep the concept of social proof in mind when engaging with online content. In particular, think about who might have an incentive to make the article or post that you’re reading look popular.