
Category: Influence Operations

Social Proof and Online Manipulation

What is social proof?

Every day, countless online sources compete for our attention. The concept of social proof can help us cut through the noise. In this post, I’ll explain how it works, and discuss how it can be gamed.

To avoid information overload and possible burnout, it’s essential to zero in on the important parts and sort them into categories that make sense. But how do we know which parts are important?

The human brain uses many shortcuts to understand the complex world around us. In social proof, we use the approval of others as a shortcut. We evaluate the significance and quality of a piece of information based on how many other people like it.

Social proof is part of the psychology of persuasion, used to great effect by marketers and PR specialists. We rely on it all the time when making consumer decisions.

On Amazon, for example, customer reviews are a shortcut to guide us when choosing between a large number of possible products to purchase.

A quick glance at the reviews allows us to avoid wasting time and energy conducting our own research on various products.

We also rely on social proof when judging the importance of a message on social media.

Almost instinctively, we tend to evaluate a post with many likes or shares more favourably. We assume a lot of other people have already approved of it, so we’re happy to jump on the bandwagon.

But relying too heavily on these shortcuts may leave us vulnerable to the risk of them being manipulated.

Manipulating Social Proof

Social engagement

It’s easy to convey social proof on social media. Liking, favouriting or upvoting is the quickest and most basic form of online social proof.

When we see a post with lots of likes, we instinctively view that information as important.

The act of online sharing also taps into social proof. If other people believe a post is worth sharing, then it must be of value. We may decide to share it too. This can easily lead us into the trap of sharing disinformation.

Example of social proof in a false tweet: hackers broke into Sony Music’s Twitter account and tweeted falsely about the death of Britney Spears.

But online likes and shares are also pretty easy to game. On Twitter, for example, a few lines of code can produce a bot that automatically favourites any tweet containing a particular keyword (which can be anything).

A large network of automated accounts can then mass favourite (or mass retweet) any tweet, giving it a false appearance of significance, via artificial social proof.
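
To give a sense of how little code is involved, here’s a minimal sketch of such a bot using the Tweepy library. The credentials and keyword are placeholders, and Twitter’s API access rules have tightened considerably since, so treat it as an illustration of the mechanic rather than a working recipe.

```python
import tweepy

# Placeholder credentials: a bot herder would register an app for each account.
BEARER_TOKEN = "..."
CONSUMER_KEY, CONSUMER_SECRET = "...", "..."
ACCESS_TOKEN, ACCESS_TOKEN_SECRET = "...", "..."

KEYWORD = "#SomeHashtag"  # the keyword can be anything

# Client used to perform the 'like' action on behalf of the bot account.
client = tweepy.Client(
    consumer_key=CONSUMER_KEY,
    consumer_secret=CONSUMER_SECRET,
    access_token=ACCESS_TOKEN,
    access_token_secret=ACCESS_TOKEN_SECRET,
)

class AutoLiker(tweepy.StreamingClient):
    """Likes every tweet that matches the stream rule."""

    def on_tweet(self, tweet):
        client.like(tweet.id)

stream = AutoLiker(BEARER_TOKEN)
stream.add_rules(tweepy.StreamRule(KEYWORD))  # watch the stream for the keyword
stream.filter()  # blocks, liking matching tweets as they arrive
```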

Another way to convey social proof is via the user profile itself. Twitter is the most obvious example of this. We perceive users with more followers as being more important than those with fewer followers. The followers provide social proof, like an exclusive club.

In contrast, when an account has many friends (accounts it has followed) but few followers, it diminishes the effect of social proof. Again, you can use automated accounts to game this.

By having a network of automated accounts mass follow a target account, and follow each other back, it is possible to create the illusion that the account is popular.
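
A toy simulation (pure Python, no real accounts or APIs involved) shows how quickly a follow-back ring inflates the numbers:

```python
# Toy model of a follow-back ring: each bot follows the target account
# and every other bot, so every account in the ring ends up looking popular.
bots = [f"bot_{i}" for i in range(500)]
target = "target_account"

followers = {name: set() for name in bots + [target]}

for bot in bots:
    followers[target].add(bot)         # every bot follows the target
    for other in bots:
        if other != bot:
            followers[other].add(bot)  # ...and every other bot in the ring

print(len(followers[target]))   # 500 'followers' for the target
print(len(followers["bot_0"]))  # 499 'followers' for each bot
```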

Amazon reviews

Social media isn’t the only place where gaming social proof happens. It also happens across the web, in areas such as online shopping. Take Amazon, which has hundreds of options for anything we want to buy.

How do we know which ones are worth buying? We rely on the ratings and reviews from other buyers.

Ratings and reviews are a form of social proof for products, acting as an essential shortcut for navigating through the mass of available options.

You can even filter search results by the number of rating stars each product has gained. Ratings directly affect the seller’s bottom line. For Amazon’s third-party sellers, ratings can make or break their business.

This random product has great ratings. It looks enticing, but how many are actually genuine?

This is prime territory for gaming. And Amazon’s fake review economy is massive and growing.

Trending topics

Trending topics are another major area where social proof holds sway. The trending topics list shows whatever keywords or hashtags people are tweeting most widely at any point in time.

Whenever big news breaks – such as a terrorist attack, plane crash or celebrity death – it usually appears immediately in Twitter’s trending topics, often before the mainstream media can produce coverage.

There’s a strong incentive to game trending topics. For individual tweeters, tweeting using a keyword or hashtag from the trending list makes their tweet more visible.

It’s common to see Twitter accounts for brands ‘hashtag-jacking’ (or ‘trend-jacking’). This is where brand managers (or spammers) shoehorn the trending hashtag into the tweet, to get their brand in front of a larger audience.

Hashtag-jacking can be done skilfully, if the hashtag is relevant to the brand, but it tends to be the domain of spammers.

This is gaming trending topics on a relatively small scale. But things become more insidious when you take into account bot armies. In these cases, a large number of artificial accounts, perhaps controlled by just one person (the ‘bot herder’), tweet coordinated messages around the same hashtag.

Done properly, this can push the hashtag into the trending topics list, where human users will engage with it, giving it further mileage. It’s an effective way to mainline disinformation into the public arena.
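
The mechanics of such a push are not complicated. Here’s a rough sketch of the general idea, again using Tweepy. The load_bot_credentials helper is hypothetical, standing in for however the bot herder stores each account’s keys, and the hashtag and message templates are invented.

```python
import random
import time
import tweepy

HASHTAG = "#ManufacturedTrend"
TEMPLATES = [
    "Can't believe this is happening {tag}",
    "Everyone needs to see this {tag}",
    "Why is nobody talking about this? {tag}",
]

def load_bot_credentials():
    """Hypothetical helper: return one dict of API keys per bot account
    (consumer_key, consumer_secret, access_token, access_token_secret)."""
    return []

for creds in load_bot_credentials():
    client = tweepy.Client(**creds)     # authenticate as this bot account
    text = random.choice(TEMPLATES).format(tag=HASHTAG)
    client.create_tweet(text=text)      # post the coordinated message
    time.sleep(random.uniform(5, 60))   # stagger posts to look less automated
```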

The ultimate goal of gaming social proof in tweets is for mainstream media to pick up the false story. With so many journalists using Twitter to find potential stories, this is not so difficult to achieve.

Google search results

Lastly, we’ll take a quick look at the effects of social proof in Google search results. When searching for something, most internet users don’t click beyond the first page of Google.

So the higher your link shows up, the more likely it is to be influential. SEO experts make a lot of money out of getting client links onto the first page of Google results. What’s more, we are more likely to trust links that show up higher in search results.

Google’s ranking algorithms work in mysterious ways. The company constantly adapts them to make them harder to manipulate. But it’s still possible to game Google search.

For example, reputation management companies create a large amount of positive content, to help clients push negative articles off the first page of Google.

This happens in politics too. In the run-up to the UK’s general election of 2019, people speculated that Boris Johnson’s campaign team may have gamed Google by seeding bizarre stories about him to make negative coverage less prominent in searches.

In 2016, extremist websites manipulated Google search to make their hate-filled propaganda, such as Holocaust denial, rank highly in search results. Although Google later fixed this, savvy disinformation purveyors can still find ways to leverage its power to deceive vast swathes of the population.

Social proof: Key takeaways

We should not ignore social proof. It is both a cornerstone of how we navigate the online environment and a prime factor for manipulation.

Social proof is not just on social media. People use it all over the internet, from Amazon reviews to Twitter trends. Even Google search results, which many people trust more than social media, can be gamed.

The best way forward is to keep the concept of social proof in mind when engaging with online content. In particular, think about who might have an incentive to make the article or post that you’re reading look popular.

Reputation Risk: When Disinformation Attacks Brands

We hear a lot about disinformation in politics, but what happens when similar techniques are used to target brands? Coordinated disinformation campaigns can devastate a brand’s reputation. In 2019, 78% of US consumers said that disinformation would cause them to lose trust in a brand (according to research from New Knowledge).

As well as trust, disinformation can cause brands to lose business. In the same survey, 40% of consumers said they’d boycott the brand, stop doing business with it, or switch to a competitor. Combined with the risk of long-term reputational damage, these figures highlight just how vulnerable brands can be to disinformation.

Ideology and Pranks

Those who attack the reputation of brands do it for a variety of reasons. It may be an ideological crusade against a brand whose values clash with those of the attacker.

For example, in 2017, a 4chan user launched a disinformation campaign, known as ‘Dreamer Day’, which offered illegal immigrants a 40% discount at their local Starbucks. The campaign had its own hashtag, #borderfreecoffee, plus a realistic-looking ad designed by other 4chan users. But the campaign’s objective was simply to lure immigrants out in public and then report them to the police. The 4chan trolls chose to target Starbucks because of its perceived liberal brand values, which were antithetical to their own.

In the UK, an Indian restaurant lost half its revenue after it became the target of disinformation. In this case, attackers posted a fake article online claiming that the restaurant used human meat in its dishes. The story hit Twitter and Facebook where it quickly gained traction. The restaurant was subjected to abusive phone calls and online harassment, and lost many bookings as a result. The motive for the disinformation was unclear, but it was likely a prank, done just for the ‘lols’.

The Quest for Profit

Just as with political disinformation, some attackers target brands for the sake of ideology, while others do it for entertainment value. Still others do it purely for the sake of making money.

In politics, disinformation for profit has primarily been studied in the arena of online advertising. Here, people create websites full of junk political news and disinformation, then promote them on social media. They know that divisive, emotionally triggering content attracts more clicks and hence more money. By loading the sites with ads, their creators can make a big profit from disinformation.

Disinformation for profit can also be aimed at brands. In this situation, one of the most common ways to make money is via the stock market. Fake articles that gain enough traction on social media can send a company’s stock price crashing or soaring. Manipulating the process gives the attacker a prime opportunity to cash in on these artificial shifts.

And the problem is set to get worse. Many investment firms now rely on algorithms to scan news articles, press releases and social media for keywords to help them make investment decisions. Gaming this system is potentially easy to do, simply by creating fake accounts and fake content stuffed with keywords that get picked up by the algorithms.
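
As a rough illustration of why this is plausible, here’s a toy keyword-counting signal of the kind such a system might compute. Real trading algorithms are far more sophisticated, but anything that keys off text can, in principle, be fed fabricated text. The term lists and inputs below are invented.

```python
# Toy news-sentiment signal: count bullish vs bearish terms in incoming
# headlines and posts.
BULLISH = {"record profits", "beats expectations", "breakthrough", "acquisition"}
BEARISH = {"recall", "lawsuit", "data breach", "bankruptcy", "fraud"}

def sentiment_signal(texts):
    score = 0
    for text in texts:
        lowered = text.lower()
        score += sum(term in lowered for term in BULLISH)
        score -= sum(term in lowered for term in BEARISH)
    return score  # positive suggests buy pressure, negative suggests sell pressure

# A flood of fabricated posts is enough to swamp the signal.
fake_posts = ["BREAKING: $ACME hit by massive data breach and fraud probe"] * 200
print(sentiment_signal(fake_posts))  # -400
```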

Disinformation can also be used to make profit by manipulating the online review ecosystem. This tactic could easily be leveraged by unscrupulous brands to get an unfair advantage over their competitors.

It’s easy and cheap to hire firms that specialise in writing convincing fake reviews, on sites like Amazon, TripAdvisor or even on Google itself. With so many consumers now relying on reviews to make buying decisions, a sustained campaign of fake ones can devastate a brand’s reputation.

How brands can protect themselves

In a world where disinformation is prevalent, brands don’t have to be sitting ducks. As a brand manager, you can’t stop someone launching a disinformation campaign against your brand, but you can certainly fight back.

The first step involves arming yourself with knowledge about the extent of the disinformation. That involves finding answers to key questions about what’s being said, where it’s being said, and what sort of people are saying it. Who is the campaign most likely to influence? Will it reach the eyes of your key target audiences, or is it confined to less relevant groups and/or geographic areas? Once you have the answers, you can start to craft a crisis plan or reputation management strategy.

But it’s easy to feel lost among the masses of data out there. That’s why brand managers not only need the right tools, but also the right experts to interpret the data and provide actionable insights.

In a follow-up post, I’ll present some case studies about brands targeted by disinformation and discuss in more detail how they fought back.

Behind the hashtag: Who’s tweeting about #SurrenderAct?

If you’ve been following the latest news about Brexit, then you’ve probably heard about the so-called ‘Surrender Act’.

It’s Boris Johnson’s way of describing the Benn Act, passed by Parliament earlier this month to prevent a no-deal Brexit. The Act compels Johnson to seek an extension to Article 50 if he can’t reach a deal with the EU by October 19, 2019.

Johnson’s supporters didn’t approve of this legislation. They claimed that the Act would ‘undermine’ Britain’s negotiating power with the EU.

#SurrenderAct immediately started trending on Twitter. But who exactly was tweeting it? I jumped into the analytics to find out.

When did the hashtag start?

When analysing a hashtag, I usually begin by checking when it was first tweeted, and by whom. #SurrenderAct was first used by an account that really didn’t want to look like a bot…

Below we see a sharp spike in activity around the hashtag. It was tweeted more than 3,000 times in the space of 12 hours (mainly during the UK night time).

So who else is tweeting about #SurrenderAct? Below are the top 10 most active hashtag users. In the rest of this post, I’ll put these accounts under the microscope.

Bot, cyborg, or organic human?

You’re probably wondering how many of these accounts are bots. Time for a quick reminder about what bots can (and can’t) do on Twitter. They’re pieces of code designed to amplify a particular hashtag, user or keyword. DFR Lab has a useful guide for spotting automated accounts.

The most obvious indicator of ‘bot-ness’ is a high level of activity, i.e. non-human tweeting patterns. Other top indicators are anonymity (no photo or a generic one, a non-specific and usually political bio, and a vague location such as ‘England’) and amplification (only retweeting or liking other people’s tweets, i.e. boosting their messages in a quick, low-effort way).

Bots are less effective at human-to-human engagement, such as arguing with other Twitter users. That’s more likely the work of human operators (or cyborgs, accounts that mix automation with human input).

So, if boosting #SurrenderAct was the main purpose of these accounts, then we’d expect to find evidence of typical bot-like behaviours.

Let’s take a look at three interesting accounts within the top 10.

1. The Hyper-Prolific Tweeter

This account is new to Twitter, having joined in March this year. It has no photo (only the typical ‘egg’) and no bio. Definitely low effort.

But its rate of tweeting is impressive! In a short space of time, ‘christine’ has achieved a rate of over 1,000 tweets per day.

Researchers cite a number of different benchmarks for identifying ‘bot-ness’. The Oxford Internet Institute says it’s an average of 50 tweets per day. DFR Lab is more generous. It claims that 72 tweets per day would be suspicious, and over 144 would be ‘highly suspicious’.
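
These benchmarks are simple to apply in code. Here’s a rough sketch, assuming you’ve already collected an account’s creation date, its lifetime tweet count and a sample of its recent tweets (the field names are placeholders for whatever your collection tool produces):

```python
from datetime import datetime, timezone

# Benchmarks cited above, in tweets per day.
OII_BENCHMARK = 50           # Oxford Internet Institute
DFR_SUSPICIOUS = 72          # DFR Lab: suspicious
DFR_HIGHLY_SUSPICIOUS = 144  # DFR Lab: highly suspicious

def activity_profile(total_tweets, created_at, recent_sample):
    """total_tweets: lifetime tweet count; created_at: account creation datetime;
    recent_sample: list of dicts with 'is_retweet' and 'is_reply' flags."""
    age_days = max((datetime.now(timezone.utc) - created_at).days, 1)
    per_day = total_tweets / age_days
    retweet_share = sum(t["is_retweet"] for t in recent_sample) / len(recent_sample)
    reply_share = sum(t["is_reply"] for t in recent_sample) / len(recent_sample)

    if per_day > DFR_HIGHLY_SUSPICIOUS:
        label = "highly suspicious"
    elif per_day > DFR_SUSPICIOUS:
        label = "suspicious"
    elif per_day > OII_BENCHMARK:
        label = "above the OII benchmark"
    else:
        label = "within normal range"
    return per_day, retweet_share, reply_share, label
```

An account averaging over 1,000 tweets a day, made up entirely of retweets, maxes out every one of these measures.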

Remember too, that retweeting is faster and lower effort than creating replies or original tweets.

As shown above, ‘christine’ is going full bot. 100% of the account’s activity is retweets, all from the Twitter for iPhone app.

2. The Latent Islamophobe

‘Sue Reap’ is at number eight among those who most tweeted #SurrenderAct. There are some interesting things going on with this account. Its bio is peppered with Tommy Robinson references and hashtags.

The account joined Twitter over seven years ago. But a couple of quick advanced searches shows that it didn’t tweet anything for most of 2012 or 2013.

Or perhaps it did, but those tweets were deleted. It’s not easy to know.

Suddenly, ‘Susan’ springs into action in late 2013/early 2014 with a flurry of anti-Muslim tweets.

We can see that this account has a suspiciously high activity rate, producing 126.88 tweets per day, 22% of which are replies.

This rate puts the account close to the DFR Lab’s ‘highly suspicious’ bracket of 144 tweets per day.

So has ‘Susan’ given up on Tommy?

Not in the slightest. He’s still foremost in her mind, right up there with leaving the EU. It’s practically an obsession.

3. The ‘true-blue’ Brexiteer

This account is likely to be ‘organic’, i.e. a normal human user. It’s become quite Brexity in recent years, but still within the realms of normal human behaviour.

‘Pat’ was an early adopter of Twitter, joining in 2009, possibly when he/she was 55 (guessing from the handle). That would put them in their mid-60s now: the typical Brexit voter demographic.

At the beginning, ‘Pat’ tweeted everyday comments about garden parties and Michael Jackson. There was no sign of anything political.

In April 2016, when the referendum had been announced, ‘Pat’ was tweeting happily about normal things: celebrities, photography and TV shows.

But come May, as Britain inched closer to the date of the referendum, Pat’s political side suddenly became apparent. Out came the pro-Brexit tweets.

Despite this, the account is still within the realms of normal. An activity rate of 33 tweets per day is nowhere near ‘bot-ness’. What’s more, replies make up 82% of its activity, which shows that this account engages a lot with other users rather than simply retweeting things blindly. This is not typical ‘bot’ behaviour.

It’s likely to be a typical older Brexit voter who has become somewhat radicalised by the debate’s tribal nature (this isn’t unique to Brexit voters; it happens on both sides).

These accounts form just a tiny sample of the millions of accounts out there engaging with political content.

Key takeaway: Don’t just assume everyone is a bot; instead think critically before jumping to conclusions.

A Tale Of Two Leaders: Facebook, Astroturfing, And Social Proof

There’s something odd about the Prime Minister’s Facebook page.

Underneath every post, especially those about Brexit, are hundreds of responses. This isn’t unusual for the page of a public figure, but the style of the responses doesn’t ring true.

They are all very similar: short utterances of praise for Boris Johnson, repeating words and phrases such as ‘brilliant’, ‘fantastic’ and ‘support Boris 100%’. Each comment is festooned with Facebook’s emojis, mainly representing the positive sentiments of ‘like’, ‘love’ and ‘laugh’.

This behaviour feels odd. I’m not denying that a lot of genuine people do support Johnson, but it’s suspicious for so many to consistently comment on his posts in this distinctive and repetitive fashion.

Screenshot of Boris Johnson’s Facebook page, with a selection of comments about Brexit.

Let’s contrast this with the page of his predecessor, Theresa May, specifically her Brexit-related posts. Here we see a very different scenario.

Responses to May’s posts tend to be a lot more varied in content, tone and length. Some disagree with her. Others support her. But most are expressed with more depth and linguistic sophistication than the short, repetitive replies on Johnson’s page.

In short, the responses on May’s page look far more likely to be ‘organic’ (i.e. produced by real people behaving naturally) than the majority of those on Johnson’s. It’s possible that Johnson’s page is using artificial amplification techniques, which may include fake followers.

Screenshot of Theresa May’s Facebook page showing a sample of typical comments about Brexit. Note the contrast with Johnson’s page.

Facebook locks its data down tight, so it’s hard to run further analysis to determine for certain whether the Johnson supporters are part of an organised campaign.

But we can draw on previous examples. Donald Trump’s Facebook page attracted large numbers of fake followers during the US presidential campaign. Researchers discovered that over half of the followers on his page came from countries known as hubs for Facebook ‘like farms’.

These ‘farms’ are often found in developing nations such as the Philippines and India, where the dollar stretches a long way. They offer customers the opportunity to buy fake Facebook likes to create the impression of popular support.

As well as likes, customers can purchase fake engagement, usually in the form of comments. This may explain the unusual commenting activity on Johnson’s page.

For political purposes, this type of artificial campaign is an important tool, because it generates the illusion of grassroots support for a particular figure or issue. It even has a name: astroturfing.

Illusion becomes reality when the fake engagement intersects with genuine users, who are more likely to engage with seemingly popular posts thanks to the effect of ‘social proof’ – a psychological phenomenon where people tend to follow the actions of the masses.

This can be leveraged to great effect in social media environments, where user attention spans are low, knee-jerk reactions are prevalent, and ‘likes’ are an addictive form of currency.

Imagined communities: The alt-left on Twitter

In January, I joined a team of researchers in Amsterdam to explore the online communities of ‘Alt-left’ and ‘Antifa’ using data from Twitter and Reddit. This was one of many projects all taking place as part of the Digital Methods Initiative Winter School, organised by the University of Amsterdam.

In particular, we wanted to use the data to find out whether the so-called ‘alt-left’ represents a genuine community, or if it’s simply a construct of the alt-right in the latter’s attempts to promote a false equivalence, as has been claimed. We were also curious to establish whether the alt-left overlapped with Antifa online, or if they were two distinct communities.

We arrived at the following research questions. This blog post will focus on the first two only.
1. Does the ‘Alt Left’ exist (to Antifa)?
2. Could the ‘Alt Left’ be considered an Alt Right construct?
3. Is the ‘Alt Left’ as a notion mainstreaming?
4. How did the ‘Alt Left’ emerge? How to characterise the Alt Left’s identity (semantically)?

For the first question, we collected the top 20 hashtags associated with each of the ‘alt-left’ and ‘Antifa’ keywords. We found very little overlap between the two groups. Notably, the discourse of the alt-left (shown in green below) consisted of more words that could be considered ‘mainstream political dialogue’, compared to the ‘Antifa’ discourse, which seemed more fringe.

The latter’s hashtags contained references to lesser-known terms such as ‘democratTerrorism’, along with the names of some Japanese Antifa groups. From this data, we could reasonably conclude that the alt-left and Antifa are separate, distinctive discourses.
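
In essence, the comparison boils down to extracting the top 20 hashtags from each keyword’s tweet collection and checking the overlap. The sketch below shows the general idea (it’s a simplification, not the actual DMI tooling); altleft_tweets and antifa_tweets are tiny placeholder lists standing in for the collected datasets.

```python
from collections import Counter

def top_hashtags(tweets, n=20):
    """tweets: a list of tweet texts collected for one keyword query."""
    counts = Counter(
        word.lower()
        for text in tweets
        for word in text.split()
        if word.startswith("#")
    )
    return {tag for tag, _ in counts.most_common(n)}

# Placeholder data: in the real project these were thousands of tweets per keyword.
altleft_tweets = ["The #altleft is just #antifa rebranded", "#altleft #fakenews"]
antifa_tweets = ["#Antifa handbook thread", "#antifa #democratTerrorism"]

altleft_top = top_hashtags(altleft_tweets)  # tweets matching 'alt-left'
antifa_top = top_hashtags(antifa_tweets)    # tweets matching 'Antifa'

overlap = altleft_top & antifa_top
print(f"Shared top hashtags: {len(overlap)} of {len(altleft_top)}")
```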

The term ‘alt-left’ arrived in the mainstream after Donald Trump used it when talking about the violence around the ‘Unite the Right’ rally in the US city of Charlottesville. He condemned groups on both sides of the violence, saying: “What about the alt-left that came charging at, as you say, at the alt-right?”

Subsequently, there has been discussion in the mainstream media about whether alt-left really exists. It could simply be a construct of the far-right as it attempts to gain moral ground against its opposition by creating a false equivalent that can be painted as equally, if not more, violent.

The next section of our research examined this question in more depth, using Twitter data to give a clearer picture. We compared the top 50 Twitter users for both the #altleft and #Antifa hashtags, by looking more closely at their bios and timelines.

Right-wing conservative Trump supporters dominated the alt-left group, while the Antifa group was populated by a strange mix of accounts posting eBay-related tweets and tweets in Japanese promoting the ‘Antifa Handbook’.

A deeper dive, this time into the most influential hashtag users (influence = most followed), produced illuminating results. For both keywords, we identified the majority of influential users as pro-Trump conservatives, with the exception of one rather prolific Japanese bot.

We found three significant users who overlapped with both terms, all of them pro-Trump. The common thread connecting both #altleft and #Antifa hashtag users appears to be their support for Trump, rather than specific alt-right features.

The below Gephi graph shows the different communities grouping around the Antifa keyword on Twitter. As can be seen, the biggest cluster represents American, right-wing Trump supporters.

We also created an equivalent graph for the alt-left hashtag communities. This graph is less consistent, but its most striking feature is the large purple cluster, with one Twitter account, ‘kwilli1046’, at its centre, indicating that this user is extremely influential.

A screenshot of his profile is shown below. ‘Kevin W’ has around 88k followers and a bio link to a page on Gab, the alternative ‘free speech’ social networking platform where many of the far-right have ended up after being kicked off Twitter.

In conclusion, we found that the ‘alt-left’ does not exist as anything more than an online construct. It is mainly a term propagated by pro-Trump accounts in their attempts to create a ‘faux-public’. With so many of these accounts being pro-Trump, this could potentially point to the use of state-sponsored bots and organised trolling to push the alt-left as a form of disinformation. This could be a worthwhile topic for building on the findings of this research.