
Reputation Risk: When Disinformation Attacks Brands

We hear a lot about disinformation in politics, but what happens when similar techniques are used to target brands? Coordinated disinformation campaigns can devastate a brand’s reputation. In 2019, 78% of US consumers said that disinformation would cause them to lose trust in a brand (according to research from New Knowledge).

As well as trust, disinformation can cost brands business. In the same survey, 40% of consumers said they’d either stop doing business with the brand or switch to a competitor. Combined with the risk of long-term reputational damage, these figures highlight just how vulnerable brands are to disinformation.

Ideology and Pranks

Those who attack the reputation of brands do it for a variety of reasons. It may be an ideological crusade against a brand whose values clash with those of the attacker.

For example, in 2017, a 4chan user launched a disinformation campaign, known as ‘Dreamer Day’, which offered illegal immigrants a 40% discount at their local Starbucks. The campaign had its own hashtag, #borderfreecoffee, plus a realistic-looking ad designed by other 4chan users. But the campaign objective was simply to lure immigrants out in public and then report them to the police. The 4chan trolls chose to target Starbucks because of its perceived liberal brand values, which were antithetical to their own.

In the UK, an Indian restaurant lost half its revenue after it became the target of disinformation. In this case, attackers posted a fake article online claiming that the restaurant used human meat in its dishes. The story hit Twitter and Facebook where it quickly gained traction. The restaurant was subjected to abusive phone calls and online harassment, and lost many bookings as a result. The motive for the disinformation was unclear, but it was likely a prank, done just for the ‘lols’.

The Quest for Profit

Just as with political disinformation, some attackers target brands for the sake of ideology, while others do it for entertainment value. Still others do it purely for the sake of making money.

In politics, disinformation for profit has primarily been studied in the arena of online advertising. Here, people create websites full of junk political news and disinformation, then promote them on social media. They know that divisive, emotionally triggering content attracts more clicks and hence more money. By loading the sites with ads, their creators can make a big profit from disinformation.

Disinformation for profit can also be aimed at brands. In this situation, one of the most common ways to make money is via the stock market. Fake articles that gain enough traction on social media can crash, or inflate, a company’s stock price, giving the attacker a prime opportunity to cash in on these artificial shifts.

And the problem is set to get worse. Many investment firms now rely on algorithms to scan news articles, press releases and social media for keywords to help them make investment decisions. Gaming this system is potentially easy to do, simply by creating fake accounts and fake content stuffed with keywords that get picked up by the algorithms.
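To see why such systems are gameable, consider a minimal sketch of the kind of keyword scoring an automated news scanner might perform. The keyword lists, weights and company name here are purely illustrative assumptions, not any real firm’s model:

```python
# Illustrative phrase lists: a real system would use far richer models,
# but the gaming principle is the same.
POSITIVE = {"record profits", "beats expectations", "raises guidance"}
NEGATIVE = {"bankruptcy", "recall", "fraud", "cuts guidance"}

def score_headline(text: str) -> int:
    """Crude sentiment score: +1 per bullish phrase, -1 per bearish one."""
    t = text.lower()
    return sum(p in t for p in POSITIVE) - sum(n in t for n in NEGATIVE)

def trade_signal(text: str) -> str:
    """Map the score to a naive trading decision."""
    s = score_headline(text)
    return "buy" if s > 0 else "sell" if s < 0 else "hold"

# A fabricated article stuffed with bearish keywords triggers a sell signal,
# regardless of whether any of it is true.
print(trade_signal("EXCLUSIVE: fraud probe and product recall at MegaCorp"))  # sell
```

Anyone who can flood social media with keyword-stuffed fake content can nudge a scorer like this in whichever direction they choose.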

Disinformation can also be used to make profit by manipulating the online review ecosystem. This tactic could easily be leveraged by unscrupulous brands to get an unfair advantage over their competitors.

It’s cheap and easy to hire firms that specialise in writing convincing fake reviews on sites like Amazon, TripAdvisor, or even Google itself. With so many consumers now relying on reviews to make buying decisions, a sustained campaign of fake ones can devastate a brand’s reputation.

How brands can protect themselves

In a world where disinformation is prevalent, brands don’t have to be sitting ducks. As a brand manager, you can’t stop someone launching a disinformation campaign against your brand, but you can certainly fight back.

The first step involves arming yourself with knowledge about the extent of the disinformation. That involves finding answers to key questions about what’s being said, where it’s being said, and what sort of people are saying it. Who is the campaign most likely to influence? Will it reach the eyes of your key target audiences, or is it confined to less relevant groups and/or geographic areas? Once you have the answers, you can start to craft a crisis plan or reputation management strategy.

But it’s easy to feel lost among the masses of data out there. That’s why brand managers not only need the right tools, but also the right experts to interpret the data and provide actionable insights.

In a follow-up post, I’ll present some case studies about brands targeted by disinformation and discuss in more detail about how they fought back.


How Disinformation Hacks Your Brain

Today I’m going to explain how disinformation hacks your brain.

In our fast-moving online world, even the most aware of us can be taken in by disinformation. As humans, our minds work in certain ways that can leave us vulnerable to deception – and no-one is immune.

Our perceptions are not as reliable as we might like to imagine, and the online environment amplifies these flaws. In this post, I’ll discuss five important psychological traits that dictate how disinformation hacks your brain.

5 Ways Disinformation Hacks Your Brain

Confirmation Bias

Search engines give us access to all the world’s information simply by typing a few words into a search bar.

Because of confirmation bias, people tend to search only for information that reinforces their beliefs. Furthermore, even if what they find is disinformation, the effect of confirmation bias makes them less likely to question its veracity.

For example, let’s take someone who already dislikes Donald Trump. Perhaps they might search Google for “why is Trump still president?”

This search produces a slew of articles critical of Trump, feeding into the person’s existing beliefs. Consequently, this becomes a vulnerable moment during which disinformation can easily find a foothold in the mind.

Social Proof

The term ‘social proof’ was coined by Robert Cialdini in his seminal book on persuasion, Influence. It’s a way of building trust in a person, a product or a message by demonstrating that many people approve of it.

The bandwagon effect is the motivating force driving social proof. It dictates that if something seems popular, people will feel compelled to join in.

Social proof is especially important in today’s environment of information overload. Because there are so many options available to us, we need a shortcut to help us cut through the noise and determine which ones to trust.

For marketers, social proof is an essential tool. It’s also a powerful weapon in the arsenal of disinformation. Devices such as bots play a major role in building social proof around certain messages, including false ones.

Liking, sharing and replying to these messages creates an illusion of widespread approval, which attracts more people to trust them. This may snowball, causing the message to go viral.

There’s a lot more to say about the role of social proof in disinformation. I’ll explore it in more detail in a follow up post. For now, remember that it’s easy to fake online popularity, so likes and retweets aren’t always a reliable indicator of grassroots public opinion.

The Consensus Effect

We like to think that many other people share our beliefs, preferences, values and habits, even when that’s not actually the case. Behavioural psychology calls this overestimation the false consensus effect. It stems from our self-esteem and our desire to fit in with a social group.

Online, the false consensus effect is amplified in two main ways:

  • By means of algorithms that show us opinions reflecting our own (filter bubble effect),
  • By our habit of engaging only with others who support our views (echo chamber effect).

Disinformation that taps into the false consensus effect can find a fertile environment to take root, grow and mutate. Social media helps this happen. No matter how convinced you are of a certain view, you should never forget that other people may well think differently.

Tribalism

Humans are social animals, so gaining the approval of a like-minded group boosts our self-esteem. We reinforce that self-esteem by behaving in ways that favour our own group (known as the in-group).

For example, we might post on social media about the positive traits of our in-group, which is relatively harmless in itself. However, every in-group needs an out-group.

Furthermore, where there’s in-group loyalty there may also be out-group derogation – negative attitudes and behaviour towards the out-group. This conflict between groups of all kinds is a form of tribalism. It plays a huge role in how disinformation hacks your brain.

In emotive issues like politics, which tap into aspects of people’s identities, tribalism can morph into a force of dangerous power. Violence can easily follow. In fact, tribalism is the driving force behind many human conflicts.

Disinformation leverages the human tendency for tribalism by creating and disseminating adversarial narratives. These inflame existing divisions, creating a sense of ‘us vs them’. We can observe many cases of this in recent political events.

Examples include Trump supporters vs Clinton supporters in the US, Leavers vs Remainers in the UK, Muslims vs Buddhists in Myanmar, Han fans vs Tsai fans in Taiwan’s recent presidential election.

The Backfire Effect

You might expect people to stop believing in disinformation once they’re told it’s untrue. That seems logical; however, human psychology doesn’t always work that way. The root of the problem lies (once again) in our self-esteem.

When certain beliefs become embedded in our worldview, they also become part of our identity. If one of those beliefs is challenged, it’s as if someone is shaking the very foundations of that identity.

Challenges to our identity can be psychologically painful. In response, we may cling even tighter to the original belief, making it stronger. The attempted correction backfires, which is why this process is known as the backfire effect.

How Disinformation Hacks Your Brain: Key Takeaways

  • Human psychology makes us susceptible to disinformation
  • In a world of information overload, we seek shortcuts, such as social proof, to help us navigate. However, these shortcuts can be gamed.
  • Much of online behaviour has its roots in aspects of self-esteem and identity.
  • Simply ‘debunking’ disinformation may not be effective, due to the backfire effect.
  • Adversarial narratives are a common feature of disinformation, found in many situations worldwide. They can lead to tribalism, which risks real-life violence.

Behind the hashtag: Who’s tweeting about #SurrenderAct?

If you’ve been following the latest news about Brexit, then you’ve probably heard about the so-called ‘Surrender Act’.

It’s Boris Johnson’s way of describing the Benn Act, passed by Parliament earlier this month to prevent a no-deal Brexit. The Act compels Johnson to seek an extension to Article 50 if he can’t reach a deal with the EU by October 19, 2019.

Johnson’s supporters didn’t approve of this legislation. They claimed that the Act would ‘undermine’ Britain’s negotiating power with the EU.

#SurrenderAct immediately started trending on Twitter. But who exactly was tweeting it? I jumped into the analytics to find out.

When did the hashtag start?

When analysing a hashtag, I usually begin by checking when it was first tweeted, and by whom. #SurrenderAct was first used by an account that really didn’t want to look like a bot…

Below we see a sharp spike in activity around the hashtag. It was tweeted over 3,000 times in 12 hours (mainly during the UK night time).

So who else is tweeting about #SurrenderAct? Below are the top 10 most active hashtag users. In the rest of this post, I’ll put these accounts under the microscope.

Bot, cyborg, or organic human?

You’re probably wondering how many of these accounts are bots. Time for a quick reminder about what bots can (and can’t) do on Twitter. They’re pieces of code designed to amplify a particular hashtag, user or keyword. DFR Lab has a useful guide for spotting automated accounts.

The most obvious indicator of ‘bot-ness’ is a high level of activity, i.e. non-human tweeting patterns. Other top indicators are:

  • Anonymity: no photo (or a generic one), a non-specific (usually political) bio, and a vague location, e.g. ‘England’,
  • Amplification: only retweeting or liking other people’s tweets, i.e. boosting their messages in a quick, low-effort way.

Bots are less effective at human-to-human engagement, such as arguing with other Twitter users. That sort of activity is more likely to come from human operators (or cyborgs, accounts that combine automation with human control).

So, if boosting #SurrenderAct was the main purpose of these accounts, then we’d expect to find evidence of typical bot-like behaviours.

Let’s take a look at three interesting accounts within the top 10.

1. The Hyper-Prolific Tweeter

This account is new to Twitter, having joined in March this year. It has no photo (only the typical ‘egg’) and no bio. Definitely low effort.

But its rate of tweeting is impressive! During a short space of time, ‘christine’ has achieved a rate of over 1000 tweets per day.

Researchers cite a number of different benchmarks for identifying ‘bot-ness’. The Oxford Internet Institute says it’s an average of 50 tweets per day. DFR Lab is more generous. It claims that 72 tweets per day would be suspicious, and over 144 would be ‘highly suspicious’.
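These benchmarks are simple to apply once you know an account’s join date and total tweet count, both visible on its profile. Here’s a minimal sketch; the thresholds come from the figures above, while the example account numbers are purely illustrative:

```python
from datetime import date

# Benchmarks cited above: OII flags ~50 tweets/day on average; DFR Lab
# treats 72+/day as suspicious and 144+/day as highly suspicious.
OII_THRESHOLD = 50
DFR_SUSPICIOUS = 72
DFR_HIGHLY_SUSPICIOUS = 144

def tweets_per_day(total_tweets: int, joined: date, today: date) -> float:
    """Average daily activity since the account was created."""
    days = max((today - joined).days, 1)  # avoid dividing by zero for brand-new accounts
    return total_tweets / days

def classify(rate: float) -> str:
    """Label an activity rate against the published benchmarks."""
    if rate >= DFR_HIGHLY_SUSPICIOUS:
        return "highly suspicious"
    if rate >= DFR_SUSPICIOUS:
        return "suspicious"
    if rate >= OII_THRESHOLD:
        return "flagged (OII benchmark)"
    return "within normal range"

# Hypothetical account: joined in March 2019, 211,000 tweets by late September.
rate = tweets_per_day(211_000, date(2019, 3, 1), date(2019, 9, 28))
print(round(rate, 1), "tweets/day:", classify(rate))  # 1000.0 tweets/day: highly suspicious
```

Note that the averages are crude: an account can also look ‘normal’ overall while bursting to bot-like rates during specific campaigns, which is why the spike charts matter too.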

Remember too, that retweeting is faster and lower effort than creating replies or original tweets.

As shown above, ‘christine’ is going full bot. 100% of the account’s activity is retweets, all from the Twitter for iPhone app.

2. The Latent Islamophobe

‘Sue Reap’ is at number eight among those who most tweeted #SurrenderAct. There are some interesting things going on with this account. Its bio is peppered with Tommy Robinson references and hashtags.

The account joined Twitter over seven years ago. But a couple of quick advanced searches shows that it didn’t tweet anything for most of 2012 or 2013.

Or, perhaps it did, but those tweets got deleted…It’s not easy to know.

Suddenly, ‘Susan’ springs into action in late 2013/early 2014 with a flurry of anti-Muslim tweets.

We can see that this account has a suspiciously high activity rate, producing 126.88 tweets per day, 22% of which are replies.

This rate puts the account close to the DFR Lab’s ‘highly suspicious’ bracket of 144 tweets per day.

So has ‘Susan’ given up on Tommy?

Not in the slightest. He’s still foremost in her mind, right up there with leaving the EU. It’s practically an obsession.

3. The ‘true-blue’ Brexiteer

This account is likely to be ‘organic’, i.e. a normal human user. It’s become quite Brexity in recent years, but still within the realms of normal human behaviour.

‘Pat’ was an early adopter of Twitter, joining in 2009, possibly at age 55 (guessing from the handle). That would put them in their mid-60s now, the typical Brexit voter demographic.

At the beginning, ‘Pat’ tweeted everyday comments about garden parties and Michael Jackson. There was no sign of anything political.

In April 2016, when the referendum had been announced, ‘Pat’ was tweeting happily about normal things: celebrities, photography and TV shows.

But come May, as Britain inched closer to the date of the referendum, Pat’s political side suddenly became apparent. Out came the pro-Brexit tweets.

Despite this, the account is still within the realms of being normal. An activity rate of 33 tweets per day is nowhere near ‘bot-ness’. What’s more, with replies making up 82% of its activity, this account engages a lot with other users rather than simply retweeting things blindly. This is not typical ‘bot’ behaviour.

It’s likely to be a typical older Brexit voter who has become somewhat radicalised by the debate’s tribal nature (this isn’t unique to Brexit voters; it happens on both sides).

These accounts form just a tiny sample of the millions of accounts out there engaging with political content.

Key takeaway: Don’t just assume everyone is a bot; instead think critically before jumping to conclusions.

A Tale Of Two Leaders: Facebook, Astroturfing, And Social Proof

There’s something odd about the Prime Minister’s Facebook page.

Underneath every post, especially those about Brexit, are hundreds of responses. This isn’t unusual for the page of a public figure, but the style of the responses doesn’t ring true.

They are all very similar; short utterances of praise for Boris Johnson, repeating words and phrases such as ‘brilliant’, ‘fantastic’, and ‘support Boris 100%’. Each comment is festooned with Facebook’s emojis, mainly representing positive sentiments of ‘like’, ‘love’ and ‘laugh’.

This behaviour feels odd. I’m not denying that a lot of genuine people do support Johnson, but it’s suspicious for so many to consistently comment on his posts in this distinctive and repetitive fashion.

Screenshot of Boris Johnson’s Facebook page, with a selection of comments about Brexit.

Let’s contrast this with the page of his predecessor, Theresa May, specifically her Brexit-related posts. Here we see a very different scenario.

Responses to May’s posts tend to be a lot more varied in content, tone and length. Some disagree with her. Others support her. But most are expressed with more depth and linguistic sophistication than the short, repetitive replies on Johnson’s page.

In short, the responses on May’s page look far more likely to be ‘organic’ (i.e. produced by real people behaving naturally) than the majority of those on Johnson’s. It’s possible that Johnson’s page is using artificial amplification techniques, which may include fake followers.

Screenshot of Theresa May’s Facebook page showing a sample of typical comments about Brexit. Note the contrast with Johnson’s page.

Facebook locks its data down tight, so it’s hard to run further analysis to determine for certain whether the Johnson supporters are part of an organised campaign.

But we can draw on previous examples. Donald Trump’s page attracted large numbers of apparently fake Facebook followers during the 2016 US presidential campaign. Researchers discovered that over half of the followers on his page came from countries known as hubs for Facebook ‘like farms’.

These ‘farms’ are often found in developing nations such as the Philippines and India, where the dollar stretches a long way. They offer customers the opportunity to buy fake Facebook likes to create the impression of popular support.

As well as likes, customers can purchase fake engagement, usually in the form of comments. This may explain the unusual commenting activity on Johnson’s page.

For political purposes, this type of artificial campaign is an important tool, because it generates the illusion of grassroots support for a particular figure or issue. It even has a name: astroturfing.

Illusion becomes reality when the fake engagement intersects with genuine users, who are more likely to engage with seemingly popular posts thanks to the effect of ‘social proof’ – a psychological phenomenon where people tend to follow the actions of the masses.

This can be leveraged to great effect in social media environments, where user attention spans are low, knee-jerk reactions are prevalent, and ‘likes’ are an addictive form of currency.

How Personal Branding Paved The Way For The Post-Truth Era

Over a decade ago, an idea was born that seemed innocent at the time, even ground-breaking. It was the idea of personal branding; marketing one’s own skills like a product. In this piece, I’m going to reflect on how the personal branding mindset has played a role in creating today’s polarised and tribal online environment.

In his original Fast Company article, ‘The Brand Called You’, author Tom Peters urges his readers to develop their personal brands by delivering talks and developing word-of-mouth marketing around their unique skills. He briefly mentions the importance of showing familiarity with new technology (such as email), but as a rather minor consideration. After all, it was 1997; the digital world hadn’t yet become an inextricable part of everyone’s lives.

Fast forward a few years to the early 2000s, where people had started publishing their own content using blogs and personal websites. The social media platform MySpace was launched in 2003, followed a couple of years later by Facebook. These tools were powerful and they allowed ordinary people to broadcast their message, whatever that might be, to large audiences. It was a whole new way to build the brand called you.

Digital tribalism

The growth of social media and blogs spawned a whole generation of online content creators, some successful, many not. People could now reinvent themselves personally and professionally simply by producing relevant online content and sharing it with audiences via social media. The trick to success was finding a bunch of people with whom your message resonated, i.e. your tribe.

The idea of ‘finding your tribe’ is central in branding strategy, both for commercial marketers and personal branders. Personal branding gurus often stress the importance of being bold and even divisive in the content you choose to publish. The goal in doing so is to eliminate those who aren’t on board with your opinions, leaving only your loyal, like-minded tribe remaining.

Arguably, this tribal approach has instilled in the digital generation a habit of being strongly opinionated online. It’s all too easy to be bold and divisive when you’re safely behind a screen. You can blog, make videos and write ebooks to your heart’s content.

But creating effective content for personal branding takes up a lot of time and mental energy. Not everyone wants (or has the skills/motivation) to write original blog posts about their key career interests. Luckily, there’s another approach: content curation.

This popular and effortless alternative for building a personal brand community involves sharing other people’s content with your target audience, sometimes (but not necessarily) adding your own quote or original take.

Curation can be done quickly and with the minimum of effort; an appealing strategy in a time-pressed world. For example, content curation on Twitter could be as simple as retweeting articles and tweets relevant to the personal brand you wish to create. By doing this consistently, you can attract like-minded people, which then gives you a tribe, or brand community.

Another relevant factor in the development and solidification of personal branding is the deliberate design of online social networks, which encourages users to take actions that generate more likes, clicks and engagement from their audience.

Content curation and social network design are symbiotic processes, leading to a cycle where people create (or curate) content, gain approval from their tribe, experience a boost in self-esteem, and repeat. This ongoing process generates more traffic for the social networks and more revenue for their advertisers.

Personal branding meets politics

In 2008, online social networks made their big debut in politics as part of Barack Obama’s presidential campaign. Obama won the presidency, and followed up in 2012 with another win and another dose of digital political campaigning.

By then, more and more people were using social media, and the first signs of manipulation began to emerge. On top of that, attention spans were beginning to erode as people became used to a life lived almost wholly online. The introduction of Apple’s iPhone and the resulting explosion in smartphone use exacerbated this shift, giving people access to social media in their pockets at all times.

It created the beginnings of a world where everyone on a bus or train would have their head down staring at a smartphone. Once we gained the possibility of sharing content at the touch of a ‘share’ button, content curation as part of maintaining a self image would soon become habitual for many. By 2016, social network use was prevalent, most people had a smartphone, and information was flowing non-stop.

Politics had firmly entered the personal branding arena, and campaign managers deployed increasingly clever strategies, such as digital profiling and social ads, to win over voting populations. This practice came to a head with the EU referendum in the UK, closely followed by the 2016 election of Donald Trump as US president.

Going tribal

To better understand what drove these events, it’s useful to first consider the innate human tendency to see the world in terms of ‘us vs them’. This is well demonstrated by the work of social psychologist Henri Tajfel on social identity theory and intergroup conflict, in particular the ‘minimal group paradigm’.

The minimal group paradigm shows that people will divide themselves into opposing groups over the most trivial of matters, such as a coin flip, grouping themselves into ‘Heads’ and ‘Tails’. Once divided into groups, people tend to favour their own ingroup while disadvantaging, and even derogating, the outgroup. If people can get tribal over a simple coin flip, imagine what they’re like over political ideology.

This us vs them mentality manifests in many areas of life, from harmless rivalry over cities and sports teams (e.g. Boston vs New York, or Manchester vs Liverpool), to more serious issues of racism, xenophobia and nationalism.

It also manifests in the digital world, exacerbated by today’s entrenched tendency for personal branding and ‘finding one’s tribe’. People receive positive reinforcement as part of the in-group whenever they broadcast their identity to their fellow brand community members. They usually do this by sharing content, whether their own, or, more commonly, curated from others.

Two infamous political examples are the behaviour of Trump supporters versus Clinton supporters, or Leave versus Remain supporters. Both sides commonly derogate the other (e.g. libtard, Brexiteer) and view their ingroup as superior.

That’s not the only way social identity theory manifests itself in contemporary digital politics. In addition to derogating each others’ perceived political outgroup, it’s become common practice to derogate, and even dehumanise, certain outgroups in wider society, normally minorities such as Muslims, refugees or immigrants.

These groups have become easy targets because of an array of social and political events over recent decades that have put them squarely in the firing line. Ever since the terrorist attacks of 9/11, the British and US mainstream media have consistently highlighted attacks committed by Muslim perpetrators while downplaying similar ones conducted by non-Muslims.

What’s more, the Syrian civil war and the rise of ISIS triggered a massive influx of refugees from Syria and Iraq into Europe. Together, these events produced a climate of fear and uncertainty; fertile territory for innate ‘us and them’ attitudes to thrive in a digital sphere where online tribalism (in the personal branding sense) had long been a common practice.

Tribes before truth

This leads to a very current concern: the rise of online misinformation, often known as ‘fake news’. With such a huge flood of information now available via our smartphones, we don’t always have time to read everything in detail. So we take shortcuts and get lazy about processing information properly. We simply don’t have the time or inclination to think deeply about every piece of content we interact with online.

Nevertheless, we crave engagement and approval from our ‘tribe’. Perhaps we’ve become somewhat addicted to it, to the extent that we sometimes share articles without even reading them. Recent research found that 50–70% of URLs on Twitter are shared without being opened, suggesting that people share them based on the headline alone. This has heavy implications for the spread of misinformation, and suggests, too, that fact-checking alone probably won’t work.

In an online space rife with misinformation, why would someone share an article without reading it first? Arguably, broadcasting our affiliation to our digital tribe matters more to us than veracity. More critically, broadcasting this affiliation to our ingroup is likely to involve derogation of an outgroup. After all, we really want those likes and shares, and that’s often the best way to get them.

One of the key goals in sharing content on social media (especially Twitter) seems to be to signal that ‘we’ (the ingroup) are different from ‘them’ (the outgroup). This dichotomy shows up most disturbingly in stories about ‘Muslim rape gangs’, refugee ‘sex mobs’, and terrorist attacks that never happened (e.g. the fictitious Bowling Green massacre).

In this tense milieu, it’s easy for misinformation to get picked up and spread as part of the ‘tribal broadcasting’ process, or ‘content curation’ in personal branding parlance. If a certain news story fits people’s ingroup vs outgroup narratives, they’re probably going to share it on social media. Truth may come second to tribalism.

The real danger comes when this digital tribalism plays out in real world scenarios, such as an uptick in anti-immigrant hate crime, or violent events such as ‘Pizzagate’. Both have been linked to online misinformation.

You might ask what the social media giants are doing to address this issue. They’ve made various efforts to implement reporting tools so that users can flag hate speech. They have also shut down particularly heinous accounts, such as InfoWars, which existed purely to peddle misinformation and hate.

But digital tribalism in fact boosts all the metrics that spell success for social media firms, creating a self-reinforcing situation. One can’t help but wonder how far they’ll actually go to rein this in.

If only we could all quit social media, en masse. Would that solve the problem? Or does it run deeper than that?