
Tag: social media

Analysing Trump’s Medical Disinformation on Facebook

US president Donald Trump shocked the world this week with his latest piece of medical disinformation.

Trump claimed that injecting disinfectant into the body could be an ‘interesting’ way to cure COVID-19.

He later tried to back-pedal, claiming he was being sarcastic. But that wasn’t how most of the world took it.

Dangers of medical disinformation

The mainstream media and the public widely lambasted this dangerous medical disinformation.

Amid the furore over Trump’s remarks, a major disinfectant firm issued a statement urging the public not to inject or drink any of their products.

However, members of pro-Trump Facebook groups dedicated to conspiracy theories displayed quite the opposite reaction. 

I examined some of these groups to provide comment for an article in CodaStory. I’d previously gathered this list of groups because of their strong focus on various coronavirus disinformation conspiracies.

These include claims that 5G causes the virus, that the virus is a US bioweapon, and that Bill Gates orchestrated the ‘virus hoax’ as part of his ambition to enforce a worldwide vaccination programme.

Many of the groups also centred on the QAnon conspiracy theory.

Pro-Trump Facebook reactions

You might expect the suggestion of injecting bleach to be a step too far even for these largely pro-Trump groups. Not so. 

In my initial observation of the groups, I noticed three distinct ways in which the members attempted to account for Trump’s bizarre medical disinformation.

First, that Trump was just ‘playing the media’: people must be stupid if they believed he meant what he said.

Commenters also attributed all the negative media coverage to ‘yet another’ MSM (mainstream media), liberal, or Democrat attempt to smear Trump.

Secondly, some commenters claimed that the media had quoted Trump ‘out of context’. According to them, he was speaking ‘more generally’ about possible ways to treat COVID-19.

Others highlighted a fact check article from far-right news outlet Breitbart. But no-one acknowledged the videos of Trump making these claims for everyone to see and hear. 

The third claim relates more closely to another strand of COVID-19 medical disinformation: ‘miracle cures’. One commenter claimed that Trump must have been referring to UV light therapy and ozone therapy, which already exist.

Things got more interesting when the commenter drew links between the medical disinformation about bleach and the popular narrative of ‘Vitamin C as miracle cure’.

They claimed that taking Vitamin C causes hydrogen peroxide to build up in the body. Since hydrogen peroxide has a disinfectant effect, they reasoned, Trump’s comments had a basis in medical fact.

Rationalising medical disinformation

These three counter-narratives about Trump’s medical disinformation all attempt to rationalise an influential figure making a dangerous and irresponsible remark.

Tribal attitudes drive many of these rationalisations, for example the claim that the media purposefully misinterpreted Trump’s comments as part of a ‘libs’ or ‘Dems’ smear attack. Once again, this reinforces the existing divide between populist pro-Trump narratives and the mainstream.

The question remains: How many of these Facebook group members are genuine American citizens? Facebook itself is the only entity that could properly attribute the accounts. And it doesn’t seem to be giving much away.

I suspect group members are a mix of genuine Trump supporters and astroturfers working to stir up tribal hatred of the ‘other side’.

Tribal attitudes can be dangerous, particularly in relation to public health. People in the pro-Trump tribe are more likely to challenge messages from the perceived ‘outgroup’ (‘experts’ and the ‘MSM’) such as critical public health advice from the WHO.

A similar dynamic has fuelled recent anti-lockdown protests across the US, which may already have spread the virus further and compromised the entire country. Astroturfing was certainly a factor there; there’s no reason why it couldn’t be influencing these groups too.

Social Proof and Online Manipulation

What is social proof?

Every day, countless online sources compete for our attention. The concept of social proof can help us cut through the noise. In this post, I’ll explain how it works, and discuss how it can be gamed.

To avoid information overload and possible burnout, it’s essential to zero in on the important parts and sort them into categories that make sense. But how do we know which parts are important?

The human brain uses many shortcuts to understand the complex world around us. In social proof, we use the approval of others as a shortcut. We evaluate the significance and quality of a piece of information based on how many other people like it.

Social proof is part of the psychology of persuasion, used to great effect by marketers and PR specialists. We rely on it all the time when making consumer decisions.

On Amazon, for example, customer reviews are a shortcut to guide us when choosing between a large number of possible products to purchase.

A quick glance at the reviews allows us to avoid wasting time and energy conducting our own research on various products.

We also rely on social proof when judging the importance of a message on social media.

Almost instinctively, we tend to evaluate a post with many likes or shares more favourably. We assume a lot of other people have already approved of it, so we’re happy to jump on the bandwagon.

But relying too heavily on these shortcuts may leave us vulnerable to manipulation.

Manipulating Social Proof

Social engagement

It’s easy to convey social proof on social media. Liking, favouriting or upvoting is the quickest and most basic form of online social proof.

When we see a post with lots of likes, we instinctively view that information as important.

The act of online sharing also taps into social proof. If other people believe a post is worth sharing, then it must be of value. We may decide to share it too. This can easily lead us into the trap of sharing disinformation.

Example of social proof lending weight to a false tweet: hackers broke into Sony Music’s Twitter account and tweeted falsely about the death of Britney Spears.

But online likes and shares are also pretty easy to game. On Twitter, for example, a few lines of code can produce a bot that can automatically favourite a tweet containing a particular keyword (which can be anything).

A large network of automated accounts can then mass favourite (or mass retweet) any tweet, giving it a false appearance of significance, via artificial social proof.
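To see just how low the barrier is, here is a minimal sketch of such a bot in Python using the tweepy library. Everything in it is illustrative: the credentials are placeholders, the keyword is arbitrary, and Twitter’s API access tiers, rate limits and platform-manipulation rules constrain it in practice.

```python
import tweepy

# Placeholder credentials for a single automated account (illustrative only)
client = tweepy.Client(
    consumer_key="...",
    consumer_secret="...",
    access_token="...",
    access_token_secret="...",
)

KEYWORD = "#SomeHashtag"  # the keyword or hashtag to amplify -- could be anything

# Find recent tweets containing the keyword and 'like' each one,
# lending them a small amount of artificial social proof
results = client.search_recent_tweets(query=KEYWORD, max_results=10, user_auth=True)
for tweet in results.data or []:
    client.like(tweet.id)
```

Run the same loop across a network of accounts and you get the mass-favouriting effect described above.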

Another way to convey social proof is via the user profile itself. Twitter is the most obvious example of this. We perceive users with more followers as being more important than those with fewer followers. The followers provide social proof, like an exclusive club.

In contrast, when an account has many friends (accounts it has followed) but few followers, it diminishes the effect of social proof. Again, you can use automated accounts to game this.

By having a network of automated accounts mass-follow a target account (and follow one another back), it is possible to create the illusion that the account is popular.

Amazon reviews

Social media isn’t the only place where gaming social proof happens. It also happens across the web, in areas such as online shopping. Take Amazon, which has hundreds of options for anything we want to buy.

How do we know which ones are worth buying? We rely on the ratings and reviews from other buyers.

Ratings and reviews are a form of social proof for products, acting as an essential shortcut for navigating through the mass of available options.

You can even filter search results by the number of rating stars each product has gained. Ratings directly affect the seller’s bottom line. For Amazon’s third-party sellers, ratings can make or break their business.

This random product has great ratings. It looks enticing, but how many are actually genuine?

This is prime territory for gaming. And Amazon’s fake review economy is massive and growing.

Trending topics

Trending topics are another major area where social proof holds sway. The trending topics list shows whatever keywords or hashtags people are tweeting most widely at any point in time.

Whenever big news breaks – such as a terrorist attack, plane crash or celebrity death – it usually appears immediately in Twitter’s trending topics, often before the mainstream media can produce coverage.

There’s a strong incentive to game trending topics. For individual tweeters, tweeting using a keyword or hashtag from the trending list makes their tweet more visible.

It’s common to see Twitter accounts for brands ‘hashtag-jacking’ (or ‘trend-jacking’). This is where brand managers (or spammers) shoehorn the trending hashtag into the tweet, to get their brand in front of a larger audience.

Hashtag-jacking can be done skilfully, if the hashtag is relevant to the brand, but it tends to be the domain of spammers.

This is gaming trending topics on a relatively small scale. But things become more insidious when you take into account bot armies. In these cases, a large number of artificial accounts, perhaps controlled by just one person (the ‘bot herder’), tweet coordinated messages around the same hashtag.

Done properly, this can push the hashtag into the trending topics list, where human users will engage with it, giving it further mileage. It’s an effective way to mainline disinformation into the public arena.

The ultimate goal of gaming social proof in tweets is for mainstream media to pick up the false story. With so many journalists using Twitter to find potential stories, this is not so difficult to achieve.

Google search results

Lastly, we’ll take a quick look at the effects of social proof in Google search results. When searching for something, most internet users don’t click beyond the first page of Google.

So the higher your link shows up, the more likely it is to be influential. SEO experts make a lot of money out of getting client links onto the first page of Google results. What’s more, we are more likely to trust links that show up higher in search results.

Google’s PageRank algorithms work in mysterious ways. The company is constantly adapting to make them harder to manipulate. But it’s still possible to game Google search.

For example, reputation management companies create a large amount of positive content, to help clients push negative articles off the first page of Google.

This happens in politics too. In the run-up to the UK’s general election of 2019, people speculated that Boris Johnson’s campaign team may have gamed Google by seeding bizarre stories about him to make negative coverage less prominent in searches.

In 2016, extremist websites manipulated Google search to make their hate-filled propaganda, such as Holocaust denial, rank highly in search results. Although Google later fixed this, savvy disinformation purveyors can still find ways to leverage its power to deceive vast swathes of the population.

Social proof: Key takeaways

We should not ignore social proof. It is both a cornerstone of how we navigate the online environment and a prime factor for manipulation.

Social proof is not just on social media. People use it all over the internet, from Amazon reviews to Twitter trends. Even Google search results, which many people trust more than social media, can be gamed.

The best way forward is to keep the concept of social proof in mind when engaging with online content. In particular, think about who might have an incentive to make the article or post that you’re reading look popular.

Behind the hashtag: Who’s tweeting about #SurrenderAct?

If you’ve been following the latest news about Brexit, then you’ve probably heard about the so-called ‘Surrender Act’.

It’s Boris Johnson’s way of describing the Benn Act, passed by Parliament earlier this month to prevent a no-deal Brexit. The Act compels Johnson to seek an extension to Article 50 if he can’t reach a deal with the EU by October 19, 2019.

Johnson’s supporters didn’t approve of this legislation. They claimed that the Act would ‘undermine’ Britain’s negotiating power with the EU.

#SurrenderAct immediately started trending on Twitter. But who exactly was tweeting it? I jumped into the analytics to find out.

When did the hashtag start?

When analysing a hashtag, I usually begin by checking when it was first tweeted, and by whom. #SurrenderAct was first used by an account that really didn’t want to look like a bot…

Below we see a sharp spike in activity around the hashtag. It was tweeted more than 3,000 times in 12 hours (mainly during the UK night time).

So who else is tweeting about #SurrenderAct? Below are the top 10 most active hashtag users. In the rest of this post, I’ll put these accounts under the microscope.
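For transparency, the kind of breakdown shown here can be reproduced with a few lines of pandas, assuming the hashtag’s tweets have been collected into a CSV file (the filename and column names below are hypothetical):

```python
import pandas as pd

# Hypothetical export of collected #SurrenderAct tweets
# (assumed columns: tweet_id, author, created_at)
tweets = pd.read_csv("surrenderact_tweets.csv", parse_dates=["created_at"])

# Activity over time: tweets per hour, to spot the overnight spike
per_hour = tweets.set_index("created_at").resample("1h").size()
print(per_hour)

# The ten accounts using the hashtag most often
top10 = tweets["author"].value_counts().head(10)
print(top10)
```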

Bot, cyborg, or organic human?

You’re probably wondering how many of these accounts are bots. Time for a quick reminder about what bots can (and can’t) do on Twitter. They’re pieces of code designed to amplify a particular hashtag, user or keyword. DFR Lab has a useful guide for spotting automated accounts.

The most obvious indicator of ‘bot-ness’ is a high level of activity, i.e. non-human tweeting patterns. Another is anonymity: no photo (or a generic one), a non-specific and usually political bio, and a vague location such as ‘England’. A third is amplification: only retweeting or liking other people’s tweets, boosting their messages in a quick, low-effort way.

Bots are less effective at human-to-human engagement, such as arguing with other Twitter users. That’s more likely the work of human operators (or ‘cyborgs’, accounts that combine automation with human control).

So, if boosting #SurrenderAct was the main purpose of these accounts, then we’d expect to find evidence of typical bot-like behaviours.

Let’s take a look at three interesting accounts within the top 10.

1. The Hyper-Prolific Tweeter

This account is new to Twitter, having joined in March this year. It has no photo (only the typical ‘egg’) and no bio. Definitely low effort.

But its rate of tweeting is impressive! In a short space of time, ‘christine’ has achieved a rate of over 1,000 tweets per day.

Researchers cite a number of different benchmarks for identifying ‘bot-ness’. The Oxford Internet Institute says it’s an average of 50 tweets per day. DFR Lab is more generous. It claims that 72 tweets per day would be suspicious, and over 144 would be ‘highly suspicious’.
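As a rough illustration, here is how those benchmarks might be applied in code. The thresholds come straight from the sources above; the account figures in the example are hypothetical, chosen to resemble ‘christine’.

```python
from datetime import datetime, timezone

def tweets_per_day(total_tweets: int, joined: datetime, observed: datetime) -> float:
    """Average tweets per day between account creation and the observation date."""
    days_active = max((observed - joined).days, 1)
    return total_tweets / days_active

def activity_flag(rate: float) -> str:
    """Benchmarks cited above: OII ~50/day; DFR Lab 72/day (suspicious), 144/day (highly suspicious)."""
    if rate > 144:
        return "highly suspicious (DFR Lab)"
    if rate > 72:
        return "suspicious (DFR Lab)"
    if rate > 50:
        return "above the OII benchmark"
    return "within normal range"

# Hypothetical figures for an account like 'christine': joined March 2019,
# observed in late September 2019 with roughly 215,000 tweets on its profile
rate = tweets_per_day(
    215_000,
    joined=datetime(2019, 3, 1, tzinfo=timezone.utc),
    observed=datetime(2019, 9, 26, tzinfo=timezone.utc),
)
print(f"{rate:.0f} tweets/day -> {activity_flag(rate)}")
```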

Remember, too, that retweeting is faster and lower effort than creating replies or original tweets.

As shown above, ‘christine’ is going full bot. 100% of the account’s activity is retweets, all from the Twitter for iPhone app.

2. The Latent Islamophobe

‘Sue Reap’ is at number eight among the accounts tweeting #SurrenderAct most. There are some interesting things going on with this account. Its bio is peppered with Tommy Robinson references and hashtags.

The account joined Twitter over seven years ago. But a couple of quick advanced searches show that it didn’t tweet anything for most of 2012 or 2013.

Or perhaps it did, but those tweets were later deleted. It’s not easy to know.

Suddenly, ‘Susan’ springs into action in late 2013/early 2014 with a flurry of anti-Muslim tweets.

We can see that this account has a suspiciously high activity rate, producing 126.88 tweets per day, of which 22% are replies.

This rate puts the account close to the DFR Lab’s ‘highly suspicious’ bracket of 144 tweets per day.

So has ‘Susan’ given up on Tommy?

Not in the slightest. He’s still foremost in her mind, right up there with leaving the EU. It’s practically an obsession.

3. The ‘true-blue’ Brexiteer

This account is likely to be ‘organic’, i.e. a normal human user. It’s become quite Brexity in recent years, but still within the realms of normal human behaviour.

‘Pat’ was an early adopter of Twitter, joining in 2009, possibly at the age of 55 (guessing from the handle). That would put them in their mid-60s now: the typical Brexit voter demographic.

At the beginning, ‘Pat’ tweeted everyday comments about garden parties and Michael Jackson. There was no sign of anything political.

In April 2016, when the referendum had been announced, ‘Pat’ was tweeting happily about normal things: celebrities, photography and TV shows.

But come May, as Britain inched closer to the date of the referendum, Pat’s political side suddenly became apparent. Out came the pro-Brexit tweets.

Despite this, the account is still within the realms of normal. An activity rate of 33 tweets per day is nowhere near ‘botness’. What’s more, with replies making up 82% of its activity, this account engages a lot with other users rather than simply retweeting things blindly. This is not typical ‘bot’ behaviour.

It’s likely a typical older Brexit voter who has become somewhat radicalised by the debate’s tribal nature (this isn’t unique to Brexit voters; it happens on both sides).

These accounts form just a tiny sample of the millions of accounts out there engaging with political content.

Key takeaway: Don’t just assume everyone is a bot; instead think critically before jumping to conclusions.

Nuanced communities: Mapping ISIS support on Twitter

As every content marketer knows, creating resonant narratives requires intimate knowledge of the audience in question.

Nowhere is this more true than in attempts to counter the potent messaging of ISIS. The terrorist group is infamous for its ability to attract recruits from across the world to commit violence in the name of the ‘caliphate.’

ISIS has been a fixture in the global public consciousness for over two years, from its dramatic emergence in summer 2014 to facing near-decline earlier this year, followed by resurgence with its latest attack on Berlin just weeks ago. Long before Berlin, the group had already become notorious for the quality and power of its social media messaging, professionally produced videos and slick English-language print publications.

Concerned national governments and civil society groups have made numerous attempts to counter the ISIS narrative in various ways, ranging from shutting down followers’ Twitter accounts en masse to creating alternative narratives that aim to discredit the group, its ideology and its actions. But despite all these attempts, attacks against European cities remain a very real threat.

As another gloomy and blood-soaked year of ISIS activity comes to an end, the group shows no sign of fading away. Although it has lost physical territory in Iraq and Syria, the ongoing risk of the ISIS virtual caliphate persists.

A whole range of diverse factors determine an individual’s likelihood to become radicalised, many of which have been studied in significant depth elsewhere. Social media is not necessarily the most influential factor, but it undoubtedly plays a role.

RAND, a US-based think-tank, conducted a detailed research study, published in 2016, to examine ISIS support and opposition networks on Twitter, aiming to gather insights that could inform future counter-messaging efforts.

The study used a mixed-method analytics approach to map publicly available Twitter data from across the Arabic-speaking Twitter-verse. Specific techniques used were community detection algorithms to detect links between Twitter users that could signify the presence of interactive communities, along with social network analysis and lexical analysis to draw out key themes from among the chatter.
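The report’s exact pipeline isn’t reproduced here, but the community-detection step can be sketched with the networkx library: build a graph whose edges represent interactions (mentions or retweets) between accounts, then look for densely connected clusters. The edge list below is hypothetical.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical edge list: (account, account_it_mentioned_or_retweeted) pairs
# extracted from a collection of Arabic-language tweets
edges = [
    ("user_a", "user_b"), ("user_a", "user_c"), ("user_b", "user_c"),
    ("user_d", "user_e"), ("user_e", "user_f"), ("user_d", "user_f"),
]

G = nx.Graph()
G.add_edges_from(edges)

# Modularity-based community detection: clusters of accounts that interact
# with each other far more than with the rest of the network
for i, community in enumerate(greedy_modularity_communities(G), start=1):
    print(f"Community {i}: {sorted(community)}")
```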

Research goals were to learn how to differentiate between ISIS opponents and supporters; to understand who they are and what they are saying; and to understand the connections between them while identifying the influencers.

Lexical analysis uncovered four major groups, or ‘meta-communities’ among the Arabic-speaking ISIS conversation on Twitter. These were Shia, Sunni, Syrian Mujahideen, and ISIS Supporters. They are characterised by certain distinct patterns in their tweets. Shia tend to condemn ISIS and hold positive views of Christians/the West/the international coalition fighting ISIS. This is unsurprising considering the long-standing hostility between Sunni and Shia Muslims and the fact that ISIS is a Sunni group.

The Syrian Mujahideen group is anti-Assad, holds mixed views of ISIS, and negative views of the coalition. ISIS supporters talk positively in bombastic overblown language about ISIS and the caliphate. They insult Shia, the Assad regime, and the West. Notably, their approach to social media strategy is by far the most sophisticated of the lot. And finally, the Sunni group is heavily divided along nationalistic lines, which includes most countries of the Arab world.

Key findings of interest

1. Unique audiences, essential nuance

Telling the difference between ISIS supporters and opponents in large datasets was key for this study. RAND researchers chose a simple heuristic: Twitter users who tweeted the Arabic name for ‘Islamic State’ (الدولة الإسلامية) were considered supporters, while those who used the acronym ‘Daesh’ (داعش) were considered opponents. This dividing line isn’t foolproof but, based on what’s known about the significance of these two Arabic terms, it seems a valid way to approach the task. The research found that although opponents outnumbered supporters six to one, the supporters were far more active, producing 50% more tweets daily.
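As a rough sketch of that heuristic (not the study’s actual code), each tweet can be labelled by which of the two terms it contains; tweets using both terms, or neither, would need further handling.

```python
def classify_stance(tweet_text: str) -> str:
    """Label a tweet using the keyword heuristic described above: the full name
    'Islamic State' signals support, the acronym 'Daesh' signals opposition."""
    if "الدولة الإسلامية" in tweet_text:  # 'the Islamic State'
        return "supporter"
    if "داعش" in tweet_text:  # 'Daesh'
        return "opponent"
    return "unclassified"

# Hypothetical examples
print(classify_stance("... الدولة الإسلامية ..."))  # -> supporter
print(classify_stance("... داعش ..."))              # -> opponent
```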

These findings could point to a couple of things. Firstly, the outnumbering suggests that the majority of the Arab world (or at least of the Arabic-speaking Twittersphere) is anti-ISIS. Meanwhile, the high volume of pro-ISIS tweets could suggest passionate support for the group, or could point to the presence of armies of pro-ISIS bots or the use of astroturfing. The latter two would make an interesting case for new research, especially in the present climate, where the curtain has been lifted on the use of social media bots, astroturfing armies and persona management software.

2. Jordanian pilot, Turkish soldiers

The researchers also plotted Twitter activity levels for all four groups, from July 2014 (when ISIS emerged and announced itself to the world) to May 2015. One notable finding was that both anti-ISIS groups (Shia and Sunni States) showed similar activity patterns, suggesting that both were responding to the same ISIS-related events. All four groups experienced a large spike in activity in early February 2015, when ISIS released a video showing Jordanian pilot Moath al-Kasasbeh being burned alive.

After this event, the ISIS supporters’ activity decreased sharply, while the Syrian Mujahideen’s grew to almost match that of the Shia and Sunni States groups. Possible explanations (assuming the ISIS supporters are not bots) include outrage at the murder of a fellow Muslim, and/or outrage at the method of killing, burning, which is forbidden in the Qur’an. It would be interesting to compare the Twitter response to al-Kasasbeh’s murder with the response to another ISIS burning video, released last week, in which two Turkish soldiers were killed.

This comparison could reveal further insights about the nature of the original 2015 spike; or reveal changing attitudes towards Turkey, which has started fighting against ISIS in recent months and has most likely become hated among the group’s supporters as a result.

3. Social media mavens

The ISIS supporters’ Twitter community analysed in the study showed particular features that made it distinct from the other groups. Its members were more active than those of the other three groups (despite smaller numbers overall). Predictably, they tweeted a lot of pro-ISIS terms and phrases. But what was most notable about this group was its fluency in, and command of, advanced social media strategy, as shown by its use of certain terms on Twitter. In the study, the supporters group used disproportionately high levels of terms such as spread, link, breaking news, media office, and pictorial evidence.

In general, ISIS has always been exceptionally conversant with social media marketing tools and techniques, in fact far superior to the efforts of many national governments. I would be very interested to see a study that uncovers who exactly is responsible for the ISIS propaganda, what their backgrounds are, and how they were recruited and trained (if indeed they weren’t already expert in this area).

4. CVE insights from Twitter data

Finally, the report offers insights for policy-makers and for those engaged in online CVE (countering violent extremism) efforts across the Arab world. The most important of these is a reiteration of the need for counter-messaging that’s not just tailored, but that shows deep insight into the mindsets of its target audiences. Research like this can help reveal useful themes and connections to build upon.

Also, Twitter’s ongoing efforts to ban pro-ISIS accounts have undoubtedly driven many of them to other channels, most notoriously Telegram. Analysing activity on these newer channels would be of great use in revealing any shifts in ISIS supporters’ focus or mindset. Much in the landscape has changed since this report was released, and it continues to do so at a rapid rate.

Fake armies: A field guide to astroturfing

“There are invisible rulers who control the destinies of millions.”

― Edward L. Bernays

It sounds so Orwellian: the world’s opinions shaped by vast armies of bots, or by paid groups of teenagers in Macedonia. But this is no longer just a 1984-style nightmare; the scenario has become reality, and not just in authoritarian states. Technology is now used to drown out the voices of real people, creating an alternate reality where fake opinions rule and the zeitgeist is based on myths.

What exactly is astroturfing?

Astroturfing is where paid groups or automated technologies (‘bots’) fool the public into believing that certain opinions are more popular or widespread than they really are. It’s used in many arenas, from political campaigning to Amazon reviews. With the increasing influence of social media, it’s difficult to tell fake from fact. Astroturfing is especially likely to happen whenever the interests of big business come into conflict with those of the public, for example climate change and big oil, or lung cancer and tobacco companies. Challenging scientifically proven fact should be an impossible endeavour; surely nothing is more sacred than fact? But in a world led by fake news and paid opinion, the word of experts has been cheapened. In fact, many people no longer trust experts at all. This was demonstrated to devastating effect this year during the EU referendum in the UK and the presidential election in the United States.

When did astroturfing begin?

Astroturfing is not a phenomenon of the digital age. It’s been going on since before social media began. Back in the days of print newspapers, so-called ‘concerned residents’ would send a barrage of letters to the editor, especially around election times, to protest against certain policies or candidates. Now that newspapers have gone online, the armies of astroturfers have headed to the nearest obvious outlet: the comment sections. From there, it’s an easy step to create multiple identities and start posting comments. Forums are another prime target for astroturfers, along with blogs and, of course, social media. Have you ever felt a sense of despair when reading the comments under a newspaper article posted on Facebook? They seem to bring out the worst of human nature, but some of them could be astroturfers. In our low moments, when we feel the world is doomed to a constant cycle of bigotry, xenophobia and fear, perhaps we’d do well to remind ourselves that the rabid anti-Muslim or anti-foreigner comments online could simply be the work of some bot army.

What’s the role of technology?

As technology advances further, astroturfing gets more sophisticated. Russia has a particular talent for harnessing the power of fake opinion on a massive scale, using something called ‘persona management software’. This software creates bot armies that use fake IP addresses to hide their location, along with generating authentic-looking ‘aged’ profiles. There’s almost no way to tell bot from human, and that’s where the real danger lies. Fake opinion en masse can have alarming results: shifting the social and political mood and whipping people up into hysteria over issues that are minor or even non-existent.

Thanks to the online echo chambers that we live in these days, fake opinion can spread with ease once sown. It becomes further reinforced and legitimised by ongoing social sharing and discussion. Most social media users get their news from within a bubble, as algorithms do their utmost to show only the updates that the user is most likely to engage with. This means there’s less chance of people being shown opinions that challenge their existing worldview. That’s a recipe for disaster – and it’s one that we’ve only just begun to understand the significance of.

What are the implications?

Politics in 2016 is fishy business. In particular, the Trump election campaign is extremely suspicious. There have been claims that Russia used its cyber warfare prowess to interfere in the US elections, in the end putting Trump in command of the country. Notably, Russia has been accused of using its hackers to steal thousands of incriminating emails from Hillary Clinton’s campaign and the Democratic National Committee, which were then published via Wikileaks. This move eroded public trust in Clinton and narrowed the gap between the candidates by double digits. Again, like astroturfing, this technique is not new. Orchestrating the right conditions to encourage people to act in a certain way has been used for decades. The father of propaganda, Edward Bernays, used it to great effect in the early 20th century, to sell pianos and bacon, and to cause regime change in Guatemala.

Having Trump in power is very much in Russia’s interests. Trump is inexperienced in politics, especially foreign policy, making him very much open to manipulation from afar. He has a reputation for being greedy, meaning he can be easily bought. He has already said publicly that he favours a non-interventionist military policy abroad. For the Kremlin, a Trump presidency is Russia’s very own puppet in the White House. It’s the Cold War revisited, with Russia scoring a massive coup against the US. Only this time Russia has technology on its side, propelling its influence all the way into the corridors of American power. The Soviets couldn’t have hoped for anything like it.

Controlling the zeitgeist via propaganda and astroturfing has reached new heights in this fundamentally connected age where the concept of ‘post-truth’ is rapidly gaining currency. That’s a serious concern; it makes a mockery of democracy and free speech, destroying the validity of the internet as a forum for useful online debate. Soon we won’t know what’s bot and what’s not. In this post-truth, Trump-tainted era, one could well argue that is already the case.