Author: sjnorth

Disinformation, Influence Ops and the Humble Blog

Just a few days ago, Wired Magazine published an article revealing that the ongoing Russian disinformation campaign also involved a number of long-form blog posts published on Medium.

In this post, I want to talk about why this seemingly new move from Russian info ops is actually not surprising at all and makes perfect sense within an overall content strategy. Blogs play a key role in such a strategy: the blog sits at the heart, while content posted on Twitter, Facebook and other social platforms acts as ‘outposts’ emanating from it.

Some of the Medium bloggers had tens of thousands of followers. They published content around predictable topics such as ‘race, science and politics, indoctrination on US college campuses’ – and that good old chestnut, ‘lies from Hillary Clinton’. The fake blogger identities were carefully crafted to appeal to their target audience, with bios claiming to be conservative, Christian, and family-oriented.

This approach mirrors the one used on Twitter by many bot and sock puppet accounts, which are designed to instil a sense of community between the fake accounts and their Trump-supporting human target audiences. It’s an attempt to create an in-group, define its identity in opposition to the ‘out-group’ and build trust between its members. This makes the messages seem more believable to the target audience.

The most famous and prolific disinformation blogger of all is ‘Jenna Abrams’, the Kremlin-operated account that fooled Americans for a number of years. From 2014 onwards, the ‘freewheeling American blogger’ who just happened to push a return to segregation and decried ‘PC culture run amok’ had plenty of time to build a significant target audience. The account was perfectly placed to propel divisive messages into the mainstream media, becoming an influential voice just in time for Trump’s election in 2016.

How did this fake identity become so influential? Firstly, ‘Abrams’ touted a lot of controversial opinions. Pushing openly xenophobic and racist views riled up many journalists and celebrities on social media, which boosted Abrams’s messages and helped them gain traction and visibility. Many of her tweets and posts went viral. When something goes viral it commonly reaches the attention of the mainstream media, and this is what happened with Abrams. The account was featured in many of the biggest media outlets, which in turn fed back into her following, attracting more people to view, share and comment on her content.

Conventional marketing wisdom has long held up blogging as a highly effective way to build a personal or professional brand. During its early days, a whole host of blogging gurus and advice sites sprang up, such as Problogger, Neil Patel and Chris Brogan, teaching people how to harness the power of online content to define their brand and attract better professional opportunities. Indeed, many Internet celebrities such as Tim Ferriss, Gala Darling and Ash Ambirge of the Middle Finger Project built their careers thanks to the power of blogging.

I did it myself, spending around four years building a professional identity in the field of nation brand strategy by creating content about the topic and publishing it online. This, combined with a lot of social media outreach and networking, plus a bit of well-placed SEO, brought me a modest range of opportunities: an invitation to keynote at an international conference, an interview with CNN, and a number of consulting engagements based on my supposed expertise. That expertise was built solely through blogging.

My nation branding blog wasn’t pushing disinformation. But it’s nevertheless another case in point showing just how easy it is to build a believable personal brand and gain an audience purely through online content creation. With this in mind, it doesn’t surprise me in the least that Russia’s expert info ops strategists are using blogging as part of their disinformation campaigns. I’d wager we can expect more campaigns like this in the future.

What’s more, the line between a personal blog and a professional news source has become blurred. Anyone can set up a website and promote their own content to the world. If it attracts enough viewers and enough engagement, then the site will start to gain value and become a trusted source. With enough traction, a simple blog might even turn into something like Breitbart, with the influence to significantly affect public opinion and bring toxic debates into the mainstream.

Finding links between ISIS, online disinformation and Islamophobia

I’m currently very interested in looking for possible links between the 2014 emergence of Isis and the disinformation campaigns currently plaguing social media. Both aim to poison public perceptions of Muslims. Could sentiment analysis help to draw the link between the two? The process could begin by mapping the sentiment in relation to certain keywords when political events occur involving Muslims. It could also be interesting to map public sentiment towards Muslims after terrorist attacks happen around Europe and the UK. This would likely produce spikes of negative sentiment, as would be expected.
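The spike-mapping step could be sketched as a rolling z-score test. This is only a minimal illustration, assuming we already have a daily series of negative-sentiment scores for a keyword; the series, window size and threshold below are all invented for the example.

```python
from statistics import mean, stdev

def flag_spikes(daily_scores, window=14, threshold=2.0):
    """Flag days whose negative-sentiment score deviates sharply
    from the trailing `window`-day baseline (z-score test)."""
    spikes = []
    for i in range(window, len(daily_scores)):
        baseline = daily_scores[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (daily_scores[i] - mu) / sigma > threshold:
            spikes.append(i)
    return spikes

# Toy series: a flat baseline of mild negativity, then a sharp
# jump such as might follow a terrorist attack.
series = [0.30, 0.32, 0.29, 0.31, 0.28, 0.33, 0.30,
          0.31, 0.29, 0.32, 0.30, 0.28, 0.31, 0.30,
          0.31, 0.85, 0.33]
print(flag_spikes(series))  # only the jump at index 15 is flagged
```

A real analysis would need a robust sentiment model and handling of seasonality, but the same idea, comparing each day against a trailing baseline, underlies most simple spike detectors.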

But can social media evidence be found that points to a long-term trend for increased negativity towards Muslims? And can this be mapped convincingly to the starting point of the dramatic emergence of Isis in summer 2014? Can we then compare this period of two or three years with the period from 2001 (after 9/11) all the way through to 2013? I’m curious to know whether the online content put out by Isis can be linked in some way with various amplification campaigns by Russian bots or sock puppets. It is becoming common knowledge that actors sympathetic to Russia have been manipulating Western social media opinion in the run up to both Brexit and the election of Donald Trump. Russia has also been accused of orchestrating this manipulation in various national elections across the European Union.

The rise of the far right, which helped bring about Brexit and Trump, seems to have happened in tandem with the emergence of Isis. If Islamist extremists and far right extremists are, as some experts argue, just two sides of the same coin, then it would be logical for their trajectories to run in parallel. But how has this relationship played out on social media? Significant proportions of the public now spend massive amounts of time on social media, which has largely replaced television and newspapers as the architect of public opinion. Therefore, whoever controls social media messages has a good chance of controlling public opinion.

With phenomena such as viral content and the rise of memes, there is much opportunity for malicious actors with vested interests to sow chaos and promote confusion. But is it really feasible that the world’s present direction has been orchestrated by certain groups? For argument’s sake, let’s assume for a moment that Russia is the architect of the current situation. The term ‘disinformation’ comes from the Russian phrase dezinformatsiya, the Cold War name of a KGB department specialising in black propaganda.

Moreover, Russia has long been known for its commitment to long-term strategic military thinking. During the Cold War, the Soviet Union was the underdog, with fewer resources than the wealthy United States. To hold its own, it was forced to develop its capabilities in a more strategic manner, going above and beyond traditional military power. This parallels how the online world works, which has long been the domain of the underdog, the misfit, the bootstrapper and the hustler. People who seem powerless have strategically used the Internet to gain the upper hand in many walks of life. It only takes one video or blog post going viral, and the resulting following can transform the nerdiest bedroom dweller into an overnight Internet celebrity with the ability to reach a wide audience. It doesn’t seem much of a stretch to imagine that a clever government could harness this sort of power to pursue its own interests.

Social media has become the lens through which millions of people view the world. If that lens is warped, then their perceptions can be easily manipulated. Some would argue with this perspective, saying that it denies people their agency, portraying them as passive actors who soak up messages without any critical thinking. The attention economy is also relevant here. Our attention spans have been hijacked. Studies have shown people are losing the ability to concentrate for long periods of time. Instead, our brains now seek the instant hits of dopamine available from notifications and popups. Facebook and Twitter have had a profound effect on our societies, with large swathes of people willingly eroding their own ability to focus. It is not difficult to conceive that certain actors would take advantage of this to push their own agendas.


Can Google search data help solve Islamophobia?

For decades, social scientists have conducted research using some combination of surveys, census data, focus groups, interviews and observation techniques. With the exception of covert observation, which brings its own ethical issues, these methods have a weakness in common: they rely on people being honest. Because they are all subject to human lies, they cannot paint a reliable picture of society’s true beliefs and darkest fears. The most revealing data is instead given up willingly, in private, where people are free from the worry of being judged. Short of stealing people’s diaries or tapping their phone calls, what else can researchers do to gather the most objective data possible?

Better than surveys

In our digital era the most obvious answer is also the correct one. But until now, few people have thought to leverage this tool and publicise their findings in such an accessible way and at such a pertinent time. What is the technology we all use to ask questions, seek validation, and search for the most outrageous things? Why, of course, it’s Google. Many people would be embarrassed to display their Google search history publicly. I know mine is full of very silly things. But at the same time, these queries are deeply revealing, which is precisely why they strike a nerve. They display some of our deepest secrets. For example, a few years ago I used to get occasional panic attacks. I remember waking up at 3 am in an unfamiliar country, caught in the midst of an attack, gasping for breath. To calm myself, I searched Google for reassurance that it was ‘just’ a panic attack.

Google as ‘truth serum’

People search Google for all manner of things. Seth Stephens-Davidowitz (see below for video of his recent RSA talk), the researcher behind this work, found many searches for terms involving ‘abortions’, ‘closet gays’, ‘penis size’ and ‘breastfeeding of husbands’ (the latter apparently popular in India). He also found more sinister patterns, ones suggesting American racism was far more widespread than previously thought. In fact, search data shows the idea of America as a ‘post-racial’ society, much touted after the 2008 election of Barack Obama, to be quite absurd. Google showed American racism and Islamophobia to be thoroughly alive and kicking, even in places where people didn’t publicly admit to holding racist views. People espouse very different opinions in the privacy of their own homes, face to face only with Google. It’s Google as ‘truth serum’. Almost ten years later, with Trump at the helm, perhaps America is finally showing its true face.

Tracking Islamophobia in searches

Obama’s address to the nation after the 2015 San Bernardino attack provides an interesting example of how search data reflects hidden social views. In the speech, he aimed to calm outraged people and restore order to the country. In particular, he wanted to counteract the backlash that Muslim-Americans would surely face. While he was speaking of Muslims as ‘our friends’, ‘neighbours’ and so on, Google search data was telling a different story. After each terrorist attack (and this happens in the UK too) the volume of negative and violent searches about Muslims skyrockets. Islamophobic searches like ‘kill all Muslims’ become alarmingly frequent.

During most of Obama’s speech, these searches didn’t reduce or even level off. Instead they became even more frequent. This makes sense, because challenging people’s world views acts as an attack on their fundamental identity. In response, most will cling tighter to whatever they believe. But later in his speech, Obama changed tack. He introduced new images: not just of Muslim-Americans as friends and neighbours, who should be respected, but also of ‘Muslim soldiers’, willing to die for America, and ‘Muslim athletes’, representing the country on the world stage.

From ‘terrorists’ to soldiers and athletes

And then, something changed in the data. Islamophobic searches slowed down, to be replaced with searches for ‘Muslim athletes’ and ‘Muslim soldiers’. Something had resonated with the people searching; instead of responding predictably to Obama’s perceived ‘attack’ on their entrenched world views, they had become curious. I believe this happened for two reasons: partly because the idea of Muslims as athletes and soldiers resonated with ‘patriotic’ American audiences, and partly because these images helped to ‘de-otherise’ public perceptions of Muslims. By drawing on resonant all-American themes, Obama associated Muslims with a set of positive images rather than just trying to convince wider America to accept them as a group. In response, albeit temporarily, the volume of Islamophobic searches slowed and gave way to more positive ones.

This is encouraging in some ways, because despite the fleeting nature of this positivity, its presence suggests two important things: 1) that Islamophobia is largely a problem of perceptions, and 2) that the tide can be turned back. Negative views of Muslims have become deeply entrenched over the last three decades. Islamophobia as a public perception is regularly reinforced by mainstream media, by certain think tanks and their ‘experts’, and by reactions to the terrible deeds of ISIS, a group that has hijacked the image of Islam worldwide.

How can this data help us?

Can Google search data offer us the chance to fix some of society’s ills? Its revealing nature shows our darkest fears in a way no survey ever can. This information (anonymised, of course) could be used to bring issues into the open and address their root causes. In the case of Islamophobia, analysing Google searches could reveal where the gaps and misperceptions lie in wider society’s understanding of Muslims. It could allow us to categorise the fears, misunderstandings and false perceptions. This could inform the design of social initiatives targeting specific problems, helping people understand each other better and gain a stronger sense of reality over perception.
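As a sketch of what such categorisation might look like: the code below buckets raw queries into crude themes by keyword matching. The theme names and keyword lists are purely illustrative assumptions, not a validated taxonomy, and real work would use far richer classification.

```python
# Hypothetical theme buckets: keywords here are illustrative only.
THEMES = {
    "law":      ["sharia", "law"],
    "violence": ["terror", "kill", "attack"],
    "dress":    ["burka", "hijab", "veil"],
}

def categorise(queries):
    """Count how many queries touch each theme via crude substring
    matching (a query may hit several themes; unmatched queries
    fall into 'other')."""
    counts = {theme: 0 for theme in THEMES}
    counts["other"] = 0
    for q in queries:
        text = q.lower()
        hit = False
        for theme, keywords in THEMES.items():
            if any(k in text for k in keywords):
                counts[theme] += 1
                hit = True
        if not hit:
            counts["other"] += 1
    return counts

sample = ["is sharia law coming to the uk",
          "hijab meaning",
          "muslim athletes",
          "terror attack news"]
print(categorise(sample))
```

Even a crude breakdown like this would show which misconceptions dominate in a given region or time window, which is the information a targeted social initiative would need.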

Reflections: Terrorism and Social Media Conference 2017

Last week, in a sleepy Welsh city by the sea, a group of social media and terrorism researchers came together to discuss the latest challenges in the field.

I learned a lot, met people doing admirable work and came away inspired with ideas to shape my own research in the future. This post is a short synopsis of topics from the conference that struck me as important, interesting and/or particularly thought-provoking.

The visual web

Maura Conway’s opening keynote was peppered with mentions of the visual web – and its importance in the study of terrorist and extremist activity. All extremist groups have a visual profile, and many use images as a central feature of their propaganda and recruiting efforts.

One look at the ISIS propaganda magazine, Dabiq, proves this point. And it’s not only about images, but also video, which terrorist groups have used for decades, from the grainy, muffled bin Laden recordings all the way through to the glossy ISIS productions. Far-right groups use images too – from the notorious Pepe the Frog to a range of logos featuring swords, swastikas and national flags.

The ‘post-truth’, digital era has ushered in a trend for using images as part of disinformation efforts, driving so-called ‘fake news’. A recent example springs to mind from the March 2017 Westminster attack. In the swirling social media aftermath of Khalid Masood’s actions there emerged a photo of a Muslim woman wearing a hijab, walking across Westminster Bridge past victims, engrossed in her phone.

The image was quickly hijacked, attached to numerous false claims attacking the unknown woman for her apparent ‘disdain’ for the injured victims. These claims spawned thousands of comments where people released their Islamophobic feelings to the full, feeding into the milieu of anti-Muslim sentiment that presently hangs over society.

Of course, the truth was very different. The woman had been messaging friends and family to let them know she was safe after the attack. But despite the truth coming out, the damage had already been done. Social perceptions of Muslims as ‘bad’ had been further reinforced.

Back to Prof Conway’s speech, in which she highlighted the ‘strong signalling function’ of images, making them critical subjects for further analysis. Yet most terrorism analysts still focus primarily on text, because the analysis of images is more challenging. Visual analytics tools and techniques do exist, both qualitative and quantitative, with big data research on images being especially popular in communication science at the moment.

In short: we need to pay more attention to the visual nature of the internet – and focus more on these ‘low-hanging fruit’ of visual analytics in the study of extremism.

The far-right

TASM didn’t focus only on the Islam-related side of extremism, but showcased a balanced view across the spectrum, with plenty of emphasis on research into the far-right. I attended several interesting panel talks on this subject, and came away with a number of key points.

One piece of research compared Britain First with Reclaim Australia, aiming to draw out the nuances within the umbrella term ‘far-right’. The methodology involved corpus-assisted discourse analysis (CADS) on a static dataset of text that Britain First and Reclaim Australia supporters had posted on social media over a three-month period.

The researchers used a social media insights tool, Blurrt, to gather raw data, then used Python scripts to sort it into a workable format before analysing it with CADS. In particular, they focused on collocations to reveal telling patterns in ideas and sentiments across the two groups.
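Collocation extraction of this kind can be sketched in a few lines of Python. The toy version below ranks adjacent word pairs by pointwise mutual information (PMI), one of several association measures used in corpus linguistics; it is not the researchers’ actual pipeline, and the corpus is invented.

```python
import math
from collections import Counter

def pmi_collocations(tokens, min_count=2):
    """Rank adjacent word pairs by pointwise mutual information:
    PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) ).
    Pairs that co-occur far more often than chance score highest."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scores = {}
    for (x, y), c in bigrams.items():
        if c < min_count:          # skip rare, unreliable pairs
            continue
        p_xy = c / (n - 1)
        p_x, p_y = unigrams[x] / n, unigrams[y] / n
        scores[(x, y)] = math.log2(p_xy / (p_x * p_y))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented mini-corpus: 'our culture' co-occurs repeatedly, so it
# surfaces as the strongest collocation.
text = ("they threaten our culture our culture is under attack "
        "defend our culture they attack us").split()
top = pmi_collocations(text)
print(top[0][0])  # → ('our', 'culture')
```

On a real corpus, patterns like ‘our women’ and ‘our culture’ surfacing as strong collocations is exactly the kind of evidence behind the ‘othering’ findings described below.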

Findings included a strong pattern of ‘othering’ – the core ‘us versus them’ narrative (which is a common theme not just among far-right discourse but also in some mainstream media and foreign policy: e.g. the Iraq war – ‘Axis of Evil’).

It was unsurprising therefore to find that Muslims and immigrants were particularly targeted. In what appears to be an extension of the ‘us versus them’ theme, ‘metaphors of invasion’ were often found in the discourse of both groups.

Other common themes included mentions of ‘our women’, ‘our religion’ and ‘our culture’ as being under threat from the ‘invaders’. All these themes feel very masculine. It could be interesting to examine what proportion of these sentiments come from male authors, and what far-right discourse looks like from a female perspective.

In general, the researchers concluded that far-right propaganda is less overtly violent than that of ISIS, and is mainly rooted in nationalistic tendencies. This raises many questions. Is this how the far right has managed to fly ‘under the radar’ for so long? Are they seen as defensive rather than offensive, and hence the ‘good guys’ on some level?

Could that be a factor in the much-discussed media under-reporting of far-right crimes, while focusing almost hysterically on those perpetrated by jihadists? Or, are ISIS and similar viewed as ‘worse’ simply because they are more ‘other’ (i.e. racism)?

Resonant narratives

Just as in commercial marketing, narratives work best when they intersect with individual agency and contexts. In his panel talk, Dr Akil Awan pointed out that CVE campaigns must not neglect the real-world issues that allow extremist narratives to resonate in the first place.

So how do ISIS narratives achieve success? They play on themes of belonging and identity; important for people experiencing ‘dual culture alterity’, i.e. feeling alienated from both their parents’ culture and the culture of their country of upbringing. In these cases, a return to fundamentalism becomes an ‘anchor’; a default setting of identity in a sea of alienation.

Awan highlighted the disparity between perceptions and reality around the true numbers of Muslims living in European countries. The media drives much of this misperception; making people feel ‘under siege’, creating fear, driving societies apart and destroying any sense of cohesion. In such a milieu, it is easy for ISIS to ‘eliminate the grey zone’ by means of terrorist acts. The media has already primed society for ISIS to succeed.

Understanding perceptions is as important as understanding reality; because how people perceive something will guide their course of action in response to it. Current CVE campaigns (based around tools such as counter-narrative videos) are cheap to implement and make it look like governments are taking action.

But recognising the ‘lived experience’ of minority groups is one of the keys to successful CVE efforts; neglecting to do so is hypocritical and unlikely to be effective.


In closing, we heard from the arbiter of all this – Facebook. Dr Erin Saltman explained the tools Facebook uses to tackle the online side of extremism and terrorism. These tools include a database of extremist propaganda images that relies on machine learning to match images as they surface, and automatically remove them.
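Facebook has not published the internals of this matching system, but the general idea of hash-based image matching can be illustrated with a toy ‘average hash’. Everything below (the hash scheme, the tiny 2x2 ‘images’, the distance threshold) is an illustrative assumption, not Facebook’s actual method.

```python
def average_hash(pixels):
    """Perceptual 'average hash' of a small greyscale image, given
    as a 2D list of brightness values: each bit records whether a
    pixel is brighter than the image mean. Similar images yield
    similar bit patterns even after mild re-encoding."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def matches_database(image, database, max_distance=2):
    """True if the image's hash is within max_distance bits of any
    known propaganda hash in the database."""
    h = average_hash(image)
    return any(hamming(h, known) <= max_distance for known in database)

known_image = [[10, 200], [220, 15]]       # stand-in for a flagged image
database = {average_hash(known_image)}

slightly_altered = [[12, 198], [210, 20]]  # a re-encoded copy
unrelated = [[200, 10], [15, 230]]

print(matches_database(slightly_altered, database))  # True
print(matches_database(unrelated, database))         # False
```

The design point is that exact file hashes break under trivial re-encoding, whereas perceptual hashes survive it, which is why hash databases of this general kind can catch re-uploads automatically.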

But machine learning has its limitations, and humans are still required to take into account context and nuance. At present, the two work in tandem to surface the content (machine learning) and then interpret it as needed (humans).

Other tools include Facebook Insights, which is commonly used in commercial marketing but can also be leveraged to guide counter-speech initiatives and enable precise reading of audiences.

The age of social media, although still in its infancy, has already had profound impact on politics and society – as well as the individual psychology of internet users. The long-term effects are unknown, with many changes no doubt still on the way.


Troll farms and terrorism

In the wake of the Manchester attack, I noticed many angry, hateful and Islamophobic comments on Facebook. Many came from profiles that didn’t look human. On closer examination these profiles seemed designed to spew hate speech, mainly about Islam and Muslims. Once the profiles are set up, their owners insert hateful remarks into relevant comment threads, seeking reactions from the general public – from the genuine ‘organic’ Facebook users.

As well as eliciting angry reactions these comments amplify hate and division in society. They change people’s perceptions, making them feel surrounded by fear and hate. Repeated themes pop up in comment sections, often including the same tropes about Islam (e.g. burkas are taking over, sharia law will engulf the UK, paedophile comments about the Prophet Muhammad, all Muslims sympathise with Isis, all Muslims hate the West, why don’t Muslims condemn Isis, etc).

Such repetition may plant ideas in the minds of susceptible individuals. The bandwagon effect is at work: people read Facebook comment sections, see a lot of hateful remarks about Islam and tend to agree without critique. In the wake of a dreadful terrorist attack, people may feel justified in doing this. Hence the seeds of fear and hate of the ‘other’ are sown within a society already made vulnerable by the aftershock of an attack, one made even more potent because it targeted children and young people.

The question is, what are these commenters trying to achieve? They may be bots, or real people managing an army of profiles at the same time, using persona management software of some kind. So many more questions emerge from this. Who is managing them? Who is funding them? What is their political goal? And what measures can we take to inoculate people, especially the young, against reading and absorbing these comments without taking a critical approach? A fearful populace is so easy to manipulate.

The fact that children were killed further discourages any critical stance. It leaves anyone publicly trying to understand the drivers behind the attack open to accusations of ‘sympathising’ with the attacker. That is never a pleasant position to be in. It can mean that some who would otherwise have spoken out in support of a more nuanced and critical response to this attack are afraid to do so, because they don’t want to be accused of sympathising with individuals or organisations who would murder children.

The Manchester incident has ratcheted up the stakes. Whoever is influencing this sideshow must be satisfied with the political and social response to the Manchester attack. It’s not far-fetched to surmise that these attacks and their responses could form part of a wider strategy. We need to ask ourselves questions such as: How does a terrorist group begin? What is it responding to? What are its end goals? Could the idea of Isis supporters wanting to build a caliphate be purely part of the fantasy narrative put forward by or on behalf of the group?

Perhaps the people who pull the Isis strings don’t care if a caliphate ever gets established. Instead the group is just a tool, a powerful and dangerous one. Its actions and its mythology tell a brutal yet compelling story. The story is manipulated by cynical strategists to effect political change on a global scale, by playing on negative perceptions and people’s deep rooted suspicions of the ‘other’, in this case Muslims. Making people band together with the familiar is easy in times of fear. It’s simply a self protection mechanism, part of human nature to exclude the stranger in order to protect the known.

It would be useful to have an easy way of telling when a profile has been set up purely for the sake of propaganda, and when it’s just owned by a real and hateful human being. Even people who hold racist, Islamophobic and antisemitic views are unlikely to spend the majority of their time on social media mouthing off about them. They would surely do other things on their profiles too, like sharing photos of family members, holidays, pets, work and life in general. Even racists have normal lives. But for the trolls and bots, hate is the centre of their identity. Their social profiles reflect this, even when they try to appear ‘human’.

They have no other reason to exist except to spread hate. I’m convinced these profiles all share a set of common features, with elements that don’t quite ring true, perhaps related to the type of language they use to post the comments. Are there patterns in the choice of words? Do they tend to repeat the same tropes about Islam again and again? What are these tropes? Is the language that of a native English speaker? Or are there certain idiosyncrasies that point to a shared origin point?
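Two of these intuitions can be turned into simple, testable heuristics: how concentrated a profile’s output is on known tropes, and how often it repeats near-identical text. The sketch below is illustrative only; the trope list and example posts are invented, and real detection would need far more robust features.

```python
def shingles(text, k=3):
    """Set of overlapping word k-grams in a post."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Overlap of word 3-grams between two posts: near-duplicate,
    copy-pasted comments score close to 1.0."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def hate_concentration(posts, tropes):
    """Fraction of a profile's posts mentioning a known trope.
    Real accounts mix in everyday content; single-purpose profiles
    score near 1.0. The trope list is an invented toy example."""
    hits = sum(any(t in p.lower() for t in tropes) for p in posts)
    return hits / len(posts)

TROPES = ["sharia", "taking over", "condemn"]  # illustrative only

troll = ["sharia law is taking over",
         "why will they not condemn it",
         "sharia law is taking over britain"]
human = ["Lovely walk with the dog today",
         "Why don't they condemn it?",
         "Match was great last night"]

print(hate_concentration(troll, TROPES))   # 1.0: every post is on-trope
print(hate_concentration(human, TROPES))   # only one post matches
print(round(jaccard(troll[0], troll[2]), 2))  # near-duplicate posts
```

Neither signal alone proves anything, but a profile scoring high on both, plus signals like posting cadence and non-native phrasing, is exactly the pattern the questions above describe.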

Let’s consider this from a public relations perspective. What would be the most emotive targets for terrorist attacks? Of course children, the most innocent of targets. That would be certain to generate the most intense possible public reaction and outcry, as it did in Manchester. Timing is also important, so the attack needs to be tied to current events. In the UK that could be the upcoming general election and the start of Ramadan. Tensions between different groups in British society are already running high, thanks to the Westminster attack a few weeks earlier. Although far fewer people died than in Manchester, the public was further primed for holding negative and hateful views about Muslims in general.

Once the shocking event has been carried out, the next step is to implement the social media strategy. And this is where tools like troll farms and bots become very useful. They, or their operators, wade into the comments sections of relevant articles on social media and/or the mainstream press (probably finding these articles through brand discovery tools or Google Alerts). They then post repetitive comments, memes and supposed ‘debunking’ videos designed to show everyone the so-called true nature of the ‘barbaric’ Islam. Over the long term, this causes an overall shift in perceptions of Islam in the minds of real humans reading these comments and soaking up their negative sentiment.

In this social media age it’s easy to build long-term, wide-ranging and multifaceted PR and content strategies to influence entire swathes of populations. From fake videos and doctored images, to armies of commenters posting fake opinions on social media, it all creates a climate of fear and uses it to shift perceptions of certain groups. This is perfectly plausible. I aim to explore the practical aspects of how this is achieved, in order to devise ways to neutralise it. Perhaps we can somehow prepare the minds and world views of young people so they face the digital world with more critical eyes. We should educate them about the presence of trolls and bots and show them how to spot real human opinions.