Author: sjnorth

Reflections: Terrorism and Social Media Conference 2017

Last week, in a sleepy Welsh city by the sea, a group of social media and terrorism researchers came together to discuss the latest challenges in the field.

I learned a lot, met people doing admirable work and came away inspired with ideas to shape my own research in the future. This post is a short synopsis of topics from the conference that struck me as important, interesting and/or particularly thought-provoking.

The visual web

Maura Conway’s opening keynote was peppered with mentions of the visual web – and its importance in the study of terrorist and extremist activity. All extremist groups have a visual profile, and many use images as a central feature of their propaganda and recruiting efforts.

One look at the ISIS propaganda magazine, Dabiq, proves this point. And it’s not only about images, but also video, which terrorist groups have used for decades, from the grainy, muffled bin Laden recordings all the way through to the glossy ISIS productions. Far-right groups use images too – from the notorious Pepe the Frog to a range of logos featuring swords, swastikas and national flags.

The ‘post-truth’, digital era has ushered in a trend for using images as part of disinformation efforts, driving so-called ‘fake news’. A recent example springs to mind from the March 2017 Westminster attack. In the swirling social media aftermath of Khalid Masood’s attack there emerged a photo of a Muslim woman wearing a hijab, walking past victims on Westminster bridge, engrossed in her phone.

The image was quickly hijacked, attached to numerous false claims attacking the unknown woman for her apparent ‘disdain’ for the injured victims. These claims spawned thousands of comments where people released their Islamophobic feelings to the full, feeding into the milieu of anti-Muslim sentiment that presently hangs over society.

Of course, the truth was very different. The woman had been messaging friends and family to let them know she was safe after the attack. But despite the truth emerging, the damage had already been done. Social perceptions of Muslims as ‘bad’ had been further reinforced.

Returning to Prof Conway’s keynote: she highlighted the ‘strong signalling function’ of images, which makes them critical subjects for further analysis. Yet most terrorism analysts still focus primarily on text, because the analysis of images is more challenging. Visual analytics tools and techniques do exist, both qualitative and quantitative, with big data research on images being especially popular in communication science at the moment.

In short: we need to pay more attention to the visual nature of the internet – and pick the ‘low-hanging fruit’ that visual analytics offers the study of extremism.

The far-right

TASM didn’t focus only on the Islam-related side of extremism, but showcased a balanced view across the spectrum, with plenty of emphasis on research into the far-right. I attended several interesting panel talks on this subject, and came away with a number of key points.

One piece of research compared Britain First with Reclaim Australia, aiming to draw out the nuances within the umbrella term ‘far-right’. The methodology involved corpus-assisted discourse studies (CADS) on a static dataset of text that Britain First and Reclaim Australia supporters had posted on social media over a three-month period.

The researchers used a social media insights tool, Blurrt, to gather raw data, then used Python scripts to sort it into a workable format before finally analysing using CADS. In particular, they focused on collocations to reveal telling patterns in ideas and sentiments across the two groups.
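The talk didn’t share the researchers’ actual scripts, but the collocation step can be sketched with the Python standard library alone. This is a minimal, hypothetical example that scores adjacent word pairs by pointwise mutual information (PMI), one common collocation measure; the tiny sample corpus is invented for illustration:

```python
import math
from collections import Counter

def top_collocations(text, min_count=2, top_n=5):
    """Rank adjacent word pairs by pointwise mutual information (PMI).

    PMI is high when two words co-occur far more often than their
    individual frequencies would predict - a rough collocation signal.
    """
    tokens = text.lower().split()
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    total = len(tokens)
    scored = []
    for (w1, w2), n in bigrams.items():
        if n < min_count:  # ignore one-off pairings
            continue
        pmi = math.log2((n / total) /
                        ((unigrams[w1] / total) * (unigrams[w2] / total)))
        scored.append(((w1, w2), pmi))
    return sorted(scored, key=lambda item: -item[1])[:top_n]

# Invented mini-corpus echoing the themes discussed in the panel
corpus = ("our culture is under threat our culture must be defended "
          "they invade our culture and our way of life under threat")
print(top_collocations(corpus, min_count=2))
```

On a real dataset the tokeniser, frequency thresholds and window size would all need more care, but the principle – surfacing recurring pairings like ‘our culture’ or ‘under threat’ – is the same one the researchers used to reveal patterns across the two groups.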

Findings included a strong pattern of ‘othering’ – the core ‘us versus them’ narrative (which is a common theme not just among far-right discourse but also in some mainstream media and foreign policy: e.g. the Iraq war – ‘Axis of Evil’).

It was unsurprising therefore to find that Muslims and immigrants were particularly targeted. In what appears to be an extension of the ‘us versus them’ theme, ‘metaphors of invasion’ were often found in the discourse of both groups.

Other common themes included mentions of ‘our women’, ‘our religion’ and ‘our culture’ as being under threat from the ‘invaders’. All these themes feel very masculine. It could be interesting to reflect on the proportion of these sentiments that come from male authors; and could also be worth analysing what far-right discourse looks like from a female perspective.

In general, researchers concluded that far-right propaganda is less ‘overtly’ violent than that of ISIS, and is mainly rooted in nationalistic tendencies. This raises many questions. Is this how the far-right have managed to fly ‘under the radar’ for so long? Are they seen as being defensive rather than offensive? (And hence the ‘good guys’ on some level).

Could that be a factor in the much-discussed media under-reporting of far-right crimes, while focusing almost hysterically on those perpetrated by jihadists? Or, are ISIS and similar viewed as ‘worse’ simply because they are more ‘other’ (i.e. racism)?

Resonant narratives

Just as in commercial marketing, narratives work best when they intersect with individual agency and contexts. In his panel talk, Dr Akil Awan pointed out that CVE campaigns must not neglect the real-world issues that allow extremist narratives to resonate in the first place.

So how do ISIS narratives achieve success? They play on themes of belonging and identity; important for people experiencing ‘dual culture alterity’, i.e. feeling alienated from both their parents’ culture and the culture of their country of upbringing. In these cases, a return to fundamentalism becomes an ‘anchor’; a default setting of identity in a sea of alienation.

Awan highlighted the disparity between perceptions and reality around the true numbers of Muslims living in European countries. The media drives much of this misperception; making people feel ‘under siege’, creating fear, driving societies apart and destroying any sense of cohesion. In such a milieu, it is easy for ISIS to ‘eliminate the grey zone’ by means of terrorist acts. The media has already primed society for ISIS to succeed.

Understanding perceptions is as important as understanding reality; because how people perceive something will guide their course of action in response to it. Current CVE campaigns (based around tools such as counter-narrative videos) are cheap to implement and make it look like governments are taking action.

But recognising the ‘lived experience’ of minority groups is one of the keys to successful CVE efforts; neglecting to do so is hypocritical and unlikely to be effective.

Conclusion

In closing, we heard from the arbiter of all this – Facebook. Dr Erin Saltman explained the tools Facebook uses to tackle the online side of extremism and terrorism. These tools include a database of extremist propaganda images that relies on machine learning to match images as they surface, and automatically remove them.

But machine learning has its limitations, and humans are still required to take into account context and nuance. At present, the two work in tandem to surface the content (machine learning) and then interpret it as needed (humans).
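Facebook has not published the internals of its matching database, but a common general approach to re-identifying known images is perceptual hashing: a compact fingerprint that stays stable under small edits, so re-uploads land within a small Hamming distance of the stored hash. The sketch below illustrates the idea with a difference hash over tiny, pre-greyscaled pixel grids (real systems would first decode and downscale the image; the 4×4 grids here are invented):

```python
def dhash_bits(pixels):
    """Difference hash: for each row, record whether each pixel is
    brighter than its right-hand neighbour. Returns a tuple of 0/1 bits."""
    return tuple(int(row[i] > row[i + 1])
                 for row in pixels
                 for i in range(len(row) - 1))

def hamming(a, b):
    """Number of bit positions where two hashes differ."""
    return sum(x != y for x, y in zip(a, b))

# Two toy 4x4 "images": the second is the first with slight brightness noise,
# standing in for a re-uploaded copy of a known propaganda image
img_a = [[10, 20, 30, 40], [40, 30, 20, 10], [5, 50, 5, 50], [9, 9, 9, 9]]
img_b = [[12, 21, 33, 41], [41, 29, 22, 11], [6, 52, 4, 51], [8, 10, 8, 10]]

h_a, h_b = dhash_bits(img_a), dhash_bits(img_b)
# Near-duplicates hash close together; a small Hamming distance flags a match
print(hamming(h_a, h_b))
```

Hash matching like this only catches copies and near-copies of known material – which is exactly why, as Saltman noted, humans are still needed for context and for content the database has never seen.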

Other tools include Facebook Insights, which is commonly used in commercial marketing but can also be leveraged to guide counter-speech initiatives and enable a precise reading of audiences.

The age of social media, although still in its infancy, has already had profound impact on politics and society – as well as the individual psychology of internet users. The long-term effects are unknown, with many changes no doubt still on the way.


Troll farms and terrorism

In the wake of the Manchester attack, I noticed many angry, hateful and Islamophobic comments on Facebook. Many came from profiles that didn’t look human. On closer examination these profiles seemed designed to spew hate speech, mainly about Islam and Muslims. Once set up, the owners of these profiles insert their hateful remarks into relevant comment threads, seeking reactions from the general public – from the genuine ‘organic’ Facebook users.

As well as eliciting angry reactions these comments amplify hate and division in society. They change people’s perceptions, making them feel surrounded by fear and hate. Repeated themes pop up in comment sections, often including the same tropes about Islam (e.g. burkas are taking over, sharia law will engulf the UK, paedophile comments about the Prophet Muhammad, all Muslims sympathise with Isis, all Muslims hate the West, why don’t Muslims condemn Isis, etc).

Such repetition may plant ideas in the minds of susceptible individuals. The bandwagon effect is at work. People read Facebook comments sections, see a lot of hateful remarks about Islam and tend to agree without critique. In the wake of a dreadful terrorist attack, people may feel justified in doing this. Hence the seeds of fear and hate of the ‘other’ are sown within a society already made vulnerable by the aftershock of an attack, even more effective because it targeted children and young people.

The question is, what are these commenters trying to achieve? They may be bots, or real people managing an army of profiles at the same time, using persona management software of some kind. So many more questions emerge from this. Who is managing them? Who is funding them? What is their political goal? And what measures can we take to inoculate people, especially the young, against absorbing these comments without taking a critical approach? A fearful populace is so easy to manipulate.

The fact that children were killed further discourages any critical stance. It leaves the person publicly trying to understand the drivers behind the attack open to accusations of ‘sympathising’ with the attacker. That’s never a pleasant position to be in. It can mean that some who would otherwise have spoken out in support of a more nuanced and critical response to this attack are afraid to do so, because they don’t want to be accused of sympathising with individuals or organisations who would murder children.

The Manchester incident has ratcheted up the stakes. Whoever is influencing this sideshow must be satisfied with the political and social response to the Manchester attack. It’s not far-fetched to surmise that these attacks and their responses could form part of a wider strategy. We need to ask ourselves questions such as: How does a terrorist group begin? What is it responding to? What are its end goals? Could the idea of Isis supporters wanting to build a caliphate be purely part of the fantasy narrative put forward by or on behalf of the group?

Perhaps the people who pull the Isis strings don’t care if a caliphate ever gets established. Instead the group is just a tool, a powerful and dangerous one. Its actions and its mythology tell a brutal yet compelling story. The story is manipulated by cynical strategists to effect political change on a global scale, by playing on negative perceptions and people’s deep rooted suspicions of the ‘other’, in this case Muslims. Making people band together with the familiar is easy in times of fear. It’s simply a self protection mechanism, part of human nature to exclude the stranger in order to protect the known.

It would be useful to have an easy way of telling when a profile has been set up purely for the sake of propaganda, and when it’s just owned by a real and hateful human being. Even people who hold racist, Islamophobic and anti-semitic views are unlikely to spend the majority of their time on social media mouthing off about these views. They would surely do other things on their profiles too, like share photos of family members, holidays, pets, work, life and so on. Even racists have normal lives. But for the trolls and bots, hate is the centre of their identity. Their social profiles reflect this, even when they try to appear ‘human’.

They have no other reason to exist except to spread hate. I’m convinced these profiles all share a set of common features, with elements that don’t quite ring true, perhaps related to the type of language they use to post the comments. Are there patterns in the choice of words? Do they tend to repeat the same tropes about Islam again and again? What are these tropes? Is the language that of a native English speaker? Or are there certain idiosyncrasies that point to a shared origin point?
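One simple, hypothetical way to probe the repetition question is to measure lexical overlap between comments posted by different profiles – near-identical wording from supposedly unrelated accounts is a crude but useful signal of a shared origin. A minimal sketch using Jaccard similarity over word sets (the profile names and comment text are invented):

```python
def jaccard(a, b):
    """Jaccard similarity between the word sets of two comments (0 to 1)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

# Invented comments: two near-duplicates and one unrelated post
comments = {
    "profile_1": "sharia law will engulf the uk wake up",
    "profile_2": "wake up sharia law will engulf the uk people",
    "profile_3": "lovely weather for the bank holiday weekend",
}

# Flag pairs of profiles posting near-identical text
pairs = [(p, q) for p in comments for q in comments
         if p < q and jaccard(comments[p], comments[q]) > 0.6]
print(pairs)
```

A serious study would need far more than word overlap – stylometry, posting-time patterns, network structure – but even this crude measure would surface the kind of copy-pasted tropes described above.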

Let’s consider this from a public relations perspective. What would be the most emotive targets for terrorist attacks? Of course children, the most innocent of targets. That would be certain to generate the most intense possible public reaction and outcry, as it did in Manchester. Timing is also important, so the attack needs to be tied to current events. In the UK that could be the upcoming general election and the start of Ramadan. Tensions between different groups in British society are already running high, thanks to the Westminster attack a few weeks earlier. Although far fewer people died than in Manchester, the public was further primed for holding negative and hateful views about Muslims in general.

Once the shocking event has been carried out, the next step is to implement the social media strategy. And this is where tools like troll farms and bots become very useful. They, or their operators, wade into the comments sections of relevant articles on social media and/or the mainstream press (probably finding these articles through brand discovery tools or Google Alerts). They then post repetitive comments, memes and supposed ‘debunking’ videos designed to show everyone the so-called true nature of the ‘barbaric’ Islam. Over the long term, this causes an overall shift in perceptions of Islam in the minds of real humans reading these comments and soaking up their negative sentiment.

In this social media age it’s easy to build long-term, wide-ranging and multifaceted PR and content strategies to influence entire swathes of populations. From fake videos and doctored images, to armies of commenters posting fake opinions on social media, it all creates a climate of fear and uses it to shift perceptions of certain groups. This is perfectly plausible. I aim to explore the practical aspects of how this is achieved, in order to devise ways to neutralise it. Perhaps we can somehow prepare the minds and world views of young people so they face the digital world with more critical eyes. We should educate them about the presence of trolls and bots and show them how to spot real human opinions.

Who’s winning on the digital battlefield?

On the eve of the French presidential elections, there’s a sudden flurry of activity on social media. A candidate’s name – #Macron – is trending on Twitter. So what’s the news? A large stash of Emmanuel Macron’s private emails has been hacked and leaked online.

Sound familiar?

That’s because it’s happened before. You probably remember last year’s debacle about Hillary Clinton’s leaked emails. This more than likely contributed to her losing the election to Donald Trump. If nothing else, it created an air of public suspicion around Clinton that did irreparable damage to her reputation. I still think back to that hacking event and recall it as a haze of rumours and misinformation; I was never totally clear what the core of the issue really was.

And in light of this latest development with France, I begin to wonder if confusion is actually the goal in all this. Perhaps we give whoever is behind this too much credit by assuming they’re actually pulling the strings of public opinion. What would be easier, and perhaps just as damaging, would be simply to sow the seeds of mistrust. With everyone at each other’s throats, arguing bitterly about what is and isn’t ‘fake news’, there’s room for the malevolent forces to continue their underhand work of sabotaging democracy. When journalists digging deep to report the truth on something can so easily have their work discredited as ‘fake news’ by none other than the US president himself, we really are veering into a disturbing new reality.

Who is actually responsible for this mischief? Sources point to a Russian hacking group known, among a variety of other names, as “Fancy Bear”. It’s the same group said to be responsible for hacking Hillary Clinton’s emails last year. “The key goals and objectives of the campaign appear to be to undermine Macron’s presidential candidacy and cast doubt on the democratic electoral process in general,” said Vitali Kremez, director of research at Flashpoint, a business risk intelligence company in New York, in an interview with the New York Times.

We should not underestimate the abilities of Russia in this arena. Dmitri Alperovitch, of CrowdStrike, told the MIT Technology Review that Russia ‘gets the true nature of the battlefield’ in a way the West does not. “They’ve been thinking about this for a very long time,” he said. “It actually goes at least as far back as the Tsarist era in the 1860s, when they created one of the first modern intelligence agencies, the Okhranka.” So Russia has been doing this sort of thing for well over a century, but the rise of digital offers the perfect new landscape for even deeper subterfuge.

But there’s one ray of hope, and that’s in how the French media has responded to the Macron email leak so far: by not reporting on its contents. This seems a smart move. French law requires candidates to stop campaigning from midnight on Friday until the polls close at 8pm on Sunday; during that window, candidates are forbidden to give media interviews or issue statements. The timing of the email hack was likely designed to coincide with this, in an attempt to release the emails while Macron was unable to respond.

But denying the fake news trolls the oxygen of media publicity cuts the head off the snake, removing much of its potential to harm. The same goes for terrorist incidents. ‘Propaganda of the deed’, as terrorism was once known, relies on shock and awe to achieve its ends. In an ‘always-on’ digital society this effect is massively amplified, and completely fake incidents can even be instigated, by anyone, anywhere. If the media had denied the ‘oxygen of publicity’ to groups like Isis from the very beginning, the world might be less messy today.

The emergence of Isis fuelled the rise of the far-right, giving white supremacists and ultraconservatives the opportunity to rise up and gain power under the guise of ‘protecting’ the nation from threat. Of course that ‘threat’ is constantly portrayed as emanating from Islam and Muslims. And so the cycle continues. But the example of France is certainly a promising one. The election outcome will reveal if it actually worked. Perhaps going forward, these issues could be mitigated by a more scrupulous mainstream media, one that’s less desperate for ‘clicks’ to ensure its survival, along with citizen journalism collectives such as Bellingcat, to shed light on old issues and reveal new cracks in existing narratives.

Images of Islam: How fake news drives public opinion about Muslims

“During times of universal deceit, telling the truth becomes a revolutionary act”

– George Orwell

Through misinformation to political change

Misinformation about Muslims (including refugees, immigrants and ordinary citizens) has been used to construct harmful narratives, reinforce existing Islamophobia and, arguably, create a fertile environment for enacting profound social and political change. Although the role of Islamophobia in the media has been explored in depth, less research has been done into social media, especially the role of social media images. These images are highly susceptible to manipulation when taken out of context, or attached to wholly fake news. They can be used to promulgate certain political agendas, heighten divisions in society, and cause actual harm to vulnerable groups.

In March 2017, shortly after Khalid Masood attacked Westminster, a certain image began circulating on social media. It depicted a young woman wearing a hijab. She had a distressed expression, clutching her phone as she walked across Westminster bridge. Behind her, one of Masood’s victims lay on the pavement, surrounded by concerned passers-by. The image went viral across Twitter, Facebook and a range of anti-Islam blogs.

The intense discussion it generated focused on the recurring narrative of the woman showing ‘disdain’, which degenerated into intense Islamophobic stereotyping. Some users implied that the woman had sympathies with the Westminster attacker. Others posted tweets and comments in her defence but the damage had already been done. It seemed as if yet another layer had been added to long-standing negative public perceptions of Islam and Muslims.

Going viral: Islamophobia online

There are many studies on the portrayals of Islam and Muslims in the print and television media. However, the social media realm remains relatively under-studied in this regard, given its size, prevalence and increasing influence in shaping social attitudes. Research suggests that negative attitudes towards Islam and Muslims remain frequent in the social media context (Törnberg and Törnberg, 2016).

Users commonly perceived Muslims as ‘a homogeneous out-group embroiled in conflict, violence and extremism’ (ibid., p.133). In general, Muslims and Islam are regularly portrayed in the UK media in a variety of negative contexts, such as being a ‘threat to security’, a threat to ‘our way of life’, and positioned as being perpetually in conflict with mainstream ‘British values’ (Moore et al, 2008).

Moreover, media coverage of Muslims seemed to have ‘gained its own momentum’ (ibid., p. 10) over time, starting with 9/11 and growing since then. It suggests the presence of innate Islamophobia, where journalists do not question the dominant narratives of Muslims but simply perpetuate them. In terms of images, the research found that the most dominant images were those of lone Muslim males, particularly in police mugshots, suggesting associations with terrorism, extremism or deviance (for example, the hook-handed cleric Abu Hamza was frequently featured).

Subjects of images were commonly located outside police stations or law courts. In addition, Muslims depicted in media images were most often found in prayer, preaching, or in protest groups. This indicated a primary focus on themes either of terrorism or of cultural/religious differences, among the majority of media images depicting Muslims (ibid., p. 28).

Poole (2002) reinforces these findings, claiming that topics commonly associated with Muslims and Islam have included ‘terrorism’, politics and reactions to the war in Iraq. News coverage tends to link Muslims to global events, creating strong associations between Islam and situations of conflict and violence. Where there is domestic coverage, it tends to highlight situations that ‘focus on social tensions’, ‘raise questions over loyalty and belonging’ and consistently cast Muslims as being in opposition to ‘traditional British values’. Deep-rooted perceptions of Muslims as ‘the other’, combined with suspicion around their loyalty to ‘British values’, and ‘our’ way of life, could ignite feelings that spur users to believe negative fake news in relation to Muslims, and share images and stories that support their beliefs without engaging in critical analysis.

In recent years a fresh series of newsworthy incidents related to Muslims, such as the murder of Lee Rigby, various ISIS atrocities, and attacks in Paris, Brussels, Nice, Berlin and London, have encouraged members of the public to express their opinions about Islam and Muslims on social media. People have also become more likely to share images associated with Muslims without stopping to check their veracity.

Muddying the narrative: fake news 

This has led to the spread of a number of ‘fake news’ stories, where images of Muslims have been linked with certain news events, taken out of context, or used to create entirely fake news. In these cases, it is not merely the mainstream media to blame. Members of the public can easily create and share fake news, contributing to reinforcing negative perceptions of Muslims and perpetuating an ongoing narrative that has the potential to do harm.

Online narratives can also cause harm in the real world, as explored by Imran Awan (2016) in his research linking a rise in hate crimes against Muslims to a corresponding rise in Islamophobic content being shared on Twitter and Facebook. Awan argues that a large proportion of comments posted on social media about Muslims possessed an ‘extremist and incendiary undertone’. He offers a typology of Facebook Islamophobia in an attempt to explain what compels people to post and share such content (ibid., p. 8).

None of the aforementioned studies, however, specifically explore the role of images on social media. Images can be powerful, even more so than words, and they dominate the contemporary media landscape. Digital technology has opened up new avenues for ordinary users to create, edit, and doctor their own images. Images on social media therefore play an important role in constructing fake news and driving the narratives that fulfil political agendas. This is a significant area worthy of further exploration.

Bored of selective outrage

When I first heard the news about yesterday’s incident at Orly Airport in Paris, my first thought was ‘I hope he’s not Muslim’.

I’m not Muslim. So why do I care?

Because I’m bored of the constant stream of outrage. I dislike how people on Twitter and Facebook revel in lambasting all Muslims because of one more incident that supposedly ‘proves their point’ – that all Muslims are part of some dastardly global terrorist plot. It feels as if these commenters are watching and waiting, ready to pounce with their vitriol as soon as news of an incident breaks, no matter how minor.

I can’t subscribe to any view of Muslims being innately prone to terrorism because of their religion. Part of that inability comes from my being a reasonably intelligent and cosmopolitan human, but otherwise it’s because this outrage is selective, misdirected and full of holes – and that bothers me.

In fact, it infuriates me.

Can’t mentally ill people and criminals be Muslims too? Or are those categories reserved for white people? Can’t the two things simply coexist, rather than religion being treated as the sole defining factor?

Why is it always the same old story every single time? Couldn’t a random guy, prone to criminality, perhaps suffering from mental illness (but who just happens to be from a Muslim background, with a Muslim name and brown skin) one day decide to attack people in an airport?

Anyone can claim their deeds in the name of Allah. It doesn’t necessarily make them part of a wider terrorist plot. With the constant public hysteria surrounding ‘Islamic terrorism’, widely reported, it’s not hard to imagine someone latching onto it as support in their criminal mission, even if they aren’t religious at all. It’s part of the bandwagon effect.

Out of interest, I compared Google results for the Orly incident with the recent Canadian mosque shooting. In the former incident, only the perp died. Arguably, he brought it on himself by trying to grab a soldier’s weapon in an airport. And he was already an experienced criminal known to the police.

But in the Canadian incident, six innocent mosque-goers were murdered by a white man with a gun. He had no criminal background whatsoever. There would have been no way to predict the attack. I find that terrifying.

The first page of Google speaks for itself. The Orly would-be attacker, once his identity emerged, was immediately labelled ‘radicalised Muslim’, and his attack (which killed no-one except himself) described as being ‘treated as a possible act of terror’.

Whereas Alexandre Bissonnette, the mosque killer, is described as a ‘student’, a ‘suspect’, and perhaps, in one of the more critical headlines, a ‘mosque shooter’. Even worse, an innocent witness of Moroccan background was mistakenly thought to be the shooter, before the real one was identified.

Al Jazeera describes Bissonnette as ‘a French-Canadian university student known for his far-right views’. The word ‘terrorist’ is suspiciously absent from most of the coverage.

Yet this guy murdered six people because of their religion. He was driven by extreme right-wing, white supremacist political views. He killed for the sake of those views. Is that not the very definition of terrorism?

Terrorism has been around for a long time. It was around in the French Revolution, where Robespierre and the Jacobins conducted their ‘Reign of Terror’.

It was around in imperial Russia, where disgruntled students became ‘radicalised’ and committed public attacks – ‘propaganda of the deed’ – designed to create fear and bring down the ruling regime.

And terrorism was very much around in the late 20th century when the Irish Republican Army conducted attacks across the UK. These groups are only a few of the thousands of armed resistance movements that have existed throughout history; there are many more, spanning a wide range of cultural contexts.

They have cropped up wherever inequality and injustice are found, in a misguided attempt to somehow ‘redress the balance’ and achieve a ‘better life’ for themselves via the cause they represent, despite the warped methods they choose to get there.

The point is: terrorism is not specific to any religion, race, nationality or culture. We need to stop treating it as such.

It would help if we could put terrorism into perspective. But that will remain difficult unless the language we absorb from the media every day is adjusted to portray a more balanced picture of what’s really going on in the world. Although in this era of clickbait for revenue, perhaps that’s just a futile dream.

Nevertheless, we can do our part to help by thinking critically beyond the headlines and continuing to call out selective outrage whenever and wherever it occurs.