Troll farms and terrorism

In the wake of the Manchester attack, I noticed many angry, hateful and Islamophobic comments on Facebook. Many came from profiles that didn’t look human. On closer examination, these profiles seemed designed to spew hate speech, mainly about Islam and Muslims. Once the profiles are set up, their owners insert hateful remarks into relevant comment threads, seeking reactions from the general public – from the genuine, ‘organic’ Facebook users.

As well as eliciting angry reactions, these comments amplify hate and division in society. They change people’s perceptions, making them feel surrounded by fear and hate. The same themes pop up in comment sections again and again, often recycling identical tropes about Islam (e.g. that burkas are taking over, that sharia law will engulf the UK, paedophilia slurs about the Prophet Muhammad, claims that all Muslims sympathise with Isis or hate the West, demands to know why Muslims don’t condemn Isis, and so on).

Such repetition may plant ideas in the minds of susceptible individuals. The bandwagon effect is at work: people read Facebook comment sections, see a lot of hateful remarks about Islam and tend to agree without question. In the wake of a dreadful terrorist attack, people may feel justified in doing so. Hence the seeds of fear and hatred of the ‘other’ are sown within a society already made vulnerable by the aftershock of an attack, one made all the more devastating because it targeted children and young people.

The question is, what are these commenters trying to achieve? They may be bots, or real people managing an army of profiles simultaneously, using persona management software of some kind. Many more questions follow. Who is managing them? Who is funding them? What is their political goal? And what measures can we take to inoculate people, especially the young, against absorbing these comments without taking a critical approach? A fearful populace is so easy to manipulate.

The fact that children were killed further discourages any critical stance. It leaves anyone publicly trying to understand the drivers behind the attack open to accusations of ‘sympathising’ with the attacker. That’s never a pleasant position to be in. It can mean that some who would otherwise have spoken out in support of a more nuanced and critical response to this attack are afraid to do so, because they don’t want to be accused of sympathising with individuals or organisations who would murder children.

The Manchester incident has ratcheted up the stakes. Whoever is influencing this sideshow must be satisfied with the political and social response to the Manchester attack. It’s not far-fetched to surmise that these attacks and their responses could form part of a wider strategy. We need to ask ourselves questions such as: How does a terrorist group begin? What is it responding to? What are its end goals? Could the idea of Isis supporters wanting to build a caliphate be purely part of the fantasy narrative put forward by or on behalf of the group?

Perhaps the people who pull the Isis strings don’t care if a caliphate ever gets established. Instead the group is just a tool, a powerful and dangerous one. Its actions and its mythology tell a brutal yet compelling story. The story is manipulated by cynical strategists to effect political change on a global scale, by playing on negative perceptions and people’s deep-rooted suspicions of the ‘other’, in this case Muslims. Making people band together with the familiar is easy in times of fear. It’s simply a self-protection mechanism, part of human nature to exclude the stranger in order to protect the known.

It would be useful to have an easy way of telling when a profile has been set up purely for the sake of propaganda, and when it’s simply owned by a real and hateful human being. Even people who hold racist, Islamophobic and antisemitic views are unlikely to spend the majority of their time on social media mouthing off about them. They would surely do other things on their profiles too, like share photos of family members, holidays, pets, work, life and so on. Even racists have normal lives. But for the trolls and bots, hate is the centre of their identity. Their social profiles reflect this, even when they try to appear ‘human’.
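If I were to sketch this in code, one crude starting point would be a topic-concentration heuristic: what fraction of a profile’s posts touch the hate topic at all? The Python snippet below is a minimal sketch of that idea; the keyword list, the 0.8 threshold and the shape of the post data are my own assumptions for illustration, not a tested classifier.

```python
# Minimal sketch: flag profiles whose activity is overwhelmingly
# concentrated on a single hate topic. The keyword list and the 0.8
# threshold are illustrative assumptions, not validated values.

HATE_TOPIC_KEYWORDS = {"sharia", "burka", "caliphate", "isis"}  # hypothetical list

def topic_concentration(posts):
    """Return the fraction of posts mentioning any topic keyword."""
    if not posts:
        return 0.0
    hits = sum(
        1 for post in posts
        if any(kw in post.lower() for kw in HATE_TOPIC_KEYWORDS)
    )
    return hits / len(posts)

def looks_single_purpose(posts, threshold=0.8):
    """A real person's feed usually mixes in family, holidays, pets and
    work; a profile where nearly every post is on-topic is suspicious."""
    return topic_concentration(posts) >= threshold

# Example: a profile with no 'normal life' content at all
sample = ["Sharia law will engulf the UK!", "Burkas everywhere...", "Isis sympathisers!"]
print(looks_single_purpose(sample))  # True
```

A real detector would need far more than keywords, of course, but even this crude ratio captures the intuition that genuine people post about their lives, while single-purpose profiles do not.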

They have no reason to exist except to spread hate. I’m convinced these profiles all share a set of common features, with elements that don’t quite ring true, perhaps related to the language they use in their comments. Are there patterns in the choice of words? Do they tend to repeat the same tropes about Islam again and again? What are these tropes? Is the language that of a native English speaker? Or are there certain idiosyncrasies that point to a shared origin?
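One first pass at the repetition question would be to count how many distinct accounts use the same word-for-word phrases: identical tropes recurring across supposedly independent commenters could hint at a shared script. The sketch below is purely illustrative; the four-word window, the minimum-author threshold and the sample data are assumptions chosen for demonstration.

```python
# Illustrative sketch: surface phrases repeated verbatim across
# supposedly independent accounts. Window size and thresholds are
# assumptions, not tuned values.

import re

def ngrams(text, n=4):
    """Yield every run of n consecutive words in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    return zip(*(words[i:] for i in range(n)))

def repeated_tropes(comments_by_author, n=4, min_authors=3):
    """Return n-grams used by at least `min_authors` distinct accounts."""
    authors_per_ngram = {}
    for author, comments in comments_by_author.items():
        seen = set()
        for comment in comments:
            seen.update(ngrams(comment, n))
        for gram in seen:
            authors_per_ngram.setdefault(gram, set()).add(author)
    return {
        " ".join(gram): len(authors)
        for gram, authors in authors_per_ngram.items()
        if len(authors) >= min_authors
    }

# Example: three 'different' accounts posting the same scripted line
data = {
    "acct1": ["sharia law will engulf the uk"],
    "acct2": ["mark my words, sharia law will engulf the uk"],
    "acct3": ["sharia law will engulf the uk, wake up"],
}
print(repeated_tropes(data))
# {'sharia law will engulf': 3, 'law will engulf the': 3,
#  'will engulf the uk': 3} (key order may vary)
```

The other questions – non-native idiosyncrasies, shared origins – would need proper stylometry, but phrase reuse across accounts is cheap to measure and hard for a scripted operation to avoid.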

Let’s consider this from a public relations perspective. What would be the most emotive targets for terrorist attacks? Children, of course: the most innocent of targets. That would be certain to generate the most intense public reaction and outcry, as it did in Manchester. Timing also matters, so the attack needs to be tied to current events; in the UK that could mean the upcoming general election and the start of Ramadan. Tensions between different groups in British society are already running high, thanks to the Westminster attack a few weeks earlier. Although far fewer people died there than in Manchester, that attack further primed the public to hold negative and hateful views about Muslims in general.

Once the shocking event has been carried out, the next step is to implement the social media strategy. This is where troll farms and bots become very useful. They, or their operators, wade into the comments sections of relevant articles on social media and in the mainstream press (probably finding these articles through brand discovery tools or Google Alerts). They then post repetitive comments, memes and supposed ‘debunking’ videos designed to show everyone the so-called true nature of ‘barbaric’ Islam. Over the long term, this causes an overall shift in perceptions of Islam in the minds of the real humans reading these comments and soaking up their negative sentiment.

In this social media age it’s easy to build long-term, wide-ranging and multifaceted PR and content strategies to influence entire swathes of populations. From fake videos and doctored images to armies of commenters posting fake opinions on social media, it all creates a climate of fear and uses it to shift perceptions of certain groups. This is perfectly plausible. I aim to explore the practical aspects of how this is achieved, in order to devise ways to neutralise it. Perhaps we can prepare the minds and worldviews of young people so they face the digital world with more critical eyes. We should educate them about the presence of trolls and bots and show them how to spot real human opinions.
