Author: sjnorth

Imagined communities: The alt-left on Twitter

In January, I joined a team of researchers in Amsterdam to explore the online communities of ‘Alt-left’ and ‘Antifa’ using data from Twitter and Reddit. This was one of many projects all taking place as part of the Digital Methods Initiative Winter School, organised by the University of Amsterdam.

In particular, we wanted to use the data to find out whether the so-called ‘alt-left’ represents a genuine community, or whether it is simply a construct of the alt-right in the latter’s attempts to promote a false equivalence, as has been claimed. We were also curious to establish whether the alt-left overlapped with Antifa online, or whether they were two distinct communities.

We arrived at the following research questions. This blog post will focus on the first two only.
1. Does the ‘Alt Left’ exist (to Antifa)?
2. Could the ‘Alt Left’ be considered an Alt Right construct?
3. Is the ‘Alt Left’ as a notion mainstreaming?
4. How did the ‘Alt Left’ emerge, and how can its identity be characterised (semantically)?

For the first question, we collected the top 20 hashtags associated with the ‘alt-left’ and ‘Antifa’ keywords respectively. We found very little overlap between the two groups. Notably, the discourse around the alt-left (shown in green below) contained more words that could be considered ‘mainstream political dialogue’, while the ‘Antifa’ discourse seemed more fringe.

The latter’s hashtags contained references to lesser-known terms such as ‘democratTerrorism’, along with the names of some Japanese Antifa groups. From this data, we could reasonably conclude that the alt-left and Antifa are separate, distinct discourses.
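
As a rough illustration, the overlap measurement can be sketched in a few lines of Python. This is a minimal sketch rather than our exact pipeline: it assumes the tweets for each keyword query have been exported to plain-text files (the file names are hypothetical), and it measures overlap with a simple Jaccard index.

```python
import re
from collections import Counter

HASHTAG = re.compile(r"#(\w+)")

def top_hashtags(path, n=20):
    """Return the n most frequent hashtags in a file of tweets (one per line)."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for tweet in f:
            counts.update(tag.lower() for tag in HASHTAG.findall(tweet))
    return {tag for tag, _ in counts.most_common(n)}

# Hypothetical exports, one file per keyword query.
altleft = top_hashtags("altleft_tweets.txt")
antifa = top_hashtags("antifa_tweets.txt")

shared = altleft & antifa
print("Shared hashtags:", sorted(shared))
print(f"Jaccard overlap: {len(shared) / len(altleft | antifa):.2f}")
```

A Jaccard score near zero, as we found, indicates that the two hashtag vocabularies barely intersect.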

The term ‘alt-left’ arrived in the mainstream after Donald Trump used it when talking about the violence around the ‘Unite the Right’ rally in the US city of Charlottesville. He condemned groups on both sides of the violence, saying: “What about the alt-left that came charging at, as you say, at the alt-right?”

Subsequently, there has been discussion in the mainstream media about whether the alt-left really exists. It could simply be a construct of the far-right as it attempts to gain moral ground against its opposition, by creating a false equivalent that can be painted as equally violent, if not more so.

The next section of our research examined this question in more depth, using Twitter data to give a clearer picture. We compared the top 50 Twitter users for both the #altleft and #Antifa hashtags, by looking more closely at their bios and timelines.

Right-wing conservative Trump supporters dominated the alt-left group, while the Antifa group was populated by a strange mix of accounts posting eBay-related tweets and Japanese-language tweets promoting the ‘Antifa Handbook’.

A deeper dive, this time into the most influential hashtag users (taking follower count as a proxy for influence), produced illuminating results. For both keywords, the majority of influential users were pro-Trump conservatives, with the exception of one rather prolific Japanese bot.

We found three significant users who appeared under both terms, all of them pro-Trump. The common thread connecting the #altleft and #Antifa hashtag users appears to be their support for Trump, rather than specific alt-right features.
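
For readers who want to replicate this kind of user-level analysis, a minimal pandas sketch follows. The CSV layout and column names are hypothetical assumptions; the point is the two-step logic of ranking users by activity, then by follower count, and intersecting the two user sets.

```python
import pandas as pd

# Hypothetical export: one row per tweet, with the query keyword,
# the author's screen name, and their follower count at capture time.
tweets = pd.read_csv("hashtag_tweets.csv")  # columns: keyword, user, followers

def top_users(df, keyword, n=50):
    """Most active users for a keyword, with follower counts attached."""
    return (df[df["keyword"] == keyword]
            .groupby("user")
            .agg(tweet_count=("followers", "size"),   # rows per user
                 followers=("followers", "max"))
            .nlargest(n, "tweet_count"))

altleft_users = top_users(tweets, "altleft")
antifa_users = top_users(tweets, "antifa")

# Accounts active under both hashtags.
print("Overlap:", list(altleft_users.index.intersection(antifa_users.index)))

# 'Influence' here simply means follower count.
print(altleft_users.nlargest(10, "followers"))
```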

The Gephi graph below shows the different communities grouping around the Antifa keyword on Twitter. As can be seen, the biggest cluster represents American, right-wing Trump supporters.

We also created an equivalent graph for the alt-left hashtag communities. This graph is less consistent, but its most striking feature is the large purple cluster with one Twitter account, ‘kwilli1046’, at its centre, indicating that this user is extremely influential.

A screenshot of his profile is shown below. ‘Kevin W’ has around 88k followers and a bio link to a page on Gab, the alternative ‘free speech’ social networking platform where many of the far-right have ended up after being kicked off Twitter.
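
The graphs themselves were laid out in Gephi, but the underlying network can be assembled programmatically first. Below is a hedged sketch using networkx: it assumes a hypothetical edge list of retweets and mentions (one row per amplification) and exports a GEXF file that Gephi can open for layout and community detection.

```python
import networkx as nx
import pandas as pd

# Hypothetical edge list: one row per retweet or mention,
# linking the tweeting account to the account it amplified.
edges = pd.read_csv("altleft_edges.csv")  # columns: source, target

G = nx.DiGraph()
for row in edges.itertuples(index=False):
    if G.has_edge(row.source, row.target):
        G[row.source][row.target]["weight"] += 1
    else:
        G.add_edge(row.source, row.target, weight=1)

# Weighted in-degree is a rough proxy for influence within the network;
# a hub like 'kwilli1046' would top this list.
influence = sorted(G.in_degree(weight="weight"), key=lambda p: p[1], reverse=True)
print(influence[:10])

# Export for layout and community detection in Gephi.
nx.write_gexf(G, "altleft_network.gexf")
```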

In conclusion, we found that the ‘alt-left’ does not exist as a genuine community; it is an online construct, a term propagated mainly by pro-Trump accounts in their attempts to create a ‘faux-public’. With so many of these accounts being pro-Trump, the pattern could point to the use of state-sponsored bots and organised trolling to push the alt-left as a form of disinformation. That would be a worthwhile topic for building on the findings of this research.


Can ‘Online Surges’ Drive Long Term Attitude Change?

It comes as little surprise to learn that today’s wave of anti-Muslim online sentiment is being led by specific Islamophobic organisations and channelled through public figures such as Tommy Robinson and Pamela Geller. Over the last three years, a spate of incidents tied to Muslim perpetrators, including vehicle and knife attacks, has added fuel to the fire and, in the minds of some, justified their anti-Muslim viewpoints. Indeed, one often notices a sense of warped, self-righteous ‘public duty’ among online commenters who bash Muslims and link them ceaselessly to terrorism. After all, aren’t terrorists a danger to our society, public enemy number one? Surely it’s acceptable to point that out. Viewed through the lens of online news and social media, especially the unbridled comments sections, the casual observer may come to feel that the majority of Britons hate, fear, and dehumanise Muslims.

But that particular picture of public opinion could be misleading; the true makeup of this climate of hate may come as more of a surprise. A soon-to-be-published report, along with numerous bloggers and journalists, claims that many of the social media accounts spreading anti-Muslim sentiment online aren’t who they claim to be. Many aren’t even human, while others don’t represent ‘organic’ human opinions: these are the bots and paid sock-puppets. One is generated by algorithms; the other operates from a pre-existing messaging playbook. Attempts to engage them in dialogue often feel like arguing with a brick wall, or with an extremely resilient ideologue, impervious to reason of any kind. Oddly, people often describe ardent Trump supporters in the same way.

In terms of the climate around Muslims and Islam, these media manipulators use a range of tools to try to shift public opinion on a wide scale. A number of psychological devices come into play here, for example the bandwagon effect, in which people tend to do something simply because others are doing it, such as blindly adopting a popular opinion on a contentious social or political issue. It’s a device that has been used for decades in political campaigns and commercial advertising. When bots and sock-puppets masquerade as ordinary British and American citizens who hate Islam (with profiles cleverly crafted to appear so), they are relying on the bandwagon effect to encourage real citizens to adopt similar views. When it looks like so many people are talking badly about a certain group or person, it’s easy to assume the rumours might well be true. The bot armies also latch onto figures like Tommy Robinson to amplify their messages, adding their own whenever a relevant story breaks.

Unfortunately, terrorist attacks have become a critical asset in a giant influence-ops campaign. Just as PR stunts drive content marketing traffic in the commercial world, so attacks (or rumours of attacks) drive anti-Muslim ‘brand-building’ in the world of organised Islamophobia. It’s a symbiotic relationship that would merit further study. The prime goals of the campaign appear to be driving wedges into society, creating an atmosphere of fear, and turning groups against one another. Persuasion, whether by means of disinformation or fact, has attitude change as its end goal. A range of psychological theories purport to explain the processes behind it, but for now it’s more important to focus on effects. Whoever is masterminding these influence ops wants to shift the pendulum and create a new anti-Muslim normal in public opinion, and they are using every tool at their disposal to do so, including false amplification, echo chambers, and visual disinformation.

But is the campaign working? To find out, we need to measure subtle shifts in public sentiment over a span of years, and then find ways to tie them to anti-Islam messaging campaigns. Of course, results are likely to be skewed by certain factors. One is the mainstream British media, in particular the Express and the Daily Mail, where coverage of immigration, refugees and Islam often teeters on the brink of disinformation, especially in how topics are spun. Specific incidents, especially of visual disinformation, such as the Muslim woman at Westminster, could be used as starting points for tracking associated sentiment online. Fearful knee-jerk reactions to terrorist attacks are to be expected, but broader long-term shifts in sentiment are harder to track. What’s more, they are far more insidious, corroding society from the inside out.
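
One way to operationalise that kind of long-term tracking is to score an archive of keyword-matched posts with an off-the-shelf sentiment model and aggregate by month. The sketch below uses the VADER analyser; the file and column names are hypothetical, and a lexicon-based score is only a crude proxy for real attitudes.

```python
import pandas as pd
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# Hypothetical archive: timestamped posts matching Islam-related keywords.
posts = pd.read_csv("keyword_posts.csv", parse_dates=["created_at"])

# VADER's compound score runs from -1 (most negative) to +1 (most positive).
posts["sentiment"] = posts["text"].map(
    lambda t: analyzer.polarity_scores(str(t))["compound"])

# Monthly means smooth over attack-driven spikes; a sustained downward
# drift in the rolling average is the long-term shift we care about.
monthly = posts.set_index("created_at")["sentiment"].resample("MS").mean()
print(monthly.rolling(12, min_periods=1).mean())
```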

Disinformation, Influence Ops and the Humble Blog

Just a few days ago, Wired Magazine published an article revealing that the ongoing Russian disinformation campaign also involved a number of long-form blog posts published on Medium.

In this post, I want to explain why this seemingly new move from Russian info ops is not surprising at all, and in fact makes perfect sense within an overall content strategy. Blogs play a key role in an effective strategy: the blog sits at the heart, with content posted on Twitter, Facebook and other social platforms acting as ‘outposts’ that emanate from it.

Some of the Medium bloggers had tens of thousands of followers. They published content around predictable topics such as ‘race, science and politics, indoctrination on US college campuses’ – and that good old chestnut, ‘lies from Hillary Clinton’. The fake blogger identities were carefully crafted to appeal to their target audience, with bios claiming to be conservative, Christian, and family-oriented.

This approach mirrors the one used on Twitter by many bot and sock-puppet accounts; it is designed to instil a sense of community between the fake accounts and their Trump-supporting human target audiences. It’s an attempt to create an in-group, define its identity in opposition to the ‘out-group’, and build trust between its members. This makes the messages seem more believable in the target audience’s minds.

The most famous and prolific disinformation blogger of all is ‘Jenna Abrams’, the Kremlin-operated account that fooled Americans for a number of years. From 2014 onwards, the ‘freewheeling American blogger’ who just happened to push a return to segregation and decry ‘PC culture run amok’ had plenty of time to build a significant target audience. The account was perfectly placed to propel divisive messages into the mainstream media, becoming an influential voice just in time for Trump’s election in 2016.

How did this fake identity become so influential? Firstly, ‘Abrams’ touted a lot of controversial opinions. Pushing openly xenophobic and racist views riled up many journalists and celebrities on social media, which boosted Abrams’s messages and helped them gain traction and visibility. Many of her tweets and posts went viral, and viral content quite commonly reaches the attention of the mainstream media; this is what happened with Abrams. The account was featured in many of the biggest media outlets, which in turn fed back into her following, attracting more people to view, share and comment on her content.

Conventional marketing wisdom has always held up blogging as a highly effective way to build a personal or professional brand. During blogging’s early days, a whole host of gurus and advice sites sprang up, such as Problogger, Neil Patel and Chris Brogan. They taught people how to harness the power of online content to define their brand and attract better professional opportunities. Indeed, many Internet celebrities, such as Tim Ferriss, Gala Darling and Ash Ambirge of the Middle Finger Project, came into existence thanks to the power of blogging.

I did it myself, spending around four years building up a professional identity in the field of nation brand strategy by creating content about the topic and publishing it online. This, combined with a lot of social media outreach and networking, plus a bit of well-placed SEO, brought me a modest range of opportunities: an invitation to keynote at an international conference, an interview with CNN, and a number of consulting engagements based on my supposed expertise. That expertise was built solely through blogging.

My nation branding blog wasn’t pushing disinformation. But it is another case in point that shows just how easy it is to build a believable personal brand and gain an audience purely through online content creation. With this in mind, it doesn’t surprise me in the least that Russia’s expert info-ops strategists are using blogging as part of their disinformation campaigns. I’d wager we can expect more campaigns like these in the future.

What’s more, the line between a personal blog and a professional news source has become blurred. Anyone can set up a website and promote their own content to the world. If it attracts enough viewers and enough engagement, then the site will start to gain value and become a trusted source. With enough traction, a simple blog might even turn into something like Breitbart, with the influence to significantly affect public opinion and bring toxic debates into the mainstream.

Finding links between ISIS, online disinformation and Islamophobia

I’m currently very interested in looking for possible links between the 2014 emergence of ISIS and the disinformation campaigns currently plaguing social media. Both aim to poison public perceptions of Muslims. Could sentiment analysis help to draw the link between the two? The process could begin by mapping sentiment around certain keywords when political events involving Muslims occur. It could also be interesting to map public sentiment towards Muslims after terrorist attacks in Europe and the UK. This would likely produce spikes of negative sentiment, as would be expected.

But can social media evidence be found that points to a long-term trend of increased negativity towards Muslims? And can this be mapped convincingly to the dramatic emergence of ISIS in summer 2014 as a starting point? Can we then compare this period of two or three years with the period from 2001 (after 9/11) all the way through to 2013? I’m curious to know whether the online content put out by ISIS can be linked in some way with various amplification campaigns by Russian bots or sock puppets. It is becoming common knowledge that actors sympathetic to Russia manipulated Western social media opinion in the run-up to both Brexit and the election of Donald Trump. Russia has also been accused of orchestrating similar manipulation in various national elections across the European Union.
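
If such an archive were scored for sentiment (as sketched in the previous post), the before/after comparison could be made explicit with a simple significance test. This is a sketch under the same hypothetical data assumptions: a Welch’s t-test asks whether mean sentiment after mid-2014 genuinely differs from the earlier baseline, or merely falls within noise.

```python
import pandas as pd
from scipy import stats

# Hypothetical sentiment-scored archive (columns: created_at, sentiment).
posts = pd.read_csv("keyword_posts_scored.csv", parse_dates=["created_at"])

before = posts.loc[posts["created_at"] < "2014-06-01", "sentiment"]
after = posts.loc[posts["created_at"] >= "2014-06-01", "sentiment"]

# Welch's t-test (unequal variances): did mean sentiment really shift?
t, p = stats.ttest_ind(after, before, equal_var=False)
print(f"mean before: {before.mean():.3f}, mean after: {after.mean():.3f}")
print(f"t = {t:.2f}, p = {p:.4f}")
```

Even a significant difference would only show that sentiment changed around the 2014 cut-off, not that ISIS content or Russian amplification caused the change.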

The rise of the far right, which helped bring about Brexit and Trump, seems to have happened in tandem with the emergence of ISIS. If Islamist extremists and far-right extremists are, as some experts argue, two sides of the same coin, then it would be logical for their trajectories to run in parallel. But how has this relationship played out on social media? Significant proportions of the public now spend massive amounts of time on social media, which has largely replaced television and newspapers as the architect of public opinion. Whoever controls social media messages therefore has a good chance of controlling public opinion.

With phenomena such as viral content and the rise of memes, there is much opportunity for malicious actors with vested interests to sow chaos and promote confusion. But is it really feasible that the world’s present direction has been orchestrated by certain groups? For argument’s sake, let’s assume for a moment that Russia is the architect of the current situation. The term ‘disinformation’ comes from the Russian word dezinformatsiya, the Cold War-era name of a KGB department specialising in black propaganda.

Moreover, Russia has always been known for its commitment to long-term strategic military thinking. During the Cold War, the Soviet Union was the underdog, with fewer resources than the wealthy United States. To hold its own, it was forced to develop its capabilities more strategically, going above and beyond traditional military power. This parallels how the online world works: it has long been the domain of the underdog, the misfit, the bootstrapper, and the hustler. People who seem powerless have strategically used the Internet to gain the upper hand in many walks of life. It only takes one video or blog post going viral for the resulting following to transform the nerdiest bedroom dweller into an overnight Internet celebrity with the ability to reach a wide audience. It doesn’t seem much of a stretch to imagine that a clever government could harness this sort of power to pursue its own interests.

Social media has become the lens through which millions of people view the world. If that lens is warped, their perceptions can be easily manipulated. Some would argue with this perspective, saying that it denies people their agency and portrays them as passive actors who soak up messages without any critical thinking. The attention economy is also relevant here. Our attention spans have been hijacked: studies have shown that people are losing the ability to concentrate for long periods, as our brains instead seek the instant hits of dopamine available from notifications and popups. Facebook and Twitter have had a profound effect on our societies, where large swathes of people have willingly eroded their own ability to focus. It is not difficult to conceive that certain actors would take advantage of this to push their own agendas.

Can Google search data help solve Islamophobia?

For decades, social scientists have conducted research using some combination of surveys, census data, focus groups, interviews, and observation techniques. With the exception of covert observation, which brings its own ethical issues, these methods have a weakness in common: they are all subject to human dishonesty, and therefore unable to paint a reliable picture of society’s true beliefs and darkest fears. In fact, the most objective data is given up willingly, in private, where people are free from the worry of being judged. Short of stealing people’s diaries or tapping their phone calls, what can researchers do to gather the most objective data possible?

Better than surveys

In our digital era the most obvious answer is also the correct one, but until now few people have thought to leverage this tool and publicise their findings in such an accessible way, at such a pertinent time. What is the technology we all use to ask questions, seek validation, and search for the most outrageous things? Why, of course, it’s Google. Many people would be embarrassed to publicly display their Google search history; I know mine is full of very silly things. But at the same time, these queries are deeply revealing, which is precisely why they strike a nerve: they expose some of our deepest secrets. For example, a few years ago I used to get occasional panic attacks. I remember waking up at 3 am in an unfamiliar country, caught in the midst of an attack, gasping for breath. To calm myself, I searched Google for reassurance that it was ‘just’ a panic attack.

Google as ‘truth serum’

People search Google for all manner of things. Seth Stephens-Davidowitz (see below for a video of his recent RSA talk), the researcher behind this work, found many searches involving ‘abortions’, ‘closet gays’, ‘penis size’, and ‘breastfeeding of husbands’ (the latter apparently popular in India). He also found more sinister patterns, suggesting American racism was far more widespread than previously thought. In fact, search data shows the idea of America as a ‘post-racial’ society, much touted after the 2008 election of Barack Obama, to be quite absurd. Google showed that American racism and Islamophobia were thoroughly alive and kicking, even in places where people didn’t publicly admit to holding racist views. People espouse very different opinions in the privacy of their own homes, face-to-face only with Google: Google as ‘truth serum’. Almost ten years later, with Trump at the helm, perhaps America is finally showing its true face.

Tracking Islamophobia in searches

Obama’s address to the nation after the 2015 San Bernardino attack provides an interesting example of how search data reflects hidden social views. In the speech, he aimed to calm outraged people and restore order to the country. In particular, he wanted to counteract the backlash that Muslim-Americans would surely face. While he spoke of Muslims as ‘our friends’, ‘our neighbours’ and so on, Google search data was telling a different story. After each terrorist attack (and this happens in the UK too), the volume of negative and violent searches about Muslims skyrockets, and Islamophobic searches like ‘kill all Muslims’ become alarmingly frequent.

During most of Obama’s speech, these searches didn’t decline or even level off. Instead they became more frequent. This makes sense, because challenging people’s world views acts as an attack on their fundamental identity, and in response most people cling tighter to whatever they already believe. But later in the speech, Obama changed tack. He introduced new images: not just Muslim-Americans as friends and neighbours who should be respected, but also ‘Muslim soldiers’, willing to die for America, and ‘Muslim athletes’, representing the country on the world stage.

From ‘terrorists’ to soldiers and athletes

And then something changed in the data. Islamophobic searches slowed down, replaced by searches for ‘Muslim athletes’ and ‘Muslim soldiers’. Something had resonated with the people searching; instead of responding predictably to Obama’s perceived ‘attack’ on their entrenched world views, they had become curious. I believe this happened for two reasons: partly because the idea of Muslims as athletes and soldiers resonated with ‘patriotic’ American audiences, and partly because these images helped to ‘de-otherise’ public perceptions of Muslims. By drawing on resonant all-American themes, Obama associated Muslims with a set of positive images, rather than simply asking wider America to accept them as a group. In response, albeit temporarily, the volume of Islamophobic searches fell and more positive searches appeared.
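
That shift can be eyeballed today through Google Trends. The snippet below is a sketch using the unofficial pytrends library (not the dataset Stephens-Davidowitz used); the terms and window are simply there to illustrate how one would pull relative search interest around the date of the speech.

```python
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)

# Relative US search interest around the San Bernardino attack
# (2 December 2015) and Obama's address (6 December 2015).
terms = ["muslim athletes", "muslim soldiers"]
pytrends.build_payload(terms, geo="US", timeframe="2015-11-15 2015-12-31")

interest = pytrends.interest_over_time()  # indexed by date, one column per term
print(interest[terms])
```

Note that Trends values are scaled 0 to 100 relative to the peak within the window, so they show relative surges rather than absolute search volumes.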

This is encouraging in some ways, because despite the fleeting nature of this positivity, its presence suggests two important things: 1) that Islamophobia is largely a problem of perceptions, and 2) that the tide can be turned back. Negative views of Muslims have become deeply entrenched over the last three decades. Islamophobia as a public perception is regularly reinforced by the mainstream media, by certain think tanks and their ‘experts’, and by reactions to the terrible deeds of ISIS, a group that has hijacked the image of Islam worldwide.

How can this data help us?

Can Google search data offer us the chance to fix some of society’s ills? Its revealing nature exposes our darkest fears in a way no survey ever can. This information (anonymised, of course) could be used to bring issues into the open and address their root causes. In the case of Islamophobia, analysing Google searches could reveal where the gaps and misperceptions lie in wider society’s understanding of Muslims. It could allow us to categorise the fears, misunderstandings, and false perceptions, which in turn could inform the design of social initiatives targeting specific problems, helping people understand each other better and gain a stronger sense of reality over perception.