Author: sjnorth

Brexit influence campaign: #BritainIndependence

The atmosphere in Britain is becoming increasingly heated as October 31st inches ever closer. This is the date when the country will leave the European Union — if all goes to plan for Boris Johnson and the Conservative Party. Right now the political stakes are higher than ever.

Parliament has tried to protect the country from the potential catastrophe that may result from leaving without a deal. Just before prorogation, it passed a last-minute bill into law, designed to compel Johnson to seek an extension if he can’t reach a suitable deal with the EU by the end of October.

Johnson has already lost multiple times in Parliament, despite his efforts to prorogue it for an unusually long time. Last week, a Scottish court ruled that this prorogation was unlawful. This week, the case goes to the UK Supreme Court, where that ruling will be challenged.

#BritainIndependence

In this highly charged environment, Twitter has provided a constant source of lively political debate around Brexit. Many issues are bubbling up at present, some more relevant than others, but here I want to explore a particularly interesting hashtag.

#BritainIndependence has been popping up a lot lately. The first thing to do is to find out when it was first used, and who first used it. The hashtag came fairly late to the game, on September 9, via a user called ‘Trevor’, whose screen name is stuffed with pro-Brexit hashtags.
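
For the curious, here’s a minimal sketch of how that ‘who used it first?’ question can be answered in Python, assuming the hashtag’s tweets have already been collected into a CSV (the file name and column names here are hypothetical):

```python
# A minimal sketch: find the earliest use of a hashtag in a pre-collected
# dataset. Assumes tweets mentioning the tag have already been pulled into
# a CSV (hypothetical file and column names: created_at, screen_name, text).
import pandas as pd

tweets = pd.read_csv("britainindependence_tweets.csv", parse_dates=["created_at"])

# Keep only tweets that actually contain the hashtag (case-insensitive).
mask = tweets["text"].str.contains("#britainindependence", case=False, na=False)
tagged = tweets[mask].sort_values("created_at")

# The first row is the earliest recorded use of the hashtag.
first = tagged.iloc[0]
print(first["created_at"], first["screen_name"], first["text"])
```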

Signalling ingroup identity

A quick glance at Trevor’s bio is revealing. First, the bio is a strong indicator of self-professed identity on Twitter. In Trevor’s case, it is full of words reflecting traditional conservative, nationalist, family-oriented values: ‘Christian’, ‘white’, ‘loyal British subject’, and ‘married’.

This creates a sense of group identity, designed to give Trevor’s profile immediate appeal to others who identify with similar values – i.e. signalling that he is part of an ingroup. In this case, the ingroup is pro-Brexit Twitter users.

The references to popular British football teams (Arsenal and Rangers) are likely designed to portray the account as belonging to a genuine working-class British citizen – another effort at signalling ingroup identity.

But the cultural references feel jumbled: Arsenal is an English team, while Rangers is a Scottish team. That plus the random mention of Northern Ireland means this bio doesn’t quite ring true. In fact, it feels like someone playing at being a Brit, perhaps for nefarious reasons.

What’s more, ‘neighbor’ is spelled in US English. A genuine speaker of British English would be unlikely to use US spelling, especially one who seems so deeply committed to British patriotism. The clue: Trevor is likely not the grassroots British man he pretends to be.

We could dig much deeper into Trevor’s account, especially his past tweet history. His account is devoted to tweeting about Brexit, even though it was created in September 2015, months before the referendum took place. It would be interesting to see what Trevor was tweeting about between then and June 2016, but that’s a topic for another post…

Hashtag activity

Next up, let’s take a look at how Twitter users have been interacting with the #BritainIndependence hashtag since ‘Trevor’ coined it on September 9 (coincidentally, just in time for the prorogation of Parliament).

All of the most retweeted tweets on the #BritainIndependence hashtag come from users with heavily pro-Brexit screen names (and usernames like @Brexit4me and @Feck_the_EU), suggesting one-topic accounts that exist simply for the purpose of engaging with Brexit-related discussions.
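
As a rough illustration of how the ‘most retweeted’ view can be produced, here’s a short sketch using the same hypothetical CSV and column names as above; it isn’t the exact code behind this analysis:

```python
# Sketch: rank original tweets on the hashtag by retweet count, and see
# which accounts are behind them. Assumes hypothetical columns
# screen_name, text and retweet_count.
import pandas as pd

tweets = pd.read_csv("britainindependence_tweets.csv")

# Drop retweets themselves so each original tweet is counted once.
originals = tweets[~tweets["text"].str.startswith("RT @", na=False)]

# Top 10 tweets by retweet count, plus the accounts that posted them.
top = originals.sort_values("retweet_count", ascending=False).head(10)
print(top[["screen_name", "retweet_count", "text"]].to_string(index=False))
```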

Retweets have two main functions: 1) they spread a message across Twitter, and 2) they create validation for the message via social proof (i.e. if other people have engaged with this tweet, then it must be worth engaging with).

Liking (or favouriting) tweets reinforces the sense of social proof, while also increasing the likelihood of the tweet being seen in other users’ feeds.

The tweets below contain strong evidence of typical adversarial narratives, designed to promote a sense of tribalism, i.e. “us vs them”.

Examples include:

  • ‘Unelected judges’ and ‘hijacked by extremists’ (fits into the narrative of the EU and Remain supporters being ‘anti-democratic’)
  • ‘Tattooed Eastern European thug’ and ‘brutal rape’ (fits into the narrative of foreigners and ‘The Other’ as threatening, especially to women)
  • ‘Me, just a patriot’ (supports the narrative of pro-Brexit voters as being especially patriotic – a notion similar to that conveyed by Trump’s ‘Make America Great Again’ (#MAGA), which coincidentally appears in many of the bios of users tweeting the #BritainIndependence hashtag)

Clearly, the #BritainIndependence hashtag exists to stoke further divides between the two sides in the Brexit debate, while creating the illusion of widespread support for the pro-Leave side. It’s highly likely that the hashtag was created for that very purpose, as the nature of ‘Trevor’s’ account suggests.

Furthermore, it’s telling that this hashtag coincides with several significant real-life events in the Brexit timeline that could threaten the pro-Brexit side, including the beginning of (unlawful) prorogation and the case going to the Supreme Court.

But why are so many of the top posting accounts so similar, with their bios stuffed with tribal keywords? And why are so many of them blatantly US-centric, or marked by such obvious cultural errors (such as the spelling of ‘favorite’)?

This could indicate an organised social media manipulation campaign aiming to create the illusion of widespread support for the pro-Brexit side while deepening social and political divisions at a critical juncture for the country.

As October 31 inches closer, the discussion is certain to get even more heated – and we’re sure to see lots of interesting social media activity.

I’ll post further analyses here between now and October 31.

How personal branding paved the way for post-truth

Over a decade ago, an idea was born that seemed innocent at the time, even ground-breaking. It was the idea of personal branding; marketing one’s own skills like a product. In this piece, I’m going to reflect on how the personal branding mindset has played a role in creating today’s polarised and tribal online environment.

In his original Fast Company article, ‘The Brand Called You’, author Tom Peters urges his readers to develop their personal brands by delivering talks and developing word-of-mouth marketing around their unique skills. He briefly mentions the importance of showing familiarity with new technology (such as email), but as a rather minor consideration. After all, it was 1997; the digital world hadn’t yet become an inextricable part of everyone’s lives.

Fast forward a few years to the early 2000s, when people started publishing their own content using blogs and personal websites. The social media platform MySpace was launched in 2003, followed by Facebook in 2004. These tools were powerful: they allowed ordinary people to broadcast their message, whatever that might be, to large audiences. It was a whole new way to build the brand called you.

Digital tribalism

The growth of social media and blogs spawned a whole generation of online content creators, some successful, many not. People could now reinvent themselves personally and professionally simply by producing relevant online content and sharing it with audiences via social media. The trick to success was finding a bunch of people with whom your message resonated, i.e. your tribe.

The idea of ‘finding your tribe’ is central in branding strategy, both for commercial marketers and personal branders. Personal branding gurus often stress the importance of being bold and even divisive in the content you choose to publish. The goal in doing so is to eliminate those who aren’t on board with your opinions, leaving only your loyal, like-minded tribe remaining.

Arguably, this tribal approach has instilled in the digital generation a habit of being strongly opinionated online. It’s all too easy to be bold and divisive when you’re safely behind a screen. You can blog, make videos and write ebooks to your heart’s content.

But creating effective content for personal branding takes up a lot of time and mental energy. Not everyone wants (or has the skills/motivation) to write original blog posts about their key career interests. Luckily, there’s another approach: content curation.

This popular and effortless alternative for building a personal brand community involves sharing other people’s content with your target audience, sometimes (but not necessarily) adding your own quote or original take.

Curation can be done quickly and with the minimum of effort; an appealing strategy in a time-pressed world. For example, content curation on Twitter could be as simple as retweeting articles and tweets relevant to the personal brand you wish to create. By doing this consistently, you can attract like-minded people, which then gives you a tribe, or brand community.

Another relevant factor in the development and solidification of personal branding is the deliberate design of online social networks, which encourages users to take actions that generate more likes, clicks and engagement from their audience.

Content curation and social network design are symbiotic processes that complement one another, leading to a cycle in which people create (or curate) content, gain approval from their tribe, experience a positive self-esteem boost, and repeat. This ongoing process generates increased traffic for the social networks and more revenue from the advertisers they depend on.

Personal branding meets politics

In 2008, online social networks made their big debut in politics as part of Barack Obama’s presidential campaign. Obama won the presidency, and followed up in 2012 with another win and another dose of digital political campaigning.

By then, more and more people were using social media, and the first signs of manipulation were beginning to emerge. On top of that, attention spans were starting to erode as people became used to a lifestyle lived almost wholly online. The introduction of Apple’s iPhone and the resulting explosion in smartphone use exacerbated this shift, giving people access to social media in their pockets at all times.

It created the beginnings of a world where everyone on a bus or train would have their head down, staring at a smartphone. With content sharing now just a tap of the ‘share’ button away, content curation as part of maintaining a self-image soon became habitual for many. By 2016, social network use was prevalent, most people had a smartphone, and information was flowing non-stop.

Politics had firmly entered the personal branding arena, and campaign managers deployed increasingly clever strategies, such as digital profiling and social ads, to win over voting populations. This practice came to a head with the EU referendum in the UK, closely followed by the 2016 election of Donald Trump as US president.

Going tribal

To better understand what drove these events, it’s useful to first consider the innate human tendency to see the world in terms of ‘us vs them’. This is well demonstrated by the work of social psychologist Henri Tajfel on what became known as social identity theory, in particular the ‘minimal group paradigm’.

The minimal group paradigm shows that people will define themselves as members of opposing groups over the most trivial of matters, such as a coin flip, grouping themselves into ‘Heads’ and ‘Tails’. Once divided into groups, people tend to favour their own ingroup while disadvantaging the outgroup. If people can get tribal over a simple coin flip, imagine what they’d be like over political ideology.

Further research has shown that not only do people tend to strongly favour their ingroup, but they also have a tendency to derogate the outgroup. This us vs them mentality manifests in many areas of life, from harmless rivalry over cities and sports (e.g. Boston vs New York or Manchester vs Liverpool), to more serious issues of racism, xenophobia and nationalism.

It also manifests in the digital world, exacerbated by today’s entrenched tendency for personal branding and ‘finding one’s tribe’. People receive positive reinforcement as part of the in-group whenever they broadcast their identity to their fellow brand community members. They usually do this by sharing content, whether their own, or, more commonly, curated from others.

Two infamous political examples are Trump supporters versus Clinton supporters, and Leave versus Remain supporters. Both sides commonly derogate the other (e.g. ‘libtard’, ‘Brexiteer’) and view their ingroup as superior.

That’s not the only way social identity theory manifests itself in contemporary digital politics. In addition to derogating their perceived political outgroups, it has become common practice to derogate, and even dehumanise, certain outgroups in wider society, normally minorities such as Muslims, refugees or immigrants.

These groups have become easy targets because of an array of social and political events over recent decades that have put them squarely in the firing line. Ever since the terrorist attacks of 9/11, the British and US mainstream media has consistently highlighted attacks committed by Muslim perpetrators while downplaying similar ones conducted by non-Muslims.

What’s more, the Syrian civil war and the rise of ISIS triggered a massive influx of refugees from Syria and Iraq into Europe. Together, these events produced a climate of fear and uncertainty; fertile territory for innate ‘us and them’ attitudes to thrive in a digital sphere where online tribalism (in the personal branding sense) had long been a common practice.

Tribes before truth

This leads to a very current concern: the rise of online misinformation, often known as ‘fake news’. With such a huge flood of information now available via our smartphones, we don’t always have time to read everything in detail. So we take shortcuts and get lazy about processing information properly. We simply don’t have the time or inclination to think deeply about every piece of content we interact with online.

Nevertheless, we crave engagement and approval from our ‘tribe’. Perhaps we’ve become somewhat addicted to it, to the extent that we sometimes share articles without even reading them. Recent research found that between 50% and 70% of URLs shared on Twitter are passed on without ever being opened, suggesting that people share them based on the headline alone. This has heavy implications for the spread of misinformation, and suggests, too, that fact-checking probably won’t work.

In an online space rife with misinformation, why would someone share an article without reading it first? Arguably, broadcasting our affiliation to our digital tribe matters more to us than veracity. More critically, broadcasting this affiliation to our ingroup is likely to involve derogation of an outgroup. After all, we really want those likes and shares, and that’s often the best way to get them.

One of the key goals in sharing content on social media (especially Twitter) seems to be to signal that ‘we’ (the ingroup) are different from ‘them’ (the outgroup). This dichotomy shows up most disturbingly in stories about ‘Muslim rape gangs’, refugee ‘sex mobs’, and terrorist attacks that never happened (e.g. the fictitious Bowling Green massacre).

In this tense milieu, it’s easy for misinformation to get picked up and spread as part of the ‘tribal broadcasting’ process, or ‘content curation’ in personal branding parlance. If a certain news story fits people’s ingroup vs outgroup narratives, they’re probably going to share it on social media. Truth may come second to tribalism.

The real danger comes when this digital tribalism plays out in real-world scenarios, such as an uptick in anti-immigrant hate crime, or violent incidents such as the ‘Pizzagate’ shooting. Both have been linked to online misinformation.

You might ask what the social media giants are doing to address this issue. They’ve made various efforts to implement reporting tools so that users can flag hate speech. They have also shut down particularly heinous accounts, such as InfoWars, that exist purely to peddle misinformation and hate.

But digital tribalism in fact boosts all the metrics that spell success for social media firms, creating a self-reinforcing situation. One can’t help but wonder how far they’ll actually go to rein this in.

If only we could all quit social media, en masse. Would that solve the problem? Or does it run deeper than that?

Tweeting my way into academia

On Twitter not long ago, someone suggested that academics should avoid using social media. He cited reasons such as distractions, narcissism, and ‘the risk of getting trolled’.

I’m studying social media for my PhD and so I’m well aware of its flaws. But, like any tool, when used in the right way it can open up many new opportunities. Here, I’ll speak out in defence of Twitter’s usefulness, and explain how I used it to find an unadvertised, fully-funded PhD opportunity.

My professional life has revolved around Twitter for over six years, ever since 2012 when I started blogging about nation branding. It helped me gain traction and build a network in the field, which led to media interview requests, conference and keynote speaking invitations in far-flung destinations (Jamaica, Jordan, Turkey, Indonesia…) and assorted consulting gigs.

That blog also played a central role in my eventually securing a competitive job at a London software startup. All because of a blog and Twitter.

It wasn’t that complicated. I simply wrote posts, stuck them up on my website and then publicised them on social media along with the requisite hashtags. I interacted with people who replied, and took the conversation in interesting directions.

But the contacts I made were invaluable, and, perhaps more importantly, Twitter was central in enabling me to join conversations around the topic of nation branding, get my thoughts out into the world, and in the process build up my expertise and credibility.

This PhD may be the biggest victory to date for my Twitter use. It all started back in early 2017, when I was working in London. It was a good experience at an exceptional company. I learned a lot about how tech startups work, and I enjoyed the time spent with my colleagues.

But I just wasn’t passionate enough about the subject matter and I yearned for something more. For a long time, ever since my time in Istanbul, I’d been deeply intrigued by politics, international relations and media, often with dashes of technology around the edges.

At around the same time, the unfolding saga of Brexit, Trump and online radicalisation captured my attention in a big way. Social media played a key role in the story. There was much talk about ISIS using social media platforms to brainwash vulnerable young people and entice them to Syria.

As 2017 slipped by, the primary narrative around social media and politics shifted. It began to focus less on radicalisation and more on how various foreign influences (and perhaps homegrown ones too…) had used social media to foment dissent against the status quo. Some even argued that our democracy itself was being subverted, hijacked by bad actors.

It was propaganda for the digital age and it fascinated me. I wanted to study it in more depth. Academia offered the perfect platform to do that.

Having no desire to take out loans or decimate my savings, I knew I had to find a PhD that offered full funding. I applied to an advertised position at the University of Sheffield, to research the role of visual social media in fake news.

I reached the interview stage, but was then told I hadn’t been selected for the role. Nevertheless, it was a useful experience, because now I had a whole PhD proposal ready to go. It just needed to find a home.

One day in the aftermath of that minor setback, I was browsing Twitter, looking through hashtags relevant to my interests, seeing if any opportunities might pop up. Those hashtags led me to a professor at the University of Bath, who was researching cybersecurity and online trust.

I pinged him a DM, explaining that I had a proposal that could be relevant to his research interests, and asked if he’d be keen to take a look. He was, so we Skyped and exchanged emails in which he advised me on how best to fine-tune the proposal.

I got up at 5am every day that week to get it ready for submission. The university accepted it and miraculously there was full funding available.

I quit my startup job, and the rest is history. I’m now happily immersed in a research topic that I find meaningful, while also developing new skills in Python, network analysis, machine learning and statistics.

I’m eventually planning to go back into industry rather than continue in academia, but these skills will be invaluable whichever route I choose to take.

Why I’m taking a ‘data-driven science’ approach to research

In the age of big data, many new debates have emerged about the ‘best’ approach to research.

Some scholars argue there’s no longer any real need for theory, and claim that we should allow the ‘data to speak for themselves’. Others argue that all data carries inherent bias, which means we need knowledge of existing theory to provide the context necessary for meaningful understanding.

This is especially important in the social and political sciences, where big data researchers seek to understand complex human phenomena such as wars, genocide or racism using massive computational datasets. It’s not easy for quantitative big data models to shed new light on areas like these without drawing on existing knowledge, which may still be relevant even when it dates back decades.

Boyd and Crawford (2012) support this view, pointing to an ‘arrogant undercurrent’ in the field of big data research that is all too hasty to sideline older forms of research. For example, the process of cleaning a large social media dataset, e.g. one from Twitter, is ‘inherently subjective’: the researcher decides which attributes to include and which to ignore.
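
To make that point concrete, here is a hedged sketch of what ‘cleaning’ a Twitter dataset can look like in practice. Every filter below is a researcher’s judgement call about what counts as signal; the file and column names are hypothetical:

```python
# A sketch of the kind of subjective choices involved in 'cleaning' a Twitter
# dataset (hypothetical file and column names). Each step below is a decision
# about what to keep and what to throw away.
import pandas as pd

raw = pd.read_csv("raw_tweets.csv")

clean = (
    raw
    .drop_duplicates(subset="tweet_id")                              # choice: collapse duplicates
    .query("lang == 'en'")                                           # choice: English only
    .loc[lambda df: ~df["text"].str.startswith("RT @", na=False)]    # choice: drop retweets
    .loc[lambda df: df["followers_count"] > 10]                      # choice: ignore near-empty accounts
)

print(f"{len(raw)} raw tweets reduced to {len(clean)} 'clean' ones")
```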

With these debates in mind, I’ve decided to use a ‘data-driven science’ approach in my PhD research. That means using existing behavioural science theory as a foundation to help me interpret findings in large-scale social media datasets, and blending qualitative methods with big data approaches based in computational social science.

It means I’ll need to get better at programming (Python is my language of choice), and venture into the exciting new world of machine learning. At the same time, I won’t abandon older forms of research methods (such as interviews), if they seem the right fit for the job.

In this blog, I’ll discuss the code I’m using in my research as it evolves (yes, there will be code snippets!). I’m relatively new to programming, so it’ll be a learning journey of sorts, probably with its fair share of mishaps and zig-zags.

I’m also fascinated by the many areas of social and political life that technology has affected, so expect a smattering of posts with musings about AI, ethics, life and so on. I’m looking forward to interacting with the community and having some interesting conversations.

***

References

Boyd, D., & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679. https://doi.org/10.1080/1369118X.2012.678878

Kitchin, R. (2014). Big Data, new epistemologies and paradigm shifts. Big Data & Society, 1(1), 205395171452848. https://doi.org/10.1177/2053951714528481

Imagined communities: The alt-left on Twitter

In January, I joined a team of researchers in Amsterdam to explore the online communities of ‘Alt-left’ and ‘Antifa’ using data from Twitter and Reddit. This was one of many projects all taking place as part of the Digital Methods Initiative Winter School, organised by the University of Amsterdam.

In particular, we wanted to use the data to find out whether the so-called ‘alt-left’ represents a genuine community, or whether it’s simply a construct of the alt-right in the latter’s attempt to promote a false equivalence, as has been claimed. We were also curious to establish whether the alt-left overlapped with Antifa online, or whether they were two distinct communities.

We arrived at the following research questions. This blog post will focus on the first two only.
1. Does the ‘Alt Left’ exist (to Antifa)?
2. Could the ‘Alt Left’ be considered an Alt Right construct?
3. Is the ‘Alt Left’ as a notion mainstreaming?
4. How did the ‘Alt Left’ emerge? How to characterise the Alt Left’s identity (semantically)?

For the first question, we collected the top 20 hashtags associated with each of the ‘alt-left’ and ‘Antifa’ keywords. We found very little overlap between the two groups. Notably, the discourse around the alt-left (shown in green below) consisted of more words that could be considered ‘mainstream political dialogue’, compared to the discourse around ‘Antifa’, which seemed more fringe.
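
For transparency, the counting step can be sketched roughly as follows. This is an illustrative reconstruction rather than our exact pipeline, and the file names and columns are hypothetical:

```python
# Sketch: count the hashtags co-occurring with each keyword, assuming two
# pre-collected sets of tweets stored as CSVs with a 'text' column.
import re
from collections import Counter

import pandas as pd

def top_hashtags(path, n=20):
    texts = pd.read_csv(path)["text"].dropna()
    tags = Counter()
    for text in texts:
        # Lower-case so #Antifa and #antifa are counted together.
        tags.update(tag.lower() for tag in re.findall(r"#\w+", text))
    return tags.most_common(n)

altleft_tags = top_hashtags("altleft_tweets.csv")
antifa_tags = top_hashtags("antifa_tweets.csv")

# Overlap between the two top-20 lists.
overlap = {t for t, _ in altleft_tags} & {t for t, _ in antifa_tags}
print(overlap)
```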

The latter’s hashtags contained references to lesser-known terms such as ‘democratTerrorism’, along with the names of some Japanese Antifa groups. From this data, we could reasonably conclude that the alt-left and Antifa are separate, distinct discourses.

The term ‘alt-left’ arrived in the mainstream after Donald Trump used it when talking about the violence around the ‘Unite the Right’ rally in the US city of Charlottesville. He condemned groups on both sides of the violence, saying: “What about the alt-left that came charging at, as you say, at the alt-right?”

Subsequently, there has been discussion in the mainstream media about whether alt-left really exists. It could simply be a construct of the far-right as it attempts to gain moral ground against its opposition by creating a false equivalent that can be painted as equally, if not more, violent.

The next section of our research examined this question in more depth, using Twitter data to give a clearer picture. We compared the top 50 Twitter users for both the #altleft and #Antifa hashtags, by looking more closely at their bios and timelines.

Right-wing conservative Trump supporters dominated the alt-left group, while the Antifa group was populated by a strange mix of accounts posting eBay-related tweets and tweets in Japanese promoting the ‘Antifa Handbook’.

A deeper dive, this time into the most influential hashtag users (influence = most followed), produced illuminating results. For both keywords, we identified the majority of influential users as pro-Trump conservatives, with the exception of one rather prolific Japanese bot.
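
The ‘influence = most followed’ step can be sketched in a few lines of pandas. Again, this is an illustration under assumed file and column names, not the exact code we ran:

```python
# Sketch: rank the accounts using each hashtag by follower count.
# Assumes hypothetical CSVs with 'screen_name' and 'followers_count' columns.
import pandas as pd

def most_followed(path, n=10):
    tweets = pd.read_csv(path)
    # One row per account, keeping its highest observed follower count.
    users = (tweets.groupby("screen_name")["followers_count"]
                   .max()
                   .sort_values(ascending=False))
    return users.head(n)

print(most_followed("altleft_tweets.csv"))
print(most_followed("antifa_tweets.csv"))
```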

We found three significant users who overlapped with both terms, all of them pro-Trump. The common thread connecting both altleft and Antifa hashtag users appears to be support for Trump, rather than specific alt-right features.

The below Gephi graph shows the different communities grouping around the Antifa keyword on Twitter. As can be seen, the biggest cluster represents American, right-wing Trump supporters.
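
For readers curious about the mechanics: one way to prepare a graph like this is to build a retweet network in Python and export it as GEXF for Gephi to lay out. The sketch below assumes a hypothetical CSV with the retweeter and the retweeted account as columns; it isn’t the exact pipeline we used:

```python
# Sketch: build a directed retweet network with networkx and export it for
# Gephi. Assumes a hypothetical CSV with 'screen_name' and 'retweeted_user'.
import networkx as nx
import pandas as pd

tweets = pd.read_csv("antifa_tweets.csv").dropna(subset=["retweeted_user"])

G = nx.DiGraph()
for _, row in tweets.iterrows():
    # Edge from the retweeter to the account being retweeted; weight counts repeats.
    u, v = row["screen_name"], row["retweeted_user"]
    if G.has_edge(u, v):
        G[u][v]["weight"] += 1
    else:
        G.add_edge(u, v, weight=1)

# Gephi can then handle layout (e.g. ForceAtlas2) and community detection.
nx.write_gexf(G, "antifa_retweet_network.gexf")
```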

We also created an equivalent graph for the alt-left hashtag communities. This graph is less consistent, but its most striking feature is the large purple cluster, with one Twitter account, ‘kwilli1046’, at its centre, meaning this user is extremely influential.

A screenshot of his profile is shown below. ‘Kevin W’ has around 88k followers and a bio link to a page on Gab, the alternative ‘free speech’ social networking platform where many of the far-right have ended up after being kicked off Twitter.

In conclusion, we found that the ‘alt-left’ does not exist as a genuine community; it is largely an online construct, and the term is mainly propagated by pro-Trump accounts in their attempts to create a ‘faux-public’. With so many of these accounts being pro-Trump, this could potentially point to the use of state-sponsored bots and organised trolling to push the ‘alt-left’ as a form of disinformation – a worthwhile topic for future research building on these findings.