Author: sjnorth

How personal branding paved the way for post-truth

Over a decade ago, an idea was born that seemed innocent at the time, even ground-breaking: personal branding, the practice of marketing one’s own skills like a product. In this piece, I’m going to reflect on how the personal branding mindset has helped create today’s polarised and tribal online environment.

In his original Fast Company article, ‘The Brand Called You’, author Tom Peters urges his readers to develop their personal brands by delivering talks and generating word-of-mouth marketing around their unique skills. He briefly mentions the importance of showing familiarity with new technology (such as email), but as a rather minor consideration. After all, it was 1997; the digital world hadn’t yet become an inextricable part of everyone’s lives.

Fast forward a few years to the early 2000s, when people had started publishing their own content using blogs and personal websites. The social media platform MySpace launched in 2003, followed a couple of years later by Facebook. These tools were powerful: they allowed ordinary people to broadcast their message, whatever it might be, to large audiences. It was a whole new way to build the brand called you.

Digital tribalism

The growth of social media and blogs spawned a whole generation of online content creators, some successful, many not. People could now reinvent themselves personally and professionally simply by producing relevant online content and sharing it with audiences via social media. The trick to success was finding a bunch of people with whom your message resonated, i.e. your tribe.

The idea of ‘finding your tribe’ is central in branding strategy, both for commercial marketers and personal branders. Personal branding gurus often stress the importance of being bold and even divisive in the content you choose to publish. The goal in doing so is to eliminate those who aren’t on board with your opinions, leaving only your loyal, like-minded tribe remaining.

Arguably, this tribal approach has instilled in the digital generation a habit of being strongly opinionated online. It’s all too easy to be bold and divisive when you’re safely behind a screen. You can blog, make videos and write ebooks to your heart’s content.

But creating effective content for personal branding takes up a lot of time and mental energy. Not everyone wants (or has the skills/motivation) to write original blog posts about their key career interests. Luckily, there’s another approach: content curation.

This popular and effortless alternative for building a personal brand community involves sharing other people’s content with your target audience, sometimes (but not always) adding your own commentary or original take.

Curation can be done quickly and with minimal effort, an appealing strategy in a time-pressed world. For example, content curation on Twitter could be as simple as retweeting articles and tweets relevant to the personal brand you wish to create. Done consistently, this attracts like-minded people, giving you a tribe, or brand community.

Another factor in the development and solidification of personal branding is the deliberate design of online social networks, which encourages users to take actions that generate more likes, clicks and engagement from their audience.

Content curation and social network design complement one another, creating a cycle in which people create (or curate) content, gain approval from their tribe, enjoy a boost in self-esteem, and repeat. This cycle generates more traffic for the social networks and more revenue for their all-important advertisers.

Personal branding meets politics

In 2008, online social networks made their big debut in politics as part of Barack Obama’s presidential campaign. Obama won the presidency, and followed up in 2012 with another win and another dose of digital political campaigning.

By then, more and more people were using social media, and the first signs of manipulation were beginning to emerge. On top of that, attention spans were starting to erode as people became used to a lifestyle lived almost wholly online. The introduction of Apple’s iPhone, and the resulting explosion in smartphone use, exacerbated this shift, putting social media in people’s pockets at all times.

It created the beginnings of a world where everyone on a bus or train would have their head down, staring at a smartphone. Once sharing content took no more than the touch of a ‘share’ button, content curation as part of maintaining a self-image soon became habitual for many. By 2016, social network use was prevalent, most people had a smartphone, and information was flowing non-stop.

Politics had firmly entered the personal branding arena, and campaign managers deployed increasingly clever strategies, such as digital profiling and social ads, to win over voting populations. This practice came to a head with the EU referendum in the UK, closely followed by the 2016 election of Donald Trump as US president.

Going tribal

To better understand what drove these events, it’s useful to first consider the innate human tendency to see the world in terms of ‘us vs them’. This is well demonstrated by the work of social psychologist Henri Tajfel on what became known as the social identity theory of intergroup conflict, in particular the ‘minimal group paradigm’.

The minimal group paradigm shows that people sort themselves into opposing groups over the most trivial of matters, such as a coin flip, dividing into ‘Heads’ and ‘Tails’. Once divided, people tend to favour their own ingroup while disadvantaging, and even derogating, the outgroup. If people can get tribal over a simple coin flip, imagine what they’d be like over political ideology.

Further research confirms that this ingroup favouritism is frequently accompanied by derogation of the outgroup. The us-vs-them mentality manifests in many areas of life, from harmless rivalry over cities and sports (e.g. Boston vs New York, or Manchester vs Liverpool) to more serious issues of racism, xenophobia and nationalism.

It also manifests in the digital world, exacerbated by today’s entrenched tendency for personal branding and ‘finding one’s tribe’. People receive positive reinforcement as part of the in-group whenever they broadcast their identity to their fellow brand community members. They usually do this by sharing content, whether their own, or, more commonly, curated from others.

Two infamous political examples are the behaviour of Trump supporters versus Clinton supporters, and Leave versus Remain supporters. Both sides commonly derogate the other (e.g. ‘libtard’, ‘Brexiteer’) and view their ingroup as superior.

That’s not the only way social identity theory manifests itself in contemporary digital politics. In addition to derogating each other’s perceived political outgroup, it’s become common practice to derogate, and even dehumanise, certain outgroups in wider society, normally minorities such as Muslims, refugees or immigrants.

These groups have become easy targets because of an array of social and political events over recent decades that have put them squarely in the firing line. Ever since the terrorist attacks of 9/11, the British and US mainstream media has consistently highlighted attacks committed by Muslim perpetrators while downplaying similar ones conducted by non-Muslims.

What’s more, the Syrian civil war and the rise of ISIS triggered a massive influx of refugees from Syria and Iraq into Europe. Together, these events produced a climate of fear and uncertainty; fertile territory for innate ‘us and them’ attitudes to thrive in a digital sphere where online tribalism (in the personal branding sense) had long been a common practice.

Tribes before truth

This leads to a very current concern: the rise of online misinformation, often known as ‘fake news’. With such a huge flood of information now available via our smartphones, we don’t always have time to read everything in detail. So we take shortcuts and get lazy about processing information properly. We simply don’t have the time or inclination to think deeply about every piece of content we interact with online.

Nevertheless, we crave engagement and approval from our ‘tribe’. Perhaps we’ve become somewhat addicted to it, to the extent that we sometimes share articles without even reading them. Recent research found that 50–70% of URLs shared on Twitter are never opened by the people sharing them, suggesting that many are passed along on the strength of the headline alone. This has serious implications for the spread of misinformation, and suggests, too, that fact-checking alone probably won’t work.

In an online space rife with misinformation, why would someone share an article without reading it first? Arguably, broadcasting our affiliation to our digital tribe matters more to us than veracity. More critically, broadcasting this affiliation to our ingroup is likely to involve derogation of an outgroup. After all, we really want those likes and shares, and that’s often the best way to get them.

One of the key goals in sharing content on social media (especially Twitter) seems to be to signal that ‘we’ (the ingroup) are different from ‘them’ (the outgroup). This dichotomy shows up most disturbingly in stories about ‘Muslim rape gangs’, refugee ‘sex mobs’, and terrorist attacks that never happened (e.g. the fictitious Bowling Green massacre).

In this tense milieu, it’s easy for misinformation to get picked up and spread as part of the ‘tribal broadcasting’ process, or ‘content curation’ in personal branding parlance. If a certain news story fits people’s ingroup vs outgroup narratives, they’re probably going to share it on social media. Truth may come second to tribalism.

The real danger comes when this digital tribalism plays out in real world scenarios, such as an uptick in anti-immigrant hate crime, or violent events such as ‘Pizzagate’. Both have been linked to online misinformation.

You might ask what the social media giants are doing to address this issue. They’ve implemented reporting tools so that users can flag hate speech, and have shut down particularly heinous accounts, such as InfoWars, that existed purely to peddle misinformation and hate.

But digital tribalism in fact boosts all the metrics that spell success for social media firms, creating a self-reinforcing situation. One can’t help but wonder how far they’ll actually go to rein this in.

If only we could all quit social media, en masse. Would that solve the problem? Or does it run deeper than that?

Tweeting my way into academia

On Twitter not long ago, someone suggested that academics should avoid using social media. He cited reasons such as distractions, narcissism, and ‘the risk of getting trolled’.

I’m studying social media for my PhD and so I’m well aware of its flaws. But, like any tool, when used in the right way it can open up many new opportunities. Here, I’ll speak out in defence of Twitter’s usefulness, and explain how I used it to find an unadvertised, fully-funded PhD opportunity.

My professional life has revolved around Twitter for over six years, ever since 2012, when I started blogging about nation branding. The blog helped me build a network in the field, which led to media interview requests, invitations to speak at conferences and deliver keynotes in far-flung destinations (Jamaica, Jordan, Turkey, Indonesia…) and assorted consulting gigs.

That blog also played a central role in my eventually securing a competitive job at a London software startup. All because of a blog and Twitter.

It wasn’t that complicated. I simply wrote posts, stuck them up on my website and then publicised them on social media along with the requisite hashtags. I interacted with people who replied, and took the conversation in interesting directions.

But the contacts I made were invaluable, and, perhaps more importantly, Twitter was central in enabling me to join conversations around the topic of nation branding, get my thoughts out into the world, and in the process build up my expertise and credibility.

This PhD may be the biggest victory to date for my Twitter use. It all started back in early 2017, when I was working in London. It was a good experience at an exceptional company. I learned a lot about how tech startups work, and I enjoyed the time spent with my colleagues.

But I just wasn’t passionate enough about the subject matter and I yearned for something more. For a long time, ever since my time in Istanbul, I’d been deeply intrigued by politics, international relations and media, often with dashes of technology around the edges.

At around the same time, the unfolding saga of Brexit, Trump and online radicalisation captured my attention in a big way. Social media played a key role in the story. There was much talk about ISIS using social media platforms to brainwash vulnerable young people and entice them to Syria.

As 2017 slipped by, the primary narrative around social media and politics shifted. It began to focus less on radicalisation and more on how various foreign influences (and perhaps homegrown ones too…) had used social media to foment dissent against the status quo. Some even argued that our democracy itself was being subverted, hijacked by bad actors.

It was propaganda for the digital age and it fascinated me. I wanted to study it in more depth. Academia offered the perfect platform to do that.

Having no desire to take out loans or decimate my savings, I knew I had to find a PhD that offered full funding. I applied to an advertised position at the University of Sheffield, to research the role of visual social media in fake news.

I reached the interview stage, but was told I hadn’t been selected for the role. Nevertheless, it was a useful experience, because now I had a whole PhD proposal ready to go. It just needed to find a home.

One day in the aftermath of that minor setback, I was browsing Twitter, looking through hashtags relevant to my interests, seeing if any opportunities might pop up. Those hashtags led me to a professor at the University of Bath, who was researching cybersecurity and online trust.

I pinged him a DM, explaining that I had a proposal that could be relevant to his research interests, and asked if he’d be keen to take a look. He was, so we Skyped and exchanged emails in which he advised me on how best to fine-tune the proposal.

I got up at 5am every day that week to get it ready for submission. The university accepted it and miraculously there was full funding available.

I quit my startup job, and the rest is history. I’m now happily immersed in a research topic that I find meaningful, while also developing new skills in Python, network analysis, machine learning and statistics.

I’m eventually planning to go back into industry rather than continue in academia, but these skills will be invaluable whichever route I choose to take.

Why I’m taking a ‘data-driven science’ approach to research

In the age of big data, many new debates have emerged about the ‘best’ approach to research.

Some scholars argue there’s no longer any real need for theory, claiming we should let the ‘data speak for themselves’ (Kitchin, 2014). Others counter that all data carries inherent bias, which means we need knowledge of existing theory to provide the context necessary for meaningful understanding.

This is especially important in the social and political sciences, where big data researchers seek to understand complex human phenomena such as wars, genocide or racism, using massive computational datasets. It’s not easy for quantitative big data models to offer new insights into areas like these without drawing on existing knowledge, which may still be relevant even when it dates back decades.

Boyd and Crawford (2012) support this view, identifying an ‘arrogant undercurrent’ in the field of big data research that is all too hasty to sideline older forms of enquiry. For example, the process of cleaning a large social media dataset, e.g. from Twitter, is ‘inherently subjective’: the researcher decides which attributes to include and which to ignore.

With these debates in mind, I’ve decided to use a ‘data-driven science’ approach in my PhD research. That means using existing behavioural science theory as a foundation to help me interpret findings in large-scale social media datasets, and blending qualitative methods with big data approaches based in computational social science.

It means I’ll need to get better at programming (Python is my language of choice), and venture into the exciting new world of machine learning. At the same time, I won’t abandon older forms of research methods (such as interviews), if they seem the right fit for the job.

In this blog, I’ll discuss the code I’m using in my research as it evolves (yes, there will be code snippets!). I’m relatively new to programming, so it’ll be a learning journey of sorts, probably with its fair share of mishaps and zig-zags.

I’m also fascinated by the many areas of social and political life that technology has affected, so expect a smattering of posts with musings about AI, ethics, life and so on. I’m looking forward to interacting with the community and having some interesting conversations.

***

References

Boyd, D., & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679. https://doi.org/10.1080/1369118X.2012.678878

Kitchin, R. (2014). Big Data, new epistemologies and paradigm shifts. Big Data & Society, 1(1), 2053951714528481. https://doi.org/10.1177/2053951714528481

Imagined communities: The alt-left on Twitter

In January, I joined a team of researchers in Amsterdam to explore the online communities of ‘Alt-left’ and ‘Antifa’ using data from Twitter and Reddit. This was one of many projects all taking place as part of the Digital Methods Initiative Winter School, organised by the University of Amsterdam.

In particular, we wanted to use the data to find out whether the so-called ‘alt-left’ represents a genuine community, or whether it’s simply a construct of the alt-right in the latter’s attempts to promote a false equivalence, as has been claimed. We were also curious to establish whether the alt-left overlapped with Antifa online, or whether they were two distinct communities.

We arrived at the following research questions. This blog post will focus on the first two only.
1. Does the ‘Alt Left’ exist (to Antifa)?
2. Could the ‘Alt Left’ be considered an Alt Right construct?
3. Is the ‘Alt Left’ as a notion mainstreaming?
4. How did the ‘Alt Left’ emerge, and how can the Alt Left’s identity be characterised (semantically)?

For the first question, we collected the top 20 hashtags associated with each of the ‘alt-left’ and ‘Antifa’ keywords. We found very little overlap between the two groups. Notably, the discourse of the alt-left (shown in green below) consisted of more words that could be considered ‘mainstream political dialogue’, while the Antifa discourse seemed more fringe.

The latter’s hashtags contained references to lesser-known terms such as ‘democratTerrorism’, along with the names of some Japanese Antifa groups. From this data, we could reasonably conclude that the alt-left and Antifa are separate, distinct discourses.
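For readers curious how such a comparison works in practice, here’s a minimal sketch of the logic: count hashtags per keyword, take the top N, and measure set overlap. The tweets, function names and data are all invented for illustration; our actual pipeline used the Winter School’s tooling rather than this code.

```python
from collections import Counter

def top_hashtags(tweets, n=20):
    """Count hashtags across a list of tweet texts; return the n most common (lowercased)."""
    counts = Counter(
        word.lstrip("#").lower()
        for tweet in tweets
        for word in tweet.split()
        if word.startswith("#")
    )
    return [tag for tag, _ in counts.most_common(n)]

def overlap(tags_a, tags_b):
    """Jaccard similarity between two hashtag lists: 0 = disjoint, 1 = identical."""
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Toy illustration with made-up tweets
altleft_tweets = ["#altleft is #mainstream talk", "more #altleft #politics"]
antifa_tweets = ["#antifa #democratTerrorism fringe", "#antifa groups in Japan"]

print(overlap(top_hashtags(altleft_tweets), top_hashtags(antifa_tweets)))  # 0.0
```

A Jaccard score near zero, as in this toy case, is what ‘very little overlap’ amounts to quantitatively.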

The term ‘alt-left’ arrived in the mainstream after Donald Trump used it when talking about the violence around the ‘Unite the Right’ rally in the US city of Charlottesville. He condemned groups on both sides of the violence, saying: “What about the alt-left that came charging at, as you say, at the alt-right?”

Subsequently, there has been discussion in the mainstream media about whether the alt-left really exists. It could simply be a construct of the far-right as it attempts to gain moral ground against its opposition by creating a false equivalence that can be painted as equally, if not more, violent.

The next section of our research examined this question in more depth, using Twitter data to give a clearer picture. We compared the top 50 Twitter users for both the #altleft and #Antifa hashtags, by looking more closely at their bios and timelines.

Right-wing conservative Trump supporters dominated the alt-left group, while the Antifa group was populated by a strange mix of accounts posting eBay-related tweets and tweets in Japanese promoting the ‘Antifa Handbook’.

A deeper dive, this time into the most influential hashtag users (influence = most followed), produced illuminating results. For both keywords, we identified the majority of influential users as pro-Trump conservatives, with the exception of one rather prolific Japanese bot.

We found three significant users who appeared under both terms, all of them pro-Trump. The common thread connecting altleft and Antifa hashtag users appears to be their support for Trump, rather than specifically alt-right characteristics.
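The influence ranking and cross-term comparison boil down to a simple sort-and-intersect, which could be sketched like this (usernames, follower counts and hashtag sets are entirely hypothetical; in the real analysis these would come from collected Twitter data):

```python
# Hypothetical user records: (username, follower_count, hashtags_used)
users = [
    ("maga_fan", 88_000, {"altleft", "antifa"}),
    ("jp_bot",   12_000, {"antifa"}),
    ("news_guy",  5_000, {"altleft"}),
]

def most_influential(users, hashtag, n=50):
    """Users of a hashtag, ranked by follower count (our proxy for influence)."""
    pool = [u for u in users if hashtag in u[2]]
    return sorted(pool, key=lambda u: u[1], reverse=True)[:n]

altleft_top = most_influential(users, "altleft")
antifa_top = most_influential(users, "antifa")

# Accounts appearing in both top lists
both = {u[0] for u in altleft_top} & {u[0] for u in antifa_top}
print(both)  # {'maga_fan'}
```

The set intersection at the end is the step that surfaced the three overlapping pro-Trump accounts in our data.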

The below Gephi graph shows the different communities grouping around the Antifa keyword on Twitter. As can be seen, the biggest cluster represents American, right-wing Trump supporters.

We also created an equivalent for the alt-left hashtag communities. This graph is less consistent, but its most striking feature is the large purple cluster with a single Twitter account, ‘kwilli1046’, at its centre, indicating that this user is extremely influential.
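Gephi surfaces such hubs visually via layout and centrality, but the underlying intuition — the centre of a cluster is the most-connected node — can be approximated with a plain in-degree count over retweet edges. The edge list below is invented for illustration:

```python
from collections import Counter

# Hypothetical retweet edges: (retweeter, original_author) for #altleft tweets
edges = [
    ("user_a", "kwilli1046"),
    ("user_b", "kwilli1046"),
    ("user_c", "kwilli1046"),
    ("user_a", "user_b"),
]

# In-degree (times retweeted) as a rough stand-in for graph centrality
in_degree = Counter(author for _, author in edges)
hub, count = in_degree.most_common(1)[0]
print(hub, count)  # kwilli1046 3
```

In-degree is only one of several centrality measures; for real network data, a library such as networkx offers proper betweenness and eigenvector centrality.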

A screenshot of his profile is shown below. ‘Kevin W’ has around 88k followers and a bio link to a page on Gab, the alternative ‘free speech’ social networking platform where many of the far-right have ended up after being kicked off Twitter.

In conclusion, we found that the ‘alt-left’ does not exist as a genuine community; it is mainly a term propagated by pro-Trump accounts in their attempts to create a ‘faux-public’. The dominance of pro-Trump accounts could also point to the use of state-sponsored bots and organised trolling to push the alt-left as a form of disinformation, which would be a worthwhile direction for building on the findings of this research.


Can ‘Online Surges’ Drive Long Term Attitude Change?

It comes as little surprise to learn that today’s wave of anti-Muslim online sentiment is being led by specific Islamophobic organisations, and channelled through public figures such as Tommy Robinson and Pamela Geller. And over the last three years, a spate of incidents tied to Muslim perpetrators, including vehicle attacks and knifings, has added fuel to the fire and, in the minds of some, justified their anti-Muslim viewpoints. Indeed, one often notices a sense of warped, self-righteous ‘public duty’ among online commenters who bash Muslims and link them ceaselessly to terrorism. After all, aren’t terrorists a danger to our society, public enemy number one? Surely it’s acceptable to point that out. When looking through the lens of online news and social media, especially in the unbridled comments sections, the casual observer may come to feel that the majority of Britons hate, fear, and dehumanise Muslims.

But that particular picture of public opinion could be misleading. The true makeup of this climate of hate may come as more of a surprise. A soon-to-be-published report (and numerous bloggers and journalists) claims that many of the social media accounts spreading anti-Muslim sentiment online aren’t who they claim to be. Many aren’t even human, while others don’t represent ‘organic’ human opinions. Here we have the bots and paid sock-puppets. One is generated by algorithms, the other operates from a pre-existing messaging playbook. Attempts to engage them in dialogue often feel like arguing with a brick wall, or an extremely resilient ideologue; impervious to reason of any kind. Oddly, people often describe ardent Trump supporters in this way.

In terms of the climate around Muslims and Islam, these media manipulators use a range of tools to try to shift public opinion on a wide scale. A lot of psychological devices come into play here, for example the bandwagon effect, whereby people tend to do something simply because others are doing it, such as blindly adopting a popular opinion on a contentious social or political issue. It’s a tool that’s been used for decades in political campaigns and commercial advertising. When bots and sock-puppets masquerade as ordinary British and American citizens (cleverly crafting their profiles to appear so) who hate Islam, they are relying on the bandwagon effect to encourage real citizens to adopt similar views. When it looks like so many people are talking badly about a certain group or person, it’s easy to assume the rumours might well be true. The bot armies also latch onto people like Tommy Robinson to amplify his messages, and add their own, whenever a relevant story breaks.

Unfortunately, terrorist attacks have become a critical asset in a giant influence-ops campaign. Just as PR stunts drive content marketing traffic in the commercial world, so attacks (or rumours of attacks) drive anti-Muslim ‘brand-building’ in the world of organised Islamophobia. It’s a symbiotic relationship that would merit further study. The prime goals of the campaign appear to be driving wedges into society, creating an atmosphere of fear and turning groups against one another. Persuasion, whether by means of disinformation or fact, has attitude change as its end goal. A range of psychological theories purport to explain the processes behind it, but for now it’s more important to focus on effects. Whoever is masterminding these influence ops wants to shift the pendulum and create a new anti-Muslim normal in public opinion. They are using every tool at their disposal to do so, including false amplification, echo chambers, and visual disinformation.

But is the campaign working? To find out, we need to measure subtle shifts in public sentiment over a span of years, and then find out how to tie them to anti-Islam messaging campaigns. Of course, results are likely to be skewed by certain factors. One could be the mainstream British media, in particular the Express and Daily Mail, where coverage of all things immigration, refugees and Islam often teeters on the brink of disinformation, especially in the ways in which topics are spun. Specific incidents, especially of visual disinformation, such as the Muslim woman at Westminster, could be used as starting points to track associated sentiment online. Fearful knee-jerk reactions to terrorist attacks are to be expected, but broader long-term shifts in sentiment are harder to track. What’s more, they are far more insidious, corroding society from the inside out.