Category: Propaganda

How personal branding paved the way for post-truth

Over a decade ago, an idea was born that seemed innocent at the time, even ground-breaking: personal branding, the practice of marketing one’s own skills like a product. In this piece, I’m going to reflect on how the personal branding mindset has helped create today’s polarised and tribal online environment.

In his original Fast Company article, ‘The Brand Called You’, author Tom Peters urges his readers to develop their personal brands by delivering talks and building word-of-mouth marketing around their unique skills. He briefly mentions the importance of showing familiarity with new technology (such as email), but only as a minor consideration. After all, it was 1997; the digital world hadn’t yet become an inextricable part of everyone’s lives.

Fast forward a few years to the early 2000s, when people started publishing their own content via blogs and personal websites. The social media platform MySpace launched in 2003, followed a couple of years later by Facebook. These tools were powerful: they allowed ordinary people to broadcast their message, whatever it might be, to large audiences. It was a whole new way to build the brand called you.

Digital tribalism

The growth of social media and blogs spawned a whole generation of online content creators, some successful, many not. People could now reinvent themselves personally and professionally simply by producing relevant online content and sharing it with audiences via social media. The trick to success was finding a bunch of people with whom your message resonated, i.e. your tribe.

The idea of ‘finding your tribe’ is central in branding strategy, both for commercial marketers and personal branders. Personal branding gurus often stress the importance of being bold and even divisive in the content you choose to publish. The goal in doing so is to eliminate those who aren’t on board with your opinions, leaving only your loyal, like-minded tribe remaining.

Arguably, this tribal approach has instilled in the digital generation a habit of being strongly opinionated online. It’s all too easy to be bold and divisive when you’re safely behind a screen. You can blog, make videos and write ebooks to your heart’s content.

But creating effective content for personal branding takes a lot of time and mental energy. Not everyone wants to write original blog posts about their key career interests, or has the skills and motivation to do so. Luckily, there’s another approach: content curation.

This popular and effortless alternative for building a personal brand community involves sharing other people’s content with your target audience, sometimes (but not necessarily) adding your own comment or original take.

Curation can be done quickly and with the minimum of effort; an appealing strategy in a time-pressed world. For example, content curation on Twitter could be as simple as retweeting articles and tweets relevant to the personal brand you wish to create. By doing this consistently, you can attract like-minded people, which then gives you a tribe, or brand community.

Another relevant factor in the development and solidification of personal branding is the deliberate design of online social networks, which nudges users towards actions that generate more likes, clicks and engagement from their audience.

Content curation and social network design complement one another, creating a cycle in which people create (or curate) content, gain approval from their tribe, enjoy a boost to their self-esteem, and repeat. This ongoing process generates more traffic for the social networks and more revenue for the advertisers they depend on.

Personal branding meets politics

In 2008, online social networks made their big debut in politics as part of Barack Obama’s presidential campaign. Obama won the presidency, and followed up in 2012 with another win and another dose of digital political campaigning.

By then, more and more people were using social media, and the first signs of manipulation had begun to emerge. On top of that, attention spans were starting to erode as people became used to a lifestyle lived almost wholly online. The introduction of Apple’s iPhone, and the resulting explosion in smartphone use, exacerbated this shift by putting social media in people’s pockets at all times.

This created the beginnings of a world where everyone on a bus or train would have their head down, staring at a smartphone. Once sharing content took nothing more than the touch of a ‘share’ button, content curation as a way of maintaining a self-image soon became habitual for many. By 2016, social network use was prevalent, most people had a smartphone, and information was flowing non-stop.

Politics had firmly entered the personal branding arena, and campaign managers deployed increasingly clever strategies, such as digital profiling and social ads, to win over voting populations. This practice came to a head with the EU referendum in the UK, closely followed by the 2016 election of Donald Trump as US president.

Going tribal

To better understand what drove these events, it’s useful to first consider the innate human tendency to see the world in terms of ‘us vs them’. This is well demonstrated by the work of social psychologist Henri Tajfel on social identity theory, in particular the ‘minimal group paradigm’.

The minimal group paradigm shows that people will divide themselves into opposing groups over the most trivial of matters, such as a coin flip, grouping themselves into ‘Heads’ and ‘Tails’. Once divided, people tend to favour their own ingroup while disadvantaging, and even derogating, the outgroup. If people can get tribal over a simple coin flip, imagine what they’re like over political ideology.

This us vs them mentality manifests in many areas of life, from harmless rivalry over cities and sports (e.g. Boston vs New York, or Manchester vs Liverpool), to more serious issues of racism, xenophobia and nationalism.

It also manifests in the digital world, exacerbated by today’s entrenched habit of personal branding and ‘finding one’s tribe’. People receive positive reinforcement from their ingroup whenever they broadcast their identity to fellow brand community members, usually by sharing content: their own or, more commonly, content curated from others.

Two infamous political examples are the behaviour of Trump supporters versus Clinton supporters, or Leave versus Remain supporters. Both sides commonly derogate the other (e.g. ‘libtard’, ‘Brexiteer’) and view their ingroup as superior.

That’s not the only way social identity theory manifests itself in contemporary digital politics. In addition to derogating each other’s perceived political outgroup, it has become common practice to derogate, and even dehumanise, certain outgroups in wider society, normally minorities such as Muslims, refugees or immigrants.

These groups have become easy targets because of an array of social and political events over recent decades that have put them squarely in the firing line. Ever since the terrorist attacks of 9/11, the British and US mainstream media has consistently highlighted attacks committed by Muslim perpetrators while downplaying similar ones conducted by non-Muslims.

What’s more, the Syrian civil war and the rise of ISIS triggered a massive influx of refugees from Syria and Iraq into Europe. Together, these events produced a climate of fear and uncertainty; fertile territory for innate ‘us and them’ attitudes to thrive in a digital sphere where online tribalism (in the personal branding sense) had long been a common practice.

Tribes before truth

This leads to a very current concern: the rise of online misinformation, often known as ‘fake news’. With such a huge flood of information now available via our smartphones, we don’t always have time to read everything in detail. So we take shortcuts and get lazy about processing information properly. We simply don’t have the time or inclination to think deeply about every piece of content we interact with online.

Nevertheless, we crave engagement and approval from our ‘tribe’. Perhaps we’ve become somewhat addicted to it, to the extent that we sometimes share articles without even reading them. Recent research found that 50–70% of URLs shared on Twitter are never opened, suggesting that people share them based on the headline alone. This has serious implications for the spread of misinformation, and suggests, too, that fact-checking probably won’t work.

In an online space rife with misinformation, why would someone share an article without reading it first? Arguably, broadcasting our affiliation to our digital tribe matters more to us than veracity. More critically, broadcasting this affiliation to our ingroup is likely to involve derogation of an outgroup. After all, we really want those likes and shares, and that’s often the best way to get them.

One of the key goals in sharing content on social media (especially Twitter) seems to be to signal that ‘we’ (the ingroup) are different from ‘them’ (the outgroup). This dichotomy shows up most disturbingly in stories about ‘Muslim rape gangs’, refugee ‘sex mobs’, and terrorist attacks that never happened (e.g. the fictitious Bowling Green massacre).

In this tense milieu, it’s easy for misinformation to get picked up and spread as part of the ‘tribal broadcasting’ process, or ‘content curation’ in personal branding parlance. If a certain news story fits people’s ingroup vs outgroup narratives, they’re probably going to share it on social media. Truth may come second to tribalism.

The real danger comes when this digital tribalism plays out in real world scenarios, such as an uptick in anti-immigrant hate crime, or violent events such as ‘Pizzagate’. Both have been linked to online misinformation.

You might ask what the social media giants are doing to address this issue. They’ve made various efforts to implement reporting tools so that users can report hate speech. They have also shut down particularly heinous accounts, such as InfoWars, which exist purely to peddle misinformation and hate.

But digital tribalism in fact boosts all the metrics that spell success for social media firms, creating a self-reinforcing situation. One can’t help but wonder how far they’ll actually go to rein this in.

If only we could all quit social media, en masse. Would that solve the problem? Or does it run deeper than that?

Can ‘Online Surges’ Drive Long-Term Attitude Change?

It comes as little surprise to learn that today’s wave of anti-Muslim online sentiment is being led by specific Islamophobic organisations, and channelled through public figures such as Tommy Robinson and Pamela Geller. And over the last three years, a spate of incidents tied to Muslim perpetrators, including vehicle attacks and knifings, has added fuel to the fire and, in the minds of some, justified their anti-Muslim viewpoints.

Indeed, one often notices a sense of warped, self-righteous ‘public duty’ among online commenters who bash Muslims and link them ceaselessly to terrorism. After all, aren’t terrorists a danger to our society, public enemy number one? Surely it’s acceptable to point that out. Looking through the lens of online news and social media, especially the unbridled comments sections, the casual observer may come to feel that the majority of Britons hate, fear and dehumanise Muslims.

But that particular picture of public opinion could be misleading. The true makeup of this climate of hate may come as more of a surprise. A soon-to-be-published report, along with numerous bloggers and journalists, claims that many of the social media accounts spreading anti-Muslim sentiment online aren’t who they claim to be. Many aren’t even human, while others don’t represent ‘organic’ human opinions: these are the bots and paid sock-puppets. One is generated by algorithms; the other operates from a pre-existing messaging playbook. Attempts to engage them in dialogue often feel like arguing with a brick wall, or with an extremely resilient ideologue, impervious to reason of any kind. Oddly, people often describe ardent Trump supporters in the same way.

In terms of the climate around Muslims and Islam, these media manipulators use a range of tools to try to shift public opinion on a wide scale. A number of psychological devices come into play here, for example the bandwagon effect: people tend to do something simply because others are doing it, such as blindly adopting a popular opinion on a contentious social or political issue. It’s a device that has been used for decades in political campaigns and commercial advertising.

When bots and sock-puppets masquerade as ordinary British and American citizens who hate Islam (cleverly crafting their profiles to appear so), they are relying on the bandwagon effect to encourage real citizens to adopt similar views. When it looks like so many people are talking badly about a certain group or person, it’s easy to assume the rumours might well be true. The bot armies also latch onto figures like Tommy Robinson, amplifying his messages and adding their own whenever a relevant story breaks.

Unfortunately, terrorist attacks have become a critical asset in a giant influence-ops campaign. Just as PR stunts drive content marketing traffic in the commercial world, so attacks (or rumours of attacks) drive anti-Muslim ‘brand-building’ in the world of organised Islamophobia. It’s a symbiotic relationship that would merit further study.

The prime goal of the campaign appears to be driving wedges into society, creating an atmosphere of fear and turning groups against one another. Persuasion, whether by means of disinformation or fact, has attitude change as its end goal. A range of psychological theories purport to explain the processes behind it, but for now it’s more important to focus on effects. Whoever is masterminding these influence ops wants to shift the pendulum and create a new anti-Muslim normal in public opinion. They are using every tool at their disposal to do so, including false amplification, echo chambers and visual disinformation.

But is the campaign working? To find out, we need to measure subtle shifts in public sentiment over a span of years, and then find out how to tie them to anti-Islam messaging campaigns. Of course, results are likely to be skewed by certain factors. One could be the mainstream British media, in particular the Express and Daily Mail, where coverage of all things immigration, refugees and Islam often teeters on the brink of disinformation, especially in the ways in which topics are spun. Specific incidents, especially of visual disinformation, such as the Muslim woman at Westminster, could be used as starting points to track associated sentiment online. Fearful knee-jerk reactions to terrorist attacks are to be expected, but broader long-term shifts in sentiment are harder to track. What’s more, they are far more insidious, corroding society from the inside out.

Disinformation, Influence Ops and the Humble Blog

Just a few days ago, Wired magazine published an article revealing that the ongoing Russian disinformation campaign also involved a number of long-form blog posts published on Medium.

In this post, I want to explain why this seemingly new move from Russian info ops is not surprising at all; it makes perfect sense within an overall content strategy. In an effective strategy, the blog sits at the heart, while content posted on Twitter, Facebook and other social platforms acts as a set of ‘outposts’ emanating from it.

Some of the Medium bloggers had tens of thousands of followers. They published content around predictable topics such as ‘race, science and politics, indoctrination on US college campuses’ – and that good old chestnut, ‘lies from Hillary Clinton’. The fake blogger identities were carefully crafted to appeal to their target audience, with bios claiming to be conservative, Christian, and family-oriented.

This approach mirrors the one used on Twitter by many bot and sock-puppet accounts, designed to instil a sense of community between the fake accounts and their Trump-supporting human target audiences. It’s an attempt to create an ingroup, define its identity in opposition to an outgroup, and build trust between its members, making the messages seem more believable in the target audience’s minds.

The most famous and prolific disinformation blogger of all is ‘Jenna Abrams’, the Kremlin-operated account that fooled Americans for years. From 2014 onwards, this ‘freewheeling American blogger’, who just happened to push a return to segregation and decry ‘PC culture run amok’, had plenty of time to build a significant target audience. The account was perfectly placed to propel divisive messages into the mainstream media, becoming an influential voice just in time for Trump’s election in 2016.

How did this fake identity become so influential? Firstly, ‘Abrams’ touted a lot of controversial opinions. Pushing openly xenophobic and racist views riled up journalists and celebrities on social media, which boosted Abrams’ messages and helped them gain traction and visibility. Many of her tweets and posts went viral, and viral content commonly reaches the attention of the mainstream media; this is exactly what happened with Abrams. The account was featured in many of the biggest media outlets, which in turn fed back into her following, attracting more people to view, share and comment on her content.

Conventional marketing wisdom has long held that blogging is a highly effective way to build a personal or professional brand. In blogging’s early days, a whole host of gurus and advice sites sprang up, such as Problogger, Neil Patel and Chris Brogan, teaching people how to harness the power of online content to define their brand and attract better professional opportunities. Indeed, many Internet celebrities, such as Tim Ferriss, Gala Darling and Ash Ambirge of the Middle Finger Project, owe their existence to the power of blogging.

I did it myself, spending around four years building a professional identity in the field of nation brand strategy by creating content about the topic and publishing it online. This, combined with plenty of social media outreach and networking, plus a bit of well-placed SEO, brought me a modest range of opportunities: an invitation to keynote at an international conference, an interview with CNN, and a number of consulting engagements based on my supposed expertise, which was built solely through blogging.

My nation branding blog wasn’t pushing disinformation. Nevertheless, it’s another case in point showing just how easy it is to build a believable personal brand and gain an audience purely through online content creation. With this in mind, it doesn’t surprise me in the least that Russia’s expert info-ops strategists are using blogging as part of their disinformation campaigns, and I’d wager we can expect more campaigns like these in the future.

What’s more, the line between a personal blog and a professional news source has become blurred. Anyone can set up a website and promote their own content to the world. If it attracts enough viewers and enough engagement, then the site will start to gain value and become a trusted source. With enough traction, a simple blog might even turn into something like Breitbart, with the influence to significantly affect public opinion and bring toxic debates into the mainstream.

Finding links between ISIS, online disinformation and Islamophobia

I’m very interested in looking for links between the 2014 emergence of ISIS and the disinformation campaigns currently plaguing social media. Both aim to poison public perceptions of Muslims. Could sentiment analysis help to draw the link between the two? The process could begin by mapping sentiment around certain keywords when political events involving Muslims occur. It could also be interesting to map public sentiment towards Muslims after terrorist attacks around Europe and the UK. This would likely produce spikes of negative sentiment, as expected.

But can social media evidence be found that points to a long-term trend of increased negativity towards Muslims? And can this be mapped convincingly to the dramatic emergence of ISIS in summer 2014 as a starting point? Can we then compare this period of two or three years with the period from 2001 (after 9/11) through to 2013? I’m curious whether the online content put out by ISIS can be linked in some way with amplification campaigns by Russian bots or sock-puppets. It is becoming common knowledge that actors sympathetic to Russia manipulated Western social media opinion in the run-up to both Brexit and the election of Donald Trump, and Russia has also been accused of orchestrating similar manipulation in national elections across the European Union.
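The kind of long-term sentiment mapping described above could be sketched roughly as follows. This is a toy illustration only: the lexicon words and weights are invented for the example, and a real study would use a validated resource such as VADER or LIWC, plus proper tokenisation, rather than this crude word-matching.

```python
from collections import defaultdict
from statistics import mean

# Toy sentiment lexicon -- words and weights are illustrative only;
# a real analysis would use a validated lexicon (e.g. VADER, LIWC).
LEXICON = {"hate": -2, "fear": -2, "threat": -1, "invasion": -2,
           "welcome": 2, "peaceful": 2, "support": 1, "safe": 1}

def score(text):
    """Crude lexicon score: sum of weights for known words, 0 otherwise."""
    return sum(LEXICON.get(w, 0) for w in text.lower().split())

def monthly_sentiment(posts):
    """posts: list of (month 'YYYY-MM', text) pairs.
    Returns a dict mapping each month to its mean sentiment score,
    which could then be plotted as a time series to look for trends."""
    buckets = defaultdict(list)
    for month, text in posts:
        buckets[month].append(score(text))
    return {m: mean(scores) for m, scores in sorted(buckets.items())}

# Hypothetical posts, standing in for keyword-filtered tweets
posts = [
    ("2014-06", "refugees are a threat an invasion"),
    ("2014-06", "we should welcome refugees"),
    ("2014-07", "hate and fear everywhere"),
]
print(monthly_sentiment(posts))
```

Comparing the per-month averages before and after a chosen starting point (e.g. summer 2014) is the basic shape of the trend analysis suggested here, though separating organic opinion from amplified accounts would need much more than sentiment scores.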

The rise of the far right, which helped bring about Brexit and Trump, seems to have happened in tandem with the emergence of ISIS. If Islamist extremists and far-right extremists are, as some experts argue, two sides of the same coin, then it would be logical for their trajectories to run in parallel. But how has this relationship played out on social media? Significant proportions of the public now spend huge amounts of time on social media, which has largely replaced television and newspapers as the architect of public opinion. Whoever controls social media messages therefore has a good chance of controlling public opinion.

With phenomena such as viral content and the rise of memes, there is much opportunity for malicious actors with vested interests to sow chaos and promote confusion. But is it really feasible that the world’s present direction has been orchestrated by certain groups? For argument’s sake, let’s assume for a moment that Russia is the architect of the current situation. The term ‘disinformation’ comes from the Russian phrase dezinformatsiya, the Cold War name of a KGB department specialising in black propaganda.

Moreover, Russia has long been known for its commitment to long-term strategic thinking. During the Cold War, the Soviet Union was the underdog, with fewer resources than the wealthy United States. To hold its own, it had to develop its capabilities more strategically, going beyond traditional military power.

This parallels the online world, which has long been the domain of the underdog, the misfit, the bootstrapper and the hustler. People who seem powerless have strategically used the Internet to gain the upper hand in many walks of life. It only takes one viral video or blog post for the resulting following to transform the nerdiest bedroom dweller into an overnight Internet celebrity with the ability to reach a wide audience. It doesn’t seem much of a stretch to imagine a clever government harnessing this sort of power to pursue its own interests.

Social media has become the lens through which millions of people view the world, and if that lens is warped, their perceptions can be easily manipulated. Some would argue with this perspective, saying that it denies people their agency, portraying them as passive actors who soak up messages without any critical thinking.

The erosion of attention spans is also relevant here. Studies have shown that people are losing the ability to concentrate for long periods; instead, our brains seek the instant hits of dopamine available from notifications and popups. Facebook and Twitter have had a profound effect on societies in which large swathes of people have willingly eroded their own ability to focus. It is not difficult to conceive of certain actors taking advantage of this to push their own agendas.

Reflections: Terrorism and Social Media Conference 2017

Last week, in a sleepy Welsh city by the sea, a group of social media and terrorism researchers came together to discuss the latest challenges in the field.

I learned a lot, met people doing admirable work and came away inspired with ideas to shape my own research in the future. This post is a short synopsis of topics from the conference that struck me as important, interesting and/or particularly thought-provoking.

The visual web

Maura Conway’s opening keynote was peppered with mentions of the visual web – and its importance in the study of terrorist and extremist activity. All extremist groups have a visual profile, and many use images as a central feature of their propaganda and recruiting efforts.

One look at the ISIS propaganda magazine, Dabiq, proves this point. And it’s not only about images, but also video, which terrorist groups have used for decades, from the grainy, muffled bin Laden recordings all the way through to the glossy ISIS productions. Far-right groups use images too – from the notorious Pepe the Frog to a range of logos featuring swords, swastikas and national flags.

The ‘post-truth’ digital era has ushered in a trend for using images as part of disinformation efforts, driving so-called ‘fake news’. A recent example springs to mind from the March 2017 Westminster attack. In the swirling social media aftermath of Khalid Masood’s actions, there emerged a photo of a Muslim woman wearing a hijab, walking past victims on Westminster Bridge, engrossed in her phone.

The image was quickly hijacked, attached to numerous false claims attacking the unknown woman for her apparent ‘disdain’ for the injured victims. These claims spawned thousands of comments where people released their Islamophobic feelings to the full, feeding into the milieu of anti-Muslim sentiment that presently hangs over society.

Of course, the truth was very different. The woman had been messaging friends and family to let them know she was safe after the attack. But although the truth eventually came out, the damage had already been done. Social perceptions of Muslims as ‘bad’ had been further reinforced.

Back to Prof Conway’s keynote, in which she highlighted the ‘strong signalling function’ of images, which makes them critical subjects for further analysis. Yet most terrorism analysts still focus primarily on text, because analysing images is more challenging. Visual analytics tools and techniques do exist, both qualitative and quantitative, and big-data research on images is especially popular in communication science at the moment.

In short: we need to pay more attention to the visual nature of the internet – and to the ‘low-hanging fruit’ that visual analytics offers the study of extremism.

The far-right

TASM didn’t focus only on the Islam-related side of extremism, but showcased a balanced view across the spectrum, with plenty of emphasis on research into the far-right. I attended several interesting panel talks on this subject, and came away with a number of key points.

One piece of research compared Britain First with Reclaim Australia, aiming to draw out the nuances within the umbrella term ‘far-right’. The methodology involved corpus-assisted discourse analysis (CADS) on a static dataset of text that Britain First and Reclaim Australia supporters had posted on social media over a three-month period.

The researchers used a social media insights tool, Blurrt, to gather the raw data, then used Python scripts to sort it into a workable format before analysing it with CADS. In particular, they focused on collocations to reveal telling patterns in ideas and sentiments across the two groups.
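To make the collocation step concrete, here is a minimal sketch of one standard way to rank collocations: scoring bigrams by pointwise mutual information (PMI). This is not the researchers’ actual pipeline (their tooling and scoring choices aren’t described in detail); the token data and the `min_count` threshold are illustrative.

```python
import math
from collections import Counter
from itertools import islice

def bigrams(tokens):
    """Adjacent word pairs from a token list."""
    return list(zip(tokens, islice(tokens, 1, None)))

def pmi_collocations(tokens, min_count=2):
    """Rank bigrams by pointwise mutual information:
    PMI(w1, w2) = log2( p(w1, w2) / (p(w1) * p(w2)) ).
    Bigrams seen fewer than min_count times are ignored,
    since PMI is unreliable for rare pairs."""
    unigram = Counter(tokens)
    bigram = Counter(bigrams(tokens))
    n = len(tokens)
    scores = {}
    for (w1, w2), c in bigram.items():
        if c < min_count:
            continue
        scores[(w1, w2)] = math.log2((c / n) / ((unigram[w1] / n) * (unigram[w2] / n)))
    return sorted(scores, key=scores.get, reverse=True)

# Invented mini-corpus echoing the 'under threat' / 'our women' themes
tokens = "our women are under threat our women our culture under threat".split()
top = pmi_collocations(tokens)
print(top)
```

On a real corpus, recurring high-PMI pairs like ‘our women’ or ‘under threat’ are exactly the kind of pattern that surfaces the ‘othering’ and ‘invasion’ themes the researchers describe.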

Findings included a strong pattern of ‘othering’ – the core ‘us versus them’ narrative (a common theme not just in far-right discourse but also in some mainstream media and foreign policy: e.g. the Iraq war and the ‘Axis of Evil’).

It was unsurprising therefore to find that Muslims and immigrants were particularly targeted. In what appears to be an extension of the ‘us versus them’ theme, ‘metaphors of invasion’ were often found in the discourse of both groups.

Other common themes included mentions of ‘our women’, ‘our religion’ and ‘our culture’ as being under threat from the ‘invaders’. All these themes feel very masculine. It could be interesting to examine what proportion of these sentiments come from male authors; it could also be worth analysing what far-right discourse looks like from a female perspective.

In general, the researchers concluded that far-right propaganda is less overtly violent than that of ISIS, and is rooted mainly in nationalistic tendencies. This raises many questions. Is this how the far right has managed to fly ‘under the radar’ for so long? Are they seen as defensive rather than offensive, and hence, on some level, as the ‘good guys’?

Could that be a factor in the much-discussed media under-reporting of far-right crimes, while focusing almost hysterically on those perpetrated by jihadists? Or, are ISIS and similar viewed as ‘worse’ simply because they are more ‘other’ (i.e. racism)?

Resonant narratives

Just as in commercial marketing, narratives work best when they intersect with individual agency and contexts. In his panel talk, Dr Akil Awan pointed out that CVE campaigns must not neglect the real-world issues that allow extremist narratives to resonate in the first place.

So how do ISIS narratives achieve success? They play on themes of belonging and identity; important for people experiencing ‘dual culture alterity’, i.e. feeling alienated from both their parents’ culture and the culture of their country of upbringing. In these cases, a return to fundamentalism becomes an ‘anchor’; a default setting of identity in a sea of alienation.

Awan highlighted the disparity between perception and reality regarding the true numbers of Muslims living in European countries. The media drives much of this misperception, making people feel ‘under siege’, creating fear, driving societies apart and destroying any sense of cohesion. In such a milieu, it is easy for ISIS to ‘eliminate the grey zone’ by means of terrorist acts; the media has already primed society for ISIS to succeed.

Understanding perceptions is as important as understanding reality, because how people perceive something guides their response to it. Current CVE campaigns, based around tools such as counter-narrative videos, are cheap to implement and make it look like governments are taking action.

But recognising the ‘lived experience’ of minority groups is one of the keys to successful CVE efforts; neglecting to do so is hypocritical and unlikely to be effective.

Conclusion

In closing, we heard from the arbiter of all this – Facebook. Dr Erin Saltman explained the tools Facebook uses to tackle the online side of extremism and terrorism. These tools include a database of extremist propaganda images that relies on machine learning to match images as they surface, and automatically remove them.

But machine learning has its limitations, and humans are still required to take into account context and nuance. At present, the two work in tandem to surface the content (machine learning) and then interpret it as needed (humans).
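Facebook hasn’t published the details of its matching system, but a standard building block for this kind of image database is perceptual hashing: known propaganda images are reduced to compact fingerprints, and new uploads are flagged when their fingerprint is close enough to a known one. The sketch below shows the simplest variant, an ‘average hash’ over a toy grayscale grid, with Hamming distance as the similarity measure; real systems operate on larger, normalised images and more robust hashes.

```python
def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255), e.g. a downscaled image.
    Returns a bit string: '1' where a pixel is brighter than the image
    mean, '0' otherwise. Small edits barely change the resulting bits."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return "".join("1" if p > avg else "0" for p in flat)

def hamming(h1, h2):
    """Number of positions where two equal-length hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def matches(h1, h2, threshold=2):
    """Treat two images as near-duplicates if their hashes differ
    in at most `threshold` bits (threshold is illustrative)."""
    return hamming(h1, h2) <= threshold

# A toy 2x2 'image', a slightly edited copy, and an unrelated image
original = average_hash([[10, 200], [220, 15]])
variant = average_hash([[12, 198], [221, 14]])
other = average_hash([[200, 10], [15, 220]])
print(original, matches(original, variant), matches(original, other))
```

The appeal of this approach for moderation is that re-uploads and lightly edited copies still land within the distance threshold, while the hash itself reveals nothing about images not already in the database. The limitation is exactly the one noted above: a hash match says nothing about context, which is why human review remains in the loop.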

Other tools include Facebook Insights, which is commonly used in commercial marketing but can also be leveraged to guide counter-speech initiatives and enable a precise reading of audiences.

The age of social media, although still in its infancy, has already had a profound impact on politics and society – as well as on the individual psychology of internet users. The long-term effects are unknown, with many changes no doubt still to come.