
Three Easy Ways to Spot Coordinated Inauthentic Behavior Online

After spending almost a year tracking coordinated inauthentic behavior on behalf of Facebook and Google, I’ve developed a good sense of how an inauthentic social media user looks and acts online. 

Each platform has its own nuances, but several universal patterns indicate likely coordinated activity. Here I’ll discuss three common indicators – and how you can spot them effectively on any social media platform.

But first, let’s take a look at what coordinated inauthentic behavior actually means. We’ll also briefly explore some controversy around its definition.

What is coordinated inauthentic behavior?

Facebook coined the phrase ‘coordinated inauthentic behavior’ (known in the industry as CIB) two years ago.

Facebook defines CIB as follows:  “When groups of pages or people work together to mislead others about who they are or what they’re doing.”

Facebook and other platforms are keen to highlight the ‘behavior’ side of the phrase. This helps shield them from accusations of bias against any particular political ideology.

People would be quick to make those accusations if Facebook simply focused on removing inauthentic content. It would raise the question of which content should get removed and which should stay. This would trigger wider concerns about freedom of speech and the First Amendment.  

The double standards controversy

Writing for Slate, Harvard researcher Evelyn Douek is concerned that Facebook’s definition of coordinated inauthentic behavior lacks clarity.

She argues that certain groups will consider some kinds of CIB acceptable, whereas others will not. Douek draws on the example of the TikTok video that prompted hundreds of young people to artificially inflate expected attendance at a Donald Trump rally by reserving tickets en masse.

Douek contrasts that real-life scenario with a hypothetical example of QAnon supporters doing the same to a Joe Biden rally. She highlights the risk of applying double standards to CIB, as well as to disinformation.

That’s a real concern, especially in deeply polarized times. Polarization is the key driving force behind this issue. We assume that ‘our’ side is doing good, while ‘their’ side is doing bad. That view influences how we judge the motives of coordinated inauthentic behavior. 

For the purposes of this post, we’ll use the official CIB definition. It’s still the standard that most social media platforms use. But it’s important to know that the term is not perfect and has attracted controversy.

Is coordinated inauthentic behavior the same as misinformation or disinformation?

No, but misinformation and disinformation certainly play a role in it. For example, members of a Twitter botnet might work together to constantly pump out and amplify misleading tweets about a political figure.

Or groups of paid operatives might enter Facebook groups and astroturf the discussion about coronavirus by posting lots of comments about the dangers of vaccines. Astroturfing – creating the appearance of legitimate ‘grassroots’ consensus on certain topics – is a common CIB technique.

OK, I’ve answered some key questions about coordinated inauthentic behavior. Now let’s look at three ways to spot it.

What are some key indicators of coordinated inauthentic behavior?

Identity Signaling

The concept of identity is at the heart of many coordinated inauthentic behavior and disinformation efforts. CIB campaigns often play on existing social and political divisions within their target audience.

For example, they might astroturf a widespread sense of approval for a certain government policy, such as a tougher stance on immigration. Immigration is an emotive issue for many people, and has the potential to drive ingroup vs outgroup sentiments.

When examining accounts for signs of inauthenticity, I consider overt identity signals, especially political ones, to be a red flag. These could include national flags, divisive political hashtags such as #MAGA (‘Make America Great Again’) or #FBPE (‘Follow Back Pro-Europe’), or a bio stuffed with identity-promoting keywords like “Army vet, patriot, Trump supporter, family man, God lover”.

Taken together, those signs indicate that the profile primarily exists to promote a certain political identity – a common giveaway of astroturfing or coordinated inauthentic behavior.
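
To make this concrete, here’s a minimal Python sketch of how you might score a bio for identity signaling. The hashtag and keyword lists are illustrative assumptions drawn from the examples above, not a vetted lexicon; a real analysis would pair a curated list with manual review.

```python
# Minimal sketch: count overt identity signals in a profile bio.
# The lists below are illustrative assumptions, not a vetted lexicon.
IDENTITY_HASHTAGS = {"#maga", "#fbpe"}
IDENTITY_KEYWORDS = {"patriot", "army vet", "trump supporter",
                     "family man", "god lover"}

def identity_signal_score(bio: str) -> int:
    """Count identity signals via deliberately crude substring matching."""
    text = bio.lower()
    hits = sum(tag in text for tag in IDENTITY_HASHTAGS)
    hits += sum(keyword in text for keyword in IDENTITY_KEYWORDS)
    return hits

bio = "Army vet, patriot, Trump supporter, family man, God lover #MAGA"
print(identity_signal_score(bio))  # 6 - a profile worth a closer look
```

A high score proves nothing on its own – plenty of genuine users write bios like this – but it’s a quick way to shortlist accounts for closer manual review alongside the other indicators below.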

Copy Paste Sharing

It’s common to find groups of accounts sharing links or posts accompanied by the exact same text (e.g. in a quoted tweet or a Facebook share). This isn’t normal behavior for an ‘organic’ social media user, so it’s a suspicious sign.

Copy paste sharing usually indicates a campaign designed to amplify a certain message. It is likely that Twitter accounts constantly tweeting the same messages in tandem are automated (i.e. bots).
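
Here’s a minimal sketch of one way to surface this pattern: group posts by their exact text and flag any message shared verbatim by several distinct accounts. The sample posts and the three-account threshold are illustrative assumptions.

```python
from collections import defaultdict

# Illustrative sample data; in practice posts would come from a
# platform API or a data export.
posts = [
    {"account": "user_a", "text": "RT this! The truth they won't show you"},
    {"account": "user_b", "text": "RT this! The truth they won't show you"},
    {"account": "user_c", "text": "RT this! The truth they won't show you"},
    {"account": "user_d", "text": "Nice weather in Leeds today"},
]

def find_copy_paste_clusters(posts, min_accounts=3):
    """Map identical post text to the distinct accounts sharing it."""
    accounts_by_text = defaultdict(set)
    for post in posts:
        accounts_by_text[post["text"].strip()].add(post["account"])
    return {text: accounts for text, accounts in accounts_by_text.items()
            if len(accounts) >= min_accounts}

for text, accounts in find_copy_paste_clusters(posts).items():
    print(f"{len(accounts)} accounts shared verbatim: {text!r}")
```

Exact-match grouping is deliberately strict; campaigns that lightly paraphrase their talking points would need fuzzy matching (e.g. near-duplicate hashing) to catch.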

Aggressive Political Agenda

When I’m seeking signs of coordinated inauthentic behavior, I always examine the posting history of a social media account. I check whether all its posts support a specific political agenda (usually in an aggressive and antagonistic way). If so, that’s another red flag.

Sure, regular people can also post aggressively in support of a political agenda. But it’s less likely that those posts will make up the whole of their posting history. A one-topic account is a key sign of coordinated inauthentic behavior. 
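
One rough way to quantify this is sketched below: measure what fraction of an account’s posting history touches a single political topic. The keyword list and the 90% threshold are illustrative assumptions, not calibrated values.

```python
# Minimal sketch: how concentrated is an account's posting history
# on one topic? Keywords and threshold are illustrative assumptions.
POLITICAL_KEYWORDS = {"election", "ballot", "rigged", "candidate", "vote"}

def topic_concentration(posting_history: list[str]) -> float:
    """Fraction of posts mentioning at least one topic keyword."""
    if not posting_history:
        return 0.0
    matches = sum(
        any(keyword in post.lower() for keyword in POLITICAL_KEYWORDS)
        for post in posting_history
    )
    return matches / len(posting_history)

history = ["The election was RIGGED!", "Vote them all out",
           "Another rigged ballot story", "They stole your vote"]
if topic_concentration(history) > 0.9:
    print("One-topic account - a red flag worth manual review")
```

As with the other indicators, this is a triage signal rather than proof: a devoted but genuine activist could score just as high.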

Summary

In this post we examined the origins of the term ‘coordinated inauthentic behavior’ and explored one of the key debates around the validity of its definition. Finally, we looked at three simple ways to spot coordinated inauthentic behavior on social media platforms.

  • First, I looked at identity signaling, where accounts project a strong sense of a certain identity (usually political) via profile hashtags, profile imagery, bio information, or posting history. 
  • Second, I discussed copy paste posting, where multiple accounts share something with the exact same accompanying text. This is often a sign of automated coordinated inauthentic behavior. 
  • Finally, I highlighted the significance of one-topic accounts that support a certain political agenda, usually in an aggressive way.

What’s the Difference Between Disinformation and Misinformation?

What’s the difference between disinformation and misinformation?

(I get asked this question a lot, so I thought it was time to write about it).

The difference between disinformation and misinformation lies in the presence of intent.  

The Difference Between Disinformation and Misinformation

Let’s look at a couple of examples to understand the difference between disinformation and misinformation.

Misinformation encompasses a wide range of misleading content, from rumour to satire to human error. The crucial point is that misinformation implies no intent to deceive.

Disinformation, on the other hand, has a more specific and sinister meaning. It is created with the intent to deceive.

For example, a disinformation campaign could involve a doctored video. It could consist of a political candidate’s gaffes, spliced together to imply that he has dementia.

Malinformation and Fake News

We also have malinformation. This is information based on reality, used maliciously to harm its target.

The target could be an individual, a country or an organisation. The 2016 leak of Hillary Clinton’s private emails is one such example of malinformation.

Finally, the one everyone knows best: fake news.

Donald Trump popularised the term in 2016 (although Hillary Clinton was apparently the first to use it).

However, disinformation researchers prefer not to use it, as it is both politicised and imprecise. ‘Fake news’ unhelpfully conflates the three primary types of misleading information already mentioned.  

New digital technology means that misleading online content shifts and evolves rapidly. There are other subcategories of dubious online content worth knowing about. Here’s a rundown.

Other Misleading Information

Satire

Satire has existed for thousands of years. It highlights failings in society and politics using humour, irony, sarcasm or mockery.

Shakespeare often used satire in his work. In Hamlet, for example, he pointed out that royalty is not always fit to run a country. In the internet age, satirical news websites such as The Onion and NewsThump have become popular.

But it’s not always clear where satire ends and disinformation begins. It’s easy for political news websites to spread divisive narratives under the guise of satire. The limited attention spans of most online news audiences make it even easier.

In fact, many social media users will share partisan political content that triggers their tribal instincts. They may not notice that it comes from a website that has a ‘satire’ disclaimer. 

Manipulated Images  

Images have a more immediate impact than text, making them an effective tool for disinformation. Manipulating images is easy with Photoshop or freely available online tools, and the results can look very convincing.

Genuine images can be used in misleading contexts, such as during the Westminster Bridge terrorist attack of 2017.

In this case, a Twitter account later attributed to the Kremlin shared an image of a woman wearing a hijab. The tweet included a hostile caption claiming that the woman was ignoring injured victims.

The tweet was designed to trigger anti-Muslim sentiment, and alt-right influencers shared it widely. It garnered thousands of retweets and likes. But although the image was genuine, the context was not.

Deepfakes

As a new form of disinformation, deepfakes have attracted a lot of hype in the last couple of years. These AI-generated videos and images are a type of synthetic media in which one person’s face or voice replaces that of another.

A deepfake can make it look like a person is saying something they’re not. This has many obvious use cases for disinformation. So far, porn has been the main area where deepfakes are being used. But in a handful of cases they’ve played a role in disinformation efforts.  

We may have overstated the immediate risk of deepfakes. But they do have the potential to upend the information environment. My biggest concern is that deepfakes could destroy the notion of absolute truth.

Once upon a time a taped voice recording would hold up in court (e.g. Watergate). In later years, CCTV footage became the absolute truth. But a world in which deepfakes are prevalent would no longer have an absolute truth. It would cast doubt on every possible form of recorded evidence.

Shallowfakes and Cheapfakes

In addition to deepfakes, we need to consider shallowfakes, also known as ‘cheapfakes’. People create these doctored videos without the help of fancy AI tools, using simple video editing software.

Shallowfakes are far more common than their AI-generated cousins. And social media platforms seem to overlook them. Facebook, for example, only bans doctored videos made with AI, i.e. deepfakes.

In February 2020, shallowfakes caused quite a stir. A video circulated on social media appearing to show Nancy Pelosi shredding a copy of Donald Trump’s speech during his State of the Union address.

Memes

The word ‘meme’ has become synonymous with random humorous cultural images superimposed with chunky white text. Below, a small selection of my recent COVID-19-related favourites.

[Images: a ‘distracted boyfriend’ anti-vaxxer disinformation meme; a Hannibal Lecter face-mask COVID-19 meme; a meme about failed plans in 2020]

In fact, the word ‘meme’ can refer to any piece of cultural content (video, image, song, article, etc) that social media users spread virally. (That typical chunky text/image combo that we all call a meme is actually known as an ‘image macro’).

Meme creators often use the image macro format to convey partisan political sentiments. Both sides of the political spectrum shared inflammatory memes during the 2016 US presidential campaign.

Alt-right supporters also used the same format to spread some highly offensive views, such as racism and anti-Semitism in ‘Pepe the Frog’ memes.

Image macro political memes are generally hyper-partisan in nature and play a role in perpetuating conflict between opposing groups (e.g. Democrats and Republicans).

[Image: an image macro meme spreading disinformation about Hillary Clinton]

Memes leave no room for nuanced explanation. The viewer gets a quick, triggering hit of emotion. This taps into the oversimplified outrage that has become a core feature of today’s political life online.

AI-Generated Voiceovers 

These are a bit weird and random. During a recent project for YouTube, I discovered some strange videos spreading false information about COVID-19.

The voiceover didn’t sound human at all. It was robotic and monotone, as if reading from a script. I don’t know the official name for these videos, if they have one at all, but they’re perhaps something to keep an eye on.

From Disinformation to Misinformation (and back again?)

In closing, I’ve been thinking about this question: Does content shift from disinformation to misinformation as it travels across the internet? 

Malicious intent defines disinformation. Could a piece of content shift between definitions according to the intention of the most recent individual who shared it?  

For example, a person shares a narrative in their local Facebook group, claiming COVID-19 is curable with megadoses of bleach. It’s been debunked, of course, but (somehow) the person doesn’t know.

They innocently think they’re helping their network by passing on ‘valuable’ health information that might help cure the virus. They don’t intend to deceive. So shouldn’t we class it as misinformation?

Let’s say that same piece of content originated in a troll farm. Its creators intended it to deceive populations and compromise public health efforts. It started life as disinformation.

We could say the same for conspiracy theories. These are often spread by ‘true believers’ – genuinely invested in their mission to enlighten the ‘sheeple’ and save the world.

Are they being malicious if they believe it’s all true? Does that still count as disinformation? It would be easier to make this distinction if we could reliably trace the content back to its source. But that’s not always easy to do. 

Those who create disinformation know how to take advantage of natural human biases and triggers. In many cases, it’s enough to simply ‘seed’ harmful disinformation into the social media stream. Ordinary social media users will then do the heavy lifting. Therein lies much of the danger.