After spending almost a year tracking coordinated inauthentic behavior on behalf of Facebook and Google, I’ve developed a good sense of how an inauthentic social media user looks and acts online.
Each platform has its own nuances. But many universal patterns indicate the likelihood of coordinated activity. Here I’ll discuss three common indicators – and how you can effectively spot them across any social media platform.
But first, let’s take a look at what coordinated inauthentic behavior actually means. We’ll also briefly explore some controversy around its definition.
What is coordinated inauthentic behavior?
Two years ago, Facebook coined the phrase ‘coordinated inauthentic behavior’ (known in the industry as CIB).
Facebook defines CIB as follows: “When groups of pages or people work together to mislead others about who they are or what they’re doing.”
Facebook and other platforms are keen to highlight the ‘behavior’ side of the phrase. This helps shield them from accusations of being biased against any particular political ideology.
People would be quick to make those accusations if Facebook simply focused on removing inauthentic content. It would raise the question of which content should get removed and which should stay. This would trigger wider concerns about freedom of speech and the First Amendment.
The double standards controversy
Writing for Slate, Harvard researcher Evelyn Douek argues that Facebook’s definition of coordinated inauthentic behavior lacks clarity.
She argues that certain groups will consider certain kinds of CIB acceptable, whereas others will not. Douek draws on the example of the TikTok video that prompted hundreds of young people to artificially inflate expected attendance at a Donald Trump rally by reserving tickets en masse.
Douek contrasts that real-life scenario with a hypothetical example of QAnon supporters doing the same to a Joe Biden rally. She highlights the risk of applying double standards to CIB, as well as to disinformation.
That’s a real concern, especially in deeply polarized times. Polarization is the key driving force behind this issue. We assume that ‘our’ side is doing good, while ‘their’ side is doing bad. That view influences how we judge the motives of coordinated inauthentic behavior.
For the purpose of this post, we’ll use the official CIB definition. It’s still the standard that most social media platforms use. But it’s important to know that the term is not perfect, and has attracted controversy.
Is coordinated inauthentic behavior the same as misinformation or disinformation?
No, but misinformation and disinformation often play a role in CIB. For example, members of a Twitter botnet might work together to constantly pump out and amplify misleading tweets about a political figure.
Or groups of paid operatives might enter Facebook groups and astroturf the discussion about coronavirus by posting lots of comments about the dangers of vaccines. Astroturfing, a common CIB technique, creates the appearance of legitimate ‘grassroots’ consensus on certain topics.
OK, I’ve answered some key questions about coordinated inauthentic behavior. Now let’s look at three ways to spot it.
What are some key indicators of coordinated inauthentic behavior?
Identity Signaling
The concept of identity is at the heart of many coordinated inauthentic behavior and disinformation efforts. CIB campaigns often play on existing social and political divisions within their target audience.
For example, they might astroturf a widespread sense of approval for a certain government policy, such as a tougher stance on immigration. Immigration is an emotive issue for many people, and has the potential to drive ingroup vs outgroup sentiments.
When examining accounts for signs of inauthenticity, I consider overt identity signals, especially political ones, to be a red flag. These could include national flags, divisive political hashtags such as #MAGA (Make America Great Again) or #FBPE (Follow Back Pro-Europe), or a bio stuffed with identity-promoting keywords like “Army vet, patriot, Trump supporter, family man, God lover”.
Taken together, those signs indicate that the profile primarily exists to promote a certain political identity – a common giveaway of astroturfing or coordinated inauthentic behavior.
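To make that concrete, here’s a rough sketch (in Python) of how you might score a profile bio for identity signals. The keyword list, hashtag set, and threshold are purely illustrative assumptions on my part, not anything a platform actually uses, and in practice you’d weigh this signal alongside posting history and network context.

```python
# A minimal sketch of bio-based identity-signal scoring.
# The keyword list, hashtags, and threshold are illustrative assumptions,
# not a validated classifier.

IDENTITY_KEYWORDS = {
    "patriot", "veteran", "army vet", "god lover", "family man",
    "trump supporter", "resist",
}
DIVISIVE_HASHTAGS = {"#maga", "#fbpe"}

def identity_signal_score(bio: str) -> int:
    """Count identity-promoting keywords and divisive hashtags in a profile bio."""
    text = bio.lower()
    keyword_hits = sum(1 for kw in IDENTITY_KEYWORDS if kw in text)
    hashtag_hits = sum(1 for tag in DIVISIVE_HASHTAGS if tag in text)
    return keyword_hits + hashtag_hits

def looks_like_identity_account(bio: str, threshold: int = 3) -> bool:
    """Flag bios stuffed with identity signals; the threshold is arbitrary."""
    return identity_signal_score(bio) >= threshold

if __name__ == "__main__":
    bio = "Army vet, patriot, Trump supporter, family man, God lover #MAGA"
    print(identity_signal_score(bio), looks_like_identity_account(bio))
```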
Copy-Paste Sharing
It’s common to find groups of accounts sharing links or posts accompanied by the exact same text (e.g. in a quoted tweet or a Facebook share). This isn’t normal behavior for an ‘organic’ social media user, so it’s a suspicious sign.
Copy-paste sharing usually indicates a campaign designed to amplify a certain message. Twitter accounts constantly tweeting the same messages in tandem are likely to be automated (i.e. bots).
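As a rough illustration, the sketch below groups posts by their exact (normalized) text and flags any message shared verbatim by several distinct accounts. The field names and the five-account threshold are assumptions for the example; a real pipeline would also look at timing and near-duplicate wording.

```python
# A minimal sketch of copy-paste detection: group posts by their exact
# (normalized) text and flag any text shared verbatim by many distinct accounts.
# The 'account'/'text' schema and the threshold are assumptions for illustration.

from collections import defaultdict

def find_copy_paste_clusters(posts, min_accounts=5):
    """posts: iterable of dicts with 'account' and 'text' keys (assumed schema)."""
    accounts_by_text = defaultdict(set)
    for post in posts:
        # Collapse whitespace and lowercase so trivial edits don't hide a match.
        normalized = " ".join(post["text"].split()).lower()
        accounts_by_text[normalized].add(post["account"])
    return {
        text: accounts
        for text, accounts in accounts_by_text.items()
        if len(accounts) >= min_accounts
    }

# Toy example: six accounts sharing identical text, one organic post.
posts = [
    {"account": f"user_{i}", "text": "RT this! The REAL truth -> example.com/article"}
    for i in range(6)
] + [{"account": "user_99", "text": "Nice weather today"}]

for text, accounts in find_copy_paste_clusters(posts).items():
    print(f"{len(accounts)} accounts posted identical text: {text[:50]}")
```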
Aggressive Political Agenda
When I’m seeking signs of coordinated inauthentic behavior, I always examine the posting history of a social media account. I check whether all its posts support a specific political agenda (usually in an aggressive and antagonistic way). If so, that’s another red flag.
Sure, regular people can also post aggressively in support of a political agenda. But it’s less likely that those posts will make up the whole of their posting history. A one-topic account is a key sign of coordinated inauthentic behavior.
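Here’s one way you could approximate that check: measure what share of an account’s posting history touches a single (political) topic. The keyword list and the 90% cutoff below are illustrative assumptions, and you’d want to combine this signal with the others discussed above rather than rely on it alone.

```python
# A minimal sketch of one-topic detection: what fraction of an account's
# posting history mentions a single (political) topic? The keyword list
# and 0.9 cutoff are illustrative assumptions, not validated values.

POLITICAL_TERMS = {"election", "ballot", "immigration", "border", "vaccine", "mandate"}

def topic_concentration(post_texts, topic_terms=POLITICAL_TERMS):
    """Return the share of posts mentioning at least one topic term."""
    if not post_texts:
        return 0.0
    hits = sum(
        1 for text in post_texts
        if any(term in text.lower() for term in topic_terms)
    )
    return hits / len(post_texts)

def is_one_topic_account(post_texts, cutoff=0.9):
    """Flag accounts whose history is almost entirely about one topic."""
    return topic_concentration(post_texts) >= cutoff

# Example: an account whose entire history pushes one political theme.
history = ["The border is out of control!", "Stop the ballot fraud", "No vaccine mandates ever"]
print(topic_concentration(history), is_one_topic_account(history))
```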
In this post, we examined the origins of the term ‘coordinated inauthentic behavior’, explored one of the key debates around its definition, and looked at three simple ways to spot coordinated inauthentic behavior on social media platforms.
- First, I looked at identity signaling, where accounts project a strong sense of a certain identity (usually political) via profile hashtags, profile imagery, bio information, or posting history.
- Second, I discussed copy-paste posting, where multiple accounts share something with the exact same accompanying text. This is often a sign of automated coordinated inauthentic behavior.
- Finally, I highlighted the significance of one-topic accounts that support a certain political agenda, usually in an aggressive way.