Reports over the weekend claimed there was a 4,000 per cent increase in Russian bot activity on Twitter, following the Salisbury spy poisoning and the chemical attack in Syria.
The media said “government analysis” had linked specific Twitter accounts to the Kremlin.
One newspaper wrote: “Russian-controlled ‘bot’ accounts have bombarded social media with anti-Western propaganda 45,000 times since the chemical attack in Syria as part of a ‘dirty disinformation war’ by the Kremlin.”
Another ran the headline “Putin bid to brainwash the West”, and talked about “Russian automated accounts”.
Two of these accounts were picked out for special mention: @Ian56789 and @Partisangirl. Some papers called them “bots” or “propaganda bots”.
But both account holders have since strongly denied being Russian bots – and have even gone on camera to show they are real people.
@Ian56789 told Sky News the claim was a “100 per cent total lie and complete fabrication”, while @Partisangirl posted a video on Twitter, saying: “I am not a robot; I am a human being.”
So what’s going on?
The term “bot” is often used loosely, but it generally refers to an automated (or semi-automated) social media account.
Typically, bots post propaganda or spam using fictitious personas. For instance, reports say the Russia-based Internet Research Agency employs hundreds of people to spread propaganda using fake identities.
But if someone is posting content independently – through their own choice, on their own personal account – they would not generally be considered a “bot”.
However, they might still be regarded as a “troll”.
A “troll” is normally defined as “someone who leaves an intentionally annoying message on the internet, in order to get attention or cause trouble”. This is extremely subjective, but could arguably include anyone who regularly posts political propaganda, disinformation or offensive content.
For the two accounts in question, there is no doubt they have posted lots of offensive tweets and conspiracy theories. So, depending on your definition, it might be fair to describe them as “trolls”.
For instance, @Ian56789 claimed: “Zionists in US & UK wanted the Holocaust & funded Hitler, so that Israel could be set up on a wave of sympathy after WW2”. And the @Partisangirl account has tweeted that 9/11 was an “inside job”.
It’s also true that neither account uses its holder’s full real name on Twitter (although the names are used elsewhere online).
However, the videos they have posted of themselves confirm they are real individuals. Indeed, @Partisangirl even has a verification tick, meaning Twitter has confirmed that the account is authentic. It therefore seems incorrect to call them “bots”.
Government sources have now confirmed to FactCheck that the accounts are not bots.
So why the confusion?
There has been speculation that the research behind these claims came from the Atlantic Council, an American think-tank whose reports have mentioned these particular accounts.
It is possible this fed into the UK government’s analysis, but FactCheck has confirmed the main story was based on the government’s own research.
Government sources told us that their experts monitored a broad range of Twitter accounts – not just automated bots.
Their analysis formed the basis of a press briefing to selected journalists, which included naming the two Twitter accounts.
However, they insisted they had not claimed these accounts were automated bots – merely that they were suspicious and part of a broader disinformation campaign.
There may be many legitimate questions and concerns about Kremlin-backed Twitter accounts spreading fake news.
And there is no doubt that bots are a major problem on social media.
Plus, it’s clear the two specific Twitter accounts have posted things that many people would find deeply troubling, including disinformation and conspiracy theories.
However, the implication that they are automated “bots”, controlled directly by the Kremlin, appears to be false.