The Russian town of Gelendzhik, on the Black Sea coast, has a population of 55,000. According to Twitter, one of its inhabitants is Svetlana Lukyanchenko, a voracious user of the social platform who signed up in May 2016 — less than a month before Britain voted to leave the European Union.
Sveta1972, as she called herself online, did not fit the profile of someone interested in British politics. Yet in the four days before the vote on June 23 she posted or retweeted at least 97 messages mentioning “#Brexit”.
Her messages were mainly pro-Brexit and often repeated conspiracy theories. On June 21 she retweeted a story by the website Zero Hedge that said Britons were “appalled and disgusted” by a Brexit postal ballot “fraud”.
Later that day she tweeted that the EU was an “unelected assembly of corporatist agents” imposing debt and austerity “on all member states”.
After the vote she lost interest, confining her 12,000-plus tweets to subjects such as lighting design websites or free e-books on web traffic. Despite her Russian origins, she tweeted mostly in English, Spanish or Italian.
According to researchers at Swansea University, working with the University of California, Berkeley, Sveta1972 was one of thousands of suspect Russian accounts tweeting copiously about Brexit in the run-up to the vote.
The researchers tracked 156,000 accounts based in Russia and found that their mentions of #Brexit spiked on the day of the vote and the day after, before dropping off almost entirely. The accounts included genuine commentators, but many appeared to be either fully automated bots or semi-automated “cyborgs”: bots with some human involvement.
These included Stormbringer15, a virulently pro-Kremlin Twitter power-user with 241,000 posts mainly asserting Russia’s rights over Ukraine. On the day after the referendum he tweeted a fake picture of President Putin giving a medal to Nigel Farage.
Tweets posted by the Russian accounts were often sent at 4am UK time, or 7am in Russia. They were probably seen many millions of times.
The findings will fuel concerns that the Russian state has weaponised social media. In August President Trump thanked an apparent fan on Twitter and retweeted her praise; the fan’s account, which bore signs of being part of a Russia-backed disinformation campaign, was later suspended.
Damian Collins, chairman of a parliamentary committee investigating fake news, said that he had written to Facebook and Twitter to ask for “urgent clarification and information regarding Russian-linked accounts”.
Separate research from the Oxford Internet Institute and City University has revealed the scale of bot activity around Brexit. Oxford researchers found that 30 “highly automated” accounts posted 135,597 tweets from June 20-24. These were viewed about 11 billion times. While there is no direct connection between the bots and the Russian state, intelligence officials in Britain and the US are confident that many have been backed by Moscow.
The most active account over Brexit was the apparently London-based Israel Bombs Babies, which had tweeted 1.55 million times since joining in September 2011. On the day of the vote, it tweeted 492 times about Brexit.
One typical post said: “UK & EUROPEAN CITIZENS CAN BE CONSCRIPTED INTO SYRIAN WAR UNDER LISBON TREATY #EU #brexit #referendum #voteleave”.
Ben Nimmo, from the Atlantic Council’s digital forensic research lab, said that such content was typical of a Russian troll factory. “Pro-Russian, pro-Assad, pro-Ukraine rebels, anti-Clinton, anti-Nato, anti-White Helmets, anti-EU,” he said. “The question is whether it’s pro-Kremlin or actually Kremlin-run; that’s something which only Twitter can answer definitively.”
Twitter has divulged to US investigators 2,752 accounts believed to be linked to Russia’s Internet Research Agency, a “trolling” organisation in St Petersburg. Employees there were told to amplify “negative attitudes” towards the EU, leaked documents show.
It was revealed this week that one of the 2,752 accounts had tweeted an image of a hijab-wearing Muslim woman crossing Westminster Bridge, apparently ignoring a victim of the terrorist attack on March 22. The tweet was quoted in multiple news reports and caused distress to the woman, who was not named.
Russian-linked bots also left messages supporting Remain. “They just stirred up dissent and discord in order to weaken this country and make themselves stronger,” Keir Giles, senior consulting fellow at Chatham House, said.
“It is possible to determine with a reasonable degree of confidence whether an account is being run by Russia by observing their behaviours. But we cannot be clear on the national security challenge until we have full access to the data available. Extracting this information from the social media companies is like pulling teeth . . . there seems to be little way in which they can be induced to do so.”
Twitter said it recognised the importance of maintaining the integrity of the election process and would support investigations into election interference. It said that it took action against bots.
Fifteen of the bots identified by Oxford remain active on Twitter. This week they put out 13,764 tweets, focusing on Israel, Syria and Mr Trump. One of the biggest trending topics was “Build2Indy”, calling for a second independence referendum for Scotland.
Q&A

What is a Twitter bot? Software that is trained to post tweets and retweets autonomously on the microblogging site. Some bots are simply programmed to retweet any tweets containing specific keywords or hashtags. More complicated bots are trained to mimic humans by creating and posting tweets and “liking” and retweeting others. The most advanced bots are “cyborg” accounts, which are closely controlled by a human and tweet a mixture of manmade and automated posts.
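As a concrete illustration of the simplest kind of bot described above, one that retweets anything containing chosen hashtags, here is a minimal Python sketch. The SimpleClient class and its stream and retweet methods are hypothetical stand-ins invented for this example, not part of any real Twitter API; a working bot would replace them with calls to the platform’s developer interface.

```python
# Minimal sketch of a keyword-retweet bot, using an invented in-memory client
# so the example runs on its own. Nothing here calls a real Twitter API.

TRIGGER_TAGS = {"#brexit", "#voteleave"}


class SimpleClient:
    """Hypothetical stand-in for a social-media client, holding a demo feed."""

    def __init__(self, feed):
        self.feed = feed  # list of (tweet_id, text) pairs

    def stream(self):
        # Yield each tweet in the demo feed, mimicking a live stream.
        for tweet_id, text in self.feed:
            yield tweet_id, text

    def retweet(self, tweet_id):
        print(f"retweeted tweet {tweet_id}")


def run_bot(client):
    # Retweet every tweet whose text contains one of the trigger hashtags.
    for tweet_id, text in client.stream():
        words = {word.lower() for word in text.split()}
        if words & TRIGGER_TAGS:
            client.retweet(tweet_id)


if __name__ == "__main__":
    demo_feed = [
        (1, "Polls open tomorrow #Brexit"),
        (2, "Nice weather in Gelendzhik today"),
        (3, "Make your voice heard #VoteLeave"),
    ]
    run_bot(SimpleClient(demo_feed))  # retweets tweets 1 and 3
```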
How common are they? A study this year from the University of Southern California and Indiana University found that up to 15 per cent of Twitter accounts were bots rather than people. The researchers said that the estimate was conservative because more complex bots could be mistaken for humans even by experts.
Who uses bots? Twitter bots are widely used by businesses to promote their products. Creators also sell the services of bots as fake Twitter followers to give the impression of fame and professional success. Recently, the spotlight has moved on to the use of bots by Russian state-backed groups to influence last year’s US presidential election. Twitter has admitted that at least 2,752 now-deactivated Russian accounts linked to a St Petersburg “troll factory” tweeted on election-related subjects.
What is a botnet? A network of linked bots created by an individual or group and designed to spread messages on Twitter, including advertising and propaganda. A network of hundreds or thousands of bots might be tasked to like and retweet content from specific accounts, or mention specific topics. Researchers at City University in London discovered a botnet of nearly 3,500 accounts that tweeted heavily around the Brexit referendum before falling quiet.
Does Twitter allow bots? Sort of. Users are allowed to build systems that “automatically broadcast helpful information”. Bots are not supposed to send “spammy” messages or to keep reposting the same tweets. Nevertheless many bots break these rules and remain active on the site.
How can you spot a bot? Bots often, but not always, tweet with a speed and regularity that would be difficult if not impossible for a human to maintain. They often have generic user names that can’t be linked to identifiable individuals and use profile pictures lifted from elsewhere on the internet.
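The speed-and-regularity heuristic described above can be expressed as a short calculation. The Python sketch below is an illustrative assumption, not a method used by the researchers cited in this article: it flags an account if its average posting volume or the clockwork regularity of its posting intervals crosses thresholds (max_daily, min_interval_cv) that are themselves invented for the example.

```python
from statistics import mean, stdev


def looks_automated(timestamps, max_daily=500, min_interval_cv=0.2):
    """Crude bot heuristic: very high volume or unnaturally regular spacing.

    timestamps: posting times in seconds, sorted ascending.
    max_daily and min_interval_cv are illustrative thresholds only.
    """
    if len(timestamps) < 3:
        return False
    span_days = (timestamps[-1] - timestamps[0]) / 86400
    per_day = len(timestamps) / max(span_days, 1 / 86400)
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Coefficient of variation near zero means clockwork-regular posting.
    cv = stdev(gaps) / mean(gaps) if mean(gaps) > 0 else 0.0
    return per_day > max_daily or cv < min_interval_cv


# Example: 600 posts spaced exactly 60 seconds apart look bot-like.
clockwork = [i * 60 for i in range(600)]
print(looks_automated(clockwork))  # True
```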
Where else are bots used? Bots are used on many platforms besides Twitter. They were used on Facebook and Instagram to influence last year’s US election. Jonathan Albright, research director at the Tow Center for Digital Journalism, recently said that Instagram, which is owned by Facebook, had become a more pervasive channel than Twitter for bots spreading political “memes” and viral “outrage videos”.