Exclusive: How NATO fights fake news

By Chia Jie Lin

Interview with Aivar Jaeski, former deputy director of the NATO Strategic Communications Centre of Excellence and a former colonel in the Estonian Defence Forces.

Walk the streets of St. Petersburg, Russia, and you might come across a nondescript building by the name of the “Internet Research Agency” (IRA). While it looks innocuous, the IRA is one of the world’s largest propaganda machines. Known as a “troll farm”, its employees use online bots to spread fake news and sway voter opinion towards a pro-Russian stance.

Fake news is now rampant across the world, with terrorists and states setting up troll farms to promote their narratives. In the first half of 2018, the Kremlin invested US$10 million in the IRA’s trolling operations, double that of 2017, Quartz reported.

“Twitter has become a backbone of command and control for some of these organisations,” says Aivar Jaeski, former deputy director of the North Atlantic Treaty Organisation’s (NATO) Strategic Communications Centre of Excellence (StratCom), a multi-national task force that monitors fake news spread by terrorist and state-sponsored bots on the internet.

Jaeski tells GovInsider how the NATO task force is using machine learning to hunt down bots and training officials to combat fake news.

Hunting bots


Terrorist groups like ISIS use online bots as their main tool to radicalise people into committing terrorist acts, Jaeski warns. People “can be violent against the state or an organisation if their perceptions are influenced badly,” he says. Since 2015, Twitter has suspended over 360,000 accounts and banned another 125,000 for promoting terrorism, BBC News reported.

The youth are prime targets for radicalisation, as extremist narratives often appeal to their desire to be a hero. “Terrorist organisation networks are very good at recruiting people from Europe, especially youngsters,” Jaeski observes. In 2017, 49% of radicalised jihadists in the UK were aged between 15 and 25, according to a BBC News database.

Meanwhile, state-sponsored bot networks like the IRA’s stoke online debates to intensify tensions and conflict along racial and political lines. A NATO study published in March found that some Russian bots promoted commentary supporting President Donald Trump, while others backed the Black Lives Matter campaign, drawing users into heated debates on Twitter and Reddit.

The key to fighting fake news is to expose these bot networks, according to Jaeski. StratCom tracks them down and publishes its findings. We “counter those extremist narratives, first of all, by revealing their lies and their networks”, he says. This lets governments know which accounts to shut down, and teaches citizens to be more discerning about whom they follow. “We educate our own people, especially government officials, to understand where the danger is and how it looks,” he adds.

To hunt down bots, StratCom’s coders use machine learning software - a “Botometer” - to estimate whether a social media account is a bot. The coders then employ a set of tools to visualise these activity patterns and compare them with those of human accounts. “Robots have certain behaviour and are recognisable,” Jaeski says. For instance, bot accounts post at artificially regular intervals, in contrast to human accounts, which post at random intervals.
Image: NATO StratCom Centre of Excellence - CC BY 2.0
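
To make that pattern concrete, here is a minimal sketch in Python of how an interval-regularity check might work, assuming we already have an account's post timestamps. The function names and the 0.1 threshold are illustrative assumptions, not StratCom's actual tooling.

```python
# Illustrative sketch only: flags accounts whose posts arrive at
# suspiciously regular intervals, the tell-tale Jaeski describes.
# Function names and the threshold are assumptions, not StratCom code.
from datetime import datetime, timedelta
from statistics import mean, stdev

def interval_regularity(timestamps):
    """Coefficient of variation of the gaps between consecutive posts.

    Humans post in irregular bursts (high variation); scheduled bots
    tend towards near-constant gaps (variation close to zero).
    """
    ts = sorted(timestamps)
    gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return float("inf")  # too few posts to judge
    return stdev(gaps) / mean(gaps)

def looks_automated(timestamps, threshold=0.1):
    # Very low variation in posting gaps suggests machine scheduling.
    return interval_regularity(timestamps) < threshold

# An account posting exactly every 30 minutes scores near zero:
start = datetime(2018, 6, 1)
bot_like = [start + timedelta(minutes=30 * i) for i in range(20)]
print(looks_automated(bot_like))  # True
```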

The bot-hunting process is iterative. As the machine learning algorithms pick up more data on bot accounts, their predictions become more accurate. Over time, they can learn to identify anomalous user behaviour that human coders may have missed, according to a NATO study.
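
As a rough illustration of that loop, the sketch below runs a stand-in anomaly detector, scikit-learn's IsolationForest, over hypothetical per-account activity features. The feature set and the library choice are assumptions; the NATO study does not name the algorithms StratCom uses.

```python
# Rough sketch of iterative anomaly detection on account features.
# IsolationForest and the feature set are assumptions; the NATO study
# does not name StratCom's actual algorithms.
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per account: [posts_per_day, interval_variation,
# retweet_fraction, followers_to_following_ratio]
accounts = np.array([
    [12.0,  0.90, 0.30, 1.10],   # bursty, human-looking activity
    [480.0, 0.02, 0.95, 0.01],   # high volume, clockwork gaps
    [8.0,   1.20, 0.10, 0.80],
    [350.0, 0.05, 0.99, 0.02],
])

model = IsolationForest(contamination="auto", random_state=0)
model.fit(accounts)
print(model.predict(accounts))  # -1 flags anomalous, bot-like accounts

# As new accounts are observed, append their features and refit,
# so the detector keeps picking up patterns analysts may have missed.
```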

Educating officials


Beyond tracking bots, Jaeski believes it is essential to educate government officials to recognise them, as officials are the first line of defence against fake news. “We need to educate our government officials [so] that they know how to recognise bots,” he advises.

The task force has created an online course that teaches officials from NATO member nations to identify bots and fake news. “For NATO members, they are allowed to take this online course and it's located in NATO servers,” Jaeski remarks. This then allows officials to be “better prepared for counter-actions” against bots, he adds.

Another upside of the online course is that it helps countries collaborate against bot networks, since their officials share a common skill set. “Collective institutions are always stronger than single establishments - sharing information and knowledge, and educating each other,” he says. Estonian officials, for instance, learn from the same online course as their counterparts in Belgium and Canada.

Conversely, countries that do not work with others leave themselves highly susceptible to fake news. “They are missing opportunities and face a bigger threat on influence operations than they can even recognise,” Jaeski warns.

NATO has several joint command centres in Belgium, the United States and Germany that are staffed by officials from its member nations. “We want to avoid future wars; that is the goal,” he says. These centres keep transatlantic communication lines open, allow member nations to monitor bot networks together, and coordinate their cyber-defences.

No nation will emerge from the war on fake news unscathed. But countries can learn from NATO by arming themselves with bot-hunting algorithms and educating their officials.