How Sweden is fighting deep fakes

By Nurfilzah Rohaidi

Interview with Joel Brynielsson, Research Director at the Swedish Defence Research Agency.

“We're entering an era in which our enemies can make it look like anyone is saying anything at any point in time, even if they would never say those things,” former US President Barack Obama once said.

Or did he? Last year, a US filmmaker showed just how easy it is for anyone to create a rather convincing fake video of a public figure using software available today. Deep fakes are realistic-looking and -sounding fake videos or audio clips, named after the deep learning algorithms that make them possible. A human simply feeds in real audio or video of a person, and the algorithm does the rest.

For as much as AI can help us detect tumours or drive cars, it can also play a much more sinister role in enabling the spread of misinformation and fake news. GovInsider spoke to Dr Joel Brynielsson, Research Director at the Swedish Defence Research Agency, on what can be done to tackle the spread of deep fakes.
 

The deep fakes conundrum


A Bloomberg reporter recently described deep fakes as “fake news on steroids”, remarking that they are “extremely easy” to make. Brynielsson notes that “you can use these kinds of synthetic images for creating situations which do not actually exist. A riot is going on - but there is no riot, because it is all made up.”

Fake news and influence operations are “an increasing threat” to governments today, Brynielsson says, allowing malicious actors to push their narratives with convincing stories and content. It is no stretch of the imagination to say that deep fakes could help sway elections, or incite violence and unrest. “We need to be careful when we read everything on the internet, we need to be critical,” Brynielsson says on the sidelines of the recent Singapore Defence Technology Summit, organised by the Defence Science and Technology Agency.

A related problem concerns content, whether fake or real, that has been tampered with for the purpose of fooling AI systems. Such content is deliberately crafted to be misclassified or to go undetected - to slip past intelligence systems, for example. This forms part of Brynielsson’s research agenda: “when AI algorithms look at the ‘wrong’ parts in the figure, this is a clear sign that the content has been messed with”, he explains.
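In code, the idea can be sketched in a few lines. The snippet below is a minimal illustration in Python, not the agency’s actual tooling: the tiny stand-in classifier and the assumed “subject region” are placeholders, but the pattern - computing a gradient-based saliency map and flagging inputs where the model’s attention lands in unexpected places - is the one Brynielsson describes.

    import torch
    import torch.nn as nn

    # Stand-in classifier; in practice this would be a trained image
    # recognition or forensics network (an assumption for illustration).
    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 2),
    )
    model.eval()

    def saliency_map(image: torch.Tensor) -> torch.Tensor:
        """Gradient of the top class score with respect to the input pixels."""
        image = image.clone().requires_grad_(True)
        model(image.unsqueeze(0)).max().backward()
        # Max over colour channels gives one importance value per pixel.
        return image.grad.abs().max(dim=0).values

    image = torch.rand(3, 64, 64)  # placeholder input frame
    sal = saliency_map(image)

    # Heuristic check: if most of the gradient mass falls outside the
    # centre crop (where the subject is assumed to sit), the model is
    # "looking at the wrong parts" - a possible sign of tampering.
    if sal[16:48, 16:48].sum() / sal.sum() < 0.5:
        print("Warning: attention concentrated outside the subject region")

Real forensic systems use far richer attribution methods, but the heuristic is the same: a model whose evidence comes from the wrong place deserves scrutiny.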
 

Sniffing out the fakes


In June, US researchers revealed a digital forensics technique that can spot deep fakes 92% of the time. It uses machine learning to pick out subtle cues in speech and facial movement that are unique to a given speaker - the way a person nods while saying particular words, for instance, MIT Technology Review reports. Today’s deep fake algorithms are not able to mimic such small gestures.
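The general recipe can be illustrated with a short sketch - emphatically not the researchers’ actual system. Here, random numbers stand in for real per-clip motion features (in practice these would come from facial-landmark and head-pose tracking), and a simple classifier learns to separate the motion “signatures” of genuine footage from those of synthesised footage.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Placeholder features: each row summarises one video clip, e.g. as
    # correlations between head movements and spoken words. Real systems
    # would extract these with facial-landmark and head-pose tracking.
    rng = np.random.default_rng(0)
    n_clips, n_features = 200, 16
    X_real = rng.normal(0.0, 1.0, (n_clips, n_features))  # genuine clips
    X_fake = rng.normal(0.5, 1.0, (n_clips, n_features))  # synthesised clips

    X = np.vstack([X_real, X_fake])
    y = np.array([0] * n_clips + [1] * n_clips)  # 0 = real, 1 = fake

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # The classifier learns which motion signatures belong to genuine
    # footage; clips that fail to mimic them are flagged as fakes.
    clf = SVC(kernel="rbf").fit(X_train, y_train)
    print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")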

However, such techniques may not be a permanent solution. Experts have noted that it is only a matter of time before generation algorithms become sophisticated enough to evade even these deep fake detectors, according to a report in The Verge.

Brynielsson emphasises how important it is for governments to build “robust AI solutions”, especially as AI becomes more prevalent - and especially when it comes to the safety of citizens. “We do not want our AI to be fooled,” he remarks. “If we make the wrong predictions in defence and security, then you can have very bad consequences.”

Brynielsson hopes to put together a comprehensive AI strategy for his research agency by the end of this year, one that would be governed by national intelligence and security interests. “It involves having the armed forces on board,” he notes, “because they put money into small projects all the time, and I think we should target the big questions and have a long-term plan.” Deep fakes research would be one big problem statement to focus on.

The spread of fake news and misinformation cannot be stopped. But with advances in research and technology, there will be new and evolving responses to the challenge. “If we can’t discriminate between serious arguments and propaganda, then we have problems,” Obama once said - and this time, he really did say it.