24 million fraudulent votes were cast for Hillary Clinton. Saudi Arabia funded French President Emmanuel Macron’s election campaign. The CIA planned to stop the Trump inauguration.
Fake news has become pervasive in our daily lives, turning into a dangerous tool that threatens human rights, information objectivity, and even democracy. “If it’s weaponised, it’s comparable to warfare,” says Andy Chun, professor at the City University of Hong Kong.
In these times, AI has risen as a tool to help combat the spread of fake news. But how successful can it be? GovInsider caught up with Andy Chun, a leading AI pioneer, to find out more.
Fake news, real consequences
The repercussions of fake news have recently taken the global stage. The US has been investigating foreign meddling in the 2016 elections, and Facebook has faced global backlash for its inability to curb the spread of fake news.
The starkest consequence of fake news goes back a decade, to Estonia. The first country in the world to create a digital identity and a global leader in e-government, Estonia is at the forefront of technology. In April 2007, it became an example of fake news sparking the first ever cyber attack on a country. Protests broke out in Estonia when false reports in Russian news claimed that a Soviet-era World War Two memorial and war graves were being destroyed; in fact, the statue was simply being moved.
Estonia erupted into riots, and entire branches of the banking system, news media outlets and whole government bodies were knocked offline by unprecedented levels of internet traffic. Seizing on the confusion, attackers hacked into Estonia’s online institutions and brought them down.
Role of AI
Rooting out fake news means fact-checking every article, and AI can do so far faster than any human. “In terms of fact checking AI could look at all online sources, all encyclopedias, all dictionaries, all past newspapers and in seconds fact check them,” Chun adds.
The mechanism is simple. Much like a reporter, the AI is taught to weed out fake news articles by checking them against a historical record of fake news, comparing them with reports from reputable news sources, and analysing an article’s style for telltale signs of clickbait.
“In many ways, AI tries to mimic humans so the easiest way to explain is how a human, a professional journalist, would try to figure out if news is real or fake,” he explains. This places responsibility on news outlets as well – “news articles must state the facts, observations, sources”, Chun adds.
AI needs to be taught to recognise these qualities of good journalism; an article that rates highly on this score can be deemed more reputable. Using AI to rate articles in this way, news outlets could be given reputation or objectivity scores, Chun explains.
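The three checks Chun describes can be sketched in code. The following is a minimal, hypothetical illustration only: the clickbait marker list, the weights and the scoring scheme are all invented assumptions for this article, not drawn from any real fact-checking system.

```python
# Hypothetical sketch of the three checks described above: match against
# known fakes, corroborate claims with reputable sources, and penalise
# clickbait-style writing. All lists, weights and thresholds are
# illustrative assumptions, not a real system.

CLICKBAIT_MARKERS = {"shocking", "you won't believe", "exposed", "secret"}

def style_penalty(headline: str) -> float:
    """Penalty in [0, 1] for clickbait-style writing in a headline."""
    text = headline.lower()
    hits = sum(marker in text for marker in CLICKBAIT_MARKERS)
    punct = headline.count("!") + headline.count("?")
    return min(1.0, 0.3 * hits + 0.2 * punct)

def credibility_score(headline: str, claims: set,
                      known_fakes: set, corroborated: set) -> float:
    """Combine the three checks into a credibility score in [0, 1]."""
    if claims & known_fakes:       # matches a known fake story outright
        return 0.0
    if claims:                     # share of claims reputable sources confirm
        support = len(claims & corroborated) / len(claims)
    else:
        support = 0.5              # nothing checkable either way
    return max(0.0, support - style_penalty(headline))
```

A score like this could feed the per-outlet reputation ratings Chun mentions, simply by averaging over an outlet’s articles.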
Social media plays an important part in stopping the spread of fake news. Facebook recently acquired the AI startup Bloomsbury.ai, reportedly to help trawl for fake news content. The startup previously built a news platform that combined machine learning with human fact-checkers to identify fake news.
Facebook has also recently reworked its news feed algorithm to prioritise content from family, friends and reputable news sources. One thing, however, is clear – “AI is evolving as well, so it’s an ongoing battle. There’s no end, it’s going to be never-ending,” Chun highlights.
No one-stop solution
AI, however, isn’t the one-stop solution it is expected to be. News-checking AI can struggle to categorise sarcasm and satirical content, cope with changing cultural references, pick out fraudulent images, and even map political bias, Chun says.
The key to making AI more accurate lies in cooperation between machines and humans. Websites and social media platforms should allow users to report fake news so that the algorithms can learn from the data, he believes. AI could also pick out suspicious news articles for humans to have the last word on whether a piece is fake. “It’s a balance of having automated and self-reporting mechanism,” he adds.
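The human-in-the-loop balance he describes might look something like the sketch below, assuming each article carries a 0–1 automated credibility score and a count of user reports. The threshold and weighting are invented for illustration.

```python
# Hypothetical sketch of the human-in-the-loop flow described above:
# a low automated credibility score or many user reports escalates an
# article to a human reviewer, who has the final word. The threshold
# and weighting below are illustrative assumptions.
import heapq

REVIEW_THRESHOLD = 0.5  # assumed cut-off for escalating to a human

def queue_for_review(articles):
    """articles: iterable of (title, auto_score, user_reports).
    Returns titles a human should check, most suspicious first."""
    heap = []
    for title, auto_score, user_reports in articles:
        # a low automated score and many user reports both raise suspicion
        suspicion = (1.0 - auto_score) + 0.1 * user_reports
        if suspicion >= REVIEW_THRESHOLD:
            heapq.heappush(heap, (-suspicion, title))
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

The priority queue keeps human reviewers, the scarce resource, focused on the most suspicious pieces first.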
Human biases must be accounted for at each stage; even app developers can unknowingly write their political biases into algorithms. “Then you might argue that the Democrats report all Republican views as fake and so on, there’s no good solution,” he explains.
Education also plays an important role in curbing the rise of fake news. The use of AI and government legislation must be combined with educating the public to spot fake news and to be wary of sources. Teaching people from a young age to source information, fact-check what they read and not believe everything they hear or see is a core part of the solution, he says.
“AI cannot be solely responsible for solving the problem,” Chun notes, “there’s a whole ecosystem of things to do.”
Fake news has often been likened to the Hydra, the many-headed serpent of Greek myth: cut off one head, and two more grow in its place. But with combined human and AI efforts, the consequences of fake news can be minimised.