Battling cybercrime in the disinformation age

By Recorded Future

Interview with Roman Sannikov, Director of Cybercrime and Underground Intelligence at Recorded Future.

Tens of thousands of minority voters in the US received robocalls that falsely claimed voting by mail would allow the government to use voters’ personal information and track them down for mandatory vaccines.

Investigations revealed the calls were created and funded to deter voters of colour from participating in the election, AP News reported. Disinformation has grave consequences: it can undermine democracy and stoke strife and racial tension, says Roman Sannikov.

Sannikov is the Director of Cybercrime and Underground Intelligence at Recorded Future. He shares how his team engages cybercriminals on the dark web and takes down malicious sites to combat disinformation.
 

Monitoring the dark web


Sannikov’s main role is to combat disinformation, much of which is intertwined with nation-state interests, he says.

His team crawls the dark web and underground forums to listen in on criminal discussions. That gives them a sense of which attacks threat actors are planning and who they are targeting, Sannikov says. “Sometimes it's a matter of hacking accounts of legitimate government employees that they can then use to disseminate malicious content.”

His team also directly interacts with these threat actors to learn about their attack techniques, he adds.

Recorded Future’s research group, Insikt, previously engaged threat actors selling disinformation services on Russian-speaking underground forums. They contacted two separate groups: one to promote a fictitious company and the other to discredit it.

“This isn’t something that they had to reinvent the wheel for. Whenever they’re doing this, they literally check the boxes for us in terms of what we wanted to do, how much we wanted to pay,” Sannikov said in a Recorded Future podcast.

To promote the fake company, the first group suggested planting positive articles based on fictitious interviews and running social media promotion with bots. To discredit it, the other group suggested filing fake claims of unethical and illegal practices with local law enforcement.

The process took only around six weeks and cost US$6,000. “Awareness and vigilance” are key, said Sannikov, who advised organisations to reach out quickly to sources of erroneous information before it gains traction on social media.
 

Taking down malicious sites


A huge part of disinformation also comes from typosquatting, a technique that creates fake domains based on typos or spelling errors. The URL of a typosquatting site might look like Goggle.com instead of Google.com, Sannikov says.
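
To make the idea concrete, the sketch below generates look-alike candidates for a domain using four simple transformations: dropping, swapping, doubling and substituting characters (the last covers the Goggle.com example). It is a hypothetical illustration, not any vendor's tooling; real typosquatting campaigns and detection tools also use homoglyphs, keyboard-adjacent keys, hyphenation and alternate top-level domains.

```python
# Hypothetical illustration of typosquat candidate generation; real tooling
# covers far more variant classes (homoglyphs, keyboard proximity, hyphens,
# alternate TLDs, etc.).
import string


def typo_variants(domain: str) -> set[str]:
    name, _, tld = domain.partition(".")
    variants = set()

    # Drop one character: "google" -> "gogle", "googl", ...
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:] + "." + tld)

    # Swap adjacent characters: "google" -> "ogogle", "gogole", ...
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        variants.add(swapped + "." + tld)

    # Double one character: "google" -> "ggoogle", "gooogle", ...
    for i in range(len(name)):
        variants.add(name[:i] + name[i] + name[i:] + "." + tld)

    # Substitute one character: "google" -> "goggle", "foogle", ...
    for i in range(len(name)):
        for c in string.ascii_lowercase:
            if c != name[i]:
                variants.add(name[:i] + c + name[i + 1:] + "." + tld)

    variants.discard(domain)  # swapping identical letters can reproduce the original
    return variants


if __name__ == "__main__":
    candidates = typo_variants("google.com")
    print(len(candidates), "candidates generated")
    print("goggle.com" in candidates)  # True: the example from the article
```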

This technique isn’t new, he adds, but its use has increased dramatically during the pandemic. Fake sites that imitate official Covid-19 information sites have popped up. “Anytime that there is an emergency, and there's a thirst for information, people tend to be less skeptical.”

Citizens might unknowingly key personal information into these fake sites, where it is stolen by hackers. Attackers can also push malicious content onto users’ devices and rope them into botnets, Sannikov says.

Sannikov’s team takes client domains and runs them through algorithms that identify common misspellings. It then investigates sites that closely resemble client domains for malicious activity. Whenever a malicious site is detected, Recorded Future works with internet security organisation Fraud Watch to report and take the site down.
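
A minimal sketch of that kind of matching is shown below. It is an assumption-laden illustration rather than Recorded Future's actual pipeline: newly observed domains are scored against a hypothetical client watchlist using an edit-similarity ratio from Python's standard library, and close but non-identical matches are flagged for investigation.

```python
# Hypothetical sketch of flagging look-alike domains, not Recorded Future's
# detection pipeline. Observed domains are compared with a client watchlist
# using a standard-library similarity ratio; near-matches are flagged.
from difflib import SequenceMatcher

CLIENT_DOMAINS = ["google.com", "recordedfuture.com"]  # hypothetical watchlist


def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two domain strings."""
    return SequenceMatcher(None, a, b).ratio()


def flag_suspicious(observed, threshold=0.85):
    """Yield (observed_domain, client_domain, score) for close, non-identical matches."""
    for candidate in observed:
        for client in CLIENT_DOMAINS:
            score = similarity(candidate.lower(), client.lower())
            if candidate.lower() != client.lower() and score >= threshold:
                yield candidate, client, score


if __name__ == "__main__":
    newly_seen = ["goggle.com", "recordedfutur.com", "example.org"]
    for domain, client, score in flag_suspicious(newly_seen):
        print(f"{domain} resembles {client} (similarity {score:.2f}) - investigate")
```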

Typosquatting is bad for private companies, but it has even more drastic consequences for the public sector, says Sannikov. That’s because people depend on government agencies for factual information during emergencies like Covid-19 or natural disasters.

“Government agencies do have a responsibility to their people to provide accurate information,” he adds. “It just really behooves the public sector to keep a very close eye on anything that appears to be coming from them.”