Organisations scarily unprepared for generative AI’s security risks – ExtraHop report

By ExtraHop

Generative AI is already driving improvements in productivity, but a new report suggests that leaders are underestimating what it takes to protect systems against security risks. GovInsider speaks to Daniel Chu, VP of Systems Engineering at ExtraHop, to learn more.

As artificial intelligence becomes pervasive in government, how can leaders protect their systems? Image: Canva

Last year, Samsung employees leaked confidential data to ChatGPT – data that could potentially be used to train the model and be inadvertently exposed to other end-users. The company responded by banning the use of generative AI, reported Bloomberg.

This will not be an isolated incident, according to a new report by cybersecurity leader ExtraHop. The report found that 81 per cent of Singaporean organisations report their employees are using generative AI in the workplace – but only 40 per cent have invested in monitoring tools and only 38 per cent are training employees in the responsible use of AI.

In fact, of the 1,200 IT and security leaders surveyed across six countries, 73 per cent reported that employees use generative AI, but fewer than half are investing in measures to monitor use or train employees.

“This might be a sign of overconfidence by general management and a misunderstanding of the risks that come with leveraging generative AI tools,” Daniel Chu, VP of Systems Engineering at ExtraHop, tells GovInsider.

Disconnect between perception and reality

The Generative AI Tipping Point report found gaps between how leaders perceive the use of generative AI within their organisations and the concrete steps they have taken to ensure employees are using generative AI responsibly.

For instance, over four-fifths of all respondents reported high confidence in defending against AI threats – despite more than half lacking any technology to monitor generative AI use or training for employees.

Similarly, nearly one in three organisations worldwide have banned these tools outright, but only five per cent of respondents reported that their employees never use them.


“When you have good technology that improves productivity, a blanket ban is going to be perceived as not understanding the employee. There’s going to be pushback,” notes Chu. 

Employees may resort to workarounds like using personal devices to access these tools. And when employees use generative AI tools without oversight, this can lead to the leakage of sensitive information – both proprietary and personal. 

“Hallucinations” can also pose security risks, particularly when generative AI produces source code that is insecure or includes malware.

“There’s too many technical workarounds. The security practitioners’ job is not necessarily to prevent the use of generative AI, but to manage the risks involved,” he says.

Embrace tech in a risk-aware manner

Security leaders should support organisations in embracing generative AI to improve productivity, while managing risks in a responsible manner, he explains.

First, leaders should establish platforms to better understand how such tools are helping employees across the organisation, from developers to marketing professionals. For instance, ExtraHop has released a dashboard that enables security leaders to view traffic to OpenAI tools, he notes.

“We have seen far more cases than leaders expected. There are times when organisations realise that their third-party solutions are leveraging generative AI in the backend, even though leaders have implemented a blanket ban on OpenAI services,” he says.
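As a rough sketch of what that kind of visibility could look like (a simplified illustration, not ExtraHop's product), the snippet below scans JSON-lines proxy or DNS logs for requests to a hypothetical list of generative AI domains and tallies them per source host.

```python
import json
from collections import Counter

# Hypothetical watch list of generative AI API domains;
# a real deployment would maintain a longer, curated list.
GENAI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "chatgpt.com",
}

def genai_traffic_summary(log_lines):
    """Count requests to known generative AI endpoints per source host.

    Assumes JSON-lines proxy/DNS logs with 'src' and 'host' fields --
    a simplified stand-in for whatever telemetry a monitoring tool
    actually collects.
    """
    hits = Counter()
    for line in log_lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed log entries
        host = record.get("host", "")
        if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
            hits[record.get("src", "unknown")] += 1
    return hits

if __name__ == "__main__":
    sample_logs = [
        '{"src": "10.0.0.12", "host": "api.openai.com"}',
        '{"src": "10.0.0.12", "host": "intranet.example.gov"}',
        '{"src": "10.0.0.45", "host": "chatgpt.com"}',
    ]
    for src, count in genai_traffic_summary(sample_logs).items():
        print(f"{src}: {count} request(s) to generative AI services")
```

In practice, this sort of telemetry would feed a dashboard so security teams can see which users, departments and third-party services are calling generative AI APIs.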

Understanding how employees currently use generative AI will enable leaders to shape more nuanced governance policies moving forward, he says. Such policies can also guide employees on how to use these tools more responsibly.

Next, security leaders can take this opportunity to train employees on generative AI use cases within their organisation.

“This is a great opportunity to show employees not just what they can't do, but also how generative AI can make their lives better,” he says.

According to the report, 60 per cent of respondents across the globe believe that governments should take the lead in setting clear regulations for businesses to follow. Governments can be most effective when it comes to defining the ethical use of personal information and in mitigating bias in AI models, says Chu.

Internal models a good first step

In the public sector, agencies have responded to generative AI risks by building their own internal versions of such models to prevent data leaks. Is that enough? Not quite, says Chu.

“It helps with alleviating concerns around data exposure. But once you have a crown jewel, there are the usual data breach implications involved in terms of securing the model from unauthorised access,” he explains.

This means that organisations still need to guard against employees gaining unauthorised access to sensitive data and even insider threats from within the network.

Agencies should also be careful about which data they use to train their internal models. For instance, organisations should refrain from training large language models on personally identifiable information, as such data could be inadvertently reproduced in responses to other end-users.
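As a simple illustration of that precaution (an assumed approach, not one prescribed in the report), text can be screened for common PII patterns and redacted before it is added to a training corpus; the patterns and placeholder labels below are illustrative only.

```python
import re

# Illustrative patterns only -- real PII detection typically combines
# many more patterns, dictionaries and ML-based entity recognition.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "NRIC": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),  # Singapore NRIC-style IDs
}

def scrub_pii(text: str) -> str:
    """Replace detected PII with placeholder tokens before the text
    is added to a training corpus."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    record = "Reach me at jane.tan@example.com or +65 9123 4567, NRIC S1234567D."
    print(scrub_pii(record))
    # -> "Reach me at [EMAIL] or [PHONE], NRIC [NRIC]."
```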

“The other unique aspect is that generative AI models are very complex. The number of people in the world who truly understand these nuances are few and far between. And when you have complexity, that’s when you have risks,” he says. 

“This is a really tremendously powerful tool that’s going to be with us forever. That really underscores how important it is for users to really understand the risks and implications and that things are evolving by the day.”


To read ExtraHop's latest report, click here.