The generative AI wave is inevitable – what should critical information infrastructure owners do?
Generative AI is unstoppable, and cybersecurity leaders will have to be at the forefront when organisations begin implementing such models, said speakers at a recent event held by BeyondTrust and the Association of Information Security Professionals (AiSP).
At a recent event held by BeyondTrust and the Association of Information Security Professionals (AiSP), speakers discussed the challenges faced by critical information infrastructure owners. Image: BeyondTrust
“Does security have a seat at the table for the entire process of your company adopting generative AI? If not, you need to get a seat at the table. You need to be one of the drivers for it. Because if you build it wrong or insecure, fixing it will be very hard,” declared Nathaniel Callens, Chief Information Security Officer, Grab.
He was speaking at Securing Critical Infrastructure in an AI-Powered Era, a recent event hosted by the Association of Information Security Professionals (AiSP) and BeyondTrust, a leading provider of identity and access security solutions.
At the event, key cybersecurity leaders shared essential strategies for protecting critical information infrastructure (CII), as well as the risks and opportunities that come with the surging popularity of generative AI.
Gen AI is inevitable. Let’s secure the playground
First, Callens highlighted that the use of generative AI in the workplace will be a game changer and that information security professionals need to play an active role in charting their organisations’ approach towards adopting AI.
“If you think for a minute that your developers in your company aren’t using it, you need to get them to tell you the truth. Because I promise you they absolutely are. There are estimations right now that the average developer will be anywhere from twice as effective to four times as effective using generative AI,” said Callens.
He explained that generative AI will unleash the next productivity wave, ranging from AI-powered chatbots that provide better customer service, to AI models which can perform medical analysis, and AI that can speed up resume screening.
And Grab is “all in”, with over 400 machine learning models in production, he shared, including an internal “GrabGPT” built on OpenAI’s ChatGPT that employees can tap on.
But generative AI poses some critical risks. For one, malicious actors can probe the limits of public-facing generative AI products to reverse-engineer proprietary models and discover vulnerabilities to exploit. This can also allow them to obtain private company data.
Beyond that, malicious actors could engage in “data poisoning” by submitting false or malicious data to skew how the chatbot performs.
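The mechanics of data poisoning can be illustrated with a toy sketch (all data and names here are hypothetical): a naive model that labels a phrase by majority vote over past submissions will flip its answer once an attacker floods the feedback channel with mislabeled samples.

```python
from collections import Counter

def majority_label(samples, phrase):
    """Label a phrase by majority vote over submissions containing it."""
    votes = [label for text, label in samples if phrase in text]
    return Counter(votes).most_common(1)[0][0]

# Clean history: "refund" overwhelmingly appears in legitimate requests.
clean = [("please refund my order", "legitimate")] * 8
clean += [("refund scam link", "malicious")] * 2
# majority_label(clean, "refund") -> "legitimate"

# An attacker submits a flood of deliberately mislabeled samples...
poisoned = clean + [("refund", "malicious")] * 20
# ...and the same phrase is now classified the attacker's way.
# majority_label(poisoned, "refund") -> "malicious"
```

Real systems use statistical learning rather than a vote, but the principle is the same: a model that retrains on untrusted submissions inherits whatever skew those submissions carry.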
“The question is really about balancing risk and reward… You have to create the playground for [employees] to work in and then secure the playground as best you can,” he said. Internal deployments of generative AI foundation models are one way of doing so, as employees can work within a secured space without risk of data leakage.
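One concrete way of "securing the playground" is to place a gateway between employees and the model. As a minimal sketch (the patterns and function name are illustrative assumptions, not any vendor's product), such a gateway might redact obvious secrets from prompts before they ever reach the underlying model:

```python
import re

# Hypothetical redaction rules an internal AI gateway might apply.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{8,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace sensitive substrings with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

print(redact_prompt("Summarise the email from alice@example.com using sk-abc12345XYZ"))
# -> Summarise the email from [REDACTED_EMAIL] using [REDACTED_API_KEY]
```

Production gateways would add many more detectors (customer identifiers, source code, credentials) and log every redaction for audit, but the design choice is the same: sensitive data is stripped at the boundary rather than trusted to the model.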
He also highlighted that generative AI can be used to power the next generation of cybersecurity, from improving threat detection and analysis to enhancing the operations of security tools.
Cyber Security Agency: No current plans to introduce generative AI regulations
Christopher Anthony, Director of Critical Information Infrastructure at the Cyber Security Agency of Singapore (CSA), also spoke, sharing best practices and strategies that CII owners can adopt to maintain the cybersecurity posture of national assets.
Critical infrastructure refers to 11 key sectors essential to the country, including utilities such as energy and water, financial services, healthcare, transportation, government, and security services.
During his presentation, he highlighted that CII owners can refer to CSA’s Code of Practice to understand the obligations they are expected to comply with. Such obligations include the storing of access logs, performing biannual threat hunting exercises, and conducting cybersecurity risk assessments when migrating to cloud environments.
He shared that the current Code of Practice, released in 2022, emphasises proactive defence and the importance of tailoring defensive requirements for the different contexts that every CII sector faces.
He also noted the importance of CSA working with industry players, sharing that the agency collaborated with large cloud service providers such as Amazon Web Services (AWS), Google Cloud, and Microsoft Azure to co-create the cloud cybersecurity guidelines that CII owners need to be aware of.
When asked by a participant about adding a new section on generative AI to the Code of Practice, Anthony shared that CSA was exploring the possibility, though there are no current plans to do so.
Securing endpoints, limiting excessive access
Finally, Leon Tan, Privileged Access Management (PAM) Evangelist at BeyondTrust, spoke about the importance of securing organisations against privilege-based attacks, which will only become more prevalent in the age of AI innovation.
He explained that attack vectors are constantly multiplying, and that the modern attack surface contains many possible points of entry, such as hybrid cloud architectures, Internet of Things devices, SaaS applications, and other tools.
This has led to a proliferation of privileged accounts that have administrative access and users who have too much access, he said.
Such accounts are prime targets for malicious actors, who can use them to move laterally across the network and access sensitive data. In fact, recent research conducted by BeyondTrust on Zero Trust Priorities for Singapore Companies found that more than half of IT leaders believe users in their organisation have privileges beyond what is required to do their jobs.
This is why it is critical for organisations to focus on privilege management and remove excessive user privilege, he said.
Organisations can protect themselves by implementing least privilege principles on IT assets such as desktops and servers. This can mean removing administrator rights from users who do not need them, verifying user identity at every step of a session, and auditing access to privileged information.
Organisations can also implement accountability for privileged accounts, such as through monitoring every privileged session, analysing the behaviour of privileged users, and rotating passwords and keys, he shared.
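The two ideas above, granting only the privileges a user explicitly needs and recording every privileged action for audit, can be sketched in a few lines. This is a minimal illustration with hypothetical names, not a PAM product implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class User:
    name: str
    # Least privilege: a user holds only what was explicitly granted.
    privileges: set = field(default_factory=set)

# Accountability: every privileged attempt is recorded, allowed or not.
audit_log = []

def perform_privileged_action(user: User, action: str) -> bool:
    """Allow the action only if explicitly granted, and log the attempt."""
    allowed = action in user.privileges
    audit_log.append({
        "user": user.name,
        "action": action,
        "allowed": allowed,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return allowed

# A standard user has no admin rights by default.
analyst = User("analyst01", privileges={"read_reports"})
assert perform_privileged_action(analyst, "read_reports")       # granted
assert not perform_privileged_action(analyst, "delete_server")  # denied
```

Real deployments layer on session recording, behavioural analytics, and automatic credential rotation, but the core discipline is the same: deny by default, grant narrowly, and keep an audit trail of every privileged session.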