Three ways AI will deliver better cybersecurity
By Protegrity
Eliano Marques, Chief Data & AI Officer at Protegrity, shares how AI can be used to reduce its own vulnerabilities.
Malicious actors can poison the data that fuels AI, weakening its defences. Cybersecurity teams can use AI’s own analytical ability to flag any unusual behaviour and detect these threats early.
Marques shares how data protection tools help AI stay secure, as well as innovative ways to securely build machine learning models across different organisations.
The state of AI security
Cybersecurity risks arise from the training data that AI algorithms learn from. If this data is tampered with, the AI can behave in unintended and potentially harmful ways.
Gartner predicts that 30 percent of all AI cyberattacks by 2022 will involve malicious tampering of training data, among other techniques. The ability to protect this data “is increasingly at the center of security,” Marques wrote.
But AI can help take charge of its own protection. Here are three ways the technology can keep sensitive information, including training data, out of the hands of malicious actors.
1. Create privacy-enhanced datasets
The first way AI can help protect data is by anonymising and hiding sensitive information, which makes malicious tampering far harder to pull off.
One way to do this is through synthetic data, where an AI tool generates an artificial alternative to real-life information. The alternative remains useful because it preserves the statistical value of the original sensitive data. But because the AI anonymises the sensitive details, the dataset is far less vulnerable to malicious actors, Marques explains.
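To make the idea concrete, here is a minimal sketch of synthetic data generation in Python. It is an illustrative assumption of the technique, not Protegrity's tooling: it preserves only each column's mean and spread, where real synthetic-data generators model much richer statistical structure.

```python
# Minimal sketch of column-wise synthetic data generation (illustrative only;
# production tools model joint distributions, not just per-column statistics).
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

# Toy "sensitive" dataset: customer ages and account balances.
real = pd.DataFrame({
    "age": rng.integers(18, 90, size=1000),
    "balance": rng.lognormal(mean=8, sigma=1, size=1000),
})

def synthesise(df: pd.DataFrame, n: int) -> pd.DataFrame:
    """Sample synthetic rows that roughly preserve each column's mean and spread."""
    out = {}
    for col in df.columns:
        mu, sigma = df[col].mean(), df[col].std()
        out[col] = rng.normal(mu, sigma, size=n)  # Gaussian approximation
    return pd.DataFrame(out)

synthetic = synthesise(real, n=1000)

# The statistical value is approximately retained, but no row corresponds
# to a real individual.
print(real.describe().loc[["mean", "std"]])
print(synthetic.describe().loc[["mean", "std"]])
```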
Sometimes synthetic data will not fit the use case, and organisations instead need to enhance the privacy of a production dataset. This calls for a combination of techniques, such as generalisation, redaction, and tokenisation, to deliver a dataset whose data utility can be measured and whose degree of privacy can be audited.
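The sketch below illustrates those three transforms on a toy dataset; the column names and hash-based token scheme are assumptions made for illustration, not Protegrity's API.

```python
# Sketch of privacy-enhancing transforms: tokenisation, redaction,
# and generalisation. Illustrative assumptions only.
import hashlib
import pandas as pd

df = pd.DataFrame({
    "name": ["Alice Smith", "Bob Jones"],
    "ssn": ["123-45-6789", "987-65-4321"],
    "age": [34, 57],
})

SECRET = b"per-deployment-secret"  # in practice, a managed key or token vault

def tokenise(value: str) -> str:
    """Replace a value with a deterministic token that is not readable as data."""
    return hashlib.sha256(SECRET + value.encode()).hexdigest()[:12]

df["ssn"] = df["ssn"].map(tokenise)   # tokenisation: still joinable, no longer readable
df["name"] = "REDACTED"               # redaction: the value is removed entirely
df["age"] = (df["age"] // 10) * 10    # generalisation: exact age becomes a decade band
print(df)
```

The resulting dataset can still be joined on tokens and analysed by age band, which is what makes its utility measurable while its degree of privacy can be audited.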
2. Secure AI development
The second method of securing training data is improving how AI is developed.
New tools allow the algorithm to be brought to the data, rather than the data being taken to the algorithm. This means the AI can learn from multiple data sources without the information ever leaving its secure storage.
This emerging type of AI technology holds much promise for cybersecurity, Marques says. Algorithm development in the healthcare industry is one example he gives.
If multiple hospitals wanted to use their patient data to train an AI model, they would be hesitant to put that information in one shared space. But with the ability to bring an AI model to each hospital, the patient data can be studied while not leaving its secure storage.
“Research institutes, law enforcement, and government agencies” will all be able to combine their findings and conduct machine learning “without anyone seeing anyone else’s data,” he wrote.
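A minimal sketch of how such cross-organisation training could look, assuming a simple federated-averaging scheme (the hospitals, data, and model here are invented for illustration): each site computes a model update on its own records, and only the updated weights, never the data, leave the premises.

```python
# Sketch of federated averaging: the model travels to the data.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])  # ground truth used only to simulate data

def local_data(n):
    """Each hospital holds its own private (X, y); these never leave the site."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

hospitals = [local_data(200) for _ in range(3)]

def local_step(w, X, y, lr=0.1):
    """One gradient step computed on-site, against the local data only."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

w = np.zeros(2)  # shared global model
for _ in range(50):
    # The server sends w out, gets back locally updated weights, and averages.
    updates = [local_step(w, X, y) for X, y in hospitals]
    w = np.mean(updates, axis=0)

print(w)  # close to true_w, learned without pooling any patient records
```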
3. Analysing user behaviour
The third way AI can be used in cybersecurity is to rapidly analyse user behaviour within an organisation’s network. The algorithm’s analytical ability means it excels at detecting threats early and diagnosing when an attack has happened, Marques highlights.
Protegrity’s AI analyses previous trends to help identify and highlight suspicious behaviour. It can do this far faster than in many organisations, where humans still review records and behaviour manually, he says.
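As a rough illustration of this kind of behaviour analytics, the sketch below trains an off-the-shelf anomaly detector on historical activity and flags outliers for review. The features and model choice are assumptions for illustration and do not describe Protegrity's AI.

```python
# Sketch of behaviour-based threat detection with an anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# Historical activity per user-day: [logins, MB downloaded, off-hours actions]
normal_activity = rng.normal(loc=[5, 50, 1], scale=[2, 20, 1], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

# New events: one typical user and one exfiltration-like pattern.
today = np.array([
    [6, 60, 0],      # ordinary behaviour
    [40, 5000, 30],  # mass downloads at odd hours
])
print(model.predict(today))  # 1 = looks normal, -1 = flagged for review
```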
AI’s analytical abilities can open up massive new possibilities for digital government. But these abilities rely on secure data, so protecting information through anonymisation, secure development, and threat detection will be vital to realising AI’s potential.