As more cybercriminals weaponise artificial intelligence for complex attacks, separating fact from fiction in cybersecurity remains difficult.
The growing number and severity of these attacks correlates directly with the expanding attack surface, which introduces more entry points for criminals to exploit, especially vulnerabilities and compromised systems. These avenues enable cybercriminals to steal more data, launch ransomware attacks, or manipulate systems for financial gain.
No country is exempt from this global challenge. Safeguarding the public's digital safety is therefore crucial, and cybersecurity professionals increasingly count on AI to strengthen their defence efforts.
In UpTech Media’s latest exclusive interview, we spoke with Nathan Wenzler, chief cybersecurity strategist at Tenable, to understand what makes AI systems vulnerable to adversarial attacks designed to exploit their algorithms.
The piece also examines how organisations can adopt and use the technology efficiently while ensuring the solutions they deploy are transparent and secure.
Vulnerability of AI-powered security systems to manipulation or adversarial attacks
In the conversation, Wenzler pointed out that the primary challenges concerning AI algorithms arise from errors in coding or design rather than malicious actors, suggesting that concerns about attacks targeting the algorithms powering AI tools are largely unwarranted.
He stressed that worrying about cyber attackers specifically targeting the algorithms themselves is mostly wasted effort and energy. Wenzler added, “Fundamentally, AI tool interfaces are applications. This means they can be secured with access controls and code security measures, like other applications.”
He noted, however, that malicious actors can “poison” the underlying datasets with inaccurate information. This approach leads the algorithm to treat the data as correct, feeding bad information to users who are likely to trust whatever output they are given and act on it.
“In essence, the attackers are exploiting the trust that users place in AI tools, which is a far easier attack to leverage than to attempt to compromise the application itself and modify or manipulate the underlying algorithm to somehow generate malicious responses,” explained Wenzler.
He also briefly elaborated on this concept, stating, “This attack on trust is a much more subtle and potentially damaging style of attack, posing a greater challenge for identification, mitigation, and remediation.”
To address this, he advised organisations to implement strong data validation techniques.
Additionally, he suggested using smaller, easier-to-manage datasets that align with the type of analysis the tool is intended for, making it easier to secure and validate on a regular basis.
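What those validation techniques might look like depends on the pipeline, but the minimal Python sketch below illustrates the idea: records are rejected unless they come from a trusted source, fall within the legal value range, and sit close to a vetted baseline. The record shape, source allow-list, and three-sigma outlier threshold are illustrative assumptions, not anything Wenzler or Tenable prescribes.

```python
import statistics
from dataclasses import dataclass

@dataclass
class Record:
    source: str
    severity: float  # expected range 0.0-10.0, e.g. a CVSS-style score

TRUSTED_SOURCES = {"internal-scanner", "vendor-feed"}  # hypothetical allow-list

def validate_batch(batch: list[Record], baseline: list[float]) -> list[Record]:
    """Accept only records from trusted sources whose values stay near a vetted baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    accepted = []
    for rec in batch:
        if rec.source not in TRUSTED_SOURCES:
            continue  # reject records of unknown provenance
        if not 0.0 <= rec.severity <= 10.0:
            continue  # reject values outside the legal range
        if abs(rec.severity - mean) > 3 * stdev:
            continue  # flag statistical outliers that may indicate poisoning
        accepted.append(rec)
    return accepted

baseline = [4.2, 5.1, 6.3, 5.8, 4.9]  # scores from an already-vetted dataset
clean = validate_batch([Record("vendor-feed", 5.5), Record("unknown", 9.9)], baseline)
print(len(clean))  # -> 1: the untrusted-source record is dropped
```

Keeping the dataset small, as Wenzler suggests, makes baselines like this one far easier to establish and re-check on a regular schedule.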
Ensuring transparency and security using artificial intelligence
According to Wenzler, the power of generative AI lies in its capacity to quickly analyse huge data assets of varying types and find patterns and relationships between the various forms of information.
He subsequently explained, “This sort of capability can be leveraged in any of the places where security practitioners must analyse security data and make sound decisions based on a complete understanding of what they’re reviewing, especially when the scope of the datasets is unreasonably large for any human to try and analyse manually.”
To illustrate this, Wenzler cited the example of an SOC analyst receiving endless alerts, vulnerability findings, login attempts, and other data, all fed into their central monitoring systems.
He then noted, “Trying to manually identify which issues deserve immediate attention and the need to raise the alarm and call a security incident can take a huge amount of time, a wealth of expertise on every single technology type involved, and a total understanding of what every finding means in relation to the organisation and its assets.”
Given that most SOC analysts cannot quickly digest all this data, he asserted that they are likely to turn to online search tools to research individual findings and try to understand what each means.
However, this research takes time, during which an attacker may have already penetrated the environment while the defenders have not yet been mobilised to stop them.
“GenAI tools can not only help sift through the mountains of data to prioritise and surface the most problematic findings, but can also translate the complicated technical information into something more easily understood, allowing for your average SOC analyst to more quickly understand the situation at hand and make the right decisions to respond to the security issue at hand,” Wenzler expounded.
Although it may sound simple, Wenzler emphasised that this automation and the delivery of easily digestible explanations of a threat offer tangible value to an organisation: reducing the time spent analysing a problem and leaving more time to correct it before an attacker is able to take advantage.
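As a rough illustration of the triage pattern Wenzler describes, the hedged Python sketch below feeds raw alerts to a GenAI model for ranking and plain-language explanation. The ask_genai stub, prompt wording, and alert fields are all hypothetical stand-ins for whatever tooling an organisation actually runs.

```python
import json

def ask_genai(prompt: str) -> str:
    """Placeholder for whatever GenAI service the organisation uses
    (a hosted API, a local model, etc.); returns a canned answer here."""
    return "1. Alert 2 (critical CVE on db-03): patch immediately because ..."

def triage_alerts(alerts: list[dict]) -> str:
    """Ask the model to rank raw SOC alerts and explain the most urgent in plain language."""
    prompt = (
        "You are assisting a SOC analyst. Rank these alerts by urgency and "
        "explain the top findings in plain language:\n"
        + json.dumps(alerts, indent=2)
    )
    return ask_genai(prompt)

# Alert shapes (and the CVE placeholder) are invented for illustration only.
alerts = [
    {"id": 1, "type": "failed_login", "count": 412, "host": "vpn-gw-01"},
    {"id": 2, "type": "cve_finding", "cve": "CVE-2024-XXXX", "host": "db-03"},
]
print(triage_alerts(alerts))
```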
Meanwhile, discussing the key limitation of AI tools for cybersecurity programmes, he said it rests with the datasets that feed the various GenAI or ML models. “If the data is flawed, inaccurate, or otherwise compromised, then the output provided won’t be correct, and security practitioners run the risk of basing critical risk mitigation decisions on flawed information,” Wenzler stated.
Regarding concerns about their interfaces, he reminded organisations that best practice here is to apply the standard controls put in place for any sort of application.
“GenAI applications are still just applications, and so executing strong access controls, performing application security testing, and validating outputs on a regular basis like one would do for any managed application will apply to mitigate some of the security concerns here,” he further shared.
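Output validation, one of the controls Wenzler mentions, could look something like the sketch below: before a GenAI response is acted upon, it is checked against a strict expected shape. The field names and severity scale are assumptions for illustration, not a Tenable specification.

```python
import json

ALLOWED_SEVERITIES = {"low", "medium", "high", "critical"}  # hypothetical scale

def validate_output(raw: str) -> dict:
    """Reject any GenAI response that is not well-formed JSON matching the expected shape."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model output is not valid JSON: {exc}")
    if set(data) != {"finding", "severity", "recommendation"}:
        raise ValueError("unexpected fields in model output")
    if data["severity"] not in ALLOWED_SEVERITIES:
        raise ValueError(f"unknown severity: {data['severity']!r}")
    return data

# A well-formed response passes; anything else raises before it can drive a decision.
ok = validate_output('{"finding": "open RDP port", "severity": "high", '
                     '"recommendation": "restrict access via firewall"}')
print(ok["severity"])  # -> high
```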
Policy recommendations to enhance cyber resilience across all organisations
In discussing policy recommendations for enhancing the cyber landscape, Wenzler acknowledged that security teams have struggled to work out the best actions to take against the risks they face.
Interestingly enough, he shared how Tenable has been able to give security teams greater context around vulnerabilities discovered within their systems through a capability launched in 2023, to which the company has since made several enhancements.
This is aimed at empowering more customers to quickly summarise relevant attack paths, ask questions of an AI assistant, and receive specific mitigation guidance based on the intelligence gathered. In turn, it helps eliminate guesswork from the remediation process and saves valuable time by recommending the most effective path forward.
Beyond this, Wenzler emphasised the vital role all organisations, public and private, play in strengthening cybersecurity efforts and defences, stating that the sharing of threat intelligence and attack information is among the most significant contributions to this common goal.
He remarked, “The more organisations in both sectors share what their monitoring systems are detecting in terms of new cyberattacks, attacker behaviour and techniques, and the types of malicious software being used in specific attacks, the more organisations across the country can build better defences quickly and tune their own monitoring capabilities to increase awareness and lower response time to similar attacks.”
Expanding on these points, he said this kind of collaboration is beginning to extend to other parts of the world, with each participant contributing anonymised threat intelligence so that all partner organisations can significantly increase their detection capabilities. This points to greater opportunities for strengthened collaboration between public and private organisations, given the growing convergence of technologies used by both.
“Building on this collaborative approach, governments need to take a more proactive approach in raising awareness with both private sector organisations and the individual constituents they serve,” Wenzler concluded.