Singapore – Around 70% of cloud workloads using AI services contain unresolved security vulnerabilities, leaving sensitive AI data and models exposed to preventable risks such as manipulation, tampering and leakage. This is according to the latest report from exposure management company Tenable.
Findings from the report indicate that cloud AI workloads are not immune to vulnerabilities: approximately 70% contain at least one unremediated flaw. Notably, it found CVE-2023-38545, a critical heap buffer overflow in curl's SOCKS5 proxy handling, in 30% of cloud AI workloads.
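The affected range for that curl flaw is public: versions 7.69.0 through 8.3.0, with the fix landing in 8.4.0. A minimal sketch of how a scanner might flag a workload's curl build, assuming plain "major.minor.patch" version strings (the function names are illustrative, not from the report):

```python
# Flag curl builds in the range affected by CVE-2023-38545, the SOCKS5
# heap buffer overflow: 7.69.0 through 8.3.0, fixed in 8.4.0.

def parse_version(version: str) -> tuple:
    """Turn 'major.minor.patch' into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def is_affected_by_cve_2023_38545(curl_version: str) -> bool:
    """True if this curl version falls inside the advisory's affected range."""
    ver = parse_version(curl_version)
    return (7, 69, 0) <= ver < (8, 4, 0)
```

In practice the version string would come from running `curl --version` on the workload image or from a software inventory scan.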
Another significant finding is the widespread presence of Jenga®-style cloud misconfigurations in managed AI services. According to the report, about 77% of organisations have the overprivileged default Compute Engine service account configured in Google Vertex AI Notebooks, meaning every service built on that default account inherits its risk.
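Google's default Compute Engine service account has a documented, well-known email format, which makes this particular misconfiguration easy to spot in an inventory. A minimal sketch, assuming you have already enumerated each notebook's service account email by other means (gcloud or the Notebooks API):

```python
# Detect Google Cloud's default Compute Engine service account by its
# documented email suffix: "<project-number>-compute@developer.gserviceaccount.com".
# Unless an org policy intervenes, this account carries the broad Editor
# role, which is what makes notebooks running as it overprivileged.
DEFAULT_COMPUTE_SA_SUFFIX = "-compute@developer.gserviceaccount.com"

def uses_default_compute_sa(service_account_email: str) -> bool:
    """True if a notebook's service account is the project's default one."""
    return service_account_email.endswith(DEFAULT_COMPUTE_SA_SUFFIX)
```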
It was also noted that AI training data is susceptible to data poisoning, threatening to skew model results. In particular, the report found 14% of organisations using Amazon Bedrock do not explicitly block public access to at least one AI training bucket, and 5% have at least one overly permissive bucket.
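The "explicitly block public access" finding maps to S3's public access block settings, which an audit can evaluate per bucket. A sketch of that check, assuming the configuration dict has the shape boto3's `s3.get_public_access_block` returns under `PublicAccessBlockConfiguration` (the bucket enumeration itself is omitted):

```python
# Evaluate an S3 PublicAccessBlockConfiguration and flag training buckets
# where public access is not explicitly blocked on all four controls.
# The four flag names are AWS's own; a missing flag is treated as off.
REQUIRED_FLAGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def public_access_fully_blocked(config: dict) -> bool:
    """True only if every public-access control is switched on."""
    return all(config.get(flag, False) for flag in REQUIRED_FLAGS)
```

A bucket failing this check is not necessarily public, but it is one policy or ACL mistake away from being so, which is the gap the report highlights.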
Meanwhile, Amazon SageMaker notebook instances grant root access by default. Consequently, around 91% of Amazon SageMaker users have at least one notebook that, if compromised, would give an attacker root-level access and the ability to modify every file on the instance.
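That default surfaces in the SageMaker API as a `RootAccess` field ("Enabled" or "Disabled") on each notebook instance. A sketch of how an audit might flag notebooks still granting root, assuming the description dict comes from boto3's `sagemaker.describe_notebook_instance`:

```python
# Flag SageMaker notebook instances that grant root access. The RootAccess
# field takes "Enabled" | "Disabled"; "Enabled" is the default, so a
# missing field is treated as root-enabled.
def notebook_grants_root(description: dict) -> bool:
    """True if the notebook instance allows root access (the default)."""
    return description.get("RootAccess", "Enabled") == "Enabled"
```

Remediation would be to stop the instance and update it with `RootAccess="Disabled"` via `update_notebook_instance`, where root is not genuinely needed.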
Liat Hayun, VP of Research and Product Management, Cloud Security at Tenable, stated, “When we talk about AI usage in the cloud, more than sensitive data is on the line. If a threat actor manipulates the data or AI model, there can be catastrophic long-term consequences, such as compromised data integrity, compromised security of critical systems and degradation of customer trust.”
“Cloud security measures must evolve to meet the new challenges of AI and find the delicate balance between protecting against complex attacks on AI data and enabling organisations to achieve responsible AI innovation,” Hayun further remarked.