Singapore – Exabeam has announced the release of a connected system designed to analyse AI agent behaviour and provide visibility into AI security posture, extending its intelligence and automation capabilities for security operations.
The platform combines AI agent behaviour analytics, unified investigation of AI-related activity, and insight into the security controls governing AI agent use, supporting organisations as AI adoption accelerates.
As AI agents are increasingly deployed across enterprise environments, organisations are encountering risks linked to data exposure, policy circumvention, and unauthorised system changes, often with limited visibility into how or why such actions occur.
“AI agents have the potential to radically transform how businesses operate and serve their customers, but only if they can be governed responsibly,” Pete Harteveld, CEO of Exabeam, stated.
He added, “These new capabilities from Exabeam provide insight and give organisations a path to continuously improve, ensuring we protect our customers, their customers, and the broader ecosystem from emerging AI-driven threats.”
Building on functionality introduced in September 2025 to detect AI agent behaviour through integration with Google Gemini Enterprise, the latest release further advances analytics focused on autonomous AI activity.
The updated system places AI agent behaviour at the centre of detection and investigation, enabling security teams to review AI-related events within a single, timeline-driven view.
It also strengthens organisations’ ability to assess their readiness for AI usage by tracking security posture maturity over time and providing targeted recommendations. Enhanced analytics support more accurate modelling of emerging AI agent behaviours as adoption continues to grow.
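For readers unfamiliar with what a "single, timeline-driven view" means in practice, the sketch below shows the general idea: AI-agent events arriving from separate feeds are merged into one chronological sequence for investigation. This is a minimal Python illustration of the concept, not Exabeam's implementation; the `AgentEvent` fields, feed names, and sample data are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical event record; field names are illustrative, not Exabeam's schema.
@dataclass
class AgentEvent:
    timestamp: datetime
    source: str      # e.g. "gemini_enterprise", "siem", "cloud_audit"
    agent_id: str
    action: str
    detail: str

def build_timeline(feeds: list[list[AgentEvent]], agent_id: str) -> list[AgentEvent]:
    """Merge AI-agent events from multiple feeds into one chronological view."""
    merged = [e for feed in feeds for e in feed if e.agent_id == agent_id]
    return sorted(merged, key=lambda e: e.timestamp)

# Example: two separate feeds collapse into a single investigation timeline.
feeds = [
    [AgentEvent(datetime(2025, 11, 3, 9, 14, tzinfo=timezone.utc),
                "cloud_audit", "agent-42", "config_change", "modified IAM policy")],
    [AgentEvent(datetime(2025, 11, 3, 9, 2, tzinfo=timezone.utc),
                "siem", "agent-42", "data_access", "read customer export")],
]
for event in build_timeline(feeds, "agent-42"):
    print(event.timestamp.isoformat(), event.source, event.action, event.detail)
```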
“Securing the use of AI and AI agent behaviour requires more than brittle guardrails; it requires understanding what normal behaviour looks like for agents and having the ability to detect risky deviations,” Steve Wilson, chief AI and product officer at Exabeam, commented.
“These capabilities give security teams the behavioural insight needed to identify risk early, investigate AI agent activity quickly, and continuously strengthen resilience as AI usage and agents become integral to enterprise workflows.”
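Wilson's framing of learning what normal behaviour looks like and flagging risky deviations can be made concrete with a toy example. The Python sketch below applies a simple z-score test to an agent's daily action counts; the data, threshold, and function are hypothetical stand-ins for the far richer behavioural models a platform of this kind would apply.

```python
import statistics

# Hypothetical daily action counts for one AI agent; values are illustrative only.
baseline_daily_actions = [112, 98, 105, 120, 101, 99, 108]

def is_risky_deviation(observed: int, history: list[float], threshold: float = 3.0) -> bool:
    """Flag an observation that sits far outside the agent's learned baseline.

    A plain z-score test stands in for the richer behavioural analytics a
    production system would use; it only illustrates the baseline-and-deviation idea.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return observed != mean
    return abs(observed - mean) / stdev > threshold

print(is_risky_deviation(104, baseline_daily_actions))  # False: within the normal range
print(is_risky_deviation(540, baseline_daily_actions))  # True: a risky spike in activity
```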
Together, these capabilities are intended to give security leaders a structured framework for monitoring AI activity, accelerating investigations, and strengthening defences as agents take on a larger role in enterprise operations.
Industry analysts increasingly expect oversight of AI agents to emerge as a distinct security discipline alongside identity, cloud and data protection, reflecting the need for new approaches to secure dynamic, decision-making systems.

