Singapore – As technology evolves at a rapid pace, the pressure on enterprises and organisations to keep up with computing demands is ever-increasing.
The adoption of AI agents across global markets, especially within the APAC region, has become a common way to keep pace. However, instead of closing the gap between computing and operational demands, AI agents have introduced a new gap in autonomous workflows: the visibility gap in agentic identities.
Saviynt’s latest 2026 report highlighted an 86% gap in AI identity access policies across international markets, particularly in APAC. This reveals that while AI agents are becoming the new autonomous digital employees, more guardrails and policies are needed to make their actions transparent.

Exploring this further, UpTech Media conducted an exclusive interview with Tim Wedande, Field Chief Technology Officer at Saviynt, who shared his insights on how organisations should approach AI agents as digital employees.
Convenience and the privilege dilemma
Every company, small or large, has a specific onboarding process in place to ensure incoming employees are acquainted with their responsibilities, work, and access within an organisation.
When it comes to AI agents, this is not the case. Tim shared with UpTech Media, “They come through integrations, pilots, and vendor tools that ask for ‘full access’ to get up and running quickly.”
He added, “In practice, this means broad, ongoing permissions across many applications that are granted once and seldom questioned again.”
Unlike human employees, AI agents do not have monthly, quarterly, or annual reviews to monitor performance for a role change or promotion. Once an agent is onboarded, it functions autonomously within whatever permissions and access it has been granted.
Scaling within an organisation is completely different for an AI agent, which can analyse large volumes of customer data, call internal APIs, and trigger workflows across company systems when needed.
Tim commented, “Product mature organisations tend to underestimate how fast ‘temporary’ access becomes permanent, and before long, agents are running with broader privileges than any human administrator would ever be given.”
The convenience of AI agents can deliver operational efficiency amid the evolving demands of the digital age. The next step is to ensure processes are in place to monitor and manage AI agents across their autonomous workflows.
Closing visibility gaps with defined guardrails
From Saviynt’s latest 2026 report, the findings show that 92% of surveyed organisations admitted they lack full visibility into AI identities, while 95% doubt they could detect misuse if it occurred.
Why is this the case? It comes down to how AI agents behave. Tim mentioned, “AI agents act differently. They authenticate using tokens and service accounts, run continuously, and are often placed behind middleware, which makes their actions indistinguishable from normal system traffic.”
Additionally, there are three key factors that often lead to AI agents going unnoticed across company-wide operations. Tim explained that it begins with a visibility gap in identity, caused by a lack of clean inventories of where agents exist and run autonomously.
This is followed by a lack of clear, transparent logs indicating which AI agent used a specific API key on behalf of which team. Without proper ownership and attribution, the use of tools and resources goes unnoticed behind the scenes.
Lastly, it circles back to behaviour, with a twist. Current monitoring tools are tuned to analyse human patterns. When it comes to examining machine-speed bursts of activity or cross-system workflows, however, monitoring guardrails are still playing catch-up with evolving intelligent systems.
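To make the second factor concrete, attributable logging means every agent action carries the agent's identity, its owning team, and a reference to the credential used. A minimal sketch in Python; the field names and identifiers here are hypothetical illustrations, not Saviynt's schema:

```python
import json
import time

def log_agent_action(agent_id: str, owning_team: str, system: str,
                     action: str, api_key_id: str) -> dict:
    """Emit a structured audit record that ties an action back to a
    specific agent, its owning team, and the credential it used."""
    record = {
        "timestamp": time.time(),
        "agent_id": agent_id,        # unique identity, never a shared account
        "owning_team": owning_team,  # business side accountable for the agent
        "system": system,
        "action": action,
        "api_key_id": api_key_id,    # reference to the key, never the secret
    }
    print(json.dumps(record))
    return record
```

With records like these, the question "which agent used this API key, and for which team?" becomes a log query rather than a forensic exercise.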
The question then shifts to: what technical guardrails can organisations implement to prevent unwanted AI agent movements across isolated core systems?
According to Tim, it comes down to destroying the hidden ‘trust shortcuts’ that have been built over years of integration work. He added, “Organisations need hard stops and explicit approvals between domains rather than allowing an agent to treat the network as one space.”
He explained, “At a technical level, this requires that each agent has a unique identity in each system, with narrow permissions, and that all cross-system calls are routed through controlled interfaces.”
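A minimal sketch of what such a controlled interface could look like. The agent names, systems, and actions below are invented for illustration; the point is that every cross-system call is checked against a narrow, per-system allow-list, and anything not explicitly permitted is a hard stop:

```python
# Hypothetical policy table: each (agent, system) pair holds a distinct
# identity with an explicit allow-list of actions. An agent never treats
# the network as one open space.
AGENT_POLICIES = {
    ("support-bot-01", "crm"): {"read_ticket", "update_ticket"},
    ("support-bot-01", "billing"): {"read_invoice"},  # read-only in billing
    # no entry for ("support-bot-01", "hr") -> every HR call is denied
}

def authorize(agent_id: str, system: str, action: str) -> bool:
    """Allow a cross-system call only if it is explicitly permitted."""
    allowed = AGENT_POLICIES.get((agent_id, system), set())
    return action in allowed
```

The default-deny lookup (`set()` when no policy exists) is the "hard stop between domains": an agent with no policy entry for a system simply cannot act there.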
One example Tim shared involved customer support agents: their actions should only be allowed through approved, well-designed APIs with strict policies, and never through shared databases or generic user accounts. Meanwhile, for regulated sectors such as financial services or telecommunications, organisations should also extend the ‘three lines of defence’ practice to AI agents, in addition to software and the human workforce.
At the end of the day, the autonomous actions of AI agents should be monitored closely and confined to specific areas of information under strict policies to maintain security, trust, and compliance.
Accountable ownership stays with humans
In addition to setting up secure guardrails and strict policies for AI agents, an important factor in managing where AI agents operate and what they access comes down to ownership.
As Tim highlighted, owning an AI agent means owning the decisions that govern the agent’s movements and the choices it makes.
He shared, “For organisations, the practical approach is to view each agent as a digital staff member embedded in a business process, with a named owner on the business side and a sponsor in technology.”
Tim further emphasised how the accountable owner of an AI agent should be the executive who already owns outcomes in a specific domain, for example, the head of retail banking, operations, or customer experience. Their job is to define what the agent is allowed to do, what data it may touch, and which decisions still require human sign‑off.
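An owner's definition of what the agent may do, what data it may touch, and which decisions still need human sign-off could be captured as a simple charter. This is an illustrative sketch, not a Saviynt feature, and the action names are invented:

```python
# Hypothetical agent charter, as a business owner might define it.
AGENT_CHARTER = {
    "allowed_actions": {"draft_refund", "read_account"},
    "allowed_data": {"transactions", "contact_details"},
    "requires_human_signoff": {"issue_refund", "close_account"},
}

def decide(action: str, charter: dict) -> str:
    """Route an agent's intended action against its owner-defined charter."""
    if action in charter["requires_human_signoff"]:
        return "escalate"   # pause and route to the named human owner
    if action in charter["allowed_actions"]:
        return "allow"
    return "deny"           # hard stop: not in the charter at all
```

Keeping the charter as data, rather than buried in code, lets the accountable executive review and change it without a deployment.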
Alongside executives or department heads, human security teams should also play a role in monitoring AI agents.
“Technology and security teams then act as stewards. They provide guardrails such as identity management, logging, approvals, and controls, but the business owner remains responsible for appropriateness and impact,” Tim shared.
While AI agents can operate autonomously and handle far larger volumes of tasks than their human counterparts, human intervention and ownership remain crucial to ensure accountability for every agent action.
******
As more enterprises and organisations onboard AI agents within operations across global markets, especially in the APAC region, Tim highlights the six essentials for ensuring proper governance, ownership, accountability, and visibility for agentic employees.
It begins with creating an AI identity registry that lists all known agents. From there, organisations should assign a business owner to each agent, set non-negotiable guardrails and policies, bring agents into existing identity lifecycles, require change control for any expansion of an agent’s access, and finally run regular scenario exercises to anticipate the next unwanted move.
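The first two essentials, a registry of known agents with named business owners, pair naturally with regular reviews. A hypothetical sketch of such a registry check, flagging agents that have no owner or whose access has not been reviewed recently (the record fields and thresholds are illustrative assumptions):

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List, Optional, Set, Tuple

@dataclass
class AgentRecord:
    agent_id: str
    business_owner: Optional[str]  # named accountable owner, if any
    permissions: Set[str]
    last_review: date

def flag_for_review(registry: List[AgentRecord],
                    max_age_days: int = 90,
                    today: Optional[date] = None) -> List[Tuple[str, str]]:
    """Return (agent_id, reason) pairs for agents needing attention."""
    today = today or date.today()
    flagged = []
    for rec in registry:
        if rec.business_owner is None:
            flagged.append((rec.agent_id, "no owner"))
        elif today - rec.last_review > timedelta(days=max_age_days):
            flagged.append((rec.agent_id, "review overdue"))
    return flagged
```

Run on a schedule, a check like this turns "temporary access that became permanent" into a visible, assignable finding instead of a silent default.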

