Singapore – Singapore has unveiled a new Model AI Governance Framework (MGF) designed to guide the responsible deployment of agentic AI, marking a further step in the country’s approach to trusted and reliable AI adoption. The framework was announced at the World Economic Forum and has been developed by the Infocomm Media Development Authority (IMDA) as an extension of Singapore’s original AI governance framework, first introduced in 2020.
The MGF focuses specifically on agentic AI systems, which are capable of planning across multiple steps and taking actions on behalf of users to achieve defined objectives. While these systems offer organisations opportunities to automate routine processes, improve customer service functions and enhance enterprise productivity, they also introduce additional risks due to their increased autonomy, access to sensitive information and ability to alter digital environments.
The framework outlines how organisations can identify and manage these risks by combining technical safeguards with organisational and procedural controls. It emphasises that, despite higher levels of automation, responsibility and accountability must remain with humans. Particular attention is given to preventing unauthorised actions (including shadow AI), operational errors, and over-reliance on automated systems simply because they have performed well in the past.
“As the first authoritative resource addressing the specific risks of agentic AI, the MGF fills a critical gap in policy guidance for agentic AI,” April Chin, co-chief executive officer at Resaro, stated.
“The framework establishes critical foundations for AI agent assurance. It helps organisations define agent boundaries, identify risks, and implement mitigations such as agentic guardrails.”
Intended for organisations deploying agentic AI either through in-house development or third-party solutions, the MGF provides an overview of emerging risks and practical approaches to mitigation. It encourages careful selection of use cases, limits on system authority and access, clearly defined points for human review and approval, and the implementation of controls throughout the lifecycle of AI agents. The guidance also highlights the importance of transparency, user education and training to ensure responsible use.
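The MGF is policy guidance rather than code, but as a loose illustration of one control it describes, the sketch below shows what a defined human review point on an agent's actions might look like in practice. All names here (ToolCall, HIGH_RISK_TOOLS, execute_with_guardrail) are hypothetical and are not taken from the framework itself.

```python
# Hypothetical sketch of a human-approval guardrail for an AI agent's tool calls.
# Illustrates limits on system authority plus a defined point for human sign-off;
# none of these names come from the MGF.

from dataclasses import dataclass, field

# Actions the agent may take on its own vs. those requiring human approval.
HIGH_RISK_TOOLS = {"send_payment", "delete_records", "send_external_email"}

@dataclass
class ToolCall:
    tool: str   # name of the action the agent wants to take
    args: dict  # parameters for that action

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, call: ToolCall, decision: str) -> None:
        # Keep a trail of every requested action and its outcome.
        self.entries.append((call.tool, call.args, decision))

def execute_with_guardrail(call: ToolCall, log: AuditLog) -> bool:
    """Run a tool call, pausing for human approval on high-risk actions."""
    if call.tool in HIGH_RISK_TOOLS:
        # Defined human review point: the agent cannot proceed on its own.
        answer = input(f"Agent requests '{call.tool}' with {call.args}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            log.record(call, "rejected")
            return False
    log.record(call, "executed")
    # ... dispatch to the real tool implementation here ...
    return True

log = AuditLog()
execute_with_guardrail(ToolCall("summarise_report", {"id": 42}), log)   # runs directly
execute_with_guardrail(ToolCall("send_payment", {"amount": 500}), log)  # pauses for approval
```

In this kind of design, the allow-list and audit log bound what the agent can do, while the approval prompt keeps accountability with a human, echoing the framework's emphasis that responsibility must not shift to the system as autonomy increases.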
The framework was shaped through input from public sector bodies and private enterprises, reflecting a range of perspectives on the governance challenges associated with agentic AI. It is positioned as a living document that will continue to evolve as technologies and use cases develop. Further work is also underway to establish testing guidelines for agentic AI applications, building on existing initiatives focused on the safety and reliability of large language model-based systems.
“Building trust in agentic AI is an ongoing, shared responsibility, and IMDA’s framework is a constructive first step,” Serene Sia, country director for Singapore and Malaysia at Google Cloud, commented.
“Google has been playing a key role in establishing the foundation for interoperable and secure multi-agent systems. We remain committed to responsible innovation and look forward to contributing best practices as this technology advances further.”
This latest MGF forms part of Singapore’s broader efforts to promote trustworthy AI at both national and regional levels. Alongside tools such as AI Verify and earlier governance models, it supports the country’s collaboration with international partners, including through its AI Safety Institute and its leadership role in ASEAN discussions on AI governance.
Collectively, these initiatives aim to balance innovation with effective safeguards, reinforcing Singapore’s position as a contributor to global standards for responsible artificial intelligence.