New Model AI Governance Framework for Agentic AI to guide Singapore organisations on responsible deployment

Summary

Singapore’s Infocomm Media Development Authority (IMDA) has published the Model AI Governance Framework for Agentic AI, announced by Minister Josephine Teo at the World Economic Forum on 22 January 2026. Building on the 2020 Model AI Governance Framework, this new guidance focuses on agentic AI — systems that can reason and act autonomously on users’ behalf — and sets out practical controls so organisations can deploy agents while keeping humans ultimately accountable.

The framework centres on four pillars: assess and bound risks up front; make humans meaningfully accountable; implement technical controls across the agent lifecycle; and enable end-user responsibility through transparency and training. It addresses agent-specific threats (eg memory poisoning, tool misuse, privilege compromise) and recommends measures such as scoped permissions, identity and authorisation policies, human checkpoints for high-stakes actions, rigorous testing, continuous monitoring and user education.

Key Points

  • The Model AI Governance Framework (MGF) for Agentic AI is a first-of-its-kind update from IMDA, launched at WEF on 22 Jan 2026, extending Singapore’s AI governance guidance to agentic systems.
  • Four-step approach: 1) assess and bound risks, 2) ensure meaningful human accountability, 3) apply technical controls across design, testing and deployment, 4) enable end-user responsibility via transparency and training.
  • Risk bounding includes selecting suitable use cases, limiting agents’ autonomy, tools and data access, and running agents in contained environments for high-risk tasks.
  • Identification and authorisation best practices: give each agent a unique identity, link it to a supervising party, and ensure permissions do not exceed the authoriser’s own privileges.
  • Human oversight: define significant checkpoints (eg irreversible actions, high-stakes decisions, outlier behaviour) requiring human approval and audit the effectiveness of such approvals.
  • Testing & technical controls: test agents for policy compliance, tool calling, robustness and full multi-step workflows; test agents individually and in combination; mirror production environments for realistic testing.
  • Monitoring & incident handling: continuous logging, anomaly detection, real-time intervention mechanisms, and clear fail-safes to take agents offline when needed.
  • End-user responsibilities: inform users about agent capabilities, data use and escalation contacts; provide training to preserve core skills and to detect common failure modes.
  • The framework is intended as a living document — IMDA solicits feedback and case studies to refine guidance over time.
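To make the permissioning and checkpoint ideas above concrete, here is a minimal illustrative sketch (not from the framework itself; all names and the `authorise` gate are hypothetical) of how an organisation might wire together a unique agent identity, a linked supervising party, scoped tool permissions that cannot exceed the authoriser’s own privileges, and a human checkpoint for high-stakes actions:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    agent_id: str          # unique identity per agent
    supervisor: str        # accountable human linked to the agent
    allowed_tools: set     # scoped permissions granted to the agent
    authoriser_tools: set  # the supervising party's own privileges

# Hypothetical list of irreversible / high-stakes actions needing approval.
HIGH_STAKES = {"transfer_funds", "delete_records"}

def authorise(agent, tool, approve_fn):
    """Gate a tool call: scope check, privilege check, human checkpoint."""
    # 1. Scoped permissions: the agent may only use tools it was granted.
    if tool not in agent.allowed_tools:
        return False, "tool not in agent scope"
    # 2. Permissions must not exceed the authoriser's own privileges.
    if tool not in agent.authoriser_tools:
        return False, "exceeds authoriser's privileges"
    # 3. Human checkpoint for high-stakes or irreversible actions.
    if tool in HIGH_STAKES and not approve_fn(agent.agent_id, tool):
        return False, "human approval withheld"
    return True, "authorised"
```

In practice `approve_fn` would route to the supervising party for review and every decision would be logged for audit, but even this toy gate shows the shape of the controls: denial by default outside the agent’s scope, and no silent execution of irreversible actions.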

Why should I read this?

Short version: if your organisation is even thinking about letting AI do tasks for people, this is the playbook you need. It tells you what to lock down, where humans must step in, how to test and monitor agents, and how to stop things going sideways. Read it now so you don’t have to learn the hard way.

Source

Source: https://www.humanresourcesonline.net/new-model-ai-governance-framework-for-agentic-ai-to-guide-singapore-organisations-on-responsible-deployment