13 February 2026

On 22 January 2026, Singapore announced the launch of the Model AI Governance Framework for Agentic AI (“Framework”) at the World Economic Forum in Davos, Switzerland. The Framework provides organisations with guidance on how to deploy AI agents responsibly and recommends technical and non-technical measures to mitigate risks, while emphasising human accountability. The Framework supports the responsible development, deployment, and use of AI, consistent with Singapore’s practical and balanced approach to AI governance, which puts guardrails in place while leaving space for innovation.

This first-of-its-kind framework for the reliable and safe deployment of agentic AI was developed by the Infocomm Media Development Authority of Singapore (“IMDA”) and builds upon the governance foundations of the Model AI Governance Framework introduced in 2020.

Agentic AI and its risks

Compared to traditional and generative AI, AI agents can take actions, adapt to new information, and interact with other agents and systems to complete tasks on behalf of humans. This allows organisations to automate repetitive tasks, such as those in customer service and enterprise productivity, and to drive sectoral transformation by freeing up employees’ time for higher-value activities.

However, these greater capabilities also bring with them new risks. AI agents’ access to sensitive data and ability to make changes to their environment, such as updating a customer database or making a payment, may, for example, result in unauthorised or erroneous actions. The increased capability and autonomy of AI agents also create challenges for effective human accountability, such as greater automation bias, or the tendency to over-trust an automated system that has performed reliably in the past. Hence, it is important to understand the risks agentic AI could pose and ensure that organisations implement the necessary governance measures to harness agentic AI responsibly. This includes maintaining meaningful human control and oversight over agentic AI.

Guidance on managing risks in deployment of agentic AI

Targeted at organisations looking to deploy agentic AI, whether through developing AI agents in-house or using third-party agentic solutions, the Framework provides a structured overview of the risks of agentic AI and emerging best practices in managing these risks.

The Framework provides organisations with guidance on technical and non-technical measures that need to be put in place to deploy AI agents responsibly:

  • Assess and bound the risks upfront: Organisations need to understand the new risks posed by the AI agent’s actions and adapt their internal structures and processes to account for them. To manage these risks early, organisations could limit the scope of impact of their AI agents by designing appropriate boundaries at the planning stage, such as limiting the AI agent’s access to tools and external systems. They could also ensure that the AI agent’s actions are traceable and controllable by establishing robust identity management and access controls for AI agents (a simplified sketch of such controls appears after this list).
  • Make humans meaningfully accountable: It is important to clearly define the responsibilities of different stakeholders, both within the organisation and with external vendors, while emphasising adaptive governance, so that the organisation can quickly understand new developments and update its approach as the technology evolves. This includes defining significant checkpoints in the agentic workflow that require human approval, such as high-stakes or irreversible actions (see the second sketch after this list), and regularly auditing human oversight to check that it remains effective over time.
  • Implement technical controls and processes: Organisations should ensure the safe and reliable operation of AI agents by implementing technical measures across their lifecycle. During development, organisations should incorporate technical controls for new agentic components such as planning, tools, and still-maturing protocols, to address the increased risks from these new attack surfaces. Before deployment, organisations should test AI agents for baseline safety and reliability, including new dimensions such as overall execution accuracy, policy adherence, and tool use; new testing approaches will be needed to evaluate AI agents (the third sketch after this list illustrates these dimensions). Because AI agents interact dynamically with their environment and not all risks can be anticipated upfront, organisations should roll out AI agents gradually and monitor them continuously after deployment.
  • Enable end-user responsibility: As a baseline, users should be informed of the AI agent’s range of actions, access to data, and the user’s own responsibilities. Organisations should consider layering on training to equip employees with the knowledge required to manage human-agent interactions and exercise effective oversight.
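To make the first recommendation concrete, the sketch below shows one way an organisation might bound an agent’s scope of impact: each agent is given a distinct identity with an explicit tool allowlist, and every tool invocation is checked against it and logged for traceability. This is a minimal illustration only; the class names, tools, and agent identifiers are hypothetical and are not prescribed by the Framework.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass(frozen=True)
class AgentIdentity:
    """A distinct, auditable identity for each deployed agent."""
    agent_id: str
    allowed_tools: frozenset  # boundary fixed at the planning stage


class ToolRegistry:
    """Registers tools and enforces the allowlist on every call."""

    def __init__(self) -> None:
        self._tools: dict[str, Callable] = {}
        self.audit_log: list[tuple[str, str]] = []  # (agent_id, tool): actions stay traceable

    def register(self, name: str, fn: Callable) -> None:
        self._tools[name] = fn

    def invoke(self, agent: AgentIdentity, name: str, *args, **kwargs):
        if name not in agent.allowed_tools:
            raise PermissionError(f"{agent.agent_id} is not authorised to use {name!r}")
        self.audit_log.append((agent.agent_id, name))
        return self._tools[name](*args, **kwargs)


registry = ToolRegistry()
registry.register("lookup_order", lambda order_id: {"order_id": order_id, "status": "shipped"})
registry.register("issue_refund", lambda order_id, amount: f"refunded {amount} for {order_id}")

# A customer-service agent is scoped to read-only lookups; refunds are out of bounds.
support_agent = AgentIdentity("cs-agent-01", frozenset({"lookup_order"}))

print(registry.invoke(support_agent, "lookup_order", "A1234"))
try:
    registry.invoke(support_agent, "issue_refund", "A1234", 50)
except PermissionError as err:
    print(err)  # the boundary holds: cs-agent-01 cannot issue refunds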
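The second sketch illustrates a human-approval checkpoint of the kind the accountability recommendation describes: high-stakes or irreversible actions are gated behind a human decision before execution. The action names, monetary threshold, and console-prompt approval mechanism are assumptions made for illustration, not part of the Framework.

```python
HIGH_STAKES_ACTIONS = {"make_payment", "delete_record"}  # treated as irreversible
APPROVAL_THRESHOLD = 1_000  # illustrative monetary threshold


def requires_human_approval(action: str, amount: float = 0.0) -> bool:
    """Checkpoint rule: gate high-stakes or irreversible actions."""
    return action in HIGH_STAKES_ACTIONS or amount > APPROVAL_THRESHOLD


def execute(action: str, amount: float = 0.0, approver=input) -> str:
    if requires_human_approval(action, amount):
        # In production this would route to an approval queue with an audit
        # trail; a console prompt stands in for that here.
        decision = approver(f"Approve {action} ({amount})? [y/N] ")
        if decision.strip().lower() != "y":
            return f"{action}: blocked pending human approval"
    return f"{action}: executed"


print(execute("lookup_order"))                                 # low-stakes, no checkpoint
print(execute("make_payment", 5_000, approver=lambda _: "n"))  # gated and blocked
```

Keeping the checkpoint rule in one small, auditable function also makes the periodic oversight audits the Framework recommends easier to carry out.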
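Finally, the third sketch shows a minimal pre-deployment test harness scoring an agent on the testing dimensions the Framework highlights: execution accuracy, policy adherence, and tool use. The stub agent, test cases, and scoring rules are invented for illustration; a real harness would be far more extensive.

```python
from dataclasses import dataclass


@dataclass
class TestCase:
    task: str
    expected_answer: str
    expected_tool: str


def stub_agent(task: str) -> tuple[str, str]:
    """Stand-in for a real agent: returns (answer, tool_used)."""
    if "order" in task:
        return "shipped", "lookup_order"
    return "unknown", "web_search"


ALLOWED_TOOLS = {"lookup_order", "none"}  # deployment policy for this agent

cases = [
    TestCase("Where is order A1234?", "shipped", "lookup_order"),
    TestCase("What is the weather today?", "out_of_scope", "none"),
]

accuracy = policy = tool_use = 0
for case in cases:
    answer, tool = stub_agent(case.task)
    accuracy += answer == case.expected_answer   # did it complete the task correctly?
    policy += tool in ALLOWED_TOOLS              # did it stay within policy?
    tool_use += tool == case.expected_tool       # did it pick the right tool?

n = len(cases)
print(f"execution accuracy {accuracy}/{n}, policy adherence {policy}/{n}, tool use {tool_use}/{n}")
```

Here the second test case catches the stub agent reaching for an out-of-policy tool, the kind of failure such testing is meant to surface before deployment.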

Living document

In developing the Framework, IMDA incorporated feedback from both government agencies and the private sector. As AI is a fast-developing field, IMDA views the Framework as a living document and welcomes feedback to refine it, as well as the submission of case studies demonstrating how the Framework can be applied for responsible agentic deployment.

Reference materials

The following materials are available on the IMDA website www.imda.gov.sg: