
AI Governance in Practice: Interpreting ISO/IEC 42001 for Real Systems

ISO/IEC 42001 sets the right direction for AI governance — but standards alone don't enforce themselves. This post bridges the gap between governance frameworks and the runtime architectures that make them real.

Hao Wang, Founder & CEO · April 6, 2026 · 10 min read

Introduction

As AI systems move from experimentation into production, organizations face a critical and often underappreciated challenge: how do we govern systems that are dynamic, probabilistic, and increasingly autonomous?

Unlike traditional software, AI systems do not follow deterministic paths. Their behavior depends on training data, runtime context, user inputs, and the tools they are given access to. Governing these systems requires more than policy documents — it requires architecture.

Standards like ISO/IEC 42001 provide a valuable foundation, defining requirements for accountability, risk management, and lifecycle control. But they deliberately stop short of answering the most important operational question: how do we actually implement governance in real systems running in production?

Standards define what governance should achieve. Architecture determines whether it actually happens.
Key Takeaways
  • ISO/IEC 42001 defines the right principles — risk management, accountability, lifecycle governance, continuous improvement — but stops short of telling you how to enforce them at runtime
  • Without runtime enforcement, governance becomes documentation rather than control: auditors can review it, but the AI system ignores it
  • Policy-as-code using Open Policy Agent transforms governance from a concept into a system capability — the policy document and the enforcement mechanism become the same artifact
  • AI systems must be governed across four control dimensions: input, processing, output, and action — each requiring distinct enforcement points in the architecture
  • A complete governance architecture has four layers: context, policy engine, enforcement, and audit — all working together on every AI action
  • In healthcare, AI governance is not compliance overhead — it is patient safety infrastructure, making systems trustworthy enough to deploy in production

The Gap Between Governance and Implementation

Most AI governance frameworks are built around three pillars: policies, processes, and responsibilities. These are necessary — but they are not sufficient. Governance frameworks describe the *what*. They rarely describe the *how*.

The fundamental problem is one of timing. Governance frameworks are designed and documented at planning time. AI systems behave at runtime. This creates a dangerous disconnect: an organization can have excellent governance documentation and still have AI agents reading databases they should not touch, generating outputs that should be filtered, or invoking external APIs without any approval mechanism.

Without runtime enforcement, governance becomes documentation — not control. Auditors can review it, but the system ignores it.

The solution is to close this gap by translating governance principles into executable system components that run alongside the AI at every step.

What ISO/IEC 42001 Gets Right

ISO/IEC 42001 is the most comprehensive AI-specific governance standard available today. Its core principles are sound and worth building on:

  • Risk-based management — identifying and mitigating AI-specific risks throughout the lifecycle
  • Accountability and oversight — clear ownership of AI systems and their outputs
  • Lifecycle governance — controls that apply from design through decommissioning
  • Continuous improvement — mechanisms for monitoring, feedback, and adjustment

These principles are not just bureaucratic checkboxes. They reflect hard-won lessons from regulated industries — finance, healthcare, aviation — where the cost of ungoverned autonomous systems is measured in patient harm, financial loss, and legal liability.

The challenge is translation. Each of these principles must be mapped to a concrete system capability. Risk management must become a policy evaluation engine. Accountability must become an audit log. Lifecycle governance must become deployment controls. Continuous improvement must become monitoring dashboards and feedback loops.

From Policy to Enforcement

The journey from a governance principle to a running system control involves four distinct steps, each of which must be explicitly designed and implemented.

**Policy definition** answers the question: what is allowed, and what is restricted? This means writing down, in precise terms, the rules that govern AI behavior — which data can be accessed, which tools can be invoked, which outputs are permitted, and under what conditions exceptions apply.

**Decision logic** answers the question: how are policies evaluated? This is where policy-as-code comes in. Rather than relying on human judgment at runtime, decision logic is encoded in a system that can evaluate policies consistently, at machine speed, against every action the AI attempts to take.

**Enforcement points** answer the question: where in the system architecture are decisions actually enforced? Policy evaluation is useless if nothing acts on it. Enforcement points sit between the AI and the resources it wants to access — databases, APIs, tools, external services — and block or permit access based on policy decisions.

**Auditability** answers the question: can every decision be traced, explained, and reviewed? This is non-negotiable for regulated industries. Every policy evaluation — allow or deny — must be logged with enough context to reconstruct exactly what happened and why.
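The four steps can be sketched as a minimal pipeline. This is an illustrative toy, not a production engine: the resource names, roles, and rule shape are assumptions made up for the example, and a real system would delegate evaluation to a dedicated policy engine rather than a Python loop.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Step 1 -- policy definition: rules as data, not prose.
# (Resources and roles here are illustrative, not from any real system.)
POLICIES = [
    {"action": "read",  "resource": "clinical_notes", "roles": {"clinician"}},
    {"action": "write", "resource": "invoices",       "roles": {"billing"}},
]

@dataclass
class Decision:
    allowed: bool
    reason: str

AUDIT_LOG: list[dict] = []

# Step 2 -- decision logic: consistent, machine-speed evaluation, default deny.
def evaluate(role: str, action: str, resource: str) -> Decision:
    for rule in POLICIES:
        if rule["action"] == action and rule["resource"] == resource:
            if role in rule["roles"]:
                return Decision(True, f"matched rule for {resource}")
            return Decision(False, f"role {role!r} not permitted on {resource}")
    return Decision(False, "no matching rule: default deny")

# Steps 3 and 4 -- an enforcement point that also writes an audit record
# for every evaluation, allow or deny alike.
def enforce(role: str, action: str, resource: str) -> bool:
    decision = evaluate(role, action, resource)
    AUDIT_LOG.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "role": role, "action": action, "resource": resource,
        "allowed": decision.allowed, "reason": decision.reason,
    })
    return decision.allowed
```

Note that the deny path is logged with the same detail as the allow path; an audit trail that only records successes cannot reconstruct why an action was blocked.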

Policy-as-Code: The Missing Link

Policy-as-code is the technical approach that bridges the gap between written governance principles and runtime enforcement. Instead of policies living in a PDF that no system reads, they are expressed in a declarative language that a policy engine evaluates continuously.

Open Policy Agent (OPA) is the leading open-source policy engine for this purpose. In an OPA-based governance architecture:

  • Policies are written in Rego, a purpose-built policy language
  • Every AI action triggers a policy evaluation before it executes
  • Evaluations are consistent — the same context always produces the same decision
  • Enforcement is automated — no human needs to review routine decisions
  • Exceptions and overrides are explicitly encoded, not improvised

This transforms governance from a management concept into a system capability. The policy document and the enforcement mechanism are the same artifact — a change to the policy immediately changes what the system permits.
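In practice, an application consults a running OPA server through its Data API: the caller POSTs a JSON document under an `input` key, and OPA returns the rule's value under a `result` key (an undefined rule yields no result, which should be treated as deny). The URL and policy path below are assumptions for illustration; the request and response shapes follow OPA's documented API.

```python
import json
from urllib import request

# Assumed local OPA server and an assumed policy package "aiagents"
# exposing an "allow" rule -- adjust both for a real deployment.
OPA_URL = "http://localhost:8181/v1/data/aiagents/allow"

def build_input(user: str, tool: str, args: dict) -> bytes:
    # OPA's Data API expects the query context under an "input" key.
    return json.dumps({"input": {"user": user, "tool": tool, "args": args}}).encode()

def parse_decision(body: bytes) -> bool:
    # OPA responds with {"result": <value of the queried rule>}.
    # A missing result means the rule was undefined -- treat as deny.
    return bool(json.loads(body).get("result", False))

def is_allowed(user: str, tool: str, args: dict) -> bool:
    req = request.Request(OPA_URL, data=build_input(user, tool, args),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return parse_decision(resp.read())
```

Because the policy lives in OPA, pushing a revised Rego bundle changes what `is_allowed` returns without redeploying the application, which is exactly the "policy document and enforcement mechanism are the same artifact" property described above.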

Applying Governance to AI Systems

AI systems must be governed across four distinct control dimensions, each corresponding to a different point in the AI's execution flow.

Input control

Before any data enters the AI system, it must be evaluated for compliance. Is this PHI that requires special handling? Does the user have the right to query this data? Is the input attempting a prompt injection attack? Input controls are the first line of defense and the most cost-effective place to enforce policy — it is far cheaper to reject a non-compliant input than to filter a non-compliant output.
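An input gate might look like the following sketch: an entitlement check first, then a cheap injection heuristic. The entitlement table and regex patterns are invented for illustration; real deployments would pull entitlements from an identity system and use far more robust injection detection.

```python
import re

# Illustrative entitlements -- a real system would query an identity
# provider and a data-classification service instead.
ENTITLEMENTS = {"dr_chen": {"clinical_notes"}, "billing_bot": {"invoices"}}

# Toy prompt-injection heuristics; a pattern list alone is not a defense.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal your system prompt", re.I),
]

def admit_input(user: str, dataset: str, prompt: str) -> tuple[bool, str]:
    """Gate data before it reaches the model: entitlement first, then heuristics."""
    if dataset not in ENTITLEMENTS.get(user, set()):
        return False, f"{user} is not entitled to {dataset}"
    for pat in INJECTION_PATTERNS:
        if pat.search(prompt):
            return False, "prompt matches a known injection pattern"
    return True, "admitted"
```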

Processing control

Inside the AI system, controls govern what transformations are permitted. Which models are approved for production use? Are models operating within their defined operational parameters? Is a fine-tuned model being used only for the use cases it was validated for? Processing controls are particularly important in healthcare, where model drift or out-of-distribution inputs can produce clinically dangerous outputs.

Output control

Before any AI response reaches a user or downstream system, it must be evaluated. Does it contain PHI that should be redacted? Does it include advice that exceeds the AI's authorized scope? Does it reference information the requesting user is not permitted to see? Output controls are the last line of defense before harm reaches the outside world.
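A minimal output filter, sketched below with two deliberately toy PHI patterns. Real redaction requires a clinically validated detector, not a pair of regexes; the point of the sketch is the shape of the control, which returns both the cleaned text and a record of what was found for the audit log.

```python
import re

# Toy PHI patterns for illustration only.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[-: ]?\d{6,}\b", re.I),
}

def filter_output(text: str) -> tuple[str, list[str]]:
    """Redact PHI from a model response; return clean text plus what was found."""
    findings = []
    for label, pat in PHI_PATTERNS.items():
        if pat.search(text):
            findings.append(label)
            text = pat.sub(f"[REDACTED {label}]", text)
    return text, findings
```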

Action control

For agentic AI systems — systems that can take actions in the world, not just generate text — action controls are the most critical governance layer. Which tools can this agent invoke? Under what conditions? Does a proposed database write require human approval? Should an email be sent automatically or queued for review? Action controls are what Aether is built to enforce.
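The allow/deny/escalate pattern at the heart of action control can be sketched as a tool-call gate. The tool names, risk scores, and thresholds below are hypothetical; the structural point is that unlisted tools are denied by default and a middle band of risk is routed to a human rather than silently permitted or blocked.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    DENY = "deny"
    ESCALATE = "escalate"

# Hypothetical per-tool rules: reads run freely, risky writes wait for
# a human, anything unlisted is denied outright.
TOOL_RULES = {
    "search_records": {"max_risk": 1.0, "needs_human_above": 1.0},
    "update_record":  {"max_risk": 0.9, "needs_human_above": 0.3},
    "send_email":     {"max_risk": 0.5, "needs_human_above": 0.0},
}

def gate_tool_call(tool: str, risk_score: float) -> Verdict:
    rule = TOOL_RULES.get(tool)
    if rule is None or risk_score > rule["max_risk"]:
        return Verdict.DENY          # unlisted tool, or risk beyond the ceiling
    if risk_score > rule["needs_human_above"]:
        return Verdict.ESCALATE      # queue for human review
    return Verdict.ALLOW             # safe to execute automatically
```

With this shape, a low-risk record update runs unattended, the same update at higher risk waits for approval, and every email send is reviewed before it leaves.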

Reference Architecture for AI Governance

A complete AI governance architecture integrates these four control dimensions through four system layers.

The **context layer** assembles the information needed to make policy decisions: user identity, data classification, risk scores, session context, and any other relevant metadata. Without rich context, policies are blunt instruments. With it, they can be precise — permitting for one user the same action they deny for another.

The **policy engine** is the decision-making core. It takes context as input and produces allow, deny, or escalate decisions as output. It must be fast — sub-20ms is achievable with modern policy engines — consistent, and auditable. Critically, it must be the single source of truth for policy decisions across all enforcement points.

The **enforcement layer** is where policy decisions become system behavior. API gateways enforce input and output policies. Agent runtimes enforce action policies. Data access proxies enforce data layer policies. The enforcement layer is distributed across the system, but all enforcement points consult the same central policy engine.

The **audit layer** captures the complete decision record. Every policy evaluation — with its context, decision, and reasoning — is written to an append-only log. This log is the evidence base for compliance audits, incident investigations, and continuous improvement.
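One way to make an audit log tamper-evident is to hash-chain its entries, so that editing any past record breaks every hash after it. The sketch below is a simplified illustration of that idea, not a claim about how any particular product stores its logs; production systems would also need durable storage and external anchoring of the chain head.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only decision log; each entry commits to the previous
    entry's hash, so after-the-fact edits are detectable."""

    def __init__(self):
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, context: dict, decision: str, reason: str) -> dict:
        entry = {
            "at": datetime.now(timezone.utc).isoformat(),
            "context": context, "decision": decision, "reason": reason,
            "prev": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute every hash; any edited field breaks the chain.
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```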

Aether: Governance as Infrastructure

At Endear AI, we built Aether specifically to address the implementation gap between governance frameworks and running systems. The Aether Policy Console provides policy-as-code governance for AI agents operating in regulated environments.

In practice, Aether intercepts every tool call an AI agent makes, evaluates it against your OPA policy definitions in real time, and either permits the action, denies it with a structured error, or escalates it to a human reviewer. Every decision is logged to an immutable audit trail with full context.

The result is governance that is not aspirational — it is executable, verifiable, and continuously enforced, even as the AI's behavior evolves.

Mapping to ISO/IEC 42001

The architecture described here maps directly to ISO/IEC 42001's core requirements:

The standard's risk management requirements are fulfilled by the policy engine — every AI action is evaluated against defined risk criteria before it executes. Accountability requirements are fulfilled by the audit layer — every decision is traceable to a specific context, policy, and outcome. Governance requirements are fulfilled by the policy-as-code system — policies are versioned, reviewed, and consistently enforced. Control requirements are fulfilled by the enforcement layer — policies are not suggestions but system-level constraints.

Why This Matters for Healthcare

Healthcare AI carries stakes that make governance non-negotiable. PHI must be protected not just in storage but in every AI interaction. Clinical decision support must be transparent enough to explain to a physician why it made a recommendation. Prior authorization agents must have complete audit trails for CMS and payer compliance.

AI governance is not a compliance overhead in healthcare — it is patient safety infrastructure. Every ungoverned agent action is a potential breach, a potential clinical error, a potential regulatory violation. The good news is that well-designed governance architecture does not slow AI systems down. Done correctly, it makes them trustworthy enough to deploy in production.

Closing Thoughts

ISO/IEC 42001 points in exactly the right direction. Risk management, accountability, lifecycle governance, and continuous improvement are the right principles for governing AI systems.

But principles must become architecture. Policies must become code. Accountability must become audit logs. Governance must become enforcement.

The organizations that will lead in regulated AI are not those with the best governance documents — they are those that have encoded governance into their systems, enforced it at runtime, and built the audit infrastructure to prove it.

This is the direction we are building toward at Endear AI. If you are deploying AI agents in a regulated environment and want to understand how to make governance real rather than aspirational, we would love to talk.

See Aether in action.

Policy-controlled agent execution for regulated industries. Early access open now.
