
AI Governance Doesn't End at the Model: The Last Mile Is in the Decision

It's not enough to govern the model or its use. True governance occurs the moment a recommendation becomes a choice and produces consequences.

Matias Rein
Corporate Insights
April 1, 2026
7 min read

The debate on Artificial Intelligence governance has matured significantly in recent years. There is now a substantial body of references dealing with risk management, human oversight, transparency, documentation, monitoring, and continuous improvement. The NIST AI RMF, the AI risk management framework of the United States National Institute of Standards and Technology, was created to help organizations identify, measure, treat, and monitor risks associated with AI systems. The European AI Act adopts a risk-based logic and requires, for high-risk systems, measures such as documentation, operation logs, adequate information for the end user, and appropriate human oversight. ISO/IEC 42001, the international management system standard for artificial intelligence, organizes the topic around policy, risk, data governance, transparency, monitoring, and continuous improvement.

All of this is important — yet it doesn’t exhaust the problem.

The point is that a large part of the debate remains focused on the AI system and its lifecycle, or at most on its responsible use, while the decisive question usually arises one step later:

"Was the decision that used artificial intelligence good, explainable, accountable, and connected to value?"

This is exactly where the idea of last-mile AI governance comes in.

Here, “last mile” is not a new regulatory label. It is a practical lens. It refers to the stretch between the output produced by an AI system and the decision effectively assumed by a person, an area, or an organization. It’s the point where recommendation turns into choice, choice turns into execution, and execution begins to produce economic, regulatory, operational, and reputational consequences.

In simple terms: it’s not enough to govern the model. It’s not enough to govern the use of AI. We also need to govern the decision that uses AI.

This formulation doesn’t contradict what already exists in the current debate. On the contrary, it follows from it. The OECD (Organisation for Economic Co-operation and Development) treats transparency, in its AI Principles, as the obligation to disclose when artificial intelligence is being used in a prediction, recommendation, decision, or interaction. And it treats explainability as the ability of stakeholders to understand how an outcome was produced and, where appropriate, to challenge it.

In other words: the debate itself already points to the decision. What is still missing, in many cases, is treating it as an explicit object of governance.


Where Risk Really Materializes

A model can be technically acceptable and, even so, the decision built from it can be bad.

This can happen for many reasons. The context may be incomplete. The data may be inadequate for that specific case. Real alternatives may not have been considered. There may be over-reliance on the recommendation the AI produced. It may be unclear who is accountable for the decision. And there may be no way to explain, later on, why the choice was made the way it was.

In regulated environments, this becomes even more evident. The Bank for International Settlements (BIS), when discussing the effects of AI in the financial sector, draws attention to topics such as explainability, data governance, and model risk management. The challenge is not just to make the system work. It is to guarantee that its use in relevant decisions does not produce undue opacity, fragile accountability, or the deterioration of prudential controls.

Therefore, the last mile should not be treated as an implementation detail. It is the point where four questions converge at once:

  • Epistemic: with what quality of evidence was this decision supported?
  • Fiduciary: who is accountable for it?
  • Regulatory: can this outcome be explained, audited, and challenged when necessary?
  • Economic: did this decision preserve or destroy value?

The contemporary debate on AI governance already provides important pieces of this puzzle. What still needs to mature is stitching those pieces together at the moment the organization acts on the AI’s output.


What Last-Mile Governance Tries to Solve

The governance of the last mile is not meant to replace model governance. Nor does it replace responsible AI use programs. It operates on another layer.

Its focus is on questions like:

  • What was the role of AI in this decision?
  • Did it come in as context, recommendation, prioritization, or evidence?
  • Were there real alternatives before the choice?
  • Was data quality sufficient for this specific decision?
  • Was the chain of responsibility clear?
  • Will an ex-post explanation be possible for an auditor, regulator, customer, patient, citizen, or board?
  • Was the decision monitored after execution?
  • Did the outcome generate learning, or just an immediate effect?

These questions are not merely philosophical. They have operational consequences. In choices about pricing, credit, fraud, logistics, triage, procurement, health, public policy, or resource allocation, the absence of clear answers to these questions opens the door to three classic problems: over-reliance on automation, opacity regarding responsibility, and difficulty demonstrating due diligence after the fact.
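
To make these questions concrete, it helps to imagine them as fields in an explicit decision record, captured at the moment of choice rather than reconstructed afterwards. The sketch below is illustrative only: the `DecisionRecord` structure and its field names are assumptions made for this article, not part of any framework cited here.

```python
# A minimal sketch of a "last-mile" decision record, assuming a simple
# in-house schema. Names are illustrative, not drawn from any standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class AIRole(Enum):
    """How the AI output entered the decision."""
    CONTEXT = "context"
    RECOMMENDATION = "recommendation"
    PRIORITIZATION = "prioritization"
    EVIDENCE = "evidence"


@dataclass
class DecisionRecord:
    decision_id: str
    decision_owner: str                 # who is accountable for the choice
    ai_role: AIRole                     # role the AI output played
    model_reference: str                # which system/version produced the output
    alternatives_considered: list[str]  # real options that were on the table
    data_quality_note: str              # was the data sufficient for this case?
    rationale: str                      # the ex-post explainable justification
    outcome_note: str | None = None     # filled after execution: the value tie
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# Hypothetical example: a credit-limit decision that used an AI recommendation.
record = DecisionRecord(
    decision_id="CRD-2026-0412",
    decision_owner="credit.committee@example.com",
    ai_role=AIRole.RECOMMENDATION,
    model_reference="credit-scoring-v3.2",
    alternatives_considered=["approve at lower limit", "decline", "manual review"],
    data_quality_note="Bureau data current; income self-reported, not verified.",
    rationale="Score above policy threshold; committee confirmed after income check.",
)
```

Nothing in this sketch is prescriptive. The point is only that each question above maps to something someone must record before the decision takes effect, which is what turns an ex-post explanation into a lookup rather than a reconstruction.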


Three Objections That Need Addressing

A thesis like this only holds up if it tackles the most predictable objections right away.

1. "But this is already covered by human oversight"

Partly, yes. But human oversight alone does not solve the decision governance problem. A human might supervise poorly, or merely rubber-stamp something they didn’t really understand. The NIST AI RMF emphasizes the need to make decision-making processes and human roles more explicit. In other words: human oversight is a necessary condition, but in isolation it isn’t sufficient.

2. "This replaces model governance"

It doesn't. Model governance remains indispensable. Development, validation, monitoring, and technical explainability stay central. The Federal Reserve's SR 11-7 guidance remains current. The point is different: even with good model governance, governing the decision built upon it remains necessary.

3. "More governance will just stall operations"

The concern is understandable, but it applies only when governance is poorly designed. The serious response is proportionality: not every decision requires the same level of rigor or review. Good governance should follow the same logic as the European AI Act: more rigor where there is more impact, less friction where the risk is lower. The problem isn’t governing too much; the problem is governing poorly.
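
To illustrate that proportionality logic, consider a hypothetical tiering rule in the spirit of a risk-based approach. The tiers and control sets below are invented for this example; they do not reproduce the AI Act’s actual risk categories.

```python
# Illustrative proportionality rule: more controls where the impact is
# higher, less friction where the risk is lower. Tiers and control sets
# are hypothetical, not drawn from any regulation.
from enum import Enum


class ImpactTier(Enum):
    LOW = "low"        # e.g., internal prioritization, easily reversed
    MEDIUM = "medium"  # e.g., pricing within pre-approved bands
    HIGH = "high"      # e.g., credit denial, clinical triage, public policy


REQUIRED_CONTROLS: dict[ImpactTier, list[str]] = {
    ImpactTier.LOW: ["decision log entry"],
    ImpactTier.MEDIUM: ["decision log entry", "named decision owner"],
    ImpactTier.HIGH: [
        "decision log entry",
        "named decision owner",
        "documented alternatives",
        "independent second review",
        "post-execution outcome check",
    ],
}


def controls_for(tier: ImpactTier) -> list[str]:
    """Return the minimum control set for a given decision impact tier."""
    return REQUIRED_CONTROLS[tier]


print(controls_for(ImpactTier.HIGH))
```

The design intent is simple: the high tier carries the full record-keeping burden, while the low tier asks for nothing more than a log entry, so governance scales with impact instead of imposing uniform friction.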


What Changes in the Current Debate

The contribution of the “last mile” idea is not to create an artificial dispute with the existing debate. It is to show that the debate remains incomplete when it stops at the system.

Today we’ve made good progress regarding risk, transparency, and accountability. The next step is to ask, with more operational precision: how do we govern the decision that uses artificial intelligence?

  • Who signs off on it?
  • With what evidence?
  • With what alternatives?
  • Under what criteria?
  • With what capability for ex-post explanation?
  • With what tie to a real outcome?

This might be the least spectacular layer of the debate — and one of the most important. Because economic, fiduciary, and regulatory risks aren’t ultimately resolved in the model.

They are resolved in the decision.

Conclusion

Governing artificial intelligence is necessary.
Governing the decision that uses artificial intelligence is becoming indispensable.

Because the risk doesn't end in the system. It doesn't end in the recommendation. It materializes when someone decides. And it's exactly there that last-mile governance gains its relevance.

References

1. NIST. AI RMF — AI Risk Management Framework. nist.gov

2. European Commission. European AI Act. digital-strategy.ec.europa.eu

3. ISO. ISO/IEC 42001 — Information technology — Artificial intelligence — Management system. iso.org

4. OECD. OECD AI Principles. oecd.org

5. Federal Reserve. SR 11-7 — Guidance on Model Risk Management. federalreserve.gov

6. BIS. Regulatory and prudential challenges of AI in finance. bis.org