
Who Authorised the Algorithm? Agentic AI and Board Accountability

  • Writer: GBS Bindra

Agentic artificial intelligence (AI) systems capable of autonomously planning and executing decisions without real-time human intervention are increasingly embedded in core organisational operations. While technical and ethical dimensions of AI have received substantial attention, comparatively little focus has been placed on the implications of Agentic AI for corporate governance and board accountability. Most board-approved AI governance frameworks continue to assume human-in-the-loop oversight for high-risk decisions, an assumption that no longer holds in production environments where decisions are executed at machine speed.


This article argues that Agentic AI introduces a fundamentally new governance problem: the delegation of decision authority to non-human actors. Drawing on corporate governance theory, regulatory guidance, and the legal treatment of autonomous vehicles, the article demonstrates that accountability failures are more likely to arise from undocumented authority than from technical malfunction. It proposes a board-level governance architecture based on explicit delegation, named accountability, and pre-deployment authorisation of autonomous decision boundaries. The article concludes that Agentic AI should be treated as a fiduciary issue (engaging the legal obligation to act in the organisation's best interests) requiring direct board engagement, rather than as a subordinate technology risk.





1. Introduction


Artificial intelligence has traditionally been conceptualised within organisations as a decision-support tool. Under this paradigm, human actors retained final authority, with AI systems providing analysis, recommendations, or automation within predefined workflows. Corporate governance frameworks, model risk policies, and accountability mechanisms have evolved in line with this assumption.


Agentic AI disrupts this model. Unlike traditional automation, agentic systems are designed to observe their environment, plan actions, select among alternatives, and execute decisions autonomously across digital or physical systems. These systems operate continuously and at speeds that preclude real-time human intervention.


This development raises a governance question that is distinct from algorithmic accuracy, bias, or explainability:


Who is authorised to permit an autonomous system to make consequential decisions without human approval?

This question sits squarely within the remit of corporate boards.


2. Agentic AI and the Limits of Human-in-the-Loop Governance


Most organisational AI policies incorporate some notion of human oversight, particularly for high-risk decisions. International frameworks such as the OECD Principles on Artificial Intelligence emphasise human-centred design and accountability (OECD, 2019). However, regulators increasingly acknowledge that human-in-the-loop controls are infeasible in many operational contexts.


Common examples include:


  • Real-time fraud detection systems blocking transactions

  • Automated credit decisioning via application programming interfaces

  • Algorithmic trading systems executing at sub-second intervals

  • Customer service systems resolving disputes autonomously


Consider a concrete example: a real-time fraud detection system must decide within milliseconds whether to block a $50,000 wire transfer flagged as potentially suspicious. The system cannot wait for human review; by the time an analyst sees the alert, a fraudulent transaction would already be complete. Yet if the system blocks a legitimate business payment, it damages customer relationships, disrupts operations, and may breach service level agreements or regulatory obligations. These high-stakes decisions happen thousands of times daily across financial institutions, with no human involvement until after outcomes materialise. The customer never knows their transaction was evaluated; they only experience approval or denial.
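To see how compressed the decision window is, the following minimal Python sketch resolves a transfer autonomously and only writes to an analyst queue once the decision has already been executed. The thresholds, field names, and the notify_analyst_queue stub are assumptions for illustration, not features of any real fraud platform.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical illustration only: threshold and fields are assumptions,
# not taken from any real fraud-detection product.
BLOCK_THRESHOLD = 0.85  # fraud score above which the transfer is blocked


@dataclass
class WireTransfer:
    transfer_id: str
    amount_usd: float
    fraud_score: float  # produced upstream by a scoring model


def notify_analyst_queue(transfer: WireTransfer, decision: str) -> None:
    """Stand-in for the after-the-fact alert an analyst eventually sees."""
    print(f"{datetime.now(timezone.utc).isoformat()} "
          f"{transfer.transfer_id}: {decision} (score={transfer.fraud_score:.2f})")


def decide(transfer: WireTransfer) -> str:
    """Executes the block/approve decision at machine speed."""
    decision = "BLOCK" if transfer.fraud_score >= BLOCK_THRESHOLD else "APPROVE"
    # The analyst queue is written to only after execution: any human review
    # arrives once the outcome has already materialised.
    notify_analyst_queue(transfer, decision)
    return decision


if __name__ == "__main__":
    # Blocked before any human sees the alert.
    print(decide(WireTransfer("wt-001", 50_000.0, 0.91)))
```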


In these environments, human oversight is necessarily retrospective rather than contemporaneous. Decisions are executed before any meaningful review can occur. Governance frameworks that assume prior human approval therefore mischaracterise how authority is exercised in practice.


The result is a structural accountability gap: decisions are made autonomously, yet the authority for those decisions is often implicit rather than formally delegated.


3. Delegated Authority as a Core Governance Concept


Delegation of authority is a foundational concern of corporate governance. Boards routinely approve:


  • Credit approval limits

  • Trading mandates

  • Capital allocation thresholds

  • Risk appetite statements


In each case, authority is explicitly delegated, bounded, and assigned to named individuals or functions. Accountability exists even when no further approvals are sought for individual decisions.


Agentic AI introduces an analogous situation: authority is effectively delegated to a system rather than a person. However, in many organisations this delegation occurs informally, through technical deployment rather than explicit governance approval.


This omission is not merely procedural. It undermines the board's ability to demonstrate fulfilment of fiduciary duties related to oversight, control, and risk management.


4. Lessons from Autonomous Vehicle Liability Frameworks


The legal treatment of autonomous vehicles provides a useful comparative framework for understanding how responsibility for machine autonomy is allocated.


When autonomous vehicles have caused loss of life, legal systems have generally not imposed automatic criminal liability on boards or directors. Instead, liability has been allocated through:

  • Civil product liability and negligence claims against manufacturers or operators

  • Mandatory insurance regimes providing victim compensation

  • Regulatory sanctions for safety or compliance failures

  • Individual criminal liability only where personal culpability can be demonstrated


This approach reflects a policy consensus that harm caused by autonomous systems should be addressed primarily through organisational accountability, provided that authority to deploy the system was lawfully exercised (UK Law Commission, 2022).


Crucially, autonomous vehicle regimes formalise delegation. Authority for autonomous operation is explicitly regulated, documented, insured, and auditable. The governance challenge is therefore not whether machines may decide, but under what authorised conditions they may do so.


5. Why Agentic AI Governance Lags Behind


By contrast, Agentic AI deployments within enterprises often lack equivalent formalisation. Several factors contribute to this governance lag:


  • Autonomous behaviour emerges incrementally through system integration

  • Vendor-supplied platforms embed autonomy by default

  • Governance frameworks prioritise model validation over decision authority

  • Board-level reporting aggregates AI risk, obscuring decision-level autonomy


International regulators have begun to highlight these issues. The Financial Stability Board identifies governance and accountability gaps as key systemic risks arising from AI adoption (Financial Stability Board, 2024). Supervisory guidance in the United States similarly emphasises lifecycle accountability and senior management responsibility for model-based decisions (Board of Governors of the Federal Reserve System, 2011).


6. Agentic AI as a Board-Level Fiduciary Issue


Boards have a fiduciary duty to ensure that authority within the organisation is properly delegated and controlled. This duty does not disappear because decisions are made by machines rather than humans.


When an Agentic AI system makes a consequential decision autonomously, the relevant governance question is not whether the system acted correctly, but whether the organisation formally authorised it to act at all.


Failure to address this question explicitly exposes boards to regulatory findings grounded not in technical failure but in governance insufficiency. Boards unable to demonstrate an explicit authorisation framework may face regulatory sanctions, personal liability exposure, or mandatory remediation orders, not because their AI systems produced incorrect decisions, but because they never established governance authority for those decisions in the first place.


7. A Board-Level Governance Architecture for Agentic AI


If Agentic AI is understood as a mechanism for delegating decision authority, then its governance must be structured analogously to other forms of delegated authority overseen by boards. This requires a shift from model-centric governance to decision-centric governance, where the primary unit of oversight is not the algorithm but the decision rights it exercises.

This section proposes a governance architecture composed of five interlocking elements that together enable boards to discharge their fiduciary duties in environments where autonomous systems act without contemporaneous human approval.


Governance Architecture Summary

  • Decision Authority Classification. Requires: an inventory of autonomous decisions by materiality, reversibility, and velocity. Board role: approve classification criteria and review the inventory.

  • Explicit Delegation of Authority. Requires: an AI Decision Authority Statement (DAS) for each system. Board role: approve each DAS with defined limits.

  • Named Human Accountability. Requires: executive ownership of autonomous decision outcomes. Board role: assign accountability to named individuals.

  • Escalation Mechanisms. Requires: kill-switches, triggers, and override procedures. Board role: approve intervention rules and test protocols.

  • Continuous Auditability. Requires: decision logging and an evidence framework. Board role: review evidence standards and audit results.

7.1 Decision Authority Classification


The foundational governance task is to identify and classify autonomous decision-making authority. Boards cannot govern what is not explicitly enumerated.

Organisations should maintain a structured inventory of AI-enabled decisions that are executed autonomously, classified along at least three dimensions:


  1. Materiality – financial, customer, safety, and regulatory impact

  2. Reversibility – whether decisions can be meaningfully undone after execution

  3. Velocity – the time window within which human intervention is possible


This reframes AI risk from a technical attribute to a governance attribute. Decisions that are high-impact, irreversible, and executed at machine speed should be treated as governance-critical regardless of model complexity.

For boards, this inventory functions as a delegation-of-authority register for non-human actors.
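As a minimal sketch of what such a register could look like in practice, the following Python fragment classifies entries along the three dimensions above. The field names, scoring scale, and governance_critical rule are illustrative assumptions rather than a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum


class Reversibility(Enum):
    REVERSIBLE = "reversible"              # can be meaningfully undone after execution
    PARTIALLY_REVERSIBLE = "partial"
    IRREVERSIBLE = "irreversible"


@dataclass
class AutonomousDecision:
    """One entry in the delegation-of-authority register for non-human actors."""
    decision_id: str
    description: str
    materiality: int                # 1 (low) to 5 (high): financial, customer, safety, regulatory impact
    reversibility: Reversibility
    intervention_window_s: float    # velocity: seconds available for human intervention

    def governance_critical(self) -> bool:
        """High-impact, irreversible, machine-speed decisions are governance-critical."""
        return (self.materiality >= 4
                and self.reversibility is Reversibility.IRREVERSIBLE
                and self.intervention_window_s < 1.0)


# Hypothetical register entries for illustration.
register = [
    AutonomousDecision("FRAUD-BLOCK", "Block flagged wire transfers", 4,
                       Reversibility.PARTIALLY_REVERSIBLE, 0.05),
    AutonomousDecision("CREDIT-DECLINE", "Decline credit applications via API", 5,
                       Reversibility.IRREVERSIBLE, 0.5),
]

for entry in register:
    print(entry.decision_id, "governance-critical:", entry.governance_critical())
```

However the register is represented, the point is that each entry names a decision right rather than a model, so the board is reviewing delegated authority rather than algorithms.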


7.2 Explicit Delegation of Authority


Once autonomous decisions are identified, boards must explicitly determine which may be delegated to agentic systems and under what conditions.

This requires formal approval of an AI Decision Authority Statement (DAS) for each agentic system or decision class. At a minimum, the DAS should specify:


  • The exact decisions authorised for autonomous execution

  • Quantitative thresholds (e.g., value limits, exposure caps, confidence levels)

  • Qualitative constraints and excluded scenarios

  • Environmental assumptions under which autonomy is valid

  • Conditions that automatically suspend autonomous operation


Example DAS Elements for Fraud Detection System:

  • Authorised decision: Block wire transfers flagged with fraud confidence score >85%

  • Value threshold: Automatic blocking authorised up to $100,000; amounts above require escalation

  • Excluded scenarios: Payroll transactions, government payments, previously whitelisted recipients

  • Environmental assumptions: Model trained on transaction data less than 90 days old

  • Suspension triggers: Model accuracy falls below 92%, or more than 5% of blocks are reversed within 24 hours

This step converts de facto autonomy into de jure delegation (formally authorised delegation). Without it, agentic AI operates as an undocumented authority structure, undermining board oversight.
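A DAS of this kind can also be captured as a machine-checkable record held alongside the board approval. The Python sketch below mirrors the hypothetical fraud-detection example above; all values, field names, and the permits check are assumptions for illustration, not recommended limits.

```python
from dataclasses import dataclass, field


@dataclass
class DecisionAuthorityStatement:
    """Board-approved authority boundaries for one agentic system or decision class."""
    system: str
    authorised_decision: str
    max_autonomous_value_usd: float          # above this, escalate to a human
    min_confidence: float                    # quantitative threshold for acting alone
    excluded_scenarios: list[str] = field(default_factory=list)
    suspension_triggers: list[str] = field(default_factory=list)

    def permits(self, value_usd: float, confidence: float, scenario: str) -> bool:
        """True only if the proposed autonomous action falls inside the delegated authority."""
        return (value_usd <= self.max_autonomous_value_usd
                and confidence >= self.min_confidence
                and scenario not in self.excluded_scenarios)


fraud_das = DecisionAuthorityStatement(
    system="fraud-detection",
    authorised_decision="Block wire transfers flagged as suspicious",
    max_autonomous_value_usd=100_000,
    min_confidence=0.85,
    excluded_scenarios=["payroll", "government_payment", "whitelisted_recipient"],
    suspension_triggers=["accuracy_below_92pct", "reversal_rate_above_5pct_24h"],
)

print(fraud_das.permits(value_usd=50_000, confidence=0.91, scenario="supplier_payment"))   # True
print(fraud_das.permits(value_usd=150_000, confidence=0.91, scenario="supplier_payment"))  # False: escalate
```

Expressing the DAS this way has a practical benefit: the same boundaries the board approves can be enforced, versioned, and audited in the system itself.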


7.3 Named Human Accountability


Delegation does not eliminate accountability; it reallocates it.


For each category of autonomous decision, boards should require a named senior executive who is accountable for outcomes arising from those decisions, irrespective of whether any human intervenes at the moment of execution.


This mirrors long-established governance practices.


Just as a Chief Risk Officer or Head of Credit is accountable for credit decisions made by loan officers operating within board-approved limits, even though that executive does not review individual loans, a named executive should be accountable for autonomous decisions made by agentic AI systems operating within approved parameters.

In both cases:


  • Authority is explicitly delegated in advance

  • Boundaries and limits are formally approved

  • Oversight is continuous but not transactional

  • Accountability attaches to the delegation, not to each individual act


Comparable parallels include heads of trading overseeing algorithmic strategies and operations executives overseeing automated processing systems.


7.4 Escalation, Override, and Kill-Switch Mechanisms

Autonomy without enforceable limits constitutes abdication rather than delegation.


Boards should require agentic systems to include mandatory escalation and intervention mechanisms that are rule-based, pre-defined, and tested. Escalation triggers may include:


  • Confidence or uncertainty thresholds

  • Model drift or data anomalies

  • Breach of economic or exposure limits

  • Detection of novel or out-of-distribution scenarios


Escalation pathways must route to named individuals or roles with defined response obligations. Kill-switches and rollback procedures should be treated as safety-critical controls and tested through simulations and drills.
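One way to make these triggers testable in drills is to encode them as explicit, pre-defined rules that route to named roles. The Python sketch below is an assumption-laden illustration: the metric names, thresholds, and escalation routes are hypothetical, and a production kill-switch would be considerably more involved.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class EscalationTrigger:
    name: str
    condition: Callable[[dict], bool]   # evaluated against current system metrics
    route_to: str                       # named role with a defined response obligation


# Hypothetical triggers mirroring the categories listed above.
TRIGGERS = [
    EscalationTrigger("low_confidence", lambda m: m["confidence"] < 0.85, "Head of Fraud Operations"),
    EscalationTrigger("model_drift", lambda m: m["drift_score"] > 0.2, "Chief Risk Officer"),
    EscalationTrigger("exposure_breach", lambda m: m["daily_blocked_usd"] > 5_000_000, "Chief Risk Officer"),
]


def evaluate(metrics: dict) -> list[EscalationTrigger]:
    """Returns the triggers that fire against the current metrics."""
    return [t for t in TRIGGERS if t.condition(metrics)]


def kill_switch(fired: list[EscalationTrigger]) -> bool:
    """Any firing trigger suspends autonomous operation; exercised in simulations and drills."""
    return len(fired) > 0


metrics = {"confidence": 0.79, "drift_score": 0.05, "daily_blocked_usd": 1_200_000}
fired = evaluate(metrics)
if kill_switch(fired):
    for t in fired:
        print(f"SUSPEND autonomy: {t.name} -> escalate to {t.route_to}")
```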


7.5 Pre-Deployment Evidence and Continuous Auditability

Governance of Agentic AI cannot rely on post-incident reconstruction. Boards must insist on evidence established before deployment that autonomy was authorised, bounded, and monitored.


Organisations should maintain continuous evidence, including:

  • Decision inputs and environmental context

  • System outputs and confidence indicators

  • Applicable authority boundaries at the time of decision

  • Outcomes and remediation actions


This enables regulators to assess whether systems acted within approved authority and allows boards to demonstrate that fiduciary duties were discharged both before deployment and throughout operation.
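As an illustration of what continuous evidence could look like at the point of execution, the following Python sketch emits an append-only decision record covering the four categories above. The log format, file destination, and field names are assumptions rather than a prescribed audit schema.

```python
import json
from datetime import datetime, timezone


def log_decision(decision_id: str, inputs: dict, output: str,
                 confidence: float, authority_ref: str, outcome: str = "pending") -> str:
    """Append-only evidence record written at the moment of autonomous execution."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision_id": decision_id,
        "inputs": inputs,                  # decision inputs and environmental context
        "output": output,                  # system output
        "confidence": confidence,          # confidence indicator
        "authority_ref": authority_ref,    # DAS version in force at decision time
        "outcome": outcome,                # updated later with outcomes and remediation actions
    }
    line = json.dumps(record, sort_keys=True)
    # In practice this would be an immutable, access-controlled store rather than a local file.
    with open("decision_audit.log", "a") as f:
        f.write(line + "\n")
    return line


print(log_decision("FRAUD-BLOCK-000123",
                   {"amount_usd": 50_000, "channel": "wire"},
                   output="BLOCK", confidence=0.91,
                   authority_ref="DAS-fraud-detection-v1.2"))
```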


7.6 Why This Architecture Is a Board Responsibility


Each element of this architecture involves decisions that cannot be fully delegated to management:


  • Determining which decisions warrant autonomous execution

  • Accepting residual risk associated with autonomy

  • Assigning accountability for non-human actors

  • Approving limits on delegated authority


These are matters of organisational power, risk appetite, and fiduciary responsibility. Treating agentic AI governance as a subset of IT governance obscures this reality and increases the likelihood of governance-based regulatory findings.


8. Regulatory Implications


The European Union's Artificial Intelligence Act explicitly requires governance, risk management, and human oversight for high-risk AI systems (European Commission, 2024). Similar expectations are emerging through supervisory practice in the United States, even in the absence of a comprehensive AI statute.

Boards in India or anywhere in the world that cannot demonstrate explicit delegation and accountability for agentic AI decisions are therefore likely to face regulatory scrutiny grounded in governance failure rather than technical deficiency.


9. Conclusion


Agentic AI represents a transition from decision-support tools to delegated decision-makers. Legal systems have demonstrated, through autonomous vehicle regulation, that machine autonomy can be governed effectively when authority and accountability are explicit.


For corporate boards, the central challenge is not whether AI decisions are accurate or ethical, but whether the organisation has formally authorised those decisions to occur. The most significant governance failures associated with agentic AI will arise not from algorithmic error, but from the absence of documented authority.

 
