
Agency Law Meets Artificial Agents: Old Doctrine, New Questions

Jonathan Siegel · 2025

The word "agent" in "AI agent" is not accidental. The developers and researchers who chose this terminology were drawing, consciously or not, on a legal concept with centuries of doctrinal development. Agency law—the body of law governing the relationship between principals and their agents—provides the most natural legal framework for analyzing the actions of AI agents.

But the fit is imperfect. Agency law was developed for human agents: people who can understand instructions, exercise judgment, be held personally liable, and testify about their actions. AI agents share some of these characteristics but lack others. Understanding where agency law fits and where it breaks down is essential for anyone building, deploying, or litigating around agentic systems.

The Restatement framework

The Restatement (Third) of Agency, published by the American Law Institute in 2006, defines agency as "the fiduciary relationship that arises when one person (a 'principal') manifests assent to another person (an 'agent') that the agent shall act on the principal's behalf and subject to the principal's control, and the agent manifests assent or otherwise consents so to act." (Restatement (Third) of Agency § 1.01.)

Several elements of this definition map onto the AI agent context:

  • Acting on behalf of the principal. An AI agent acts on behalf of its operator or user. This is the fundamental purpose of the system.
  • Subject to the principal's control. The operator controls the agent through system prompts, tool access grants, guardrails, and deployment parameters; a minimal configuration sketch follows this list.
  • Consent. This is where the analogy begins to strain. The Restatement requires that the agent "manifest assent" to the relationship. An AI system does not "consent" in any meaningful sense. It is deployed.
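
To make the control element concrete, here is a minimal sketch of the levers an operator actually holds. Everything in it (the AgentDeployment class, the tool names, the limits) is a hypothetical illustration, not any vendor's actual API:

```python
# Hypothetical sketch of operator control over a deployed agent.
from dataclasses import dataclass, field

@dataclass
class AgentDeployment:
    system_prompt: str                       # standing instructions to the model
    allowed_tools: set[str] = field(default_factory=set)   # tool access grants
    max_refund_usd: float = 0.0              # a guardrail bounding discretion
    human_approval_tools: set[str] = field(default_factory=set)  # escalation points

support_agent = AgentDeployment(
    system_prompt=(
        "You are a customer-service assistant for Acme Air. "
        "Answer only questions about bookings and refunds."
    ),
    allowed_tools={"lookup_booking", "issue_refund"},
    max_refund_usd=500.0,
    human_approval_tools={"issue_refund"},
)
```

Each field is a manifestation of control in the Restatement's sense: the standing instructions, the tools the agent may touch, and the parameters that bound what it can do without a human in the loop.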

The Restatement also requires that the agent be a "person." Under the Restatement's definition, this means a natural or juridical person—a human or a legal entity like a corporation. An AI system is neither. This is the most fundamental obstacle to applying the Restatement directly to AI agents.

Where the doctrine fits

Despite these limitations, several agency law doctrines are directly applicable to the AI agent context, even if the AI agent itself is not technically an "agent" under the Restatement:

Apparent authority

Under the doctrine of apparent authority, a principal is bound by the acts of an agent if the principal's conduct causes a third party to reasonably believe that the agent has authority to act on the principal's behalf. (Restatement (Third) of Agency § 2.03.)

This doctrine is powerful in the AI context. When an operator deploys a customer-service agent on its website, third parties reasonably believe the agent speaks for the company. If the agent makes a representation—about pricing, refund policy, delivery times—the third party's reliance may bind the operator, regardless of whether the agent's specific output was "authorized" by the system prompt.

This is essentially the result in Moffatt v. Air Canada, 2024 BCCRT 149. British Columbia's Civil Resolution Tribunal framed the claim as negligent misrepresentation rather than apparent authority, but the logic is the same: the chatbot appeared to speak for the airline, the customer reasonably relied on its representation of the bereavement-fare policy, and the airline was held to it.

Scope of employment / scope of authority

Under respondeat superior, an employer is liable for the torts of an employee committed within the scope of employment. (Restatement (Third) of Agency § 2.04.) The contract-side counterpart is scope of authority: the principal is bound by what the agent was authorized to do, plus acts that are incidental to the authorized activity.

For AI agents, the scope of authority is defined by the system prompt, the tools granted, and the deployment context. Actions within this scope—even if the specific outputs were not intended—may bind the operator. Actions outside this scope—for example, if an agent that was deployed for customer service suddenly began executing financial trades—would likely fall outside the scope of authority.
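
In code, that scope can be enforced as a gate between what the model proposes and what actually executes. The sketch below is illustrative only; the registry, tool names, and grant set are assumptions rather than any real framework:

```python
# Sketch: enforcing scope of authority as a gate on tool execution.
from typing import Any, Callable

# The grant set defines the agent's scope of authority in technical terms.
ALLOWED_TOOLS = {"lookup_booking", "issue_refund"}

# Hypothetical tool implementations.
TOOL_REGISTRY: dict[str, Callable[..., Any]] = {
    "lookup_booking": lambda ref: {"ref": ref, "status": "confirmed"},
    "issue_refund": lambda ref, amount: {"ref": ref, "refunded": amount},
}

def execute_tool_call(tool: str, **args: Any) -> Any:
    """Run a model-proposed tool call only if it falls within the grants."""
    if tool not in ALLOWED_TOOLS:
        # The technical analogue of an act outside the scope of authority:
        # refuse and surface for review instead of letting the agent act.
        raise PermissionError(f"{tool!r} is outside this agent's authority")
    return TOOL_REGISTRY[tool](**args)

print(execute_tool_call("lookup_booking", ref="ABC123"))  # within scope: runs
# execute_tool_call("execute_trade", symbol="XYZ")        # raises PermissionError
```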

Ratification

Under the doctrine of ratification, a principal who learns of an agent's unauthorized act and fails to disavow it may be deemed to have ratified the act. (Restatement (Third) of Agency § 4.01.)

This doctrine has significant implications for operators who discover that their AI agent has taken unauthorized actions. If the operator learns of the agent's conduct and does not promptly repudiate it, the operator may be treated as having ratified the agent's action. This creates an obligation to monitor agent behavior and to respond quickly when the agent exceeds its authority.
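
Operationally, this points toward an append-only log of agent actions and a routine review that surfaces out-of-scope acts for prompt disavowal. A minimal sketch, with an assumed log schema and illustrative names:

```python
# Sketch: reviewing the agent's action log to avoid ratification by silence.
ALLOWED_TOOLS = {"lookup_booking", "issue_refund"}

def out_of_scope_actions(action_log: list[dict]) -> list[dict]:
    """Surface actions outside the agent's authority so the operator can
    disavow them promptly rather than silently ratify them."""
    return [a for a in action_log if a["tool"] not in ALLOWED_TOOLS]

log = [
    {"tool": "issue_refund", "args": {"ref": "ABC123", "amount": 120}},
    {"tool": "execute_trade", "args": {"symbol": "XYZ"}},  # never granted
]

for action in out_of_scope_actions(log):
    print("REPUDIATE:", action)  # e.g. notify the counterparty, reverse the act
```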

Where the doctrine breaks

Personal liability of the agent

Under traditional agency law, the agent—as a person—can be held personally liable for torts and, in some cases, for breach of the agent's fiduciary duties to the principal. An AI agent cannot be held personally liable because it is not a legal person. This eliminates one of the mechanisms by which agency law distributes risk and creates incentives for careful behavior.

Fiduciary duty

The agency relationship is fundamentally fiduciary: the agent owes duties of loyalty, obedience, and care to the principal. An AI system does not owe fiduciary duties in any legally cognizable sense. It does not have interests, it cannot be disloyal, and it cannot be held accountable for failures of care.

This gap raises the question of whether the operator owes a duty to third parties to ensure that the AI agent behaves as a faithful fiduciary would—in effect, a duty to impose fiduciary-like behavior on the AI system through technical means.

Undisclosed principal

Under the doctrine of the undisclosed principal, when an agent acts for a principal whose existence is not disclosed to the third party, both the agent and the principal may be liable. When an AI agent interacts with a third party who does not know they are dealing with an AI, is the operator an "undisclosed principal"?

This question intersects with the growing regulatory requirement for AI disclosure—the obligation to inform people when they are interacting with an AI system. If an operator deploys an AI agent without disclosing its AI nature, the undisclosed-principal doctrine could provide an additional basis for liability.
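
On the technical side, the mitigation is straightforward: disclose the agent's AI nature before any substantive exchange. A minimal sketch, with illustrative wording and a hypothetical session hook:

```python
# Sketch: disclosing the agent's AI nature at session start, which both
# addresses emerging disclosure mandates and undercuts an
# undisclosed-principal theory. Wording and names are illustrative.
AI_DISCLOSURE = (
    "You are chatting with an automated assistant operated by Acme Air. "
    "Responses are generated by an AI system, not a human representative."
)

def open_session(send_to_user) -> None:
    """Deliver the disclosure before any substantive exchange begins."""
    send_to_user(AI_DISCLOSURE)
    # ... hand off to the agent's conversation loop ...

open_session(print)
```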

The Restatement's unstated assumption

The deepest challenge in applying agency law to AI agents is not doctrinal but conceptual. Agency law assumes that the agent is a moral actor—a being capable of understanding its obligations, exercising judgment, and bearing consequences. The entire framework of fiduciary duty, consent, and personal liability is built on this assumption.

AI agents are not moral actors. They do not understand obligations. They do not exercise judgment in the sense that agency law contemplates. And they cannot bear consequences.

This does not mean that agency law is irrelevant to AI agents. It means that agency law provides a starting point—a vocabulary and a set of analytical tools—but it will need to be adapted, supplemented, or replaced by new frameworks designed for the specific characteristics of autonomous AI systems.

The adaptation is already underway, in courtrooms, in legislatures, and in the practices of organizations deploying agentic systems. At Attorney for Agents LLP, we help our clients navigate this evolving landscape with the rigor it demands.

