Case Law Survey

What the Courts Are Actually Saying About AI

Matias Bebeni & Jonathan Siegel · 2026

The judicial system is beginning to confront AI in earnest. The cases decided so far do not address agentic liability directly—no court has yet ruled on the liability of an operator whose AI agent autonomously caused harm in a complex multi-party system. But the decisions that do exist reveal how courts are thinking about AI, and those patterns are instructive for predicting how agentic liability will be analyzed when it arrives.

What follows is a survey of the most significant judicial decisions involving AI systems, organized by the legal question each case addresses.

Can AI be a legal actor?

Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022)

Stephen Thaler developed an AI system called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) and filed patent applications listing DABUS as the sole inventor. The USPTO refused to issue the patents because the Patent Act refers to inventors as "individuals," which the Patent Office interpreted to mean natural persons.

The Federal Circuit affirmed. Writing for the court, Judge Stark held that the plain language of the Patent Act requires inventors to be natural persons. "There is no ambiguity: the Patent Act requires that inventors must be natural persons; that is, human beings."

Why it matters for agentic law

Thaler establishes that, at least under current statutory frameworks, AI systems cannot hold legal rights as "individuals." This has direct implications for agentic liability: if an AI agent cannot hold rights, it also cannot bear legal responsibility. Liability must therefore fall on the humans and entities in the agent stack—operators, providers, and users.

Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018)

While not an AI case, Naruto is frequently cited in AI discussions. A crested macaque named Naruto took a series of selfie photographs. PETA sued on Naruto's behalf, claiming copyright ownership. The Ninth Circuit held that animals lack statutory standing to bring copyright claims.

The relevance to AI is analogical: if a non-human entity cannot hold copyright, the same logic may apply to AI-generated works. The Copyright Office has since issued guidance consistent with this position, refusing to register works generated entirely by AI without human authorship.

What happens when humans use AI irresponsibly?

Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. June 22, 2023)

This case became the most widely discussed example of AI misuse in legal practice. Attorney Steven Schwartz used ChatGPT to research a legal brief and submitted the brief to the court without verifying the AI's output. The brief cited six cases that did not exist—they were fabrications generated by ChatGPT.

Judge P. Kevin Castel sanctioned Schwartz, his colleague Peter LoDuca, and their firm, imposing a $5,000 penalty. The court's opinion was direct: the attorneys had "abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and fake internal citations" without performing basic verification. The sanctions were imposed for the failure to verify, not for the use of AI research tools.

The court was careful to note that it was not prohibiting the use of AI in legal research. The problem was the failure to verify. This distinction—between the use of AI and the failure to oversee AI—is central to the developing framework for operator liability.

Why it matters for agentic law

Mata establishes that the human who deploys or relies on an AI system has an independent obligation to verify the system's outputs. This principle will be foundational in operator liability cases: the operator cannot disclaim responsibility by pointing to the AI's autonomy.

Who is responsible for AI-generated content?

Moffatt v. Air Canada (BC Civil Resolution Tribunal, 2024)

As discussed in our lead analysis, Air Canada was held liable for its chatbot's incorrect representations about bereavement fare policy. The tribunal rejected Air Canada's argument that the chatbot was "a separate legal entity." The deployer bore the consequences of its AI system's outputs.

In the Matter of DoNotPay, Inc. (FTC, 2024)

The FTC's settlement with DoNotPay addressed an AI system marketed as "the world's first robot lawyer." The FTC alleged that DoNotPay had not tested whether its AI could actually perform the legal tasks it claimed to handle. The company settled for $193,000 and agreed to refrain from making unsubstantiated claims about its AI's legal capabilities.

The case signals that regulators will hold AI deployers accountable for the gap between their marketing claims and their systems' actual capabilities—particularly in professional domains like law and medicine.

How do platforms interact with AI?

Gonzalez v. Google LLC, 598 U.S. 617 (2023)

The plaintiffs in Gonzalez argued that YouTube's algorithmic recommendation of ISIS recruitment videos fell outside Section 230 immunity because the algorithm "created" content by selecting and promoting it. The Supreme Court declined to resolve the Section 230 question, instead vacating the Ninth Circuit's decision and remanding in light of its ruling the same day in Twitter v. Taamneh.

The case is significant for what the Court did not do: it did not narrow Section 230 immunity for algorithmic content curation. This suggests that platforms using AI for content recommendation retain broad immunity—at least for now.

Twitter, Inc. v. Taamneh, 598 U.S. 471 (2023)

In the companion case to Gonzalez, the Court held that Twitter's mere provision of a platform—even with algorithms that passively amplified ISIS content—did not constitute "aiding and abetting" under the Anti-Terrorism Act. The Court emphasized that liability for aiding and abetting requires a more active role than passive facilitation.

For tool providers, Taamneh provides some comfort: merely providing a platform or tool that an agent uses for harmful purposes may not, standing alone, create liability. But the Court's analysis focused on passive facilitation; more active involvement could yield a different result.

CFTC v. Ooki DAO, No. 3:22-cv-5416 (N.D. Cal. 2022)

This case broke new ground in a different way. The CFTC brought an enforcement action against Ooki DAO, a decentralized autonomous organization. Because the DAO had no legal representative, the CFTC served process through the DAO's online chat bot—and the court allowed it.

The case is relevant to agentic liability because it raises the question of how legal process interacts with autonomous systems. If a DAO can be served through its chat bot, what are the implications for AI agents that interact with the legal system?

The patterns

Several patterns emerge from this survey:

  1. AI is not a legal person. Courts consistently refuse to grant AI systems legal personhood, standing, or rights. Liability falls on the humans and entities behind the AI.
  2. Deployers are responsible. Whether through respondeat superior, consumer protection, or professional responsibility, courts and regulators hold the deployer accountable for AI outputs.
  3. The duty to verify is paramount. Mata and the DoNotPay settlement both emphasize that the human deployer must verify AI outputs. The use of AI does not relieve the human of independent professional obligations.
  4. Passive facilitation has limits. Taamneh suggests that mere provision of tools or platforms is not, by itself, sufficient for liability. But this may change if the tool provider has knowledge of and involvement in the harmful use.
  5. The frameworks are being built case by case. There is no comprehensive statute or precedent governing agentic liability. The law is being assembled from pieces of agency law, products liability, consumer protection, professional responsibility, and platform immunity—and it will take years to fully develop.

This article is for informational purposes only and does not constitute legal advice. For advice specific to your situation, please contact us.

Siegel Bebeni LLP

A California limited liability partnership, d/b/a Attorney for Agents, counseling clients on the law of autonomous actors, AI agent liability, and emerging technology regulation.

© 2026 Siegel Bebeni LLP. All rights reserved.

Attorney Advertising

Prior results do not guarantee a similar outcome.