The judicial system is beginning to confront AI in earnest. The cases decided so far do not address agentic liability directly—no court has yet ruled on the liability of an operator whose AI agent autonomously caused harm in a complex multi-party system. But the decisions that do exist reveal how courts are thinking about AI, and those patterns are instructive for predicting how agentic liability will be analyzed when it arrives.
What follows is a survey of the most significant judicial decisions involving AI systems, organized by the legal question each case addresses.
Stephen Thaler developed an AI system called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) and filed patent applications listing DABUS as the sole inventor. The USPTO refused to issue the patents because the Patent Act refers to inventors as "individuals," which the Patent Office interpreted to mean natural persons.
Thaler challenged the refusal in federal district court and lost; on appeal, the Federal Circuit affirmed in Thaler v. Vidal (2022). Writing for the court, Judge Stark held that the plain language of the Patent Act requires inventors to be natural persons: "There is no ambiguity: the Patent Act requires that inventors must be natural persons; that is, human beings."
Thaler establishes that, at least under current statutory frameworks, AI systems cannot hold legal rights as "individuals." This has direct implications for agentic liability: if an AI agent cannot hold rights, it also cannot bear legal responsibility. Liability must therefore fall on the humans and entities in the agent stack—operators, providers, and users.
While not an AI case, Naruto v. Slater is frequently cited in AI discussions. A crested macaque named Naruto took a series of selfie photographs using a wildlife photographer's camera. PETA sued the photographer, David Slater, on Naruto's behalf, claiming the monkey owned the copyright in the images. The Ninth Circuit held that animals lack statutory standing to bring copyright claims.
The relevance to AI is analogical: if a non-human entity cannot hold copyright, the same logic may apply to AI-generated works. The Copyright Office has since issued guidance consistent with this position, refusing to register works generated entirely by AI without human authorship.
Mata v. Avianca became the most widely discussed example of AI misuse in legal practice. Attorney Steven Schwartz used ChatGPT to research a legal brief and submitted the brief to the court without verifying the AI's output. The brief cited six cases that did not exist; they were fabrications generated by ChatGPT.
Judge P. Kevin Castel sanctioned Schwartz, his colleague Peter LoDuca, and their law firm, imposing a $5,000 penalty. The court's opinion was direct: the attorneys had "abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and fake internal citations" without performing basic verification.
The court was careful to note that it was not prohibiting the use of AI in legal research. The problem was the failure to verify. This distinction—between the use of AI and the failure to oversee AI—is central to the developing framework for operator liability.
Mata establishes that the human who deploys or relies on an AI system has an independent obligation to verify the system's outputs. This principle will be foundational in operator liability cases: the operator cannot disclaim responsibility by pointing to the AI's autonomy.
As discussed in our lead analysis, Air Canada was held liable for its chatbot's incorrect representations about bereavement fare policy. In Moffatt v. Air Canada, British Columbia's Civil Resolution Tribunal rejected Air Canada's argument that the chatbot was "a separate legal entity." The deployer bore the consequences of its AI system's outputs.
The FTC's settlement with DoNotPay addressed an AI system marketed as "the world's first robot lawyer." The FTC alleged that DoNotPay had not tested whether its AI could actually perform the legal tasks it claimed to handle. The company settled for $193,000 and agreed to refrain from making unsubstantiated claims about its AI's legal capabilities.
The case signals that regulators will hold AI deployers accountable for the gap between their marketing claims and their systems' actual capabilities—particularly in professional domains like law and medicine.
The plaintiffs in Gonzalez v. Google argued that YouTube's algorithmic recommendation of ISIS recruitment videos fell outside Section 230 immunity because the algorithm "created" content by selecting and promoting it. The Supreme Court vacated the Ninth Circuit's decision and remanded in light of its ruling in Taamneh, declining to resolve the Section 230 question.
The case is significant for what the Court did not do: it did not narrow Section 230 immunity for algorithmic content curation. This suggests that platforms using AI for content recommendation retain broad immunity—at least for now.
In Twitter v. Taamneh, the companion case to Gonzalez, the Court held that Twitter's mere provision of a platform, even with algorithms that passively amplified ISIS content, did not constitute "aiding and abetting" under the Anti-Terrorism Act. The Court emphasized that aiding-and-abetting liability requires a more active role than passive facilitation.
For tool providers, Taamneh provides some comfort: merely providing a platform or tool that an agent uses for harmful purposes may not, standing alone, create liability. But the Court's analysis focused on passive facilitation; more active involvement could yield a different result.
CFTC v. Ooki DAO broke new ground in a different way. The CFTC brought an enforcement action against Ooki DAO, a decentralized autonomous organization. Because the DAO had no legal representative to accept service, the CFTC served process through the help chat box on the DAO's website, and the court allowed it.
The case is relevant to agentic liability because it raises the question of how legal process interacts with autonomous systems. If a DAO can be served through its website's chat interface, what are the implications for AI agents that interact with the legal system?
Several patterns emerge from this survey: