Last August, a junior civil judge in the southern Indian city of Vijayawada was resolving a property dispute, one of the thousands of such cases that move through India’s trial courts every year. To support her decision, the judge needed legal precedent. Using an AI tool, she found what appeared to be four pertinent prior rulings and cited them in her order. All four cases were invented. No record of any of them exists in India’s legal system. They were, in language that has grown uncomfortably familiar, hallucinations: confident-sounding fabrications produced by a system that has no mechanism for knowing what it doesn’t know.
When the defendants contested the order, the state high court recognized that the citations were fabricated, yet it upheld the decision anyway, reasoning that the underlying legal principles were sound and that the judge had acted in good faith. That choice to accept the error rather than fully confront it pushed the case up to India’s Supreme Court, which was far less inclined to overlook the issue. The Supreme Court stayed the lower court’s order and called the incident a matter of “institutional concern” with “a direct bearing on integrity of adjudicatory process.” Relying on AI-generated judgments, the court said, was not a simple lapse in judgment. It was misconduct.
| Aspect | Details |
|---|---|
| Issue | AI-generated fake case citations used in legal proceedings |
| Key Incident — India | Junior civil judge in Vijayawada, Andhra Pradesh cited 4 non-existent AI-generated judgments in a property dispute ruling (August 2025) |
| India Court Response | Supreme Court of India called it an act of “misconduct,” not merely a decision-making error |
| Judge’s Explanation | First time using an AI tool; believed citations were genuine; no intent to misrepresent |
| High Court Stance | Accepted the good-faith error but still upheld the original ruling, drawing further criticism |
| Supreme Court Action | Stayed the lower court’s order; issued notices to Attorney General, Solicitor General, and Bar Council of India |
| Key US Case (Georgia) | Appeals Judge Jeff Watkins vacated a divorce ruling partly based on fictitious AI-generated case citations |
| Lawyer Sanctioned | Diana Lynch, fined $2,500; cited 11 additional hallucinated or irrelevant cases on appeal |
| Expert Warning | It is “frighteningly likely” that many US courts will overlook AI errors, especially in overwhelmed lower courts |
| States with AI Judicial Guidelines | As of mid-2025, only two US states had moved to require judges to develop AI competency |
| Supreme Court Quote | “Exercise of actual intelligence over artificial intelligence” |
| Broader Concern | AI hallucinations have influenced judicial decisions in at least two documented federal instances |
What makes this story hard to ignore is that it is playing out simultaneously in several countries with very different legal systems and the same basic pattern. Last year, Jeff Watkins, an appeals judge in the US state of Georgia, vacated a divorce ruling after discovering that it rested in part on case citations generated by AI, which Watkins described as possible “hallucinations” produced by generative AI. Diana Lynch, the attorney responsible, was sanctioned $2,500.
Responding to the opposing side’s appeal, she cited eleven more cases that were either hallucinated or entirely irrelevant to the issue at hand, one of them in support of a request for legal fees. That, Watkins wrote, added “insult to injury.” Lynch’s website went offline soon after the story drew attention, and she did not respond to media inquiries.

Precedent is the foundation of the legal profession. A lawyer cites a case to show that what they are asking for has already been granted by some court, somewhere, at some point, under similar circumstances. That system works only if the cases are real. When an AI tool fills the citation slot with convincing-sounding but wholly fabricated rulings, the argument’s foundation crumbles; yet if no one checks, the argument may still prevail. That is the part that should make everyone following these cases uncomfortable. Experts are not exaggerating when they warn that it is “frighteningly likely” many courts will overlook AI errors: lower courts are overburdened, dockets are long, and judges reviewing AI-drafted filings do not always have the time or technical expertise to spot the problems.
How many cases have already been affected without anyone noticing is unknowable. In the United States, there are two publicly acknowledged instances of AI hallucinations influencing federal court rulings. The true number is almost certainly higher; the cases that escaped detection are, by definition, uncounted.
The real problem is not the individual judges or attorneys who made mistakes. It is the structural gap between how quickly AI tools have entered professional practice and how slowly the institutions that depend on those professionals have adapted to verify their output. The courtroom, of all places, rests on the premise that facts can be checked. That premise is now under subtle but real pressure.
