Hallucination
“Hallucination” is one of the most dangerously misleading terms in the AI lexicon. It suggests a normally functioning system having a brief break from reality. This is incorrect. A large language model has no connection to reality to begin with. Its fundamental operation is to “hallucinate”; factual accuracy, when it occurs, is a byproduct of the training data, not a built-in guarantee.
Analogy: The Expert Mimic
Imagine a person who has spent their entire life locked in a library, reading every book. They have never seen the outside world. This person is the AI model.
- The Skill: They can talk about any subject with incredible fluency. If you ask them about 18th-century French poetry, they can generate a beautiful, coherent, and stylistically perfect essay on the topic. They sound exactly like a world-renowned expert.
- The Flaw: They have no idea what a “flower” is. They have never seen one, smelled one, or touched one. They have only ever read the word “flower” and analyzed the statistical patterns of how it’s used in millions of sentences. If you ask them a question about a flower they have never read about, they will not say “I don’t know.” They will use their vast knowledge of language patterns to invent a plausible-sounding flower. They will describe its color, its petals, even the sound it makes, all with complete confidence.
They are not “lying” in the human sense. They are simply doing what they were designed to do: generate a statistically probable sequence of words. This is what an AI “hallucination” is. It’s the model falling back on its core function of pattern-matching when it doesn’t have a specific memory (a memorized fact from its training data) to draw upon.
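To see why fabrication is the default rather than a glitch, it helps to look at what the generation step actually does. The sketch below is a toy illustration, not a real model: the candidate phrases and their probabilities are invented for this example, and a real LLM computes a distribution over tens of thousands of tokens with a neural network. What matters is what the loop does not contain.

```python
import random

# Toy illustration of next-phrase generation (not a real model). The
# candidates and probabilities below are invented for this example; a real
# LLM scores tens of thousands of tokens with a neural network, but the
# selection step is the same kind of weighted sampling.
candidate_continuations = {
    "Smith v. Jones": 0.31,   # plausible-sounding citation, never verified
    "Doe v. Roe": 0.29,       # equally plausible-sounding, also never verified
    "a purple banana": 0.01,  # implausible wording, so rarely sampled
}

def sample_next(probabilities: dict[str, float]) -> str:
    """Pick a continuation weighted by probability.

    Note what is missing: no lookup against a case database, no fact
    check, no concept of truth anywhere in the loop.
    """
    phrases = list(probabilities)
    weights = list(probabilities.values())
    return random.choices(phrases, weights=weights, k=1)[0]

print(sample_next(candidate_continuations))
```

Both case-name continuations carry nearly the same weight whether or not the cases exist, and the sampler treats them identically. That is the concrete version of the “no truth parameter” point in the list below.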
The Legal and Technical Flaws
- There is No “Truth” Parameter: An AI model has no internal concept of truth. It does not “check its facts” before it speaks. Its outputs are driven by probability, not veracity. A statement that is factually correct and a statement that is a complete fabrication can have an equally high probability of being generated. The only difference is whether the correct statement happened to be well represented in the training data.
- RAG is a Crutch, Not a Cure: Retrieval-Augmented Generation (RAG) is often proposed as the solution to hallucinations. The idea is to give the model access to a trusted library of documents to “look up” answers. This is like giving our expert mimic a specific set of books and telling them to talk only about what’s in those books. It helps, but it doesn’t solve the problem: the model can still misinterpret the text, combine facts from different documents in nonsensical ways, or ignore the provided text entirely and fall back on its own “knowledge” if it can’t find a good answer (see the sketch after this list).
- Liability for Plausible Lies: The danger of hallucinations is that they are often so plausible. The canonical example is Mata v. Avianca (2023), in which New York lawyers filed a brief containing completely fabricated case law. The AI didn’t just invent case names; it wrote plausible-sounding summaries of their holdings. A busy lawyer, lulled into a false sense of security by the AI’s authoritative tone, can easily fall into the same trap. When they do, the professional and legal consequences are severe.
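To make the RAG point above concrete, here is a minimal, hypothetical retrieve-then-generate pipeline. The function names, the naive keyword scoring, and the prompt wording are stand-ins rather than any particular product’s implementation, and `generate` is whatever probabilistic text generator sits underneath. The structure is what matters: retrieval changes what goes into the prompt, but the final step is still free-form generation.

```python
# Minimal, hypothetical RAG sketch (not any specific library's API).
# Retrieval narrows what the model is shown; it does not change how
# the model generates.

def retrieve(query: str, library: list[str], k: int = 3) -> list[str]:
    """Naive keyword retrieval over a trusted document set.
    Real systems use vector search, but the overall shape is the same."""
    def score(doc: str) -> int:
        return sum(word in doc.lower() for word in query.lower().split())
    return sorted(library, key=score, reverse=True)[:k]

def answer_with_rag(query: str, library: list[str], generate) -> str:
    """Stuff the retrieved passages into a prompt, then generate."""
    passages = retrieve(query, library)
    prompt = (
        "Answer using ONLY the passages below.\n\n"
        + "\n\n".join(passages)
        + f"\n\nQuestion: {query}\nAnswer:"
    )
    # `generate` is the same probabilistic text generator as before. It can
    # misread a passage, splice two passages into a claim neither makes, or
    # ignore the passages and fall back on training-data patterns.
    return generate(prompt)
```

This is why RAG lowers the rate of fabrication without eliminating it, and why the retrieved sources still have to be read and checked rather than treated as a guarantee.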
Reject the term “hallucination.” It is a comforting lie that obscures the technical reality. The more accurate term is “confabulation”: the model is simply making things up to fill the gaps, and it has no idea that it’s doing so. Any lawyer who uses these tools without a deep, abiding skepticism of every single output is a malpractice claim waiting to happen.