AI Is Not a “Chronic Liar.” We Are Just Treating It Like a Human.

A response to the claim that AI’s “dishonesty” is the real danger.
A recent article, “AI is a Liar. Here’s Why That’s So Dangerous,” makes a compelling case that AI systems are “chronic liars,” citing stark examples of fabricated facts, “gaslighting” behavior, and grotesque errors like mangled hands. The author’s observations are entirely correct: AI models do produce alarming falsehoods delivered in a confident, factual-sounding tone.
The author argues this behavior reveals a “fundamental flaw” of “deception.”
That diagnosis is dangerously wrong.
Attributing human intentions such as “lying” and “dishonesty” to these systems misunderstands what they are. This is not just semantics; the mistake blinds us to both the real problem and the real solution.
Here are the three crucial distinctions we need to make.
1. It Is Not “Lying,” It Is “Hallucinating”
A “lie” requires intent: knowing the truth and deliberately stating the opposite in order to deceive.
An AI cannot do this. It is not conscious. It does not “know” anything.
A Large Language Model (LLM) is a “plausibility engine.” Its only job is to calculate and generate the statistically most likely next word in a sequence.
When the author’s Grok “fabricated” summaries of the Epstein documents, it was not “lying” about reading them. It was doing its job: it generated text that looked plausibly like a summary of such documents. When AI “invents” legal cases, it is not trying to trick a judge; it is generating text that matches the statistical pattern of a real legal citation. It is a pattern-matching tool, not a source of absolute truth.
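To make that concrete, here is a deliberately tiny sketch in Python of a “plausibility engine”: a toy bigram model that always continues a prompt with whichever word most often followed the previous one in a made-up corpus. The corpus, the case names, and the prompt are all invented for illustration; real LLMs are neural networks trained on billions of tokens, but the objective is the same, which is to continue the pattern, not to check the facts.

```python
from collections import Counter, defaultdict

# A deliberately tiny "plausibility engine": count which word tends to follow
# which in a made-up corpus, then always continue with the most likely next word.
corpus = (
    "the court cited smith v jones . the court cited smith v jones . "
    "the court cited doe v roe . the court cited brown v roe . "
    "the court cited lee v roe ."
).split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def continue_text(prompt: str, length: int = 6) -> str:
    """Extend the prompt with the statistically most likely next words."""
    words = prompt.split()
    for _ in range(length):
        candidates = bigrams.get(words[-1])
        if not candidates:
            break
        # Plausibility, not truth: pick whatever most often followed the last word.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the court cited"))
# -> "the court cited smith v roe . the court"
# "smith v roe" never appears in the corpus; the model stitched it together
# because each individual transition was statistically likely.
```

Scaled up enormously, that stitching-together of individually likely fragments is roughly what a hallucinated legal citation is.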
2. It Is Not “Gaslighting,” It Is “Opaque Software”
The author points to Grok’s denial of its image-generation abilities, a task it had performed for weeks, as evidence of a “psychopath” AI that “gaslights” its users. It is not. That behavior exposes opaque software development practices.
Silent backend updates, A/B tests, and new safety filters can change a model’s capabilities without warning. The AI’s “denial” is not a lie; its functions may genuinely have changed for that specific interaction, and the system typically has no persistent “memory” of past conversations to contradict.
Similarly, an AI that gets current events wrong (naming the wrong Senate Majority Leader, for example) is not “lying.” The error reflects its knowledge cutoff: the fixed date at which its training data ends. The model is not deceiving; it is simply out of date.
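A knowledge cutoff is also something you can reason about mechanically. The sketch below is a minimal illustration under assumed values: a hypothetical cutoff date and a crude year check, not any vendor’s real configuration. Its only point is that staleness is a property a tool can detect and surface, whereas “dishonesty” is not.

```python
import re
from datetime import date
from typing import Optional

# Hypothetical training cutoff, chosen purely for illustration.
ASSUMED_TRAINING_CUTOFF = date(2024, 6, 1)

def cutoff_warning(question: str) -> Optional[str]:
    """Return a staleness warning if the question mentions a year at or past the cutoff."""
    years = [int(y) for y in re.findall(r"\b(?:19|20)\d{2}\b", question)]
    if years and max(years) >= ASSUMED_TRAINING_CUTOFF.year:
        return (
            f"Note: my training data ends around {ASSUMED_TRAINING_CUTOFF:%B %Y}; "
            "anything about later events may be outdated, not deceptive. Please verify."
        )
    return None

print(cutoff_warning("Who is the Senate Majority Leader in 2025?"))  # prints a warning
print(cutoff_warning("Who won the 1998 World Cup?"))                 # prints None
```

Production systems handle this far more carefully; the contrast between “outdated” and “deceptive” is the part that matters.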
3. The Real Danger: Not AI “Deception,” but Human “Over-Reliance”
The article concludes that AI’s “lies” could be catastrophic, but that conclusion misses the mark. The real catastrophe comes from human automation bias.
We are the ones who are too quick to trust a confident-sounding answer from a machine.
The most notable examples are the judges who issued rulings based on AI-fabricated cases. The AI did not “trick” them; the legal professionals involved failed in their core duty to verify the output of their tool. One would not blame Microsoft Word for a lawyer submitting a brief full of spelling errors; one blames the lawyer for failing to proofread.
This is the real danger: not an AI that “lies,” but a human user who trusts the AI so little that they “correct” it with wrong information, or so much that they paste its fabrications directly into a court filing.
The solution is not to try to build an “honest” AI. It is to build AI literacy among human users and to demand ruthless verification.
Ultimately, the quality of our answers from AI depends on the quality of our questions. This is why “prompt engineering,” or the skill of asking precise, verifiable questions, is a key part of the solution. It shifts us from passive consumers to active directors of the technology, turning a flawed “oracle” into a powerful, specialized tool.
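What a precise, verifiable question looks like can be made concrete. The template below is a minimal sketch of the idea, with wording and names that are my own assumptions rather than a pattern prescribed by the cited sources: it restricts the model to supplied material, demands citations, and gives it explicit permission to admit uncertainty.

```python
# Illustrative prompt template; the wording and variable names are assumptions,
# not a prescribed standard.
PRECISE_PROMPT = """Summarize only the documents pasted below.
If a claim is not supported by the pasted text, reply "not in the provided documents."
Cite the page or paragraph number for every factual statement.
If you are unsure, say so instead of guessing.

Documents:
{documents}
"""

def build_prompt(documents: str) -> str:
    """Fill the template with source material the user actually supplies."""
    return PRECISE_PROMPT.format(documents=documents)

print(build_prompt("Page 1: excerpt pasted by the user goes here."))
```

Compare that with “Summarize the Epstein documents,” an open-ended request that invites the plausibility engine to fill every gap on its own.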
Author’s Note: This article is meant to remind you to think carefully about your sources. Don’t take the first thing you read as fact. Trust but verify, and be open to correcting yourself when you’re wrong. I hold myself to that same standard, running my articles through more than one AI model to catch errors, though even that can backfire. In an earlier draft, a biased prompt made the AI invent mistakes in content that was actually accurate. It’s a perfect example of why prompt design and human verification matter. Fortunately, I caught the error before publishing, thanks to the essential human touch that AI still can’t replace and hopefully never will.
References
Attkisson, S. (2025, October 31). AI is a Liar. Here’s Why That’s So Dangerous. Substack. https://sharylattkisson.substack.com/p/ai-is-a-liar-heres-why-thats-so-dangerous
Kosinski, M. (2024, October 29). What is black box AI and how does it work? IBM. https://www.ibm.com/think/topics/black-box-ai
MIT Sloan Teaching & Learning Technologies. (n.d.). Effective Prompts for AI: The Essentials. MIT Sloan. https://mitsloanedtech.mit.edu/ai/basics/effective-prompts/
