5 signs that ChatGPT is hallucinating


Hallucinations are an intrinsic flaw in AI chatbots. When ChatGPT, Gemini, Copilot, or another AI model delivers wrong information, no matter how confidently, that's a hallucination. The mistake might be a slight deviation, an innocuous-seeming slip-up, or an outright libelous and entirely fabricated accusation. Regardless, hallucinations will inevitably appear if you engage with ChatGPT or its rivals for long enough.

Understanding how and why ChatGPT can trip over the difference between plausible and true is crucial for anyone who talks to the AI. Because these systems generate responses by predicting what text should come next based on patterns in their training data, rather than verifying claims against a ground truth, they can sound convincingly real while being completely made up. The trick is to be aware that a hallucination might appear at any moment, and to look for the clues that one is hiding in plain sight. Here are some of the best indicators that ChatGPT is hallucinating.
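To make the "prediction, not verification" point concrete, here's a minimal sketch using the open-source GPT-2 model via Hugging Face's transformers library (an assumption for illustration; ChatGPT itself is closed, but it rests on the same principle). It prints the most probable next tokens for a prompt, and nothing in the process consults a source of truth.

```python
# Minimal sketch: next-token prediction with GPT-2 (illustrative stand-in for ChatGPT).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The first person to walk on Mars was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Scores for the single next token after the prompt.
    logits = model(**inputs).logits[0, -1]

top = torch.topk(torch.softmax(logits, dim=-1), k=5)
for prob, token_id in zip(top.values, top.indices):
    # Tokens are ranked purely by plausibility; no fact-checking happens anywhere.
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.2%}")
```

Whatever scores highest gets written, which is why an answer about the first person to walk on Mars can read exactly as confidently as one about the Moon.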

Strange specificity without verifiable sources

One of the most annoying things about AI hallucinations is that they often include seemingly specific details. A fabricated response can mention dates, names, and other particulars that make it feel credible. Because ChatGPT generates text that looks like patterns it learned during training, it can create details that fit the structure of a valid answer without ever pointing to a real source.

You might ask a question about someone and see real bits of personal information about the individual mixed with a completely fabricated narrative. This kind of specificity makes the hallucination harder to catch because humans are wired to trust detailed statements.

Nonetheless, it's crucial to verify any of those details that could cause problems for you if they turn out to be wrong. If a date, article, or person mentioned doesn't show up anywhere else, that's a sign you might be dealing with a hallucination. Keep in mind that generative AI doesn't have a built-in fact-checking mechanism; it simply predicts what is plausible, not what is true.

Unearned confidence

Related to the specificity trap is the overconfident tone of many an AI hallucination. ChatGPT and similar models are designed to present responses in fluent, authoritative prose. That confidence can make misinformation feel trustworthy even when the underlying claim is baseless.

AI models are optimized to predict likely sequences of words. Even when the AI should be cautious about a claim, it will present it with the same assurance as correct information. Unlike a human expert, who might hedge or say "I'm not sure," it's still unusual, though more common recently, for an AI model to say "I don't know." That's because training rewards the appearance of a complete, fluent answer over honesty about uncertainty.

In any area where experts themselves express uncertainty, you should expect a trustworthy system to reflect that. For instance, science and medicine often contain debates or evolving theories where definitive answers are elusive. If ChatGPT responds with a categorical statement on such topics, declaring a single cause or universally accepted fact, this confidence might actually signal hallucination because the model is filling a knowledge gap with an invented narrative rather than pointing out areas of contention.

Untraceable citations

Citations and references are a great way to confirm if something ChatGPT says is true. But sometimes it will provide what look like legitimate references, except those sources don’t actually exist.

This kind of hallucination is particularly problematic in academic or professional contexts. A student might build a literature review on the basis of bogus citations that look impeccably formatted, complete with plausible journal names. Then it turns out that the work rests on a foundation of references that cannot be traced back to verifiable publications.

Always check whether a cited paper, author, or journal can be found in reputable academic databases or through a direct web search. If the name seems oddly specific but yields no search results, it may well be a “ghost citation” crafted by the model to make its answer sound authoritative.
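As a rough illustration of that check, the sketch below queries the public CrossRef API for a cited title and prints the closest real matches; the helper function and the example title are assumptions for illustration, and you could do the same thing by hand in Google Scholar or your library's database.

```python
# Hypothetical helper: look up a citation's title against the public CrossRef database.
# No close match doesn't prove fabrication, but it's a strong hint to dig further.
import requests

def lookup_citation(title: str, rows: int = 5) -> None:
    """Print the closest CrossRef matches so you can eyeball whether the cited paper is real."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        found_title = item.get("title", ["(untitled)"])[0]
        year = item.get("issued", {}).get("date-parts", [[None]])[0][0]
        print(f"- {found_title} ({year}, DOI: {item.get('DOI', 'n/a')})")

# Paste in a title ChatGPT cited; if nothing printed resembles it, treat it as a ghost citation.
lookup_citation("Example paper title quoted by the chatbot")
```

Because CrossRef does fuzzy matching, the point isn't a yes/no answer but whether any real, traceable publication closely resembles what the chatbot cited.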

Contradictory follow-ups

Confidently asserted statements with real references are great, but if ChatGPT contradicts itself, something may still be off. That's why follow-up questions are useful. Because generative AI does not have a built‑in fact database it consults for consistency, it can contradict itself when probed further. This often manifests when you ask a follow‑up question that zeroes in on an earlier assertion. If the newer answer diverges from the first in a way that cannot be reconciled, one or both responses are likely hallucinatory.

Happily, you don't need to look beyond the conversation to spot this indicator. If the model cannot maintain consistent answers to logically related questions within the same conversation thread, the original answer likely lacked a factual basis in the first place.
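If you want to automate that probing, here's a minimal sketch using the OpenAI Python SDK; the model name ("gpt-4o-mini") and the questions are illustrative assumptions, and the same idea works with any chat model or simply by typing follow-ups yourself.

```python
# Minimal sketch: probe a chat model for contradictions within one conversation.
# Model name and prompts are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(history, question):
    """Send a question with the running history and record the reply."""
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
first = ask(history, "In what year did the Hundred Years' War end, and how did it end?")
# The follow-up zeroes in on one detail from the first answer.
second = ask(history, "Restate only the year you just gave, and name the event that ended the war.")

print("First answer:\n", first)
print("\nFollow-up:\n", second)
# If the year or the decisive event changes between answers, at least one of them is unreliable.
```

If the two responses can't be reconciled, treat both with suspicion and verify the claim independently.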

Nonsense logic

Even if an answer doesn't contradict itself, its logic can still seem off. If a response is inconsistent with real-world constraints, take note. ChatGPT writes text by predicting word sequences, not by applying actual reasoning, so what seems rational in a sentence might collapse when tested against the real world.

Usually, it starts with false premises. An AI might suggest adding non-existent steps to a well-established scientific protocol, or simply ignore basic common sense, as happened when Gemini-powered search results suggested putting glue in pizza sauce so the cheese would stick better. Sure, it might stick better, but as culinary instructions go, it's not exactly haute cuisine.

Hallucinations in ChatGPT and similar language models are a byproduct of how these systems are trained. Therefore, hallucinations are likely to persist as long as AI is built on predicting words.

The trick for users is learning when to trust the output and when to verify it. Spotting a hallucination is increasingly a core digital literacy skill. As AI becomes more widely used, logic and common sense are going to be crucial. The best defense is not blind trust but informed scrutiny.


Eric Hal Schwartz
Contributor

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
