AI can write emails and summarize meetings, but here’s what it still can’t do in 2026
It’s important we get clear on AI’s limitations
Go on X or LinkedIn for five minutes and you’ll find plenty of people talking about what AI can do. It can summarize meeting notes, write code, turn a photo of you into a caricature, or give your emails a more assertive vibe. Those are just a few examples I saw in LinkedIn posts earlier today.
But for all the things AI can do, there are still plenty it can’t. In fact, some limitations trip up the most popular AI tools time and time again. I’m not dunking on the technology here (I do sometimes, but that’s not what this is). I think it’s good to talk about what AI can’t do so we’re clear on its boundaries.
When people are new to AI tools, or dazzled by the hype, they can easily misinterpret what these systems actually are and what they’re capable of. That’s how we end up with reports filled with made-up statistics. Of course, different AI tools have different strengths. But here are some common things your favorite AI tool might still struggle with in 2026 and, importantly, why those struggles still exist.
1. Admit it doesn’t know something
This is the most important one — AI tools can hallucinate, which is the industry term for when they make things up.
What’s crucial to understand is that this isn’t a bug that’s going to be fixed in a future update. Instead, it’s at the core of how a lot of LLMs (large language models), like ChatGPT and Claude, work.
Despite how it may seem, they’re not retrieving facts from a big store of information. Instead, they’re predicting the next word based on patterns learned from churning over huge amounts of training data.
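If you like seeing the idea in code, here’s a toy sketch in Python. The words and probabilities are invented purely for illustration (a real model scores tens of thousands of possible tokens), but the principle is the same: the system samples whatever sounds likely, and there is no fact-lookup step anywhere in the process.

# Toy illustration, not a real model: an LLM assigns probabilities to
# possible next tokens and samples from them. No database of facts is
# consulted at any point. The numbers below are invented.
import random

# Hypothetical probabilities a model might assign after the prompt
# "The capital of Australia is"
next_token_probs = {
    "Canberra": 0.55,
    "Sydney": 0.30,    # fluent-sounding but wrong continuations still carry weight
    "Melbourne": 0.10,
    "a": 0.05,
}

tokens, weights = zip(*next_token_probs.items())
print(random.choices(tokens, weights=weights, k=1)[0])

Run it a few times and you’ll occasionally get “Sydney”, which is the whole problem in miniature: a confident, plausible answer that happens to be wrong.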
Hallucination can look like your favorite AI tool confidently stating incorrect information, inventing citations or blending some real sources with made-up ones.
What makes this worse is the confidence. These systems are designed to produce fluent language that sounds authoritative and we’re wired to trust authority. That makes it easy to overlook errors if we’re not careful.
That’s why it’s essential to fact-check anything an AI tool tells you. This is good practice for everyday use, but it’s critical when the stakes are high, like they are with legal advice, medical information or financial decisions.
We’ve already seen multiple cases of people being caught out after submitting documents that included fabricated citations or incorrect claims generated by AI.
2. Counting
Have you seen the viral videos of people asking ChatGPT or Grok how many "r"s are in strawberry? If not, I’ll spoil them for you. AI often gets it wrong. ChatGPT has been known to confidently say there are only two, then after some pushing, concede that there are in fact three.
I’ve tested this myself and had mixed results. Sometimes it answers correctly. Sometimes it doesn’t. So what’s going on?
AI tools like ChatGPT, Claude, and Grok don’t process text the way we do. They don’t scan each letter in order. Instead, they break language into “tokens”, which are words or smaller chunks of words the LLM has learned from its training data.
So when it sees “strawberry”, it isn’t counting each letter. It’s predicting a plausible answer based on patterns it has learned before.
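You can see the split for yourself with a tokenizer library. This short Python sketch uses OpenAI’s open-source tiktoken package (you’d need to install it first, and I’m assuming the “cl100k_base” encoding here); exactly how “strawberry” gets chunked depends on the tokenizer, but the contrast with counting letters directly is the point.

# Requires the tiktoken package (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
token_ids = enc.encode("strawberry")

# The model sees numeric token IDs, not individual letters.
print(token_ids)
print([enc.decode([t]) for t in token_ids])

# Counting letters the way ordinary code (or a person) does it:
print("strawberry".count("r"))  # prints 3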
Once you understand how it happens, the simple mistake makes more sense. But we tend to associate fluency with intelligence, so when we know ChatGPT can write an essay in seconds but can’t count letters, it feels jarring.
3. Replacing a therapist
There are conflicting opinions about whether people should rely on AI as a therapy tool. But the broad consensus tends to be: use it cautiously, and only as a supplement to real therapy.
Many people find value in sharing things with their chatbot of choice, especially given how inaccessible traditional therapy can be in many countries. They might ask ChatGPT to help interpret the tone of a text or clarify goals. But beyond that, experts warn it could do more harm than good.
Again, this all comes down to how these tools are designed. They tend to agree, reflect your views back to you and validate your experience. They are structurally optimized to be helpful and agreeable. Even with guardrails in place, they are more likely to affirm than challenge.
But true growth needs friction. It requires someone who can push back, notice blind spots, and establish boundaries. Sure, a small amount of validation can be reassuring. But too much without challenge can subtly distort how you see yourself and the world.
There are practical limits too. An AI system can’t assess risk the way a trained professional can. It can’t intervene in a crisis and it can’t participate in the patient-therapist dynamic that makes therapy effective. It can simulate that relationship, but it misses out on the lived experience, training, professional accountability and duty of care.
4. Understanding lived experience
This one might sound obvious, but stick with me. Acknowledging that AI hasn’t lived and never will is central to understanding what it can’t do.
It doesn’t have a body, memories, a childhood, needs or stakes. That doesn’t matter much if you’re asking it to proofread a technical blog post or generate code.
But rely on it for philosophical debate, therapy or creative work and something important shifts. It’s important to understand that it’s not drawing from a past or from dreams or experiences. It’s drawing from existing material and then recombining it.
Because it hasn’t lived, it has no skin in the game. It can describe ethical frameworks, weigh arguments for and against controversial decisions, and simulate moral reasoning. But it can’t bear consequences or be held accountable the way a person can. So if an AI system causes harm, responsibility lies with the humans who built it, deployed it or use it. The model itself has no awareness or care.
This raises deeper questions about creativity, originality and moral agency. Those debates are ongoing. But for now, it’s enough to recognize that some forms of judgement do rely on experience, vulnerability and a sense of responsibility, and AI doesn’t have those.
5. Updating knowledge in real time
AI tools are trained on vast amounts of data. But that data has cut-off points and those vary depending on the tool. That means a model might not know about recent events, evolving norms, or shifts in language unless you explicitly provide the context or check how up to date its knowledge is.
Sometimes this becomes a problem because older information is delivered with the same confidence as everything else. There’s no built-in signal that says, “This might be out of date”.
This really matters if you’ve started relying on AI as a news source or if you work in journalism, law, policy, or any fast-moving field. It’s increasingly normal for people to lean on these tools for research and summaries too. But you can’t guarantee information will be current.
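If you do lean on an AI tool for anything time-sensitive, the safest habit is to hand it the up-to-date information yourself rather than assuming it already knows. Here’s a rough sketch of what that looks like in Python, with call_model as a hypothetical stand-in for whichever chat API you actually use:

# A minimal sketch of working around a knowledge cut-off: put the recent
# facts into the prompt yourself. call_model is a hypothetical placeholder
# for a real chat API call.
from datetime import date

recent_context = (
    "Context provided by the user, current as of today: "
    "<paste up-to-date facts, figures or article text here>"
)

prompt = (
    f"Today's date is {date.today().isoformat()}.\n"
    f"{recent_context}\n\n"
    "Where the context above conflicts with your training data, trust the "
    "context. Now summarize the current situation."
)

# response = call_model(prompt)  # hypothetical; swap in your provider's API
print(prompt)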
Recognizing the limits of AI
When you first use AI, it can feel intelligent because it tends to handle language well. It can certainly give the impression of reasoning, empathy, creativity and even authority.
But it’s important to remember that underneath, it’s predicting patterns rather than understanding meaning. Recognizing these limits doesn’t diminish what the technology can do. But it’ll help you use it more clearly, more deliberately, and in ways that actually serve your goals.

Becca is a contributor to TechRadar, a freelance journalist and author. She’s been writing about consumer tech and popular science for more than ten years, covering all kinds of topics, including why robots have eyes and whether we’ll experience the overview effect one day. She’s particularly interested in VR/AR, wearables, digital health, space tech and chatting to experts and academics about the future. She’s contributed to TechRadar, T3, Wired, New Scientist, The Guardian, Inverse and many more. Her first book, Screen Time, came out in January 2021 with Bonnier Books. She loves science-fiction, brutalist architecture, and spending too much time floating through space in virtual reality.