5 things I wish ChatGPT could do but can’t (yet) – with AI, sometimes less is more
ChatGPT is clever but not perfect. Here’s what I still wish it could do

Love it or hate it, there’s no denying ChatGPT can do a lot. It can summarize meetings, help you write a blog post, explain quantum physics like you’re five, craft the perfect break-up text, help you navigate important life decisions (though we’re not sure how we feel about that), and much, much more. But it’s far from perfect. And there are still plenty of things it can’t do that we really, really wish it could.
Some of these are small improvements I hope will arrive soon. Others are bigger, bolder, and more wish-list than expected roadmap. But all of them would do more than make ChatGPT more helpful: they’d change the way we relate to it, trust it, and use it. So here are five features I’d love to see in the next generation of AI tools – and why they matter.
1. Say “I don’t know” and actually mean it
One of the biggest issues with ChatGPT is that it sometimes just… makes things up, a phenomenon known as hallucination.
At its core, ChatGPT is a prediction machine. It doesn’t know things the way a person does. Instead, it generates answers based on patterns in data. And crucially, it rarely admits when it doesn’t have an answer. Instead, it presses on confidently, even when it’s wrong. The result is that it produces bold, convincing, completely inaccurate responses that can be tricky to spot if you’re not fact-checking every line.
What I’d love is a bit more humility. A mode where ChatGPT just says, “I’m not sure about this.” Or, “I don’t know, but here’s my best guess.” Honesty over false confidence. Real transparency, even if that means admitting it doesn’t have all of the answers.
Of course, if I’m dreaming, I’d love an AI tool that just gets it right all of the time. But while that’s still out of reach, a little more self-awareness would go a long way.
2. Disclose its energy use
AI uses energy, a lot of it. But most people don’t give a second thought to what it costs, environmentally speaking, to fire off a dozen prompts, rewrite the same email five times, or generate an 800-word blog post in a matter of seconds.
And I don’t blame them. AI companies rarely talk about energy consumption. There’s no pop-up reminder to make you stop and think. But imagine if there was. It could be something simple, like a little counter next to each output that said: “This conversation has already used enough energy to power a lightbulb for 15 minutes. Please consider stopping.”
Would it make people stop using it altogether? Probably not. But it might encourage more of us to pause and consider what’s happening behind the screen. To treat AI less like a bottomless, magical resource and more like the energy-intensive system it really is.
Because if we’re going to use these tools more and more, we need to start reckoning with the environmental cost, too.
3. Introduce you to real humans
After your 16th query about burnout, speculative design, or how to start a creative business, ChatGPT could chime in with something a little different: “Would you like to connect with a real person who understands this better than I ever could?”
It already has a deep sense of what you care about – maybe even a better sense than most apps or platforms. So why not use that insight to point you toward something… real? Like a coach, a community group, a local event, or even just a newsletter or podcast. Not necessarily as a way of pushing you off the platform, but as a bridge. A nudge toward deeper, more human expertise and connection.
It wouldn’t have to be about stopping AI use completely, but about recognizing its limits, especially when it comes to topics that touch on emotion, purpose, or real-life transformation. AI could become a connector. A smart tool that knows when to step back and hand you off to someone who gets it on a human level.
4. Notice when you’re spiraling
Not everyone has relied on ChatGPT a little too much recently. But I’d bet a lot of people reading this have.
Imagine it’s 3 am, you’re 47 prompts deep, and you’ve just asked ChatGPT to name your next business and fix your existential crisis. In moments like this, wouldn’t it be better if it stepped in gently and said: “Hey, it looks like you’ve been prompting non-stop for 20 minutes. Want to stretch or take a breath instead?”
Because sometimes what we need isn’t another rewrite, another idea, or another AI-generated life plan. Sometimes we just need a reminder that we have a body and that it might be asking for attention, too.
5. Help you stop using it
Sure, it might sound counterintuitive. But sometimes, the most helpful thing ChatGPT could do is tell you to stop.
Imagine you’ve just asked it to plan your day, name your new course, rewrite your Instagram caption, explain a philosophy concept, and decide what you should eat for dinner. At some point, it could gently say: “You’ve asked me for a lot. Want to try the next thing yourself and see how it feels?”
Okay, we might need to brainstorm the best way to say that without sounding incredibly patronizing. But the point still stands. Many of us are realizing that AI can quickly become a crutch. Especially when we’re overwhelmed, procrastinating, or stuck in decision fatigue. In those moments, a little nudge to step away and re-engage with the world (or, you know, actually do the thing yourself) might be the most supportive response of all.
Bonus points if it’s bold enough to call us out on our procrastination, too: “You’ve asked for six productivity tips in 30 minutes. Should we actually… start the work now?” Harsh? Yes. Necessary? Absolutely.