AI-generated passwords aren't as secure as they appear
AI isn't random
There's a habit spreading across the internet, and it looks reasonable on the surface.
Someone needs a new password, doesn't want to use their dog's name again or bother with a password manager, and figures: why not ask ChatGPT?
A second later, they've got something like T#9vLmq$2Rk! staring back at them.
Looks strong. Feels sorted. They paste it in and move on.
CEO of Passpack.
But it’s not as strong as it looks. And research backs that up.
AI isn't random. It just looks that way
A study last year tested 1,000 passwords generated by leading AI models, including ChatGPT, DeepSeek, and Llama, and the results were sobering.
Some 88% of the passwords from DeepSeek and 87% from Llama failed to withstand attack. ChatGPT performed better, but nearly a third of its passwords could still be cracked in under an hour.
The same architecture that makes AI useful is what makes it unsuitable here.
AI language models work by predicting what comes next based on patterns in their training data. That's what makes them so helpful for writing, summarizing, translating, or any other task where pattern recognition is the whole point.
But generating a truly random credential requires something AI can't do: producing output that has no relationship to anything that came before it.
What you get instead is the appearance of randomness. The output looks chaotic, but at a statistical level, it clusters. Character placement, length preferences, the ratio of symbols to letters. These tendencies are baked in. And modern cracking tools are specifically designed to exploit exactly this kind of regularity.
There's another dimension to this that gets overlooked. If you and a colleague independently ask ChatGPT to generate a strong password today, the results won't be identical, but they will likely share structural fingerprints.
The pool of genuinely distinct outputs is smaller than most might assume. Scale that across millions of people making the same request, and the "uniqueness" of your AI-generated password starts to look a lot less unique.
What happens to the prompt itself
Output quality is only half the problem. The other half is what you're handing over just by asking.
On the free, consumer-facing tiers of most major AI platforms, prompts can be used as training data for future model versions. That's standard practice, and it's disclosed in the terms of service most people don't read.
Essentially, the context of your conversation – what you asked for, what service it was for, anything else you said in that session – may not remain private.
This is a different risk profile from enterprise or business-tier access, where data handling terms are typically more restrictive. But for the average person using ChatGPT on their phone to sort out a banking app password? It's worth knowing.
The broader point is that the moment a password – even a freshly generated one – enters a public AI conversation, you're in a different security posture than you were before you opened that tab. It's not necessarily a breach. But it is a security event, and most people don't think of it that way.
What to use instead
The fix isn't complicated. Credentials should be generated by tools built specifically for that purpose – password managers have existed for years and solve this precisely.
The core requirement is cryptographically secure randomization: outputs that have no statistical relationship to each other and no pattern for an attacker to latch onto.
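To make the difference concrete, here's a minimal sketch of what purpose-built generation looks like, using Python's standard `secrets` module, which draws from the operating system's cryptographically secure random source. The length and character set below are arbitrary illustrative choices, not a security recommendation:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a password using a cryptographically secure RNG.

    secrets.choice() is backed by the OS entropy source, so each
    character is drawn independently and successive outputs have no
    statistical relationship to one another -- unlike the
    pattern-driven output of a language model.
    """
    # Illustrative character set: letters, digits, and punctuation.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

A password manager does essentially this (plus encrypted storage), which is why it remains the right tool for the job.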
Storage matters as much as generation. Unless you delete your AI chat logs, every password an LLM has generated for you remains discoverable by anyone who gains access to your account.
And because ChatGPT, Claude, and most other major LLMs keep browser sessions signed in (you don't need to log back in after closing the tab, unlike, say, a banking site), that adds a significant vulnerability.
The usual objection is convenience. AI tools are already open, already familiar. The tension between security and ease is as old as the industry. The question is whether the friction you're avoiding is the kind that was actually protecting you.
The smarter default
AI is a capable tool. It's just not the right one for this job. Pattern recognition is what makes it useful for writing and research; it's also exactly what makes it unsuitable for generating credentials that need to be genuinely unpredictable. Use a password manager for passwords. Use AI for everything else.
Most cybersecurity failures don't come down to exotic attacks or sophisticated exploits. They come down to small, everyday habits and decisions that accumulate into either a safe security posture or a vulnerable one. Knowing which tool to reach for, and why, is where good security starts.
We've rated the best business password manager.
This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.
The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/pro/perspectives-how-to-submit