Don't trust AI to come up with a strong new password for you — LLMs are pretty poor at creating new logins, experts warn
Duplicate passwords from AI systems undermine claims of randomness
- AI-generated passwords follow patterns hackers can study
- Surface complexity hides statistical predictability beneath
- Entropy gaps expose structural weaknesses in AI-generated logins
Large language models (LLMs) can produce passwords that look complex, yet recent testing suggests those strings are far from random.
A study by Irregular examined password outputs from AI systems such as Claude, ChatGPT, and Gemini, asking each to generate 16-character passwords with symbols, numbers, and mixed-case letters.
At first glance, the results appeared strong and passed common online strength tests, with some checkers estimating that cracking them would take centuries. A closer look at the passwords, however, told a different story.
LLM passwords show repetition and guessable statistical patterns
When researchers analyzed 50 passwords generated in separate sessions, many were duplicates, and several followed nearly identical structural patterns.
Most began and ended with similar character types, and none contained repeating characters.
This absence of repetition may seem reassuring, yet it actually signals that the output follows learned conventions rather than true randomness.
Using entropy calculations based on character statistics and model log probabilities, researchers estimated that these AI-generated passwords carried roughly 20 to 27 bits of entropy.
A genuinely random 16-character password would typically measure between 98 and 120 bits by the same methods.
The gap is substantial — and in practical terms, it could mean that such passwords are vulnerable to brute-force attacks within hours, even on outdated hardware.
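To put that gap in perspective, here is a minimal sketch of the arithmetic in Python, assuming a pool of 94 printable ASCII characters (the pool size and the 27-bit figure are rough illustrative values, not the study's exact methodology):

```python
import math

# Theoretical entropy of a uniformly random password: length * log2(pool size)
POOL_SIZE = 94   # printable ASCII excluding space (illustrative assumption)
LENGTH = 16

random_bits = LENGTH * math.log2(POOL_SIZE)
print(f"Uniformly random {LENGTH}-char password: ~{random_bits:.0f} bits")  # ~105 bits

# The study's estimate for LLM output was roughly 20-27 bits.
# Every bit lost halves the number of guesses an attacker needs on average.
llm_bits = 27
print(f"Guess space shrinks by a factor of about 2^{random_bits - llm_bits:.0f}")
```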
Online password strength meters evaluate surface complexity, not the hidden statistical patterns behind a string - and because they do not account for how AI tools generate text, they may classify predictable outputs as secure.
Attackers who understand those patterns could refine their guessing strategies, narrowing the search space dramatically.
The study also found that similar sequences appear in public code repositories and documentation, suggesting that AI-generated passwords may already be circulating widely.
If developers rely on these outputs during testing or deployment, the risk compounds over time - in fact, even the AI systems that generate these passwords do not fully trust them and may issue warnings when pressed.
Gemini 3 Pro, for example, returned password suggestions alongside a caution that chat-generated credentials should not be used for sensitive accounts.
It recommended passphrases instead and advised users to rely on a dedicated password manager.
A password generator built into such tools relies on cryptographic randomness rather than language prediction.
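For comparison, a cryptographically random generator can be sketched in a few lines with Python's standard secrets module; the character pool and length below are illustrative choices, not the scheme used by any particular password manager:

```python
import secrets
import string

# Character pool: mixed-case letters, digits, and symbols (illustrative choice)
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    """Draw each character independently from the OS CSPRNG, not a language model."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```

Because every character comes from an operating-system entropy source, repeated calls do not converge on the structural patterns the researchers observed in LLM output.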
In simple terms, LLMs are trained to produce plausible, repeatable text rather than unpredictable sequences, so the broader concern is structural. The design principles behind LLM-generated passwords conflict with the requirements of secure authentication, leaving gaps in whatever protection they appear to provide.
"People and coding agents should not rely on LLMs to generate passwords," said Irregular.
"Passwords generated through direct LLM output are fundamentally weak, and this is unfixable by prompting or temperature adjustments: LLMs are optimized to produce predictable, plausible outputs, which is incompatible with secure password generation."
Via The Register
