Watch out AI fans - cybercriminals are using jailbroken Mistral and Grok tools to build powerful new malware
Criminals bypassed guardrails to create malicious content

- AI tools are more popular than ever - but so are the security risks
- Top tools are being leveraged by cybercriminals with malicious intent
- Grok and Mixtral were both found being used by crimianls
New research has warned top AI tools are powering 'WormGPT' variants, malicious GenAI tools which are generating malicious code, social engineering attacks, and even providing hacking tutorials.
With Large Language Models (LLMs) now widely used alongside tools like Mistral AI’s Mixtral and xAI's Grok, experts from Cato CTRL found this isn't always in the way they’re intended to be used.
“The emergence of WormGPT spurred the development and promotion of other uncensored LLMs, indicating a growing market for such tools within cybercrime. FraudGPT (also known as FraudBot) quickly rose as a prominent alternative and advertised with a broader array of malicious capabilities,” the researchers noted.
Save up to 68% on identity theft protection for TechRadar readers!
TechRadar editors praise Aura's upfront pricing and simplicity. Aura also includes a password manager, VPN, and antivirus to make its security solution an even more compelling deal.
Preferred partner (What does this mean?)
WormGPT
WormGPT is a broader name for ‘uncensored’ LLMs that are leveraged by threat actors, and the researchers identified different strains with different capabilities and purposes.
For example, keanu-WormGPT, an uncensored assistant was able to create phishing emails when prompted. When researchers dug further, the LLM disclosed it was powered by Grok, but the platform's security features had been circumnavigated.
After this was revealed, the creator then added prompt-based guardrails to ensure this information was not disclosed to users, but other WormGPT variants were found to be based on Mixtral AI, so legitimate LLMs are clearly being jailbroken and leveraged by hackers.
“Beyond malicious LLMs, the trend of threat actors attempting to jailbreak legitimate LLMs like ChatGPT and Google Bard / Gemini to circumvent their safety measures also gained traction," the researchers noted.
Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed!
"Furthermore, there are indications that threat actors are actively recruiting AI experts to develop their own custom uncensored LLMs tailored to specific needs and attack vectors.“
Most in the cybersecurity field will be familiar with the idea that AI is ‘lowering the barriers of entry’ for cybercriminals, which can certainly be seen here.
If all it takes is asking a pre-existing chatbot a few well-phrased questions, then it’s pretty safe to assume that cybercrime might become a lot more common in the coming months and years.
You might also like
- Take a look at our picks for the best malware removal software around
- Check out our choice for the best AI tools
- Identity fraud attacks using AI are fooling biometric security systems

Ellen has been writing for almost four years, with a focus on post-COVID policy whilst studying for BA Politics and International Relations at the University of Cardiff, followed by an MA in Political Communication. Before joining TechRadar Pro as a Junior Writer, she worked for Future Publishing’s MVC content team, working with merchants and retailers to upload content.
You must confirm your public display name before commenting
Please logout and then login again, you will then be prompted to enter your display name.