
Not even fairy tales are safe - researchers weaponise bedtime stories to jailbreak AI chatbots and create malware
Cato CTRL researchers jailbroke LLMs and produced working malware despite having no prior malware coding experience.