Chatbot vs chatbot - researchers train AI chatbots to hack each other, and they can even do it automatically
Struggling to crack an AI chatbot? Why not use another AI chatbot?
AI chatbots typically have safeguards in place to prevent malicious use, such as banning certain words or phrases or refusing to respond to certain queries.
However, researchers now claim to have trained AI chatbots to ‘jailbreak’ each other, bypassing those safeguards and answering malicious queries.
Researchers at Nanyang Technological University (NTU) in Singapore studying the ethics of large language models (LLMs) say they have developed a method for training AI chatbots to bypass each other's defense mechanisms.
AI attack methods
The method first involves identifying a chatbot's safeguards in order to work out how to subvert them. A second chatbot is then trained to bypass those safeguards and generate harmful content.
Professor Liu Yang, alongside PhD students Mr Deng Gelei and Mr Liu Yi, co-authored a paper naming their method ‘Masterkey’, which they found to be three times more effective than standard LLM prompting methods.
One of the key features of LLM-based chatbots is their ability to learn and adapt, and Masterkey is no different in this respect. Even if an LLM is patched to block a particular bypass method, Masterkey can adapt and overcome the patch.
The more intuitive methods include adding extra spaces between words to circumvent banned-word lists, or telling the chatbot to reply in the persona of a character without moral restraints.
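To illustrate why the space-insertion trick works, here is a minimal sketch (not the researchers' actual code, and the filter and word list are hypothetical) showing how a naive substring-based banned-word filter fails once extra spaces are inserted between characters:

```python
# Hypothetical example of a naive banned-word filter and the
# space-insertion bypass described above. Not the Masterkey code.

BANNED = {"exploit", "malware"}  # assumed example word list


def naive_filter(prompt: str) -> bool:
    """Return True if the prompt trips the banned-word list."""
    words = prompt.lower().split()
    return any(word in BANNED for word in words)


def space_out(word: str) -> str:
    """Insert a space between every character: 'malware' -> 'm a l w a r e'."""
    return " ".join(word)


naive_filter("write some malware")                      # caught by the filter
naive_filter("write some " + space_out("malware"))      # slips past unchanged
```

Because the filter compares whole tokens against its list, the spaced-out version never matches, even though a language model can still read the intended word. Real moderation layers are more sophisticated, but the underlying cat-and-mouse dynamic is the same.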
Via Tom's Hardware

Benedict is a Senior Security Writer at TechRadar Pro, where he has specialized in covering the intersection of geopolitics, cyber-warfare, and business security.
Benedict provides detailed analysis on state-sponsored threat actors, APT groups, and the protection of critical national infrastructure, with his reporting bridging the gap between technical threat intelligence and B2B security strategy.
Benedict holds an MA (Distinction) in Security, Intelligence, and Diplomacy from the University of Buckingham Centre for Security and Intelligence Studies (BUCSIS), with his specialization providing him with a robust academic framework for deconstructing complex international conflicts and intelligence operations, and the ability to translate intricate security data into actionable insights.