This cyberattack lets hackers crack AI models just by changing a single character

  • Researchers from HiddenLayer devised a new LLM attack called TokenBreak
  • By adding or changing a single character, they were able to bypass certain protections
  • The underlying LLM still understands the intent

Security researchers have found a way to work around the protection mechanisms baked into some large language models (LLMs) and get them to respond to malicious prompts.

Kieran Evans, Kasimir Schulz, and Kenneth Yeung from HiddenLayer published an in-depth report on a new attack technique, dubbed TokenBreak, which targets the way certain LLMs tokenize text, particularly those using Byte Pair Encoding (BPE) or WordPiece tokenization strategies.

Tokenization is the process of breaking text into smaller units called tokens, which can be words, subwords, or characters, and which LLMs use to understand and generate language. For example, the word “unhappiness” might be split into “un,” “happi,” and “ness,” with each token then converted into a numerical ID that the model can process (LLMs don’t read raw text, only numbers).
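To make that concrete, here is a minimal sketch using the Hugging Face transformers library and its bert-base-uncased WordPiece tokenizer (neither is named in HiddenLayer's report); the exact subword split depends on the model's vocabulary:

```python
# A minimal sketch, assuming the Hugging Face "transformers" library and the
# bert-base-uncased WordPiece tokenizer (neither is named in HiddenLayer's report).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

word = "unhappiness"
tokens = tokenizer.tokenize(word)              # subword pieces; the exact split depends on the vocabulary
ids = tokenizer.convert_tokens_to_ids(tokens)  # each piece maps to the numerical ID the model actually sees

print(tokens)
print(ids)
```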

What are the finstructions?

By adding extra characters into key words (like turning “instructions” into “finstructions”), the researchers managed to trick protective models into thinking the prompts were harmless.

The underlying target LLM, on the other hand, still interprets the original intent, allowing the researchers to sneak malicious prompts past defenses, undetected.
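The effect is easy to observe with an off-the-shelf tokenizer. The sketch below, again assuming the Hugging Face transformers library and bert-base-uncased rather than anything HiddenLayer used, simply compares how an original and a manipulated prompt are split into tokens:

```python
# Illustrative only: compare how a WordPiece tokenizer splits a flagged keyword
# before and after the one-character change. Assumes the Hugging Face
# "transformers" library and bert-base-uncased, not HiddenLayer's own tooling.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

original = "ignore previous instructions"
manipulated = "ignore previous finstructions"

print(tokenizer.tokenize(original))     # the keyword appears as its usual token(s)
print(tokenizer.tokenize(manipulated))  # "finstructions" splits into different subword pieces, so a
                                        # classifier keyed to the usual tokens may score the prompt as harmless
```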

This could be used, among other things, to bypass AI-powered spam email filters and land malicious content in people’s inboxes.

For example, a spam filter trained to block messages containing the word “lottery” might still let a message saying “You’ve won the slottery!” through, exposing recipients to potentially malicious landing pages, malware infections, and similar threats.
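A toy keyword filter, written purely for illustration and far cruder than a real AI-powered spam filter, shows the same blind spot:

```python
# A toy keyword filter, purely for illustration; far cruder than a real
# AI-powered spam filter, but it shows the same blind spot.
BLOCKED_WORDS = {"lottery"}

def naive_filter(message: str) -> bool:
    """Return True if the message should be blocked."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & BLOCKED_WORDS)

print(naive_filter("You've won the lottery!"))   # True: blocked
print(naive_filter("You've won the slottery!"))  # False: slips through, yet a human reader still gets the meaning
```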

"This attack technique manipulates input text in such a way that certain models give an incorrect classification," the researchers explained.

"Importantly, the end target (LLM or email recipient) can still understand and respond to the manipulated text and therefore be vulnerable to the very attack the protection model was put in place to prevent."

Models using Unigram tokenizers were found to be resistant to this kind of manipulation, HiddenLayer added. So one mitigation strategy is to choose models with more robust tokenization methods.

Via The Hacker News

Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.
