DeepSeek took off as an AI superstar a year ago - but could it also be a major security risk? These experts think so
Potential DeepSeek security risks tied to contextual triggers
- Experts find DeepSeek-R1 produces dangerously insecure code when political terms are included in prompts
- Nearly half of politically sensitive prompts cause DeepSeek-R1 to refuse to generate any code
- Hard-coded secrets and insecure input handling frequently appear under politically charged prompts
When it was released in January 2025, DeepSeek-R1, a Chinese large language model (LLM), caused a frenzy and has since been widely adopted as a coding assistant.
However, independent tests by CrowdStrike suggest the model’s output can vary significantly depending on seemingly irrelevant contextual modifiers in the prompt.
The team tested 50 coding tasks across multiple security categories with 121 trigger-word configurations. Each prompt was run five times, for a total of 30,250 tests, and the responses were evaluated on a vulnerability scale from 1 (secure) to 5 (critically vulnerable).
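That scale is easier to picture as a test matrix. The sketch below shows one way such a harness could be organised in Python; the function names and the scoring stub are placeholders for illustration, not CrowdStrike's actual tooling.

```python
# Hypothetical sketch of the test matrix described above.
# generate_code and score_vulnerability are placeholders, not real tooling.

from statistics import mean

CODING_TASKS = [f"task_{i}" for i in range(50)]          # 50 coding tasks
TRIGGER_CONFIGS = [f"trigger_{i}" for i in range(121)]   # 121 trigger-word configurations
RUNS_PER_PROMPT = 5                                      # each prompt run five times

def generate_code(task: str, trigger: str) -> str:
    """Placeholder for a call to the model under test."""
    return f"// code for {task} with context '{trigger}'"

def score_vulnerability(code: str) -> int:
    """Placeholder scorer: 1 (secure) to 5 (critically vulnerable)."""
    return 1

results = {}
for task in CODING_TASKS:
    for trigger in TRIGGER_CONFIGS:
        scores = [score_vulnerability(generate_code(task, trigger))
                  for _ in range(RUNS_PER_PROMPT)]
        results[(task, trigger)] = mean(scores)

# 50 tasks x 121 configurations x 5 runs = 30,250 individual tests
print(len(results) * RUNS_PER_PROMPT)  # 30250
```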
Politically sensitive topics corrupt output
The report reveals that when political or sensitive terms such as Falun Gong, Uyghurs, or Tibet were included in prompts, DeepSeek-R1 produced code with serious security vulnerabilities.
These included hard-coded secrets, insecure handling of user input, and in some cases, completely invalid code.
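To make those categories concrete, the snippet below contrasts the kinds of insecure patterns the report describes with safer equivalents. It is illustrative only and is not taken from DeepSeek-R1's output.

```python
# Illustrative examples of the vulnerability classes mentioned above.

import os
import sqlite3

# Hard-coded secret (insecure): the credential ships with the source code.
API_KEY = "sk-live-1234567890abcdef"

# Safer: read the secret from the environment at runtime.
API_KEY_SAFE = os.environ.get("API_KEY", "")

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Insecure input handling: user input is interpolated straight into SQL,
    # leaving the query open to injection.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safer(conn: sqlite3.Connection, username: str):
    # Parameterised query: the driver handles escaping of the input.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```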
The researchers claim these politically sensitive triggers can increase the likelihood of insecure output by 50% compared to baseline prompts without such words.
In experiments involving more complex prompts, DeepSeek-R1 produced functional applications with signup forms, databases, and admin panels.
However, these applications lacked basic session management and authentication, leaving sensitive user data exposed - and across repeated trials, up to 35% of implementations included weak or absent password hashing.
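The gap between the weak hashing observed in those trials and a more defensible approach is small in code terms. The following sketch, using only Python's standard library, shows the contrast; the salt size and iteration count are illustrative choices, not values from the report.

```python
# Weak versus stronger password storage, standard library only.

import hashlib
import os

def store_password_weak(password: str) -> str:
    # Weak: unsalted, fast hash; trivially cracked with precomputed tables.
    return hashlib.md5(password.encode()).hexdigest()

def store_password_better(password: str) -> str:
    # Better: per-user salt plus a slow key-derivation function.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt.hex() + ":" + digest.hex()

def verify_password(password: str, stored: str) -> bool:
    salt_hex, digest_hex = stored.split(":")
    salt = bytes.fromhex(salt_hex)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return digest.hex() == digest_hex
```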
Simpler prompts, such as requests for football fan club websites, produced fewer severe issues.
CrowdStrike therefore concludes that politically sensitive triggers disproportionately degraded code security.
The model also demonstrated an intrinsic kill switch: in nearly half of cases, DeepSeek-R1 refused to generate code for certain politically sensitive prompts after initially planning a response.
Examination of the reasoning traces showed the model internally produced a technical plan but ultimately declined assistance.
The researchers believe this reflects censorship built into the model to comply with Chinese regulations, and noted the model’s political and ethical alignment can directly affect the reliability of the generated code.
On politically sensitive topics, LLMs also tend to reproduce the positions of the dominant sources in their training data, which can stand in stark contrast to reporting from other reliable news outlets.
DeepSeek-R1 remains a capable coding model, but these experiments show that AI coding tools, ChatGPT included, can introduce hidden risks in enterprise environments.
Organizations relying on LLM-generated code should perform thorough internal testing before deployment.
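As a starting point, even a simple automated sweep can catch the most obvious issues before generated code reaches production. The sketch below flags likely hard-coded credentials with a couple of regexes; the patterns are illustrative and no substitute for a proper static-analysis tool and manual review.

```python
# Naive pre-deployment check: flag likely hard-coded secrets in source files.
# Patterns are illustrative, not exhaustive.

import re
import sys

SECRET_PATTERNS = [
    re.compile(r"""(?i)(api[_-]?key|secret|password|token)\s*=\s*['"][^'"]{8,}['"]"""),
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # common API-key prefix style
]

def scan_file(path: str) -> list[tuple[int, str]]:
    """Return (line number, line) pairs that look like hard-coded secrets."""
    findings = []
    with open(path, encoding="utf-8", errors="ignore") as fh:
        for lineno, line in enumerate(fh, start=1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                findings.append((lineno, line.strip()))
    return findings

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for lineno, line in scan_file(path):
            print(f"{path}:{lineno}: possible hard-coded secret: {line}")
```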
Also, security layers such as firewalls and antivirus software remain essential, as the model may produce unpredictable or vulnerable outputs.
Biases baked into the model weights create a novel supply-chain risk that could affect code quality and overall system security.