'Each vulnerability exposes a different class of enterprise data': LangChain framework hit by several worrying security issues — here's what we know
Three bugs, and a whole lot of downstream trouble
- LangChain and LangGraph patch three high-severity flaws exposing files, secrets, and conversation histories
- Vulnerabilities included path traversal, deserialization leaks, and SQL injection in SQLite checkpoints
- Researchers warn risks ripple through downstream libraries; developers urged to audit configs and treat LLM outputs as untrusted input
LangChain and LangGraph, two popular open source frameworks for building AI apps, contained high-severity and critical vulnerabilities which allowed threat actors to exfiltrate sensitive data from compromised systems.
LangChain helps developers build apps using large language models (LLMs) by connecting AI models to various data sources and tools. It is a popular tool among developers looking to build chatbots and assistants. LangGraph, on the other hand, is built on top of LangChain and is designed to help create AI agents that follow structured, step-by-step workflows. It uses graphs to control how tasks move between steps, and devs use it for complex, multi-step processes.
Citing stats on the Python Package Index (PyPI), The Hacker News says the projects have more than 60 million combined downloads a week, suggesting they are immensely popular in the software development community.
Vulnerabilities and patches
In total, the projects fixed three vulnerabilities:
CVE-2026-34070 (severity score 7.5/10 - high) - A path traversal bug in LangChain that enables arbitrary file access without validation
CVE-2025-68664 (severity score 9.3/10 - critical) - A deserialization of untrusted data flaw in LangChain that leaks API keys and environment secrets
CVE-2025-67644 (severity score 7.3/10 - high) - An SQL injection vulnerability in LangGraph's SQLite checkpoint implementation that enables SQL query manipulation
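Path traversal bugs of the kind described in CVE-2026-34070 typically arise when a user-supplied path is joined to a base directory without validation. Here is a minimal, framework-agnostic Python sketch of the standard mitigation; the function name and structure are illustrative, not LangChain's actual API:

```python
from pathlib import Path

def safe_resolve(base_dir: str, user_path: str) -> Path:
    """Resolve user_path inside base_dir, rejecting traversal attempts."""
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    # A payload like "../../etc/passwd" resolves outside base and is rejected.
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes base directory: {user_path!r}")
    return candidate
```

Note that `Path.is_relative_to` requires Python 3.9 or later; on older interpreters the same check can be done with `os.path.commonpath`.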
"Each vulnerability exposes a different class of enterprise data: filesystem files, environment secrets, and conversation history," security researcher Vladimir Tokarev of Cyera said in a report detailing the flaws.
The Hacker News notes exploiting any of the three flaws allows threat actors to read sensitive files like Docker configurations, exfiltrate secrets through prompt injection, and even access conversation histories associated with sensitive workflows.
All bugs have since been fixed, so if you’re using any of these tools, make sure to upgrade to the latest version to safeguard your projects.
CVE-2026-34070 is fixed by upgrading langchain-core to version 1.2.22 or later
CVE-2025-68664 is fixed by upgrading langchain-core to versions 0.3.81 or 1.2.5
CVE-2025-67644 is fixed by upgrading langgraph-checkpoint-sqlite to version 3.0.1
Foundational plumbing
For Cyera, the findings show that the biggest threat to enterprise AI data might not be as complex as people think.
“In fact, it hides in the invisible, foundational plumbing that connects your AI to your business. This layer is vulnerable to some of the oldest tricks in the hacker playbook,” they said.
They also warned that LangChain “does not exist in isolation” but rather sits “at the center of a massive dependency web that stretches across the AI stack.” With hundreds of libraries wrapping LangChain, extending it, or depending on it, any vulnerability in the project propagates to everything built on top of it.
The bugs “ripple outward through every downstream library, every wrapper, every integration that inherits the vulnerable code path.”
Patching the tools alone will not fully secure an environment, they said. Any code that passes external or user-controlled configurations to load_prompt_from_config() or load_prompt() needs to be audited, and developers should not enable secrets_from_env=True when deserializing untrusted data. “The new default is False. Keep it that way,” they warned.
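The danger behind secrets_from_env is easy to see in miniature. The sketch below is an illustrative toy deserializer, not LangChain's actual implementation: it shows why resolving environment secrets should be an explicit opt-in rather than a default, since an attacker who controls the serialized payload can otherwise name any environment variable and have its value pulled into the reconstructed object.

```python
import json
import os

def load_serialized(payload: str, *, secrets_from_env: bool = False) -> object:
    """Deserialize a JSON payload; resolve {"__secret__": "NAME"} markers
    from the environment only when the caller explicitly opts in."""
    data = json.loads(payload)

    def resolve(node):
        if isinstance(node, dict):
            if set(node) == {"__secret__"}:
                if not secrets_from_env:
                    # Safe default: never pull secrets for untrusted input.
                    raise ValueError("secret resolution disabled for untrusted data")
                return os.environ.get(node["__secret__"], "")
            return {k: resolve(v) for k, v in node.items()}
        if isinstance(node, list):
            return [resolve(v) for v in node]
        return node

    return resolve(data)
```

With the safe default, a malicious payload like `{"api_key": {"__secret__": "OPENAI_API_KEY"}}` raises instead of silently exfiltrating the variable into the deserialized object.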
They also urged the community to treat LLM outputs as “untrusted input”, since different fields can be influenced by prompt injection. Finally, metadata filter keys must be validated before they are passed to checkpoint queries.
“Never allow user-controlled strings to become dictionary keys in filter operations.”
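In practice, that advice amounts to an allowlist check before any filter key reaches SQL text. A hedged sketch of the pattern follows; the column names and function are hypothetical, not LangGraph's actual checkpoint schema:

```python
# Hypothetical allowlist; real column names would come from the schema.
ALLOWED_FILTER_KEYS = frozenset({"thread_id", "checkpoint_ns", "step"})

def build_where_clause(filters: dict) -> tuple[str, list]:
    """Build a parameterized WHERE clause, rejecting unknown keys so
    user-controlled strings never become part of the SQL text."""
    clauses, params = [], []
    for key, value in filters.items():
        if key not in ALLOWED_FILTER_KEYS:
            raise ValueError(f"disallowed filter key: {key!r}")
        clauses.append(f"{key} = ?")  # key is from the allowlist; value is bound
        params.append(value)
    return " AND ".join(clauses), params
```

Values still travel as bound parameters (`?` placeholders), so only the vetted key names ever appear in the query string.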

Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.