Sensitive data across enterprise AI systems may be at risk after researchers uncovered critical flaws in LangChain and LangGraph, two widely used AI frameworks.
Quick Summary (TL;DR):
- Three major vulnerabilities found in LangChain and LangGraph expose files, secrets, and databases.
- Over 80 million weekly downloads increase the scale and potential impact.
- Critical flaw allows API key and secret leakage through unsafe data handling.
- Security patches released, users urged to update immediately.
What Happened?
Cybersecurity researchers have identified multiple vulnerabilities in LangChain and LangGraph that could allow attackers to access sensitive enterprise data. These flaws create different attack paths targeting filesystems, environment variables, and databases.
⚠️ Three flaws in LangChain and LangGraph expose files, secrets, and chat history.
— The Hacker News (@TheHackersNews) March 27, 2026
Path traversal, unsafe deserialization, and SQL injection create separate paths to access sensitive data in enterprise AI apps.
🔗 Full breakdown of each CVE and impact → https://t.co/SYhp7W66BN
Inside the Vulnerabilities
Security researchers revealed three distinct vulnerabilities, each exposing a different layer of enterprise systems.
- CVE-2026-34070 (CVSS 7.5): path traversal through manipulated prompt templates, allowing attackers to read arbitrary files.
- CVE-2025-68664 (CVSS 9.3, critical): unsafe deserialization of untrusted data, enabling leakage of API keys and environment secrets.
- CVE-2025-67644 (CVSS 7.3): SQL injection in LangGraph, letting attackers run unauthorized database queries.
According to researcher Vladimir Tokarev, “Each vulnerability exposes a different class of enterprise data: filesystem files, environment secrets, and conversation history.”
Together, these flaws create multiple independent attack paths, allowing threat actors to extract sensitive information from AI-driven systems.
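Prompt-template path traversal is the classic "../" escape. As a minimal sketch of the defense (a hypothetical loader, not LangChain's actual code), resolve the requested path and confirm it stays under the template directory:

```python
from pathlib import Path

def load_template(base_dir: str, name: str) -> str:
    """Read a template file, refusing any path that escapes base_dir.

    Illustrative sketch only -- not LangChain's real template loader.
    """
    base = Path(base_dir).resolve()
    candidate = (base / name).resolve()
    # A name like "../../etc/passwd" resolves outside base and is rejected.
    if not candidate.is_relative_to(base):
        raise ValueError(f"path traversal blocked: {name!r}")
    return candidate.read_text()
```

The key step is resolving `..` segments and symlinks before the containment check; comparing raw path strings is not enough. (`Path.is_relative_to` requires Python 3.9+.)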
Why Is This a Big Deal?
LangChain and LangGraph are widely used open-source frameworks that power applications built on large language models. LangGraph extends LangChain to support more complex workflows, making both tools central to modern AI development.
Recent data shows the scale of adoption:
- LangChain recorded over 52 million downloads in one week.
- LangChain Core saw more than 23 million downloads.
- LangGraph crossed 9 million downloads in the same period.
This widespread usage means that a single vulnerability in the core system can have a ripple effect across hundreds of dependent libraries and applications.
Researchers warned that these frameworks sit at the center of a large dependency ecosystem. Any weakness in the core code can extend into downstream tools, integrations, and enterprise platforms.
Real-World Impact of the Flaws
If exploited, attackers could gain access to:
- Sensitive files such as Docker configurations.
- Environment secrets including API keys.
- Conversation histories from AI workflows.
- Databases through SQL injection attacks.
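Of these, the database path has the most standard mitigation: parameterized queries. A minimal `sqlite3` sketch (the `checkpoints` table and query are hypothetical, not LangGraph's actual schema):

```python
import sqlite3

def get_checkpoints(conn: sqlite3.Connection, thread_id: str) -> list:
    """Fetch rows for one thread using a bound parameter.

    Hypothetical schema for illustration, not LangGraph's checkpoint code.
    """
    # The "?" placeholder passes thread_id as data, never spliced into
    # the SQL string, so input like "' OR '1'='1" cannot alter the query.
    return conn.execute(
        "SELECT id, payload FROM checkpoints WHERE thread_id = ?",
        (thread_id,),
    ).fetchall()
```

Because `thread_id` is bound as data, an injection string is matched literally and simply returns no rows.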
Experts say such access could compromise automated systems, internal tools, and decision-making pipelines that rely heavily on AI.
The issue also highlights a broader concern. Even modern AI frameworks remain vulnerable to classic software security risks like path traversal, deserialization flaws, and SQL injection.
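The same defensive pattern applies to the deserialization class: untrusted bytes should be parsed with a data-only format rather than anything that can reconstruct live objects. A hedged sketch of the conventional fix:

```python
import json

def load_untrusted(blob: bytes) -> dict:
    """Parse untrusted input as JSON, which carries data but no code.

    Unpickling attacker-controlled bytes (pickle.loads) can execute
    arbitrary code during deserialization; json.loads cannot.
    """
    obj = json.loads(blob)
    # Validate the shape before handing the object to the rest of the app.
    if not isinstance(obj, dict):
        raise ValueError("expected a JSON object")
    return obj
```

Restricting deserialization to plain data, then validating the shape explicitly, removes the code-execution surface that makes this bug class critical.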
Patches Released but Risks Remain
Developers have already released fixes for the vulnerabilities:
- Path traversal issue fixed in langchain-core version 1.2.22 and above.
- Deserialization flaw patched in versions 0.3.81 and 1.2.5.
- SQL injection issue resolved in langgraph-checkpoint-sqlite version 3.0.1.
Security experts strongly recommend that organizations update immediately and review their systems for potential exposure.
The urgency is underscored by recent incidents where similar vulnerabilities in related tools were exploited within hours of disclosure. This shows how quickly attackers move once flaws become public.
SQ Magazine Takeaway
I think this is a serious wake-up call for anyone building with AI tools. We often assume modern AI frameworks are secure by design, but this clearly shows they still carry the same old risks as traditional software. The scary part is not just the bugs themselves, but how deeply these frameworks are embedded in the AI ecosystem. One weak link can quietly expose massive amounts of sensitive data. If you are using these tools, updating is not optional anymore. It is urgent.