A critical security flaw was discovered in LangChain, one of the most widely used AI frameworks in the world, exposing millions of applications to the risk of credential theft and malicious code injection. The vulnerability allows attackers to exploit LangChain's core serialization logic to extract environment variables and execute unauthorized actions, putting API keys, credentials, and sensitive data at risk across the entire AI ecosystem.

The attack vector centers on LangChain's serialization functions, which fail to adequately handle user-controlled data. The dumps() and dumpd() functions do not escape dictionaries containing 'lc' keys, the internal markers LangChain uses to identify its own serialized objects. When malicious actors inject data containing these special keys, the system treats it as legitimate LangChain content during deserialization rather than as untrusted user input. The vulnerability was discovered by a security specialist at Cyata during AI trust boundary audits.
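The confusion can be illustrated with a minimal, self-contained sketch. This is not LangChain's actual code; the function names and payload shape are simplified stand-ins that show why an unescaped 'lc' key lets user data masquerade as a framework-serialized object:

```python
import json

def dumps_unescaped(obj):
    # Hypothetical stand-in for dumps(): user dicts are embedded verbatim,
    # so a user-supplied {"lc": 1, ...} is indistinguishable from a
    # framework-produced serialized object.
    return json.dumps(obj)

def naive_loads(blob):
    data = json.loads(blob)
    if isinstance(data, dict) and "lc" in data:
        # Treated as trusted LangChain content: the attacker now controls
        # which class gets instantiated and with what arguments.
        return ("INSTANTIATE", data.get("id"), data.get("kwargs"))
    return data

# Attacker-controlled input smuggled through a prompt or message field:
payload = '{"lc": 1, "id": ["langchain_core", "SomeClass"], "kwargs": {}}'
print(naive_loads(payload))
# -> ('INSTANTIATE', ['langchain_core', 'SomeClass'], {})
```

A safe serializer would escape or reject the 'lc' key in user data so that round-tripped user input can never be mistaken for framework state.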

The flaw enables multiple devastating attack paths, including extraction of secrets from environment variables when deserialization is executed with 'secrets_from_env=True', instantiation of classes within trusted namespaces such as langchain_core and langchain_community, and potential arbitrary code execution through Jinja2 templates. Attackers can craft prompts to instantiate allowed classes, triggering SSRF attacks with environment variables embedded in headers for data exfiltration. Since this affects common flows such as event streaming, logging, and caching, virtually any LangChain application processing untrusted data could be compromised.
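The secrets-extraction path can be sketched as follows. The names here are illustrative, not LangChain's real internals; the point is that with secrets_from_env enabled, a deserialized payload gets to name which environment variables are resolved into its own fields:

```python
import os

def resolve_secrets(payload, secrets_from_env=True):
    # Illustrative loader step: map fields of the reconstructed object to
    # values pulled from the process environment.
    resolved = {}
    for field, env_name in payload.get("lc_secrets", {}).items():
        if secrets_from_env and env_name in os.environ:
            # Attacker-chosen env var names leak into object fields, e.g.
            # into the HTTP headers of a client, enabling SSRF exfiltration.
            resolved[field] = os.environ[env_name]
    return resolved

os.environ["OPENAI_API_KEY"] = "sk-demo-not-real"
malicious = {"lc_secrets": {"header_value": "OPENAI_API_KEY"}}
print(resolve_secrets(malicious))                          # secret leaks
print(resolve_secrets(malicious, secrets_from_env=False))  # patched default: {}
```

This is why the patch's change of the default to False matters: the same payload resolves to nothing unless the application explicitly opts in.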

LangChain responded quickly to the disclosure, releasing patches that fundamentally change how the framework handles serialization security. The fixes include new allowlist parameters in the load() and loads() functions that specify which classes may be deserialized, Jinja2 templates blocked by default, and the dangerous 'secrets_from_env' option now defaulting to 'False' to prevent automatic loading of secrets from environment variables. Vulnerable versions span langchain-core 1.x releases at or above 1.0.0 and below 1.2.5, as well as the 0.3 line below 0.3.81; fixes ship in versions 1.2.5 and 0.3.81. The issue was originally reported via Huntr on December 4, 2024; LangChain acknowledged it the following day and published the advisory on December 24. A parallel vulnerability also hit LangChainJS, tracked as CVE-2025-68665, demonstrating that this serialization injection issue affects the entire LangChain ecosystem. Cybersecurity experts are issuing urgent alerts for developers to update langchain-core immediately and to verify that dependencies such as langchain-community have also been updated.
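The allowlist approach behind the patch can be sketched in a few lines. The parameter names below are illustrative assumptions, not the actual load()/loads() signatures; consult the langchain-core 0.3.81 / 1.2.5 release notes for the real API:

```python
def safe_load(data, allowed_ids=frozenset()):
    # Illustrative deserializer: only class identifiers explicitly listed
    # in the allowlist may be instantiated; everything else is rejected.
    if isinstance(data, dict) and "lc" in data:
        class_id = tuple(data.get("id", []))
        if class_id not in allowed_ids:
            raise ValueError(f"class {class_id} not in allowlist")
        return ("INSTANTIATE", class_id)
    return data

allowed = frozenset({("langchain_core", "prompts", "PromptTemplate")})

# A payload naming an unexpected class is rejected instead of instantiated:
try:
    safe_load({"lc": 1, "id": ["langchain_community", "EvilClass"]}, allowed)
except ValueError as err:
    print(err)
```

Denying by default and enumerating the handful of classes an application actually deserializes closes the trusted-namespace instantiation path described above.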

This post was translated and summarized from its original version using AI, with human review.

With information from Cybersecurity News