Integration · framework

Cypherz for LangChain

LangChain orchestrates calls across many models, retrievers, and tools, which makes it easy for sensitive data to leak through an intermediate step. Configure Cypherz once at the LLM client layer and every downstream node in your chain runs against tokenized values.
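To make that concrete, here is a minimal sketch of the round trip the proxy performs: PII is swapped for tokens before the request leaves your infrastructure, and tokens are swapped back on the response. The `tokenize`/`restore` helpers and the `tok_<n>` format are illustrative assumptions, not the Cypherz SDK.

```typescript
// Illustrative only: these helpers stand in for what the Cypherz proxy does
// internally; the `tok_<n>` format and regexes are assumptions, not the SDK.
const vault = new Map<string, string>(); // token -> original value

function tokenize(text: string): string {
  // Swap email addresses for tokens (one PII class, for brevity).
  return text.replace(/[\w.+-]+@[\w-]+\.\w+/g, (pii) => {
    const token = `tok_${vault.size}`;
    vault.set(token, pii);
    return token;
  });
}

function restore(text: string): string {
  return text.replace(/tok_\d+/g, (token) => vault.get(token) ?? token);
}

const safe = tokenize("Email jane@example.com about the invoice");
// The model only ever sees: "Email tok_0 about the invoice"
const restored = restore("Done. I emailed tok_0.");
// -> "Done. I emailed jane@example.com."
```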

  • 01

    Drop in at the LLM layer

    Wrap your `ChatOpenAI` / `ChatAnthropic` constructors with the Cypherz client — every node downstream is tokenized.

  • 02

    Works with agents and tools

    Tool calls that emit PII back into the chain stay tokenized; restoration happens at the boundary.

  • 03

    Retriever-safe

    Vector stores and retrievers can index tokenized chunks alongside the originals.
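The retriever point relies on tokens being stable: the same value always maps to the same token, so tokenized chunks remain joinable in an index. A minimal sketch, assuming a hypothetical `stableTokenize` helper (not a Cypherz API):

```typescript
// Hypothetical sketch: stable tokenization keeps tokenized chunks joinable.
// `stableTokenize` and the `tok_<n>` format are illustrative, not the SDK.
const byValue = new Map<string, string>();

function stableTokenize(text: string): string {
  return text.replace(/[\w.+-]+@[\w-]+\.\w+/g, (pii) => {
    // Same value -> same token, every time it appears.
    if (!byValue.has(pii)) byValue.set(pii, `tok_${byValue.size}`);
    return byValue.get(pii)!;
  });
}

const chunkA = stableTokenize("Ticket opened by alice@acme.io");
const chunkB = stableTokenize("Refund approved for alice@acme.io");
// Both chunks carry the same token, so a retriever can still group
// them around the same (tokenized) entity without seeing raw PII.
```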

Configure once, every chain inherits PII protection

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  apiKey: process.env.CYPHERZ_KEY,
  configuration: {
    baseURL: "https://api.cypherz.app/v1/proxy/openai/v1",
  },
  model: "gpt-4o",
});

// Every chain, agent, and tool that uses `llm` is now PII-protected.

Common questions

Frequently asked.

Is Cypherz officially supported by LangChain?

Cypherz is a transparent proxy that speaks the same provider APIs LangChain already targets, so no endorsement or special support from LangChain is required. Your existing code and API key work unchanged through the proxy.

Will my latency get worse?

Tokenization adds 5-30 ms per request depending on payload size. The LLM call itself dominates (hundreds of milliseconds), so the user-perceived difference is negligible. Run Cypherz in the same region as your AI provider to minimize the extra network hop.
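A back-of-envelope check of that claim, using the worst-case 30 ms overhead and an assumed 400 ms LLM round trip:

```typescript
// Numbers from the answer above; the LLM round-trip figure is an assumption.
const tokenizationMs = 30; // worst-case Cypherz overhead per request
const llmCallMs = 400;     // typical LLM round trip (assumed)

const overheadPct = (tokenizationMs / (tokenizationMs + llmCallMs)) * 100;
// ~6.98% of total latency at the worst case; under 2% at the 5 ms best case.
```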

Does streaming work?

Yes — Cypherz proxies streaming responses and restores tokens chunk-by-chunk on the way back. Works with SSE and standard chunked transfer.
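Chunk-by-chunk restoration has one subtlety worth illustrating: a token can be split across two streamed chunks, so the restorer must hold back a trailing partial token until the next chunk arrives. This sketch assumes the illustrative `tok_<n>` format; it is not the Cypherz implementation.

```typescript
// Sketch of streaming restoration when tokens may split across chunks.
// The `tok_<n>` format and this buffering logic are illustrative only.
function makeRestorer(vault: Map<string, string>) {
  let carry = ""; // partial token held over from the previous chunk
  return (chunk: string): string => {
    let text = carry + chunk;
    carry = "";
    // Hold back a trailing partial token until the next chunk completes it.
    const partial = text.match(/tok(_\d*)?$/);
    if (partial) {
      carry = partial[0];
      text = text.slice(0, -carry.length);
    }
    // A real implementation would also flush `carry` at end of stream.
    return text.replace(/tok_\d+/g, (t) => vault.get(t) ?? t);
  };
}

const vault = new Map([["tok_0", "jane@example.com"]]);
const restore = makeRestorer(vault);
const out = ["Reply sent to tok", "_0 done"].map(restore).join("");
// -> "Reply sent to jane@example.com done"
```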

Can I bring my own provider API key?

Yes — paste your key when you create a project; Cypherz encrypts it under the per-project vault key. You can also use managed mode where Cypherz provisions and bills the upstream key.

Get started

Add Cypherz to your LangChain integration in 60 seconds.

Sign up, create a project, and copy your API key; your first request can be tokenized in under a minute.