How to cache chat model responses
LangChain provides an optional caching layer for chat models. This is useful for two reasons:
- It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times.
- It can speed up your application by reducing the number of API calls you make to the LLM provider.
import { ChatOpenAI } from "@langchain/openai";
// To make the caching really obvious, let's use a slower model.
const model = new ChatOpenAI({
model: "gpt-4",
cache: true,
});
In Memory Cache
The default cache is stored in-memory. This means that if you restart your application, the cache will be cleared.
console.time();
// The first time, it is not yet in cache, so it should take longer
const res = await model.invoke("Tell me a joke!");
console.log(res);
console.timeEnd();
/*
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "Why don't scientists trust atoms?\n\nBecause they make up everything!",
additional_kwargs: { function_call: undefined, tool_calls: undefined }
},
lc_namespace: [ 'langchain_core', 'messages' ],
content: "Why don't scientists trust atoms?\n\nBecause they make up everything!",
name: undefined,
additional_kwargs: { function_call: undefined, tool_calls: undefined }
}
default: 2.224s
*/
console.time();
// The second time it is, so it goes faster
const res2 = await model.invoke("Tell me a joke!");
console.log(res2);
console.timeEnd();
/*
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "Why don't scientists trust atoms?\n\nBecause they make up everything!",
additional_kwargs: { function_call: undefined, tool_calls: undefined }
},
lc_namespace: [ 'langchain_core', 'messages' ],
content: "Why don't scientists trust atoms?\n\nBecause they make up everything!",
name: undefined,
additional_kwargs: { function_call: undefined, tool_calls: undefined }
}
default: 181.98ms
*/
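If you'd rather manage the cache object yourself than rely on the shared instance that cache: true enables, you can construct an in-memory cache explicitly and pass it in. A minimal sketch, assuming the InMemoryCache export from @langchain/core/caches:
import { ChatOpenAI } from "@langchain/openai";
import { InMemoryCache } from "@langchain/core/caches";
// Construct the cache explicitly instead of passing `cache: true`.
const cache = new InMemoryCache();
const model = new ChatOpenAI({
model: "gpt-4",
cache,
});
This behaves like cache: true, but gives you a handle to the cache instance itself.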
Caching with Redis
LangChain also provides a Redis-based cache. This is useful if you want to share the cache across multiple processes or servers.
To use it, you'll need to install the ioredis package:
- npm
- Yarn
- pnpm
npm install ioredis @langchain/community @langchain/core
yarn add ioredis @langchain/community @langchain/core
pnpm add ioredis @langchain/community @langchain/core
Then, you can pass a cache option when you instantiate the LLM. For example:
import { ChatOpenAI } from "@langchain/openai";
import { Redis } from "ioredis";
import { RedisCache } from "@langchain/community/caches/ioredis";
const client = new Redis("redis://localhost:6379");
const cache = new RedisCache(client, {
ttl: 60, // Optional key expiration value
});
const model = new ChatOpenAI({ cache });
const response1 = await model.invoke("Do something random!");
console.log(response1);
/*
AIMessage {
content: "Sure! I'll generate a random number for you: 37",
additional_kwargs: {}
}
*/
const response2 = await model.invoke("Do something random!");
console.log(response2);
/*
AIMessage {
content: "Sure! I'll generate a random number for you: 37",
additional_kwargs: {}
}
*/
await client.disconnect();
API Reference:
- ChatOpenAI from @langchain/openai
- RedisCache from @langchain/community/caches/ioredis
Caching with Upstash Redis
LangChain provides an Upstash Redis-based cache. Like the Redis-based cache, this cache is useful if you want to share the cache across multiple processes or servers. The Upstash Redis client uses HTTP and supports edge environments. To use it, you'll need to install the @upstash/redis package:
- npm
- Yarn
- pnpm
npm install @upstash/redis
yarn add @upstash/redis
pnpm add @upstash/redis
You'll also need an Upstash account and a Redis database to connect to. Once you've done that, retrieve your REST URL and REST token.
Then, you can pass a cache option when you instantiate the LLM. For example:
import { ChatOpenAI } from "@langchain/openai";
import { UpstashRedisCache } from "@langchain/community/caches/upstash_redis";
// See https://docs.upstash.com/redis/howto/connectwithupstashredis#quick-start for connection options
const cache = new UpstashRedisCache({
config: {
url: "UPSTASH_REDIS_REST_URL",
token: "UPSTASH_REDIS_REST_TOKEN",
},
ttl: 3600,
});
const model = new ChatOpenAI({ cache });
API Reference:
- ChatOpenAI from @langchain/openai
- UpstashRedisCache from @langchain/community/caches/upstash_redis
You can also directly pass in a previously created @upstash/redis client instance:
import { Redis } from "@upstash/redis";
import https from "https";
import { ChatOpenAI } from "@langchain/openai";
import { UpstashRedisCache } from "@langchain/community/caches/upstash_redis";
// const client = new Redis({
// url: process.env.UPSTASH_REDIS_REST_URL!,
// token: process.env.UPSTASH_REDIS_REST_TOKEN!,
// agent: new https.Agent({ keepAlive: true }),
// });
// Or simply call Redis.fromEnv() to automatically load the UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN environment variables.
const client = Redis.fromEnv({
agent: new https.Agent({ keepAlive: true }),
});
const cache = new UpstashRedisCache({ client });
const model = new ChatOpenAI({ cache });
API Reference:
- ChatOpenAI from @langchain/openai
- UpstashRedisCache from @langchain/community/caches/upstash_redis
Caching with Vercel KV
LangChain provides a Vercel KV-based cache. Like the Redis-based cache, this cache is useful if you want to share the cache across multiple processes or servers. The Vercel KV client uses HTTP and supports edge environments. To use it, you'll need to install the @vercel/kv package:
- npm
- Yarn
- pnpm
npm install @vercel/kv
yarn add @vercel/kv
pnpm add @vercel/kv
You'll also need a Vercel account and a KV database to connect to. Once you've done that, retrieve your REST URL and REST token.
Then, you can pass a cache option when you instantiate the LLM. For example:
import { ChatOpenAI } from "@langchain/openai";
import { VercelKVCache } from "@langchain/community/caches/vercel_kv";
import { createClient } from "@vercel/kv";
// See https://vercel.com/docs/storage/vercel-kv/kv-reference#createclient-example for connection options
const cache = new VercelKVCache({
client: createClient({
url: "VERCEL_KV_API_URL",
token: "VERCEL_KV_API_TOKEN",
}),
ttl: 3600,
});
const model = new ChatOpenAI({
model: "gpt-4o-mini",
cache,
});
API Reference:
- ChatOpenAI from @langchain/openai
- VercelKVCache from @langchain/community/caches/vercel_kv
Caching with Cloudflare KV
This integration is only supported in Cloudflare Workers.
If you're deploying your project as a Cloudflare Worker, you can use LangChain's Cloudflare KV-powered LLM cache.
For information on how to set up KV in Cloudflare, see the official documentation.
Note: If you are using TypeScript, you may need to install types if they aren't already present:
- npm
- Yarn
- pnpm
npm install -S @cloudflare/workers-types
yarn add @cloudflare/workers-types
pnpm add @cloudflare/workers-types
import type { KVNamespace } from "@cloudflare/workers-types";
import { ChatOpenAI } from "@langchain/openai";
import { CloudflareKVCache } from "@langchain/cloudflare";
export interface Env {
KV_NAMESPACE: KVNamespace;
OPENAI_API_KEY: string;
}
export default {
async fetch(_request: Request, env: Env) {
try {
const cache = new CloudflareKVCache(env.KV_NAMESPACE);
const model = new ChatOpenAI({
cache,
model: "gpt-3.5-turbo",
apiKey: env.OPENAI_API_KEY,
});
const response = await model.invoke("How are you today?");
return new Response(JSON.stringify(response), {
headers: { "content-type": "application/json" },
});
} catch (err: any) {
console.log(err.message);
return new Response(err.message, { status: 500 });
}
},
};
API Reference:
- ChatOpenAI from @langchain/openai
- CloudflareKVCache from @langchain/cloudflare
Caching on the File System
This cache is not recommended for production use. It is only intended for local development.
LangChain provides a simple file system cache. By default, the cache is stored in a temporary directory, but you can specify a custom directory if you want.
import { LocalFileCache } from "langchain/cache/file_system";
const cache = await LocalFileCache.create();
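To keep the cache in a specific directory and attach it to a model, you can pass a path to create(). A minimal sketch, assuming create() accepts an optional directory path (as in recent versions of langchain); the "./langchain-cache" path is just an example:
import { ChatOpenAI } from "@langchain/openai";
import { LocalFileCache } from "langchain/cache/file_system";
// Store cached responses in a project-local directory instead of a temp dir.
const cache = await LocalFileCache.create("./langchain-cache");
const model = new ChatOpenAI({
model: "gpt-4o-mini",
cache,
});
// Repeated identical prompts will now be served from disk, even across restarts.
const res = await model.invoke("Tell me a joke!");
console.log(res.content);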
Next steps
You've now learned how to cache model responses to save time and money.
Next, check out the other how-to guides on chat models, like how to get a model to return structured output or how to create your own custom chat model.