Deployment

We strive to make deploying production apps using LangChain.js as intuitive as possible.

Compatibility

You can use LangChain in a variety of environments, including:

  • Node.js (ESM and CommonJS) - 18.x, 19.x, 20.x
  • Cloudflare Workers
  • Vercel / Next.js (Browser, Serverless, and Edge functions)
  • Supabase Edge Functions
  • Browser
  • Deno

Note that individual integrations may not be supported in all environments.

For additional compatibility tips, such as how to deploy to older versions of Node.js, see the installation section of the docs.

Streaming over HTTP

LangChain is designed to interact with web streaming APIs via LangChain Expression Language (LCEL)'s .stream() and .streamLog() methods, which both return a web ReadableStream instance that also implements async iteration. Certain modules like output parsers also support "transform"-style streaming, where streamed LLM or chat model chunks are transformed into a different format as they are generated.
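
For example, here is a minimal sketch of consuming a stream with async iteration and a transform-style StringOutputParser (the model and input are illustrative):

import { ChatOpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";

// StringOutputParser transforms streamed chat model chunks into plain
// strings as they are generated.
const model = new ChatOpenAI({ model: "gpt-3.5-turbo-1106" });
const chain = model.pipe(new StringOutputParser());

const stream = await chain.stream("Tell me a joke about parrots.");

// The returned web ReadableStream also implements async iteration:
for await (const chunk of stream) {
  console.log(chunk);
}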

LangChain also includes a special HttpResponseOutputParser for transforming LLM outputs into encoded byte streams for text/plain and text/event-stream content types.

Thus, you can pass streaming LLM responses directly into web HTTP response objects like this:

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { HttpResponseOutputParser } from "langchain/output_parsers";

const TEMPLATE = `You are a pirate named Patchy. All responses must be extremely verbose and in pirate dialect.

{input}`;

const prompt = ChatPromptTemplate.fromTemplate(TEMPLATE);

export async function POST() {
  const model = new ChatOpenAI({
    temperature: 0.8,
    model: "gpt-3.5-turbo-1106",
  });

  // Encodes the model's streamed output as bytes for the HTTP response.
  const outputParser = new HttpResponseOutputParser();

  const chain = prompt.pipe(model).pipe(outputParser);

  const stream = await chain.stream({
    input: "Hi there!",
  });

  return new Response(stream);
}
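
If you want server-sent events rather than plain text, HttpResponseOutputParser also accepts a contentType option. Here is a minimal variation on the handler body above (the explicit Content-Type header is an assumption about what your client expects):

const outputParser = new HttpResponseOutputParser({
  contentType: "text/event-stream",
});

const chain = prompt.pipe(model).pipe(outputParser);

const stream = await chain.stream({
  input: "Hi there!",
});

return new Response(stream, {
  headers: { "Content-Type": "text/event-stream" },
});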

Streaming intermediate chain steps

The .streamLog() LCEL method streams back intermediate chain steps as JSONPatch chunks. See this page for an in-depth example, and note that because LangChain.js also runs in the browser, you can import and use its applyPatch method on the client as well.
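
For instance, here is a minimal sketch of consuming .streamLog() and rebuilding the run state with applyPatch, reusing the prompt and model from the example above (the @langchain/core/utils/json_patch import path is where current versions re-export the JSONPatch utilities; verify it against your installed version):

import { applyPatch } from "@langchain/core/utils/json_patch";

const chain = prompt.pipe(model);

// Each emitted chunk carries JSONPatch operations describing how the
// aggregate run log changed; applying them rebuilds the full state.
let aggregate = {};

for await (const logPatchChunk of chain.streamLog({ input: "Hi there!" })) {
  aggregate = applyPatch(aggregate, logPatchChunk.ops).newDocument;
}

console.log(aggregate);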

Error handling

You can handle errors via try/catch for the standard .invoke() LCEL method as usual:

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { HttpResponseOutputParser } from "langchain/output_parsers";

const TEMPLATE = `You are a pirate named Patchy. All responses must be extremely verbose and in pirate dialect.

{input}`;

const prompt = ChatPromptTemplate.fromTemplate(TEMPLATE);

const model = new ChatOpenAI({
  temperature: 0.8,
  model: "gpt-3.5-turbo-1106",
  apiKey: "INVALID_KEY",
});

const outputParser = new HttpResponseOutputParser();

const chain = prompt.pipe(model).pipe(outputParser);

try {
  await chain.invoke({
    input: "Hi there!",
  });
} catch (e) {
  console.log(e);
}

/*
AuthenticationError: 401 Incorrect API key provided: INVALID_KEY. You can find your API key at https://platform.openai.com/account/api-keys.
at Function.generate (/Users/jacoblee/langchain/langchainjs/node_modules/openai/src/error.ts:71:14)
at OpenAI.makeStatusError (/Users/jacoblee/langchain/langchainjs/node_modules/openai/src/core.ts:371:21)
at OpenAI.makeRequest (/Users/jacoblee/langchain/langchainjs/node_modules/openai/src/core.ts:429:24)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async file:///Users/jacoblee/langchain/langchainjs/libs/langchain-openai/dist/chat_models.js:646:29
at RetryOperation._fn (/Users/jacoblee/langchain/langchainjs/node_modules/p-retry/index.js:50:12) {
status: 401,
*/

The .stream() method also waits until the first chunk is ready before resolving, so you can handle errors that occur immediately with the same try/catch pattern:

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { HttpResponseOutputParser } from "langchain/output_parsers";

const TEMPLATE = `You are a pirate named Patchy. All responses must be extremely verbose and in pirate dialect.

{input}`;

const prompt = ChatPromptTemplate.fromTemplate(TEMPLATE);

const model = new ChatOpenAI({
  temperature: 0.8,
  model: "gpt-3.5-turbo-1106",
  apiKey: "INVALID_KEY",
});

const outputParser = new HttpResponseOutputParser();

const chain = prompt.pipe(model).pipe(outputParser);

try {
  await chain.stream({
    input: "Hi there!",
  });
} catch (e) {
  console.log(e);
}

/*
AuthenticationError: 401 Incorrect API key provided: INVALID_KEY. You can find your API key at https://platform.openai.com/account/api-keys.
at Function.generate (/Users/jacoblee/langchain/langchainjs/node_modules/openai/src/error.ts:71:14)
at OpenAI.makeStatusError (/Users/jacoblee/langchain/langchainjs/node_modules/openai/src/core.ts:371:21)
at OpenAI.makeRequest (/Users/jacoblee/langchain/langchainjs/node_modules/openai/src/core.ts:429:24)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async file:///Users/jacoblee/langchain/langchainjs/libs/langchain-openai/dist/chat_models.js:646:29
at RetryOperation._fn (/Users/jacoblee/langchain/langchainjs/node_modules/p-retry/index.js:50:12) {
status: 401,
*/
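
In a deployed handler, you would typically translate a caught error into an HTTP error response instead of just logging it. Here is a sketch of what that could look like, reusing the chain from above (the status code and message are illustrative choices):

export async function POST() {
  try {
    const stream = await chain.stream({
      input: "Hi there!",
    });
    return new Response(stream);
  } catch (e) {
    // Log server-side, but avoid leaking provider error details to clients.
    console.error(e);
    return new Response("Internal server error", { status: 500 });
  }
}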

Note that other errors that occur while streaming (for example, broken connections) cannot be handled this way, since once the initial HTTP response is sent, there is no way to alter its status code or headers.
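
If you need to surface mid-stream failures to the client anyway, one option is to wrap the stream and append a sentinel message to the response body when iteration throws. This is a hedged sketch (the wrapper and error text are illustrative, not a LangChain API):

export async function POST() {
  const encoder = new TextEncoder();

  // Wrap the chain's stream so errors thrown mid-iteration surface in
  // the response body instead of silently truncating it.
  const safeStream = new ReadableStream({
    async start(controller) {
      try {
        const stream = await chain.stream({
          input: "Hi there!",
        });
        for await (const chunk of stream) {
          controller.enqueue(chunk);
        }
      } catch (e) {
        console.error(e);
        controller.enqueue(encoder.encode("\n[Stream interrupted]"));
      } finally {
        controller.close();
      }
    },
  });

  return new Response(safeStream);
}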
