
Adding a timeout

By default, LangChain will wait indefinitely for a response from the model provider. If you want to add a timeout, you can pass a timeout option, in milliseconds, when you call the model. For example, for OpenAI:

npm install @langchain/openai

import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const chat = new ChatOpenAI({ temperature: 1 });

const response = await chat.invoke(
  [
    new HumanMessage(
      "What is a good name for a company that makes colorful socks?"
    ),
  ],
  { timeout: 1000 } // 1s timeout
);
console.log(response);
// AIMessage { text: '\n\nRainbow Sox Co.' }
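
If the configured timeout elapses before the provider responds, the call rejects with an error instead of returning a response, so it is worth wrapping the call in a try/catch. The sketch below reuses the same chat model as the example above; the exact error class surfaced on a timeout depends on the underlying provider SDK, so the handler simply logs whatever error it receives.

import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const chat = new ChatOpenAI({ temperature: 1 });

try {
  const response = await chat.invoke(
    [
      new HumanMessage(
        "What is a good name for a company that makes colorful socks?"
      ),
    ],
    { timeout: 1000 } // 1s timeout; the request is aborted if exceeded
  );
  console.log(response);
} catch (e) {
  // On timeout the promise rejects; the exact error type depends on the
  // provider SDK, so we only log it here.
  console.error("Model call failed or timed out:", e);
}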
