Dealing with rate limits

Some providers have rate limits. If you exceed the rate limit, you'll get an error. To help you deal with this, LangChain provides a maxConcurrency option when instantiating an Embeddings model. This option allows you to specify the maximum number of concurrent requests you want to make to the provider. If you exceed this number, LangChain will automatically queue up your requests to be sent as previous requests complete.

For example, if you set maxConcurrency: 5, then LangChain will only send 5 requests to the provider at a time. If you send 10 requests, the first 5 will be sent immediately, and the next 5 will be queued up. Once one of the first 5 requests completes, the next request in the queue will be sent.

To use this feature, simply pass maxConcurrency: <number> when you instantiate the model. For example:

npm install @langchain/openai
import { OpenAIEmbeddings } from "@langchain/openai";

// Limit to 5 concurrent requests; additional calls are queued automatically.
const model = new OpenAIEmbeddings({ maxConcurrency: 5 });
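
To see the queueing in practice, you can start more requests than the concurrency limit allows. The sketch below is a minimal illustration (the query strings are placeholders): it uses the embedQuery method to fire 10 embedding calls at once, and with maxConcurrency: 5 only five requests are in flight against the provider at any moment, while the rest wait their turn.

import { OpenAIEmbeddings } from "@langchain/openai";

const model = new OpenAIEmbeddings({ maxConcurrency: 5 });

// 10 calls are started at once, but at most 5 requests reach the provider
// concurrently; the remaining calls are queued until a slot frees up.
const queries = Array.from({ length: 10 }, (_, i) => `example query ${i}`);
const vectors = await Promise.all(queries.map((q) => model.embedQuery(q)));

console.log(vectors.length); // 10 embedding vectors, one per query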
