
Chains

Chains refer to sequences of calls, whether to an LLM, a tool, or a data preprocessing step. The primary supported way to build them is with LCEL (LangChain Expression Language).

LCEL is great for constructing your own chains, but it’s also nice to have chains that you can use off-the-shelf. There are two types of off-the-shelf chains that LangChain supports:

  • Chains that are built with LCEL. In this case, LangChain offers a higher-level constructor method. However, all that is being done under the hood is constructing a chain with LCEL.
  • [Legacy] Chains constructed by subclassing from a legacy Chain class. These chains do not use LCEL under the hood but are rather standalone classes.

We are working on creating methods that construct LCEL versions of all chains. We are doing this for a few reasons:

  1. Chains constructed in this way are easy to customize: if you want to modify the internals of a chain, you can simply modify the LCEL.
  2. These chains natively support streaming, async, and batch out of the box (see the sketch after this list).
  3. These chains automatically get observability at each step.
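
As a minimal sketch of what this looks like in practice (assuming the OpenAI chat model integration; import paths and option names may differ slightly across LangChain.js versions), a prompt, a model, and an output parser can be piped together into a single runnable that exposes invoke, stream, and batch directly:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Piping runnables together yields another runnable.
const prompt = ChatPromptTemplate.fromTemplate(
  "Summarize the following text in one sentence:\n\n{text}"
);
const model = new ChatOpenAI({ temperature: 0 });
const chain = prompt.pipe(model).pipe(new StringOutputParser());

// Single call.
const summary = await chain.invoke({ text: "LCEL composes runnables into chains." });

// Streaming: chunks arrive as the model generates them.
for await (const chunk of await chain.stream({ text: "LCEL composes runnables into chains." })) {
  process.stdout.write(chunk);
}

// Batching: run the same chain over several inputs.
const summaries = await chain.batch([{ text: "First document." }, { text: "Second document." }]);
```

Because the whole chain is just composed runnables, swapping out the prompt, model, or parser is a local edit rather than a change to a Chain subclass.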

This page contains two lists. First, a list of all LCEL chain constructors. Second, a list of all legacy Chains.

LCEL Chains

Below is a table of all LCEL chain constructors. In addition, we report on:

  • Chain Constructor: The constructor function for this chain. These are all methods that return LCEL runnables. We also link to the API documentation.
  • Function Calling: Whether this chain requires OpenAI function calling.
  • Other Tools: What other tools (if any) are used in this chain.
  • When to Use: Our commentary on when to use this chain.

| Chain Constructor | Function Calling | Other Tools | When to Use |
| --- | --- | --- | --- |
| createStuffDocumentsChain | | | This chain takes a list of documents and formats them all into a prompt, then passes that prompt to an LLM. It passes ALL documents, so you should make sure they fit within the context window of the LLM you are using. |
| createOpenAIFnRunnable | ✅ | | If you want to use OpenAI function calling to OPTIONALLY structure an output response. You may pass in multiple functions for the chain to call, but it does not have to call any of them. |
| createStructuredOutputRunnable | ✅ | | If you want to use OpenAI function calling to FORCE the LLM to respond with a certain function. You may only pass in one function, and the chain will ALWAYS return this response. |
| createHistoryAwareRetriever | | Retriever | This chain takes in conversation history and then uses that to generate a search query, which is passed to the underlying retriever. |
| createRetrievalChain | | Retriever | This chain takes in a user inquiry, which is then passed to the retriever to fetch relevant documents. Those documents (and the original inputs) are then passed to an LLM to generate a response. |
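
As a rough sketch of how these constructors compose (the retriever below is a placeholder for any existing retriever, e.g. one backed by a vector store, and import paths may differ by version), createStuffDocumentsChain can serve as the combine-documents step inside createRetrievalChain:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import type { BaseRetriever } from "@langchain/core/retrievers";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";

// Placeholder: any retriever, e.g. one obtained from a vector store.
declare const retriever: BaseRetriever;

const llm = new ChatOpenAI({ temperature: 0 });
const prompt = ChatPromptTemplate.fromTemplate(
  "Answer the question using only the context below.\n\nContext:\n{context}\n\nQuestion: {input}"
);

// Formats all retrieved documents into the {context} slot of the prompt.
const combineDocsChain = await createStuffDocumentsChain({ llm, prompt });

// Fetches documents with the retriever, then passes them to the chain above.
const retrievalChain = await createRetrievalChain({
  retriever,
  combineDocsChain,
});

// The result includes the retrieved context alongside the generated answer.
const result = await retrievalChain.invoke({ input: "What is LCEL?" });
```

Because both pieces are LCEL runnables, either step can be inspected, streamed, or replaced independently.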

Legacy Chains

Below we report on the legacy chain types that exist. We will maintain support for these until we are able to create an LCEL alternative. We cover:

  • Chain: Name of the chain, or name of the constructor method. If a constructor method, it returns a Chain subclass.
  • Function Calling: Whether this chain requires OpenAI function calling.
  • Other Tools: Other tools used in the chain.
  • When to Use: Our commentary on when to use it.

| Chain | Function Calling | Other Tools | When to Use |
| --- | --- | --- | --- |
| createOpenAPIChain | | OpenAPI Spec | Similar to APIChain, this chain is designed to interact with APIs. The main difference is that this chain is optimized for ease of use with OpenAPI endpoints. |
| ConversationalRetrievalQAChain | | Retriever | This chain can be used to have conversations with a document. It takes in a question and (optional) previous conversation history. If there is previous conversation history, it uses an LLM to rewrite the conversation into a query to send to a retriever (otherwise it just uses the newest user input). It then fetches those documents and passes them (along with the conversation) to an LLM to respond. |
| StuffDocumentsChain | | | This chain takes a list of documents and formats them all into a prompt, then passes that prompt to an LLM. It passes ALL documents, so you should make sure they fit within the context window of the LLM you are using. |
| MapReduceDocumentsChain | | | This chain first passes each document through an LLM, then reduces them using the ReduceDocumentsChain. Useful in the same situations as ReduceDocumentsChain, but does an initial LLM call before trying to reduce the documents. |
| RefineDocumentsChain | | | This chain collapses documents by generating an initial answer based on the first document and then looping over the remaining documents to refine its answer. It operates sequentially, so it cannot be parallelized. It is useful in similar situations as MapReduceDocumentsChain, but for cases where you want to build up an answer by refining the previous answer (rather than parallelizing calls). |
| ConstitutionalChain | | | This chain answers, then attempts to refine its answer based on constitutional principles that are provided. Use this when you want to enforce that a chain's answer follows some principles. |
| LLMChain | | | This chain simply combines a prompt with an LLM and an output parser. The recommended way to do this is just to use LCEL (see the sketch after this table). |
| GraphCypherQAChain | | A graph that works with the Cypher query language | This chain constructs a Cypher query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |
| createExtractionChain | ✅ | | Uses OpenAI function calling to extract information from text. |
| createExtractionChainFromZod | ✅ | | Uses OpenAI function calling and a Zod schema to extract information from text. |
| SqlDatabaseChain | | | Answers questions by generating and running SQL queries for a provided database. |
| LLMRouterChain | | | This chain uses an LLM to route between potential options. |
| MultiPromptChain | | | This chain routes input between multiple prompts. Use this when you have multiple potential prompts you could use to respond and want to route to just one. |
| MultiRetrievalQAChain | | Retriever | This chain uses an LLM to route input questions to the appropriate retriever for question answering. |
| loadQAChain | | Retriever | Does question answering over documents you pass in, and cites its sources. Use this over RetrievalQAChain when you want to pass in the documents directly (rather than relying on a passed retriever to fetch them). |
| APIChain | | Requests Wrapper | This chain uses an LLM to convert a query into an API request, executes that request, gets back a response, and then passes that response to an LLM to respond. Prefer createOpenAPIChain if you have a spec available. |
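
To illustrate the note in the LLMChain row above, here is a hedged sketch contrasting the legacy class with its LCEL replacement (import paths and invocation methods vary somewhat across versions; older releases use .call() rather than .invoke() on the legacy chain):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { LLMChain } from "langchain/chains";

const prompt = PromptTemplate.fromTemplate("Tell me a joke about {topic}");
const llm = new ChatOpenAI({ temperature: 0 });

// Legacy: a standalone Chain subclass.
const legacyChain = new LLMChain({ llm, prompt });
const legacyResult = await legacyChain.invoke({ topic: "bears" });

// LCEL: the same prompt + LLM + output parser expressed as composed runnables.
const lcelChain = prompt.pipe(llm).pipe(new StringOutputParser());
const lcelResult = await lcelChain.invoke({ topic: "bears" });
```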
