Chains
Chains refer to sequences of calls - whether to an LLM, a tool, or a data preprocessing step. The primary supported way to do this is with LCEL.
LCEL is great for constructing your own chains, but it's also nice to have chains that you can use off-the-shelf. There are two types of off-the-shelf chains that LangChain supports:
- Chains that are built with LCEL. In this case, LangChain offers a higher-level constructor method. However, all that is being done under the hood is constructing a chain with LCEL.
- [Legacy] Chains constructed by subclassing from a legacy Chain class. These chains do not use LCEL under the hood but are rather standalone classes.
We are working on creating methods that produce LCEL versions of all chains. We are doing this for a few reasons:
- Chains constructed in this way are nice because if you want to modify the internals of a chain you can simply modify the LCEL.
- These chains natively support streaming, async, and batch out of the box (see the sketch after this list).
- These chains automatically get observability at each step.
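To make these benefits concrete, here is a minimal LCEL sketch. It assumes the `@langchain/openai` and `@langchain/core` packages and an OpenAI API key in the environment; the prompt text and topics are placeholders, and exact import paths can vary between langchain versions.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// An LCEL chain is just runnables piped together: prompt -> model -> parser.
const prompt = ChatPromptTemplate.fromTemplate("Tell me a short fact about {topic}");
const model = new ChatOpenAI({ temperature: 0 });
const chain = prompt.pipe(model).pipe(new StringOutputParser());

// The same chain supports single invocation, streaming, and batching.
const single = await chain.invoke({ topic: "otters" });

const stream = await chain.stream({ topic: "otters" });
for await (const chunk of stream) {
  process.stdout.write(chunk);
}

const batched = await chain.batch([{ topic: "otters" }, { topic: "beavers" }]);
```

Because every step in the pipeline is itself a runnable, tracing tools such as LangSmith can record the inputs and outputs of each step without additional wiring.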
This page contains two lists. First, a list of all LCEL chain constructors. Second, a list of all legacy Chains.
LCEL Chains
Below is a table of all LCEL chain constructors. For each constructor, we report on:
Chain Constructor
The constructor function for this chain. These are all methods that return LCEL runnables. We also link to the API documentation.
Function Calling
Whether this requires OpenAI function calling.
Other Tools
What other tools (if any) are used in this chain.
When to Use
Our commentary on when to use this chain.
| Chain Constructor | Function Calling | Other Tools | When to Use |
| --- | --- | --- | --- |
| createStuffDocumentsChain | | | This chain takes a list of documents and formats them all into a prompt, then passes that prompt to an LLM. It passes ALL documents, so you should make sure the result fits within the context window of the LLM you are using. |
| createOpenAIFnRunnable | ✅ | | If you want to use OpenAI function calling to OPTIONALLY structure an output response. You may pass in multiple functions for the chain to call, but it does not have to call any of them. |
| createStructuredOutputRunnable | ✅ | | If you want to use OpenAI function calling to FORCE the LLM to respond with a certain function. You may only pass in one function, and the chain will ALWAYS return this response. |
| createHistoryAwareRetriever | | Retriever | This chain takes in conversation history and uses it to generate a search query, which is passed to the underlying retriever. |
| createRetrievalChain | | Retriever | This chain takes in a user inquiry, which is passed to the retriever to fetch relevant documents. Those documents (and the original inputs) are then passed to an LLM to generate a response. |
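As a concrete illustration of the last two rows, here is a hedged sketch that wires createStuffDocumentsChain into createRetrievalChain. It assumes the `langchain`, `@langchain/openai`, and `@langchain/core` packages, and it uses a tiny in-memory vector store purely as a stand-in for your own retriever; import paths and option names may differ slightly across versions.

```typescript
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { Document } from "@langchain/core/documents";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";

// Stand-in retriever: an in-memory vector store with a single document.
const vectorStore = await MemoryVectorStore.fromDocuments(
  [new Document({ pageContent: "LCEL is the LangChain Expression Language." })],
  new OpenAIEmbeddings()
);
const retriever = vectorStore.asRetriever();

// createStuffDocumentsChain stuffs ALL retrieved documents into the {context}
// slot of the prompt, so the combined documents must fit the model's context window.
const combineDocsChain = await createStuffDocumentsChain({
  llm: new ChatOpenAI({ temperature: 0 }),
  prompt: ChatPromptTemplate.fromTemplate(
    "Answer the question using only this context:\n\n{context}\n\nQuestion: {input}"
  ),
});

// createRetrievalChain sends {input} to the retriever, then passes the fetched
// documents (plus the original input) to the combine-documents chain.
const retrievalChain = await createRetrievalChain({
  retriever,
  combineDocsChain,
});

const result = await retrievalChain.invoke({ input: "What is LCEL?" });
// result.answer holds the generated response; result.context holds the retrieved documents.
```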
Legacy Chains
Below we report on the legacy chain types that exist. We will maintain support for these until we are able to create an LCEL alternative. We cover:
Chain
Name of the chain, or name of the constructor method. If it is a constructor method, it will return a Chain subclass.
Function Calling
Whether this requires OpenAI Function Calling.
Other Tools
Other tools used in the chain.
When to Use
Our commentary on when to use this chain.
| Chain | Function Calling | Other Tools | When to Use |
| --- | --- | --- | --- |
| createOpenAPIChain | | OpenAPI Spec | Similar to APIChain, this chain is designed to interact with APIs. The main difference is that it is optimized for ease of use with OpenAPI endpoints. |
| ConversationalRetrievalQAChain | | Retriever | This chain can be used to have conversations with a document. It takes in a question and (optional) previous conversation history. If there is previous conversation history, it uses an LLM to rewrite the conversation into a query to send to a retriever (otherwise it just uses the newest user input). It then fetches those documents and passes them (along with the conversation) to an LLM to respond. |
| StuffDocumentsChain | | | This chain takes a list of documents and formats them all into a prompt, then passes that prompt to an LLM. It passes ALL documents, so you should make sure the result fits within the context window of the LLM you are using. |
| MapReduceDocumentsChain | | | This chain first passes each document through an LLM, then reduces them using the ReduceDocumentsChain. Useful in the same situations as ReduceDocumentsChain, but does an initial LLM call before trying to reduce the documents. |
| RefineDocumentsChain | | | This chain collapses documents by generating an initial answer based on the first document and then looping over the remaining documents to refine its answer. It operates sequentially, so it cannot be parallelized. It is useful in similar situations as MapReduceDocumentsChain, but for cases where you want to build up an answer by refining the previous answer (rather than parallelizing calls). |
| ConstitutionalChain | | | This chain answers, then attempts to refine its answer based on constitutional principles that are provided. Use this when you want to enforce that a chain's answer follows some principles. |
| LLMChain | | | This chain simply combines a prompt with an LLM and an output parser. The recommended way to do this is just to use LCEL. |
| GraphCypherQAChain | | A graph that works with the Cypher query language | This chain constructs a Cypher query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |
| createExtractionChain | ✅ | | Uses OpenAI function calling to extract information from text. |
| createExtractionChainFromZod | ✅ | | Uses OpenAI function calling and a Zod schema to extract information from text. |
| SqlDatabaseChain | | | Answers questions by generating and running SQL queries for a provided database. |
| LLMRouterChain | | | This chain uses an LLM to route between potential options. |
| MultiPromptChain | | | This chain routes input between multiple prompts. Use this when you have multiple potential prompts you could use to respond and want to route to just one. |
| MultiRetrievalQAChain | | Retriever | This chain uses an LLM to route input questions to the appropriate retriever for question answering. |
| loadQAChain | | Retriever | Does question answering over documents you pass in, and cites its sources. Use this over RetrievalQAChain when you want to pass in the documents directly (rather than rely on a passed retriever to get them). |
| APIChain | | Requests Wrapper | This chain uses an LLM to convert a query into an API request, executes that request, gets back a response, and then passes that response to an LLM to respond. Prefer createOpenAPIChain if you have an OpenAPI spec available. |
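Since the LLMChain row above notes that LCEL is now the recommended approach, here is a hedged before/after sketch showing a legacy LLMChain next to an equivalent LCEL pipeline. It assumes the `langchain`, `@langchain/openai`, and `@langchain/core` packages; the prompt is a placeholder, and the legacy `.call()` API may be deprecated in newer releases.

```typescript
import { LLMChain } from "langchain/chains";
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);
const model = new ChatOpenAI({ temperature: 0 });

// Legacy: a standalone Chain subclass with its own call() interface.
const legacyChain = new LLMChain({ llm: model, prompt });
const legacyResult = await legacyChain.call({ product: "colorful socks" });
// legacyResult.text holds the generated name.

// LCEL equivalent: the same prompt, model, and parser composed as runnables,
// which adds streaming, batching, and per-step tracing with no extra code.
const lcelChain = prompt.pipe(model).pipe(new StringOutputParser());
const lcelResult = await lcelChain.invoke({ product: "colorful socks" });
```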