How to chain runnables
One point about LangChain Expression Language is that any two runnables can be "chained" together into sequences. The output of the previous runnable's .invoke() call is passed as input to the next runnable. This can be done using the .pipe() method.
The resulting RunnableSequence is itself a runnable, which means it can be invoked, streamed, or further chained just like any other runnable. Advantages of chaining runnables in this way are efficient streaming (the sequence will stream output as soon as it is available), and debugging and tracing with tools like LangSmith.
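For instance, here is a minimal sketch of the idea, using RunnableLambda to turn two plain functions into runnables (the functions and names here are ours, purely for illustration):
import { RunnableLambda } from "@langchain/core/runnables";

const double = RunnableLambda.from((x: number) => x * 2);
const addOne = RunnableLambda.from((x: number) => x + 1);

// .pipe() returns a RunnableSequence, which is itself a runnable
const sequence = double.pipe(addOne);

await sequence.invoke(3); // (3 * 2) + 1 = 7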
This guide assumes familiarity with the following concepts:
- LangChain Expression Language (LCEL)
- Prompt templates
- Chat models
- Output parsers
The pipe method
To show off how this works, let's go through an example. We'll walk through a common pattern in LangChain: using a prompt template to format input into a chat model, and finally converting the chat message output into a string with an [output parser](/docs/concepts/output_parsers).
Pick your chat model:
- OpenAI
- Anthropic
- FireworksAI
- MistralAI
- Groq
- VertexAI
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const model = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const model = new ChatAnthropic({
model: "claude-3-5-sonnet-20240620",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const model = new ChatFireworks({
model: "accounts/fireworks/models/llama-v3p1-70b-instruct",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const model = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
Add environment variables
GROQ_API_KEY=your-api-key
Instantiate the model
import { ChatGroq } from "@langchain/groq";
const model = new ChatGroq({
model: "mixtral-8x7b-32768",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";
const model = new ChatVertexAI({
model: "gemini-1.5-flash",
temperature: 0
});
- npm
- yarn
- pnpm
npm i @langchain/core
yarn add @langchain/core
pnpm add @langchain/core
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
const prompt = ChatPromptTemplate.fromTemplate("tell me a joke about {topic}");
const chain = prompt.pipe(model).pipe(new StringOutputParser());
Prompts and models are both runnable, and the output type from the prompt call is the same as the input type of the chat model, so we can chain them together. We can then invoke the resulting sequence like any other runnable:
await chain.invoke({ topic: "bears" });
"Here's a bear joke for you:\n\nWhy did the bear dissolve in water?\nBecause it was a polar bear!"
Coercion
We can even combine this chain with more runnables to create another chain. This may involve some input/output formatting using other types of runnables, depending on the required inputs and outputs of the chain components.
For example, let's say we wanted to compose the joke-generating chain with another chain that evaluates whether or not the generated joke was funny.
We would need to be careful with how we format the input into the next chain. In the below example, we use a RunnableLambda to reshape the joke chain's string output into an object with a joke key. This happens to be the same format the next prompt template expects. Here it is in action:
import { RunnableLambda } from "@langchain/core/runnables";
const analysisPrompt = ChatPromptTemplate.fromTemplate(
"is this a funny joke? {joke}"
);
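// Wrap the joke chain so its string output can be reshaped into the
// object the analysis prompt expects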
const composedChain = new RunnableLambda({
func: async (input: { topic: string }) => {
const result = await chain.invoke(input);
return { joke: result };
},
})
.pipe(analysisPrompt)
.pipe(model)
.pipe(new StringOutputParser());
await composedChain.invoke({ topic: "bears" });
'Haha, that\'s a clever play on words! Using "polar" to imply the bear dissolved or became polar/polarized when put in water. Not the most hilarious joke ever, but it has a cute, groan-worthy pun that makes it mildly amusing. I appreciate a good pun or wordplay joke.'
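Note that objects in a chain are automatically parsed and converted into a RunnableParallel, which runs all of its values in parallel and returns an object with the results. That coercion gives an equivalent, more compact way to write the chain above. Here is a sketch, reusing the chain and analysisPrompt defined earlier (the variable name composedChainFromObject is ours):
import { RunnableSequence } from "@langchain/core/runnables";

const composedChainFromObject = RunnableSequence.from([
  // This object literal is coerced into a RunnableParallel: `chain` runs
  // on the original input and its result is returned under the `joke` key
  { joke: chain },
  analysisPrompt,
  model,
  new StringOutputParser(),
]);

await composedChainFromObject.invoke({ topic: "bears" });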
Functions will also be coerced into runnables, so you can add custom logic to your chains too. The below chain results in the same logical flow as before:
import { RunnableSequence } from "@langchain/core/runnables";
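// Plain functions in the sequence are automatically coerced into RunnableLambdas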
const composedChainWithLambda = RunnableSequence.from([
chain,
(input) => ({ joke: input }),
analysisPrompt,
model,
new StringOutputParser(),
]);
await composedChainWithLambda.invoke({ topic: "beets" });
"Haha, that's a cute and punny joke! I like how it plays on the idea of beets blushing or turning red like someone blushing. Food puns can be quite amusing. While not a total knee-slapper, it's a light-hearted, groan-worthy dad joke that would make me chuckle and shake my head. Simple vegetable humor!"
See the LangSmith trace for the run above here
However, keep in mind that using functions like this may interfere with operations like streaming. See this section for more information.
Next steps
You now know some ways to chain two runnables together.
To learn more, see the other how-to guides on runnables in this section.