Get started
LCEL makes it easy to build complex chains from basic components, and supports out-of-the-box functionality such as streaming, parallelism, and logging.
Basic example: prompt + model + output parser
The most basic and common use case is chaining a prompt template and a model together. To see how this works, let's create a chain that takes a topic and generates a joke:
- npm: npm install @langchain/openai @langchain/community
- Yarn: yarn add @langchain/openai @langchain/community
- pnpm: pnpm add @langchain/openai @langchain/community
We're unifying model params across all packages. We now suggest using model instead of modelName, and apiKey for API keys.
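For example, here is a minimal sketch of constructing a chat model with these params; the model name and environment variable below are placeholders, not requirements:

import { ChatOpenAI } from "@langchain/openai";

// "model" and "apiKey" are the unified param names; the values are just examples.
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});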
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
const prompt = ChatPromptTemplate.fromMessages([
["human", "Tell me a short joke about {topic}"],
]);
const model = new ChatOpenAI({});
const outputParser = new StringOutputParser();
const chain = prompt.pipe(model).pipe(outputParser);
const response = await chain.invoke({
topic: "ice cream",
});
console.log(response);
/**
Why did the ice cream go to the gym?
Because it wanted to get a little "cone"ditioning!
*/
API Reference:
- ChatOpenAI from @langchain/openai
- ChatPromptTemplate from @langchain/core/prompts
- StringOutputParser from @langchain/core/output_parsers
Notice in this line we're chaining our prompt, model, and output parser together:

const chain = prompt.pipe(model).pipe(outputParser);

The .pipe() method allows for chaining together any number of runnables, passing the output of one through as the input of the next. Here, the prompt is passed a topic and, when invoked, it returns a formatted prompt with the {topic} input variable replaced by the string we passed to the invoke call. That formatted prompt is then passed as the input to the model, which returns a BaseMessage object. Finally, the output parser takes that BaseMessage object and returns its content as a string.
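Because the composed chain is itself a runnable, you can also build it with RunnableSequence.from and use runnable methods such as .stream() to get the streaming behavior mentioned above. Here is a minimal sketch equivalent to the chain we just built:

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";

const prompt = ChatPromptTemplate.fromMessages([
  ["human", "Tell me a short joke about {topic}"],
]);
const model = new ChatOpenAI({});
const outputParser = new StringOutputParser();

// Equivalent to prompt.pipe(model).pipe(outputParser)
const chain = RunnableSequence.from([prompt, model, outputParser]);

// Streaming comes for free on any runnable; the parser emits string chunks.
const stream = await chain.stream({ topic: "ice cream" });
for await (const chunk of stream) {
  process.stdout.write(chunk);
}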
1. Prompt

prompt is a BasePromptTemplate, which means it takes in an object of template variables and produces a PromptValue. A PromptValue is a wrapper around a completed prompt that can be passed to either an LLM (which takes a string as input) or a ChatModel (which takes a sequence of messages as input). It can work with either language model type because it defines logic both for producing BaseMessages and for producing a string.
import { ChatPromptTemplate } from "@langchain/core/prompts";
const prompt = ChatPromptTemplate.fromMessages([
["human", "Tell me a short joke about {topic}"],
]);
const promptValue = await prompt.invoke({ topic: "ice cream" });
console.log(promptValue);
/**
ChatPromptValue {
messages: [
HumanMessage {
content: 'Tell me a short joke about ice cream',
name: undefined,
additional_kwargs: {}
}
]
}
*/
const promptAsMessages = promptValue.toChatMessages();
console.log(promptAsMessages);
/**
[
HumanMessage {
content: 'Tell me a short joke about ice cream',
name: undefined,
additional_kwargs: {}
}
]
*/
const promptAsString = promptValue.toString();
console.log(promptAsString);
/**
Human: Tell me a short joke about ice cream
*/
API Reference:
- ChatPromptTemplate from @langchain/core/prompts
2. Model

The PromptValue is then passed to model. In this case our model is a ChatModel, meaning it will output a BaseMessage.
import { ChatOpenAI } from "@langchain/openai";
const model = new ChatOpenAI({});
const promptAsString = "Human: Tell me a short joke about ice cream";
const response = await model.invoke(promptAsString);
console.log(response);
/**
AIMessage {
content: 'Sure, here you go: Why did the ice cream go to school? Because it wanted to get a little "sundae" education!',
name: undefined,
additional_kwargs: { function_call: undefined, tool_calls: undefined }
}
*/
API Reference:
- ChatOpenAI from @langchain/openai
If our model were an LLM, it would output a string.
import { OpenAI } from "@langchain/openai";
const model = new OpenAI({});
const promptAsString = "Human: Tell me a short joke about ice cream";
const response = await model.invoke(promptAsString);
console.log(response);
/**
Why did the ice cream go to therapy?
Because it was feeling a little rocky road.
*/
API Reference:
- OpenAI from @langchain/openai
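You also don't have to convert the PromptValue to a string or messages yourself: both model types accept the PromptValue produced by the prompt directly. A minimal sketch with the chat model:

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["human", "Tell me a short joke about {topic}"],
]);
const model = new ChatOpenAI({});

const promptValue = await prompt.invoke({ topic: "ice cream" });

// The chat model accepts the PromptValue directly and turns it into messages internally.
const response = await model.invoke(promptValue);
console.log(response.content);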
3. Output parser

And lastly we pass our model output to the outputParser, which is a BaseOutputParser, meaning it takes either a string or a BaseMessage as input. The StringOutputParser simply converts any input into a string.
import { AIMessage } from "@langchain/core/messages";
import { StringOutputParser } from "@langchain/core/output_parsers";
const outputParser = new StringOutputParser();
const message = new AIMessage(
'Sure, here you go: Why did the ice cream go to school? Because it wanted to get a little "sundae" education!'
);
const parsed = await outputParser.invoke(message);
console.log(parsed);
/**
Sure, here you go: Why did the ice cream go to school? Because it wanted to get a little "sundae" education!
*/
API Reference:
- AIMessage from @langchain/core/messages
- StringOutputParser from @langchain/core/output_parsers
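Since the parser also accepts plain strings, passing one through returns it unchanged; a quick sketch:

import { StringOutputParser } from "@langchain/core/output_parsers";

const outputParser = new StringOutputParser();

// A plain string input is passed through as-is.
const parsedString = await outputParser.invoke("Hello, world!");
console.log(parsedString);
/**
Hello, world!
*/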
RAG Search Example
For our next example, we want to run a retrieval-augmented generation chain to add some context when responding to questions.
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { Document } from "@langchain/core/documents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
RunnableLambda,
RunnableMap,
RunnablePassthrough,
} from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
const vectorStore = await HNSWLib.fromDocuments(
[
new Document({ pageContent: "Harrison worked at Kensho" }),
new Document({ pageContent: "Bears like to eat honey." }),
],
new OpenAIEmbeddings()
);
const retriever = vectorStore.asRetriever(1);
const prompt = ChatPromptTemplate.fromMessages([
[
"ai",
`Answer the question based on only the following context:
{context}`,
],
["human", "{question}"],
]);
const model = new ChatOpenAI({});
const outputParser = new StringOutputParser();
const setupAndRetrieval = RunnableMap.from({
context: new RunnableLambda({
func: (input: string) =>
retriever.invoke(input).then((response) => response[0].pageContent),
}).withConfig({ runName: "contextRetriever" }),
question: new RunnablePassthrough(),
});
const chain = setupAndRetrieval.pipe(prompt).pipe(model).pipe(outputParser);
const response = await chain.invoke("Where did Harrison work?");
console.log(response);
/**
Harrison worked at Kensho.
*/
API Reference:
- ChatOpenAI from @langchain/openai
- OpenAIEmbeddings from @langchain/openai
- HNSWLib from @langchain/community/vectorstores/hnswlib
- Document from @langchain/core/documents
- ChatPromptTemplate from @langchain/core/prompts
- RunnableLambda from @langchain/core/runnables
- RunnableMap from @langchain/core/runnables
- RunnablePassthrough from @langchain/core/runnables
- StringOutputParser from @langchain/core/output_parsers
In this chain we add some extra logic around retrieving context from a vector store.
We first instantiated our model, vector store and output parser. Then we defined our prompt, which takes in two input variables:
- context -> this is a string which is returned from our vector store based on a semantic search from the input.
- question -> this is the question we want to ask.
Next we created a setupAndRetrieval runnable. This has two components which return the values required by our prompt:
- context -> this is a RunnableLambda which takes the input from the .invoke() call, makes a request to our vector store, and returns the first result.
- question -> this uses a RunnablePassthrough which simply passes whatever the input was through to the next step, and in our case it returns it to the key in the object we defined.
Both of these are wrapped inside a RunnableMap. This is a special type of runnable that takes an object of runnables and executes them all in parallel. It then returns an object with the same keys as the input object, but with the values replaced with the output of the runnables.
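For example, here is a minimal, standalone sketch of a RunnableMap running a RunnableLambda and a RunnablePassthrough in parallel over the same input:

import {
  RunnableLambda,
  RunnableMap,
  RunnablePassthrough,
} from "@langchain/core/runnables";

const map = RunnableMap.from({
  // Derives a new value from the input:
  length: new RunnableLambda({ func: (input: string) => input.length }),
  // Forwards the input through unchanged:
  original: new RunnablePassthrough(),
});

const result = await map.invoke("ice cream");
console.log(result);
/**
{ length: 9, original: 'ice cream' }
*/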
Finally, we pass the output of the setupAndRetrieval to our prompt and then to our model and outputParser as before.