LLMChain
An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.
An LLMChain consists of a PromptTemplate and a language model (either an LLM or a chat model). It formats the prompt template using the provided input key values (and memory key values, if available), passes the formatted string to the LLM, and returns the LLM output.
Get started
We can construct an LLMChain which takes user input, formats it with a PromptTemplate, and then passes the formatted prompt to an LLM:
- npm
- Yarn
- pnpm
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
import { OpenAI } from "@langchain/openai";
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "@langchain/core/prompts";
// We can construct an LLMChain from a PromptTemplate and an LLM.
const model = new OpenAI({ temperature: 0 });
const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);
const chainA = new LLMChain({ llm: model, prompt });
// The result is an object with a `text` property.
const resA = await chainA.invoke({ product: "colorful socks" });
console.log({ resA });
// { resA: { text: '\n\nSocktastic!' } }
API Reference:
- OpenAI from @langchain/openai
- LLMChain from langchain/chains
- PromptTemplate from @langchain/core/prompts
Usage with Chat Models
We can also construct an LLMChain which takes user input, formats it with a ChatPromptTemplate, and then passes the formatted prompt to a chat model:
import { LLMChain } from "langchain/chains";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
// We can also construct an LLMChain from a ChatPromptTemplate and a chat model.
const chat = new ChatOpenAI({ temperature: 0 });
const chatPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant that translates {input_language} to {output_language}.",
  ],
  ["human", "{text}"],
]);
const chainB = new LLMChain({
  prompt: chatPrompt,
  llm: chat,
});
const resB = await chainB.invoke({
  input_language: "English",
  output_language: "French",
  text: "I love programming.",
});
console.log({ resB });
// { resB: { text: "J'adore la programmation." } }
API Reference:
- LLMChain from langchain/chains
- ChatOpenAI from @langchain/openai
- ChatPromptTemplate from @langchain/core/prompts
Usage in Streaming Mode
We can also construct an LLMChain which takes user input, formats it with a PromptTemplate, and then passes the formatted prompt to an LLM in streaming mode, which will stream back tokens as they are generated:
import { OpenAI } from "@langchain/openai";
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "@langchain/core/prompts";
// Create a new LLMChain from a PromptTemplate and an LLM in streaming mode.
const model = new OpenAI({ temperature: 0.9, streaming: true });
const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);
const chain = new LLMChain({ llm: model, prompt });
// Call the chain with the inputs and a callback for the streamed tokens
const res = await chain.invoke(
  { product: "colorful socks" },
  {
    callbacks: [
      {
        handleLLMNewToken(token: string) {
          process.stdout.write(token);
        },
      },
    ],
  }
);
console.log({ res });
// { res: { text: '\n\nKaleidoscope Socks' } }
API Reference:
- OpenAI from @langchain/openai
- LLMChain from langchain/chains
- PromptTemplate from @langchain/core/prompts
Cancelling a running LLMChain
We can also cancel a running LLMChain by passing an AbortSignal along with the chain inputs when calling invoke:
import { OpenAI } from "@langchain/openai";
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "@langchain/core/prompts";
// Create a new LLMChain from a PromptTemplate and an LLM in streaming mode.
const model = new OpenAI({ temperature: 0.9, streaming: true });
const prompt = PromptTemplate.fromTemplate(
  "Give me a long paragraph about {product}?"
);
const chain = new LLMChain({ llm: model, prompt });
const controller = new AbortController();
// Call `controller.abort()` somewhere to cancel the request.
setTimeout(() => {
  controller.abort();
}, 3000);
try {
  // Call the chain with the inputs and a callback for the streamed tokens
  const res = await chain.invoke(
    { product: "colorful socks", signal: controller.signal },
    {
      callbacks: [
        {
          handleLLMNewToken(token: string) {
            process.stdout.write(token);
          },
        },
      ],
    }
  );
} catch (e) {
  console.log(e);
  // Error: Cancel: canceled
}
API Reference:
- OpenAI from @langchain/openai
- LLMChain from langchain/chains
- PromptTemplate from @langchain/core/prompts
In this example we show cancellation in streaming mode, but it works the same way in non-streaming mode.
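As an aside, when the goal is simply a timeout like the one above, `AbortSignal.timeout()` is a terser alternative to wiring up an AbortController and setTimeout by hand. Note that this helper is a standard platform API (Node 17.3+ and modern browsers), not part of LangChain:

```typescript
// Create a signal that aborts automatically after 3 seconds.
const signal = AbortSignal.timeout(3000);

// Pass it in the chain inputs exactly as with a manual AbortController:
// const res = await chain.invoke({ product: "colorful socks", signal });
```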