Getting Started: Chains
Using a language model in isolation is fine for some applications, but it is often useful to combine language models with other sources of information, third-party APIs, or even other language models. This is where the concept of a chain comes in.
LangChain provides a standard interface for chains, as well as a number of built-in chains that can be used out of the box. You can also create your own chains.
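To make the idea concrete, here is a minimal, library-free sketch of a chain in TypeScript. None of these names are LangChain APIs; it just shows the core pattern: each step's output feeds the next step (format a prompt, call a model, post-process the result).

```typescript
// A minimal, library-free sketch of the "chain" idea: each step's output
// feeds the next step. Names here are illustrative, not LangChain APIs.
type Step = (input: string) => Promise<string>;

// Step 1: fill a prompt template.
const formatPrompt: Step = async (topic) =>
  `Write a one-line tagline about ${topic}.`;

// Step 2: a stand-in "model" that just echoes its prompt.
const fakeModel: Step = async (prompt) => `TAGLINE(${prompt})`;

// Step 3: post-process the model output.
const trimOutput: Step = async (text) => text.trim();

// A chain is just function composition over async steps.
const runChain = async (steps: Step[], input: string): Promise<string> => {
  let value = input;
  for (const step of steps) {
    value = await step(value);
  }
  return value;
};

export const demo = runChain([formatPrompt, fakeModel, trimOutput], "coffee");
```

LangChain's built-in chains follow this same shape, but add standard input/output contracts, memory, and callbacks on top of it.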
📄️ LLM Chain
Conceptual Guide
🗃️ Index Related Chains
3 items
📄️ Sequential Chain
Sequential chains let you connect multiple chains and compose them into pipelines that carry out a specific task.
🗃️ Other Chains
8 items
📄️ Prompt Selectors
Conceptual Guide
Advanced
To implement your own custom chain, you can subclass BaseChain and implement the following methods:
```typescript
import { CallbackManagerForChainRun } from "langchain/callbacks";
import { BaseChain as _ } from "langchain/chains";
import { BaseMemory } from "langchain/memory";
import { ChainValues } from "langchain/schema";

abstract class BaseChain {
  memory?: BaseMemory;

  /**
   * Run the core logic of this chain and return the output.
   */
  abstract _call(
    values: ChainValues,
    runManager?: CallbackManagerForChainRun
  ): Promise<ChainValues>;

  /**
   * Return the string type key uniquely identifying this class of chain.
   */
  abstract _chainType(): string;

  /**
   * Return the list of input keys this chain expects to receive when called.
   */
  abstract get inputKeys(): string[];

  /**
   * Return the list of output keys this chain will produce when called.
   */
  abstract get outputKeys(): string[];
}
```
API Reference:
- CallbackManagerForChainRun from langchain/callbacks
- BaseChain from langchain/chains
- BaseMemory from langchain/memory
- ChainValues from langchain/schema
Subclassing BaseChain
The _call method is the main method custom chains must implement. It takes a record of inputs and returns a record of outputs. The inputs received should conform to the inputKeys array, and the outputs returned should conform to the outputKeys array.
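To show what that contract looks like in practice, here is a library-free sketch (not LangChain's actual implementation) of how a base class could validate inputs and outputs against those key arrays around a subclass's _call:

```typescript
// Sketch (not LangChain code): enforcing that inputs conform to `inputKeys`
// and outputs conform to `outputKeys` around a subclass's _call.
type ChainValues = Record<string, unknown>;

abstract class SketchChain {
  abstract get inputKeys(): string[];
  abstract get outputKeys(): string[];
  protected abstract _call(values: ChainValues): Promise<ChainValues>;

  // Public entry point: validate keys before and after the core logic.
  async call(values: ChainValues): Promise<ChainValues> {
    for (const key of this.inputKeys) {
      if (!(key in values)) {
        throw new Error(`Missing input key: ${key}`);
      }
    }
    const outputs = await this._call(values);
    for (const key of this.outputKeys) {
      if (!(key in outputs)) {
        throw new Error(`Missing output key: ${key}`);
      }
    }
    return outputs;
  }
}

// A trivial subclass: one input key, one output key.
export class EchoChain extends SketchChain {
  get inputKeys() {
    return ["input"];
  }
  get outputKeys() {
    return ["output"];
  }
  protected async _call(values: ChainValues): Promise<ChainValues> {
    return { output: `echo: ${values.input}` };
  }
}
```

Calling `new EchoChain().call({ input: "hi" })` resolves to `{ output: "echo: hi" }`, while `call({})` rejects because the `input` key is missing.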
When implementing this method in a custom chain, it's worth paying special attention to the runManager argument, which is what allows your custom chain to participate in the same callbacks system as the built-in chains. If you call into another chain, model, or agent inside your custom chain, you should pass it the result of calling runManager?.getChild(), which produces a new callback manager scoped to that inner run. An example:
```typescript
import { BasePromptTemplate, PromptTemplate } from "langchain/prompts";
import { BaseLanguageModel } from "langchain/base_language";
import { CallbackManagerForChainRun } from "langchain/callbacks";
import { BaseChain, ChainInputs } from "langchain/chains";
import { ChainValues } from "langchain/schema";

export interface MyCustomChainInputs extends ChainInputs {
  llm: BaseLanguageModel;
  promptTemplate: string;
}

export class MyCustomChain extends BaseChain implements MyCustomChainInputs {
  llm: BaseLanguageModel;

  promptTemplate: string;

  prompt: BasePromptTemplate;

  constructor(fields: MyCustomChainInputs) {
    super(fields);
    this.llm = fields.llm;
    this.promptTemplate = fields.promptTemplate;
    this.prompt = PromptTemplate.fromTemplate(this.promptTemplate);
  }

  async _call(
    values: ChainValues,
    runManager?: CallbackManagerForChainRun
  ): Promise<ChainValues> {
    // Your custom chain logic goes here
    // This is just an example that mimics LLMChain
    const promptValue = await this.prompt.formatPromptValue(values);

    // Whenever you call a language model, or another chain, you should pass
    // a callback manager to it. This allows the inner run to be tracked by
    // any callbacks that are registered on the outer run.
    // You can always obtain a callback manager for this by calling
    // `runManager?.getChild()` as shown below.
    const result = await this.llm.generatePrompt(
      [promptValue],
      {},
      runManager?.getChild()
    );

    // If you want to log something about this run, you can do so by calling
    // methods on the runManager, as shown below. This will trigger any
    // callbacks that are registered for that event.
    runManager?.handleText("Log something about this run");

    return { output: result.generations[0][0].text };
  }

  _chainType(): string {
    return "my_custom_chain";
  }

  get inputKeys(): string[] {
    return ["input"];
  }

  get outputKeys(): string[] {
    return ["output"];
  }
}
```
API Reference:
- BasePromptTemplate from langchain/prompts
- PromptTemplate from langchain/prompts
- BaseLanguageModel from langchain/base_language
- CallbackManagerForChainRun from langchain/callbacks
- BaseChain from langchain/chains
- ChainInputs from langchain/chains
- ChainValues from langchain/schema
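The runManager?.getChild() pattern can also be illustrated without LangChain. Below is a toy sketch (all names are illustrative, not LangChain's implementation) of a run manager whose getChild() returns a manager scoped one level deeper, so events emitted by inner runs can be attributed to the correct level of the chain/model hierarchy:

```typescript
// Toy sketch (not LangChain's implementation) of scoped run managers:
// getChild() produces a manager for a nested run, so callbacks can tell
// which level of the chain/model hierarchy emitted each event.
class ToyRunManager {
  constructor(
    private readonly depth: number,
    private readonly log: string[]
  ) {}

  // Record an event, indented by the nesting depth of this run.
  handleText(text: string): void {
    this.log.push(`${"  ".repeat(this.depth)}${text}`);
  }

  // Return a manager scoped to an inner run, one level deeper.
  getChild(): ToyRunManager {
    return new ToyRunManager(this.depth + 1, this.log);
  }
}

export const log: string[] = [];
const outer = new ToyRunManager(0, log);
outer.handleText("outer chain start");
const inner = outer.getChild(); // pass this to an inner chain/model call
inner.handleText("inner model call");
outer.handleText("outer chain end");
```

After running, `log` holds the outer events unindented and the inner event indented one level, which is the same bookkeeping that lets LangChain's callbacks attribute nested chain and model runs to their parent.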