Choosing between multiple tools
In the tools Quickstart we went over how to build a Chain that calls a single multiply tool. Now let's take a look at how we might augment this chain so that it can pick from a number of tools to call. We'll focus on Chains since Agents can route between multiple tools by default.
Setup
Weโll use OpenAI for this guide, and will need to install its partner package:
- npm: npm install @langchain/openai
- Yarn: yarn add @langchain/openai
- pnpm: pnpm add @langchain/openai
You'll need to sign up for an OpenAI key and set it as an environment variable named OPENAI_API_KEY.
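If you like, you can add a quick guard so the code fails fast when the key is missing. This is an optional sketch, not part of the chain below:
if (!process.env.OPENAI_API_KEY) {
  // Fail early with a clear message instead of hitting a confusing auth error later.
  throw new Error("Missing OPENAI_API_KEY environment variable.");
}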
We'll also use the popular validation library Zod to define our tool schemas. It's already a dependency of langchain, but you can install it explicitly like this too:
- npm: npm install zod
- Yarn: yarn add zod
- pnpm: pnpm add zod
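If Zod is new to you, a schema is just a runtime description of the object shape a tool accepts. As a quick standalone illustration (the schema name here is arbitrary):
import { z } from "zod";

// Describe the expected arguments: two numbers.
const argsSchema = z.object({
  firstInt: z.number(),
  secondInt: z.number(),
});

// .parse() validates input at runtime and throws if it doesn't match the schema.
argsSchema.parse({ firstInt: 2, secondInt: 3 });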
Tools
Recall the multiply tool from the quickstart:
import { z } from "zod";
import { DynamicStructuredTool } from "@langchain/core/tools";
const multiplyTool = new DynamicStructuredTool({
  name: "multiply",
  description: "Multiply two integers together.",
  schema: z.object({
    firstInt: z.number(),
    secondInt: z.number(),
  }),
  func: async ({ firstInt, secondInt }) => {
    return (firstInt * secondInt).toString();
  },
});
And now let's create the add and exponentiate tools:
const addTool = new DynamicStructuredTool({
  name: "add",
  description: "Add two integers together.",
  schema: z.object({
    firstInt: z.number(),
    secondInt: z.number(),
  }),
  func: async ({ firstInt, secondInt }) => {
    return (firstInt + secondInt).toString();
  },
});
const exponentiateTool = new DynamicStructuredTool({
  name: "exponentiate",
  description: "Exponentiate the base to the exponent power.",
  schema: z.object({
    base: z.number(),
    exponent: z.number(),
  }),
  func: async ({ base, exponent }) => {
    return (base ** exponent).toString();
  },
});
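Because these tools are runnables themselves, you can sanity-check any of them by invoking it directly with a matching arguments object before wiring it into a chain. For example (a quick sketch; the commented values are what the functions above compute):
await multiplyTool.invoke({ firstInt: 4, secondInt: 5 }); // "20"
await exponentiateTool.invoke({ base: 2, exponent: 10 }); // "1024"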
The main difference between using one Tool and many is that with many, we can't be sure which Tool the model will invoke, so we can't hardcode a specific tool into our chain the way we did in the Quickstart.
Instead we'll create a function called callSelectedTool that takes the JsonOutputToolsParser output and returns the end of the chain based on the chosen tool. This means that the callSelectedTool function appends the Tools that were invoked to the end of the chain at runtime. We can do this because LCEL allows runnables to return runnables themselves, which are then invoked as part of the chain.
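For reference, each item that JsonOutputToolsParser passes along has roughly this shape (a hypothetical type written out only for illustration, not one exported by the library):
// The parsed form of a single tool call chosen by the model.
type ParsedToolCall = {
  type: string; // the tool's name, e.g. "multiply"
  args: Record<string, any>; // the arguments the model generated for that tool
};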
import { ChatOpenAI } from "@langchain/openai";
import { convertToOpenAITool } from "@langchain/core/utils/function_calling";
import { JsonOutputToolsParser } from "langchain/output_parsers";
import {
  RunnableLambda,
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
});

const tools = [multiplyTool, exponentiateTool, addTool];

const toolMap: Record<string, any> = {
  multiply: multiplyTool,
  exponentiate: exponentiateTool,
  add: addTool,
};

const modelWithTools = model.bind({
  tools: tools.map(convertToOpenAITool),
});

// Function for dynamically constructing the end of the chain based on the model-selected tool.
const callSelectedTool = RunnableLambda.from(
  (toolInvocation: Record<string, any>) => {
    const selectedTool = toolMap[toolInvocation.type];
    if (!selectedTool) {
      throw new Error(
        `No matching tool available for requested type "${toolInvocation.type}".`
      );
    }
    const toolCallChain = RunnableSequence.from([
      (toolInvocation) => toolInvocation.args,
      selectedTool,
    ]);
    // We use `RunnablePassthrough.assign` here to return the intermediate `toolInvocation` params
    // as well, but you can omit this if you only care about the answer.
    return RunnablePassthrough.assign({
      output: toolCallChain,
    });
  }
);

const chain = RunnableSequence.from([
  modelWithTools,
  new JsonOutputToolsParser(),
  // .map() allows us to apply a function for each item in a list of inputs.
  // Required because the model can call multiple tools at once.
  callSelectedTool.map(),
]);
Note the use of .map() above - this is required because the model can choose to call multiple tools in parallel. See this section for more details.
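As a small standalone sketch of what .map() does, it lifts a runnable that handles one input into a runnable that handles a list of inputs (the doubling example here is purely illustrative):
import { RunnableLambda } from "@langchain/core/runnables";

const double = RunnableLambda.from((x: number) => x * 2);

// Invoking the mapped runnable with a list applies `double` to each element.
await double.map().invoke([1, 2, 3]); // [2, 4, 6]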
You can see a LangSmith trace of this example here.
await chain.invoke("What's 23 times 7");
[
  {
    type: 'multiply',
    args: { firstInt: 23, secondInt: 7 },
    output: '161'
  }
]
await chain.invoke("add a million plus a billion");
[
  {
    type: 'add',
    args: { firstInt: 1000000, secondInt: 1000000000 },
    output: '1001000000'
  }
]
await chain.invoke("cube thirty-seven");
[
  {
    type: 'exponentiate',
    args: { base: 37, exponent: 3 },
    output: '50653'
  }
]