Tool error handling
Using a model to invoke a tool has some obvious potential failure modes. Firstly, the model needs to return an output that can be parsed at all. Secondly, the model needs to return tool arguments that are valid.
We can build error handling into our chains to mitigate these failure modes.
Setup
We'll use OpenAI for this guide, and will need to install its partner package:
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
You'll need to sign up for an OpenAI key and set it as an environment variable named OPENAI_API_KEY.
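The ChatOpenAI class used below picks this variable up automatically. If you prefer to pass the key in code, you can set the openAIApiKey field instead; a minimal sketch (the exampleModel name is just for illustration):

import { ChatOpenAI } from "@langchain/openai";

// Sketch: pass the key explicitly instead of relying on the
// OPENAI_API_KEY environment variable.
const exampleModel = new ChatOpenAI({
  openAIApiKey: "sk-...", // replace with your actual key
});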
We'll also use the popular validation library Zod to define our tool schemas. It's already a dependency of langchain, but you can install it explicitly like this too:
npm install zod
yarn add zod
pnpm add zod
Chain
Suppose we have the following (dummy) tool and tool-calling chain. We'll make our tool intentionally convoluted to try to trip up the model.
import { z } from "zod";
import { DynamicStructuredTool } from "@langchain/core/tools";

const complexTool = new DynamicStructuredTool({
  name: "complex_tool",
  description: "Do something complex with a complex tool.",
  schema: z.object({
    intArg: z.number(),
    floatArg: z.number(),
    dictArg: z.object({
      test: z.object({}),
    }),
  }),
  func: async ({ intArg, floatArg, dictArg }) => {
    // dictArg is required by the schema but unused for demo purposes.
    console.log(dictArg);
    return (intArg * floatArg).toString();
  },
});
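As a quick sanity check, we can call the tool directly with arguments that satisfy the schema; the sketch below assumes invoking the tool as a runnable, and shows that the failure mode we care about lies purely in the model's argument generation:

// Valid arguments pass schema validation; the tool logs dictArg
// ({ test: {} } here) and returns the product as a string.
console.log(
  await complexTool.invoke({ intArg: 5, floatArg: 2.1, dictArg: { test: {} } })
);
// "10.5"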
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
  temperature: 0,
});
import { convertToOpenAITool } from "@langchain/core/utils/function_calling";

const formattedTools = [convertToOpenAITool(complexTool)];

const modelWithTools = model.bind({
  tools: formattedTools,
  // We specify tool_choice to enforce that the 'complex_tool' function is called by the model.
  tool_choice: {
    type: "function",
    function: { name: "complex_tool" },
  },
});
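If you want to inspect what the model returns before any parsing, you can invoke the bound model on its own. A sketch (the exact shape of additional_kwargs depends on the OpenAI response format):

// The AIMessage carries the generated arguments as a JSON string
// inside additional_kwargs.tool_calls.
const rawResult = await modelWithTools.invoke(
  "use complex tool. the args are 5, 2.1, potato."
);
console.log(rawResult.additional_kwargs.tool_calls);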
import { JsonOutputKeyToolsParser } from "langchain/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";

const chain = RunnableSequence.from([
  modelWithTools,
  new JsonOutputKeyToolsParser({ keyName: "complex_tool", returnSingle: true }),
  complexTool,
]);
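With well-behaved input the chain should work end to end: the model emits a tool call, the parser pulls out the arguments under the complex_tool key, and the tool returns the product. A sketch (the model's output is of course not guaranteed):

// Should return "10.5" if the model maps the arguments onto the schema correctly.
await chain.invoke(
  "use complex tool. the args are 5 and 2.1, and a dict with an empty 'test' object."
);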
We can see that when we try to invoke this chain with certain inputs, the model fails to correctly call the tool (it fails to provide a valid object to the dictArg parameter, and instead passes potato).
await chain.invoke("use complex tool. the args are 5, 2.1, potato.");
ToolInputParsingException [Error]: Received tool input did not match expected schema
at DynamicStructuredTool.call (file:///Users/jacoblee/langchain/langchainjs/langchain-core/dist/tools.js:63:19)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async RunnableSequence.invoke (file:///Users/jacoblee/langchain/langchainjs/langchain-core/dist/runnables/base.js:818:27)
at <anonymous> (/Users/jacoblee/langchain/langchainjs/examples/src/use_cases/tool_use/tool_error_handling_intro.ts:50:3) {
output: '{"intArg":5,"floatArg":2.1,"dictArg":"potato"}'
}
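Note that the parser itself succeeds here: as the stack trace shows, it's the DynamicStructuredTool's schema validation that throws when it receives the string potato where dictArg expects an object.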
Fallbacks
One way to solve this is to fall back to a better model in the event of a tool invocation error. In this case we'll fall back to an identical chain that uses gpt-4-1106-preview instead of gpt-3.5-turbo-1106.
const betterModel = new ChatOpenAI({
  model: "gpt-4-1106-preview",
  temperature: 0,
}).bind({
  tools: formattedTools,
  // We specify tool_choice to enforce that the 'complex_tool' function is called by the model.
  tool_choice: {
    type: "function",
    function: { name: "complex_tool" },
  },
});
const betterChain = RunnableSequence.from([
  betterModel,
  new JsonOutputKeyToolsParser({ keyName: "complex_tool", returnSingle: true }),
  complexTool,
]);
const chainWithFallback = chain.withFallbacks({
  fallbacks: [betterChain],
});
You can see a LangSmith trace of this example here.
await chainWithFallback.invoke(
  "use complex tool. the args are 5, 2.1, potato."
);
10.5
Looking at the LangSmith trace for this chain run, we can see that the first chain call fails as expected and it's the fallback that succeeds.
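Another option, sketched here rather than taken from this guide, is to catch the tool's validation error yourself and surface it as the chain's output rather than retrying with a bigger model. The tryComplexTool wrapper below is hypothetical:

import { RunnableLambda } from "@langchain/core/runnables";

// Hypothetical wrapper: run the tool, but turn validation errors into a
// readable string instead of letting them bubble up.
const tryComplexTool = new RunnableLambda({
  func: async (toolArgs: any) => {
    try {
      return await complexTool.invoke(toolArgs);
    } catch (e) {
      return `Calling tool with arguments:\n\n${JSON.stringify(
        toolArgs
      )}\n\nraised the following error:\n\n${e}`;
    }
  },
});

const chainWithErrorMessage = RunnableSequence.from([
  modelWithTools,
  new JsonOutputKeyToolsParser({ keyName: "complex_tool", returnSingle: true }),
  tryComplexTool,
]);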