
Handle parsing errors

Occasionally the LLM cannot determine what step to take because its output is not in a format that can be handled by the output parser. In this case, by default, the agent errors. You can control this behavior by passing handleParsingErrors when initializing the agent executor. This field can be a boolean, a string, or a function:

  • Passing true will pass a generic error back to the LLM along with the parsing error text for a retry.
  • Passing a string will return that value along with the parsing error text. This can help steer the LLM in the right direction.
  • Passing a function that takes an OutputParserException as its single argument allows you to run code in response to an error and return whatever string you'd like.
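For instance, the function form might look like the following sketch. The handler name and retry message here are illustrative, not part of the library, and the exception type is simplified to just the fields used:

```typescript
// Illustrative sketch of a custom handler for handleParsingErrors.
// The real OutputParserException from @langchain/core carries more
// fields; this simplified type covers only what the handler reads.
type ParserError = { message: string; llmOutput?: string };

// The handler receives the exception and returns the string that is
// passed back to the LLM for its retry.
function handleParseFailure(e: ParserError): string {
  // Run any code you like here, e.g. logging for later debugging.
  console.error("Output parsing failed:", e.message);
  return (
    `Your last output could not be parsed (${e.message}). ` +
    "Respond again, using only the allowed enum values."
  );
}
```

You would then pass handleParseFailure as the handleParsingErrors option when constructing the agent executor.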

Here's an example where the model initially tries to set "Reminder" as the task type instead of an allowed value:

npm install @langchain/openai
import { z } from "zod";
import type { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";
import { pull } from "langchain/hub";
import { DynamicStructuredTool } from "@langchain/core/tools";

const model = new ChatOpenAI({ temperature: 0.1 });
const tools = [
  new DynamicStructuredTool({
    name: "task-scheduler",
    description: "Schedules tasks",
    schema: z
      .object({
        tasks: z
          .array(
            z.object({
              title: z
                .string()
                .describe("The title of the tasks, reminders and alerts"),
              due_date: z
                .string()
                .describe("Due date. Must be a valid JavaScript date string"),
              task_type: z
                .enum([
                  "In-Person Meeting",
                  "Open House",
                ])
                .describe("The type of task"),
            })
          )
          .describe("The JSON for task, reminder or alert to create"),
      })
      .describe("JSON definition for creating tasks, reminders and alerts"),
    func: async (input: { tasks: object }) => JSON.stringify(input),
  }),
];

// Get the prompt to use - you can modify this!
// If you want to see the prompt in full, you can view it at:
const prompt = await pull<ChatPromptTemplate>(
  "hwchase17/openai-functions-agent"
);

const agent = await createOpenAIFunctionsAgent({
  llm: model,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools,
  verbose: true,
  handleParsingErrors:
    "Please try again, paying close attention to the allowed enum values",
});

console.log("Loaded agent.");

const input = `Set a reminder to renew our online property ads next week.`;

console.log(`Executing with input "${input}"...`);

const result = await agentExecutor.invoke({ input });

console.log({ result });

{
  result: {
    input: 'Set a reminder to renew our online property ads next week.',
    output: 'I have set a reminder for you to renew your online property ads on October 10th, 2022.'
  }
}

This is what the resulting trace looks like - note that the LLM retries before correctly choosing a matching enum:
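The retry is triggered because the tool's schema rejects any value outside the enum. A minimal, dependency-free sketch of that check (the helper is hypothetical; the allowed values mirror the schema above):

```typescript
// The allowed task_type values, mirroring the tool schema above.
const allowedTaskTypes: readonly string[] = ["In-Person Meeting", "Open House"];

// Hypothetical helper: mimics the enum validation the zod schema performs.
function isAllowedTaskType(value: string): boolean {
  return allowedTaskTypes.includes(value);
}

console.log(isAllowedTaskType("Reminder"));   // false - this is what prompts the retry
console.log(isAllowedTaskType("Open House")); // true
```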
