
OUTPUT_PARSING_FAILURE

An output parser was unable to handle model output as expected.

To illustrate this, let's say you have an output parser that expects a chat model to output JSON surrounded by a markdown code fence (triple backticks). Here is an example of model output the parser could handle:

AIMessage {
content: "```\n{\"foo\": \"bar\"}\n```"
}

Internally, our output parser might try to strip out the markdown fence and surrounding newlines, then run JSON.parse() on what remains.
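As a rough sketch (not LangChain's actual implementation), such a parser might look like this in TypeScript; parseFencedJson is a hypothetical helper used only for illustration:

// Hypothetical helper, for illustration only; not LangChain's real parser.
function parseFencedJson(text: string): unknown {
  // Remove a leading ``` fence (optionally tagged "json") and a trailing ```
  // fence, plus surrounding whitespace, then parse the remainder as JSON.
  const stripped = text
    .replace(/^\s*```(?:json)?\s*/, "")
    .replace(/\s*```\s*$/, "")
    .trim();
  return JSON.parse(stripped); // throws a SyntaxError if the JSON is malformed
}

parseFencedJson('```\n{"foo": "bar"}\n```'); // { foo: "bar" }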

Suppose the chat model instead generates output containing malformed JSON, like this:

AIMessage {
content: "```\n{\"foo\":\n```"
}

When our output parser attempts to parse this, the JSON.parse() call will fail.
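Running the hypothetical sketch above on that content makes the failure concrete:

// After stripping the fence, only the truncated string '{"foo":' remains.
parseFencedJson('```\n{"foo":\n```');
// SyntaxError: Unexpected end of JSON input (exact message varies by runtime)

LangChain's own parsers surface this kind of failure as the OUTPUT_PARSING_FAILURE error described on this page.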

Note that some prebuilt constructs, such as legacy LangChain agents and chains, may use output parsers internally, so you may see this error even if you're not explicitly instantiating or using an output parser yourself.

Troubleshooting

The following may help resolve this error:

  • Consider using tool calling or another structured output technique, if your model supports it, to reliably produce parseable values without a separate output parser (see the first sketch after this list).
    • If you are using a prebuilt chain or agent, use LangGraph to compose your logic explicitly instead.
  • Add more precise formatting instructions to your prompt. In the example above, adding "You must always return valid JSON fenced by a markdown code block. Do not return any additional text." to your prompt may help steer the model toward returning the expected format.
  • If you are using a smaller or less capable model, try using a more capable one.
  • Add LLM-powered retries, feeding the parse error back to the model so it can correct its own output (see the second sketch after this list).
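Here is a minimal sketch of the first suggestion, assuming @langchain/openai and zod are installed, OPENAI_API_KEY is set, and your model supports withStructuredOutput; the schema, model name, and prompt are placeholders:

import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

// Describe the shape you want back instead of parsing raw text yourself.
const schema = z.object({
  foo: z.string().describe("the value to extract"),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const structuredModel = model.withStructuredOutput(schema);

// Returns an already-parsed object such as { foo: "bar" }; no string parsing involved.
const result = await structuredModel.invoke("Return foo set to 'bar'.");
console.log(result);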
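And a minimal sketch of the last suggestion, LLM-powered retries, written as a hand-rolled loop around the hypothetical parseFencedJson helper from earlier; LangChain also ships utilities for this pattern, but the explicit loop keeps the idea visible:

import { ChatOpenAI } from "@langchain/openai";
import { BaseMessage, HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

async function invokeWithRetries(prompt: string, maxAttempts = 3): Promise<unknown> {
  let messages: BaseMessage[] = [new HumanMessage(prompt)];
  let lastError: unknown = undefined;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const response = await model.invoke(messages);
    try {
      // parseFencedJson is the hypothetical parser sketched earlier on this page.
      // We assume the response content is a plain string here.
      return parseFencedJson(response.content as string);
    } catch (error) {
      lastError = error;
      // Show the model its own output and the parse error so it can correct itself.
      messages = [
        ...messages,
        response,
        new HumanMessage(
          `Your previous reply failed to parse: ${String(error)}. ` +
            "Return only valid JSON inside a markdown code block."
        ),
      ];
    }
  }
  throw lastError;
}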
