OpenAI Tools

These output parsers extract tool calls from OpenAI's function calling API responses. This means they are only usable with models that support function calling, and specifically the latest tools and tool_choice parameters. We recommend familiarizing yourself with function calling before reading this guide.

There are a few different variants of output parsers:

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const properties = {
  setup: {
    type: "string",
    description: "The setup for the joke",
  },
  punchline: {
    type: "string",
    description: "The joke's punchline",
  },
};

const tool = {
  type: "function" as const,
  function: {
    name: "joke",
    description: "Joke to tell user.",
    parameters: {
      $schema: "http://json-schema.org/draft-07/schema#",
      title: "Joke",
      type: "object",
      properties,
      required: ["setup", "punchline"],
    },
  },
};

const llm = new ChatOpenAI();

// Use `.bind` to attach the tool to the model
const llmWithTools = llm.bind({
  tools: [tool],
  // Optionally, we can pass the tool to the `tool_choice` parameter to
  // force the model to call the tool.
  tool_choice: tool,
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are the funniest comedian, tell the user a joke about their topic.",
  ],
  ["human", "Topic: {topic}"],
]);

Now we can use LCEL to pipe our prompt and LLM together.

const chain = prompt.pipe(llmWithTools);
const result = await chain.invoke({ topic: "Large Language Models" });
result.additional_kwargs;
{
  function_call: undefined,
  tool_calls: [
    {
      id: "call_vo9oYcHXKWzS6bJ4bK7Eghmz",
      type: "function",
      function: {
        name: "joke",
        arguments: "{\n" +
          '  "setup": "Why did the large language model go on a diet?",\n' +
          '  "punchline": "It wanted to reduce i'... 17 more characters
      }
    }
  ]
}
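Notice that the model returns each tool call's arguments as a raw JSON string. What the parsers below automate is pulling those calls out of `additional_kwargs` and parsing that string. A minimal sketch in plain TypeScript (no LangChain imports; the sample `toolCalls` array below is illustrative, mirroring the shape of the output above):

```typescript
// Illustrative stand-in for the `tool_calls` array in `additional_kwargs`.
const toolCalls = [
  {
    id: "call_vo9oYcHXKWzS6bJ4bK7Eghmz",
    type: "function",
    function: {
      name: "joke",
      arguments:
        '{"setup": "Why did the large language model go on a diet?", "punchline": "It wanted to shed some excess bytes!"}',
    },
  },
];

// Each call carries its arguments as a JSON string, so parse each one
// into `{ type, args }`, using the function name as the type.
const parsed = toolCalls.map((toolCall) => ({
  type: toolCall.function.name,
  args: JSON.parse(toolCall.function.arguments),
}));

console.log(parsed);
```

This is roughly the transformation `JsonOutputToolsParser` performs for you, with the added benefit of handling streamed partial JSON.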

Inspect the LangSmith trace from the call above

JsonOutputToolsParser

import { JsonOutputToolsParser } from "langchain/output_parsers";

const outputParser = new JsonOutputToolsParser();
const chain = prompt.pipe(llmWithTools).pipe(outputParser);
await chain.invoke({ topic: "Large Language Models" });
[
  {
    type: "joke",
    args: {
      setup: "Why did the large language model go to therapy?",
      punchline: "It had too many layers!"
    }
  }
]

Inspect the LangSmith trace with the JsonOutputToolsParser

JsonOutputKeyToolsParser

This parser merely extracts a single key from the returned response. It is useful when you are passing in a single tool and just want its arguments.

import { JsonOutputKeyToolsParser } from "langchain/output_parsers";

const outputParser = new JsonOutputKeyToolsParser({ keyName: "joke" });
const chain = prompt.pipe(llmWithTools).pipe(outputParser);
await chain.invoke({ topic: "Large Language Models" });
[
  {
    setup: "Why did the large language model go to therapy?",
    punchline: "It had too many layers!"
  }
]

Inspect the LangSmith trace with the JsonOutputKeyToolsParser

Some LLMs support calling multiple tools in a single response. Because of this, the result of invoking JsonOutputKeyToolsParser is always an array. If you would like only a single result to be returned, you can specify returnSingle in the constructor.

const outputParserSingle = new JsonOutputKeyToolsParser({
  keyName: "joke",
  returnSingle: true,
});
const chain = prompt.pipe(llmWithTools);
const response = await chain.invoke({ topic: "Large Language Models" });
await outputParserSingle.invoke(response);
{
  setup: "Why did the large language model go on a diet?",
  punchline: "It wanted to shed some excess bytes!"
}

See the LangSmith trace from this output parser.
