
Structured Output with OpenAI functions

Compatibility

Must be used with an OpenAI functions model.

This example shows how to leverage OpenAI functions to output objects that match a given format for any given input. It works by converting the input schema into an OpenAI function, then forcing the model to call that function so that the response arrives in the correct format.

You can use it where you would use a chain with a StructuredOutputParser, but it doesn't require any special instructions stuffed into the prompt. It will also more reliably output structured results with higher temperature values, making it better suited for more creative applications.

Note: The outermost layer of the input schema must be an object.
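To illustrate the note above, here is a sketch in plain JSON Schema terms (the names are illustrative): a top-level array is not accepted as function parameters, but wrapping it in an object property makes it valid.

```typescript
// Invalid as a top-level schema for an OpenAI function: the root is an array.
const arraySchema = {
  type: "array",
  items: { type: "string" },
};

// Valid: the array is nested under a property of a top-level object.
const wrappedSchema = {
  type: "object",
  properties: {
    items: arraySchema,
  },
  required: ["items"],
};
```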

Usage

Though you can pass in JSON Schema directly, you can also define your output schema using the popular Zod schema library and convert it with the zod-to-json-schema package. To do so, install the following packages:

npm install zod zod-to-json-schema

Format Text into Structured Data

npm install @langchain/openai
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

import { ChatOpenAI } from "@langchain/openai";
import { JsonOutputFunctionsParser } from "langchain/output_parsers";
import {
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  HumanMessagePromptTemplate,
} from "@langchain/core/prompts";

const zodSchema = z.object({
  foods: z
    .array(
      z.object({
        name: z.string().describe("The name of the food item"),
        healthy: z.boolean().describe("Whether the food is good for you"),
        color: z.string().optional().describe("The color of the food"),
      })
    )
    .describe("An array of food items mentioned in the text"),
});

const prompt = new ChatPromptTemplate({
  promptMessages: [
    SystemMessagePromptTemplate.fromTemplate(
      "List all food items mentioned in the following text."
    ),
    HumanMessagePromptTemplate.fromTemplate("{inputText}"),
  ],
  inputVariables: ["inputText"],
});

const llm = new ChatOpenAI({ model: "gpt-3.5-turbo-0613", temperature: 0 });

// Binding "function_call" below makes the model always call the specified function.
// If you want to allow the model to call functions selectively, omit it.
const functionCallingModel = llm.bind({
  functions: [
    {
      name: "output_formatter",
      description: "Should always be used to properly format output",
      parameters: zodToJsonSchema(zodSchema),
    },
  ],
  function_call: { name: "output_formatter" },
});

const outputParser = new JsonOutputFunctionsParser();

const chain = prompt.pipe(functionCallingModel).pipe(outputParser);

const response = await chain.invoke({
  inputText: "I like apples, bananas, oxygen, and french fries.",
});

console.log(JSON.stringify(response, null, 2));

/*
{
  "output": {
    "foods": [
      {
        "name": "apples",
        "healthy": true,
        "color": "red"
      },
      {
        "name": "bananas",
        "healthy": true,
        "color": "yellow"
      },
      {
        "name": "french fries",
        "healthy": false,
        "color": "golden"
      }
    ]
  }
}
*/


Generate a Database Record

Though we recommend the Expression Language example above, here's an example of using the createStructuredOutputChainFromZod convenience method to return a classic LLMChain:

import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";
import { createStructuredOutputChainFromZod } from "langchain/chains/openai_functions";
import {
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  HumanMessagePromptTemplate,
} from "@langchain/core/prompts";

const zodSchema = z.object({
  name: z.string().describe("Human name"),
  surname: z.string().describe("Human surname"),
  age: z.number().describe("Human age"),
  birthplace: z.string().describe("Where the human was born"),
  appearance: z.string().describe("Human appearance description"),
  shortBio: z.string().describe("Short bio description"),
  university: z.string().optional().describe("University name if attended"),
  gender: z.string().describe("Gender of the human"),
  interests: z
    .array(z.string())
    .describe("A JSON array of strings describing the human's interests"),
});

const prompt = new ChatPromptTemplate({
  promptMessages: [
    SystemMessagePromptTemplate.fromTemplate(
      "Generate details of a hypothetical person."
    ),
    HumanMessagePromptTemplate.fromTemplate("Additional context: {inputText}"),
  ],
  inputVariables: ["inputText"],
});

const llm = new ChatOpenAI({ model: "gpt-3.5-turbo-0613", temperature: 1 });

const chain = createStructuredOutputChainFromZod(zodSchema, {
  prompt,
  llm,
  outputKey: "person",
});

const response = await chain.invoke({
  inputText:
    "Please generate a diverse group of people, but don't generate anyone who likes video games.",
});

console.log(JSON.stringify(response, null, 2));

/*
{
  "person": {
    "name": "Sophia",
    "surname": "Martinez",
    "age": 32,
    "birthplace": "Mexico City, Mexico",
    "appearance": "Sophia has long curly brown hair and hazel eyes. She has a warm smile and a contagious laugh.",
    "shortBio": "Sophia is a passionate environmentalist who is dedicated to promoting sustainable living. She believes in the power of individual actions to create a positive impact on the planet.",
    "university": "Stanford University",
    "gender": "Female",
    "interests": [
      "Hiking",
      "Yoga",
      "Cooking",
      "Reading"
    ]
  }
}
*/


