
Quickstart

In this quick start, we will use LLMs that are capable of function/tool calling to extract information from text.

info

Extraction using function/tool calling only works with models that support function/tool calling.

Set up

We will use the withStructuredOutput() method available on LLMs that are capable of function/tool calling, along with Zod, a popular TypeScript schema validation library.

Select a model, install the dependencies for it and set your API keys as environment variables. We’ll use Mistral as an example below:

yarn add @langchain/mistralai @langchain/core zod

You can also see this page for an overview of which models support different kinds of structured output.

The Schema

First, we need to describe what information we want to extract from the text.

For convenience, we’ll use Zod to define an example schema to extract personal information. You may also use JSON schema directly if you wish.

import { z } from "zod";

// Note that:
// 1. Each field is `optional` -- this allows the model to decline to extract it!
// 2. Each field uses the `.describe()` method -- this description is used by the LLM.
// Having a good description can help improve extraction results.
const personSchema = z
  .object({
    name: z.optional(z.string()).describe("The name of the person"),
    hair_color: z
      .optional(z.string())
      .describe("The color of the person's hair, if known"),
    height_in_meters: z
      .optional(z.string())
      .describe("Height measured in meters"),
  })
  .describe("Information about a person.");

There are two best practices when defining a schema:

  1. Document the attributes and the schema itself: this information is sent to the LLM and is used to improve the quality of information extraction.
  2. Do not force the LLM to make up information! Above we made the attributes optional, which allows the LLM to omit a field if it doesn't know the answer.
info

For best performance, document the schema well and make sure the model isn't forced to return results if there's no information to be extracted from the text.

The Extractor

Let’s create an information extractor using the schema we defined above.

import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

// Define a custom prompt to provide instructions and any additional context.
// 1) You can add examples into the prompt template to improve extraction quality.
// 2) Introduce additional parameters to take context into account (e.g., include
//    metadata about the document from which the text was extracted).

const SYSTEM_PROMPT_TEMPLATE = `You are an expert extraction algorithm.
Only extract relevant information from the text.
If you do not know the value of an attribute asked to extract, you may omit the attribute's value.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_PROMPT_TEMPLATE],
  // Please see the how-to about improving performance with
  // reference examples.
  // new MessagesPlaceholder("examples"),
  ["human", "{text}"],
]);
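
If you do enable the commented-out "examples" placeholder above, you would pass reference messages when invoking the chain. A hedged sketch of the idea (see the reference-examples how-to for the full pattern; the example pair below is our own invention):

import { HumanMessage, AIMessage } from "@langchain/core/messages";

// Hypothetical few-shot pair: an input text and the structured output
// we would want the model to produce for it.
const examples = [
  new HumanMessage("Maria is 1.65 meters tall and has red hair."),
  new AIMessage(
    JSON.stringify({ name: "Maria", hair_color: "red", height_in_meters: "1.65" })
  ),
];

// Later, once the chain is defined:
// await extractionRunnable.invoke({ text, examples });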

We need to use a model that supports function/tool calling.

Please review the chat model integration page for a list of models that can be used with this API.

import { ChatMistralAI } from "@langchain/mistralai";

const llm = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});

const extractionRunnable = prompt.pipe(llm.withStructuredOutput(personSchema));

Let’s test it out!

const text = "Alan Smith is 6 feet tall and has blond hair.";

await extractionRunnable.invoke({ text });
{ name: "Alan Smith", height_in_meters: "1.8288", hair_color: "blond" }
info

Extraction is Generative 🤯

LLMs are generative models, so they can do some pretty cool things like correctly extract the height of the person in meters even though it was provided in feet!

Multiple Entities

In most cases, you should be extracting a list of entities rather than a single entity.

This can be easily achieved with Zod by nesting schemas inside one another. Here's an example using the personSchema we defined above:

const dataSchema = z.object({
  people: z.array(personSchema),
});
info

Extraction might not be perfect here. See the Reference Examples how-to for improving extraction quality, and the Guidelines section for more tips!

const extractionRunnable = prompt.pipe(llm.withStructuredOutput(dataSchema));
const text =
  "My name is Jeff, my hair is black and i am 6 feet tall. Anna has the same color hair as me.";
await extractionRunnable.invoke({ text });
{
  people: [
    { name: "Jeff", hair_color: "black", height_in_meters: "1.8288" },
    { name: "Anna", hair_color: "black" }
  ]
}
tip

When the schema accommodates the extraction of multiple entities, it also allows the model to extract no entities when there's no relevant information in the text, by returning an empty list.

This is usually a good thing! It lets you mark attributes as required on an entity without forcing the model to detect that entity in every input.
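
For example, given text that mentions no people at all, a well-behaved model should return an empty list (a hypothetical input; exact model output may vary):

const noEntitiesText = "The solar system is large, but earth has only 1 moon.";
await extractionRunnable.invoke({ text: noEntitiesText });
// Expected shape: { people: [] }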

Next steps

Now that you understand the basics of extraction with LangChain, you're ready to proceed to the rest of the how-to guides:

  • Add Examples: Learn how to use reference examples to improve performance.
  • Handle Long Text: What should you do if the text does not fit into the context window of the LLM?
  • Handle Files: Examples of using LangChain document loaders and parsers to extract from files like PDFs.
  • Without function calling: Use a prompt based approach to extract with models that do not support tool/function calling.
  • Guidelines: Guidelines for getting good performance on extraction tasks.
