Tracking token usage

This notebook goes over how to track your token usage for specific calls. This is currently only implemented for the OpenAI API.

Here's an example of tracking token usage for a single LLM call:

```bash
npm install @langchain/openai
```
```typescript
import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI({
  model: "gpt-4-turbo",
});

const res = await chatModel.invoke("Tell me a joke.");

console.log(res.response_metadata);

/*
  {
    tokenUsage: { completionTokens: 15, promptTokens: 12, totalTokens: 27 },
    finish_reason: 'stop'
  }
*/
```


If this model is passed to a chain or agent that calls it multiple times, it will log token usage metadata for each individual call.
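To get a running total across those calls, you can sum the `tokenUsage` objects yourself. Here is a minimal sketch: the `addUsage` helper is hypothetical (not part of LangChain), and it assumes each call's `response_metadata.tokenUsage` has the shape shown above.

```typescript
// Shape of the tokenUsage object in response_metadata, as shown above.
interface TokenUsage {
  completionTokens: number;
  promptTokens: number;
  totalTokens: number;
}

// Hypothetical helper: combines the usage of two calls into one total.
function addUsage(total: TokenUsage, next: TokenUsage): TokenUsage {
  return {
    completionTokens: total.completionTokens + next.completionTokens,
    promptTokens: total.promptTokens + next.promptTokens,
    totalTokens: total.totalTokens + next.totalTokens,
  };
}

// Example: usage captured from two calls (values illustrative only).
const calls: TokenUsage[] = [
  { completionTokens: 15, promptTokens: 12, totalTokens: 27 },
  { completionTokens: 20, promptTokens: 30, totalTokens: 50 },
];

const grandTotal = calls.reduce(addUsage, {
  completionTokens: 0,
  promptTokens: 0,
  totalTokens: 0,
});

console.log(grandTotal);
// { completionTokens: 35, promptTokens: 42, totalTokens: 77 }
```

In a real chain or agent run you would collect each call's `res.response_metadata.tokenUsage` into the `calls` array instead of hard-coding it.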
