How to pass run time values to tools
This guide assumes familiarity with the following concepts:
- Chat models
- LangChain tools
- How to create tools
- How to use a model to call tools
This how-to guide uses models with native tool calling capability. You can find a list of all models that support tool calling.
You may need to bind values to a tool that are only known at runtime. For example, the tool logic may require using the ID of the user who made the request.
Most of the time, such values should not be controlled by the LLM. In fact, allowing the LLM to control the user ID may lead to a security risk.
Instead, the LLM should only control the parameters of the tool that are meant to be controlled by the LLM, while other parameters (such as user ID) should be fixed by the application logic.
Pick your chat model:
- Groq
- OpenAI
- Anthropic
- Google Gemini
- FireworksAI
- MistralAI
- VertexAI
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/groq 
yarn add @langchain/groq 
pnpm add @langchain/groq 
Add environment variables
GROQ_API_KEY=your-api-key
Instantiate the model
import { ChatGroq } from "@langchain/groq";
const llm = new ChatGroq({
  model: "llama-3.3-70b-versatile",
  temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/openai 
yarn add @langchain/openai 
pnpm add @langchain/openai 
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/anthropic 
yarn add @langchain/anthropic 
pnpm add @langchain/anthropic 
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620",
  temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/google-genai 
yarn add @langchain/google-genai 
pnpm add @langchain/google-genai 
Add environment variables
GOOGLE_API_KEY=your-api-key
Instantiate the model
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
const llm = new ChatGoogleGenerativeAI({
  model: "gemini-2.0-flash",
  temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/community 
yarn add @langchain/community 
pnpm add @langchain/community 
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const llm = new ChatFireworks({
  model: "accounts/fireworks/models/llama-v3p1-70b-instruct",
  temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/mistralai 
yarn add @langchain/mistralai 
pnpm add @langchain/mistralai 
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const llm = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/google-vertexai 
yarn add @langchain/google-vertexai 
pnpm add @langchain/google-vertexai 
Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";
const llm = new ChatVertexAI({
  model: "gemini-1.5-flash",
  temperature: 0
});
Using context variables
This functionality was added in @langchain/core>=0.3.10. If you are using the LangSmith SDK separately in your project, we also recommend upgrading to langsmith>=0.1.65. Please make sure your packages are up to date.
It also requires async_hooks support, which is not supported in all environments.
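If you are unsure whether your environment supports async_hooks, a quick probe like the following sketch can help; the dynamic import fails in environments where async_hooks is unavailable:
// Sketch: probe for async_hooks support before relying on context variables.
try {
  const { AsyncLocalStorage } = await import("node:async_hooks");
  console.log("async_hooks available:", typeof AsyncLocalStorage === "function");
} catch {
  console.log("async_hooks is not available in this environment.");
}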
One way to solve this problem is by using context variables. Context variables are a powerful feature that allows you to set values at a higher level of your application, then access them within any child runnables (such as tools) called from that level. They work outside of traditional scoping rules, so you don't need a direct reference to the declared variable to access its value.
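As a minimal sketch of the mechanics (independent of tools), an inner runnable can read a value set by its parent without receiving it as input:
import {
  setContextVariable,
  getContextVariable,
} from "@langchain/core/context";
import { RunnableLambda } from "@langchain/core/runnables";

// The inner runnable reads "myValue" from context rather than from its input.
const inner = RunnableLambda.from(async () => getContextVariable("myValue"));

const outer = RunnableLambda.from(async () => {
  // Visible to any child runnables invoked from here.
  setContextVariable("myValue", "hello");
  return inner.invoke({});
});

await outer.invoke({}); // "hello"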
Below, we declare a tool that updates a central userToPets state based on a context variable called userId. Note that this userId is not part of the tool schema or input:
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { getContextVariable } from "@langchain/core/context";
let userToPets: Record<string, string[]> = {};
const updateFavoritePets = tool(
  async (input) => {
    const userId = getContextVariable("userId");
    if (userId === undefined) {
      throw new Error(
        `No "userId" found in current context. Remember to call "setContextVariable('userId', value)";`
      );
    }
    userToPets[userId] = input.pets;
    return "update_favorite_pets called.";
  },
  {
    name: "update_favorite_pets",
    description: "add to the list of favorite pets.",
    schema: z.object({
      pets: z.array(z.string()),
    }),
  }
);
If you were to invoke the above tool before setting a context variable at a higher level, userId would be undefined:
await updateFavoritePets.invoke({ pets: ["cat", "dog"] });
Error: No "userId" found in current context. Remember to call "setContextVariable('userId', value)";
    at updateFavoritePets.name (evalmachine.<anonymous>:14:15)
    at /Users/jacoblee/langchain/langchainjs/langchain-core/dist/tools/index.cjs:329:33
    at AsyncLocalStorage.run (node:async_hooks:346:14)
    at AsyncLocalStorageProvider.runWithConfig (/Users/jacoblee/langchain/langchainjs/langchain-core/dist/singletons/index.cjs:58:24)
    at /Users/jacoblee/langchain/langchainjs/langchain-core/dist/tools/index.cjs:325:68
    at new Promise (<anonymous>)
    at DynamicStructuredTool.func (/Users/jacoblee/langchain/langchainjs/langchain-core/dist/tools/index.cjs:321:20)
    at DynamicStructuredTool._call (/Users/jacoblee/langchain/langchainjs/langchain-core/dist/tools/index.cjs:283:21)
    at DynamicStructuredTool.call (/Users/jacoblee/langchain/langchainjs/langchain-core/dist/tools/index.cjs:111:33)
    at async evalmachine.<anonymous>:3:22
Instead, set a context variable in a parent scope of where the tools are invoked:
import { setContextVariable } from "@langchain/core/context";
import { BaseChatModel } from "@langchain/core/language_models/chat_models";
import { RunnableLambda } from "@langchain/core/runnables";
const handleRunTimeRequestRunnable = RunnableLambda.from(
  async (params: { userId: string; query: string; llm: BaseChatModel }) => {
    const { userId, query, llm } = params;
    if (!llm.bindTools) {
      throw new Error("Language model does not support tools.");
    }
    // Set a context variable accessible to any child runnables called within this one.
    // You can also set context variables at top level that act as globals.
    setContextVariable("userId", userId);
    const tools = [updateFavoritePets];
    const llmWithTools = llm.bindTools(tools);
    const modelResponse = await llmWithTools.invoke(query);
    // For simplicity, skip checking the tool call's name field and assume
    // that the model is calling the "updateFavoritePets" tool
    if (modelResponse.tool_calls.length > 0) {
      return updateFavoritePets.invoke(modelResponse.tool_calls[0]);
    } else {
      return "No tool invoked.";
    }
  }
);
When our method invokes the tools, you will see that the tool properly accesses the previously set userId context variable and runs successfully:
await handleRunTimeRequestRunnable.invoke({
  userId: "brace",
  query: "my favorite animals are cats and parrots.",
  llm: llm,
});
ToolMessage {
  "content": "update_favorite_pets called.",
  "name": "update_favorite_pets",
  "additional_kwargs": {},
  "response_metadata": {},
  "tool_call_id": "call_vsD2DbSpDquOtmFlOtbUME6h"
}
It has additionally updated the userToPets object with a key matching the userId we passed, "brace":
console.log(userToPets);
{ brace: [ 'cats', 'parrots' ] }
Context variables are pinned by reference when set using setContextVariable. This means that if the value is passed by reference, and it is later retrieved using getContextVariable, any changes made to the retrieved value will be reflected in the original.
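For example, in this sketch with a hypothetical prefs object, mutating the retrieved value is visible through the original reference:
import {
  setContextVariable,
  getContextVariable,
} from "@langchain/core/context";

// Hypothetical object stored by reference.
const prefs = { theme: "dark" };
setContextVariable("prefs", prefs);

// Retrieve and mutate: the change is reflected in the original object.
const retrieved = getContextVariable("prefs") as { theme: string };
retrieved.theme = "light";
console.log(prefs.theme); // "light"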
Without context variables
If you are on an earlier version of core or in an environment that does not support async_hooks, you can use the following design pattern, which creates the tools dynamically at run time and binds the appropriate values to them.
The idea is to create the tool dynamically at request time, and bind to it the appropriate information. For example, this information may be the user ID as resolved from the request itself.
import { z } from "zod";
import { tool } from "@langchain/core/tools";
userToPets = {};
function generateToolsForUser(userId: string) {
  const updateFavoritePets = tool(
    async (input) => {
      userToPets[userId] = input.pets;
      return "update_favorite_pets called.";
    },
    {
      name: "update_favorite_pets",
      description: "add to the list of favorite pets.",
      schema: z.object({
        pets: z.array(z.string()),
      }),
    }
  );
  // You can declare and return additional tools as well:
  return [updateFavoritePets];
}
Verify that the tool works correctly:
const [updatePets] = generateToolsForUser("cobb");
await updatePets.invoke({ pets: ["tiger", "wolf"] });
console.log(userToPets);
{ cobb: [ 'tiger', 'wolf' ] }
Now define a function that generates the tools for a given user and binds them to the model:
import { BaseChatModel } from "@langchain/core/language_models/chat_models";
async function handleRunTimeRequest(
  userId: string,
  query: string,
  llm: BaseChatModel
): Promise<any> {
  if (!llm.bindTools) {
    throw new Error("Language model does not support tools.");
  }
  const tools = generateToolsForUser(userId);
  const llmWithTools = llm.bindTools(tools);
  return llmWithTools.invoke(query);
}
This code will allow the LLM to invoke the tools, but the LLM is unaware that a user ID even exists. You can see that userId is not among the params the LLM generates:
const aiMessage = await handleRunTimeRequest(
  "cobb",
  "my favorite pets are tigers and wolves.",
  llm
);
console.log(aiMessage.tool_calls[0]);
{
  name: 'update_favorite_pets',
  args: { pets: [ 'tigers', 'wolves' ] },
  type: 'tool_call',
  id: 'call_FBF4D51SkVK2clsLOQHX6wTv'
}
Chat models only output requests to invoke tools. They don't actually invoke the underlying tools.
To see how to invoke the tools, please refer to how to use a model to call tools.
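As a minimal sketch of what that looks like here, you can execute the request by passing the tool call back into the matching generated tool, just as in the context-variable example above:
// Sketch: execute the model's tool call with the tool generated for this user.
const [updatePetsTool] = generateToolsForUser("cobb");
if (aiMessage.tool_calls?.length) {
  const toolMessage = await updatePetsTool.invoke(aiMessage.tool_calls[0]);
  console.log(toolMessage.content); // "update_favorite_pets called."
}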