Plan and execute

Compatibility

This agent currently only supports Chat Models.

Plan and execute agents accomplish an objective by first planning what to do, then executing the subtasks. This idea is largely inspired by BabyAGI and the "Plan-and-Solve" paper.

The planning is almost always done by an LLM.

The execution is usually done by a separate agent (equipped with tools).

This agent uses a two-step process:

  1. First, the agent uses an LLM to create a plan to answer the query with clear steps.
  2. Once it has a plan, it uses an embedded traditional Action Agent to solve each step.

The idea is that the planning step keeps the LLM more "on track" by breaking up a larger task into simpler subtasks. However, this method requires more individual LLM queries and has higher latency compared to Action Agents.
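To make the two steps concrete, here is a minimal conceptual sketch of the plan-then-execute loop written directly against ChatOpenAI. This is not the library's internal implementation: the planAndExecute function and its prompts are illustrative only, each step is solved with a bare LLM call instead of an embedded tool-using agent, and an OPENAI_API_KEY environment variable is assumed.

import { ChatOpenAI } from "@langchain/openai";

// One model for planning, one for executing individual steps.
const plannerLlm = new ChatOpenAI({ temperature: 0, model: "gpt-3.5-turbo" });
const stepLlm = new ChatOpenAI({ temperature: 0, model: "gpt-3.5-turbo" });

async function planAndExecute(objective: string): Promise<string> {
  // Step 1: ask the planner LLM for a numbered list of subtasks.
  const plan = await plannerLlm.invoke(
    `Break this objective into a short numbered list of steps:\n${objective}`
  );
  const steps = String(plan.content)
    .split("\n")
    .filter((line) => /^\d+\./.test(line.trim()));

  // Step 2: solve each subtask in order, carrying earlier results forward.
  let scratchpad = "";
  for (const step of steps) {
    const result = await stepLlm.invoke(
      `Objective: ${objective}\nResults so far: ${scratchpad}\nCurrent step: ${step}`
    );
    scratchpad += `\n${step} -> ${result.content}`;
  }
  return scratchpad;
}

console.log(await planAndExecute("What is 3 to the power of 5, divided by 9?"));

In PlanAndExecuteAgentExecutor itself, each step is handed to the embedded Action Agent equipped with tools rather than a plain LLM call, but the overall loop has the same shape.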

With PlanAndExecuteAgentExecutor

info

This is an experimental chain and is not recommended for production use yet.

npm install @langchain/openai @langchain/community langchain
import { Calculator } from "@langchain/community/tools/calculator";
import { ChatOpenAI } from "@langchain/openai";
import { PlanAndExecuteAgentExecutor } from "langchain/experimental/plan_and_execute";
import { SerpAPI } from "@langchain/community/tools/serpapi";

// Tools the embedded executor agent can call while working through the plan.
// SerpAPI reads its API key from the SERPAPI_API_KEY environment variable.
const tools = [new Calculator(), new SerpAPI()];

// Chat model used to power the agent.
const model = new ChatOpenAI({
  temperature: 0,
  model: "gpt-3.5-turbo",
  verbose: true,
});

const executor = await PlanAndExecuteAgentExecutor.fromLLMAndTools({
  llm: model,
  tools,
});

const result = await executor.invoke({
  input: `Who is the current president of the United States? What is their current age raised to the second power?`,
});

console.log({ result });

