Tavily Extract
Tavily is a search engine built specifically for AI agents (LLMs), delivering real-time, accurate, and factual results at speed. Tavily offers two key endpoints, one of which is Extract, which returns the raw content extracted from a URL.
This guide provides a quick overview for getting started with the Tavily Extract tool. For a complete breakdown of its features, see the API reference.
Overview
Integration details
| Class | Package | PY support |
| --- | --- | --- |
| TavilyExtract | @langchain/tavily | ✅ |
Setup
The integration lives in the @langchain/tavily package, which you can install as shown below:

```bash
npm i @langchain/tavily @langchain/core
# or
yarn add @langchain/tavily @langchain/core
# or
pnpm add @langchain/tavily @langchain/core
```
Credentials
Set up an API key here and set it as an environment variable named TAVILY_API_KEY:

```typescript
process.env.TAVILY_API_KEY = "YOUR_API_KEY";
```
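In a real project you would typically load the key from a .env file rather than hard-coding it. A minimal sketch, assuming you use the dotenv package (an extra dependency, not part of @langchain/tavily):

```typescript
// Loads TAVILY_API_KEY (and anything else in .env) into process.env
// before the tool is constructed.
import "dotenv/config";
```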
It’s also helpful (but not needed) to set up LangSmith for best-in-class observability:
```typescript
process.env.LANGSMITH_TRACING = "true";
process.env.LANGSMITH_API_KEY = "your-api-key";
```
Instantiation
You can import and instantiate an instance of the TavilyExtract tool like this:

```typescript
import { TavilyExtract } from "@langchain/tavily";

const tool = new TavilyExtract({
  extractDepth: "basic",
  includeImages: false,
});
```
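If you prefer not to rely on the environment variable, you can pass the key to the constructor directly. A sketch, assuming the option is named tavilyApiKey, mirroring other tools in the @langchain/tavily package (an assumption, not confirmed above):

```typescript
// Assumption: tavilyApiKey is the constructor option name. If omitted,
// the tool falls back to the TAVILY_API_KEY environment variable.
const toolWithExplicitKey = new TavilyExtract({
  tavilyApiKey: process.env.TAVILY_API_KEY,
  extractDepth: "basic",
  includeImages: false,
});
```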
Invocation
Invoke directly with args
The Tavily Extract tool accepts the following arguments during invocation:
- urls (required): A list of URLs to extract content from.

Both extractDepth and includeImages can also be set during invocation:
```typescript
await tool.invoke({
  urls: ["https://en.wikipedia.org/wiki/Lionel_Messi"],
});
```
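Since both options are accepted at invocation time, you can override the constructor defaults per call. A sketch, assuming "advanced" as the deeper extraction depth Tavily documents alongside "basic":

```typescript
// Per-call overrides of the defaults set in the constructor.
await tool.invoke({
  urls: ["https://en.wikipedia.org/wiki/Lionel_Messi"],
  extractDepth: "advanced", // deeper extraction than the default "basic"
  includeImages: true, // also return image URLs found on the pages
});
```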
Invoke with ToolCall
We can also invoke the tool with a model-generated ToolCall, in which case a ToolMessage will be returned:
```typescript
// This is usually generated by a model, but we'll create a tool call directly for demo purposes.
const modelGeneratedToolCall = {
  args: { urls: ["https://en.wikipedia.org/wiki/Lionel_Messi"] },
  id: "1",
  name: tool.name,
  type: "tool_call",
};

await tool.invoke(modelGeneratedToolCall);
```
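A sketch of inspecting what comes back, assuming the shapes above: the result is a ToolMessage whose tool_call_id ties it back to the originating call:

```typescript
import { ToolMessage } from "@langchain/core/messages";

const toolMessage = (await tool.invoke(modelGeneratedToolCall)) as ToolMessage;
console.log(toolMessage.tool_call_id); // "1", matching the id on the tool call
console.log(toolMessage.content); // the extracted page content
```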
Chaining
We can use our tool in a chain by first binding it to a tool-calling model and then calling it:
Pick your chat model:
- Groq
- OpenAI
- Anthropic
- Google Gemini
- FireworksAI
- MistralAI
- VertexAI
Install dependencies

```bash
npm i @langchain/groq
# or
yarn add @langchain/groq
# or
pnpm add @langchain/groq
```

Add environment variables

```bash
GROQ_API_KEY=your-api-key
```

Instantiate the model

```typescript
import { ChatGroq } from "@langchain/groq";

const llm = new ChatGroq({
  model: "llama-3.3-70b-versatile",
  temperature: 0,
});
```
Install dependencies

```bash
npm i @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```

Add environment variables

```bash
OPENAI_API_KEY=your-api-key
```

Instantiate the model

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
```
Install dependencies

```bash
npm i @langchain/anthropic
# or
yarn add @langchain/anthropic
# or
pnpm add @langchain/anthropic
```

Add environment variables

```bash
ANTHROPIC_API_KEY=your-api-key
```

Instantiate the model

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620",
  temperature: 0,
});
```
Install dependencies

```bash
npm i @langchain/google-genai
# or
yarn add @langchain/google-genai
# or
pnpm add @langchain/google-genai
```

Add environment variables

```bash
GOOGLE_API_KEY=your-api-key
```

Instantiate the model

```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const llm = new ChatGoogleGenerativeAI({
  model: "gemini-2.0-flash",
  temperature: 0,
});
```
Install dependencies

```bash
npm i @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```

Add environment variables

```bash
FIREWORKS_API_KEY=your-api-key
```

Instantiate the model

```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const llm = new ChatFireworks({
  model: "accounts/fireworks/models/llama-v3p1-70b-instruct",
  temperature: 0,
});
```
Install dependencies

```bash
npm i @langchain/mistralai
# or
yarn add @langchain/mistralai
# or
pnpm add @langchain/mistralai
```

Add environment variables

```bash
MISTRAL_API_KEY=your-api-key
```

Instantiate the model

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const llm = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});
```
Install dependencies

```bash
npm i @langchain/google-vertexai
# or
yarn add @langchain/google-vertexai
# or
pnpm add @langchain/google-vertexai
```

Add environment variables

```bash
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
```

Instantiate the model

```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";

const llm = new ChatVertexAI({
  model: "gemini-1.5-flash",
  temperature: 0,
});
```
```typescript
import { HumanMessage } from "@langchain/core/messages";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableLambda } from "@langchain/core/runnables";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  ["placeholder", "{messages}"],
]);

const llmWithTools = llm.bindTools([tool]);

const chain = prompt.pipe(llmWithTools);

const toolChain = RunnableLambda.from(async (userInput: string, config) => {
  const humanMessage = new HumanMessage(userInput);
  // First pass: let the model decide which tool calls to make.
  const aiMsg = await chain.invoke(
    {
      messages: [humanMessage],
    },
    config
  );
  // Execute every tool call the model produced.
  const toolMsgs = await tool.batch(aiMsg.tool_calls, config);
  // Second pass: give the model the tool results so it can answer.
  return chain.invoke(
    {
      messages: [humanMessage, aiMsg, ...toolMsgs],
    },
    config
  );
});

const toolChainResult = await toolChain.invoke(
  "['https://en.wikipedia.org/wiki/Albert_Einstein','https://en.wikipedia.org/wiki/Theoretical_physics']"
);

const { tool_calls, content } = toolChainResult;

console.log(
  "AIMessage",
  JSON.stringify(
    {
      tool_calls,
      content,
    },
    null,
    2
  )
);
```
Agents
For guides on how to use LangChain tools in agents, see the LangGraph.js docs.
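As a quick taste, here is a minimal sketch using the prebuilt createReactAgent helper from @langchain/langgraph (an extra dependency; see the LangGraph.js docs for the full treatment):

```typescript
import { createReactAgent } from "@langchain/langgraph/prebuilt";

// Wires the chat model from above and the Tavily Extract tool into a
// ReAct-style agent that decides on its own when to call the tool.
const agent = createReactAgent({ llm, tools: [tool] });

const result = await agent.invoke({
  messages: [
    {
      role: "user",
      content: "Summarize https://en.wikipedia.org/wiki/Lionel_Messi",
    },
  ],
});
```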
API reference
For detailed documentation of all Tavily Extract API features and configurations, head to the API reference:
https://docs.tavily.com/documentation/api-reference/endpoint/extract
Related
Tool conceptual guide
Tool how-to guides