
SearchApi Loader

This guide shows how to use SearchApi with LangChain to load web search results.

Overview

SearchApi is a real-time API that gives developers access to results from a variety of search engines, including Google Search, Google News, Google Scholar, YouTube Transcripts, and any other engine listed in its documentation. It lets developers and businesses scrape and extract structured data directly from the result pages of these engines, providing valuable insights for a range of use cases.

The SearchApiLoader wraps this API and simplifies loading and processing web search results, returning them as documents that plug directly into the rest of LangChain.

Setup

You'll need to sign up and retrieve your SearchApi API key.
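
The loader takes the key as its apiKey option. Rather than hardcoding it, a common pattern is to read it from an environment variable, as in the sketch below (the SEARCHAPI_API_KEY name is an assumption used for illustration; the loader does not read any environment variable on its own):

// Read the key from the environment instead of hardcoding it.
// SEARCHAPI_API_KEY is an assumed variable name; the loader only
// sees whatever you pass as the apiKey option.
const apiKey = process.env.SEARCHAPI_API_KEY;
if (!apiKey) {
  throw new Error("SEARCHAPI_API_KEY is not set");
}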

Usage

Here's an example of how to use the SearchApiLoader:

npm install @langchain/openai @langchain/community @langchain/core @langchain/textsplitters langchain
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { TokenTextSplitter } from "@langchain/textsplitters";
import { SearchApiLoader } from "@langchain/community/document_loaders/web/searchapi";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";

// Initialize the model and embeddings
const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
});
const embeddings = new OpenAIEmbeddings();
const apiKey = "Your SearchApi API key";

// Define your question and query
const question = "Your question here";
const query = "Your query here";

// Use SearchApiLoader to load web search results
const loader = new SearchApiLoader({ q: query, apiKey, engine: "google" });
const docs = await loader.load();

// Split the loaded documents into chunks that fit the model's context window
const textSplitter = new TokenTextSplitter({
  chunkSize: 800,
  chunkOverlap: 100,
});

const splitDocs = await textSplitter.splitDocuments(docs);

// Use MemoryVectorStore to store the split documents in memory
const vectorStore = await MemoryVectorStore.fromDocuments(
  splitDocs,
  embeddings
);

// Prompt the model to answer using only the retrieved context
const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user's questions based on the below context:\n\n{context}",
  ],
  ["human", "{input}"],
]);

// Stuff the retrieved documents into the prompt's {context} slot
const combineDocsChain = await createStuffDocumentsChain({
  llm,
  prompt: questionAnsweringPrompt,
});

// Wire the vector store retriever and the document chain together
const chain = await createRetrievalChain({
  retriever: vectorStore.asRetriever(),
  combineDocsChain,
});

const res = await chain.invoke({
  input: question,
});

console.log(res.answer);

In this example, the SearchApiLoader loads web search results, which are split into chunks and stored in memory with MemoryVectorStore. A retrieval chain then pulls the most relevant chunks from the vector store and answers the question based on them, showing how the SearchApiLoader streamlines loading and processing web search results.
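
The engine option is not limited to "google". The sketch below targets Google News instead; the "google_news" engine name is an assumption here, so check SearchApi's documentation for the exact engine identifiers it supports:

import { SearchApiLoader } from "@langchain/community/document_loaders/web/searchapi";

// A minimal standalone sketch that loads Google News results.
// The "google_news" engine value is assumed; verify it against
// SearchApi's documentation before relying on it.
const newsLoader = new SearchApiLoader({
  q: "large language models",
  apiKey: "Your SearchApi API key",
  engine: "google_news",
});

const newsDocs = await newsLoader.load();
console.log(`Loaded ${newsDocs.length} documents`);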
