Apify Dataset
This guide shows how to use Apify with LangChain to load documents from an Apify Dataset.
Overview
Apify is a cloud platform for web scraping and data extraction, which provides an ecosystem of more than two thousand ready-made apps called Actors for various web scraping, crawling, and data extraction use cases.
This guide shows how to load documents from an Apify Dataset — a scalable, append-only storage designed for structured web scraping results, such as a list of products or Google SERPs, which can also be exported to formats like JSON, CSV, or Excel.
Datasets are typically used to save the results of Actors. For example, the Website Content Crawler Actor deeply crawls websites such as documentation, knowledge bases, help centers, or blogs, and stores the text content of the webpages in a dataset, from which you can load the documents into a vector database for information retrieval. Another example is the RAG Web Browser Actor, which queries Google Search, scrapes the top N pages from the results, and returns the cleaned content in Markdown format for further processing by a large language model (see the sketch after the first example below).
Setup
You'll first need to install the official Apify client:
npm install apify-client
# or
yarn add apify-client
# or
pnpm add apify-client
This example also uses HNSWLib as a vector store, along with the LangChain OpenAI and community integration packages:
npm install hnswlib-node @langchain/openai @langchain/community @langchain/core
# or
yarn add hnswlib-node @langchain/openai @langchain/community @langchain/core
# or
pnpm add hnswlib-node @langchain/openai @langchain/community @langchain/core
You'll also need to sign up and retrieve your Apify API token.
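If you prefer not to hard-code the token as in the examples below, a minimal sketch for reading it from an environment variable instead (the variable name APIFY_API_TOKEN matches the examples that follow):
// Read the Apify token from the environment rather than hard-coding it.
const APIFY_API_TOKEN = process.env.APIFY_API_TOKEN;
if (!APIFY_API_TOKEN) {
  throw new Error("Set the APIFY_API_TOKEN environment variable first.");
}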
Usage
From a New Dataset (Crawl a Website and Store the Data in an Apify Dataset)
If you don't already have an existing dataset on the Apify platform, you'll need to initialize the document loader by calling an Actor and waiting for the results.
In the example below, we use the Website Content Crawler Actor to crawl the LangChain documentation, store the results in an Apify dataset, and then load that dataset using the ApifyDatasetLoader.
For this demonstration, we'll use the fast Cheerio crawler type and limit the number of crawled pages to 10.
Note: Running the Website Content Crawler may take some time, depending on the size of the website. For large sites, it can take several hours or even days!
Here's an example:
import { ApifyDatasetLoader } from "@langchain/community/document_loaders/web/apify_dataset";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { Document } from "@langchain/core/documents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";
const APIFY_API_TOKEN = "YOUR-APIFY-API-TOKEN"; // or set as process.env.APIFY_API_TOKEN
const OPENAI_API_KEY = "YOUR-OPENAI-API-KEY"; // or set as process.env.OPENAI_API_KEY
/*
 * datasetMappingFunction is a function that maps your Apify dataset format to LangChain documents.
 * In the below example, the Apify dataset format looks like this:
 * {
 *   "url": "https://apify.com",
 *   "text": "Apify is the best web scraping and automation platform."
 * }
 */
const loader = await ApifyDatasetLoader.fromActorCall(
  "apify/website-content-crawler",
  {
    maxCrawlPages: 10,
    crawlerType: "cheerio",
    startUrls: [{ url: "https://js.langchain.com/docs/" }],
  },
  {
    datasetMappingFunction: (item) =>
      new Document({
        pageContent: (item.text || "") as string,
        metadata: { source: item.url },
      }),
    clientOptions: {
      token: APIFY_API_TOKEN,
    },
  }
);
const docs = await loader.load();
const vectorStore = await HNSWLib.fromDocuments(
  docs,
  new OpenAIEmbeddings({ apiKey: OPENAI_API_KEY })
);

const model = new ChatOpenAI({
  temperature: 0,
  apiKey: OPENAI_API_KEY,
});

const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user's questions based on the below context:\n\n{context}",
  ],
  ["human", "{input}"],
]);

const combineDocsChain = await createStuffDocumentsChain({
  llm: model,
  prompt: questionAnsweringPrompt,
});

const chain = await createRetrievalChain({
  retriever: vectorStore.asRetriever(),
  combineDocsChain,
});

const res = await chain.invoke({ input: "What is LangChain?" });
console.log(res.answer);
console.log(res.context.map((doc) => doc.metadata.source));

/*
  LangChain is a framework for developing applications powered by language models.

  [
    'https://js.langchain.com/docs/',
    'https://js.langchain.com/docs/modules/chains/',
    'https://js.langchain.com/docs/modules/chains/llmchain/',
    'https://js.langchain.com/docs/category/functions-4'
  ]
*/
API Reference:
- ApifyDatasetLoader from @langchain/community/document_loaders/web/apify_dataset
- HNSWLib from @langchain/community/vectorstores/hnswlib
- OpenAIEmbeddings from @langchain/openai
- ChatOpenAI from @langchain/openai
- Document from @langchain/core/documents
- ChatPromptTemplate from @langchain/core/prompts
- createStuffDocumentsChain from langchain/chains/combine_documents
- createRetrievalChain from langchain/chains/retrieval
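The same fromActorCall pattern also covers the RAG Web Browser Actor mentioned in the overview. The sketch below is illustrative only: the input fields (query, maxResults) and output fields (markdown, metadata.url) are assumptions about that Actor's interface, so check the Actor's documentation before relying on them.
// Sketch: call the RAG Web Browser Actor and map its search results to documents.
// The input and output field names here are assumptions; verify them against
// the Actor's README on the Apify platform.
const searchLoader = await ApifyDatasetLoader.fromActorCall(
  "apify/rag-web-browser",
  { query: "what is LangChain", maxResults: 3 },
  {
    datasetMappingFunction: (item) =>
      new Document({
        pageContent: (item.markdown || "") as string,
        metadata: { source: (item.metadata as { url?: string })?.url },
      }),
    clientOptions: {
      token: APIFY_API_TOKEN,
    },
  }
);
const searchDocs = await searchLoader.load();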
From an Existing Dataset
If you've already run an Actor and have an existing dataset on the Apify platform, you can initialize the document loader directly with the constructor:
import { ApifyDatasetLoader } from "@langchain/community/document_loaders/web/apify_dataset";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { Document } from "@langchain/core/documents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createRetrievalChain } from "langchain/chains/retrieval";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
const APIFY_API_TOKEN = "YOUR-APIFY-API-TOKEN"; // or set as process.env.APIFY_API_TOKEN
const OPENAI_API_KEY = "YOUR-OPENAI-API-KEY"; // or set as process.env.OPENAI_API_KEY
/*
 * datasetMappingFunction is a function that maps your Apify dataset format to LangChain documents.
 * In the below example, the Apify dataset format looks like this:
 * {
 *   "url": "https://apify.com",
 *   "text": "Apify is the best web scraping and automation platform."
 * }
 */
const loader = new ApifyDatasetLoader("your-dataset-id", {
  datasetMappingFunction: (item) =>
    new Document({
      pageContent: (item.text || "") as string,
      metadata: { source: item.url },
    }),
  clientOptions: {
    token: APIFY_API_TOKEN,
  },
});
const docs = await loader.load();
const vectorStore = await HNSWLib.fromDocuments(
  docs,
  new OpenAIEmbeddings({ apiKey: OPENAI_API_KEY })
);

const model = new ChatOpenAI({
  temperature: 0,
  apiKey: OPENAI_API_KEY,
});

const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user's questions based on the below context:\n\n{context}",
  ],
  ["human", "{input}"],
]);

const combineDocsChain = await createStuffDocumentsChain({
  llm: model,
  prompt: questionAnsweringPrompt,
});

const chain = await createRetrievalChain({
  retriever: vectorStore.asRetriever(),
  combineDocsChain,
});

const res = await chain.invoke({ input: "What is LangChain?" });
console.log(res.answer);
console.log(res.context.map((doc) => doc.metadata.source));

/*
  LangChain is a framework for developing applications powered by language models.

  [
    'https://js.langchain.com/docs/',
    'https://js.langchain.com/docs/modules/chains/',
    'https://js.langchain.com/docs/modules/chains/llmchain/',
    'https://js.langchain.com/docs/category/functions-4'
  ]
*/
API Reference:
- ApifyDatasetLoader from @langchain/community/document_loaders/web/apify_dataset
- HNSWLib from @langchain/community/vectorstores/hnswlib
- OpenAIEmbeddings from @langchain/openai
- ChatOpenAI from @langchain/openai
- Document from @langchain/core/documents
- ChatPromptTemplate from @langchain/core/prompts
- createRetrievalChain from langchain/chains/retrieval
- createStuffDocumentsChain from langchain/chains/combine_documents
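Because crawling can be slow, it is often worth persisting the HNSWLib index to disk after the first run and reloading it later instead of re-running the Actor. A minimal sketch, assuming the variable names from the examples above (the directory name "apify-index" is arbitrary):
// Persist the index after the first crawl...
await vectorStore.save("apify-index");

// ...and reload it in later runs without calling the Actor again.
const reloadedStore = await HNSWLib.load(
  "apify-index",
  new OpenAIEmbeddings({ apiKey: OPENAI_API_KEY })
);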