
Vespa Retriever

This guide shows how to use Vespa.ai as a LangChain retriever. Vespa.ai is a platform for highly efficient structured text and vector search. Please refer to Vespa.ai for more information.

The following sets up a retriever that fetches results from Vespa's documentation search:

```typescript
import { VespaRetriever } from "@langchain/community/retrievers/vespa";

export const run = async () => {
  const url = "https://doc-search.vespa.oath.cloud";
  const query_body = {
    yql: "select content from paragraph where userQuery()",
    hits: 5,
    ranking: "documentation",
    locale: "en-us",
  };
  const content_field = "content";

  const retriever = new VespaRetriever({
    url,
    auth: false,
    query_body,
    content_field,
  });

  const result = await retriever.invoke("what is vespa?");
  console.log(result);
};
```

Here, up to 5 results are retrieved from the `content` field in the `paragraph` document type, using `documentation` as the ranking method. The `userQuery()` is replaced with the actual query passed in from LangChain.
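The `query_body` is passed through to Vespa's Query API, so any parameter Vespa accepts there can be set. As a hedged sketch: `timeout` is a real Query API parameter, but the `category` field in the YQL filter below is hypothetical and stands in for a field from your own schema.

```typescript
// Illustrative query body: `timeout` is a standard Vespa Query API
// parameter, while the `category` filter field is hypothetical —
// substitute a field that exists in your own document schema.
const query_body = {
  yql: "select content from paragraph where userQuery() and category contains 'howto'",
  hits: 10,
  ranking: "documentation",
  locale: "en-us",
  timeout: "2s",
};
```

This object would then be passed as `query_body` to the `VespaRetriever` constructor, exactly as above.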

Please refer to the pyvespa documentation for more information.

The URL is the endpoint of the Vespa application. You can connect to any Vespa endpoint, either a remote service or a local instance running in Docker. However, most Vespa Cloud instances are protected with mTLS. If that is the case for you, you can, for instance, set up a Cloudflare Worker that holds the necessary credentials to connect to the instance.
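A minimal sketch of such a Worker, assuming Cloudflare's `mtls_certificates` binding (configured in `wrangler.toml`) exposes the client certificate as `VESPA_CERT`; the endpoint URL is a placeholder:

```typescript
// Hypothetical Vespa Cloud endpoint — replace with your own.
const VESPA_ENDPOINT = "https://my-app.my-tenant.example.vespa-cloud.com";

// Build the target URL by re-rooting the incoming path and query string
// onto the Vespa endpoint.
export function buildTargetUrl(requestUrl: string, endpoint: string): string {
  const incoming = new URL(requestUrl);
  return new URL(incoming.pathname + incoming.search, endpoint).toString();
}

// The mTLS certificate binding behaves like a fetcher whose outgoing
// requests present the bound client certificate.
interface Env {
  VESPA_CERT: { fetch: typeof fetch };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const target = buildTargetUrl(request.url, VESPA_ENDPOINT);
    // Forward method, headers, and body unchanged.
    return env.VESPA_CERT.fetch(new Request(target, request));
  },
};
```

The Worker's public URL is then what you pass as `url` to the `VespaRetriever`.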

Now you can return the results and continue using them in LangChain.
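For instance, the retrieved documents can be folded into a single context string for a downstream prompt. The `formatDocs` helper below is illustrative, not part of LangChain, and a minimal stand-in interface replaces the real `Document` type to keep the sketch self-contained:

```typescript
// Minimal stand-in for LangChain's Document shape.
interface Doc {
  pageContent: string;
  metadata: Record<string, unknown>;
}

// Hypothetical helper: number each retrieved paragraph and join them
// into one context block for a prompt.
export function formatDocs(docs: Doc[]): string {
  return docs.map((d, i) => `[${i + 1}] ${d.pageContent}`).join("\n\n");
}
```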

