Qdrant
This guide will help you get started with a self-query retriever backed by a Qdrant vector store. For detailed documentation of all features and configurations, head to the API reference.
Overview
A self-query retriever retrieves documents by dynamically generating metadata filters based on some input query. This allows the retriever to account for underlying document metadata in addition to pure semantic similarity when fetching results.
It uses a module called a Translator that generates a filter based on information about the metadata fields and the query language that a given vector store supports.
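For intuition, here is a minimal sketch of what such a translation might produce. The shape below is illustrative only (the object is hypothetical, not the retriever's internal representation): a natural-language question is decomposed into a semantic query string plus a Qdrant-style filter.
// Hypothetical illustration: a question like
// "Which science fiction movies are rated above 8?"
// might be decomposed into a semantic query plus a Qdrant filter:
const translated = {
  query: "science fiction movies",
  filter: {
    must: [
      { key: "metadata.genre", match: { value: "science fiction" } },
      { key: "metadata.rating", range: { gt: 8 } },
    ],
  },
};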
Integration details
| Backing vector store | Self-host | Cloud offering | Package | Py support |
| --- | --- | --- | --- | --- |
| QdrantVectorStore | ✅ | ✅ | @langchain/qdrant | ✅ |
Setup
Set up a Qdrant instance as documented here. Set the following environment variables:
process.env.QDRANT_URL = "YOUR_QDRANT_URL_HERE"; // for example, http://localhost:6333
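To confirm the instance is reachable, you can run a quick sanity check with the official client (a minimal sketch, assuming a default local or cloud setup):
import { QdrantClient } from "@qdrant/js-client-rest";
// List the collections in your Qdrant instance to verify connectivity.
const qdrantClient = new QdrantClient({ url: process.env.QDRANT_URL });
console.log(await qdrantClient.getCollections());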
If you want automated tracing from individual queries, you can also set your LangSmith API key by uncommenting the lines below:
// process.env.LANGSMITH_API_KEY = "<YOUR API KEY HERE>";
// process.env.LANGSMITH_TRACING = "true";
Installation
The vector store lives in the @langchain/qdrant package. You'll also need to install the langchain and @langchain/community packages to import the main SelfQueryRetriever classes.
For this example, we'll also use OpenAI embeddings, so you'll need to install the @langchain/openai package and obtain an API key:
- npm
- yarn
- pnpm
npm i @langchain/qdrant langchain @langchain/community @langchain/openai @langchain/core
yarn add @langchain/qdrant langchain @langchain/community @langchain/openai @langchain/core
pnpm add @langchain/qdrant langchain @langchain/community @langchain/openai @langchain/core
The official Qdrant SDK (@qdrant/js-client-rest) is automatically installed as a dependency of @langchain/qdrant, but you may wish to install it independently as well.
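If you do, you can add it directly, for example with npm:
npm i @qdrant/js-client-rest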
Instantiation
First, initialize your Qdrant vector store with some documents that contain metadata:
import { OpenAIEmbeddings } from "@langchain/openai";
import { QdrantVectorStore } from "@langchain/qdrant";
import { Document } from "@langchain/core/documents";
import type { AttributeInfo } from "langchain/chains/query_constructor";
import { QdrantClient } from "@qdrant/js-client-rest";
/**
* First, we create a bunch of documents. You can load your own documents here instead.
* Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below.
*/
const docs = [
new Document({
pageContent:
"A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata: { year: 1993, rating: 7.7, genre: "science fiction" },
}),
new Document({
pageContent:
"Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 },
}),
new Document({
pageContent:
"A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 },
}),
new Document({
pageContent:
"A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 },
}),
new Document({
pageContent: "Toys come alive and have a blast doing so",
metadata: { year: 1995, genre: "animated" },
}),
new Document({
pageContent: "Three men walk into the Zone, three men walk out of the Zone",
metadata: {
year: 1979,
director: "Andrei Tarkovsky",
genre: "science fiction",
rating: 9.9,
},
}),
];
/**
* Next, we define the attributes we want to be able to query on.
* In this case, we want to be able to query on the genre, year, director, rating, and length of the movie.
* We also provide a description of each attribute and the type of the attribute.
* This is used to generate the query prompts.
*/
const attributeInfo: AttributeInfo[] = [
{
name: "genre",
description: "The genre of the movie",
type: "string or array of strings",
},
{
name: "year",
description: "The year the movie was released",
type: "number",
},
{
name: "director",
description: "The director of the movie",
type: "string",
},
{
name: "rating",
description: "The rating of the movie (1-10)",
type: "number",
},
{
name: "length",
description: "The length of the movie in minutes",
type: "number",
},
];
/**
* Next, we instantiate a vector store. This is where we store the embeddings of the documents.
* We also need to provide an embeddings object. This is used to embed the documents.
*/
const client = new QdrantClient({ url: process.env.QDRANT_URL });
const embeddings = new OpenAIEmbeddings();
const vectorStore = await QdrantVectorStore.fromDocuments(docs, embeddings, {
client,
collectionName: "movie-collection",
});
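At this point you can optionally run a plain similarity search against the store to confirm the documents were indexed (a quick check, not part of the retriever setup):
// Sanity check: fetch the single most similar document for a query.
const checkResults = await vectorStore.similaritySearch("dinosaurs", 1);
console.log(checkResults[0].pageContent);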
Now we can instantiate our retriever:
Pick your chat model:
- OpenAI
- Anthropic
- FireworksAI
- MistralAI
- Groq
- VertexAI
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({
model: "claude-3-5-sonnet-20240620",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const llm = new ChatFireworks({
model: "accounts/fireworks/models/llama-v3p1-70b-instruct",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const llm = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
Add environment variables
GROQ_API_KEY=your-api-key
Instantiate the model
import { ChatGroq } from "@langchain/groq";
const llm = new ChatGroq({
model: "mixtral-8x7b-32768",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";
const llm = new ChatVertexAI({
model: "gemini-1.5-flash",
temperature: 0
});
import { SelfQueryRetriever } from "langchain/retrievers/self_query";
import { QdrantTranslator } from "@langchain/community/structured_query/qdrant";
const selfQueryRetriever = SelfQueryRetriever.fromLLM({
llm: llm,
vectorStore: vectorStore,
/** A short summary of what the document contents represent. */
documentContents: "Brief summary of a movie",
attributeInfo: attributeInfo,
structuredQueryTranslator: new QdrantTranslator(),
});
Usage
Now, ask a question that requires some knowledge of the documents' metadata to answer. You can see that the retriever will generate the correct result:
await selfQueryRetriever.invoke("Which movies are rated higher than 8.5?");
[
Document {
pageContent: 'A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea',
metadata: { director: 'Satoshi Kon', rating: 8.6, year: 2006 },
id: undefined
},
Document {
pageContent: 'Three men walk into the Zone, three men walk out of the Zone',
metadata: {
director: 'Andrei Tarkovsky',
genre: 'science fiction',
rating: 9.9,
year: 1979
},
id: undefined
}
]
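The retriever can also combine semantic similarity with metadata constraints in a single question. For example, a query like the following would pair a content search over dream-related plots with a generated rating filter (results omitted here):
await selfQueryRetriever.invoke(
  "Which movies about dreams are rated higher than 8?"
);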
Use within a chain
Like other retrievers, Qdrant self-query retrievers can be incorporated into LLM applications via chains.
Note that because their returned answers can heavily depend on document metadata, we format the retrieved documents differently to include that information.
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
RunnablePassthrough,
RunnableSequence,
} from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import type { Document } from "@langchain/core/documents";
const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the context provided.
Context: {context}
Question: {question}`);
const formatDocs = (docs: Document[]) => {
return docs.map((doc) => JSON.stringify(doc)).join("\n\n");
};
// See https://js.langchain.com/docs/tutorials/rag
const ragChain = RunnableSequence.from([
{
context: selfQueryRetriever.pipe(formatDocs),
question: new RunnablePassthrough(),
},
prompt,
llm,
new StringOutputParser(),
]);
await ragChain.invoke("Which movies are rated higher than 8.5?");
The movies rated higher than 8.5 are the ones directed by Satoshi Kon (rating: 8.6) and Andrei Tarkovsky (rating: 9.9).
Default search params
You can also pass a searchParams field into the above method that provides default filters applied in addition to any generated query. The filter syntax is the same as that of the backing Qdrant vector store:
const selfQueryRetrieverWithDefaultParams = SelfQueryRetriever.fromLLM({
llm: llm,
vectorStore: vectorStore,
documentContents: "Brief summary of a movie",
attributeInfo: attributeInfo,
structuredQueryTranslator: new QdrantTranslator(),
searchParams: {
filter: {
must: [
{
key: "metadata.rating",
range: {
gt: 8.5,
},
},
],
},
mergeFiltersOperator: "and",
},
});
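Invoking this retriever applies the default rating filter on top of whatever filter is generated from the question (a brief usage sketch; output omitted):
// Only documents with metadata.rating > 8.5 can be returned,
// regardless of the filter the LLM generates.
await selfQueryRetrieverWithDefaultParams.invoke(
  "What are some movies about dreams?"
);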
API reference
For detailed documentation of all Qdrant self-query retriever features and configurations head to the API reference.