AzionRetriever

Overview

This guide will help you get started with the AzionRetriever. For detailed documentation of all AzionRetriever features and configurations, head to the API reference.

Integration details

| Retriever | Self-host | Cloud offering | Package | Py support |
| --- | --- | --- | --- | --- |
| AzionRetriever | | | @langchain/community | |

Setup

To use the AzionRetriever, you need to set the AZION_TOKEN environment variable.

process.env.AZION_TOKEN = "your-api-key";

If you are using OpenAI embeddings for this guide, you’ll need to set your OpenAI key as well:

process.env.OPENAI_API_KEY = "YOUR_API_KEY";

If you want to get automated tracing from individual queries, you can also set your LangSmith API key by uncommenting the lines below:

// process.env.LANGSMITH_API_KEY = "<YOUR API KEY HERE>";
// process.env.LANGSMITH_TRACING = "true";

Installation

This retriever lives in the @langchain/community package and is exported from the @langchain/community/retrievers/azion_edgesql entrypoint:

yarn add azion @langchain/openai @langchain/community

Instantiation

Now we can instantiate our retriever:

import { AzionRetriever } from "@langchain/community/retrievers/azion_edgesql";
import { OpenAIEmbeddings } from "@langchain/openai";
import { ChatOpenAI } from "@langchain/openai";

const embeddingModel = new OpenAIEmbeddings({
  model: "text-embedding-3-small",
});

const chatModel = new ChatOpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

const retriever = new AzionRetriever(embeddingModel, {
  dbName: "langchain",
  vectorTable: "documents", // table where the vector embeddings are stored
  ftsTable: "documents_fts", // table where the full-text search index is stored
  searchType: "hybrid", // search type to use for the retriever
  ftsK: 2, // number of results to return from the full-text search index
  similarityK: 2, // number of results to return from the vector index
  metadataItems: ["language", "topic"], // metadata columns to include in the results
  filters: [{ operator: "=", column: "language", value: "en" }], // filters to apply to the search
  entityExtractor: chatModel, // chat model used to extract entities from the query
});
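Conceptually, a hybrid search combines the top `similarityK` vector hits with the top `ftsK` full-text hits, de-duplicating any document that appears in both result sets. The sketch below illustrates that idea with a hypothetical merge helper; it is not the retriever's actual implementation, and the `Hit` shape is simplified for illustration.

```typescript
// Minimal document shape for illustration; the real retriever returns
// LangChain Document objects with pageContent, metadata, and id.
interface Hit {
  id: string;
  pageContent: string;
  searchtype: "similarity" | "fts";
}

// Merge vector and full-text results, keeping only the first
// occurrence of each document id.
function mergeHybridResults(vectorHits: Hit[], ftsHits: Hit[]): Hit[] {
  const seen = new Set<string>();
  const merged: Hit[] = [];
  for (const hit of [...vectorHits, ...ftsHits]) {
    if (!seen.has(hit.id)) {
      seen.add(hit.id);
      merged.push(hit);
    }
  }
  return merged;
}
```

A document found by both indexes is returned once, tagged with the search type that found it first.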

Usage

const query = "Australia";

await retriever.invoke(query);
[
  Document {
    pageContent: 'Australia s indigenous people have inhabited the continent for over 65,000 years',
    metadata: { language: 'en', topic: 'history', searchtype: 'similarity' },
    id: '3'
  },
  Document {
    pageContent: 'Australia is a leader in solar energy adoption and renewable technology',
    metadata: { language: 'en', topic: 'technology', searchtype: 'similarity' },
    id: '5'
  },
  Document {
    pageContent: 'Australia s tech sector is rapidly growing with innovation hubs in major cities',
    metadata: { language: 'en', topic: 'technology', searchtype: 'fts' },
    id: '7'
  }
]
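Each result's metadata records which index produced it in a `searchtype` field, as shown above. If you want to inspect how many results came from each index, a small post-processing helper like the hypothetical `groupBySearchType` below can split them; this helper is purely illustrative and not part of the retriever API.

```typescript
// Shape matching the retriever output shown above (simplified).
type RetrievedDoc = {
  pageContent: string;
  metadata: { searchtype: string; [key: string]: unknown };
};

// Group retrieved documents by the searchtype recorded in their metadata.
function groupBySearchType(
  docs: RetrievedDoc[]
): Record<string, RetrievedDoc[]> {
  const groups: Record<string, RetrievedDoc[]> = {};
  for (const doc of docs) {
    const key = doc.metadata.searchtype;
    (groups[key] ??= []).push(doc);
  }
  return groups;
}
```

For the output above this would yield two `similarity` documents and one `fts` document.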

Use within a chain

Like other retrievers, AzionRetriever can be incorporated into LLM applications via chains.

We will need an LLM or chat model:

Pick your chat model:

Install dependencies

yarn add @langchain/groq 

Add environment variables

GROQ_API_KEY=your-api-key

Instantiate the model

import { ChatGroq } from "@langchain/groq";

const llm = new ChatGroq({
  model: "llama-3.3-70b-versatile",
  temperature: 0,
});

import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

import type { Document } from "@langchain/core/documents";

const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the context provided.

Context: {context}

Question: {question}`);

const formatDocs = (docs: Document[]) => {
  return docs.map((doc) => doc.pageContent).join("\n\n");
};

// See https://js.langchain.com/docs/tutorials/rag
const ragChain = RunnableSequence.from([
  {
    context: retriever.pipe(formatDocs),
    question: new RunnablePassthrough(),
  },
  prompt,
  llm,
  new StringOutputParser(),
]);

await ragChain.invoke("Paris");
The context mentions that the 2024 Olympics are in Paris.

API reference

For detailed documentation of all AzionRetriever features and configurations, head to the API reference.
