AzionRetriever
Overview
This guide will help you get started with the AzionRetriever. For detailed documentation of all AzionRetriever features and configurations, head to the API reference.
Integration details
Retriever | Self-host | Cloud offering | Package | Py support |
---|---|---|---|---|
AzionRetriever | ❌ | ❌ | @langchain/community | ❌ |
Setup
To use the AzionRetriever, you need to set the AZION_TOKEN environment variable.
process.env.AZION_TOKEN = "your-api-key";
If you are using OpenAI embeddings for this guide, you’ll need to set your OpenAI key as well:
process.env.OPENAI_API_KEY = "YOUR_API_KEY";
If you want to get automated tracing from individual queries, you can also set your LangSmith API key by uncommenting below:
// process.env.LANGSMITH_API_KEY = "<YOUR API KEY HERE>";
// process.env.LANGSMITH_TRACING = "true";
Installation
This retriever lives in the @langchain/community/retrievers/azion_edgesql module of the @langchain/community package:
- npm
- yarn
- pnpm
npm i azion @langchain/openai @langchain/community
yarn add azion @langchain/openai @langchain/community
pnpm add azion @langchain/openai @langchain/community
Instantiation
Now we can instantiate our retriever:
import { AzionRetriever } from "@langchain/community/retrievers/azion_edgesql";
import { OpenAIEmbeddings } from "@langchain/openai";
import { ChatOpenAI } from "@langchain/openai";
const embeddingModel = new OpenAIEmbeddings({
model: "text-embedding-3-small",
});
const chatModel = new ChatOpenAI({
model: "gpt-4o-mini",
apiKey: process.env.OPENAI_API_KEY,
});
const retriever = new AzionRetriever(embeddingModel, {
dbName: "langchain",
vectorTable: "documents", // table where the vector embeddings are stored
ftsTable: "documents_fts", // table where the fts index is stored
searchType: "hybrid", // search type to use for the retriever
ftsK: 2, // number of results to return from the fts index
similarityK: 2, // number of results to return from the vector index
metadataItems: ["language", "topic"], // metadata columns to include in the results
filters: [{ operator: "=", column: "language", value: "en" }], // SQL-style filters applied to the search
entityExtractor: chatModel, // chat model used to extract entities for full-text search
});
Usage
const query = "Australia";
await retriever.invoke(query);
[
Document {
pageContent: "Australia's indigenous people have inhabited the continent for over 65,000 years",
metadata: { language: 'en', topic: 'history', searchtype: 'similarity' },
id: '3'
},
Document {
pageContent: 'Australia is a leader in solar energy adoption and renewable technology',
metadata: { language: 'en', topic: 'technology', searchtype: 'similarity' },
id: '5'
},
Document {
pageContent: "Australia's tech sector is rapidly growing with innovation hubs in major cities",
metadata: { language: 'en', topic: 'technology', searchtype: 'fts' },
id: '7'
}
]
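Each returned document carries a searchtype metadata field indicating which index produced it ("similarity" for the vector index, "fts" for the full-text index). If you want to treat the two result types differently client-side, you can group on that field. The helper below is a hypothetical sketch, not part of the AzionRetriever API:

```typescript
// Minimal shape of a retrieved document, for illustration only.
interface RetrievedDoc {
  pageContent: string;
  metadata: { searchtype: string; [key: string]: string };
}

// Hypothetical helper: bucket documents by the index that returned them.
function groupBySearchType(
  docs: RetrievedDoc[]
): Record<string, RetrievedDoc[]> {
  return docs.reduce((groups, doc) => {
    (groups[doc.metadata.searchtype] ??= []).push(doc);
    return groups;
  }, {} as Record<string, RetrievedDoc[]>);
}

const grouped = groupBySearchType([
  { pageContent: "a", metadata: { searchtype: "similarity" } },
  { pageContent: "b", metadata: { searchtype: "fts" } },
  { pageContent: "c", metadata: { searchtype: "similarity" } },
]);
// grouped.similarity holds two docs, grouped.fts holds one
```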
Use within a chain
Like other retrievers, AzionRetriever can be incorporated into LLM applications via chains.
We will need an LLM or chat model:
Pick your chat model:
- Groq
- OpenAI
- Anthropic
- FireworksAI
- MistralAI
- VertexAI
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
Add environment variables
GROQ_API_KEY=your-api-key
Instantiate the model
import { ChatGroq } from "@langchain/groq";
const llm = new ChatGroq({
model: "llama-3.3-70b-versatile",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({
model: "claude-3-5-sonnet-20240620",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const llm = new ChatFireworks({
model: "accounts/fireworks/models/llama-v3p1-70b-instruct",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const llm = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";
const llm = new ChatVertexAI({
model: "gemini-1.5-flash",
temperature: 0
});
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
RunnablePassthrough,
RunnableSequence,
} from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import type { Document } from "@langchain/core/documents";
const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the context provided.
Context: {context}
Question: {question}`);
const formatDocs = (docs: Document[]) => {
return docs.map((doc) => doc.pageContent).join("\n\n");
};
// See https://js.langchain.com/docs/tutorials/rag
const ragChain = RunnableSequence.from([
{
context: retriever.pipe(formatDocs),
question: new RunnablePassthrough(),
},
prompt,
llm,
new StringOutputParser(),
]);
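The formatDocs step flattens the retrieved documents into a single context string before it reaches the prompt. As a standalone sketch of that same logic (with a simplified document type):

```typescript
// Simplified stand-in for @langchain/core's Document type.
type Doc = { pageContent: string };

// Join page contents with a blank line, as formatDocs does in the chain above.
const formatDocs = (docs: Doc[]): string =>
  docs.map((doc) => doc.pageContent).join("\n\n");

const context = formatDocs([
  { pageContent: "Australia is a leader in solar energy adoption" },
  { pageContent: "The 2024 Olympics are in Paris" },
]);
// context now contains both passages separated by a blank line
```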
await ragChain.invoke("Paris");
The context mentions that the 2024 Olympics are in Paris.
API reference
For detailed documentation of all AzionRetriever features and configurations, head to the API reference.
Related
- Retriever conceptual guide
- Retriever how-to guides