RAG

Let's now look at adding a retrieval step to a prompt and an LLM, which together form a "retrieval-augmented generation" (RAG) chain:

npm install @langchain/openai @langchain/community hnswlib-node
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { formatDocumentsAsString } from "langchain/util/document";
import { PromptTemplate } from "@langchain/core/prompts";
import {
  RunnableSequence,
  RunnablePassthrough,
} from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({});

const vectorStore = await HNSWLib.fromTexts(
  ["mitochondria is the powerhouse of the cell"],
  [{ id: 1 }],
  new OpenAIEmbeddings()
);
const retriever = vectorStore.asRetriever();

const prompt =
  PromptTemplate.fromTemplate(`Answer the question based only on the following context:
{context}

Question: {question}`);

const chain = RunnableSequence.from([
  {
    // Fetch relevant documents and concatenate them into a single string
    context: retriever.pipe(formatDocumentsAsString),
    // Pass the user's question through unchanged
    question: new RunnablePassthrough(),
  },
  prompt,
  model,
  new StringOutputParser(),
]);

const result = await chain.invoke("What is the powerhouse of the cell?");

console.log(result);

/*
"The powerhouse of the cell is the mitochondria."
*/
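
Because the assembled chain is itself a Runnable, you can also stream the final output instead of waiting for the full response. A minimal sketch; each chunk is a string here because the chain ends with a StringOutputParser:

const stream = await chain.stream("What is the powerhouse of the cell?");

// Print each chunk as it arrives.
for await (const chunk of stream) {
  process.stdout.write(chunk);
}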

Conversational Retrieval Chain

Because RunnableSequence.from and runnable.pipe both accept runnable-like objects, including single-argument functions, we can add conversation history via a formatting function. This allows us to recreate the popular ConversationalRetrievalQAChain to "chat with data":
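
Before the full example, here is a minimal sketch of that coercion; the shoutingChain below is purely illustrative:

import { RunnableSequence } from "@langchain/core/runnables";

// Plain single-argument functions are automatically wrapped as runnable steps.
const shoutingChain = RunnableSequence.from([
  (input: string) => input.toUpperCase(),
  (input: string) => `${input}!`,
]);

console.log(await shoutingChain.invoke("hello"));
// "HELLO!"

The same idea powers the formatChatHistory function in the full example: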

import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { formatDocumentsAsString } from "langchain/util/document";
import { PromptTemplate } from "@langchain/core/prompts";
import {
  RunnableSequence,
  RunnablePassthrough,
} from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({});

const condenseQuestionTemplate = `Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:`;
const CONDENSE_QUESTION_PROMPT = PromptTemplate.fromTemplate(
  condenseQuestionTemplate
);

const answerTemplate = `Answer the question based only on the following context:
{context}

Question: {question}
`;
const ANSWER_PROMPT = PromptTemplate.fromTemplate(answerTemplate);

const formatChatHistory = (chatHistory: [string, string][]) => {
  const formattedDialogueTurns = chatHistory.map(
    (dialogueTurn) => `Human: ${dialogueTurn[0]}\nAssistant: ${dialogueTurn[1]}`
  );
  return formattedDialogueTurns.join("\n");
};

const vectorStore = await HNSWLib.fromTexts(
  [
    "mitochondria is the powerhouse of the cell",
    "mitochondria is made of lipids",
  ],
  [{ id: 1 }, { id: 2 }],
  new OpenAIEmbeddings()
);
const retriever = vectorStore.asRetriever();

type ConversationalRetrievalQAChainInput = {
  question: string;
  chat_history: [string, string][];
};

// Condense the follow-up question and prior turns into a standalone question
const standaloneQuestionChain = RunnableSequence.from([
  {
    question: (input: ConversationalRetrievalQAChainInput) => input.question,
    chat_history: (input: ConversationalRetrievalQAChainInput) =>
      formatChatHistory(input.chat_history),
  },
  CONDENSE_QUESTION_PROMPT,
  model,
  new StringOutputParser(),
]);

// Answer the standalone question over the retrieved context
const answerChain = RunnableSequence.from([
  {
    context: retriever.pipe(formatDocumentsAsString),
    question: new RunnablePassthrough(),
  },
  ANSWER_PROMPT,
  model,
]);

const conversationalRetrievalQAChain =
  standaloneQuestionChain.pipe(answerChain);

const result1 = await conversationalRetrievalQAChain.invoke({
  question: "What is the powerhouse of the cell?",
  chat_history: [],
});
console.log(result1);
/*
AIMessage { content: "The powerhouse of the cell is the mitochondria." }
*/

const result2 = await conversationalRetrievalQAChain.invoke({
  question: "What are they made out of?",
  chat_history: [
    [
      "What is the powerhouse of the cell?",
      "The powerhouse of the cell is the mitochondria.",
    ],
  ],
});
console.log(result2);
/*
AIMessage { content: "Mitochondria are made out of lipids." }
*/

Note that the individual chains we created are themselves Runnables and can therefore be piped into each other.
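
For example, since answerChain ends with the model, the composed chain returns an AIMessage; piping a StringOutputParser onto the end yields a plain string instead (a minimal sketch reusing the chain defined above):

// Pipe the composed chain into an output parser to get a string back.
const stringChain = conversationalRetrievalQAChain.pipe(
  new StringOutputParser()
);

const stringResult = await stringChain.invoke({
  question: "What is the powerhouse of the cell?",
  chat_history: [],
});

console.log(stringResult);
// "The powerhouse of the cell is the mitochondria."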

