Expansion

Information retrieval systems can be sensitive to phrasing and specific keywords. To mitigate this, one classic retrieval technique is to generate multiple paraphrased versions of a query and return results for all of them. This is called query expansion, and LLMs are a great tool for generating these alternate phrasings; a sketch of the retrieval side is shown below.
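Once we have several phrasings of a question, we can run each one through a retriever and deduplicate the combined results. Here is a minimal sketch of that merging step, assuming a retriever that follows the standard LangChain BaseRetriever interface; retrieveWithExpansion is a hypothetical helper, not part of this walkthrough:

import { Document } from "@langchain/core/documents";
import type { BaseRetriever } from "@langchain/core/retrievers";

// Hypothetical helper: retrieve documents for every paraphrasing of a
// question and deduplicate the merged results by page content.
async function retrieveWithExpansion(
  retriever: BaseRetriever,
  queries: string[]
): Promise<Document[]> {
  const resultsPerQuery = await Promise.all(
    queries.map((q) => retriever.invoke(q))
  );
  const seen = new Set<string>();
  const merged: Document[] = [];
  for (const doc of resultsPerQuery.flat()) {
    if (!seen.has(doc.pageContent)) {
      seen.add(doc.pageContent);
      merged.push(doc);
    }
  }
  return merged;
}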

Let’s take a look at how we might do query expansion for our Q&A bot over the LangChain YouTube videos, which we started in the Quickstart.

Setup

Install dependencies

yarn add @langchain/core zod

Set environment variables

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true

Query generation

To make sure we get usable paraphrasings back, we'll use the LLM's function-calling API to return structured output.

Pick your chat model:

Install dependencies

yarn add @langchain/openai 

Add environment variables

OPENAI_API_KEY=your-api-key

Instantiate the model

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo-0125",
  temperature: 0,
});

import { z } from "zod";

// The top-level .describe() text is passed to the model as the
// description of the "ParaphrasedQuery" function defined below.
const paraphrasedQuerySchema = z
  .object({
    paraphrasedQuery: z
      .string()
      .describe("A unique paraphrasing of the original question."),
  })
  .describe(
    "You have performed query expansion to generate a paraphrasing of a question."
  );

import { ChatPromptTemplate } from "@langchain/core/prompts";

const system = `You are an expert at converting user questions into database queries.
You have access to a database of tutorial videos about a software library for building LLM-powered applications.

Perform query expansion. If there are multiple common ways of phrasing a user question
or common synonyms for key words in the question, make sure to return multiple versions
of the query with the different phrasings.

If there are acronyms or words you are not familiar with, do not try to rephrase them.

Return at least 3 versions of the question.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);

const llmWithTools = llm.withStructuredOutput(paraphrasedQuerySchema, {
  name: "ParaphrasedQuery",
});

const queryAnalyzer = prompt.pipe(llmWithTools);

Let’s see what queries our analyzer generates for the questions we searched earlier:

await queryAnalyzer.invoke({
  question:
    "how to use multi-modal models in a chain and turn chain into a rest api",
});

{
  paraphrasedQuery: "How to utilize multi-modal models sequentially and convert the sequence into a REST API?"
}
await queryAnalyzer.invoke({ question: "stream events from llm agent" });

{ paraphrasedQuery: "Retrieve real-time data from the LLM agent" }
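
Note that because paraphrasedQuerySchema defines a single paraphrasedQuery field, each call returns just one paraphrasing, even though the system prompt asks for at least 3 versions. If you want several versions from one call, one option (a sketch, not part of the original walkthrough) is a schema variant that wraps the field in an array and reuses the llm and prompt from above:

// Sketch: a schema variant that holds several paraphrasings at once.
const multiQuerySchema = z
  .object({
    paraphrasedQueries: z
      .array(z.string())
      .describe("At least 3 unique paraphrasings of the original question."),
  })
  .describe(
    "You have performed query expansion to generate paraphrasings of a question."
  );

const multiQueryAnalyzer = prompt.pipe(
  llm.withStructuredOutput(multiQuerySchema, { name: "ParaphrasedQueries" })
);

await multiQueryAnalyzer.invoke({ question: "stream events from llm agent" });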
