
Retrieval

Retrieval is a common technique chatbots use to augment their responses with data outside a chat model’s training data. This section will cover how to implement retrieval in the context of chatbots, but it’s worth noting that retrieval is a very subtle and deep topic - we encourage you to explore other parts of the documentation that go into greater depth!

Setup​

You’ll need to install a few packages, and have your OpenAI API key set as an environment variable named OPENAI_API_KEY:

npm install langchain @langchain/core @langchain/openai cheerio
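
If you're just experimenting locally and haven't exported the key in your shell, you can also set it from code before constructing any models. This is a minimal sketch for local use only; in production, prefer your shell environment or a secrets manager:

// Illustrative only: set the key in-process for local experiments.
// Skip this if OPENAI_API_KEY is already set in your environment.
process.env.OPENAI_API_KEY = "sk-...";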

Let’s also set up a chat model that we’ll use for the below examples.

import { ChatOpenAI } from "@langchain/openai";

const chat = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
  temperature: 0.2,
});

Creating a retriever​

We’ll use the LangSmith documentation as source material and store the content in a vectorstore for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more in-depth documentation on creating retrieval systems here.

Let’s use a document loader to pull text from the docs:

import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";

const loader = new CheerioWebBaseLoader(
  "https://docs.smith.langchain.com/user_guide"
);

const rawDocs = await loader.load();

Next, we split it into smaller chunks that the LLM’s context window can handle:

import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 500,
  chunkOverlap: 0,
});

const allSplits = await textSplitter.splitDocuments(rawDocs);

Then we embed and store those chunks in a vector database:

import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

const vectorstore = await MemoryVectorStore.fromDocuments(
  allSplits,
  new OpenAIEmbeddings()
);

And finally, let’s create a retriever from our initialized vectorstore:

const retriever = vectorstore.asRetriever(4);

const docs = await retriever.invoke("how can langsmith help with testing?");

console.log(docs);
[
Document {
pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be',
metadata: {
source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
},
Document {
pageContent: 'inputs, and see what happens. At some point though, our application is performing\n' +
'well and we want to be more rigorous about testing changes. We can use a dataset\n' +
'that we’ve constructed along the way (see above). Alternatively, we could spend some\n' +
'time constructing a small dataset by hand. For these situations, LangSmith simplifies',
metadata: {
source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
},
Document {
pageContent: 'chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain have been building and using LangSmith with the goal of bridging this gap. This is our tactical user guide to outline effective ways to use LangSmith and maximize its benefits.On by default​At LangChain, all of us have LangSmith’s tracing running in the background by default. On the Python side, this is achieved by setting environment variables, which we establish',
metadata: {
source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
},
Document {
pageContent: 'has helped us debug bugs in formatting logic, unexpected transformations to user input, and straight up missing user input.To a much lesser extent, this is also true of the output of an LLM. Oftentimes the output of an LLM is technically a string but that string may contain some structure (json, yaml) that is intended to be parsed into a structured representation. Understanding what the exact output is can help determine if there may be a need for different parsing.LangSmith provides a',
metadata: {
source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
}
]

We can see that invoking the retriever above returns parts of the LangSmith docs that contain information about testing, which our chatbot can use as context when answering questions. And now we’ve got a retriever that can return related data from the LangSmith docs!

Document chains​

Now that we have a retriever that can return LangSmith docs, let’s create a chain that can use them as context to answer questions. We’ll use a createStuffDocumentsChain helper function to "stuff" all of the input documents into the prompt. It will also handle formatting the docs as strings.

In addition to a chat model, the function also expects a prompt that has a context variable, as well as a placeholder for chat history messages named messages. We’ll create an appropriate prompt and pass it as shown below:

import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const SYSTEM_TEMPLATE = `Answer the user's questions based on the below context.
If the context doesn't contain any relevant information to the question, don't make something up and just say "I don't know":

<context>
{context}
</context>
`;

const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_TEMPLATE],
  new MessagesPlaceholder("messages"),
]);

const documentChain = await createStuffDocumentsChain({
  llm: chat,
  prompt: questionAnsweringPrompt,
});

We can invoke this documentChain by itself to answer questions. Let’s use the docs we retrieved above along with a similar question, Can LangSmith help test my LLM applications?:

import { HumanMessage, AIMessage } from "@langchain/core/messages";

await documentChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
  context: docs,
});
Yes, LangSmith can help test LLM applications by providing tracing and monitoring capabilities. It can help in debugging bugs in formatting logic, unexpected transformations to user input, and missing user input. Additionally, it can also assist in understanding the exact output of an LLM, which can help determine if there is a need for different parsing.

Looks good! For comparison, we can try it with no context docs and compare the result:

await documentChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
  context: [],
});
I don't know about LangSmith's specific capabilities for testing LLM applications. It's best to directly inquire with LangSmith or check their website for information on their services related to testing LLM applications.

We can see that without the retrieved context, the LLM is unable to answer the question and simply says it doesn’t know.

Retrieval chains​

Let’s combine this document chain with the retriever. Here’s one way this can look:

import type { BaseMessage } from "@langchain/core/messages";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const parseRetrieverInput = (params: { messages: BaseMessage[] }) => {
  return params.messages[params.messages.length - 1].content;
};

const retrievalChain = RunnablePassthrough.assign({
  context: RunnableSequence.from([parseRetrieverInput, retriever]),
}).assign({
  answer: documentChain,
});

Given a list of input messages, we extract the content of the last message in the list and pass that to the retriever to fetch some documents. Then, we pass those documents as context to our document chain to generate a final response.

Invoking this chain combines both steps outlined above:

await retrievalChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
});
{
messages: [
HumanMessage {
content: 'Can LangSmith help test my LLM applications?'
}
],
context: [
Document {
pageContent: 'has helped us debug bugs in formatting logic, unexpected transformations to user input, and straight up missing user input.To a much lesser extent, this is also true of the output of an LLM. Oftentimes the output of an LLM is technically a string but that string may contain some structure (json, yaml) that is intended to be parsed into a structured representation. Understanding what the exact output is can help determine if there may be a need for different parsing.LangSmith provides a',
metadata: [Object]
},
Document {
pageContent: 'many tokens an agent usedDebugging​Debugging LLMs, chains, and agents can be tough. LangSmith helps solve the following pain points:What was the exact input to the LLM?​LLM calls are often tricky and non-deterministic. The inputs/outputs may seem straightforward, given they are technically string → string (or chat messages → chat message), but this can be misleading as the input string is usually constructed from a combination of user input and auxiliary functions.Most inputs to an LLM call are',
metadata: [Object]
},
Document {
pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be',
metadata: [Object]
},
Document {
pageContent: 'LangSmith makes it easy to manually review and annotate runs through annotation queues.These queues allow you to select any runs based on criteria like model type or automatic evaluation scores, and queue them up for human review. As a reviewer, you can then quickly step through the runs, viewing the input, output, and any existing tags before adding your own feedback.We often use this for a couple of reasons:To assess subjective qualities that automatic evaluators struggle with, like',
metadata: [Object]
}
],
answer: 'Yes, LangSmith can help test LLM applications by providing assistance in debugging LLMs, chains, and agents. It helps in understanding the exact input to the LLM, monitoring the application for issues, and manually reviewing and annotating runs for testing and evaluation purposes.'
}

Looks good!
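
Note that the chain returns an object containing the input messages, the retrieved context documents, and the generated answer. If you only care about the final response, you can pull out the answer field directly - a small convenience sketch using the chain defined above:

const result = await retrievalChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
});

// `result` contains `messages`, `context` (the retrieved documents), and `answer`.
console.log(result.answer);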

Query transformation​

Our retrieval chain is capable of answering questions about LangSmith, but there’s a problem - chatbots interact with users conversationally, and therefore have to deal with followup questions.

The chain in its current form will struggle with this. Consider a followup question to our original question like Tell me more!. If we invoke our retriever with that query directly, we get documents irrelevant to LLM application testing:

await retriever.invoke("Tell me more!");
[
Document {
pageContent: 'shadowRing,',
metadata: {
source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
},
Document {
pageContent: 'Pro,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,monospace;--joy-fontFamily-fallback:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI',
metadata: {
source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
},
Document {
pageContent: 'y-fontSize-xl7:4.5rem;--joy-fontFamily-body:"Public',
metadata: {
source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
},
Document {
pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be',
metadata: {
source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
}
]

This is because the retriever has no innate concept of state, and will only pull documents most similar to the query it is given. To solve this, we can use an LLM to transform the query into a standalone query without any external references.

Here’s an example:

const queryTransformPrompt = ChatPromptTemplate.fromMessages([
  new MessagesPlaceholder("messages"),
  [
    "user",
    "Given the above conversation, generate a search query to look up in order to get information relevant to the conversation. Only respond with the query, nothing else.",
  ],
]);

const queryTransformationChain = queryTransformPrompt.pipe(chat);

await queryTransformationChain.invoke({
  messages: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage(
      "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise."
    ),
    new HumanMessage("Tell me more!"),
  ],
});
AIMessage {
content: '"LangSmith LLM application testing and evaluation"'
}

Awesome! That transformed query would pull up context documents related to LLM application testing.
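
If you'd like to verify this yourself, you can run the transformed query through the retriever directly and inspect the results. This is just a quick sanity check rather than part of the final chain; the query string below is copied from the example output above:

const transformedQueryDocs = await retriever.invoke(
  "LangSmith LLM application testing and evaluation"
);

// These documents should relate to testing and evaluation rather than page styling.
console.log(transformedQueryDocs);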

Let’s add this to our retrieval chain. We can wrap our retriever as follows:

import { RunnableBranch } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

const queryTransformingRetrieverChain = RunnableBranch.from([
  [
    (params: { messages: BaseMessage[] }) => params.messages.length === 1,
    RunnableSequence.from([parseRetrieverInput, retriever]),
  ],
  queryTransformPrompt
    .pipe(chat)
    .pipe(new StringOutputParser())
    .pipe(retriever),
]).withConfig({ runName: "chat_retriever_chain" });

Then, we can use this query transformation chain to make our retrieval chain better able to handle such followup questions:

const conversationalRetrievalChain = RunnablePassthrough.assign({
  context: queryTransformingRetrieverChain,
}).assign({
  answer: documentChain,
});

Awesome! Let’s invoke this new chain with the same inputs as earlier:

await conversationalRetrievalChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
});
{
messages: [
HumanMessage {
content: 'Can LangSmith help test my LLM applications?'
}
],
context: [
Document {
pageContent: 'has helped us debug bugs in formatting logic, unexpected transformations to user input, and straight up missing user input.To a much lesser extent, this is also true of the output of an LLM. Oftentimes the output of an LLM is technically a string but that string may contain some structure (json, yaml) that is intended to be parsed into a structured representation. Understanding what the exact output is can help determine if there may be a need for different parsing.LangSmith provides a',
metadata: [Object]
},
Document {
pageContent: 'many tokens an agent usedDebugging​Debugging LLMs, chains, and agents can be tough. LangSmith helps solve the following pain points:What was the exact input to the LLM?​LLM calls are often tricky and non-deterministic. The inputs/outputs may seem straightforward, given they are technically string → string (or chat messages → chat message), but this can be misleading as the input string is usually constructed from a combination of user input and auxiliary functions.Most inputs to an LLM call are',
metadata: [Object]
},
Document {
pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be',
metadata: [Object]
},
Document {
pageContent: 'LangSmith makes it easy to manually review and annotate runs through annotation queues.These queues allow you to select any runs based on criteria like model type or automatic evaluation scores, and queue them up for human review. As a reviewer, you can then quickly step through the runs, viewing the input, output, and any existing tags before adding your own feedback.We often use this for a couple of reasons:To assess subjective qualities that automatic evaluators struggle with, like',
metadata: [Object]
}
],
answer: 'Yes, LangSmith can help test LLM applications by providing the ability to monitor the application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Additionally, it allows for manual review and annotation of runs, which can be useful for assessing subjective qualities that automatic evaluators struggle with.'
}

Now let’s invoke the chain again with a followup question, which requires the query transformation step:

await conversationalRetrievalChain.invoke({
  messages: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage(
      "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise."
    ),
    new HumanMessage("Tell me more!"),
  ],
});
{
messages: [
HumanMessage {
content: 'Can LangSmith help test my LLM applications?'
},
AIMessage {
content: 'Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'
},
HumanMessage {
content: 'Tell me more!'
}
],
context: [
Document {
pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be',
metadata: [Object]
},
Document {
pageContent: 'LangSmith makes it easy to manually review and annotate runs through annotation queues.These queues allow you to select any runs based on criteria like model type or automatic evaluation scores, and queue them up for human review. As a reviewer, you can then quickly step through the runs, viewing the input, output, and any existing tags before adding your own feedback.We often use this for a couple of reasons:To assess subjective qualities that automatic evaluators struggle with, like',
metadata: [Object]
},
Document {
pageContent: 'inputs, and see what happens. At some point though, our application is performing\n' +
'well and we want to be more rigorous about testing changes. We can use a dataset\n' +
'that we’ve constructed along the way (see above). Alternatively, we could spend some\n' +
'time constructing a small dataset by hand. For these situations, LangSmith simplifies',
metadata: [Object]
},
Document {
pageContent: 'has helped us debug bugs in formatting logic, unexpected transformations to user input, and straight up missing user input.To a much lesser extent, this is also true of the output of an LLM. Oftentimes the output of an LLM is technically a string but that string may contain some structure (json, yaml) that is intended to be parsed into a structured representation. Understanding what the exact output is can help determine if there may be a need for different parsing.LangSmith provides a',
metadata: [Object]
}
],
answer: 'LangSmith simplifies the process of manually reviewing and annotating runs through annotation queues. These queues allow you to select runs based on criteria like model type or automatic evaluation scores and queue them up for human review. As a reviewer, you can quickly step through the runs, view the input, output, and any existing tags before adding your own feedback. This can be particularly useful for assessing subjective qualities that automatic evaluators struggle with, as well as for testing changes and debugging issues related to formatting logic, unexpected transformations to user input, and missing user input. Additionally, LangSmith can help in understanding the exact output of an LLM, which may contain structured data intended to be parsed into a structured representation.'
}

You can check out this LangSmith trace to see the internal query transformation step for yourself.

Streaming​

Because this chain is constructed with LCEL, you can use familiar methods like .stream() with it:

const stream = await conversationalRetrievalChain.stream({
  messages: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage(
      "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise."
    ),
    new HumanMessage("Tell me more!"),
  ],
});

for await (const chunk of stream) {
  console.log(chunk);
}
{
messages: [
HumanMessage {
content: 'Can LangSmith help test my LLM applications?'
},
AIMessage {
content: 'Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'
},
HumanMessage {
content: 'Tell me more!'
}
]
}
{
context: [
Document {
pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be',
metadata: [Object]
},
Document {
pageContent: 'LangSmith makes it easy to manually review and annotate runs through annotation queues.These queues allow you to select any runs based on criteria like model type or automatic evaluation scores, and queue them up for human review. As a reviewer, you can then quickly step through the runs, viewing the input, output, and any existing tags before adding your own feedback.We often use this for a couple of reasons:To assess subjective qualities that automatic evaluators struggle with, like',
metadata: [Object]
},
Document {
pageContent: 'inputs, and see what happens. At some point though, our application is performing\n' +
'well and we want to be more rigorous about testing changes. We can use a dataset\n' +
'that we’ve constructed along the way (see above). Alternatively, we could spend some\n' +
'time constructing a small dataset by hand. For these situations, LangSmith simplifies',
metadata: [Object]
},
Document {
pageContent: 'has helped us debug bugs in formatting logic, unexpected transformations to user input, and straight up missing user input.To a much lesser extent, this is also true of the output of an LLM. Oftentimes the output of an LLM is technically a string but that string may contain some structure (json, yaml) that is intended to be parsed into a structured representation. Understanding what the exact output is can help determine if there may be a need for different parsing.LangSmith provides a',
metadata: [Object]
}
]
}
{ answer: '' }
{ answer: 'Lang' }
{ answer: 'Smith' }
{ answer: ' simpl' }
{ answer: 'ifies' }
{ answer: ' the' }
{ answer: ' process' }
{ answer: ' of' }
{ answer: ' manually' }
{ answer: ' reviewing' }
{ answer: ' and' }
{ answer: ' annot' }
{ answer: 'ating' }
{ answer: ' runs' }
{ answer: ' through' }
{ answer: ' annotation' }
{ answer: ' queues' }
{ answer: '.' }
{ answer: ' These' }
{ answer: ' queues' }
{ answer: ' allow' }
{ answer: ' you' }
{ answer: ' to' }
{ answer: ' select' }
{ answer: ' runs' }
{ answer: ' based' }
{ answer: ' on' }
{ answer: ' criteria' }
{ answer: ' like' }
{ answer: ' model' }
{ answer: ' type' }
{ answer: ' or' }
{ answer: ' automatic' }
{ answer: ' evaluation' }
{ answer: ' scores' }
{ answer: ' and' }
{ answer: ' queue' }
{ answer: ' them' }
{ answer: ' up' }
{ answer: ' for' }
{ answer: ' human' }
{ answer: ' review' }
{ answer: '.' }
{ answer: ' As' }
{ answer: ' a' }
{ answer: ' reviewer' }
{ answer: ',' }
{ answer: ' you' }
{ answer: ' can' }
{ answer: ' quickly' }
{ answer: ' step' }
{ answer: ' through' }
{ answer: ' the' }
{ answer: ' runs' }
{ answer: ',' }
{ answer: ' view' }
{ answer: ' the' }
{ answer: ' input' }
{ answer: ',' }
{ answer: ' output' }
{ answer: ',' }
{ answer: ' and' }
{ answer: ' any' }
{ answer: ' existing' }
{ answer: ' tags' }
{ answer: ' before' }
{ answer: ' adding' }
{ answer: ' your' }
{ answer: ' own' }
{ answer: ' feedback' }
{ answer: '.' }
{ answer: ' This' }
{ answer: ' can' }
{ answer: ' be' }
{ answer: ' particularly' }
{ answer: ' useful' }
{ answer: ' for' }
{ answer: ' assessing' }
{ answer: ' subjective' }
{ answer: ' qualities' }
{ answer: ' that' }
{ answer: ' automatic' }
{ answer: ' evalu' }
{ answer: 'ators' }
{ answer: ' struggle' }
{ answer: ' with' }
{ answer: ' and' }
{ answer: ' for' }
{ answer: ' rigor' }
{ answer: 'ously' }
{ answer: ' testing' }
{ answer: ' changes' }
{ answer: ' in' }
{ answer: ' your' }
{ answer: ' application' }
{ answer: '.' }
{ answer: ' Additionally' }
{ answer: ',' }
{ answer: ' Lang' }
{ answer: 'Smith' }
{ answer: ' can' }
{ answer: ' help' }
{ answer: ' debug' }
{ answer: ' bugs' }
{ answer: ' in' }
{ answer: ' formatting' }
{ answer: ' logic' }
{ answer: ',' }
{ answer: ' unexpected' }
{ answer: ' transformations' }
{ answer: ' to' }
{ answer: ' user' }
{ answer: ' input' }
{ answer: ',' }
{ answer: ' and' }
{ answer: ' missing' }
{ answer: ' user' }
{ answer: ' input' }
{ answer: '.' }
{ answer: ' It' }
{ answer: ' can' }
{ answer: ' also' }
{ answer: ' assist' }
{ answer: ' in' }
{ answer: ' understanding' }
{ answer: ' the' }
{ answer: ' output' }
{ answer: ' of' }
{ answer: ' an' }
{ answer: ' L' }
{ answer: 'LM' }
{ answer: ',' }
{ answer: ' which' }
{ answer: ' may' }
{ answer: ' contain' }
{ answer: ' structured' }
{ answer: ' data' }
{ answer: ' that' }
{ answer: ' needs' }
{ answer: ' to' }
{ answer: ' be' }
{ answer: ' parsed' }
{ answer: ' into' }
{ answer: ' a' }
{ answer: ' structured' }
{ answer: ' representation' }
{ answer: '.' }
{ answer: '' }
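
If you only need the final answer as a single string rather than the incremental chunks, you can accumulate the answer field as it streams. The variable names below are illustrative, and for brevity this sketch streams a response to a single question using the same chain:

const answerStream = await conversationalRetrievalChain.stream({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
});

let finalAnswer = "";
for await (const chunk of answerStream) {
  // Only some chunks carry an `answer` field; concatenate those as they arrive.
  if (chunk.answer !== undefined) {
    finalAnswer += chunk.answer;
  }
}

console.log(finalAnswer);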

Further reading​

This guide only scratches the surface of retrieval techniques. For more on different ways of ingesting, preparing, and retrieving the most relevant data, check out this section of the docs.

