
Step Back Prompting

Sometimes search quality and model generations can be tripped up by the specifics of a question. One way to handle this is to first generate a more abstract, "step back" question and to query based on both the original and step back question.

For example, if we ask a question of the form "Why does my LangGraph agent streamEvents return {LONG_TRACE} instead of {DESIRED_OUTPUT}?" we will likely retrieve more relevant documents if we search with the more generic question "How does streamEvents work with a LangGraph agent?" than if we search with the specific user question.

Let's take a look at how we might use step back prompting in the context of our Q&A bot over the LangChain YouTube videos.

Setup

Install dependencies

yarn add @langchain/core zod

Set environment variables

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true

Step back question generation

Generating good step back questions comes down to writing a good prompt:

Pick your chat model:

Install dependencies

yarn add @langchain/openai 

Add environment variables

OPENAI_API_KEY=your-api-key

Instantiate the model

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo-0125",
  temperature: 0,
});
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const system = `You are an expert at taking a specific question and extracting a more generic question that gets at \
the underlying principles needed to answer the specific question.

You will be asked about a set of software for building LLM-powered applications called LangChain, LangGraph, LangServe, and LangSmith.

LangChain is a Python framework that provides a large set of integrations that can easily be composed to build LLM applications.
LangGraph is a Python package built on top of LangChain that makes it easy to build stateful, multi-actor LLM applications.
LangServe is a Python package built on top of LangChain that makes it easy to deploy a LangChain application as a REST API.
LangSmith is a platform that makes it easy to trace and test LLM applications.

Given a specific user question about one or more of these products, write a more generic question that needs to be answered in order to answer the specific question. \

If you don't recognize a word or acronym, do not try to rewrite it.

Write concise questions.`;
const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["human", "{question}"],
]);
const stepBack = prompt.pipe(llm).pipe(new StringOutputParser());
const question = `I built a LangGraph agent using Gemini Pro and tools like vectorstores and duckduckgo search.
How do I get just the LLM calls from the event stream`;
const result = await stepBack.invoke({ question: question });
console.log(result);
What are the specific methods or functions within LangGraph that allow for filtering or extracting LLM calls from an event stream?

Returning the step back question and the original question

To increase our recall we'll likely want to retrieve documents based on both the step back question and the original question. We can easily return both like so:

import { RunnablePassthrough } from "@langchain/core/runnables";

const stepBackAndOriginal = RunnablePassthrough.assign({ stepBack });

await stepBackAndOriginal.invoke({ question: question });
{
  question: "I built a LangGraph agent using Gemini Pro and tools like vectorstores and duckduckgo search.\n" +
    "How do"... 47 more characters,
  stepBack: "What is the process for extracting specific types of calls, such as LLM calls, from an event stream "... 37 more characters
}
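
From here we could feed both questions to whatever retriever backs our Q&A bot and merge the results. Below is a minimal sketch of that step, assuming a retriever over the video transcripts already exists; the retriever placeholder and the deduplication-by-content logic are illustrative, not something defined above:

import { RunnableLambda } from "@langchain/core/runnables";
import type { Document } from "@langchain/core/documents";

// Placeholder: substitute the vector store retriever you built over the
// LangChain YouTube video transcripts.
declare const retriever: {
  invoke: (query: string) => Promise<Document[]>;
};

const retrieveWithStepBack = RunnableLambda.from(
  async (input: { question: string; stepBack: string }) => {
    // Search with both the original and the step back question in parallel...
    const [originalDocs, stepBackDocs] = await Promise.all([
      retriever.invoke(input.question),
      retriever.invoke(input.stepBack),
    ]);
    // ...then deduplicate by page content before passing the docs downstream.
    const seen = new Set<string>();
    return [...originalDocs, ...stepBackDocs].filter((doc) => {
      if (seen.has(doc.pageContent)) {
        return false;
      }
      seen.add(doc.pageContent);
      return true;
    });
  }
);

const retrievalChain = stepBackAndOriginal.pipe(retrieveWithStepBack);
// const docs = await retrievalChain.invoke({ question });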

Using function-calling to get structured output

If we were composing this technique with other query analysis techniques, we'd likely be using function calling to extract structured query objects. We can use function-calling for step back prompting like so:

import { z } from "zod";

const stepBackQuerySchema = z.object({
  stepBackQuestion: z
    .string()
    .describe(
      "Given a specific user question about one or more of these products, write a more generic question that needs to be answered in order to answer the specific question."
    ),
});

const llmWithTools = llm.withStructuredOutput(stepBackQuerySchema, {
  name: "StepBackQuery",
});
const stepBackChain = prompt.pipe(llmWithTools);
await stepBackChain.invoke({ question: question });
{
  stepBackQuestion: "What are the steps involved in extracting specific types of calls from an event stream in a software"... 13 more characters
}
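
If we're combining this with the earlier approach of returning both questions, the structured output slots into the same passthrough pattern. Here's a sketch, where the final .pipe step (an illustrative arrow function) just pulls the generated string out of the structured result:

const structuredStepBackAndOriginal = RunnablePassthrough.assign({
  stepBack: prompt
    .pipe(llmWithTools)
    // Extract the plain string from the structured StepBackQuery output.
    .pipe((query) => query.stepBackQuestion),
});

await structuredStepBackAndOriginal.invoke({ question: question });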
