# INVALID_PROMPT_INPUT
A prompt template received missing or invalid input variables.
One unexpected way this can occur is if you add a JSON object directly into a prompt template:
```ts
import { PromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const prompt = PromptTemplate.fromTemplate(`You are a helpful assistant.
Here is an example of how you should respond:

{
  "firstName": "John",
  "lastName": "Doe",
  "age": 21
}

Now, answer the following question:

{question}`);
```
You might think that the above prompt template should require a single input variable named `question`, but because the curly braces (`{` and `}`) around the JSON object are not escaped, they will be interpreted as an additional input variable. To include literal curly braces in your template, escape them by doubling them, like this:
```ts
import { PromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const prompt = PromptTemplate.fromTemplate(`You are a helpful assistant.
Here is an example of how you should respond:

{{
  "firstName": "John",
  "lastName": "Doe",
  "age": 21
}}

Now, answer the following question:

{question}`);
```
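To see why unescaped braces produce extra input variables, here is a minimal sketch of f-string-style variable extraction. This is an illustration only, not LangChain's actual parser; the function name and regex are assumptions made for demonstration:

```typescript
// Minimal sketch of f-string-style variable extraction (illustrative only,
// NOT LangChain's actual implementation). Doubled braces `{{` / `}}` are
// treated as escaped literals; every remaining `{...}` span is reported
// as an input variable.
function extractInputVariables(template: string): string[] {
  // Strip escaped braces first so they are not matched as variables.
  const unescaped = template.replace(/{{/g, "").replace(/}}/g, "");
  const variables: string[] = [];
  for (const match of unescaped.matchAll(/{([^{}]*)}/g)) {
    variables.push(match[1]);
  }
  return variables;
}

const unescapedTemplate = `{
"firstName": "John"
}
Question: {question}`;

const escapedTemplate = `{{
"firstName": "John"
}}
Question: {question}`;

// The unescaped JSON braces are picked up as a bogus extra variable...
console.log(extractInputVariables(unescapedTemplate));
// ...while the escaped version only requires "question".
console.log(extractInputVariables(escapedTemplate));
```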
## Troubleshooting
The following may help resolve this error:
- Double-check your prompt template to ensure that it is correct.
- If you are using default formatting and you are using curly braces (`{`) anywhere in your template, they should be double escaped like this: `{{`, as shown above.
- If you are using a `MessagesPlaceholder`, make sure that you are passing in an array of messages or message-like objects.
- If you are using shorthand tuples to declare your prompt template, make sure that the variable name is wrapped in curly braces (`["placeholder", "{messages}"]`).
- Try viewing the inputs into your prompt template using LangSmith or log statements to confirm they appear as expected.
- If you are pulling a prompt from the LangChain Prompt Hub, try pulling and logging it or running it in isolation with a sample input to confirm that it is what you expect.
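If you are not using LangSmith, one way to confirm what your template receives is to wrap formatting in a small logging helper. The helper below (`formatWithLogging` and its simple `{name}` substitution are assumptions made for illustration, not a LangChain API) logs the inputs and reports any missing variables before formatting:

```typescript
// Hypothetical logging helper (illustrative only, not a LangChain API).
// Logs the inputs a template receives and throws a descriptive error
// when an expected input variable is missing.
type Inputs = Record<string, string>;

function formatWithLogging(
  template: string,
  expectedVariables: string[],
  inputs: Inputs
): string {
  console.log("Prompt inputs:", JSON.stringify(inputs));
  const missing = expectedVariables.filter((v) => !(v in inputs));
  if (missing.length > 0) {
    throw new Error(`Missing input variables: ${missing.join(", ")}`);
  }
  // Simple `{name}` substitution for illustration only.
  return template.replace(/{([^{}]+)}/g, (_, name) => inputs[name] ?? "");
}

const result = formatWithLogging(
  "Answer the following question: {question}",
  ["question"],
  { question: "What is 2 + 2?" }
);
console.log(result); // "Answer the following question: What is 2 + 2?"
```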