Using Buffer Memory with Chat Models

This example covers how to use chat-specific memory classes with chat models. The key thing to notice is that setting returnMessages: true makes the memory return a list of chat messages instead of a string.
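To see the difference this setting makes, here is a minimal standalone sketch (it assumes the langchain package is installed; the logged values in the comments are illustrative, not exact output). With returnMessages: true the memory returns chat message objects, while the default returns a single formatted string:

import { BufferMemory } from "langchain/memory";

// With returnMessages: true, the stored history comes back as message objects.
const messageMemory = new BufferMemory({ returnMessages: true, memoryKey: "history" });
await messageMemory.saveContext({ input: "hi" }, { output: "hello!" });
console.log(await messageMemory.loadMemoryVariables({}));
// e.g. { history: [ HumanMessage { content: "hi" }, AIMessage { content: "hello!" } ] }

// With the default (returnMessages: false), the history is one formatted string.
const stringMemory = new BufferMemory({ memoryKey: "history" });
await stringMemory.saveContext({ input: "hi" }, { output: "hello!" });
console.log(await stringMemory.loadMemoryVariables({}));
// e.g. { history: "Human: hi\nAI: hello!" }

The full example below wires this message-returning memory into a ConversationChain backed by a chat model.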

npm install langchain @langchain/core @langchain/openai
import { ConversationChain } from "langchain/chains";
import { ChatOpenAI } from "@langchain/openai";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { BufferMemory } from "langchain/memory";

const chat = new ChatOpenAI({ temperature: 0 });

const chatPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.",
  ],
  // The memory's stored messages are injected into this placeholder.
  new MessagesPlaceholder("history"),
  ["human", "{input}"],
]);

const chain = new ConversationChain({
  // memoryKey must match the MessagesPlaceholder name above;
  // returnMessages: true passes the history as chat messages rather than a string.
  memory: new BufferMemory({ returnMessages: true, memoryKey: "history" }),
  prompt: chatPrompt,
  llm: chat,
});

const response = await chain.invoke({
  input: "hi! whats up?",
});

console.log(response);
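
Because the memory carries the conversation forward, a second call on the same chain can refer back to the first exchange. A minimal follow-up sketch, assuming the chain above is still in scope (the input string is just an illustration):

// The memory injects the first exchange into the "history" placeholder,
// so the model can answer questions about earlier messages.
const followUpResponse = await chain.invoke({
  input: "What did I just say to you?",
});
console.log(followUpResponse);

// Inspect the stored history: because returnMessages is true,
// it is an array of chat messages rather than a single string.
console.log(await chain.memory?.loadMemoryVariables({}));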
