
Using Buffer Memory with Chat Models

This example covers how to use chat-specific memory classes with chat models. The key thing to notice is that setting `returnMessages: true` makes the memory return a list of chat messages instead of a single string.

```bash
npm install @langchain/openai
```
```typescript
import { ConversationChain } from "langchain/chains";
import { ChatOpenAI } from "@langchain/openai";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { BufferMemory } from "langchain/memory";

const chat = new ChatOpenAI({ temperature: 0 });

const chatPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.",
  ],
  // The memory's message list is injected here under the "history" key.
  new MessagesPlaceholder("history"),
  ["human", "{input}"],
]);

const chain = new ConversationChain({
  memory: new BufferMemory({ returnMessages: true, memoryKey: "history" }),
  prompt: chatPrompt,
  llm: chat,
});

const response = await chain.invoke({
  input: "hi! whats up?",
});

console.log(response);
```
