ChatGooglePaLM

note

This integration does not support gemini-* models. See the Google GenAI or VertexAI integrations instead.

To use the Google PaLM API, first install the required packages:

npm install google-auth-library @google-ai/generativelanguage @langchain/community
tip

We're unifying model params across all packages. We now suggest using model instead of modelName, and apiKey for API keys.

Create an API key from Google MakerSuite. You can then set the key as the GOOGLE_PALM_API_KEY environment variable or pass it as the apiKey parameter when instantiating the model.
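
For example, to set the environment variable in a bash-style shell:

export GOOGLE_PALM_API_KEY="<YOUR API KEY>"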

import { ChatGooglePaLM } from "@langchain/community/chat_models/googlepalm";
import {
  AIMessage,
  HumanMessage,
  SystemMessage,
} from "@langchain/core/messages";

export const run = async () => {
  const model = new ChatGooglePaLM({
    apiKey: "<YOUR API KEY>", // or set it as the `GOOGLE_PALM_API_KEY` environment variable
    temperature: 0.7, // OPTIONAL
    model: "models/chat-bison-001", // OPTIONAL
    topK: 40, // OPTIONAL
    topP: 1, // OPTIONAL
    examples: [
      // OPTIONAL
      {
        input: new HumanMessage("What is your favorite sock color?"),
        output: new AIMessage("My favorite sock color be arrrr-ange!"),
      },
    ],
  });

  // ask questions
  const questions = [
    new SystemMessage(
      "You are a funny assistant that answers in pirate language."
    ),
    new HumanMessage("What is your favorite food?"),
  ];

  // You can also use the model as part of a chain
  const res = await model.invoke(questions);
  console.log({ res });
};
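
As the final comment in the snippet suggests, the model can also be composed into a chain. A minimal sketch using ChatPromptTemplate from @langchain/core/prompts (the prompt text and input variable name here are illustrative):

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatGooglePaLM } from "@langchain/community/chat_models/googlepalm";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a funny assistant that answers in pirate language."],
  ["human", "{question}"],
]);
const chatModel = new ChatGooglePaLM({ temperature: 0.7 });

// Piping the prompt into the model yields a runnable chain
const chain = prompt.pipe(chatModel);
const chainRes = await chain.invoke({ question: "What is your favorite food?" });
console.log(chainRes.content);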

API Reference:

ChatGooglePaLM

ChatGoogleVertexAI

LangChain.js supports Google Vertex AI chat models as an integration. It supports two different methods of authentication based on whether you're running in a Node environment or a web environment.

Setup

Node

To call Vertex AI models in Node, you'll need to install Google's official auth client as a peer dependency:

npm install google-auth-library @langchain/community

You should make sure the Vertex AI API is enabled for the relevant project and that you've authenticated to Google Cloud using one of these methods:

  • You are logged into an account (via gcloud auth application-default login) that has access to that project.
  • You are running on a machine using a service account that has access to the project.
  • You have downloaded the credentials for a service account with access to the project and set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of this file.
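
For example, for the first and third options (the key file path here is illustrative):

# Option 1: authenticate with your own Google account
gcloud auth application-default login

# Option 3: point at a downloaded service account key file
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"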

Web

To call Vertex AI models in web environments (like Edge functions), you'll need to install the web-auth-library package as a peer dependency:

npm install web-auth-library

Then, you'll need to add your service account credentials directly as the GOOGLE_VERTEX_AI_WEB_CREDENTIALS environment variable:

GOOGLE_VERTEX_AI_WEB_CREDENTIALS={"type":"service_account","project_id":"YOUR_PROJECT-12345",...}

You can also pass your credentials directly in code like this:

import { ChatGoogleVertexAI } from "@langchain/community/chat_models/googlevertexai";

const model = new ChatGoogleVertexAI({
  authOptions: {
    credentials: {"type":"service_account","project_id":"YOUR_PROJECT-12345",...},
  },
});

Usage

Several models are available and can be specified by the model attribute in the constructor. These include:

  • chat-bison (default)
  • chat-bison-32k
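
For example, to select a non-default model (a sketch assuming the model names above are available to your project):

import { ChatGoogleVertexAI } from "@langchain/community/chat_models/googlevertexai";

const model32k = new ChatGoogleVertexAI({
  model: "chat-bison-32k", // omit to use the default model
});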

The ChatGoogleVertexAI class works just like other chat-based LLMs, with a few exceptions:

  1. The first SystemMessage passed in is mapped to the "context" parameter that the PaLM model expects. No other SystemMessages are allowed.
  2. After the first SystemMessage, there must be an odd number of messages, representing a conversation between a human and the model.
  3. Human messages must alternate with AI messages.

import { ChatGoogleVertexAI } from "@langchain/community/chat_models/googlevertexai";
// Or, if using the web entrypoint:
// import { ChatGoogleVertexAI } from "@langchain/community/chat_models/googlevertexai/web";

const model = new ChatGoogleVertexAI({
  temperature: 0.7,
});
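
For example, a message sequence that satisfies all three rules, reusing the model above (the conversation content is illustrative):

import {
  AIMessage,
  HumanMessage,
  SystemMessage,
} from "@langchain/core/messages";

// One optional SystemMessage first (mapped to "context"), then an odd
// number of alternating human/AI messages ending with a human turn.
const messages = [
  new SystemMessage("You are a funny assistant that answers in pirate language."),
  new HumanMessage("What is your favorite color?"),
  new AIMessage("Arrr, it be sea-green, matey!"),
  new HumanMessage("What is your favorite food?"),
];
const res = await model.invoke(messages);
console.log(res.content);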

Streaming

ChatGoogleVertexAI also supports streaming responses back in multiple chunks for lower perceived latency:

import { ChatGoogleVertexAI } from "@langchain/community/chat_models/googlevertexai";
// Or, if using the web entrypoint:
// import { ChatGoogleVertexAI } from "@langchain/community/chat_models/googlevertexai/web";

const model = new ChatGoogleVertexAI({
  temperature: 0.7,
});
const stream = await model.stream([
  ["system", "You are a funny assistant that answers in pirate language."],
  ["human", "What is your favorite food?"],
]);

for await (const chunk of stream) {
  console.log(chunk);
}

/*
AIMessageChunk {
  content: ' Ahoy there, matey! My favorite food be fish, cooked any way ye ',
  additional_kwargs: {}
}
AIMessageChunk {
  content: 'like!',
  additional_kwargs: {}
}
AIMessageChunk {
  content: '',
  name: undefined,
  additional_kwargs: {}
}
*/
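
If you want the complete message rather than individual chunks, the chunks can be merged as they arrive; a minimal sketch using AIMessageChunk's concat method (the input message is illustrative):

import type { AIMessageChunk } from "@langchain/core/messages";

const chunkStream = await model.stream([
  ["human", "What is your favorite food?"],
]);

let finalChunk: AIMessageChunk | undefined;
for await (const chunk of chunkStream) {
  // Merge each incoming chunk into the accumulated message
  finalChunk = finalChunk === undefined ? chunk : finalChunk.concat(chunk);
}
console.log(finalChunk?.content);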

Examples

There is also an optional examples constructor parameter that can help the model understand what an appropriate response looks like.

import { ChatGoogleVertexAI } from "@langchain/community/chat_models/googlevertexai";
import {
  AIMessage,
  HumanMessage,
  SystemMessage,
} from "@langchain/core/messages";

// Or, if using the web entrypoint:
// import { ChatGoogleVertexAI } from "@langchain/community/chat_models/googlevertexai/web";

const examples = [
  {
    input: new HumanMessage("What is your favorite sock color?"),
    output: new AIMessage("My favorite sock color be arrrr-ange!"),
  },
];
const model = new ChatGoogleVertexAI({
  temperature: 0.7,
  examples,
});
const questions = [
  new SystemMessage(
    "You are a funny assistant that answers in pirate language."
  ),
  new HumanMessage("What is your favorite food?"),
];
// You can also use the model as part of a chain
const res = await model.invoke(questions);
console.log({ res });
