
ChatBedrockConverse

Amazon Bedrock is a fully managed service that makes Foundation Models (FMs) from leading AI startups and Amazon available via an API. ChatBedrockConverse uses Bedrock's Converse API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.

Setup

You'll need to install the @langchain/aws package:

npm install @langchain/aws
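
You'll also need AWS credentials with access to Bedrock. The examples below pass them explicitly via environment variables, but if you omit the credentials field, the underlying AWS SDK should fall back to its default credential provider chain (environment variables, shared config files, or an attached IAM role). A minimal sketch under that assumption:

import { ChatBedrockConverse } from "@langchain/aws";

// Relies on the AWS SDK default credential provider chain
// (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY, ~/.aws/credentials, or an IAM role).
const model = new ChatBedrockConverse({
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: "us-east-1",
});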

Usage

tip

We're unifying model params across all packages. We now suggest using model instead of modelName, and apiKey for API keys.

import { ChatBedrockConverse } from "@langchain/aws";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatBedrockConverse({
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: "us-east-1",
  credentials: {
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
  },
});

const res = await model.invoke([
  new HumanMessage({ content: "Tell me a joke" }),
]);
console.log(res);

/*
AIMessage {
  content: "Here's a joke for you:\n" +
    '\n' +
    "Why can't a bicycle stand up by itself? Because it's two-tired!",
  response_metadata: { ... },
  id: '08afa4fb-c212-4c1e-853a-d854972bec78',
  usage_metadata: { input_tokens: 11, output_tokens: 28, total_tokens: 39 }
}
*/

const stream = await model.stream([
  new HumanMessage({ content: "Tell me a joke" }),
]);

for await (const chunk of stream) {
  console.log(chunk.content);
}

/*
Here
's
a
silly
joke
for
you
:


Why
di
d the
tom
ato
turn
re
d?
Because
it
saw
the
sal
a
d
dressing
!
*/
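
If you want the complete message after streaming rather than individual tokens, you can merge the chunks as they arrive. A minimal sketch, reusing the model defined above and assuming the concat method on message chunks from @langchain/core is available in your installed version:

import { HumanMessage } from "@langchain/core/messages";
import type { AIMessageChunk } from "@langchain/core/messages";

const chunkStream = await model.stream([
  new HumanMessage({ content: "Tell me a joke" }),
]);

let aggregate: AIMessageChunk | undefined;
for await (const chunk of chunkStream) {
  // Merge each incoming chunk's content and metadata into one running message.
  aggregate = aggregate === undefined ? chunk : aggregate.concat(chunk);
}
console.log(aggregate?.content);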

API Reference: ChatBedrockConverse from @langchain/aws | HumanMessage from @langchain/core/messages

tip

See the LangSmith traces for the above example here, and here for streaming.

Multimodal inputs

tip

Multimodal inputs are currently only supported by Anthropic Claude-3 models.

Anthropic Claude-3 models hosted on Bedrock have multimodal capabilities and can reason about images. Here's an example:

import * as fs from "node:fs/promises";

import { ChatBedrockConverse } from "@langchain/aws";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatBedrockConverse({
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: "us-east-1",
  credentials: {
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
  },
});

const imageData = await fs.readFile("./hotdog.jpg");

const res = await model.invoke([
  new HumanMessage({
    content: [
      {
        type: "text",
        text: "What's in this image?",
      },
      {
        type: "image_url",
        image_url: {
          url: `data:image/jpeg;base64,${imageData.toString("base64")}`,
        },
      },
    ],
  }),
]);
console.log(res);

/*
AIMessage {
  content: 'The image shows a hot dog or frankfurter. It has a reddish-pink sausage inside a light tan-colored bread bun. The hot dog bun is split open, allowing the sausage filling to be visible. The image appears to be focused solely on depicting this classic American fast food item against a plain white background.',
  response_metadata: { ... },
  id: '1608d043-575a-450e-8eac-2fef6297cfe2',
  usage_metadata: { input_tokens: 276, output_tokens: 75, total_tokens: 351 }
}
*/
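
The same pattern works for remote images if you download and base64-encode them first. A sketch, reusing the model above; the URL here is a hypothetical placeholder for any JPEG you are allowed to fetch:

import { HumanMessage } from "@langchain/core/messages";

// Hypothetical image URL; replace with a real JPEG you have access to.
const imageUrl = "https://example.com/hotdog.jpg";
const imageResponse = await fetch(imageUrl);
const remoteImageBase64 = Buffer.from(await imageResponse.arrayBuffer()).toString("base64");

const remoteRes = await model.invoke([
  new HumanMessage({
    content: [
      { type: "text", text: "What's in this image?" },
      {
        type: "image_url",
        image_url: { url: `data:image/jpeg;base64,${remoteImageBase64}` },
      },
    ],
  }),
]);
console.log(remoteRes.content);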

API Reference: ChatBedrockConverse from @langchain/aws | HumanMessage from @langchain/core/messages

tip

See the LangSmith trace here.

Tool calling

The examples below demonstrate how to use tool calling, along with the withStructuredOutput method to easily compose structured output LLM calls.

import { ChatBedrockConverse } from "@langchain/aws";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const model = new ChatBedrockConverse({
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: "us-east-1",
  credentials: {
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
  },
});

const weatherTool = tool(
  ({ city, state }) => `The weather in ${city}, ${state} is 72°F and sunny`,
  {
    name: "weather_tool",
    description: "Get the weather for a city",
    schema: z.object({
      city: z.string().describe("The city to get the weather for"),
      state: z.string().describe("The state to get the weather for").optional(),
    }),
  }
);

const modelWithTools = model.bindTools([weatherTool]);
// Optionally, you can bind tools via the `.bind` method:
// const modelWithTools = model.bind({
//   tools: [weatherTool],
// });

const res = await modelWithTools.invoke("What's the weather in New York?");
console.log(res);

/*
AIMessage {
  content: [
    {
      type: 'text',
      text: "Okay, let's get the weather for New York City."
    }
  ],
  response_metadata: { ... },
  id: '49a97da0-e971-4d7f-9f04-2495e068c15e',
  tool_calls: [
    {
      id: 'tooluse_O6Q1Ghm7SmKA9mn2ZKmBzg',
      name: 'weather_tool',
      args: {
        'city': 'New York'
      }
    }
  ],
  usage_metadata: { input_tokens: 289, output_tokens: 68, total_tokens: 357 }
}
*/

API Reference: ChatBedrockConverse from @langchain/aws | tool from @langchain/core/tools

Check out the output of this tool call! The model uses chain-of-thought reasoning, describing what it's going to do in plain text before invoking the tool: "Okay, let's get the weather for New York City."
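
To complete the loop, you can execute the tool with the arguments the model produced and pass the result back as a ToolMessage, so the model can answer in natural language. A sketch that continues from the example above (modelWithTools, weatherTool, res), assuming a @langchain/core version that exposes tool_calls and ToolMessage:

import { HumanMessage, ToolMessage } from "@langchain/core/messages";

const toolCall = res.tool_calls?.[0];
if (toolCall !== undefined) {
  // Run the tool with the model-generated arguments.
  const toolResult = await weatherTool.invoke(
    toolCall.args as { city: string; state?: string }
  );

  // Send back the original question, the model's tool call, and the tool result.
  const finalRes = await modelWithTools.invoke([
    new HumanMessage("What's the weather in New York?"),
    res,
    new ToolMessage({
      content: String(toolResult),
      tool_call_id: toolCall.id!,
    }),
  ]);
  console.log(finalRes.content);
}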

tip

See the LangSmith trace here.

.withStructuredOutput({ ... })

Using the .withStructuredOutput method, you can easily make the LLM return structured output, given only a Zod or JSON schema:

import { ChatBedrockConverse } from "@langchain/aws";
import { z } from "zod";

const model = new ChatBedrockConverse({
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: "us-east-1",
  credentials: {
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
  },
});

const weatherSchema = z
  .object({
    city: z.string().describe("The city to get the weather for"),
    state: z.string().describe("The state to get the weather for").optional(),
  })
  .describe("Get the weather for a city");

const modelWithStructuredOutput = model.withStructuredOutput(weatherSchema, {
  name: "weather_tool", // Optional, defaults to 'extract'
});

const res = await modelWithStructuredOutput.invoke(
  "What's the weather in New York?"
);
console.log(res);

/*
{ city: 'New York', state: 'NY' }
*/
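
If you also want the raw AIMessage alongside the parsed object (for example, to inspect token usage), withStructuredOutput accepts an includeRaw option. A sketch, continuing from the model and weatherSchema above and assuming includeRaw is supported in your installed version:

const modelWithRawOutput = model.withStructuredOutput(weatherSchema, {
  name: "weather_tool",
  includeRaw: true, // return { raw, parsed } instead of just the parsed object
});

const rawRes = await modelWithRawOutput.invoke("What's the weather in New York?");
console.log(rawRes.parsed); // e.g. { city: 'New York', state: 'NY' }
console.log(rawRes.raw); // the full AIMessage, including response and usage metadata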

API Reference: ChatBedrockConverse from @langchain/aws

tip

See the LangSmith trace here.

