BedrockChat

Amazon Bedrock is a fully managed service that makes Foundation Models (FMs) from leading AI startups and Amazon available via an API. You can choose from a wide range of FMs to find the model that is best suited for your use case.

Setup

tip

The ChatBedrockConverse chat model is now available via @langchain/aws. This package offers tool calling support for a wider range of models.
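
If you'd like to try it, here's a minimal sketch of instantiating that newer integration (assumes you've run npm install @langchain/aws):

import { ChatBedrockConverse } from "@langchain/aws";

const converseModel = new ChatBedrockConverse({
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: "us-east-1",
});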

You'll need to install the @langchain/community package:

npm install @langchain/community

Then, you'll need to install a few official AWS packages as peer dependencies:

npm install @aws-crypto/sha256-js @aws-sdk/credential-provider-node @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types

You can also use BedrockChat in web environments such as Edge functions or Cloudflare Workers by omitting the @aws-sdk/credential-provider-node dependency and using the web entrypoint:

npm install @aws-crypto/sha256-js @smithy/protocol-http @smithy/signature-v4 @smithy/eventstream-codec @smithy/util-utf8 @aws-sdk/types
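
When using the web entrypoint, the Node.js default credential provider is unavailable, so you'll generally need to pass credentials explicitly. A minimal sketch (the key values are placeholders):

import { BedrockChat } from "@langchain/community/chat_models/bedrock/web";

const model = new BedrockChat({
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: "us-east-1",
  credentials: {
    // Placeholders: supply your own AWS credentials here.
    accessKeyId: "YOUR_ACCESS_KEY_ID",
    secretAccessKey: "YOUR_SECRET_ACCESS_KEY",
  },
});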

Usage

tip

We're unifying model params across all packages. We now suggest using model instead of modelName, and apiKey for API keys.

Currently, only Anthropic, Cohere, and Mistral models are supported with the chat model integration. For foundation models from AI21 or Amazon, see the text generation Bedrock variant.

import { BedrockChat } from "@langchain/community/chat_models/bedrock";
// Or, from web environments:
// import { BedrockChat } from "@langchain/community/chat_models/bedrock/web";
import { HumanMessage } from "@langchain/core/messages";

// If no credentials are provided, the default credentials from
// @aws-sdk/credential-provider-node will be used.

// modelKwargs are additional parameters passed to the model when it
// is invoked.
const model = new BedrockChat({
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: "us-east-1",
  // endpointUrl: "custom.amazonaws.com",
  // credentials: {
  //   accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
  //   secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
  // },
  // modelKwargs: {
  //   anthropic_version: "bedrock-2023-05-31",
  // },
});

// Other model names include:
// "mistral.mistral-7b-instruct-v0:2"
// "mistral.mixtral-8x7b-instruct-v0:1"
//
// For a full list, see the Bedrock page in AWS.

const res = await model.invoke([
  new HumanMessage({ content: "Tell me a joke" }),
]);
console.log(res);

/*
AIMessage {
  content: "Here's a silly joke for you:\n" +
    '\n' +
    "Why can't a bicycle stand up by itself?\n" +
    "Because it's two-tired!",
  name: undefined,
  additional_kwargs: { id: 'msg_01NYN7Rf39k4cgurqpZWYyDh' }
}
*/

const stream = await model.stream([
  new HumanMessage({ content: "Tell me a joke" }),
]);

for await (const chunk of stream) {
  console.log(chunk.content);
}

/*
Here
's
a
silly
joke
for
you
:


Why
can
't
a
bicycle
stand
up
by
itself
?

Because
it
's
two
-
tired
!
*/
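
If you'd rather assemble the streamed chunks into a single message, you can concatenate them as they arrive. A minimal sketch, continuing the example above (a fresh stream is created, since a stream can only be consumed once):

import { AIMessageChunk } from "@langchain/core/messages";

const stream2 = await model.stream([
  new HumanMessage({ content: "Tell me a joke" }),
]);

let finalChunk: AIMessageChunk | undefined;
for await (const chunk of stream2) {
  // Message chunks support .concat(), which merges content and metadata.
  finalChunk = finalChunk !== undefined ? finalChunk.concat(chunk) : chunk;
}
console.log(finalChunk?.content);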

API Reference:

  • BedrockChat from @langchain/community/chat_models/bedrock

Multimodal inputs

tip

Multimodal inputs are currently only supported by Anthropic Claude-3 models.

Anthropic Claude-3 models hosted on Bedrock have multimodal capabilities and can reason about images. Here's an example:

import * as fs from "node:fs/promises";

import { BedrockChat } from "@langchain/community/chat_models/bedrock";
// Or, from web environments:
// import { BedrockChat } from "@langchain/community/chat_models/bedrock/web";
import { HumanMessage } from "@langchain/core/messages";

// If no credentials are provided, the default credentials from
// @aws-sdk/credential-provider-node will be used.

// modelKwargs are additional parameters passed to the model when it
// is invoked.
const model = new BedrockChat({
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: "us-east-1",
  // endpointUrl: "custom.amazonaws.com",
  // credentials: {
  //   accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
  //   secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
  // },
  // modelKwargs: {
  //   anthropic_version: "bedrock-2023-05-31",
  // },
});

const imageData = await fs.readFile("./hotdog.jpg");

const res = await model.invoke([
  new HumanMessage({
    content: [
      {
        type: "text",
        text: "What's in this image?",
      },
      {
        type: "image_url",
        image_url: {
          url: `data:image/jpeg;base64,${imageData.toString("base64")}`,
        },
      },
    ],
  }),
]);
console.log(res);

/*
AIMessage {
  content: 'The image shows a hot dog or frankfurter. It has a reddish-pink sausage filling encased in a light brown bread-like bun. The hot dog bun is split open, revealing the sausage inside. This classic fast food item is a popular snack or meal, often served at events like baseball games or cookouts. The hot dog appears to be against a plain white background, allowing the details and textures of the food item to be clearly visible.',
  name: undefined,
  additional_kwargs: { id: 'msg_01XrLPL9vCb82U3Wrrpza18p' }
}
*/
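
Images don't have to come from disk; any base64 data URL works. Here's a sketch of fetching a remote image first (the URL is a placeholder):

// Placeholder URL: replace with a real image location.
const response = await fetch("https://example.com/hotdog.jpg");
const remoteImageData = Buffer.from(await response.arrayBuffer());
const dataUrl = `data:image/jpeg;base64,${remoteImageData.toString("base64")}`;
// Then pass dataUrl as the image_url.url value, as in the example above.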

API Reference:

  • BedrockChat from @langchain/community/chat_models/bedrock

Tool calling

info

Not all Bedrock models support tool calling. Please refer to the model documentation for more information.

The examples below demonstrate how to use tool calling, as well as the withStructuredOutput method for easily composing structured output LLM calls.

import { BedrockChat } from "@langchain/community/chat_models/bedrock";
// Or, from web environments:
// import { BedrockChat } from "@langchain/community/chat_models/bedrock/web";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

const model = new BedrockChat({
  region: process.env.BEDROCK_AWS_REGION,
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  maxRetries: 0,
  credentials: {
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
  },
});

const weatherSchema = z
  .object({
    city: z.string().describe("The city to get the weather for"),
    state: z.string().describe("The state to get the weather for").optional(),
  })
  .describe("Get the weather for a city");

const modelWithTools = model.bindTools([
  {
    name: "weather_tool",
    description: weatherSchema.description,
    input_schema: zodToJsonSchema(weatherSchema),
  },
]);
// Optionally, you can bind tools via the `.bind` method:
// const modelWithTools = model.bind({
//   tools: [
//     {
//       name: "weather_tool",
//       description: weatherSchema.description,
//       input_schema: zodToJsonSchema(weatherSchema),
//     },
//   ],
// });

const res = await modelWithTools.invoke("What's the weather in New York?");
console.log(res);

/*
AIMessage {
  additional_kwargs: { id: 'msg_bdrk_01JF7hb4PNQPywP4gnBbgpHi' },
  response_metadata: {
    stop_reason: 'tool_use',
    usage: { input_tokens: 300, output_tokens: 85 }
  },
  tool_calls: [
    {
      name: 'weather_tool',
      args: {
        city: 'New York',
        state: 'NY'
      },
      id: 'toolu_bdrk_01AtEZRTCKioFXqhoNcpgaV7'
    }
  ],
}
*/
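
To actually answer the question, you can run the tool yourself and pass the result back to the model as a ToolMessage. A minimal sketch building on the example above (getWeather is a hypothetical stand-in for a real weather lookup):

import { HumanMessage, ToolMessage } from "@langchain/core/messages";

// Hypothetical local implementation of the weather tool.
const getWeather = async (city: string, state?: string) =>
  `It's 72°F and sunny in ${city}${state ? `, ${state}` : ""}.`;

const toolCall = res.tool_calls![0];
const toolResult = await getWeather(toolCall.args.city, toolCall.args.state);

// Send the original question, the model's tool call, and the tool's
// result back to the model so it can produce a final answer.
const finalRes = await modelWithTools.invoke([
  new HumanMessage("What's the weather in New York?"),
  res,
  new ToolMessage({ content: toolResult, tool_call_id: toolCall.id! }),
]);
console.log(finalRes.content);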

API Reference:

  • BedrockChat from @langchain/community/chat_models/bedrock
tip

See the LangSmith trace here

.withStructuredOutput({ ... })

Using the .withStructuredOutput method, you can easily make the LLM return structured output, given only a Zod or JSON schema:

import { BedrockChat } from "@langchain/community/chat_models/bedrock";
// Or, from web environments:
// import { BedrockChat } from "@langchain/community/chat_models/bedrock/web";
import { z } from "zod";

const model = new BedrockChat({
  region: process.env.BEDROCK_AWS_REGION,
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  maxRetries: 0,
  credentials: {
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
  },
});

const weatherSchema = z
  .object({
    city: z.string().describe("The city to get the weather for"),
    state: z.string().describe("The state to get the weather for").optional(),
  })
  .describe("Get the weather for a city");

const modelWithStructuredOutput = model.withStructuredOutput(weatherSchema, {
  name: "weather_tool", // Optional, defaults to 'extract'
});

const res = await modelWithStructuredOutput.invoke(
  "What's the weather in New York?"
);
console.log(res);

/*
{ city: 'New York', state: 'NY' }
*/
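
If you also want the raw tool-calling message alongside the parsed output, you can pass includeRaw. A minimal sketch, continuing the example above:

const modelWithRawOutput = model.withStructuredOutput(weatherSchema, {
  name: "weather_tool",
  includeRaw: true,
});

const rawRes = await modelWithRawOutput.invoke("What's the weather in New York?");
console.log(rawRes.parsed); // { city: 'New York', state: 'NY' }
console.log(rawRes.raw); // the underlying AIMessage, including its tool calls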

API Reference:

  • BedrockChat from @langchain/community/chat_models/bedrock
tip

See the LangSmith trace here
