

Amazon Bedrock is a fully managed service that makes foundation models from Amazon and third-party model providers available through a single API.

As of this writing, Bedrock supports a single text embedding model, Titan Embeddings G1 - Text (amazon.titan-embed-text-v1). The model supports text retrieval, semantic similarity, and clustering. Maximum input is 8K tokens, and the output vector has 1536 dimensions.
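Because the model returns fixed-length 1536-dimensional vectors, semantic similarity between two texts is typically scored as the cosine similarity of their embeddings. A minimal sketch in plain TypeScript (independent of Bedrock; works on vectors of any equal length):

```typescript
// Cosine similarity between two embedding vectors of equal length:
// dot(a, b) / (|a| * |b|). Ranges from -1 (opposite) to 1 (identical direction).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1 (identical vectors)
console.log(cosineSimilarity([1, 0], [0, 1])); // 0 (orthogonal vectors)
```

In practice you would pass two vectors returned by `embedQuery` rather than these toy two-dimensional examples.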


To use this embedding model, first install the Bedrock runtime client in your project:

npm i @aws-sdk/client-bedrock-runtime@^3.422.0
npm install @langchain/community


The BedrockEmbeddings class uses the AWS Bedrock API to generate embeddings for a given text. As recommended, it strips newline characters from the input text.

/* eslint-disable @typescript-eslint/no-non-null-assertion */
import { BedrockEmbeddings } from "@langchain/community/embeddings/bedrock";

const embeddings = new BedrockEmbeddings({
  region: process.env.BEDROCK_AWS_REGION!,
  credentials: {
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
  },
  model: "amazon.titan-embed-text-v1", // Default value
});

const res = await embeddings.embedQuery(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
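The newline stripping mentioned above amounts to replacing newline characters with spaces before the text is sent to the API. A sketch of the behavior (not the library's exact implementation):

```typescript
// Replace newline characters with spaces, mirroring what BedrockEmbeddings
// does to input text before embedding (sketch; the real code may differ).
const stripNewLines = (text: string): string => text.replace(/\n/g, " ");

console.log(stripNewLines("colorful\nsocks")); // "colorful socks"
```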


Configuring the Bedrock Runtime Client

You can pass in your own instance of the BedrockRuntimeClient if you want to customize options like credentials, region, retryPolicy, etc.

import { BedrockRuntimeClient } from "@aws-sdk/client-bedrock-runtime";
import { BedrockEmbeddings } from "@langchain/community/embeddings/bedrock";

const client = new BedrockRuntimeClient({
  region: "us-east-1",
  credentials: getCredentials(),
});

const embeddings = new BedrockEmbeddings({
  client,
});
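For example, to raise the SDK's retry budget you can pass `maxAttempts`, a standard AWS SDK v3 client option, when constructing the client. A sketch, assuming credentials come from your own provider (the commented `getCredentials()` helper is a placeholder, not part of the SDK):

```typescript
import { BedrockRuntimeClient } from "@aws-sdk/client-bedrock-runtime";
import { BedrockEmbeddings } from "@langchain/community/embeddings/bedrock";

// maxAttempts is a standard AWS SDK v3 option controlling automatic retries.
const client = new BedrockRuntimeClient({
  region: "us-east-1",
  maxAttempts: 5,
  // credentials: getCredentials(), // supply your own credential provider
});

const embeddings = new BedrockEmbeddings({ client });
```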