Integrations: LLMs
LangChain offers a number of LLM implementations that integrate with various model providers. These are:
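All of these wrappers expose the same calling surface, so swapping providers is usually a one-line change. The sketch below is a hypothetical, simplified shape of that shared interface (the real BaseLLM class in LangChain has more members and options); FakeLLM is a made-up stand-in used only to illustrate the pattern:

```typescript
// Hypothetical, simplified sketch of the calling surface these wrappers share
// (the real BaseLLM class in LangChain has more members and options).
interface LLMLike {
  call(prompt: string): Promise<string>;
}

// Any provider wrapper fits behind this surface; a fake makes that concrete:
class FakeLLM implements LLMLike {
  async call(prompt: string): Promise<string> {
    return `echo: ${prompt}`;
  }
}

// Code written against LLMLike works unchanged with any of the integrations:
const run = async (model: LLMLike) => {
  const res = await model.call("Hello");
  console.log({ res });
};

run(new FakeLLM()); // logs { res: "echo: Hello" }
```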
OpenAI
import { OpenAI } from "langchain/llms/openai";
const model = new OpenAI({
temperature: 0.9,
openAIApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
});
const res = await model.call(
"What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
Azure OpenAI
import { OpenAI } from "langchain/llms/openai";
const model = new OpenAI({
temperature: 0.9,
azureOpenAIApiKey: "YOUR-API-KEY",
azureOpenAIApiInstanceName: "YOUR-INSTANCE-NAME",
azureOpenAIApiDeploymentName: "YOUR-DEPLOYMENT-NAME",
azureOpenAIApiVersion: "YOUR-API-VERSION",
});
const res = await model.call(
"What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
Google Vertex AI
The Vertex AI implementation is meant to be used in Node.js and not directly in a browser, since it requires a service account to use.
Before running this code, you should make sure the Vertex AI API is enabled for the relevant project in your Google Cloud dashboard and that you've authenticated to Google Cloud using one of these methods:
- You are logged into an account (using gcloud auth application-default login) that is permitted to that project.
- You are running on a machine using a service account that is permitted to the project.
- You have downloaded the credentials for a service account that is permitted to the project and set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of this file.
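For example, the first and third options might look like this on a development machine (the key file path is a placeholder, not a real path):

```shell
# Authenticate with your own account for local development:
gcloud auth application-default login

# Or point the client libraries at a downloaded service-account key:
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
```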
npm install google-auth-library
yarn add google-auth-library
pnpm add google-auth-library
import { GoogleVertexAI } from "langchain/llms/googlevertexai";
/*
* Before running this, you should make sure you have created a
* Google Cloud project with the Vertex AI API enabled.
*
* You will also need permission to access this project / API.
* Typically, this is done in one of three ways:
* - You are logged into an account permitted to that project.
* - You are running this on a machine using a service account permitted to
* the project.
* - The `GOOGLE_APPLICATION_CREDENTIALS` environment variable is set to the
* path of a credentials file for a service account permitted to the project.
*/
export const run = async () => {
const model = new GoogleVertexAI({
temperature: 0.7,
});
const res = await model.call(
"What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
};
API Reference:
- GoogleVertexAI from langchain/llms/googlevertexai
HuggingFaceInference
npm install @huggingface/inference@1
yarn add @huggingface/inference@1
pnpm add @huggingface/inference@1
import { HuggingFaceInference } from "langchain/llms/hf";
const model = new HuggingFaceInference({
model: "gpt2",
apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.HUGGINGFACEHUB_API_KEY
});
const res = await model.call("1 + 1 =");
console.log({ res });
Cohere
npm install cohere-ai
yarn add cohere-ai
pnpm add cohere-ai
import { Cohere } from "langchain/llms/cohere";
const model = new Cohere({
maxTokens: 20,
apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.COHERE_API_KEY
});
const res = await model.call(
"What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
Replicate
npm install replicate
yarn add replicate
pnpm add replicate
import { Replicate } from "langchain/llms/replicate";
const model = new Replicate({
model:
"daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8",
apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.REPLICATE_API_KEY
});
const res = await model.call(
"What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
AWS SageMakerEndpoint
Check Amazon SageMaker JumpStart for a list of available models, and how to deploy your own.
npm install @aws-sdk/client-sagemaker-runtime
yarn add @aws-sdk/client-sagemaker-runtime
pnpm add @aws-sdk/client-sagemaker-runtime
import {
SageMakerLLMContentHandler,
SageMakerEndpoint,
} from "langchain/llms/sagemaker_endpoint";
// Custom for whatever model you'll be using
class HuggingFaceTextGenerationGPT2ContentHandler
implements SageMakerLLMContentHandler
{
contentType = "application/json";
accepts = "application/json";
async transformInput(prompt: string, modelKwargs: Record<string, unknown>) {
const inputString = JSON.stringify({
text_inputs: prompt,
...modelKwargs,
});
return Buffer.from(inputString);
}
async transformOutput(output: Uint8Array) {
const responseJson = JSON.parse(Buffer.from(output).toString("utf-8"));
return responseJson.generated_texts[0];
}
}
const contentHandler = new HuggingFaceTextGenerationGPT2ContentHandler();
const model = new SageMakerEndpoint({
endpointName:
"jumpstart-example-huggingface-textgener-2023-05-16-22-35-45-660", // Your endpoint name here
modelKwargs: { temperature: 1e-10 },
contentHandler,
clientOptions: {
region: "YOUR AWS ENDPOINT REGION",
credentials: {
accessKeyId: "YOUR AWS ACCESS ID",
secretAccessKey: "YOUR AWS SECRET ACCESS KEY",
},
},
});
const res = await model.call("Hello, my name is ");
console.log({ res });
/*
{
res: "_____. I am a student at the University of California, Berkeley. I am a member of the American Association of University Professors."
}
*/
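The content handler's two transforms can be exercised on their own, without deploying an endpoint. The sketch below reuses the same JSON shapes as the class above; the simulated response body is a made-up example, not real endpoint output:

```typescript
// Same serialization logic as the content handler above, as plain functions.
// transformInput packs the prompt and model kwargs into the JSON body the
// JumpStart text-generation container expects; transformOutput unpacks the
// generated_texts array from the raw response bytes.
const transformInput = (
  prompt: string,
  modelKwargs: Record<string, unknown>
): Buffer => Buffer.from(JSON.stringify({ text_inputs: prompt, ...modelKwargs }));

const transformOutput = (output: Uint8Array): string => {
  const responseJson = JSON.parse(Buffer.from(output).toString("utf-8"));
  return responseJson.generated_texts[0];
};

// Simulate a response body of the expected shape:
const fakeResponse = Buffer.from(
  JSON.stringify({ generated_texts: ["Hello, world!"] })
);
console.log(transformOutput(fakeResponse)); // "Hello, world!"
```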
API Reference:
- SageMakerLLMContentHandler from langchain/llms/sagemaker_endpoint
- SageMakerEndpoint from langchain/llms/sagemaker_endpoint
Additional LLM Implementations
PromptLayerOpenAI
LangChain integrates with PromptLayer for logging and debugging prompts and responses. To add support for PromptLayer:
- Create a PromptLayer account here: https://promptlayer.com.
- Create an API token and pass it either as the promptLayerApiKey argument in the PromptLayerOpenAI constructor or in the PROMPTLAYER_API_KEY environment variable.
import { PromptLayerOpenAI } from "langchain/llms/openai";
const model = new PromptLayerOpenAI({
temperature: 0.9,
openAIApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
promptLayerApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.PROMPTLAYER_API_KEY
});
const res = await model.call(
"What would be a good company name for a company that makes colorful socks?"
);
You can also pass in the optional returnPromptLayerId boolean to get a promptLayerRequestId like below:
import { PromptLayerOpenAI } from "langchain/llms/openai";
const model = new PromptLayerOpenAI({
temperature: 0.9,
openAIApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
promptLayerApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.PROMPTLAYER_API_KEY
returnPromptLayerId: true,
});
const res = await model.generate([
"What would be a good company name for a company that makes colorful socks?",
]);
console.log(JSON.stringify(res, null, 3));
/*
{
"generations": [
[
{
"text": " Socktastic!",
"generationInfo": {
"finishReason": "stop",
"logprobs": null,
"promptLayerRequestId": 2066417
}
}
]
],
"llmOutput": {
"tokenUsage": {
"completionTokens": 5,
"promptTokens": 23,
"totalTokens": 28
}
}
}
*/
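Given the result shape shown above, the promptLayerRequestId sits inside the nested generations array. The helper below is illustrative only: it types just the fields it reads, using the example payload from above:

```typescript
// Minimal type covering only the fields we read from the generate() result:
type GenerateResult = {
  generations: {
    text: string;
    generationInfo?: { promptLayerRequestId?: number };
  }[][];
};

// Pull the PromptLayer request id out of the first generation, if present:
const requestIdOf = (res: GenerateResult): number | undefined =>
  res.generations[0]?.[0]?.generationInfo?.promptLayerRequestId;

// The example payload shown above:
const example: GenerateResult = {
  generations: [
    [{ text: " Socktastic!", generationInfo: { promptLayerRequestId: 2066417 } }],
  ],
};
console.log(requestIdOf(example)); // 2066417
```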
Azure PromptLayerOpenAI
LangChain integrates with PromptLayer for logging and debugging prompts and responses. To add support for PromptLayer:
- Create a PromptLayer account here: https://promptlayer.com.
- Create an API token and pass it either as the promptLayerApiKey argument in the PromptLayerOpenAI constructor or in the PROMPTLAYER_API_KEY environment variable.
import { PromptLayerOpenAI } from "langchain/llms/openai";
const model = new PromptLayerOpenAI({
temperature: 0.9,
azureOpenAIApiKey: "YOUR-AOAI-API-KEY", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
azureOpenAIApiInstanceName: "YOUR-AOAI-INSTANCE-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
azureOpenAIApiDeploymentName: "YOUR-AOAI-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
azureOpenAIApiCompletionsDeploymentName:
"YOUR-AOAI-COMPLETIONS-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME
azureOpenAIApiEmbeddingsDeploymentName:
"YOUR-AOAI-EMBEDDINGS-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME
azureOpenAIApiVersion: "YOUR-AOAI-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
promptLayerApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.PROMPTLAYER_API_KEY
});
const res = await model.call(
"What would be a good company name for a company that makes colorful socks?"
);
The request and the response will be logged in the PromptLayer dashboard.
Note: In streaming mode PromptLayer will not log the response.