
Quick Start

Large Language Models (LLMs) are a core component of LangChain. LangChain does not serve its own LLMs, but rather provides a standard interface for interacting with many different LLMs.

There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc.) - the LLM class is designed to provide a standard interface for all of them.

In this walkthrough we'll work with an OpenAI LLM wrapper, although the functionalities highlighted are generic for all LLM types.

Setup

First we'll need to install the LangChain OpenAI integration package:
npm install @langchain/openai

Accessing the OpenAI API requires an API key, which you can get by creating an account on the OpenAI platform. Once we have a key, we'll want to set it as an environment variable by running:

export OPENAI_API_KEY="..."

If you'd prefer not to set an environment variable, you can pass the key in directly via the `apiKey` named parameter when instantiating the OpenAI LLM class:

import { OpenAI } from "@langchain/openai";

const llm = new OpenAI({
  apiKey: "YOUR_KEY_HERE",
});

Otherwise, if the OPENAI_API_KEY environment variable is set, you can initialize with an empty object and the key will be picked up automatically:

import { OpenAI } from "@langchain/openai";

const llm = new OpenAI({});

LCEL

LLMs implement the Runnable interface, the basic building block of the LangChain Expression Language (LCEL). This means they support invoke, stream, batch, and streamLog calls.

LLMs accept strings as inputs, or objects which can be coerced to string prompts, including BaseMessage[] and PromptValue.

await llm.invoke(
  "What are some theories about the relationship between unemployment and inflation?"
);
'\n\n1. The Phillips Curve Theory: This suggests that there is an inverse relationship between unemployment and inflation, meaning that when unemployment is low, inflation will be higher, and when unemployment is high, inflation will be lower.\n\n2. The Monetarist Theory: This theory suggests that the relationship between unemployment and inflation is weak, and that changes in the money supply are more important in determining inflation.\n\n3. The Resource Utilization Theory: This suggests that when unemployment is low, firms are able to raise wages and prices in order to take advantage of the increased demand for their products and services. This leads to higher inflation.'
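
Because inputs that can be coerced to string prompts are accepted, you can also pass the PromptValue produced by a prompt template straight to the LLM. Here's a minimal sketch, assuming @langchain/core is installed; the template text and the product variable are just for illustration:

import { PromptTemplate } from "@langchain/core/prompts";

const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);

// `formatPromptValue` returns a PromptValue, which the LLM coerces to a string prompt
const promptValue = await prompt.formatPromptValue({ product: "colorful socks" });

const res = await llm.invoke(promptValue);
console.log(res);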

See the Runnable interface for more details on the available methods.
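
As a quick sketch of the other Runnable methods, stream yields the completion in chunks as it is generated, and batch runs several inputs in one call. The prompts below are just for illustration; for LLMs each streamed chunk is a string, and batch returns one string per input:

// Stream the completion chunk by chunk
const stream = await llm.stream(
  "Write one sentence about the weather in Paris."
);

for await (const chunk of stream) {
  process.stdout.write(chunk);
}

// Batch multiple prompts in a single call
const results = await llm.batch([
  "Tell me a joke",
  "Tell me a poem",
]);

console.log(results.length);
// 2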

[Legacy] generate: batch calls, richer outputs

generate lets you call the model with a list of string prompts, getting back a more complete response than just the text. This complete response can include things like multiple top responses and other LLM provider-specific information:

const llmResult = await llm.generate([
  "Tell me a joke",
  "Tell me a poem",
]);

console.log(llmResult.generations.length);

// 2

console.log(llmResult.generations[0]);

/*
  [
    {
      text: "\n\nQ: What did the fish say when it hit the wall?\nA: Dam!",
      generationInfo: { finishReason: "stop", logprobs: null }
    }
  ]
*/

console.log(llmResult.generations[1]);

/*
  [
    {
      text: "\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you.",
      generationInfo: { finishReason: "stop", logprobs: null }
    }
  ]
*/

You can also access provider-specific information that is returned. This information is NOT standardized across providers.

console.log(llmResult.llmOutput);

/*
  {
    tokenUsage: { completionTokens: 46, promptTokens: 8, totalTokens: 54 }
  }
*/

Here's an example with additional parameters, which sets `maxTokens` to -1 so that the maximum token length for the model is calculated and sent with the request:

import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
  // customize the OpenAI model that's used, `gpt-3.5-turbo-instruct` is the default
  model: "gpt-3.5-turbo-instruct",

  // `maxTokens` supports a magic -1 param where the max token length for the specified model
  // is calculated and included in the request to OpenAI as the `max_tokens` param
  maxTokens: -1,

  // use `modelKwargs` to pass params directly to the OpenAI call
  // note that OpenAI uses snake_case instead of camelCase
  modelKwargs: {
    user: "me",
  },

  // for additional logging for debugging purposes
  verbose: true,
});

const resA = await model.invoke(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ resA });
// { resA: '\n\nSocktastic Colors' }
