Quickstart

In this quickstart we'll show you how to:

  • Get set up with LangChain and LangSmith
  • Use the most basic and common components of LangChain: prompt templates, models, and output parsers
  • Use LangChain Expression Language, the protocol that LangChain is built on and which facilitates component chaining
  • Build a simple application with LangChain
  • Trace your application with LangSmith

That's a fair amount to cover! Let's dive in.

Installation

To install LangChain run:

npm install langchain

For more details, see our Installation guide.

LangSmith

Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with LangSmith.

Note that LangSmith is not required, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:

export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."

Building with LangChain

LangChain enables building applications that connect external sources of data and computation to LLMs.

In this quickstart, we will walk through a few different ways of doing that:

  • We will start with a simple LLM chain, which just relies on information in the prompt template to respond.
  • Next, we will build a retrieval chain, which fetches data from a separate database and passes that into the prompt template.
  • We will then add in chat history, to create a conversational retrieval chain. This allows you to interact with the LLM in a chat manner, so it remembers previous questions.
  • Finally, we will build an agent - which utilizes an LLM to determine whether or not it needs to fetch data to answer questions.

We will cover these at a high level, but keep in mind there is a lot more to each piece! We will link to more in-depth docs as appropriate.

LLM Chain

tip

We're unifying model params across all packages. We now suggest using model instead of modelName, and apiKey for API keys.

First we'll need to install the LangChain OpenAI integration package:

npm install @langchain/openai

Accessing the API requires an API key, which you can get by creating an account here. Once we have a key we'll want to set it as an environment variable:

OPENAI_API_KEY="..."

If you'd prefer not to set an environment variable, you can pass the key in directly via the apiKey named parameter when instantiating the ChatOpenAI class:

import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI({
  apiKey: "...",
});

Otherwise you can initialize without any params:

import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI({});
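
If you want to pin a specific model, the tip above applies: pass the unified model parameter rather than the older modelName. A minimal sketch, with placeholder values throughout:

import { ChatOpenAI } from "@langchain/openai";

// "model" and "apiKey" are the unified parameter names; both values are placeholders.
const chatModel = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  apiKey: "...",
});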

Once you've installed and initialized the LLM of your choice, we can try using it! Let's ask it what LangSmith is - this is something that wasn't present in the model's training data, so it shouldn't have a very good response.

await chatModel.invoke("what is LangSmith?");
AIMessage {
content: 'LangSmith refers to the combination of two surnames, Lang and Smith. It is most commonly used as a fictional or hypothetical name for a person or a company. This term may also refer to specific individuals or entities named LangSmith in certain contexts.',
additional_kwargs: { function_call: undefined, tool_calls: undefined }
}

We can also guide its response with a prompt template. Prompt templates are used to convert raw user input to a better input to the LLM.

import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a world class technical documentation writer."],
  ["user", "{input}"],
]);

We can now combine these into a simple LLM chain:

const chain = prompt.pipe(chatModel);

We can now invoke it and ask the same question:

await chain.invoke({
  input: "what is LangSmith?",
});
AIMessage {
content: 'LangSmith is a powerful programming language created for high-performance software development. It is designed to be efficient, intuitive, and capable of handling complex computations and data manipulations. With its extensive set of features and libraries, LangSmith provides developers with the tools necessary to build robust and scalable applications.\n' +
'\n' +
'Some key features of LangSmith include:\n' +
'\n' +
'1. Strong typing: LangSmith enforces type safety, preventing common programming errors and ensuring code reliability.\n' +
'\n' +
'2. Advanced memory management: The language provides built-in memory management mechanisms, such as automatic garbage collection, to optimize memory usage and reduce the risk of memory leaks.\n' +
'\n' +
'3. Multi-paradigm support: LangSmith supports both procedural and object-oriented programming paradigms, giving developers the flexibility to choose the most suitable approach for their projects.\n' +
'\n' +
'4. Modular design: The language promotes modular programming, allowing developers to organize their code into reusable components for easier maintenance and collaboration.\n' +
'\n' +
'5. High-performance libraries: LangSmith offers a rich set of libraries for various domains, including graphics, networking, database access, and more. These libraries enhance productivity by providing pre-built solutions for common tasks.\n' +
'\n' +
'6. Interoperability: LangSmith enables seamless integration with other programming languages, allowing developers to leverage existing codebases and resources.\n' +
'\n' +
"7. Extensibility: Developers can extend LangSmith's functionality through custom libraries and modules, allowing for the creation of domain-specific solutions.\n" +
'\n' +
'Overall, LangSmith aims to provide a robust and efficient development environment for creating software applications across various domains, from scientific simulations to web development and beyond.',
additional_kwargs: { function_call: undefined, tool_calls: undefined }
}

The model hallucinated an incorrect answer this time, but it did respond in a more appropriate tone for a technical writer!

The output of a ChatModel (and therefore, of this chain) is a message. However, it's often much more convenient to work with strings. Let's add a simple output parser to convert the chat message to a string.

import { StringOutputParser } from "@langchain/core/output_parsers";

const outputParser = new StringOutputParser();

const llmChain = prompt.pipe(chatModel).pipe(outputParser);

await llmChain.invoke({
  input: "what is LangSmith?",
});
LangSmith is a sophisticated online language translation tool. It leverages artificial intelligence and machine learning algorithms to provide accurate and efficient translation services across multiple languages. Whether it's translating documents, websites, or text snippets, LangSmith offers a seamless, user-friendly experience while maintaining the integrity and nuances of the original content. Its advanced features include context-aware translations, language customization options, and quality assurance checks, making it an invaluable tool for businesses, individuals, and language professionals alike.
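
Because this chain is composed with LangChain Expression Language (the .pipe() calls above), it supports the other standard runnable methods in addition to invoke. As a quick sketch, here's streaming the parsed string output chunk by chunk, reusing the llmChain defined above:

const stream = await llmChain.stream({
  input: "what is LangSmith?",
});

// Each chunk is a string because the chain ends in a StringOutputParser.
for await (const chunk of stream) {
  process.stdout.write(chunk);
}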

Diving deeper

We've now successfully set up a basic LLM chain. We only touched on the basics of prompts, models, and output parsers - for a deeper dive into everything mentioned here, see this section of documentation.

Retrieval Chain

In order to properly answer the original question ("what is LangSmith?") and avoid hallucinations, we need to provide additional context to the LLM. We can do this via retrieval. Retrieval is useful when you have too much data to pass to the LLM directly. You can then use a retriever to fetch only the most relevant pieces and pass those in.

In this process, we will look up relevant documents from a Retriever and then pass them into the prompt. A Retriever can be backed by anything - a SQL table, the internet, etc - but in this instance we will populate a vector store and use that as a retriever. For more information on vectorstores, see this documentation.

First, we need to load the data that we want to index. We'll use a document loader that uses the popular Cheerio npm package as a peer dependency to parse data from webpages. Install it as shown below:

npm install cheerio

Then, use it like this:

import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";

const loader = new CheerioWebBaseLoader(
  "https://docs.smith.langchain.com/user_guide"
);

const docs = await loader.load();

console.log(docs.length);
console.log(docs[0].pageContent.length);
45772

Note that the size of the loaded document is large and may exceed the maximum amount of data we can pass in a single model call. We can split the document into more manageable chunks with a text splitter to get around this limitation and to reduce distraction for the model:

import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const splitter = new RecursiveCharacterTextSplitter();

const splitDocs = await splitter.splitDocuments(docs);

console.log(splitDocs.length);
console.log(splitDocs[0].pageContent.length);
60
441
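
The defaults work fine for this guide, but the splitter is configurable. A minimal sketch of tuning chunk size and overlap (the values below are purely illustrative, not recommendations):

import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

// Smaller chunks with some overlap so related sentences stay together across boundaries.
const customSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 500,
  chunkOverlap: 50,
});

const customSplitDocs = await customSplitter.splitDocuments(docs);
console.log(customSplitDocs.length);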

Next, we need to index the loaded documents into a vectorstore. This requires a few components, namely an embedding model and a vectorstore.

There are many options for both components. Here's an example using OpenAI embeddings:

Make sure you have the @langchain/openai package installed and the appropriate environment variables set (these are the same as needed for the model above).

import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings();

Now, we can use this embedding model to ingest documents into a vectorstore. We will use a simple in-memory demo vectorstore for simplicity's sake:

Note: If you are using local embeddings, this ingestion process may take some time depending on your local hardware.

import { MemoryVectorStore } from "langchain/vectorstores/memory";

const vectorstore = await MemoryVectorStore.fromDocuments(
  splitDocs,
  embeddings
);

The LangChain vectorstore class will automatically prepare each raw document using the embeddings model.
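
If you'd like to sanity-check the index before wiring it into a chain, you can query the vectorstore directly. A quick sketch using similarity search (the query and result count are arbitrary):

// Return the 3 chunks most similar to the query.
const similarDocs = await vectorstore.similaritySearch("what is LangSmith?", 3);

// Peek at the start of each retrieved chunk.
console.log(similarDocs.map((doc) => doc.pageContent.slice(0, 80)));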

Now that we have this data indexed in a vectorstore, we will create a retrieval chain. This chain will take an incoming question, look up relevant documents, then pass those documents along with the original question into an LLM and ask it to answer the original question.

First, let's set up the chain that takes a question and the retrieved documents and generates an answer.

import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt =
  ChatPromptTemplate.fromTemplate(`Answer the following question based only on the provided context:

<context>
{context}
</context>

Question: {input}`);

const documentChain = await createStuffDocumentsChain({
  llm: chatModel,
  prompt,
});

If we wanted to, we could run this ourselves by passing in documents directly:

import { Document } from "@langchain/core/documents";

await documentChain.invoke({
  input: "what is LangSmith?",
  context: [
    new Document({
      pageContent:
        "LangSmith is a platform for building production-grade LLM applications.",
    }),
  ],
});
 LangSmith is a platform for building production-grade Large Language Model (LLM) applications.

However, we want the documents to first come from the retriever we just set up. That way, for a given question we can use the retriever to dynamically select the most relevant documents and pass those in.

import { createRetrievalChain } from "langchain/chains/retrieval";

const retriever = vectorstore.asRetriever();

const retrievalChain = await createRetrievalChain({
  combineDocsChain: documentChain,
  retriever,
});

We can now invoke this chain. This returns an object - the response from the LLM is in the answer key:

const result = await retrievalChain.invoke({
  input: "what is LangSmith?",
});

console.log(result.answer);
 LangSmith is a tool developed by LangChain that is used for debugging and monitoring LLMs, chains, and agents in order to improve their performance and reliability for use in production.
tip

Check out this public LangSmith trace showing the steps of the retrieval chain.

This answer should be much more accurate!
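
The returned object passes through more than just the answer - in recent versions the documents the retriever selected are typically exposed alongside it under a context key. A quick sketch, assuming that field name:

// "context" is an assumption about the output shape; it should hold the retrieved Documents.
console.log(result.context.length);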

Diving Deeper

We've now successfully set up a basic retrieval chain. We only touched on the basics of retrieval - for a deeper dive into everything mentioned here, see this section of documentation.

Conversational Retrieval Chain

The chain we've created so far can only answer single questions. One of the main types of LLM applications that people are building is chatbots. So how do we turn this chain into one that can answer follow-up questions?

We can still use the createRetrievalChain function, but we need to change two things:

  1. The retrieval method should now not just work on the most recent input, but rather should take the whole history into account.
  2. The final LLM chain should likewise take the whole history into account.

Updating Retrieval

In order to update retrieval, we will create a new chain. This chain will take in the most recent input (input) and the conversation history (chat_history) and use an LLM to generate a search query.

import { createHistoryAwareRetriever } from "langchain/chains/history_aware_retriever";
import { MessagesPlaceholder } from "@langchain/core/prompts";

const historyAwarePrompt = ChatPromptTemplate.fromMessages([
  new MessagesPlaceholder("chat_history"),
  ["user", "{input}"],
  [
    "user",
    "Given the above conversation, generate a search query to look up in order to get information relevant to the conversation",
  ],
]);

const historyAwareRetrieverChain = await createHistoryAwareRetriever({
  llm: chatModel,
  retriever,
  rephrasePrompt: historyAwarePrompt,
});

We can test this "history aware retriever" out by creating a situation where the user is asking a follow-up question:

import { HumanMessage, AIMessage } from "@langchain/core/messages";

const chatHistory = [
  new HumanMessage("Can LangSmith help test my LLM applications?"),
  new AIMessage("Yes!"),
];

await historyAwareRetrieverChain.invoke({
  chat_history: chatHistory,
  input: "Tell me how!",
});
tip

Here's a public LangSmith trace of the above run!

The above trace illustrates that this returns documents about testing in LangSmith. This is because the LLM generated a new query, combining the chat history with the follow up question.
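
If you don't have LangSmith set up, you can verify this locally too - the history-aware retriever resolves to an array of documents, so you can simply log what came back. A minimal sketch reusing the chatHistory from above:

const followUpDocs = await historyAwareRetrieverChain.invoke({
  chat_history: chatHistory,
  input: "Tell me how!",
});

// Peek at the start of each retrieved chunk to confirm they're about testing.
console.log(followUpDocs.map((doc) => doc.pageContent.slice(0, 80)));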

Now that we have this new retriever, we can create a new chain to continue the conversation with these retrieved documents in mind:

const historyAwareRetrievalPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user's questions based on the below context:\n\n{context}",
  ],
  new MessagesPlaceholder("chat_history"),
  ["user", "{input}"],
]);

const historyAwareCombineDocsChain = await createStuffDocumentsChain({
  llm: chatModel,
  prompt: historyAwareRetrievalPrompt,
});

const conversationalRetrievalChain = await createRetrievalChain({
  retriever: historyAwareRetrieverChain,
  combineDocsChain: historyAwareCombineDocsChain,
});

Let's now test this out end-to-end!

const result2 = await conversationalRetrievalChain.invoke({
  chat_history: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage("Yes!"),
  ],
  input: "tell me how",
});

console.log(result2.answer);
LangSmith can help test and debug your LLM (Language Model) applications in several ways:

1. Exact Input/Output Visualization: LangSmith provides a straightforward visualization of the exact inputs and outputs for all LLM calls. This helps you understand the specific inputs provided to the model and the corresponding output generated.

2. Editing Prompts: If you encounter a bad output or want to experiment with different inputs, you can edit the prompts directly in LangSmith. By modifying the prompt, you can observe the resulting changes in the output. LangSmith includes a playground feature where you can modify prompts and re-run them multiple times to analyze the impact on the output.

3. Constructing Datasets: LangSmith simplifies the process of constructing datasets for testing changes in your application. You can quickly edit examples and add them to datasets, expanding your evaluation sets or fine-tuning your model for improved quality or reduced costs.

4. Monitoring and Troubleshooting: Once your application is ready for production, LangSmith can be used to monitor its performance. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. LangSmith also allows you to associate feedback programmatically with runs, enabling you to track performance over time and pinpoint underperforming data points.

In summary, LangSmith helps you test, debug, and monitor your LLM applications, providing tools to visualize inputs/outputs, edit prompts, construct datasets, and monitor performance.
tip

Here's a public LangSmith trace of the above run!

We can see that this gives a coherent answer - we've successfully turned our retrieval chain into a chatbot!

Agent

We've so far created examples of chains - where each step is known ahead of time. The final thing we will create is an agent - where the LLM decides what steps to take.

NOTE: For this example we will only show how to create an agent using OpenAI models, as local models runnable on consumer hardware are not yet reliable enough.

One of the first things to do when building an agent is to decide what tools it should have access to. For this example, we will give the agent access to two tools:

  1. The retriever we just created. This will let it easily answer questions about LangSmith.
  2. A search tool. This will let it easily answer questions that require up-to-date information.

First, let's set up a tool for the retriever we just created:

import { createRetrieverTool } from "langchain/tools/retriever";

const retrieverTool = await createRetrieverTool(retriever, {
  name: "langsmith_search",
  description:
    "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
});

The search tool that we will use is Tavily. This will require you to create an API key (they have a generous free tier). After signing up and creating one in their dashboard, you need to set it as an environment variable:

export TAVILY_API_KEY=...

If you do not want to set up an API key, you can skip creating this tool.

import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

const searchTool = new TavilySearchResults();
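
Like other LangChain tools, the search tool can be invoked on its own before handing it to an agent - a quick sketch to confirm your TAVILY_API_KEY is working (the query is arbitrary):

// Tools take a string input; the result here is a string of serialized search results.
const searchResults = await searchTool.invoke("weather in San Francisco");

console.log(searchResults);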

We can now create a list of the tools we want to work with:

const tools = [retrieverTool, searchTool];

Now that we have the tools, we can create an agent to use them and an executor to run the agent. We will go over this pretty quickly. For a deeper dive into what exactly is going on, check out the agent documentation pages.

import { pull } from "langchain/hub";
import { createOpenAIFunctionsAgent, AgentExecutor } from "langchain/agents";

// Get the prompt to use - you can modify this!
// If you want to see the full prompt, you can view it at:
// https://smith.langchain.com/hub/hwchase17/openai-functions-agent
const agentPrompt = await pull<ChatPromptTemplate>(
  "hwchase17/openai-functions-agent"
);

const agentModel = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
  temperature: 0,
});

const agent = await createOpenAIFunctionsAgent({
  llm: agentModel,
  tools,
  prompt: agentPrompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools,
  verbose: true,
});

We can now invoke the agent and see how it responds! We can ask it questions about LangSmith:

const agentResult = await agentExecutor.invoke({
  input: "how can LangSmith help with testing?",
});

console.log(agentResult.output);
LangSmith can help with testing in the following ways:

1. Debugging: LangSmith helps in debugging unexpected end results, agent looping, slow chains, and token usage. It provides a visualization of the exact inputs/outputs to all LLM calls, making it easier to understand them.

2. Modifying Prompts: LangSmith allows you to modify prompts and observe resulting changes to the output. This feature supports OpenAI and Anthropic models and works for LLM and Chat Model calls.

3. Dataset Construction: LangSmith simplifies dataset construction for testing changes. It provides a straightforward visualization of inputs/outputs to LLM calls, allowing you to understand them easily.

4. Monitoring: LangSmith can be used to monitor applications in production by logging all traces, visualizing latency and token usage statistics, and troubleshooting specific issues as they arise. It also allows for programmatically associating feedback with runs to track performance over time.

Overall, LangSmith is a valuable tool for testing, debugging, and monitoring applications that utilize language models and agents.
tip

Here's a public LangSmith trace of the above run!

We can ask it about the weather:

const agentResult2 = await agentExecutor.invoke({
  input: "what is the weather in SF?",
});

console.log(agentResult2.output);
The weather in San Francisco, California for December 29, 2023 is expected to have average high temperatures of 50 to 65 °F and average low temperatures of 40 to 55 °F. There may be periods of rain with a high of 59°F and winds from the SSE at 10 to 20 mph. For more detailed information, you can visit [this link](https://www.weathertab.com/en/g/o/12/united-states/california/san-francisco/).
tip

Here's a public LangSmith trace of the above run!

We can have conversations with it:

const agentResult3 = await agentExecutor.invoke({
  chat_history: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage("Yes!"),
  ],
  input: "Tell me how",
});

console.log(agentResult3.output);
LangSmith can help test your LLM applications by providing the following features:
1. Debugging: LangSmith helps in debugging LLMs, chains, and agents by providing a visualization of the exact inputs/outputs to all LLM calls, allowing you to understand them easily.
2. Prompt Editing: You can modify the prompt and re-run it to observe the resulting changes to the output as many times as needed using LangSmith's playground feature.
3. Monitoring: LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.
4. Feedback and Dataset Expansion: You can associate feedback programmatically with runs, add examples to datasets, and fine-tune a model for improved quality or reduced costs.
5. Failure Analysis: LangSmith allows you to identify how your chain can fail and monitor these failures, which can be valuable data points for testing future chain versions.

These features make LangSmith a valuable tool for testing and improving LLM applications.
tip

Here's a public LangSmith trace of the above run!

Diving Deeper

We've now successfully set up a basic agent. We only touched on the basics of agents - for a deeper dive into everything mentioned here, see this section of documentation.

Next steps

We've touched on how to build an application with LangChain, and how to trace it with LangSmith. There are a lot more features than we can cover here. To continue on your journey, we recommend you read the following (in order):

