ChatMistralAI

Mistral AI is a research organization and hosting platform for LLMs. They're best known for their Mistral 7B and Mixtral 8x7B models, served via their API as mistral-tiny and mistral-small, respectively.

The LangChain implementation of Mistral's models uses their hosted generation API, making it easier to access their models without needing to run them locally.

Models

Mistral's API offers access to both their open source and proprietary models:

  • open-mistral-7b (aka mistral-tiny-2312)
  • open-mixtral-8x7b (aka mistral-small-2312)
  • mistral-small-latest (aka mistral-small-2402) (default)
  • mistral-medium-latest (aka mistral-medium-2312)
  • mistral-large-latest (aka mistral-large-2402)

See this page for an up-to-date list.

Setup

To use the Mistral API, you'll need an API key. You can sign up for a Mistral account and create an API key here.
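
The examples on this page read the key from the MISTRAL_API_KEY environment variable, so a typical setup is to export it in your shell (the value below is a placeholder):

export MISTRAL_API_KEY="your-api-key"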

You'll also need to install the @langchain/mistralai package:

npm install @langchain/mistralai
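
The @langchain/mistralai package also expects @langchain/core as a peer dependency; if your project doesn't already include it, install it alongside:

npm install @langchain/core
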
tip

We're unifying model params across all packages. We now suggest using model instead of modelName, and apiKey for API keys.
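
A minimal sketch of the preferred params (older examples may still show modelName):

import { ChatMistralAI } from "@langchain/mistralai";

// Preferred: `model` and `apiKey` (legacy code may use `modelName` instead)
const model = new ChatMistralAI({
  model: "mistral-small",
  apiKey: process.env.MISTRAL_API_KEY,
});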

Usage

When sending chat messages to Mistral, there are a few requirements to follow (a sketch of a valid sequence follows the list):

  • The first message cannot be an assistant (ai) message.
  • Messages must alternate between user and assistant (ai) messages.
  • Messages cannot end with an assistant (ai) or system message.
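
For illustration, here is a minimal sketch of a message sequence that satisfies these rules, using the core message classes (the model setup mirrors the examples below):

import { ChatMistralAI } from "@langchain/mistralai";
import { AIMessage, HumanMessage, SystemMessage } from "@langchain/core/messages";

const model = new ChatMistralAI({
  apiKey: process.env.MISTRAL_API_KEY,
  model: "mistral-small",
});

// An optional system message first, then strictly alternating
// human/ai messages, ending on a human message.
const validMessages = [
  new SystemMessage("You are a helpful assistant"),
  new HumanMessage("Hi there!"),
  new AIMessage("Hello! How can I help you today?"),
  new HumanMessage("Tell me a joke."),
];
const res = await model.invoke(validMessages);

Here is a complete example using a prompt template:
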
import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const model = new ChatMistralAI({
  apiKey: process.env.MISTRAL_API_KEY,
  model: "mistral-small",
});
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["human", "{input}"],
]);
const chain = prompt.pipe(model);
const response = await chain.invoke({
  input: "Hello",
});
console.log("response", response);
/**
response AIMessage {
  lc_namespace: [ 'langchain_core', 'messages' ],
  content: "Hello! I'm here to help answer any questions you might have or provide information on a variety of topics. How can I assist you today?\n" +
    '\n' +
    'Here are some common tasks I can help with:\n' +
    '\n' +
    '* Setting alarms or reminders\n' +
    '* Sending emails or messages\n' +
    '* Making phone calls\n' +
    '* Providing weather information\n' +
    '* Creating to-do lists\n' +
    '* Offering suggestions for restaurants, movies, or other local activities\n' +
    '* Providing definitions and explanations for words or concepts\n' +
    '* Translating text into different languages\n' +
    '* Playing music or podcasts\n' +
    '* Setting timers\n' +
    '* Providing directions or traffic information\n' +
    '* And much more!\n' +
    '\n' +
    "Let me know how I can help you specifically, and I'll do my best to make your day easier and more productive!\n" +
    '\n' +
    'Best regards,\n' +
    'Your helpful assistant.',
  name: undefined,
  additional_kwargs: {}
}
 */

info

You can see a LangSmith trace of this example here.

Streaming

Mistral's API also supports streaming token responses. The example below demonstrates how to use this feature.

import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatMistralAI({
  apiKey: process.env.MISTRAL_API_KEY,
  model: "mistral-small",
});
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["human", "{input}"],
]);
const outputParser = new StringOutputParser();
const chain = prompt.pipe(model).pipe(outputParser);
const response = await chain.stream({
  input: "Hello",
});
for await (const item of response) {
  console.log("stream item:", item);
}
/**
stream item:
stream item: Hello! I'm here to help answer any questions you
stream item: might have or assist you with any task you'd like to
stream item: accomplish. I can provide information
stream item: on a wide range of topics
stream item: , from math and science to history and literature. I can
stream item: also help you manage your schedule, set reminders, and
stream item: much more. Is there something specific you need help with? Let
stream item: me know!
stream item:
*/

info

You can see a LangSmith trace of this example here.

Tool calling

Mistral's API now supports tool calling and JSON mode! The examples below demonstrate how to use them, along with how to use the withStructuredOutput method to easily compose structured output LLM calls.

import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { JsonOutputKeyToolsParser } from "@langchain/core/output_parsers/openai_tools";
import { z } from "zod";
import { StructuredTool } from "@langchain/core/tools";

const calculatorSchema = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute."),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});

// Extend the StructuredTool class to create a new tool
class CalculatorTool extends StructuredTool {
  name = "calculator";

  description = "A simple calculator tool";

  schema = calculatorSchema;

  async _call(input: z.infer<typeof calculatorSchema>) {
    return JSON.stringify(input);
  }
}

// Or you can convert the tool to a JSON schema using
// a library like zod-to-json-schema
// Uncomment the lines below to use tools this way.
// import { zodToJsonSchema } from "zod-to-json-schema";
// const calculatorJsonSchema = zodToJsonSchema(calculatorSchema);

const model = new ChatMistralAI({
  apiKey: process.env.MISTRAL_API_KEY,
  model: "mistral-large",
});

// Bind the tool to the model
const modelWithTool = model.bind({
  tools: [new CalculatorTool()],
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Define an output parser that can handle tool responses
const outputParser = new JsonOutputKeyToolsParser({
  keyName: "calculator",
  returnSingle: true,
});

// Chain your prompt, model, and output parser together
const chain = prompt.pipe(modelWithTool).pipe(outputParser);

const response = await chain.invoke({
  input: "What is 2 + 2?",
});
console.log(response);
console.log(response);
/*
{ operation: 'add', number1: 2, number2: 2 }
*/
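
If you converted the Zod schema with zod-to-json-schema as noted in the comments above, you can instead bind the tool as a plain function definition. A minimal sketch, assuming ChatMistralAI accepts this OpenAI-style tool format:

// Hypothetical alternative to `new CalculatorTool()`, reusing the
// commented-out `calculatorJsonSchema` from the example above.
const modelWithJsonSchemaTool = model.bind({
  tools: [
    {
      type: "function",
      function: {
        name: "calculator",
        description: "A simple calculator tool",
        parameters: calculatorJsonSchema,
      },
    },
  ],
});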

.withStructuredOutput({ ... })

info

The .withStructuredOutput method is in beta. It is actively being worked on, so the API may change.

Using the .withStructuredOutput method, you can easily make the LLM return structured output, given only a Zod or JSON schema:

note

The Mistral tool calling API requires descriptions for each tool field. If descriptions are not supplied, the API will error.

import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

const calculatorSchema = z
  .object({
    operation: z
      .enum(["add", "subtract", "multiply", "divide"])
      .describe("The type of operation to execute."),
    number1: z.number().describe("The first number to operate on."),
    number2: z.number().describe("The second number to operate on."),
  })
  .describe("A simple calculator tool");

const model = new ChatMistralAI({
  apiKey: process.env.MISTRAL_API_KEY,
  model: "mistral-large",
});

// Pass the schema to the withStructuredOutput method
const modelWithTool = model.withStructuredOutput(calculatorSchema);

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain = prompt.pipe(modelWithTool);

const response = await chain.invoke({
  input: "What is 2 + 2?",
});
console.log(response);
console.log(response);
/*
{ operation: 'add', number1: 2, number2: 2 }
*/

/**
 * You can supply a "name" field to give the LLM additional context
 * around what you are trying to generate. You can also pass
 * 'includeRaw' to get the raw message back from the model too.
 */
const includeRawModel = model.withStructuredOutput(calculatorSchema, {
  name: "calculator",
  includeRaw: true,
});
const includeRawChain = prompt.pipe(includeRawModel);

const includeRawResponse = await includeRawChain.invoke({
  input: "What is 2 + 2?",
});
console.log(JSON.stringify(includeRawResponse, null, 2));
/*
{
  "raw": {
    "kwargs": {
      "content": "",
      "additional_kwargs": {
        "tool_calls": [
          {
            "id": "null",
            "type": "function",
            "function": {
              "name": "calculator",
              "arguments": "{\"operation\": \"add\", \"number1\": 2, \"number2\": 2}"
            }
          }
        ]
      }
    }
  },
  "parsed": {
    "operation": "add",
    "number1": 2,
    "number2": 2
  }
}
*/

Using JSON schema:

import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const calculatorJsonSchema = {
  type: "object",
  properties: {
    operation: {
      type: "string",
      enum: ["add", "subtract", "multiply", "divide"],
      description: "The type of operation to execute.",
    },
    number1: { type: "number", description: "The first number to operate on." },
    number2: {
      type: "number",
      description: "The second number to operate on.",
    },
  },
  required: ["operation", "number1", "number2"],
  description: "A simple calculator tool",
};

const model = new ChatMistralAI({
  apiKey: process.env.MISTRAL_API_KEY,
  model: "mistral-large",
});

// Pass the JSON schema to the withStructuredOutput method
const modelWithTool = model.withStructuredOutput(calculatorJsonSchema);

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant who always needs to use a calculator.",
  ],
  ["human", "{input}"],
]);

// Chain your prompt and model together
const chain = prompt.pipe(modelWithTool);

const response = await chain.invoke({
  input: "What is 2 + 2?",
});
console.log(response);
console.log(response);
/*
{ operation: 'add', number1: 2, number2: 2 }
*/

Tool calling agent

The larger Mistral models not only support tool calling, but can also be used in the Tool Calling agent. Here's an example:

import { z } from "zod";

import { ChatMistralAI } from "@langchain/mistralai";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";

import { ChatPromptTemplate } from "@langchain/core/prompts";

const llm = new ChatMistralAI({
  temperature: 0,
  model: "mistral-large-latest",
});

// Prompt template must have "input" and "agent_scratchpad" input variables
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const currentWeatherTool = new DynamicStructuredTool({
  name: "get_current_weather",
  description: "Get the current weather in a given location",
  schema: z.object({
    location: z.string().describe("The city and state, e.g. San Francisco, CA"),
  }),
  func: async () => Promise.resolve("28 °C"),
});

const agent = await createToolCallingAgent({
  llm,
  tools: [currentWeatherTool],
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools: [currentWeatherTool],
});

const input = "What's the weather like in Paris?";
const { output } = await agentExecutor.invoke({ input });

console.log(output);

/*
The current weather in Paris is 28 °C.
*/
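
Because the prompt includes a chat_history placeholder, you can also pass prior turns through the executor. A minimal sketch (the history messages are illustrative):

import { AIMessage, HumanMessage } from "@langchain/core/messages";

const resultWithHistory = await agentExecutor.invoke({
  input: "How about in Berlin?",
  chat_history: [
    new HumanMessage("What's the weather like in Paris?"),
    new AIMessage("The current weather in Paris is 28 °C."),
  ],
});
console.log(resultWithHistory.output);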
