ChatGroq

Setup

In order to use the Groq API you'll need an API key. You can sign up for a Groq account and create an API key here.

You'll first need to install the @langchain/groq package:

npm install @langchain/groq
tip

We're unifying model params across all packages. We now suggest using model instead of modelName, and apiKey for API keys.
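
For example, the suggested initialization looks like this (the model name below is just one of the Groq-hosted options used later on this page):

import { ChatGroq } from "@langchain/groq";

// Note the unified param names: model (not modelName) and apiKey.
const llm = new ChatGroq({
  model: "mixtral-8x7b-32768",
  apiKey: process.env.GROQ_API_KEY,
});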

Usage

import { ChatGroq } from "@langchain/groq";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const model = new ChatGroq({
  apiKey: process.env.GROQ_API_KEY,
});
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["human", "{input}"],
]);
const chain = prompt.pipe(model);
const response = await chain.invoke({
  input: "Hello",
});
console.log("response", response);
/**
response AIMessage {
  content: "Hello! I'm happy to assist you in any way I can. Is there something specific you need help with or a question you have?",
}
*/

info

You can see a LangSmith trace of this example here.

Tool calling

Groq chat models support calling multiple functions to get all required data to answer a question. Here's an example:

import { ChatGroq } from "@langchain/groq";

// Mocked out function, could be a database/API call in production
function getCurrentWeather(location: string, _unit?: string) {
  if (location.toLowerCase().includes("tokyo")) {
    return JSON.stringify({ location, temperature: "10", unit: "celsius" });
  } else if (location.toLowerCase().includes("san francisco")) {
    return JSON.stringify({
      location,
      temperature: "72",
      unit: "fahrenheit",
    });
  } else {
    return JSON.stringify({ location, temperature: "22", unit: "celsius" });
  }
}

// Bind function to the model as a tool
const chat = new ChatGroq({
  model: "mixtral-8x7b-32768",
  maxTokens: 128,
}).bind({
  tools: [
    {
      type: "function",
      function: {
        name: "get_current_weather",
        description: "Get the current weather in a given location",
        parameters: {
          type: "object",
          properties: {
            location: {
              type: "string",
              description: "The city and state, e.g. San Francisco, CA",
            },
            unit: { type: "string", enum: ["celsius", "fahrenheit"] },
          },
          required: ["location"],
        },
      },
    },
  ],
  tool_choice: "auto",
});

const res = await chat.invoke([
  ["human", "What's the weather like in San Francisco?"],
]);
console.log(res.additional_kwargs.tool_calls);
/*
[
  {
    id: 'call_01htk055jpftwbb9tvphyf9bnf',
    type: 'function',
    function: {
      name: 'get_current_weather',
      arguments: '{"location":"San Francisco, CA"}'
    }
  }
]
*/
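
Note that the model only returns the tool call; it does not execute the function. A minimal sketch of closing the loop with the getCurrentWeather mock above (this wiring is our own, not a library helper):

const toolCall = res.additional_kwargs.tool_calls?.[0];
if (toolCall?.function) {
  // Tool arguments arrive as a JSON string and must be parsed before use.
  const args = JSON.parse(toolCall.function.arguments);
  console.log(getCurrentWeather(args.location, args.unit));
  // -> {"location":"San Francisco, CA","temperature":"72","unit":"fahrenheit"}
}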

.withStructuredOutput({ ... })

info

The .withStructuredOutput method is in beta. It is actively being worked on, so the API may change.

You can also use the .withStructuredOutput({ ... }) method to coerce ChatGroq into returning a structured output.

The method accepts either a Zod object or a valid JSON schema (like what is returned from zodToJsonSchema).

Using the method is simple. Just define your LLM and call .withStructuredOutput({ ... }) on it, passing the desired schema.

Here is an example using a Zod schema and the default functionCalling mode:

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatGroq } from "@langchain/groq";
import { z } from "zod";

const model = new ChatGroq({
  temperature: 0,
  model: "mixtral-8x7b-32768",
});

const calculatorSchema = z.object({
  operation: z.enum(["add", "subtract", "multiply", "divide"]),
  number1: z.number(),
  number2: z.number(),
});

const modelWithStructuredOutput = model.withStructuredOutput(calculatorSchema);

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are VERY bad at math and must always use a calculator."],
  ["human", "Please help me!! What is 2 + 2?"],
]);
const chain = prompt.pipe(modelWithStructuredOutput);
const result = await chain.invoke({});
console.log(result);
/*
{ operation: 'add', number1: 2, number2: 2 }
*/

/**
 * You can also specify 'includeRaw' to return the parsed
 * and raw output in the result.
 */
const includeRawModel = model.withStructuredOutput(calculatorSchema, {
  name: "calculator",
  includeRaw: true,
});

const includeRawChain = prompt.pipe(includeRawModel);
const includeRawResult = await includeRawChain.invoke({});
console.log(includeRawResult);
/*
{
  raw: AIMessage {
    content: '',
    additional_kwargs: {
      tool_calls: [
        {
          "id": "call_01htk094ktfgxtkwj40n0ehg61",
          "type": "function",
          "function": {
            "name": "calculator",
            "arguments": "{\"operation\": \"add\", \"number1\": 2, \"number2\": 2}"
          }
        }
      ]
    },
    response_metadata: {
      "tokenUsage": {
        "completionTokens": 197,
        "promptTokens": 1214,
        "totalTokens": 1411
      },
      "finish_reason": "tool_calls"
    }
  },
  parsed: { operation: 'add', number1: 2, number2: 2 }
}
*/
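
Because the method also accepts a plain JSON schema, you can skip Zod entirely. A minimal sketch of the same calculator, hand-written here to match what zodToJsonSchema would produce:

const jsonSchemaModel = model.withStructuredOutput({
  type: "object",
  properties: {
    operation: {
      type: "string",
      enum: ["add", "subtract", "multiply", "divide"],
    },
    number1: { type: "number" },
    number2: { type: "number" },
  },
  required: ["operation", "number1", "number2"],
});

const jsonSchemaResult = await prompt.pipe(jsonSchemaModel).invoke({});
console.log(jsonSchemaResult);
// Expected shape: { operation: 'add', number1: 2, number2: 2 }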

Streaming

Groq's API also supports streaming token responses. The example below demonstrates how to use this feature.

import { ChatGroq } from "@langchain/groq";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatGroq({
  apiKey: process.env.GROQ_API_KEY,
});
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["human", "{input}"],
]);
const outputParser = new StringOutputParser();
const chain = prompt.pipe(model).pipe(outputParser);
const response = await chain.stream({
  input: "Hello",
});
let res = "";
for await (const item of response) {
  res += item;
  console.log("stream:", res);
}
/**
stream: Hello
stream: Hello!
stream: Hello! I
stream: Hello! I'
stream: Hello! I'm
stream: Hello! I'm happy
stream: Hello! I'm happy to
stream: Hello! I'm happy to assist
stream: Hello! I'm happy to assist you
stream: Hello! I'm happy to assist you in
stream: Hello! I'm happy to assist you in any
stream: Hello! I'm happy to assist you in any way
stream: Hello! I'm happy to assist you in any way I
stream: Hello! I'm happy to assist you in any way I can
stream: Hello! I'm happy to assist you in any way I can.
stream: Hello! I'm happy to assist you in any way I can. Is
stream: Hello! I'm happy to assist you in any way I can. Is there
stream: Hello! I'm happy to assist you in any way I can. Is there something
stream: Hello! I'm happy to assist you in any way I can. Is there something specific
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need help
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need help with
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need help with or
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need help with or a
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need help with or a question
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need help with or a question you
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need help with or a question you have
stream: Hello! I'm happy to assist you in any way I can. Is there something specific you need help with or a question you have?
*/

info

You can see a LangSmith trace of this example here.
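
You can also stream directly from the model, without a prompt template or output parser. In that case each chunk is an AIMessageChunk rather than a plain string, so the text lives on its content field. A minimal sketch:

import { ChatGroq } from "@langchain/groq";

const directModel = new ChatGroq({
  apiKey: process.env.GROQ_API_KEY,
});

const stream = await directModel.stream("Hello");
let aggregate = "";
for await (const chunk of stream) {
  // content is typed as string | complex parts; Groq streams plain text.
  if (typeof chunk.content === "string") {
    aggregate += chunk.content;
  }
}
console.log(aggregate);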

