ChatPerplexity

This guide will help you get started with Perplexity chat models. For detailed documentation of all ChatPerplexity features and configurations, head to the API reference.

Overview

Integration details

| Class | Package | Local | Serializable | PY support | Package downloads | Package latest |
| ChatPerplexity | @langchain/community | ❌ | beta | ✅ | NPM - Downloads | NPM - Version |

Model features

See the links in the table headers below for guides on how to use specific features.

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Token usage | Logprobs |
| ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ |

Note that at the time of writing, Perplexity only supports structured outputs on certain usage tiers.

Setup

To access Perplexity models you'll need to create a Perplexity account, get an API key, and install the @langchain/community integration package.

Credentials

Head to https://perplexity.ai to sign up for Perplexity and generate an API key. Once you've done this, set the PERPLEXITY_API_KEY environment variable:

export PERPLEXITY_API_KEY="your-api-key"
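If you prefer to set the key from code rather than your shell (for example in a quick script or notebook), a minimal sketch is to assign it to process.env before instantiating the model:

// Only for local experiments; avoid hardcoding real keys in source control.
process.env.PERPLEXITY_API_KEY = "your-api-key";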

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

# export LANGSMITH_TRACING="true"
# export LANGSMITH_API_KEY="your-api-key"

Installation

The LangChain Perplexity integration lives in the @langchain/community package:

yarn add @langchain/community @langchain/core
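
If you use npm or pnpm instead of yarn, the equivalent commands are:

npm install @langchain/community @langchain/core
pnpm add @langchain/community @langchain/core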

Instantiation

Now we can instantiate our model object and generate chat completions:

import { ChatPerplexity } from "@langchain/community/chat_models/perplexity";

const llm = new ChatPerplexity({
  model: "sonar",
  temperature: 0,
  maxTokens: undefined,
  timeout: undefined,
  maxRetries: 2,
  // other params...
});
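
If your usage tier supports structured outputs (see the note under Model features above), you can typically bind a schema with the standard withStructuredOutput method. The sketch below is illustrative only; the zod schema and its field names are made up for the example:

import { z } from "zod";

// Hypothetical schema for illustration; any zod object schema works here.
const translationSchema = z.object({
  translation: z.string().describe("The translated sentence"),
  targetLanguage: z.string().describe("The language translated into"),
});

const structuredLlm = llm.withStructuredOutput(translationSchema);

// Returns a plain object matching the schema, assuming structured outputs
// are enabled on your Perplexity account.
await structuredLlm.invoke("Translate 'I love programming.' into French.");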

Invocation

const aiMsg = await llm.invoke([
  {
    role: "system",
    content:
      "You are a helpful assistant that translates English to French. Translate the user sentence.",
  },
  {
    role: "user",
    content: "I love programming.",
  },
]);
aiMsg;
AIMessage {
  "id": "run-71853938-aa30-4861-9019-f12323c09f9a",
  "content": "J'adore la programmation.",
  "additional_kwargs": {
    "citations": [
      "https://careersatagoda.com/blog/why-we-love-programming/",
      "https://henrikwarne.com/2012/06/02/why-i-love-coding/",
      "https://forum.freecodecamp.org/t/i-love-programming-but/497502",
      "https://ilovecoding.org",
      "https://thecodinglove.com"
    ]
  },
  "response_metadata": {
    "tokenUsage": {
      "promptTokens": 20,
      "completionTokens": 9,
      "totalTokens": 29
    }
  },
  "tool_calls": [],
  "invalid_tool_calls": []
}
console.log(aiMsg.content);
J'adore la programmation.
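
As the output above shows, Perplexity also returns the citations it drew on in additional_kwargs. You can read them straight off the message:

console.log(aiMsg.additional_kwargs.citations);
// An array of source URLs, like the five links shown in additional_kwargs above.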

Chaining

We can chain our model with a prompt template like so:

import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant that translates {input_language} to {output_language}.",
  ],
  ["human", "{input}"],
]);

const chain = prompt.pipe(llm);
await chain.invoke({
  input_language: "English",
  output_language: "German",
  input: "I love programming.",
});
AIMessage {
  "id": "run-a44dc452-4a71-423d-a4ee-50a2d7c90abd",
  "content": "**English to German Translation:**\n\n\"I love programming\" translates to **\"Ich liebe das Programmieren.\"**\n\nIf you'd like to express your passion for programming in more detail, here are some additional translations:\n\n- **\"Programming is incredibly rewarding and fulfilling.\"** translates to **\"Das Programmieren ist unglaublich lohnend und erfüllend.\"**\n- **\"I enjoy solving problems through coding.\"** translates to **\"Ich genieße es, Probleme durch Codieren zu lösen.\"**\n- **\"I find the process of creating something from nothing very satisfying.\"** translates to **\"Ich finde den Prozess, etwas aus dem Nichts zu schaffen, sehr befriedigend.\"**",
  "additional_kwargs": {
    "citations": [
      "https://careersatagoda.com/blog/why-we-love-programming/",
      "https://henrikwarne.com/2012/06/02/why-i-love-coding/",
      "https://dev.to/dvddpl/coding-is-boring-why-do-you-love-coding-cl0",
      "https://forum.freecodecamp.org/t/i-love-programming-but/497502",
      "https://ilovecoding.org"
    ]
  },
  "response_metadata": {
    "tokenUsage": {
      "promptTokens": 15,
      "completionTokens": 149,
      "totalTokens": 164
    }
  },
  "tool_calls": [],
  "invalid_tool_calls": []
}
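
Because the model supports token-level streaming (see the feature table above), the chain can also be streamed instead of invoked. A minimal sketch reusing the chain defined above:

const stream = await chain.stream({
  input_language: "English",
  output_language: "German",
  input: "I love programming.",
});

for await (const chunk of stream) {
  // Each chunk is an AIMessageChunk carrying a token or small group of tokens.
  console.log(chunk.content);
}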

API reference

For detailed documentation of all ChatPerplexity features and configurations, head to the API reference: https://api.js.langchain.com/classes/_langchain_community.chat_models_perplexity.ChatPerplexity.html

