

Note: This is a community-built integration and is not officially supported by Raycast.

You can use LangChain's RaycastAI class within the Raycast environment to enhance your Raycast extension with LangChain's capabilities.

  • The RaycastAI class is only available in the Raycast environment, and only to Raycast Pro users as of August 2023. You can learn how to create an extension for Raycast here.

  • There is a rate limit of approximately 10 requests per minute for each Raycast Pro user; if you exceed this limit, you will receive an error. Because this limit may change in the future, you can set your desired requests-per-minute limit by passing rateLimitPerMinute to the RaycastAI constructor, as shown in the example.
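To illustrate how a per-minute limit like this behaves on the client side, here is a minimal sliding-window rate limiter sketch. This is a hypothetical illustration, not the actual implementation used by RaycastAI or Raycast; the class and method names are invented for this example.

```typescript
// Hypothetical sketch of a per-minute rate limiter, similar in spirit to
// what a rateLimitPerMinute setting enforces. Not Raycast's implementation.
class MinuteRateLimiter {
  private timestamps: number[] = [];

  constructor(private limitPerMinute: number) {}

  // Returns true if a request at time `now` (in ms) is allowed, recording it;
  // returns false if the last minute already holds `limitPerMinute` requests.
  tryRequest(now: number): boolean {
    // Keep only timestamps from the last 60 seconds.
    this.timestamps = this.timestamps.filter((t) => now - t < 60_000);
    if (this.timestamps.length >= this.limitPerMinute) {
      return false; // Over the limit: a real client would surface an error here.
    }
    this.timestamps.push(now);
    return true;
  }
}

// 12 requests spaced one second apart against a 10-per-minute limit:
const limiter = new MinuteRateLimiter(10);
let allowed = 0;
for (let i = 0; i < 12; i++) {
  if (limiter.tryRequest(i * 1000)) allowed++;
}
console.log(allowed); // 10 — the last two requests are rejected
```

The window slides rather than resetting on minute boundaries, which is why rejected requests succeed again once the oldest recorded request ages past 60 seconds.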

npm install @langchain/community
import { RaycastAI } from "@langchain/community/llms/raycast";

import { showHUD } from "@raycast/api";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { Tool } from "@langchain/core/tools";

const model = new RaycastAI({
  rateLimitPerMinute: 10, // It is 10 by default, so you can omit this line
  model: "gpt-3.5-turbo",
  creativity: 0, // `creativity` is Raycast's term for what other LLMs call `temperature`
});

const tools: Tool[] = [
  // Add your tools here
];

export default async function main() {
  // Initialize the agent executor with the RaycastAI model
  const executor = await initializeAgentExecutorWithOptions(tools, model, {
    agentType: "chat-conversational-react-description",
  });

  const input = `Describe my today's schedule as Gabriel Garcia Marquez would describe it`;

  const answer = await executor.invoke({ input });

  await showHUD(answer.output);
}
