RaycastAI
Note: This is a community-built integration and is not officially supported by Raycast.
You can use LangChain's RaycastAI class within the Raycast environment to enhance your Raycast extension with LangChain's capabilities.
The RaycastAI class is only available in the Raycast environment, and only to Raycast Pro users, as of August 2023. You can learn how to create an extension for Raycast here.
There is a rate limit of approximately 10 requests per minute for each Raycast Pro user. If you exceed this limit, you will receive an error. Since this limit may change in the future, you can set your desired requests-per-minute limit by passing `rateLimitPerMinute` to the `RaycastAI` constructor, as shown in the example below.
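To illustrate what a per-minute rate limit means in practice, here is a minimal, hypothetical sketch of a sliding-window limiter. This is not the library's actual implementation (the `MinuteRateLimiter` class below is invented for illustration); `RaycastAI` handles throttling for you via `rateLimitPerMinute`:

```typescript
// Hypothetical helper, NOT part of LangChain or Raycast: tracks request
// timestamps and reports how long to wait so that at most `limit`
// requests fall within any 60-second window.
class MinuteRateLimiter {
  private timestamps: number[] = [];

  constructor(private limit: number) {}

  // Given the current time in milliseconds, returns the delay (ms) the
  // caller should wait before issuing the next request (0 = go now).
  delayFor(now: number): number {
    // Drop timestamps that fell out of the one-minute window.
    this.timestamps = this.timestamps.filter((t) => now - t < 60_000);
    if (this.timestamps.length < this.limit) {
      this.timestamps.push(now);
      return 0;
    }
    // Wait until the oldest request in the window expires.
    const wait = 60_000 - (now - this.timestamps[0]);
    this.timestamps.push(now + wait);
    return wait;
  }
}
```

With `limit: 10`, the first ten calls in a minute proceed immediately and the eleventh is delayed until the oldest call leaves the window, which is the behavior the Raycast rate limit enforces on the server side.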
- npm

  ```bash
  npm install @langchain/community
  ```

- Yarn

  ```bash
  yarn add @langchain/community
  ```

- pnpm

  ```bash
  pnpm add @langchain/community
  ```
```typescript
import { RaycastAI } from "@langchain/community/llms/raycast";
import { showHUD } from "@raycast/api";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { Tool } from "@langchain/core/tools";

const model = new RaycastAI({
  rateLimitPerMinute: 10, // It is 10 by default, so you can omit this line
  model: "gpt-3.5-turbo",
  creativity: 0, // `creativity` is Raycast's term for what other LLMs call `temperature`
});

const tools: Tool[] = [
  // Add your tools here
];

export default async function main() {
  // Initialize the agent executor with the RaycastAI model
  const executor = await initializeAgentExecutorWithOptions(tools, model, {
    agentType: "chat-conversational-react-description",
  });

  const input = `Describe my today's schedule as Gabriel Garcia Marquez would describe it`;

  const answer = await executor.invoke({ input });

  await showHUD(answer.output);
}
```
API Reference:
- RaycastAI from `@langchain/community/llms/raycast`
- initializeAgentExecutorWithOptions from `langchain/agents`
- Tool from `@langchain/core/tools`
Related
- LLM conceptual guide
- LLM how-to guides