Interface

In an effort to make it as easy as possible to create custom chains, we've implemented a "Runnable" protocol that most components implement. This is a standard interface with a few different methods, which makes it easy to define custom chains and invoke them in a standard way. The standard interface exposed includes:

  • stream: stream back chunks of the response
  • invoke: call the chain on an input
  • batch: call the chain on a list of inputs
  • streamLog: stream back intermediate steps as they happen, in addition to the final response
  • streamEvents: stream events as they happen in the chain (beta; introduced in @langchain/core 0.1.27)
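
To make the shape of this interface concrete, here is a simplified TypeScript sketch. It is illustrative only, not the actual @langchain/core type definitions; the real methods also accept optional config/options arguments and use richer stream types.

// Simplified, hypothetical sketch of the Runnable interface.
interface RunnableSketch<RunInput, RunOutput> {
  invoke(input: RunInput): Promise<RunOutput>;
  batch(inputs: RunInput[]): Promise<RunOutput[]>;
  stream(input: RunInput): Promise<AsyncIterable<RunOutput>>;
  streamLog(input: RunInput): AsyncGenerator<unknown>;
  streamEvents(input: RunInput, options: { version: "v1" }): AsyncIterable<unknown>;
}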

The input type varies by component:

| Component | Input Type |
| --- | --- |
| Prompt | Object |
| Retriever | Single string |
| LLM, ChatModel | Single string, list of chat messages or PromptValue |
| Tool | Single string or object, depending on the tool |
| OutputParser | The output of an LLM or ChatModel |

The output type also varies by component:

| Component | Output Type |
| --- | --- |
| LLM | String |
| ChatModel | ChatMessage |
| Prompt | PromptValue |
| Retriever | List of documents |
| Tool | Depends on the tool |
| OutputParser | Depends on the parser |
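
To see how these input and output types fit together, here is a hedged sketch that invokes each component of a simple chain one step at a time (assumes an OpenAI API key is configured in your environment):

import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const prompt = PromptTemplate.fromTemplate("Tell me a joke about {topic}");
const model = new ChatOpenAI({});
const parser = new StringOutputParser();

// Prompt: object in, PromptValue out
const promptValue = await prompt.invoke({ topic: "bears" });
// ChatModel: PromptValue in, chat message (an AIMessage) out
const message = await model.invoke(promptValue);
// OutputParser: LLM/ChatModel output in, string out
const text = await parser.invoke(message);

console.log(text);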

You can combine runnables (and runnable-like objects such as functions and objects whose values are all functions) into sequences in two ways:

  • Call the .pipe instance method, which takes another runnable-like as an argument
  • Use the RunnableSequence.from([]) static method with an array of runnable-likes, which will run in sequence when invoked

See below for examples of how this looks.

Stream

npm install @langchain/openai
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";

const model = new ChatOpenAI({});
const promptTemplate = PromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
);

const chain = promptTemplate.pipe(model);

const stream = await chain.stream({ topic: "bears" });

// Each chunk has the same interface as a chat message
for await (const chunk of stream) {
  console.log(chunk?.content);
}

/*
Why don't bears wear shoes?

Because they have bear feet!
*/

Invoke

import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";

const model = new ChatOpenAI({});
const promptTemplate = PromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
);

// You can also create a chain using an array of runnables
const chain = RunnableSequence.from([promptTemplate, model]);

const result = await chain.invoke({ topic: "bears" });

console.log(result);
/*
AIMessage {
  content: "Why don't bears wear shoes?\n\nBecause they have bear feet!",
}
*/

Batch

import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";

const model = new ChatOpenAI({});
const promptTemplate = PromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
);

const chain = promptTemplate.pipe(model);

const result = await chain.batch([{ topic: "bears" }, { topic: "cats" }]);

console.log(result);
/*
[
  AIMessage {
    content: "Why don't bears wear shoes?\n\nBecause they have bear feet!",
  },
  AIMessage {
    content: "Why don't cats play poker in the wild?\n\nToo many cheetahs!"
  }
]
*/

You can also pass additional arguments to the call. The standard LCEL config object contains an option to set maximum concurrency, and batch() also accepts a second, batch-specific config object with an option to return exceptions instead of throwing them (useful for gracefully handling failures!):

import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";

const model = new ChatOpenAI({
  model: "badmodel",
});
const promptTemplate = PromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
);

const chain = promptTemplate.pipe(model);

// The second argument is the standard call config (max concurrency here);
// the third is batch-specific (return exceptions instead of throwing them).
const result = await chain.batch(
  [{ topic: "bears" }, { topic: "cats" }],
  { maxConcurrency: 1 },
  { returnExceptions: true }
);

console.log(result);
/*
[
NotFoundError: The model `badmodel` does not exist
at Function.generate (/Users/jacoblee/langchain/langchainjs/node_modules/openai/src/error.ts:71:6)
at OpenAI.makeStatusError (/Users/jacoblee/langchain/langchainjs/node_modules/openai/src/core.ts:381:13)
at OpenAI.makeRequest (/Users/jacoblee/langchain/langchainjs/node_modules/openai/src/core.ts:442:15)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async file:///Users/jacoblee/langchain/langchainjs/langchain/dist/chat_models/openai.js:514:29
at RetryOperation._fn (/Users/jacoblee/langchain/langchainjs/node_modules/p-retry/index.js:50:12) {
status: 404,
NotFoundError: The model `badmodel` does not exist
at Function.generate (/Users/jacoblee/langchain/langchainjs/node_modules/openai/src/error.ts:71:6)
at OpenAI.makeStatusError (/Users/jacoblee/langchain/langchainjs/node_modules/openai/src/core.ts:381:13)
at OpenAI.makeRequest (/Users/jacoblee/langchain/langchainjs/node_modules/openai/src/core.ts:442:15)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async file:///Users/jacoblee/langchain/langchainjs/langchain/dist/chat_models/openai.js:514:29
at RetryOperation._fn (/Users/jacoblee/langchain/langchainjs/node_modules/p-retry/index.js:50:12) {
status: 404,
]
*/

Stream log

All runnables also have a method called .streamLog(), which streams back all or part of the intermediate steps of your chain/sequence as they happen.

This is useful for showing progress to the user, for using intermediate results, or for debugging your chain. You can stream all steps (the default) or include/exclude steps by name, tag, or metadata, as shown in the sketch below.
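
As a hedged sketch of such filtering (option names taken from the streamLog stream options in @langchain/core; `chain` here is the retrieval chain built in the full example further below):

const filteredLogStream = await chain.streamLog(
  "What is the powerhouse of the cell?",
  // Standard call config (empty here)
  {},
  // Stream options: only emit log entries for runs named "ChatOpenAI";
  // similar include/exclude options exist for tags and types.
  { includeNames: ["ChatOpenAI"] }
);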

This method yields JSONPatch ops that, when applied in the same order as received, build up the RunState.

To reconstruct those patches into a single aggregated state, you can use the .concat() method, as the example below demonstrates.

Here's an example with streaming intermediate documents from a retrieval chain:

npm install @langchain/community @langchain/openai
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { formatDocumentsAsString } from "langchain/util/document";
import { StringOutputParser } from "@langchain/core/output_parsers";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  SystemMessagePromptTemplate,
} from "@langchain/core/prompts";

// Initialize the LLM to use to answer the question.
const model = new ChatOpenAI({});

const vectorStore = await HNSWLib.fromTexts(
  [
    "mitochondria is the powerhouse of the cell",
    "mitochondria is made of lipids",
  ],
  [{ id: 1 }, { id: 2 }],
  new OpenAIEmbeddings()
);

// Initialize a retriever wrapper around the vector store
const vectorStoreRetriever = vectorStore.asRetriever();

// Create a system & human prompt for the chat model
const SYSTEM_TEMPLATE = `Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----------------
{context}`;
const messages = [
  SystemMessagePromptTemplate.fromTemplate(SYSTEM_TEMPLATE),
  HumanMessagePromptTemplate.fromTemplate("{question}"),
];
const prompt = ChatPromptTemplate.fromMessages(messages);

const chain = RunnableSequence.from([
  {
    context: vectorStoreRetriever.pipe(formatDocumentsAsString),
    question: new RunnablePassthrough(),
  },
  prompt,
  model,
  new StringOutputParser(),
]);

const logStream = await chain.streamLog("What is the powerhouse of the cell?");

let state;

for await (const logPatch of logStream) {
  console.log(JSON.stringify(logPatch));
  if (!state) {
    state = logPatch;
  } else {
    state = state.concat(logPatch);
  }
}

console.log("aggregate", state);

/*
{"ops":[{"op":"replace","path":"","value":{"id":"5a79d2e7-171a-4034-9faa-63af88e5a451","streamed_output":[],"logs":{}}}]}

{"ops":[{"op":"add","path":"/logs/RunnableMap","value":{"id":"5948dd9f-b827-45f8-9fa6-74e5cc972a56","name":"RunnableMap","type":"chain","tags":["seq:step:1"],"metadata":{},"start_time":"2023-12-23T00:20:46.664Z","streamed_output_str":[]}}]}

{"ops":[{"op":"add","path":"/logs/RunnableSequence","value":{"id":"e9e9ef5e-3a04-4110-9a24-517c929b9137","name":"RunnableSequence","type":"chain","tags":["context"],"metadata":{},"start_time":"2023-12-23T00:20:46.804Z","streamed_output_str":[]}}]}

{"ops":[{"op":"add","path":"/logs/RunnablePassthrough","value":{"id":"4c79d835-87e5-4ff8-b560-987aea83c0e4","name":"RunnablePassthrough","type":"chain","tags":["question"],"metadata":{},"start_time":"2023-12-23T00:20:46.805Z","streamed_output_str":[]}}]}

{"ops":[{"op":"add","path":"/logs/RunnablePassthrough/final_output","value":{"output":"What is the powerhouse of the cell?"}},{"op":"add","path":"/logs/RunnablePassthrough/end_time","value":"2023-12-23T00:20:46.947Z"}]}

{"ops":[{"op":"add","path":"/logs/VectorStoreRetriever","value":{"id":"1e169f18-711e-47a3-910e-ee031f70b6e0","name":"VectorStoreRetriever","type":"retriever","tags":["seq:step:1","hnswlib"],"metadata":{},"start_time":"2023-12-23T00:20:47.082Z","streamed_output_str":[]}}]}

{"ops":[{"op":"add","path":"/logs/VectorStoreRetriever/final_output","value":{"documents":[{"pageContent":"mitochondria is the powerhouse of the cell","metadata":{"id":1}},{"pageContent":"mitochondria is made of lipids","metadata":{"id":2}}]}},{"op":"add","path":"/logs/VectorStoreRetriever/end_time","value":"2023-12-23T00:20:47.398Z"}]}

{"ops":[{"op":"add","path":"/logs/RunnableLambda","value":{"id":"a0d61a88-8282-42be-8949-fb0e8f8f67cd","name":"RunnableLambda","type":"chain","tags":["seq:step:2"],"metadata":{},"start_time":"2023-12-23T00:20:47.495Z","streamed_output_str":[]}}]}

{"ops":[{"op":"add","path":"/logs/RunnableLambda/final_output","value":{"output":"mitochondria is the powerhouse of the cell\n\nmitochondria is made of lipids"}},{"op":"add","path":"/logs/RunnableLambda/end_time","value":"2023-12-23T00:20:47.604Z"}]}

{"ops":[{"op":"add","path":"/logs/RunnableSequence/final_output","value":{"output":"mitochondria is the powerhouse of the cell\n\nmitochondria is made of lipids"}},{"op":"add","path":"/logs/RunnableSequence/end_time","value":"2023-12-23T00:20:47.690Z"}]}

{"ops":[{"op":"add","path":"/logs/RunnableMap/final_output","value":{"question":"What is the powerhouse of the cell?","context":"mitochondria is the powerhouse of the cell\n\nmitochondria is made of lipids"}},{"op":"add","path":"/logs/RunnableMap/end_time","value":"2023-12-23T00:20:47.780Z"}]}

{"ops":[{"op":"add","path":"/logs/ChatPromptTemplate","value":{"id":"5b6cff77-0c52-4218-9bde-d92c33ad12f3","name":"ChatPromptTemplate","type":"prompt","tags":["seq:step:2"],"metadata":{},"start_time":"2023-12-23T00:20:47.864Z","streamed_output_str":[]}}]}

{"ops":[{"op":"add","path":"/logs/ChatPromptTemplate/final_output","value":{"lc":1,"type":"constructor","id":["langchain_core","prompt_values","ChatPromptValue"],"kwargs":{"messages":[{"lc":1,"type":"constructor","id":["langchain_core","messages","SystemMessage"],"kwargs":{"content":"Use the following pieces of context to answer the question at the end.\nIf you don't know the answer, just say that you don't know, don't try to make up an answer.\n----------------\nmitochondria is the powerhouse of the cell\n\nmitochondria is made of lipids","additional_kwargs":{}}},{"lc":1,"type":"constructor","id":["langchain_core","messages","HumanMessage"],"kwargs":{"content":"What is the powerhouse of the cell?","additional_kwargs":{}}}]}}},{"op":"add","path":"/logs/ChatPromptTemplate/end_time","value":"2023-12-23T00:20:47.956Z"}]}

{"ops":[{"op":"add","path":"/logs/ChatOpenAI","value":{"id":"0cc3b220-ca7f-4fd3-88d5-bea1f7417c3d","name":"ChatOpenAI","type":"llm","tags":["seq:step:3"],"metadata":{},"start_time":"2023-12-23T00:20:48.126Z","streamed_output_str":[]}}]}

{"ops":[{"op":"add","path":"/logs/StrOutputParser","value":{"id":"47d9bd52-c14a-420d-8d52-1106d751581c","name":"StrOutputParser","type":"parser","tags":["seq:step:4"],"metadata":{},"start_time":"2023-12-23T00:20:48.666Z","streamed_output_str":[]}}]}

{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":""}]}

{"ops":[{"op":"add","path":"/streamed_output/-","value":""}]}

{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":"The"}]}

{"ops":[{"op":"add","path":"/streamed_output/-","value":"The"}]}

{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":" mitochond"}]}

{"ops":[{"op":"add","path":"/streamed_output/-","value":" mitochond"}]}

{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":"ria"}]}

{"ops":[{"op":"add","path":"/streamed_output/-","value":"ria"}]}

{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":" is"}]}

{"ops":[{"op":"add","path":"/streamed_output/-","value":" is"}]}

{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":" the"}]}

{"ops":[{"op":"add","path":"/streamed_output/-","value":" the"}]}

{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":" powerhouse"}]}

{"ops":[{"op":"add","path":"/streamed_output/-","value":" powerhouse"}]}

{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":" of"}]}

{"ops":[{"op":"add","path":"/streamed_output/-","value":" of"}]}

{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":" the"}]}

{"ops":[{"op":"add","path":"/streamed_output/-","value":" the"}]}

{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":" cell"}]}

{"ops":[{"op":"add","path":"/streamed_output/-","value":" cell"}]}

{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":"."}]}

{"ops":[{"op":"add","path":"/streamed_output/-","value":"."}]}

{"ops":[{"op":"add","path":"/logs/ChatOpenAI/streamed_output_str/-","value":""}]}

{"ops":[{"op":"add","path":"/streamed_output/-","value":""}]}

{"ops":[{"op":"add","path":"/logs/ChatOpenAI/final_output","value":{"generations":[[{"text":"The mitochondria is the powerhouse of the cell.","generationInfo":{"prompt":0,"completion":0},"message":{"lc":1,"type":"constructor","id":["langchain_core","messages","AIMessageChunk"],"kwargs":{"content":"The mitochondria is the powerhouse of the cell.","additional_kwargs":{}}}}]]}},{"op":"add","path":"/logs/ChatOpenAI/end_time","value":"2023-12-23T00:20:48.841Z"}]}

{"ops":[{"op":"add","path":"/logs/StrOutputParser/final_output","value":{"output":"The mitochondria is the powerhouse of the cell."}},{"op":"add","path":"/logs/StrOutputParser/end_time","value":"2023-12-23T00:20:48.945Z"}]}

{"ops":[{"op":"replace","path":"/final_output","value":{"output":"The mitochondria is the powerhouse of the cell."}}]}
*/

// Aggregate
/**
aggregate {
  id: '1ed678b9-e1cf-4ef9-bb8b-2fa083b81725',
  streamed_output: [
    '', 'The',
    ' powerhouse', ' of',
    ' the', ' cell',
    ' is', ' the',
    ' mitochond', 'ria',
    '.', ''
  ],
  final_output: { output: 'The powerhouse of the cell is the mitochondria.' },
  logs: {
    RunnableMap: {
      id: 'ff268fa1-a621-41b5-a832-4f23eae99d8e',
      name: 'RunnableMap',
      type: 'chain',
      tags: [Array],
      metadata: {},
      start_time: '2024-01-04T20:21:33.851Z',
      streamed_output_str: [],
      final_output: [Object],
      end_time: '2024-01-04T20:21:35.000Z'
    },
    RunnablePassthrough: {
      id: '62b54982-edb3-4101-a53e-1d4201230668',
      name: 'RunnablePassthrough',
      type: 'chain',
      tags: [Array],
      metadata: {},
      start_time: '2024-01-04T20:21:34.073Z',
      streamed_output_str: [],
      final_output: [Object],
      end_time: '2024-01-04T20:21:34.226Z'
    },
    RunnableSequence: {
      id: 'a8893fb5-63ec-4b13-bb49-e6d4435cc5e4',
      name: 'RunnableSequence',
      type: 'chain',
      tags: [Array],
      metadata: {},
      start_time: '2024-01-04T20:21:34.074Z',
      streamed_output_str: [],
      final_output: [Object],
      end_time: '2024-01-04T20:21:34.893Z'
    },
    VectorStoreRetriever: {
      id: 'd145704c-64bb-491d-9a2c-814ee3d1e6a2',
      name: 'VectorStoreRetriever',
      type: 'retriever',
      tags: [Array],
      metadata: {},
      start_time: '2024-01-04T20:21:34.234Z',
      streamed_output_str: [],
      final_output: [Object],
      end_time: '2024-01-04T20:21:34.518Z'
    },
    RunnableLambda: {
      id: 'a23a552a-b96f-4c07-a45d-c5f3861fad5d',
      name: 'RunnableLambda',
      type: 'chain',
      tags: [Array],
      metadata: {},
      start_time: '2024-01-04T20:21:34.610Z',
      streamed_output_str: [],
      final_output: [Object],
      end_time: '2024-01-04T20:21:34.785Z'
    },
    ChatPromptTemplate: {
      id: 'a5e8439e-a6e4-4cf3-ba17-c223ea874a0a',
      name: 'ChatPromptTemplate',
      type: 'prompt',
      tags: [Array],
      metadata: {},
      start_time: '2024-01-04T20:21:35.097Z',
      streamed_output_str: [],
      final_output: [ChatPromptValue],
      end_time: '2024-01-04T20:21:35.193Z'
    },
    ChatOpenAI: {
      id: 'd9c9d340-ea38-4ef4-a8a8-60f52da4e838',
      name: 'ChatOpenAI',
      type: 'llm',
      tags: [Array],
      metadata: {},
      start_time: '2024-01-04T20:21:35.282Z',
      streamed_output_str: [Array],
      final_output: [Object],
      end_time: '2024-01-04T20:21:36.059Z'
    },
    StrOutputParser: {
      id: 'c55f9f3f-048b-43d5-ba48-02f3b24b8f96',
      name: 'StrOutputParser',
      type: 'parser',
      tags: [Array],
      metadata: {},
      start_time: '2024-01-04T20:21:35.842Z',
      streamed_output_str: [],
      final_output: [Object],
      end_time: '2024-01-04T20:21:36.157Z'
    }
  }
}
*/

Stream events

Event Streaming is a beta API, and may change a bit based on feedback. It provides a way to stream both intermediate steps and final output from the chain.

Note: Introduced in @langchain/core 0.1.27

For now, when using the streamEvents API, please make sure that:

  • Any custom functions / runnables propagate callbacks (see the sketch after this list).
  • Proper parameters are set on models to force the LLM to stream tokens.
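
Here is a minimal, hedged sketch of the first point (the names doubler and doubleTwice are illustrative, not from the library): a custom RunnableLambda receives a config argument, and forwarding that config to any inner .invoke() call is what propagates callbacks so inner runs appear in the event stream.

import { RunnableLambda } from "@langchain/core/runnables";

const doubler = RunnableLambda.from(async (x: number) => x * 2);

// Hypothetical wrapper: passing `config` through to the inner invoke
// call propagates callbacks, so the inner run shows up in streamed events.
const doubleTwice = RunnableLambda.from(async (x: number, config) => {
  const once = await doubler.invoke(x, config);
  return doubler.invoke(once, config);
});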

Event Reference

Here is a reference table that shows some events that might be emitted by the various Runnable objects.

⚠️ When streaming, a runnable's inputs will not be available until the input stream has been entirely consumed. This means that inputs will be available on the corresponding end hook rather than the start event.

| event | name | chunk | input | output |
| --- | --- | --- | --- | --- |
| on_chat_model_start | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | |
| on_chat_model_stream | [model name] | AIMessageChunk(content="hello") | | |
| on_chat_model_end | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | {"generations": [...], "llm_output": None, ...} |
| on_llm_start | [model name] | | {'input': 'hello'} | |
| on_llm_stream | [model name] | 'Hello' | | |
| on_llm_end | [model name] | | | 'Hello human!' |
| on_chain_start | format_docs | | | |
| on_chain_stream | format_docs | "hello world!, goodbye world!" | | |
| on_chain_end | format_docs | | [Document(...)] | "hello world!, goodbye world!" |
| on_tool_start | some_tool | | {"x": 1, "y": "2"} | |
| on_tool_stream | some_tool | {"x": 1, "y": "2"} | | |
| on_tool_end | some_tool | | | {"x": 1, "y": "2"} |
| on_retriever_start | [retriever name] | | {"query": "hello"} | |
| on_retriever_chunk | [retriever name] | {documents: [...]} | | |
| on_retriever_end | [retriever name] | | {"query": "hello"} | {documents: [...]} |
| on_prompt_start | [template_name] | | {"question": "hello"} | |
| on_prompt_end | [template_name] | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...]) |

The example below uses streamEvents with an agent executor and logs a selection of events as they arrive:

import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { ChatOpenAI } from "@langchain/openai";
import type { ChatPromptTemplate } from "@langchain/core/prompts";

import { pull } from "langchain/hub";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";

// Define the tools the agent will have access to.
const tools = [new TavilySearchResults({})];

const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
  temperature: 0,
  streaming: true,
});

// Get the prompt to use - you can modify this!
// If you want to see the prompt in full, you can at:
// https://smith.langchain.com/hub/hwchase17/openai-functions-agent
const prompt = await pull<ChatPromptTemplate>(
  "hwchase17/openai-functions-agent"
);

const agent = await createOpenAIFunctionsAgent({
  llm,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools,
}).withConfig({ runName: "Agent" });

const eventStream = await agentExecutor.streamEvents(
  {
    input: "what is the weather in SF",
  },
  { version: "v1" }
);

for await (const event of eventStream) {
  const eventType = event.event;
  if (eventType === "on_chain_start") {
    // Was assigned when creating the agent with `.withConfig({"runName": "Agent"})` above
    if (event.name === "Agent") {
      console.log("\n-----");
      console.log(
        `Starting agent: ${event.name} with input: ${JSON.stringify(
          event.data.input
        )}`
      );
    }
  } else if (eventType === "on_chain_end") {
    // Was assigned when creating the agent with `.withConfig({"runName": "Agent"})` above
    if (event.name === "Agent") {
      console.log("\n-----");
      console.log(`Finished agent: ${event.name}\n`);
      console.log(`Agent output was: ${event.data.output}`);
      console.log("\n-----");
    }
  } else if (eventType === "on_llm_stream") {
    const content = event.data?.chunk?.message?.content;
    // Empty content in the context of OpenAI means that the model
    // is asking for a tool to be invoked via function call,
    // so we only print non-empty content.
    if (content !== undefined && content !== "") {
      console.log(`| ${content}`);
    }
  } else if (eventType === "on_tool_start") {
    console.log("\n-----");
    console.log(
      `Starting tool: ${event.name} with inputs: ${event.data.input}`
    );
  } else if (eventType === "on_tool_end") {
    console.log("\n-----");
    console.log(`Finished tool: ${event.name}\n`);
    console.log(`Tool output was: ${event.data.output}`);
    console.log("\n-----");
  }
}

-----
Starting agent: Agent with input: {"input":"what is the weather in SF"}

-----
Starting tool: TavilySearchResults with inputs: weather in San Francisco

-----
Finished tool: TavilySearchResults

Tool output was: [{"title":"Weather in San Francisco","url":"https://www.weatherapi.com/","content":"Weather in San Francisco is {'location': {'name': 'San Francisco', 'region': 'California', 'country': 'United States of America', 'lat': 37.78, 'lon': -122.42, 'tz_id': 'America/Los_Angeles', 'localtime_epoch': 1707638479, 'localtime': '2024-02-11 0:01'}, 'current': {'last_updated_epoch': 1707638400, 'last_updated': '2024-02-11 00:00', 'temp_c': 11.1, 'temp_f': 52.0, 'is_day': 0, 'condition': {'text': 'Partly cloudy', 'icon': '//cdn.weatherapi.com/weather/64x64/night/116.png', 'code': 1003}, 'wind_mph': 9.4, 'wind_kph': 15.1, 'wind_degree': 270, 'wind_dir': 'W', 'pressure_mb': 1022.0, 'pressure_in': 30.18, 'precip_mm': 0.0, 'precip_in': 0.0, 'humidity': 83, 'cloud': 25, 'feelslike_c': 11.5, 'feelslike_f': 52.6, 'vis_km': 16.0, 'vis_miles': 9.0, 'uv': 1.0, 'gust_mph': 13.9, 'gust_kph': 22.3}}","score":0.98371,"raw_content":null},{"title":"San Francisco, California November 2024 Weather Forecast","url":"https://www.weathertab.com/en/c/e/11/united-states/california/san-francisco/","content":"Temperature Forecast Temperature Forecast Normal Avg High Temps 60 to 70 °F Avg Low Temps 45 to 55 °F Weather Forecast Legend WeatherTAB helps you plan activities on days with the least risk of rain. Our forecasts are not direct predictions of rain/snow. Not all risky days will have rain/snow.","score":0.9517,"raw_content":null},{"title":"Past Weather in San Francisco, California, USA — Yesterday or Further Back","url":"https://www.timeanddate.com/weather/usa/san-francisco/historic","content":"Past Weather in San Francisco, California, USA — Yesterday and Last 2 Weeks. Weather. Time Zone. DST Changes. Sun & Moon. Weather Today Weather Hourly 14 Day Forecast Yesterday/Past Weather Climate (Averages) Currently: 52 °F. Light rain. Overcast.","score":0.945,"raw_content":null},{"title":"San Francisco, California February 2024 Weather Forecast - detailed","url":"https://www.weathertab.com/en/g/e/02/united-states/california/san-francisco/","content":"Free Long Range Weather Forecast for San Francisco, California February 2024. Detailed graphs of monthly weather forecast, temperatures, and degree days.","score":0.92177,"raw_content":null},{"title":"San Francisco Weather in 2024 - extremeweatherwatch.com","url":"https://www.extremeweatherwatch.com/cities/san-francisco/year-2024","content":"Year: What's the hottest temperature in San Francisco so far this year? As of February 2, the highest temperature recorded in San Francisco, California in 2024 is 73 °F which happened on January 29. Highest Temperatures: All-Time By Year Highest Temperatures in San Francisco in 2024 What's the coldest temperature in San Francisco so far this year?","score":0.91598,"raw_content":null}]

-----
| The
| current
| weather
| in
| San
| Francisco
| is
| partly
| cloudy
| with
| a
| temperature
| of
|
| 52
| .
| 0
| °F
| (
| 11
| .
| 1
| °C
| ).
| The
| wind
| speed
| is
|
| 15
| .
| 1
| k
| ph
| coming
| from
| the
| west
| ,
| and
| the
| humidity
| is
| at
|
| 83
| %.
| If
| you
| need
| more
| detailed
| information
| ,
| you
| can
| visit
| [
| Weather
| in
| San
| Francisco
| ](
| https
| ://
| www
| .weather
| api
| .com
| /
| ).

-----
Finished agent: Agent

Agent output was: The current weather in San Francisco is partly cloudy with a temperature of 52.0°F (11.1°C). The wind speed is 15.1 kph coming from the west, and the humidity is at 83%. If you need more detailed information, you can visit [Weather in San Francisco](https://www.weatherapi.com/).

-----
