HTTP Response Output Parser

The HTTP Response output parser allows you to stream LLM output as properly formatted bytes in a web HTTP response:

npm install @langchain/openai
import { ChatOpenAI } from "@langchain/openai";
import { HttpResponseOutputParser } from "langchain/output_parsers";

const handler = async () => {
  const parser = new HttpResponseOutputParser();

  const model = new ChatOpenAI({ temperature: 0 });

  const stream = await model.pipe(parser).stream("Hello there!");

  const httpResponse = new Response(stream, {
    headers: {
      "Content-Type": "text/plain; charset=utf-8",
    },
  });

  return httpResponse;
};

await handler();
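
On the client, the returned Response body can be consumed like any other streamed fetch response. Here is a minimal sketch of a consumer, assuming the handler above is served at a hypothetical /chat endpoint:

const response = await fetch("/chat");
const reader = response.body!.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // Each chunk is a UTF-8 encoded slice of the model output.
  console.log(decoder.decode(value, { stream: true }));
}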

You can also stream back chunks as an event stream:

import { ChatOpenAI } from "@langchain/openai";
import { HttpResponseOutputParser } from "langchain/output_parsers";

const handler = async () => {
  const parser = new HttpResponseOutputParser({
    contentType: "text/event-stream",
  });

  const model = new ChatOpenAI({ temperature: 0 });

  // Values are stringified to avoid dealing with newlines and should
  // be parsed with `JSON.parse()` when consuming.
  const stream = await model.pipe(parser).stream("Hello there!");

  const httpResponse = new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
    },
  });

  return httpResponse;
};

await handler();
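
Since each event's data payload is a stringified value, consumers need to run JSON.parse() on it. Here is a minimal sketch of decoding the raw text/event-stream bytes by hand, again assuming a hypothetical /chat endpoint (in a browser, the built-in EventSource API could also be used for GET endpoints):

const response = await fetch("/chat");
const reader = response.body!.getReader();
const decoder = new TextDecoder();

let buffer = "";
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });
  // SSE events are separated by blank lines; payloads live on "data: " lines.
  const events = buffer.split("\n\n");
  buffer = events.pop() ?? "";
  for (const event of events) {
    for (const line of event.split("\n")) {
      if (line.startsWith("data: ")) {
        const payload = line.slice("data: ".length);
        // Values are stringified, so parse them back into plain values here.
        if (payload) console.log(JSON.parse(payload));
      }
    }
  }
}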

Or pass a custom output parser to parse chunks internally, e.g. for streaming function call outputs:

import { ChatOpenAI } from "@langchain/openai";
import {
  HttpResponseOutputParser,
  JsonOutputFunctionsParser,
} from "langchain/output_parsers";

const handler = async () => {
  const parser = new HttpResponseOutputParser({
    contentType: "text/event-stream",
    outputParser: new JsonOutputFunctionsParser({ diff: true }),
  });

  const model = new ChatOpenAI({ temperature: 0 }).bind({
    functions: [
      {
        name: "get_current_weather",
        description: "Get the current weather in a given location",
        parameters: {
          type: "object",
          properties: {
            location: {
              type: "string",
              description: "The city and state, e.g. San Francisco, CA",
            },
            unit: { type: "string", enum: ["celsius", "fahrenheit"] },
          },
          required: ["location"],
        },
      },
    ],
    // You can set the `function_call` arg to force the model to use a function
    function_call: {
      name: "get_current_weather",
    },
  });

  const stream = await model.pipe(parser).stream("Hello there!");

  const httpResponse = new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
    },
  });

  return httpResponse;
};

await handler();
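
With diff: true, each event's data payload describes an incremental update to the aggregated function output rather than repeating the full object. Here is a minimal sketch of handling one parsed payload, under the assumption that the diff chunks follow a JSON Patch-like { op, path, value } shape:

// Hypothetical handler for one parsed event payload from the stream above,
// assuming each payload is a stringified array of patch-like operations.
type PatchOp = { op: string; path: string; value?: unknown };

const handleEventData = (payload: string) => {
  const ops = JSON.parse(payload) as PatchOp[];
  for (const { op, path, value } of ops) {
    console.log(`${op} ${path}:`, JSON.stringify(value));
  }
};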
