
Chat models

Features (natively supported)

All ChatModels implement the Runnable interface, which comes with default implementations of the standard methods, i.e. invoke, batch, and stream. This gives every ChatModel basic support for invoking, streaming, and batching, implemented by default as follows:

  • Streaming support defaults to returning an AsyncIterator that yields a single value: the final result returned by the underlying ChatModel provider. This doesn't give you token-by-token streaming, which requires native support from the ChatModel provider, but it ensures that code expecting an iterator of tokens works with any of our ChatModel integrations.
  • Batch support defaults to calling the underlying ChatModel in parallel for each input. The concurrency can be controlled with the maxConcurrency key in RunnableConfig.
  • Map support defaults to calling .invoke on each element of the input array.
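
The default behaviors above can be sketched in a few lines. This is an illustrative reimplementation, not the actual LangChain internals: the `SimpleRunnable`, `RunnableConfig`, and `EchoModel` names here are assumptions made for the example.

```typescript
// Illustrative sketch (NOT LangChain's real implementation) of the
// Runnable defaults described above.
type RunnableConfig = { maxConcurrency?: number };

abstract class SimpleRunnable<In, Out> {
  abstract invoke(input: In, config?: RunnableConfig): Promise<Out>;

  // Default streaming: a one-item stream containing the final result.
  async *stream(input: In, config?: RunnableConfig): AsyncGenerator<Out> {
    yield await this.invoke(input, config);
  }

  // Default batching: invoke each input in parallel, limiting
  // parallelism to config.maxConcurrency (unbounded if unset).
  async batch(inputs: In[], config?: RunnableConfig): Promise<Out[]> {
    const limit = Math.min(config?.maxConcurrency ?? inputs.length, inputs.length);
    const results: Out[] = new Array(inputs.length);
    let next = 0;
    const workers = Array.from({ length: limit }, async () => {
      while (next < inputs.length) {
        const i = next++;
        results[i] = await this.invoke(inputs[i], config);
      }
    });
    await Promise.all(workers);
    return results;
  }
}

// Toy "model" that echoes its input, used to exercise the defaults.
class EchoModel extends SimpleRunnable<string, string> {
  async invoke(input: string): Promise<string> {
    return `echo: ${input}`;
  }
}
```

With this sketch, `stream` yields exactly one chunk (the full result) and `batch` fans invocations out across at most `maxConcurrency` concurrent workers.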

Each ChatModel integration can optionally provide native implementations of invoke, streaming, or batching requests.
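
For example, a native streaming implementation yields chunks as the provider sends them, rather than one final value. The sketch below illustrates the shape of that behavior; `fakeProviderTokens` is a stand-in for a real provider's streaming endpoint, not a LangChain API.

```typescript
// Illustrative: a native streaming override yields tokens as they
// arrive, instead of the default single-item stream. The "provider"
// here is faked with a hardcoded token list.
async function* fakeProviderTokens(_prompt: string): AsyncGenerator<string> {
  for (const token of ["Hello", " ", "world"]) {
    yield token; // in a real integration, each chunk arrives over the network
  }
}

// Consumers iterate the stream and can act on each partial chunk.
async function collect(prompt: string): Promise<string> {
  let text = "";
  for await (const chunk of fakeProviderTokens(prompt)) {
    text += chunk;
  }
  return text;
}
```

Code written against the default one-item stream works unchanged against a native stream; it simply receives many small chunks instead of one large one.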

Additionally, some chat models support ways of guaranteeing structure in their outputs by allowing you to pass in a defined schema. Function calling and parallel function calling (tool calling) are two common ones, and these capabilities allow you to use a chat model as the LLM in certain types of agents. Some models in LangChain have also implemented a withStructuredOutput() method that unifies these different ways of constraining output to a schema.
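
The idea behind constraining output to a schema can be shown with a minimal sketch: the caller declares the shape it wants, and the model's reply is parsed and checked against that shape before being returned. The `PersonSchema` type, `parseStructured` validator, and `fakeModelReply` below are illustrative assumptions, not LangChain APIs (the real withStructuredOutput() accepts, e.g., a Zod schema and handles this for you).

```typescript
// Illustrative sketch of schema-constrained output. Everything here
// is a stand-in for what withStructuredOutput() does internally.
interface PersonSchema {
  name: string;
  age: number;
}

// Parse a raw model reply and reject anything that doesn't match
// the requested schema.
function parseStructured(raw: string): PersonSchema {
  const obj = JSON.parse(raw);
  if (typeof obj.name !== "string" || typeof obj.age !== "number") {
    throw new Error("model output did not match the requested schema");
  }
  return obj as PersonSchema;
}

// Pretend this string came back from a chat model asked to emit JSON.
const fakeModelReply = '{"name": "Ada", "age": 36}';
const person = parseStructured(fakeModelReply);
```

The payoff is that downstream code receives a typed object rather than free-form text, which is what makes these models usable as the reasoning engine in tool-calling agents.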

The table below shows, for each integration, which features have been implemented with native support. A yellow circle (🟡) indicates partial support: for example, a model that supports tool calling but not the tool messages used by agents.

| Model | Invoke | Stream | Batch | Function Calling | Tool Calling | withStructuredOutput() |
| --- | --- | --- | --- | --- | --- | --- |
| BedrockChat | | | | | | |
| ChatAlibabaTongyi | | | | | | |
| ChatAnthropic | | | | | | |
| ChatBaiduWenxin | | | | | | |
| ChatCloudflareWorkersAI | | | | | | |
| ChatCohere | | | | | | |
| ChatFireworks | | | | | | |
| ChatGoogleGenerativeAI | | | | | | |
| ChatGoogleVertexAI | | | | | | |
| ChatVertexAI | | | | | | |
| ChatGooglePaLM | | | | | | |
| ChatGroq 🟡 | | | | | | |
| ChatLlamaCpp | | | | | | |
| ChatMinimax | | | | | | |
| ChatMistralAI | | | | | | |
| ChatOllama | | | | | | |
| ChatOpenAI | | | | | | |
| ChatTogetherAI | | | | | | |
| ChatYandexGPT | | | | | | |
| ChatZhipuAI | | | | | | |
