
Model caches

Caching LLM calls can be useful for testing, cost savings, and speed.

Below are some integrations that allow you to cache results of individual LLM calls using different caches with different strategies.

| Name | Description |
|------|-------------|
| Azure Cosmos DB NoSQL Semantic Cache | The Semantic Cache feature is supported with Azure Cosmos DB for NoSQL... |
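Whatever backend you choose, the core idea is the same: key each completion on its prompt so repeated calls skip the model. Below is a minimal sketch of exact-match, in-memory caching in plain Python; the names `fake_llm` and `cached_llm` are hypothetical stand-ins, not LangChain APIs.

```python
# Sketch of exact-match LLM-call caching: memoize on the prompt string so a
# repeated prompt never reaches the (expensive) model a second time.
from functools import lru_cache

call_count = 0  # tracks how many times the "model" is actually invoked

def fake_llm(prompt: str) -> str:
    """Hypothetical stand-in for an expensive LLM call."""
    global call_count
    call_count += 1
    return f"response to: {prompt}"

@lru_cache(maxsize=128)  # in-memory cache keyed on the exact prompt
def cached_llm(prompt: str) -> str:
    return fake_llm(prompt)

cached_llm("hello")
cached_llm("hello")  # second call is served from the cache
print(call_count)    # the underlying model ran only once
```

Semantic caches such as the Azure Cosmos DB integration above go a step further: instead of requiring an exact string match, they match prompts by embedding similarity, so paraphrased questions can also hit the cache.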
