- Add enable_prompt_caching and cache_control parameters to LLM class
- Implement cache_control formatting for Anthropic models via LiteLLM
- Add helper method to detect prompt caching support for different providers
- Create comprehensive tests covering all prompt caching functionality
- Add example demonstrating usage with kickoff_for_each and kickoff_async
- Supports OpenAI, Anthropic, Bedrock, and Deepseek providers
- Enables cost optimization for workflows with repetitive context

Addresses issue #3535 for prompt caching support in CrewAI

Co-Authored-By: João <joao@crewai.com>
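A minimal usage sketch of what this change describes. It assumes `enable_prompt_caching` and `cache_control` are plain keyword arguments on `LLM`, and that `cache_control` accepts the Anthropic-style `{"type": "ephemeral"}` payload that LiteLLM forwards; the exact parameter shapes are taken from the commit summary above, not from released documentation.

```python
from crewai import Agent, Crew, Task, LLM

# Assumed usage of the new parameters introduced by this change;
# names and accepted values are inferred from the commit summary.
llm = LLM(
    model="anthropic/claude-3-5-sonnet-20240620",
    enable_prompt_caching=True,           # new flag added by this change
    cache_control={"type": "ephemeral"},  # Anthropic cache marker via LiteLLM
)

agent = Agent(
    role="Researcher",
    goal="Summarize the shared reference document",
    backstory="Works over a large, repeated context that benefits from caching.",
    llm=llm,
)

task = Task(
    description="Summarize the document for input: {topic}",
    expected_output="A short summary",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])

# kickoff_for_each runs the crew once per input; with caching enabled,
# the repeated context prefix is where the cost savings come from.
results = crew.kickoff_for_each(inputs=[{"topic": "alpha"}, {"topic": "beta"}])
```

The async variant mentioned in the summary would use `await crew.kickoff_async(inputs={...})` in the same way.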