- Add enable_prompt_caching and cache_control parameters to the LLM class (see the sketch after this list)
- Implement cache_control formatting for Anthropic models via LiteLLM
- Add a helper method to detect prompt caching support across providers
- Create comprehensive tests covering the new prompt caching functionality
- Add an example demonstrating usage with kickoff_for_each and kickoff_async (usage sketch below)
- Supports OpenAI, Anthropic, Bedrock, and DeepSeek providers
- Enables cost optimization for workflows with repetitive context
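A minimal sketch of how the new pieces might fit together. The parameter names come from this PR, but the exact defaults and semantics shown here are illustrative, as is the name of the provider-detection helper; the message format is the Anthropic-style cache_control content block that LiteLLM forwards to the provider:

```python
from crewai import LLM

# Opt in to prompt caching (parameter names from this PR; defaults and
# semantics shown here are a sketch, not the final implementation).
llm = LLM(
    model="anthropic/claude-3-5-sonnet-20241022",
    enable_prompt_caching=True,
    cache_control={"type": "ephemeral"},
)

# Hypothetical provider check; the real helper added by this PR may differ
# in name and placement.
def supports_prompt_caching(model: str) -> bool:
    prefixes = ("openai/", "gpt-", "anthropic/", "claude-", "bedrock/", "deepseek/")
    return model.lower().startswith(prefixes)

# For Anthropic models, cacheable context is marked with a cache_control
# content block, which LiteLLM passes through to the provider:
LONG_SHARED_CONTEXT = "..."  # large reference text reused across calls
messages = [
    {
        "role": "system",
        "content": [
            {
                "type": "text",
                "text": LONG_SHARED_CONTEXT,
                "cache_control": {"type": "ephemeral"},  # marks this block as cacheable
            }
        ],
    },
    {"role": "user", "content": "Summarize the key findings."},
]
```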
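A usage sketch with kickoff_for_each and kickoff_async, where repeated runs share the same cached context so only the first call pays full input-token cost (the agent and task definitions are illustrative):

```python
import asyncio
from crewai import Agent, Crew, Task, LLM

# LLM configured with the new caching flag (name from this PR; treat as a sketch).
llm = LLM(
    model="anthropic/claude-3-5-sonnet-20241022",
    enable_prompt_caching=True,
)

analyst = Agent(
    role="Research Analyst",
    goal="Answer questions against a large, shared reference document",
    backstory="You analyze the same reference material for every request.",
    llm=llm,
)

task = Task(
    description="Answer the question: {question}",
    expected_output="A concise answer grounded in the reference document.",
    agent=analyst,
)

crew = Crew(agents=[analyst], tasks=[task])

# kickoff_for_each re-runs the crew once per input; the shared context is
# cached after the first run, so subsequent runs hit the cache.
results = crew.kickoff_for_each(
    inputs=[
        {"question": "What are the key findings?"},
        {"question": "What are the main risks?"},
    ]
)

# The async variant benefits the same way.
async def run_async():
    return await crew.kickoff_async(inputs={"question": "Summarize section 2."})

asyncio.run(run_async())
```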
Addresses issue #3535, which requests prompt caching support in CrewAI.
Co-Authored-By: João <joao@crewai.com>