commit a395a5cde1
Author: Devin AI
feat: Add prompt caching support for AWS Bedrock and Anthropic models
- Add enable_prompt_caching and cache_control parameters to LLM class
- Implement cache_control formatting for Anthropic models via LiteLLM
- Add helper method to detect prompt caching support for different providers
- Create comprehensive tests covering all prompt caching functionality
- Add example demonstrating usage with kickoff_for_each and kickoff_async
- Supports OpenAI, Anthropic, Bedrock, and Deepseek providers
- Enables cost optimization for workflows with repetitive context

Addresses issue #3535 for prompt caching support in CrewAI

Co-Authored-By: João <joao@crewai.com>
2025-09-18 20:21:50 +00:00
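
The commit summary above mentions cache_control formatting for Anthropic models and a helper that detects provider support. A minimal sketch of what those two pieces might look like is below; the function names, the provider prefix list, and the message-formatting logic are assumptions for illustration, not the actual CrewAI implementation. The `{"type": "ephemeral"}` cache_control block is the content-block format Anthropic's prompt-caching API accepts and that LiteLLM forwards to the provider.

```python
# Hypothetical sketch of the prompt-caching helpers described in the
# commit message. Names and logic are assumptions, not CrewAI's code.

# Providers the commit lists as supported (prefix form used by LiteLLM
# model strings is an assumption here).
CACHE_SUPPORTED_PREFIXES = ("openai/", "anthropic/", "bedrock/", "deepseek/")


def supports_prompt_caching(model: str) -> bool:
    """Detect whether a model's provider is known to support prompt caching."""
    return model.startswith(CACHE_SUPPORTED_PREFIXES)


def apply_cache_control(messages, cache_control=None):
    """Rewrite the system message into Anthropic's content-block format,
    marking it cacheable so repeated context can be served from cache."""
    cache_control = cache_control or {"type": "ephemeral"}
    formatted = []
    for msg in messages:
        if msg["role"] == "system":
            formatted.append({
                "role": "system",
                "content": [{
                    "type": "text",
                    "text": msg["content"],
                    "cache_control": cache_control,
                }],
            })
        else:
            formatted.append(msg)  # user/assistant turns pass through unchanged
    return formatted


# Example: a large shared context reused across kickoff_for_each runs
msgs = apply_cache_control([
    {"role": "system", "content": "You are a research agent. <long shared context>"},
    {"role": "user", "content": "Summarize topic A."},
])
```

Caching the system/context block is what makes workflows with repetitive context cheaper: each `kickoff_for_each` or `kickoff_async` run pays full input-token cost only on the first request, and the cached prefix is reused after that.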