crewAI/src/crewai
feat: implement Google Batch Mode support for LLM calls (commit ae59abb052 by Devin AI)
- Add google-generativeai dependency to pyproject.toml
- Extend LLM class with batch mode parameters (batch_mode, batch_size, batch_timeout); a configuration sketch follows this list
- Implement batch request management methods for Gemini models
- Add batch-specific event types (BatchJobStartedEvent, BatchJobCompletedEvent, BatchJobFailedEvent); a listener sketch appears at the end of this entry
- Create comprehensive test suite for batch mode functionality
- Add example demonstrating batch mode usage with cost savings
- Support inline batch requests for up to 50% cost reduction on Gemini models
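
The configuration sketch referenced above is a minimal illustration of how the new parameters could be passed to crewAI's existing LLM class. Only the keyword names (batch_mode, batch_size, batch_timeout) come from this changeset; the values and the surrounding Agent setup are assumptions, and the final signature may differ.

    from crewai import Agent, LLM

    # The batch_* keywords come from this changeset and are not part of the
    # released LLM API; treat the names and values as illustrative.
    gemini_llm = LLM(
        model="gemini/gemini-1.5-flash",
        batch_mode=True,    # queue calls into a Gemini batch job instead of sending them one by one
        batch_size=20,      # number of inline requests bundled per batch
        batch_timeout=600,  # seconds to wait for a batch job before treating it as failed
    )

    agent = Agent(
        role="Research summarizer",
        goal="Summarize large document sets at low cost",
        backstory="Handles high-volume, latency-tolerant summarization work.",
        llm=gemini_llm,
    )

Requests issued through agents configured this way would then be collected into Gemini batch jobs rather than sent individually, which is where the cost reduction noted above comes from.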

Resolves issue #3116

Co-Authored-By: João <joao@crewai.com>
2025-07-07 22:01:56 +00:00
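
For the event types listed above, a hypothetical listener sketch is shown below. It assumes the new events are dispatched through crewAI's existing event bus and exported from crewai.utilities.events like the other event classes; neither assumption is stated in this changeset.

    from crewai.utilities.events import crewai_event_bus  # existing event bus
    # Assumed export location for the events added by this changeset.
    from crewai.utilities.events import (
        BatchJobCompletedEvent,
        BatchJobFailedEvent,
        BatchJobStartedEvent,
    )

    @crewai_event_bus.on(BatchJobStartedEvent)
    def on_batch_started(source, event):
        # Fires when a Gemini batch job is submitted.
        print(f"batch started: {event}")

    @crewai_event_bus.on(BatchJobCompletedEvent)
    def on_batch_completed(source, event):
        # Fires when all requests in the batch have returned.
        print(f"batch completed: {event}")

    @crewai_event_bus.on(BatchJobFailedEvent)
    def on_batch_failed(source, event):
        # Fires when the batch job errors out or exceeds batch_timeout.
        print(f"batch failed: {event}")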