feat: implement Google Batch Mode support for LLM calls

- Add google-generativeai dependency to pyproject.toml
- Extend LLM class with batch mode parameters (batch_mode, batch_size, batch_timeout)
- Implement batch request management methods for Gemini models
- Add batch-specific event types (BatchJobStartedEvent, BatchJobCompletedEvent, BatchJobFailedEvent); see the event listener sketch below
- Create comprehensive test suite for batch mode functionality
- Add an example demonstrating batch mode usage with cost savings (usage sketch below)
- Support inline batch requests for up to 50% cost reduction on Gemini models
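
A minimal usage sketch of the new parameters. The LLM class and its call()
method are existing crewAI API; batch_mode, batch_size, and batch_timeout are
the parameters this commit adds, while the model string, values, and batching
behavior shown are illustrative assumptions, not the final interface:

    from crewai import LLM

    llm = LLM(
        model="gemini/gemini-1.5-flash",  # batch mode targets Gemini models
        batch_mode=True,       # queue calls into a Gemini batch job
        batch_size=100,        # assumed: max requests per submitted batch
        batch_timeout=3600,    # assumed: seconds to wait for completion
    )

    # Calls are collected and submitted as one inline batch request,
    # trading latency for up to ~50% lower cost on Gemini models.
    response = llm.call("Summarize the quarterly report in three bullets.")
    print(response)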
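
The new event types can be observed through crewAI's event bus. The handler
registration pattern below is the existing crewai_event_bus API; the import
path for the batch events and their payload shapes are assumptions:

    from crewai.utilities.events import crewai_event_bus
    # Hypothetical import path; the commit names the classes, not the module.
    from crewai.utilities.events.llm_events import (
        BatchJobStartedEvent,
        BatchJobCompletedEvent,
        BatchJobFailedEvent,
    )

    @crewai_event_bus.on(BatchJobStartedEvent)
    def on_batch_started(source, event):
        print(f"batch started: {event}")

    @crewai_event_bus.on(BatchJobCompletedEvent)
    def on_batch_completed(source, event):
        print(f"batch completed: {event}")

    @crewai_event_bus.on(BatchJobFailedEvent)
    def on_batch_failed(source, event):
        print(f"batch failed: {event}")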

Resolves issue #3116

Co-Authored-By: João <joao@crewai.com>
Author: Devin AI
Date:   2025-07-07 22:01:56 +00:00
Parent: 34a03f882c
Commit: ae59abb052

4 changed files with 519 additions and 0 deletions

pyproject.toml

@@ -39,6 +39,7 @@ dependencies = [
     "tomli>=2.0.2",
     "blinker>=1.9.0",
     "json5>=0.10.0",
+    "google-generativeai>=0.8.0",
 ]
 [project.urls]
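
As a sanity check that the new dependency resolves, the SDK can be imported
and exercised directly. This is standard google-generativeai usage and sits
below the batching layer this commit adds:

    import os

    import google.generativeai as genai

    # Configure the SDK with an API key from the environment.
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])

    # One-off, non-batched request to confirm the dependency works.
    model = genai.GenerativeModel("gemini-1.5-flash")
    print(model.generate_content("Say hello.").text)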