This commit fixes the validation error that occurred when using the
google-generativeai embedder provider with a flat configuration format.
Changes:
1. Made the 'config' field optional in GenerativeAiProviderSpec by adding
'total=False' and marking 'provider' as Required, consistent with other
provider specs like VertexAIProviderSpec.
2. Added normalization in the Crew class to automatically convert flat
embedder configs to nested format before validation (see the sketch
after this list). This allows users to use either format:
- Flat: {'provider': 'google-generativeai', 'api_key': '...', 'model_name': '...'}
- Nested: {'provider': 'google-generativeai', 'config': {'api_key': '...', 'model_name': '...'}}
3. Updated the embedder factory to support both flat and nested config
formats by checking for the presence of 'config' key and extracting
config fields accordingly.
4. Added comprehensive tests to verify both formats work correctly:
- Test for flat config format (the issue reported in #3741)
- Test for nested config format (recommended format)
- Test for TypedDict validation
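A minimal sketch of the normalization in change 2, assuming a plain
dict-based embedder spec; the helper name and placement are
illustrative, not the exact implementation:

    def _normalize_embedder_config(embedder: dict) -> dict:
        """Wrap flat provider fields under a nested 'config' key."""
        if "provider" not in embedder or "config" in embedder:
            return embedder  # already nested, or not a provider spec
        extra = {k: v for k, v in embedder.items() if k != "provider"}
        if not extra:
            return embedder  # nothing to nest
        return {"provider": embedder["provider"], "config": extra}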
Fixes #3741
Co-Authored-By: João <joao@crewai.com>
Fixes nested boolean conditions being flattened in @listen, @start, and @router decorators. The or_() and and_() combinators now preserve their nested structure using a "conditions" key instead of flattening to a list. Added recursive evaluation logic to properly handle complex patterns like or_(and_(A, B), and_(C, D)).
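A hedged sketch of the recursive evaluation: leaves are method-name
strings, and combinators are dicts whose nested children live under the
"conditions" key (the "type" values "OR"/"AND" are assumptions):

    def evaluate_condition(condition, fired_methods: set) -> bool:
        # Leaf: a method name is satisfied once that method has fired.
        if isinstance(condition, str):
            return condition in fired_methods
        # Branch: children stay nested under "conditions", so patterns
        # like or_(and_(A, B), and_(C, D)) evaluate correctly.
        children = condition["conditions"]
        if condition["type"] == "OR":
            return any(evaluate_condition(c, fired_methods) for c in children)
        return all(evaluate_condition(c, fired_methods) for c in children)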
- Bumped the `crewai` version in `__init__.py` to 0.203.1.
- Updated the dependency versions in the crew, flow, and tool templates' `pyproject.toml` files to reflect the new `crewai` version.
- Revised the security policy to clarify the reporting process for vulnerabilities.
- Added detailed sections on scope, reporting requirements, and our commitment to addressing reported issues.
- Emphasized the importance of not disclosing vulnerabilities publicly and provided guidance on how to report them securely.
- Included a new section on coordinated disclosure and safe harbor provisions for ethical reporting.
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
- Updated the `crewai-tools` dependency in `pyproject.toml` and `uv.lock` to version 0.76.0.
- Updated the `crewai` version in `__init__.py` to 0.203.0.
- Updated the dependency versions in the crew, flow, and tool templates to reflect the new `crewai` version.
- Introduced a new documentation page detailing how to capture telemetry logs from CrewAI AMP deployments.
- Updated the main documentation to include the new guide in the enterprise section.
- Added prerequisites and step-by-step instructions for configuring the OTEL collector.
- Included an example image of OTEL log collection captured in Datadog.
* feat: enhance knowledge event handling in Agent class
- Updated the Agent class to include task context in knowledge retrieval events.
- Emitted new events for knowledge retrieval and query processes, capturing task and agent details.
- Refactored knowledge event classes to inherit from a base class for better structure and maintainability (sketched below).
- Added tracing for knowledge events in the TraceCollectionListener to improve observability.
This change improves the tracking and management of knowledge queries and retrievals, facilitating better debugging and performance monitoring.
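A rough sketch of the refactored hierarchy, assuming pydantic-style
event classes; class and field names are illustrative:

    from typing import Optional
    from pydantic import BaseModel

    class KnowledgeEventBase(BaseModel):
        """Shared task/agent context for knowledge events (sketch)."""
        agent_role: str
        task_id: Optional[str] = None

    class KnowledgeRetrievalStartedEvent(KnowledgeEventBase):
        type: str = "knowledge_retrieval_started"

    class KnowledgeQueryStartedEvent(KnowledgeEventBase):
        query: str
        type: str = "knowledge_query_started"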
* refactor: remove task_id from knowledge event emissions in Agent class
- Removed the task_id parameter from various knowledge event emissions in the Agent class to streamline event handling.
- This change simplifies the event structure and focuses on the essential context of knowledge retrieval and query processes.
This refactor enhances the clarity of knowledge events and aligns with the recent improvements in event handling.
* surface association for guardrail events
* fix: improve LLM selection logic in converter
- Updated the logic for selecting the LLM in the convert_with_instructions function to handle cases where the agent may not have a function_calling_llm attribute.
- This change ensures that the converter can still function correctly by falling back to the standard LLM if necessary, enhancing robustness and preventing potential errors.
This fix improves the reliability of the conversion process when working with different agent configurations.
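A one-line sketch of the fallback, using the attribute names given
above:

    # Prefer the function-calling LLM when present; otherwise fall back
    # to the agent's standard LLM.
    llm = getattr(agent, "function_calling_llm", None) or agent.llm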
* fix test
* fix: enforce valid LLM instance requirement in converter
- Updated the convert_with_instructions function to ensure that a valid LLM instance is provided by the agent.
- If neither function_calling_llm nor the standard llm is available, a ValueError is raised, enhancing error handling and robustness.
- Improved error messaging for conversion failures to provide clearer feedback on issues encountered during the conversion process.
This change strengthens the reliability of the conversion process by ensuring that agents are properly configured with a valid LLM.
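A sketch of the stricter check; the error message is paraphrased:

    llm = getattr(agent, "function_calling_llm", None) or getattr(agent, "llm", None)
    if llm is None:
        # Fail fast instead of attempting conversion without an LLM.
        raise ValueError(
            "Agent must be configured with a valid LLM to convert output."
        )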
- Introduced a new documentation page for CrewAI Tracing, detailing setup and usage.
- Updated the main documentation to include the new tracing page in the observability section.
- Added example code snippets for enabling tracing in both Crews and Flows.
- Included instructions for global tracing configuration via environment variables.
- Added a new image for the CrewAI Tracing interface.
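A short usage sketch matching what the docs describe; the parameter and
environment-variable names are assumptions unless confirmed on the new
page:

    from crewai import Crew

    crew = Crew(
        agents=[...],  # your agents
        tasks=[...],   # your tasks
        tracing=True,  # per-crew tracing (parameter name assumed)
    )

    # Global tracing via environment variable (name assumed):
    #   export CREWAI_TRACING_ENABLED=true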
- prefix provider env vars with embeddings_
- rename watson → watsonx in providers
- add deprecation warning and alias for legacy 'watson' key (to be removed in v1.0.0)
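An illustrative sketch of the alias, assuming resolution happens at
provider lookup; the warning text is paraphrased:

    import warnings

    def _resolve_embedding_provider(name: str) -> str:
        if name == "watson":
            warnings.warn(
                "'watson' is deprecated; use 'watsonx'. The alias will "
                "be removed in v1.0.0.",
                DeprecationWarning,
                stacklevel=2,
            )
            return "watsonx"
        return name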
- Bump CrewAI version to 0.201.0 in __init__.py
- Update dependency versions in pyproject.toml for crew, flow, and tool templates to require CrewAI 0.201.0
- Remove unnecessary blank line in pyproject.toml
- update imports and include handling for chromadb v1.1.0
- fix mypy and typing_compat issues (required, typeddict, voyageai)
- refine EmbedderConfig typing and allow base provider instances
- handle mem0 as special case for external memory storage
- bump tools and clean up redundant deps
- introduce BaseEmbeddingsProvider and a helper for embedding functions (sketched after this list)
- add core embedding types and migrate providers, factory, and storage modules
- remove unused type aliases and fix pydantic schema error
- update providers with env var support and related fixes
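A hypothetical shape for the new base class and helper; everything
beyond the BaseEmbeddingsProvider name is an assumption:

    from abc import ABC, abstractmethod
    from typing import Callable, List

    class BaseEmbeddingsProvider(ABC):
        """Common interface for embedding providers (sketch)."""

        @abstractmethod
        def embedding_function(self) -> Callable[[List[str]], List[List[float]]]:
            """Return a callable mapping texts to embedding vectors."""

    def build_embedder(provider: BaseEmbeddingsProvider):
        """Helper that extracts the embedding function from a provider."""
        return provider.embedding_function()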
- Add pydantic-settings>=2.10.1 dependency for configuration management
- Update pydantic to 2.11.9 and python-dotenv to 1.1.1
- Migrate from deprecated tool.uv.dev-dependencies to dependency-groups.dev format
- Remove unnecessary dev dependencies: pillow, cairosvg
- Update all dev tooling to latest versions
- Remove duplicate python-dotenv from dev dependencies
- add batch_size field to BaseRagConfig (default=100)
- update chromadb/qdrant clients and factories to use batch_size (see the sketch after this list)
- extract and filter batch_size from embedder config in KnowledgeStorage
- fix large csv files exceeding embedder token limits (#3574)
- remove unneeded conditional for type
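An illustrative batching loop for the vector-store clients; function
and argument names are assumptions, with the default batch_size taken
from the change above:

    def upsert_in_batches(collection, documents, embeddings, ids,
                          batch_size: int = 100):
        """Write documents in fixed-size batches so a single call never
        exceeds the embedder's token limits."""
        for i in range(0, len(documents), batch_size):
            collection.upsert(
                documents=documents[i : i + batch_size],
                embeddings=embeddings[i : i + batch_size],
                ids=ids[i : i + batch_size],
            )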
Co-authored-by: Vini Brasil <vini@hey.com>
- support nested config format with an EmbedderConfig TypedDict (sketched below)
- fix parsing for model/model_name compatibility
- add validation, typing_extensions, and improved type hints
- enhance embedding factory with env var injection and provider support
- add tests for openai, azure, and all embedding providers
- misc fixes: test file rename, updated mocking patterns
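A rough sketch of the TypedDict spec and the model/model_name shim;
field names beyond 'provider' and 'config' are assumptions:

    from typing_extensions import Required, TypedDict

    class EmbedderProviderConfig(TypedDict, total=False):
        api_key: str
        model: str
        model_name: str

    class EmbedderConfig(TypedDict, total=False):
        provider: Required[str]
        config: EmbedderProviderConfig

    def resolve_model(config: EmbedderProviderConfig):
        # Accept either key for backward compatibility.
        return config.get("model") or config.get("model_name")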
* fix: Make 'ready' parameter optional in _create_reasoning_plan function
This PR fixes Issue #3466, where the LLM would call the _create_reasoning_plan
function without the 'ready' argument. The fix makes the 'ready' parameter
optional with a default value of False, which allows the function to be called
with only the 'plan' argument.
Fixes #3466
* Change default value of 'ready' parameter to True
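The resulting signature, as a sketch (other parameters and the body are
elided):

    def _create_reasoning_plan(plan: str, ready: bool = True):
        """Callable with only 'plan'; 'ready' defaults to True after the
        follow-up commit."""
        ...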
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>