Commit Graph

40 Commits

Author SHA1 Message Date
Greyson LaLonde
6b9a797000 refactor: convert project module to metaclass with full typing 2025-10-17 12:50:43 -04:00
Lorenze Jay
6515c7faeb feat: azure native tests
* feat: add Azure AI Inference support and related tests

- Introduced the `azure-ai-inference` package with version `1.0.0b9` and its dependencies in `uv.lock` and `pyproject.toml`.
- Added new test files for Azure LLM functionality, including tests for Azure completion and tool handling.
- Implemented comprehensive test cases to validate Azure-specific behavior and integration with the CrewAI framework.
- Enhanced the testing framework to mock Azure credentials and ensure proper isolation during tests.

* feat: enhance AzureCompletion class with Azure OpenAI support

- Added support for the Azure OpenAI endpoint in the AzureCompletion class, allowing for flexible endpoint configurations.
- Implemented endpoint validation and correction to ensure proper URL formats for Azure OpenAI deployments.
- Enhanced error handling to provide clearer messages for common HTTP errors, including authentication and rate limit issues.
- Updated tests to validate the new endpoint handling and error messaging, ensuring robust integration with Azure AI Inference.
- Refactored parameter preparation to conditionally include the model parameter based on the endpoint type.
2025-10-17 11:36:29 -04:00
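A rough sketch of the endpoint validation and correction described in the commit above; the helper name and URL rules are assumptions, not the actual AzureCompletion code.

```python
# Hypothetical sketch of the endpoint normalization described in the commit;
# the real AzureCompletion implementation may differ. Assumes an Azure OpenAI
# endpoint must end in /openai/deployments/<deployment>, while an Azure AI
# Inference endpoint is used as-is.
from urllib.parse import urlparse


def normalize_azure_endpoint(endpoint: str, deployment: str | None = None) -> str:
    """Validate an Azure endpoint and correct Azure OpenAI URLs when needed."""
    parsed = urlparse(endpoint)
    if parsed.scheme != "https" or not parsed.netloc:
        raise ValueError(f"Invalid Azure endpoint: {endpoint!r}")

    # Azure OpenAI resources live under *.openai.azure.com and expect a
    # deployment-scoped path; other hosts are treated as Azure AI Inference.
    if parsed.netloc.endswith(".openai.azure.com"):
        if "/openai/deployments/" not in parsed.path:
            if not deployment:
                raise ValueError("Azure OpenAI endpoints require a deployment name")
            return f"{parsed.scheme}://{parsed.netloc}/openai/deployments/{deployment}"
    return endpoint
```

Under this shape, the model parameter would only be sent for Azure AI Inference endpoints, since Azure OpenAI encodes the deployment in the URL, which matches the conditional parameter preparation the commit mentions.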
Lorenze Jay
3b32793e78 feat: enhance BedrockCompletion class with advanced features
* feat: enhance BedrockCompletion class with advanced features and error handling

- Added support for guardrail configuration, additional model request fields, and custom response field paths in the BedrockCompletion class.
- Improved error handling for AWS exceptions and added token usage tracking with stop reason logging.
- Enhanced streaming response handling with comprehensive event management, including tool use and content block processing.
- Updated documentation to reflect new features and initialization parameters.
- Introduced a new test suite for BedrockCompletion to validate functionality and ensure robust integration with AWS Bedrock APIs.

* chore: add boto typing

* fix: use typing_extensions.Required for Python 3.10 compatibility

---------

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
2025-10-17 11:30:35 -04:00
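An illustrative sketch of the guardrail configuration and streaming event handling this commit describes, using boto3's Bedrock Runtime Converse API; the model ID, guardrail identifier, and region are placeholders, and the real BedrockCompletion handles more event types than shown here.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse_stream(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": "Summarize our Q3 results."}]}],
    inferenceConfig={"maxTokens": 512, "temperature": 0.3},
    guardrailConfig={"guardrailIdentifier": "gr-example", "guardrailVersion": "1"},
    additionalModelRequestFields={"top_k": 40},
)

text_chunks: list[str] = []
for event in response["stream"]:
    if "contentBlockDelta" in event:
        # Incremental text for the current content block.
        text_chunks.append(event["contentBlockDelta"]["delta"].get("text", ""))
    elif "messageStop" in event:
        # Stop reason logging, as described in the commit.
        print("stop reason:", event["messageStop"].get("stopReason"))
    elif "metadata" in event:
        # Token usage tracking arrives in the trailing metadata event.
        print("token usage:", event["metadata"].get("usage"))

print("".join(text_chunks))
```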
Lorenze Jay
02d7ce7621 feat: enhance GeminiCompletion class with client parameter support (#3717)
* feat: enhance GeminiCompletion class with client parameter support

- Added support for client_params in the GeminiCompletion class to allow for additional client configuration.
- Refactored client initialization into a dedicated method for improved parameter handling.
- Introduced a new method to retrieve client parameters, ensuring compatibility with the base class.
- Enhanced error handling during client initialization to provide clearer messages for missing configuration.
- Updated documentation to reflect the changes in client parameter usage.

* add optional dependencies

* refactor: update test fixture to mock GOOGLE_API_KEY

- Renamed the fixture from `mock_anthropic_api_key` to `mock_google_api_key` to reflect the change in the environment variable being mocked.
- This update ensures that all tests in the module can run with a mocked GOOGLE_API_KEY, improving test isolation and reliability.

* fix tests
2025-10-16 11:42:46 -07:00
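A hedged sketch of the client_params handling this commit describes, assuming the google-genai SDK; the class and method names below are illustrative rather than the actual GeminiCompletion implementation.

```python
import os
from typing import Any

from google import genai


class GeminiCompletionSketch:
    def __init__(self, model: str, client_params: dict[str, Any] | None = None) -> None:
        self.model = model
        self.client_params = client_params or {}
        self.client = self._initialize_client()

    def _get_client_params(self) -> dict[str, Any]:
        api_key = os.environ.get("GOOGLE_API_KEY") or os.environ.get("GEMINI_API_KEY")
        if not api_key:
            # Clearer message for missing configuration, per the commit.
            raise ValueError("GOOGLE_API_KEY (or GEMINI_API_KEY) must be set")
        # User-supplied client_params extend or override the defaults.
        return {"api_key": api_key, **self.client_params}

    def _initialize_client(self) -> genai.Client:
        # Client construction isolated in a dedicated method for parameter handling.
        return genai.Client(**self._get_client_params())
```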
Lorenze Jay
06a45b29db feat: enhance AnthropicCompletion class with additional client parame… (#3707)
* feat: enhance AnthropicCompletion class with additional client parameters and tool handling

- Added support for client_params in the AnthropicCompletion class to allow for additional client configuration.
- Refactored client initialization to use a dedicated method for retrieving client parameters.
- Implemented a new method to handle tool use conversation flow, ensuring proper execution and response handling.
- Introduced comprehensive test cases to validate the functionality of the AnthropicCompletion class, including tool use scenarios and parameter handling.

* drop print statements

* test: add fixture to mock ANTHROPIC_API_KEY for tests

- Introduced a pytest fixture to automatically mock the ANTHROPIC_API_KEY environment variable for all tests in the test_anthropic.py module.
- This change ensures that tests can run without requiring a real API key, improving test isolation and reliability.

* refactor: streamline streaming message handling in AnthropicCompletion class

- Removed the 'stream' parameter from the API call as it is set internally by the SDK.
- Simplified the handling of tool use events and response construction by extracting token usage from the final message.
- Enhanced the flow for managing tool use conversation, ensuring proper integration with the streaming API response.

* fix streaming here too

* fix: improve error handling in tool conversion for AnthropicCompletion class

- Enhanced exception handling during tool conversion by catching KeyError and ValueError.
- Added logging for conversion errors to aid in debugging and maintain robustness in tool integration.
2025-10-16 10:39:54 -07:00
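A simplified sketch of the tool-use conversation flow this commit adds, written directly against the anthropic SDK; the weather tool and its inline executor are invented for illustration, and the real AnthropicCompletion loop includes streaming and error handling not shown here.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

tools = [{
    "name": "get_weather",
    "description": "Return the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

messages = [{"role": "user", "content": "What's the weather in Lisbon?"}]
response = client.messages.create(
    model="claude-sonnet-4-20250514", max_tokens=1024, tools=tools, messages=messages
)

# Keep executing tools until the model stops asking for them.
while response.stop_reason == "tool_use":
    tool_results = []
    for block in response.content:
        if block.type == "tool_use":
            result = f"Sunny, 24C in {block.input['city']}"  # placeholder executor
            tool_results.append(
                {"type": "tool_result", "tool_use_id": block.id, "content": result}
            )
    messages.append({"role": "assistant", "content": response.content})
    messages.append({"role": "user", "content": tool_results})
    response = client.messages.create(
        model="claude-sonnet-4-20250514", max_tokens=1024, tools=tools, messages=messages
    )

print(response.content[0].text)
```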
Lorenze Jay
7351e4b0ef feat: bump versions to 1.0.0b2 (#3713) 2025-10-16 08:27:57 -07:00
Lorenze Jay
d9b68ddd85 moved stagehand as optional dep (#3712) 2025-10-15 15:50:59 -07:00
Lorenze Jay
2d5ad7a187 Lorenze/tools drop litellm (#3710)
* completely drop litellm and correctly pass config for qdrant

* feat: add support for additional embedding models in EmbeddingService

- Expanded the list of supported embedding models to include Google Vertex, Hugging Face, Jina, Ollama, OpenAI, Roboflow, Watson X, custom embeddings, Sentence Transformers, Text2Vec, OpenClip, and Instructor.
- This enhancement improves the versatility of the EmbeddingService by allowing integration with a wider range of embedding providers.

* fix: update collection parameter handling in CrewAIRagAdapter

- Changed the condition for setting vectors_config in the CrewAIRagAdapter to check for QdrantConfig instance instead of using hasattr. This improves type safety and ensures proper configuration handling for Qdrant integration.
2025-10-15 15:34:44 -07:00
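A minimal sketch of the hasattr-to-isinstance change described above; QdrantConfig and the parameter dict here are stand-ins for the real crewai classes.

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class QdrantConfig:
    vectors_config: dict[str, Any]


def collection_params(config: Any) -> dict[str, Any]:
    params: dict[str, Any] = {}
    # Before: `if hasattr(config, "vectors_config")` matched any object with the
    # attribute. After: an explicit isinstance check only accepts QdrantConfig,
    # which is the type-safety improvement the commit describes.
    if isinstance(config, QdrantConfig):
        params["vectors_config"] = config.vectors_config
    return params
```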
Lorenze Jay
a9aff87db3 feat: bump versions to 1.0.0b1 (#3706) 2025-10-14 11:15:31 -07:00
lorenzejay
c9ff264e8e Merge branch 'release/v1.0.0' of github.com:crewAIInc/crewAI into release/v1.0.0 2025-10-14 11:01:43 -07:00
lorenzejay
0816b810c7 Merge branch 'main' of github.com:crewAIInc/crewAI into release/v1.0.0 2025-10-14 11:01:24 -07:00
Lorenze Jay
96d142e353 feat: enhance OpenAICompletion class with additional client parameters (#3701)
* feat: enhance OpenAICompletion class with additional client parameters

- Added support for default_headers, default_query, and client_params in the OpenAICompletion class.
- Refactored client initialization to use a dedicated method for client parameter retrieval.
- Introduced new test cases to validate the correct usage of OpenAICompletion with various parameters.

* fix: correct test case for unsupported OpenAI model

- Updated the test_openai.py to ensure that the LLM instance is created before calling the method, maintaining proper error handling for unsupported models.
- This change ensures that the test accurately checks for the NotFoundError when an invalid model is specified.

* fix: enhance error handling in OpenAICompletion class

- Added specific exception handling for NotFoundError and APIConnectionError in the OpenAICompletion class to provide clearer error messages and improve logging.
- Updated the test case for unsupported models to ensure it raises a ValueError with the appropriate message when a non-existent model is specified.
- This change improves the robustness of the OpenAI API integration and enhances the clarity of error reporting.

* fix: improve test for unsupported OpenAI model handling

- Refactored the test case in test_openai.py to create the LLM instance after mocking the OpenAI client, ensuring proper error handling for unsupported models.
- This change enhances the clarity of the test by accurately checking for ValueError when a non-existent model is specified, aligning with recent improvements in error handling for the OpenAICompletion class.
2025-10-14 10:51:15 -07:00
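A sketch of how the new default_headers, default_query, and client_params could flow into the OpenAI SDK client; the builder function is illustrative, not the actual OpenAICompletion code.

```python
from typing import Any

from openai import OpenAI


def build_openai_client(
    api_key: str,
    default_headers: dict[str, str] | None = None,
    default_query: dict[str, Any] | None = None,
    client_params: dict[str, Any] | None = None,
) -> OpenAI:
    params: dict[str, Any] = {"api_key": api_key}
    if default_headers:
        params["default_headers"] = default_headers
    if default_query:
        params["default_query"] = default_query
    # Remaining client_params (e.g. base_url, timeout, max_retries) pass
    # straight through to the SDK constructor.
    params.update(client_params or {})
    return OpenAI(**params)


client = build_openai_client(
    api_key="sk-...",
    default_headers={"X-Team": "research"},
    default_query={"api-version": "preview"},
    client_params={"timeout": 30.0, "max_retries": 2},
)
```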
Greyson LaLonde
53b239c6df feat: improve event bus thread safety and async support
Add thread-safe, async-compatible event bus with read–write locking and
handler dependency ordering. Remove blinker dependency and implement
direct dispatch. Improve type safety, error handling, and deterministic
event synchronization.

Refactor tests to auto-wait for async handlers, ensure clean teardown,
and add comprehensive concurrency coverage. Replace thread-local state
in AgentEvaluator with instance-based locking for correct cross-thread
access. Enhance tracing reliability and event finalization.
2025-10-14 13:28:58 -04:00
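A toy sketch of the direct-dispatch idea in this commit; the real bus uses a read-write lock and dependency-ordered, async-aware handlers, while this example uses a single RLock to stay short.

```python
import threading
from collections import defaultdict
from typing import Any, Callable


class EventBus:
    def __init__(self) -> None:
        self._lock = threading.RLock()
        self._handlers: dict[type, list[Callable[[Any], None]]] = defaultdict(list)

    def on(self, event_type: type, handler: Callable[[Any], None]) -> None:
        with self._lock:  # writers take the lock to mutate the registry
            self._handlers[event_type].append(handler)

    def emit(self, event: Any) -> None:
        with self._lock:  # readers snapshot handlers, then dispatch directly
            handlers = list(self._handlers[type(event)])
        for handler in handlers:
            handler(event)


class TaskCompleted:
    def __init__(self, name: str) -> None:
        self.name = name


bus = EventBus()
bus.on(TaskCompleted, lambda e: print(f"done: {e.name}"))
bus.emit(TaskCompleted("research"))
```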
Lorenze Jay
cec4e4c2e9 Lorenze/bedrock llm (#3693)
* feat: add AWS Bedrock support and update dependencies

- Introduced BedrockCompletion class for AWS Bedrock integration in LLM.
- Added boto3 as a new dependency in both pyproject.toml and uv.lock.
- Updated LLM class to support Bedrock provider.
- Created new files for Bedrock provider implementation.

* using converse api

* converse

* linted

* refactor: update BedrockCompletion class to improve parameter handling

- Changed max_tokens from a fixed integer to an optional integer.
- Simplified model ID assignment by removing the inference profile mapping method.
- Cleaned up comments and unnecessary code related to tool specifications and model-specific parameters.
2025-10-13 20:42:34 -07:00
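A brief sketch of a single-turn call through the Converse API that this commit moves BedrockCompletion onto, including the now-optional max_tokens; the model ID and region are placeholders.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

inference_config: dict = {"temperature": 0.2}
max_tokens = None  # optional rather than a fixed integer, per the refactor
if max_tokens is not None:
    inference_config["maxTokens"] = max_tokens

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": "Name three AWS regions."}]}],
    inferenceConfig=inference_config,
)

print(response["output"]["message"]["content"][0]["text"])
print("usage:", response["usage"])  # input/output token counts
```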
Greyson Lalonde
6c5ac13242 Merge branch 'main' into release/v1.0.0 2025-10-12 02:07:55 -04:00
Lorenze Jay
0648e88f22 feat: bump versions to 1.0.0a4 (#3686) 2025-10-09 11:05:05 -07:00
Greyson LaLonde
06f7d224c0 fix: remove duplicate line and add explicit env var 2025-10-09 12:44:29 -04:00
lorenzejay
faddcd0de7 Merge branch 'main' of github.com:crewAIInc/crewAI into release/v1.0.0 2025-10-09 09:30:27 -07:00
Greyson LaLonde
51e8fb1f90 fix: resolve linting errors across codebase 2025-10-08 20:18:01 -04:00
Greyson LaLonde
f094df6015 chore: add noqa for intentional print in printer utility 2025-10-08 17:33:13 -04:00
Greyson LaLonde
458f56fb33 fix: replace print statements with logger in agent and memory handling 2025-10-08 17:28:50 -04:00
Greyson LaLonde
79d4e42e62 chore: merge main into release/v1.0.0
Resolved merge conflicts:
- agent.py: kept main's docker validation, restored get_platform_tools method, fixed KnowledgeRetrievalStartedEvent position
- agent_utils.py: merged both LLM and LiteAgent imports
- test_cache_hitting.yaml: accepted main's test recordings
- Removed 4 deprecated test cassettes from old tests/ directory

Key changes from main:
- Enhanced knowledge event handling with from_agent/from_task parameters
- Updated LLMMessage typing in kickoff methods
- Added guardrail parameter to async kickoff
- Restored runtime validation in guardrail.py
2025-10-08 16:53:36 -04:00
Lucas Gomide
8b9186311f feat: introduce trigger listing and execution commands for local development (#3643) 2025-10-06 10:20:49 -04:00
Greyson LaLonde
38bc5a9dc4 fix: match tag and release title, ignore devtools build for pypi 2025-10-03 19:18:19 -04:00
Lorenze Jay
0b305dabc9 feat: bump versions to 1.0.0a3
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-03 18:51:27 -04:00
Greyson LaLonde
ebeed0b752 feat: allow release after tag 2025-10-03 18:49:58 -04:00
Greyson LaLonde
5865d39137 fix: ensure dep refs are updated for devtools 2025-10-03 18:32:31 -04:00
Greyson LaLonde
e529ebff2b feat: add base devtooling 2025-10-03 18:23:54 -04:00
Lorenze Jay
126b91eab3 Lorenze/native inference sdks (#3619)
* ruff linted

* using native sdks with litellm fallback

* drop exa

* drop print on completion

* Refactor LLM and utility functions for type consistency

- Updated `max_tokens` parameter in `LLM` class to accept `float` in addition to `int`.
- Modified `create_llm` function to ensure consistent type hints and return types, now returning `LLM | BaseLLM | None`.
- Adjusted type hints for various parameters in `create_llm` and `_llm_via_environment_or_fallback` functions for improved clarity and type safety.
- Enhanced test cases to reflect changes in type handling and ensure proper instantiation of LLM instances.

* fix agent_tests

* fix litellm tests and usagemetrics fix

* drop print

* Refactor LLM event handling and improve test coverage

- Removed commented-out event emission for LLM call failures in `llm.py`.
- Added `from_agent` parameter to `CrewAgentExecutor` for better context in LLM responses.
- Enhanced test for LLM call failure to simulate OpenAI API failure and updated assertions for clarity.
- Updated agent and task ID assertions in tests to ensure they are consistently treated as strings.

* fix test_converter

* fixed tests/agents/test_agent.py

* Refactor LLM context length exception handling and improve provider integration

- Renamed `LLMContextLengthExceededException` to `LLMContextLengthExceededExceptionError` for clarity and consistency.
- Updated LLM class to pass the provider parameter correctly during initialization.
- Enhanced error handling in various LLM provider implementations to raise the new exception type.
- Adjusted tests to reflect the updated exception name and ensure proper error handling in context length scenarios.

* Enhance LLM context window handling across providers

- Introduced CONTEXT_WINDOW_USAGE_RATIO to adjust context window sizes dynamically for Anthropic, Azure, Gemini, and OpenAI LLMs.
- Added validation for context window sizes in Azure and Gemini providers to ensure they fall within acceptable limits.
- Updated context window size calculations to use the new ratio, improving consistency and adaptability across different models.
- Removed hardcoded context window sizes in favor of ratio-based calculations for better flexibility.

* fix test agent again

* fix test agent

* feat: add native LLM providers for Anthropic, Azure, and Gemini

- Introduced new completion implementations for Anthropic, Azure, and Gemini, integrating their respective SDKs.
- Added utility functions for tool validation and extraction to support function calling across LLM providers.
- Enhanced context window management and token usage extraction for each provider.
- Created a common utility module for shared functionality among LLM providers.

* chore: update dependencies and improve context management

- Removed direct dependency on `litellm` from the main dependencies and added it under extras for better modularity.
- Updated the `litellm` dependency specification to allow for greater flexibility in versioning.
- Refactored context length exception handling across various LLM providers to use a consistent error class.
- Enhanced platform-specific dependency markers for NVIDIA packages to ensure compatibility across different systems.

* refactor(tests): update LLM instantiation to include is_litellm flag in test cases

- Modified multiple test cases in test_llm.py to set the is_litellm parameter to True when instantiating the LLM class.
- This change ensures that the tests are aligned with the latest LLM configuration requirements and improves consistency across test scenarios.
- Adjusted relevant assertions and comments to reflect the updated LLM behavior.

* linter

* linted

* revert constants

* fix(tests): correct type hint in expected model description

- Updated the expected description in the test_generate_model_description_dict_field function to use 'Dict' instead of 'dict' for consistency with type hinting conventions.
- This change ensures that the test accurately reflects the expected output format for model descriptions.

* refactor(llm): enhance LLM instantiation and error handling

- Updated the LLM class to include validation for the model parameter, ensuring it is a non-empty string.
- Improved error handling by logging warnings when the native SDK fails, allowing for a fallback to LiteLLM.
- Adjusted the instantiation of LLM in test cases to consistently include the is_litellm flag, aligning with recent changes in LLM configuration.
- Modified relevant tests to reflect these updates, ensuring better coverage and accuracy in testing scenarios.

* fixed test

* refactor(llm): enhance token usage tracking and add copy methods

- Updated the LLM class to track token usage and log callbacks in streaming mode, improving monitoring capabilities.
- Introduced shallow and deep copy methods for the LLM instance, allowing for better management of LLM configurations and parameters.
- Adjusted test cases to instantiate LLM with the is_litellm flag, ensuring alignment with recent changes in LLM configuration.

* refactor(tests): reorganize imports and enhance error messages in test cases

- Cleaned up import statements in test_crew.py for better organization and readability.
- Enhanced error messages in test cases to use `re.escape` for improved regex matching, ensuring more robust error handling.
- Adjusted comments for clarity and consistency across test scenarios.
- Ensured that all necessary modules are imported correctly to avoid potential runtime issues.
2025-10-03 14:32:35 -07:00
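An illustrative sketch of the native-SDK-first, LiteLLM-fallback flow this commit describes; `native_client.complete` is a hypothetical stand-in, and the real crewai LLM class routes through provider-specific completion classes instead.

```python
import logging
from typing import Any

logger = logging.getLogger(__name__)


def call_llm(prompt: str, native_client: Any, model: str, is_litellm: bool = False) -> str:
    if not is_litellm:
        try:
            return native_client.complete(model=model, prompt=prompt)  # hypothetical API
        except Exception as exc:  # noqa: BLE001 - fall back on any native failure
            logger.warning("Native SDK call failed (%s); falling back to LiteLLM", exc)

    # Imported lazily since litellm now lives under extras rather than core deps.
    import litellm

    response = litellm.completion(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content
```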
Greyson LaLonde
428810bd6f feat: bump version to 1.0.0a2 2025-10-02 16:38:17 -04:00
Lucas Gomide
e73c5887d9 fix: properly handle anyOf/oneOf/allOf schema props
Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
2025-10-02 14:32:17 -04:00
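A toy sketch of the kind of composite-schema flattening this fix implies; it shows the general idea (merge allOf, pick a concrete branch from anyOf/oneOf) rather than the actual converter logic.

```python
from typing import Any


def resolve_composite(schema: dict[str, Any]) -> dict[str, Any]:
    """Collapse anyOf/oneOf to the first non-null option and merge allOf."""
    if "allOf" in schema:
        merged: dict[str, Any] = {}
        for part in schema["allOf"]:
            merged.update(resolve_composite(part))
        return merged
    for key in ("anyOf", "oneOf"):
        if key in schema:
            options = [s for s in schema[key] if s.get("type") != "null"]
            return resolve_composite(options[0]) if options else {}
    return schema


print(resolve_composite({"anyOf": [{"type": "null"}, {"type": "string"}]}))
# -> {'type': 'string'}
```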
Mike Plachta
c5ac5fa78a feat: add required env var validation for brightdata
Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
2025-10-02 12:35:29 -04:00
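A minimal sketch of the required-variable check this commit adds for the BrightData tools; the variable name below is an assumption, not the documented one.

```python
import os


def validate_brightdata_env() -> None:
    required = ("BRIGHT_DATA_API_KEY",)  # hypothetical variable name
    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        raise ValueError(
            "Missing required environment variable(s) for BrightData: "
            + ", ".join(missing)
        )
```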
Greyson LaLonde
5456c80556 chore: apply linting fixes to crewai-tools 2025-10-02 11:06:38 -04:00
tonykipkemboi
9e1dae0746 docs: parity for all translations 2025-10-01 14:11:23 -04:00
Lorenze Jay
0fe9352149 chore: bump version to 1.0.0a1 across all packages
- Updated version to 1.0.0a1 in pyproject.toml for crewai and crewai-tools
- Adjusted version in __init__.py files for consistency
2025-09-28 11:53:35 -04:00
Greyson LaLonde
417a4e3d91 chore: ci publish and pin versions 2025-09-28 00:59:25 -04:00
Greyson LaLonde
68dce92003 chore: update CI workflows and docs for monorepo structure
* chore: update CI workflows and docs for monorepo structure

* fix: actions syntax
2025-09-28 00:28:49 -04:00
Greyson LaLonde
289b90f00a feat: add crewai-tools workspace and fix tests/dependencies
* feat: add crewai-tools workspace structure

* Squashed 'temp-crewai-tools/' content from commit 9bae5633

git-subtree-dir: temp-crewai-tools
git-subtree-split: 9bae56339096cb70f03873e600192bd2cd207ac9

* feat: configure crewai-tools workspace package with dependencies

* fix: apply ruff auto-formatting to crewai-tools code

* chore: update lockfile

* fix: don't allow tool tests yet

* fix: comment out extra pytest flags for now

* fix: remove conflicting conftest.py from crewai-tools tests

* fix: resolve dependency conflicts and test issues

- Pin vcrpy to 7.0.0 to fix pytest-recording compatibility
- Comment out types-requests to resolve urllib3 conflict
- Update requests requirement in crewai-tools to >=2.32.0
2025-09-28 00:05:42 -04:00
Greyson LaLonde
c591c1ac87 chore: update python version to 3.13 and package metadata 2025-09-27 23:09:52 -04:00
Greyson LaLonde
86f0dfc2d7 feat: monorepo restructure and test/ci updates
- Add crewai workspace member
- Fix vcr cassette paths and restore test dirs
- Resolve ci failures and update linter/pytest rules
2025-09-27 22:53:02 -04:00