- Add test for malformed template handling
- Add test for missing required parameters with proper error handling
- Improve test documentation and edge case coverage
Addresses GitHub review feedback from joaomdmoura and mplachta
Co-Authored-By: João <joao@crewai.com>
- Replace non-existent 'output_format' attribute with 'output_json'
- Update test_custom_format_instructions to use correct Pydantic model approach
- Enhance test_stop_words_configuration to properly test agent executor creation
- Update documentation example to use correct API (output_json instead of output_format)
- Validated API corrections work with local test script
Co-Authored-By: João <joao@crewai.com>
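As a rough illustration of the corrected API, structured output is requested through `output_json` with a Pydantic model rather than a non-existent `output_format` attribute (a minimal sketch; the agent and task wording here is made up):

```python
from pydantic import BaseModel
from crewai import Agent, Task

class Summary(BaseModel):
    title: str
    key_points: list[str]

writer = Agent(
    role="Writer",
    goal="Summarize text",
    backstory="An experienced technical writer.",
)

# output_json takes a Pydantic model class; the task result is validated
# against it instead of relying on a plain "output format" string.
task = Task(
    description="Summarize the provided article.",
    expected_output="A short structured summary.",
    agent=writer,
    output_json=Summary,
)
```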
- Fix undefined i18n variable error in test_i18n_slice_access method
- Replace Mock tools with proper BaseTool instances to fix validation errors
- Add comprehensive docstrings to all test methods explaining validation purpose
- Add pytest fixtures for test isolation with @pytest.fixture(autouse=True)
- Add parametrized tests for agent initialization patterns using @pytest.mark.parametrize
- Add negative test cases for default template behavior and incomplete templates
- Remove unused Mock and patch imports to fix lint errors
- Improve test organization by moving Pydantic models to top of file
- Add metadata (title, description, categoryId, priority) to documentation frontmatter
- Add showLineNumbers to all Python code blocks for better readability
- Add explicit security warnings about stop sequence pitfalls and template injection
- Improve header hierarchy consistency using #### for subsections
- Add cross-references between troubleshooting sections
- Document default parameter behaviors explicitly
- Add additional troubleshooting steps for debugging prompts
Addresses all actionable feedback from GitHub reviews by joaomdmoura and mplachta.
Fixes failing CI tests by using proper CrewAI API patterns and BaseTool instances.
Co-Authored-By: João <joao@crewai.com>
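The fixture and parametrization points above follow standard pytest patterns; a minimal sketch of the shape (test names and parameter values are illustrative, not the actual test file):

```python
import pytest
from crewai import Agent

@pytest.fixture(autouse=True)
def reset_state():
    # Runs around every test in the module to keep tests isolated.
    yield

@pytest.mark.parametrize(
    "role,goal",
    [
        ("Researcher", "Find facts"),
        ("Writer", "Summarize facts"),
    ],
)
def test_agent_initialization(role, goal):
    """Agents should initialize correctly for several role/goal patterns."""
    agent = Agent(role=role, goal=goal, backstory="Test backstory")
    assert agent.role == role
```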
- Remove unused imports (pytest, Crew) to fix lint errors
- Fix LiteAgent import path from crewai.lite_agent
- Resolves CI test collection error for Python 3.10
Co-Authored-By: João <joao@crewai.com>
- Create detailed guide explaining CrewAI's prompt generation system
- Document template system stored in translations/en.json
- Explain prompt assembly process using Prompts class
- Document LiteAgent prompt generation methods
- Show how to customize system/user prompts with templates
- Explain format parameter and structured output control
- Document stop words configuration through response_template
- Add practical examples for common customization scenarios
- Include test file validating all documentation examples
Addresses issue #3045: How system and user prompts are generated
Co-Authored-By: João <joao@crewai.com>
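A condensed sketch of the kind of customization the guide describes, assuming the `system_template`/`prompt_template`/`response_template` parameters it documents (the template strings below are illustrative):

```python
from crewai import Agent

# Custom templates control how the system and user prompts are assembled;
# response_template is also what drives stop-word behavior.
agent = Agent(
    role="Researcher",
    goal="Answer questions precisely",
    backstory="A careful analyst.",
    system_template="<|system|>{{ .System }}<|end|>",
    prompt_template="<|user|>{{ .Prompt }}<|end|>",
    response_template="<|assistant|>{{ .Response }}<|end|>",
)
```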
* Added Union of List of Task, None, NotSpecified
* Seems like a flaky test
* Fixed run time issue
* Fixed Linting issues
* fix pydantic error
* aesthetic changes
---------
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
- Introduced detailed documentation for integrations including Asana, Box, ClickUp, GitHub, Gmail, Google Calendar, Google Sheets, HubSpot, Jira, Linear, Notion, Salesforce, Shopify, Slack, Stripe, and Zendesk.
- Updated main docs.json to include a new "Integration Docs" section, organizing the documentation for easy access.
- Each integration includes setup instructions, available actions, and example tasks to streamline user onboarding and usage.
When running behind cloud-based security, users struggle to download LLM data from GitHub. Usually the following error is raised:
```
SSL certificate verification failed: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /BerriAI/litellm/main/model_prices_and_context_window.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1010)')))
Current CA bundle path: /usr/local/etc///.pem
```
This commit ensures the SSL config is being provided when requesting the data.
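A minimal sketch of the idea behind the fix: honor a user-provided CA bundle (via the standard environment variables, falling back to `certifi`) when fetching the model data. The helper name is made up for illustration:

```python
import os
import certifi
import requests

MODEL_DATA_URL = (
    "https://raw.githubusercontent.com/BerriAI/litellm/main/"
    "model_prices_and_context_window.json"
)

def fetch_model_data() -> dict:
    # Prefer an explicitly configured CA bundle (common behind corporate
    # proxies that re-sign TLS traffic), otherwise fall back to certifi.
    ca_bundle = (
        os.environ.get("REQUESTS_CA_BUNDLE")
        or os.environ.get("SSL_CERT_FILE")
        or certifi.where()
    )
    response = requests.get(MODEL_DATA_URL, verify=ca_bundle, timeout=30)
    response.raise_for_status()
    return response.json()
```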
* fix: possible fix for 'Thinking' getting stuck
* feat: add agent logging events for execution tracking
- Introduced AgentLogsStartedEvent and AgentLogsExecutionEvent to enhance logging capabilities during agent execution.
- Updated CrewAgentExecutor to emit these events at the start and during execution, respectively.
- Modified EventListener to handle the new logging events and format output accordingly in the console.
- Enhanced ConsoleFormatter to display agent logs in a structured format, improving visibility of agent actions and outputs.
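Consumers can subscribe to the new events through the event bus; a rough sketch, assuming the event classes are importable from CrewAI's events module (the import paths below are an assumption):

```python
# NOTE: import paths are assumed for illustration; check the events module
# of your installed CrewAI version for the real locations.
from crewai.utilities.events import (
    AgentLogsStartedEvent,
    AgentLogsExecutionEvent,
)
from crewai.utilities.events.base_event_listener import BaseEventListener

class AgentLogListener(BaseEventListener):
    """Prints structured agent logs as the executor emits them."""

    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(AgentLogsStartedEvent)
        def on_agent_logs_started(source, event):
            # Emitted by CrewAgentExecutor when the agent starts executing.
            print(f"[agent started] {event}")

        @crewai_event_bus.on(AgentLogsExecutionEvent)
        def on_agent_logs_execution(source, event):
            # Emitted during execution with the agent's intermediate output.
            print(f"[agent step] {event}")
```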
* drop emoji
* refactor: improve code structure and logging in LiteAgent and ConsoleFormatter
- Refactored imports in lite_agent.py for better readability.
- Enhanced guardrail property initialization in LiteAgent.
- Updated logging functionality to emit AgentLogsExecutionEvent for better tracking.
- Modified ConsoleFormatter to include tool arguments and final output in status updates.
- Improved output formatting for long text in ConsoleFormatter.
* fix tests
---------
Co-authored-by: Eduardo Chiarotti <dudumelgaco@hotmail.com>
- Bump CrewAI version from 0.126.0 to 0.130.0 in pyproject.toml and uv.lock.
- Update optional dependency 'crewai-tools' version from 0.46.0 to 0.47.1.
- Adjust dependency specifications in CLI templates to reflect the new version.
* Fix issue 2993: Prevent Flow status logs from hiding human input
- Add pause_live_updates() and resume_live_updates() methods to ConsoleFormatter
- Modify _ask_human_input() to pause Flow status updates during human input
- Add comprehensive tests for pause/resume functionality and integration
- Ensure Live session is properly managed during human input prompts
- Fix prevents Flow status logs from overwriting user input prompts
Fixes #2993
Co-Authored-By: João <joao@crewai.com>
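The pause/resume idea maps onto Rich's `Live.stop()`/`Live.start()`; a simplified sketch of suspending live rendering around the human input prompt (class and method names mirror the description above but are illustrative):

```python
from rich.live import Live

class ConsoleFormatterSketch:
    """Illustrative subset: pause live status updates around human input."""

    def __init__(self, live: Live):
        self._live = live

    def pause_live_updates(self) -> None:
        # Stop the Live session so its refreshes cannot overwrite the prompt.
        if self._live.is_started:
            self._live.stop()

    def resume_live_updates(self) -> None:
        if not self._live.is_started:
            self._live.start()

def ask_human_input(formatter: ConsoleFormatterSketch, prompt: str) -> str:
    formatter.pause_live_updates()
    try:
        return input(prompt)
    finally:
        formatter.resume_live_updates()
```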
* Fix lint: Remove unused pytest import
- Remove unused pytest import from test_console_formatter_pause_resume.py
- Fixes F401 lint error identified in CI
Co-Authored-By: João <joao@crewai.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: João <joao@crewai.com>
* Fix telemetry singleton pattern to respect dynamic environment variables
- Modified Telemetry.__init__ to prevent re-initialization with _initialized flag
- Updated _safe_telemetry_operation to check _is_telemetry_disabled() dynamically
- Added comprehensive tests for environment variables set after singleton creation
- Fixed singleton contamination in existing tests by adding proper reset
- Resolves issue #2945 where CREWAI_DISABLE_TELEMETRY=true was ignored when set after import
Co-Authored-By: João <joao@crewai.com>
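A minimal sketch of the pattern described above: the singleton keeps an `_initialized` flag, and the disabled check is re-evaluated from the environment on every operation instead of being captured once at import time (the real class has more moving parts):

```python
import os

class Telemetry:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._initialized = False
        return cls._instance

    def __init__(self) -> None:
        if self._initialized:  # prevent re-initialization on repeat construction
            return
        self._initialized = True

    @staticmethod
    def _is_telemetry_disabled() -> bool:
        # Read the env var at call time, so setting it after import still works.
        return os.getenv("CREWAI_DISABLE_TELEMETRY", "").lower() in ("true", "1")

    def _safe_telemetry_operation(self, operation) -> None:
        if self._is_telemetry_disabled():
            return
        try:
            operation()
        except Exception:
            pass  # telemetry must never break user code
```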
* Implement code review improvements
- Move _initialized flag to __new__ method for better encapsulation
- Add type hints to _safe_telemetry_operation method
- Consolidate telemetry execution checks into _should_execute_telemetry helper
- Add pytest fixtures to reduce test setup redundancy
- Enhanced documentation for singleton behavior
Co-Authored-By: João <joao@crewai.com>
* Fix mypy type-checker errors
- Add explicit bool type annotation to _initialized field
- Fix return value in task_started method to not return _safe_telemetry_operation result
- Simplify initialization logic to set _initialized once in __init__
Co-Authored-By: João <joao@crewai.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: João <joao@crewai.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
* feat: add guardrail support for Agents when using direct kickoff calls
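A minimal sketch of what an agent-level guardrail might look like, assuming the same `(bool, value)` contract used by task guardrails; the `guardrail` parameter name and the direct `kickoff` usage are assumptions based on this description:

```python
from typing import Any, Tuple
from crewai import Agent

def must_mention_source(output: Any) -> Tuple[bool, Any]:
    # Guardrail contract: (True, result) passes, (False, reason) asks for a retry.
    text = str(output)
    if "source:" in text.lower():
        return True, output
    return False, "Answer must cite a source"

# Assumed usage: the guardrail is attached to the agent and applied when
# the agent is kicked off directly (outside a Crew).
researcher = Agent(
    role="Researcher",
    goal="Answer with cited sources",
    backstory="A meticulous analyst.",
    guardrail=must_mention_source,
)
# result = researcher.kickoff("What is the capital of France?")
```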
* refactor: expose guardrail func in a proper utils file
* fix: resolve Self import on python 3.10
* test: fix structured tool tests
No tests were being executed from this file
* feat: support to run async tool
Some tools require async execution. This commit allows us to collect tool results from coroutines.
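A small sketch of the kind of tool this enables: a `BaseTool` whose `_run` is a coroutine, with the result collected by awaiting it (the example tool itself is made up):

```python
import asyncio
from crewai.tools import BaseTool

class SlowLookupTool(BaseTool):
    name: str = "slow_lookup"
    description: str = "Looks up a value from a slow upstream service."

    async def _run(self, query: str) -> str:
        # Coroutine-based tools can now be executed; their results are
        # collected by awaiting the returned coroutine.
        await asyncio.sleep(0.1)  # stand-in for an async HTTP call
        return f"result for {query}"
```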
* docs: add docs about asynchronous tool support
- Introduced a new documentation file for Integrations, detailing supported services and setup instructions.
- Updated the main docs.json to include the new "integrations" feature in the contextual options.
- Added several images related to integrations to enhance the documentation.
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
* docs: add organization management in our CLI docs
* feat: improve user feedback when user is not authenticated
* feat: improve logging about the current organization while publishing/installing a Tool
* feat: improve logging when Agent repository is not found during fetch
* fix linter offences
* test: fix auth token error
* docs: added Maxim support for Agent Observability
* Enhanced the Maxim integration doc page as per the GitHub PR reviewer bot suggestions
* Update maxim-observability.mdx
* Update maxim-observability.mdx
- Fixed Python version requirement (>=3.10)
- Added expected_output field in Task
- Removed marketing links and added GitHub link
* added Maxim in observability
---------
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
* feat: support listing, switching, and viewing your current organization
* feat: store the current org after logging in
* feat: filtering agents, tools and their actions by organization_uuid if present
* fix linter offenses
* refactor: propagate the current org through the Header instead of params
* refactor: rename org column name to ID instead of Handle
---------
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
Previously, we only supported tools from the crewai-tools open-source repository. Now, we're introducing improved support for private tool repositories.
* feat: add capability to see and expose public Tool classes
* feat: persist available Tools from repository on publish
* ci: explicitly ignore templates from ruff check
Ruff only applies --exclude to files it discovers itself, so we have to manually skip the same files excluded in `ruff.toml`.
* style: fix linter issues
* refactor: rename available_tools_classes to available_exports
* feat: provide more context about exportable tools
* feat: allow installing a Tool from PyPI
* test: fix tests
* feat: add env_vars attribute to BaseTool
* remove TODO: security check since we handle that on the enterprise side
* ci: support python 3.13 on CI
* docs: update docs about supported Python versions
* build: add requires-python <3.14
* build: explicit tokenizers dependency
Added tokenizers>=0.20.3 explicitly to ensure a version compatible with Python 3.13 is used.
* build: drop fastembed, it is no longer used
* build: attempt to build PyTorch on Python 3.13
* feat: upgrade fastavro, pyarrow and lancedb
* build: ensure tiktoken greater than 0.8.0 due to Python 3.13 compatibility