Compare commits

297 Commits

Author SHA1 Message Date
Devin AI
8105e74267 Fix agent loop not stopping after max iterations reached
Fixes #3847

The agent execution loop was not stopping after max_iter was reached.
After calling handle_max_iterations_exceeded(), the loop continued and
made additional LLM calls, overwriting the formatted answer.

Changes:
- Modified handle_max_iterations_exceeded() to always return AgentFinish
  instead of AgentAction | AgentFinish, ensuring a final answer is always
  produced when max iterations are exceeded
- Added early return in _invoke_loop() after handle_max_iterations_exceeded()
  to prevent additional LLM calls
- Updated test_agent_custom_max_iterations to expect 2 LLM calls instead of 3
  (1 initial + 1 from the max_iter handler; the third call came from the extra-call bug)
- Added test_agent_max_iterations_zero to verify no get_llm_response calls
  when max_iter=0
- Added test_agent_max_iterations_one_stops_after_two_calls to verify exactly
  2 LLM calls with max_iter=1
- Fixed type annotations to properly handle AgentAction | AgentFinish | None
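A minimal, runnable sketch of the control flow the bullets above describe (names like handle_max_iterations_exceeded and AgentFinish come from this message; the surrounding executor structure is assumed):

    from collections.abc import Callable
    from dataclasses import dataclass

    @dataclass
    class AgentAction:
        tool: str

    @dataclass
    class AgentFinish:
        output: str

    def handle_max_iterations_exceeded(last: AgentAction | AgentFinish | None) -> AgentFinish:
        # Always return AgentFinish so a final answer is always produced.
        return AgentFinish(output=f"Stopped at max iterations; last step: {last!r}")

    def invoke_loop(get_llm_response: Callable[[], AgentAction | AgentFinish], max_iter: int) -> AgentFinish:
        answer: AgentAction | AgentFinish | None = None
        iterations = 0
        while not isinstance(answer, AgentFinish):
            if iterations >= max_iter:
                # Early return: no further LLM calls can overwrite the answer.
                return handle_max_iterations_exceeded(answer)
            answer = get_llm_response()
            iterations += 1
        return answer

With max_iter=0 this makes zero get_llm_response calls, matching the new test.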

Co-Authored-By: João <joao@crewai.com>
2025-11-06 15:16:12 +00:00
Greyson LaLonde
e4cc9a664c fix: handle unpickleable values in flow state
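One common shape for this kind of fix, sketched here as an assumption (the actual diff is not shown in this log): probe each state value for picklability and fall back instead of raising.

    import copy
    import pickle
    from typing import Any

    def safe_copy_state(state: dict[str, Any]) -> dict[str, Any]:
        copied: dict[str, Any] = {}
        for key, value in state.items():
            try:
                pickle.dumps(value)              # probe: can this value be pickled?
                copied[key] = copy.deepcopy(value)
            except Exception:
                copied[key] = value              # keep a shared reference rather than fail
        return copied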
2025-11-06 01:29:21 -05:00
Greyson LaLonde
7e6171d5bc fix: ensure lite agents course-correct on validation errors
* fix: ensure lite agents course-correct on validation errors

* chore: update cassettes and test expectations

* fix: ensure multiple guardrails propagate
2025-11-05 19:02:11 -05:00
Greyson LaLonde
61ad1fb112 feat: add support for llm message interceptor hooks 2025-11-05 11:38:44 -05:00
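The log gives no API details, so this is a purely hypothetical shape for such a hook: a callable that receives the outgoing message list and returns a (possibly modified) list, applied in order before the LLM call.

    from collections.abc import Callable

    Message = dict[str, str]
    Interceptor = Callable[[list[Message]], list[Message]]

    def apply_interceptors(messages: list[Message], hooks: list[Interceptor]) -> list[Message]:
        for hook in hooks:
            messages = hook(messages)
        return messages

    # usage: redact key material before sending
    def redact(msgs: list[Message]) -> list[Message]:
        return [{**m, "content": m["content"].replace("sk-", "sk-[redacted]")} for m in msgs]

    apply_interceptors([{"role": "user", "content": "key: sk-123"}], [redact])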
Greyson LaLonde
54710a8711 fix: hash callback args correctly to ensure caching works
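A sketch of the usual fix for this class of bug (assumed approach, not the actual diff): derive the cache key from a stable serialization of the arguments, since dicts and lists are unhashable and identity-based keys never repeat across calls.

    import hashlib
    import json

    def cache_key(tool_name: str, args: dict) -> str:
        payload = json.dumps({"tool": tool_name, "args": args}, sort_keys=True, default=str)
        return hashlib.sha256(payload.encode()).hexdigest()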
2025-11-05 07:19:09 -05:00
Lucas Gomide
5abf976373 fix: allow adding RAG source content from valid URLs (#3831)
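A hedged illustration of URL validation for RAG sources (the PR's own code is not shown here):

    from urllib.parse import urlparse

    def is_valid_url(source: str) -> bool:
        parsed = urlparse(source)
        return parsed.scheme in ("http", "https") and bool(parsed.netloc)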
2025-11-04 07:58:40 -05:00
Greyson LaLonde
329567153b fix: make plot node selection smoother
2025-11-03 07:49:31 -05:00
Greyson LaLonde
60332e0b19 feat: cache i18n prompts for efficient use 2025-11-03 07:39:05 -05:00
Lorenze Jay
40932af3fa feat: bump versions to 1.3.0 (#3820)
* feat: bump versions to 1.3.0

* chore: update crew and flow templates to use crewai[tools] version 1.3.0
2025-10-31 18:54:02 -07:00
Greyson LaLonde
e134e5305b Gl/feat/a2a refactor (#3793)
* feat: agent metaclass, refactor a2a to wrappers

* feat: a2a schemas and utils

* chore: move agent class, update imports

* refactor: organize imports to avoid circularity, add a2a to console

* feat: pass response_model through call chain

* feat: add standard openapi spec serialization to tools and structured output

* feat: a2a events

* chore: add a2a to pyproject

* docs: minimal base for learn docs

* fix: adjust a2a conversation flow, allow llm to decide exit until max_retries

* fix: inject agent skills into initial prompt

* fix: format agent card as json in prompt

* refactor: simplify A2A agent prompt formatting and improve skill display

* chore: wide cleanup

* chore: cleanup logic, add auth cache, use json for messages in prompt

* chore: update docs

* fix: doc snippets formatting

* feat: optimize A2A agent card fetching and improve error reporting

* chore: move imports to top of file

* chore: refactor hasattr check

* chore: add httpx-auth, update lockfile

* feat: create base public api

* chore: cleanup modules, add docstrings, types

* fix: exclude extra fields in prompt

* chore: update docs

* tests: update to correct import

* chore: lint for ruff, add missing import

* fix: tweak openai streaming logic for response model

* tests: add reimport for test

* tests: add reimport for test

* fix: don't set a2a attr if not set

* fix: don't set a2a attr if not set

* chore: update cassettes

* tests: fix tests

* fix: use instructor and dont pass response_format for litellm

* chore: consolidate event listeners, add typing

* fix: address race condition in test, update cassettes

* tests: add correct mocks, rerun cassette for json

* tests: update cassette

* chore: regenerate cassette after new run

* fix: make token manager access-safe

* fix: make token manager access-safe

* merge

* chore: update test and cassette for output pydantic

* fix: tweak to disallow deadlock

* chore: linter

* fix: adjust event ordering for threading

* fix: use conditional for batch check

* tests: tweak for emission

* tests: simplify api + event check

* fix: ensure non-function calling llms see json formatted string

* tests: tweak message comparison

* fix: use internal instructor for litellm structure responses

---------

Co-authored-by: Mike Plachta <mike@crewai.com>
2025-10-31 18:42:03 -07:00
Greyson LaLonde
e229ef4e19 refactor: improve flow handling, typing, and logging; update UI and tests
fix: refine nested flow conditionals and ensure router methods and routes are fully parsed
fix: improve docstrings, typing, and logging coverage across all events
feat: update flow.plot feature with new UI enhancements
chore: apply Ruff linting, reorganize imports, and remove deprecated utilities/files
chore: split constants and utils, clean JS comments, and add typing for linters
tests: strengthen test coverage for flow execution paths and router logic
2025-10-31 21:15:06 -04:00
Greyson LaLonde
2e9eb8c32d fix: refactor use_stop_words to property, add check for stop words
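Sketch of the refactor shape (class context assumed): a computed property instead of a stored flag, with an explicit check that stop words are actually configured.

    class StopWordsMixin:
        supports_stop_words: bool = True
        stop: list[str] | None = None

        @property
        def use_stop_words(self) -> bool:
            # Only use stop words when the model supports them AND some are set.
            return self.supports_stop_words and bool(self.stop)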
2025-10-29 19:14:01 +01:00
Lucas Gomide
4ebb5114ed Fix Firecrawl tools & adding tests (#3810)
* fix: fix Firecrawl Scrape tool

* fix: fix Firecrawl Search tool

* fix: fix Firecrawl Website tool

* tests: adding tests for Firecrawl
2025-10-29 13:37:57 -04:00
Daniel Barreto
70b083945f Enhance QdrantVectorSearchTool (#3806)
2025-10-28 13:42:40 -04:00
Tony Kipkemboi
410db1ff39 docs: migrate embedder→embedding_model and require vectordb across tool docs; add provider examples (en/ko/pt-BR) (#3804)
* docs(tools): migrate embedder->embedding_model, require vectordb; add Chroma/Qdrant examples across en/ko/pt-BR PDF/TXT/XML/MDX/DOCX/CSV/Directory docs

* docs(observability): apply latest Datadog tweaks in ko and pt-BR
2025-10-27 13:29:21 -04:00
Lorenze Jay
5d6b4c922b feat: bump versions to 1.2.1 (#3800)
* feat: bump versions to 1.2.1

* updated templates too
2025-10-27 09:12:04 -07:00
Lucas Gomide
b07c0fc45c docs: describe mandatory env-var to call Platform tools for each integration (#3803) 2025-10-27 10:01:41 -04:00
Sam Brenner
97853199c7 Add Datadog Integration Documentation (#3642)
* add datadog llm observability integration guide

* spacing fix

* wording changes

* alphabetize docs listing

* Update docs/en/observability/datadog.mdx

Co-authored-by: Barry Eom <31739208+barieom@users.noreply.github.com>

* add translations

* fix korean code block

---------

Co-authored-by: Barry Eom <31739208+barieom@users.noreply.github.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-27 09:48:38 -04:00
Lorenze Jay
494ed7e671 liteagent supports apps and mcps (#3794)
* liteagent supports apps and mcps

* generated cassettes for these
2025-10-24 18:42:08 -07:00
Lorenze Jay
a83c57a2f2 feat: bump versions to 1.2.0 (#3787)
* feat: bump versions to 1.2.0

* also include projects
2025-10-23 18:04:34 -07:00
Lorenze Jay
08e15ab267 fix: update default LLM model and improve error logging in LLM utilities (#3785)
* fix: update default LLM model and improve error logging in LLM utilities

* Updated the default LLM model from "gpt-4o-mini" to "gpt-4.1-mini" for better performance.
* Enhanced error logging in the LLM utilities to use logger.error instead of logger.debug, ensuring that errors are properly reported and raised.
* Added tests to verify behavior when OpenAI API key is missing and when Anthropic dependency is not available, improving robustness and error handling in LLM creation.

* fix: update test for default LLM model usage

* Refactored the test_create_llm_with_none_uses_default_model to use the imported DEFAULT_LLM_MODEL constant instead of a hardcoded string.
* Ensured that the test correctly asserts the model used is the current default, improving maintainability and consistency across tests.

* change default model to gpt-4.1-mini

* change default model to use default
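The pattern the test refactor points at, sketched with assumed names apart from DEFAULT_LLM_MODEL, which the message itself references:

    DEFAULT_LLM_MODEL = "gpt-4.1-mini"

    def resolve_model(model: str | None = None) -> str:
        # Fall back to the shared constant instead of a hardcoded string,
        # so tests and callers stay in sync when the default changes.
        return model or DEFAULT_LLM_MODEL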
2025-10-23 17:54:11 -07:00
Greyson LaLonde
9728388ea7 fix: change flow viz del dir; method inspection
* chore: update flow viz deletion dir, add typing
* tests: add flow viz tests to ensure lib dir is not deleted
2025-10-22 19:32:38 -04:00
Greyson LaLonde
4371cf5690 chore: remove aisuite
Little usage + blocking some features
2025-10-21 23:18:06 -04:00
Lorenze Jay
d28daa26cd feat: bump versions to 1.1.0 (#3770)
* feat: bump versions to 1.1.0

* chore: bump template versions

---------

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
2025-10-21 15:52:44 -07:00
Lorenze Jay
a850813f2b feat: enhance InternalInstructor to support multiple LLM providers (#3767)
* feat: enhance InternalInstructor to support multiple LLM providers

- Updated InternalInstructor to conditionally create an instructor client based on the LLM provider.
- Introduced a new method _create_instructor_client to handle client creation using the modern from_provider pattern.
- Added functionality to extract the provider from the LLM model name.
- Implemented tests for InternalInstructor with various LLM providers including OpenAI, Anthropic, Gemini, and Azure, ensuring robust integration and error handling.

This update improves flexibility and extensibility for different LLM integrations.

* fix test
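The "modern from_provider pattern" named above, in instructor's public API (model strings are illustrative; the provider-extraction detail here is an assumption):

    import instructor

    def create_instructor_client(model: str):
        # from_provider expects a "provider/model" string, e.g.
        # "anthropic/claude-3-5-haiku-latest"; treat bare names as OpenAI.
        qualified = model if "/" in model else f"openai/{model}"
        return instructor.from_provider(qualified)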
2025-10-21 15:24:59 -07:00
Cameron Warren
5944a39629 fix: correct broken integration documentation links
Fix navigation paths for two integration tool cards that were redirecting to the
introduction page instead of their intended documentation pages.

Fixes #3516

Co-authored-by: Cwarre33 <cwarre33@charlotte.edu>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-21 18:12:08 -04:00
Greyson LaLonde
c594859ed0 feat: mypy plugin base
* feat: base mypy plugin with CrewBase

* fix: add crew method to protocol
2025-10-21 17:36:08 -04:00
Daniel Barreto
2ee27efca7 feat: improvements on QdrantVectorSearchTool
* Implement improvements on QdrantVectorSearchTool

- Allow search filters to be set at the constructor level
- Fix issue that prevented multiple records from being returned

* Implement improvements on QdrantVectorSearchTool

- Allow search filters to be set at the constructor level
- Fix issue that prevented multiple records from being returned

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-21 16:50:08 -04:00
Greyson LaLonde
f6e13eb890 chore: update codeql config paths to new folders
* chore: update codeql config paths to new folders

* tests: use threading.Condition for event check

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-10-21 14:43:25 -04:00
Lorenze Jay
e7b3ce27ca docs: update LLM integration details and examples
* docs: update LLM integration details and examples

- Changed references from LiteLLM to native SDKs for LLM providers.
- Enhanced OpenAI and AWS Bedrock sections with new usage examples and advanced configuration options.
- Added structured output examples and supported environment variables for better clarity.
- Improved documentation on additional parameters and features for LLM configurations.

* drop this example - should use structured output from task instead

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-21 14:39:50 -04:00
Greyson LaLonde
dba27cf8b5 fix: fix double trace call; add types
* fix: fix double trace call; add types

* tests: skip long running uv install test, refactor in future

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-10-21 14:15:39 -04:00
Greyson LaLonde
6469f224f6 chore: improve CrewBase typing 2025-10-21 13:58:35 -04:00
Greyson LaLonde
f3a63be215 tests: cassettes, threading for flow tests 2025-10-21 13:48:21 -04:00
Greyson LaLonde
01d8c189f0 fix: pin template versions to latest
2025-10-21 10:56:41 -04:00
Lorenze Jay
cc83c1ead5 feat: bump versions to 1.0.0
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-20 17:34:38 -04:00
Greyson LaLonde
7578901f6d chore: use main for devtools branch 2025-10-20 17:29:09 -04:00
Lorenze Jay
d1343b96ed Release/v1.0.0 (#3618)
* feat: add `apps` & `actions` attributes to Agent (#3504)

* feat: add app attributes to Agent

* feat: add actions attribute to Agent

* chore: resolve linter issues

* refactor: merge the apps and actions parameters into a single one

* fix: remove unnecessary print

* feat: logging error when CrewaiPlatformTools fails

* chore: export CrewaiPlatformTools directly from crewai_tools

* style: resolve linter issues

* test: fix broken tests

* style: solve linter issues

* fix: fix broken test

* feat: monorepo restructure and test/ci updates

- Add crewai workspace member
- Fix vcr cassette paths and restore test dirs
- Resolve ci failures and update linter/pytest rules

* chore: update python version to 3.13 and package metadata

* feat: add crewai-tools workspace and fix tests/dependencies

* feat: add crewai-tools workspace structure

* Squashed 'temp-crewai-tools/' content from commit 9bae5633

git-subtree-dir: temp-crewai-tools
git-subtree-split: 9bae56339096cb70f03873e600192bd2cd207ac9

* feat: configure crewai-tools workspace package with dependencies

* fix: apply ruff auto-formatting to crewai-tools code

* chore: update lockfile

* fix: don't allow tool tests yet

* fix: comment out extra pytest flags for now

* fix: remove conflicting conftest.py from crewai-tools tests

* fix: resolve dependency conflicts and test issues

- Pin vcrpy to 7.0.0 to fix pytest-recording compatibility
- Comment out types-requests to resolve urllib3 conflict
- Update requests requirement in crewai-tools to >=2.32.0

* chore: update CI workflows and docs for monorepo structure

* chore: update CI workflows and docs for monorepo structure

* fix: actions syntax

* chore: ci publish and pin versions

* fix: add permission to action

* chore: bump version to 1.0.0a1 across all packages

- Updated version to 1.0.0a1 in pyproject.toml for crewai and crewai-tools
- Adjusted version in __init__.py files for consistency

* WIP: v1 docs (#3626)

(cherry picked from commit d46e20fa09bcd2f5916282f5553ddeb7183bd92c)

* docs: parity for all translations

* docs: full name of acronym AMP

* docs: fix lingering unused code

* docs: expand contextual options in docs.json

* docs: add contextual action to request feature on GitHub (#3635)

* chore: apply linting fixes to crewai-tools

* feat: add required env var validation for brightdata

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* fix: handle properly anyOf oneOf allOf schema's props

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* feat: bump version to 1.0.0a2

* Lorenze/native inference sdks (#3619)

* ruff linted

* using native sdks with litellm fallback

* drop exa

* drop print on completion

* Refactor LLM and utility functions for type consistency

- Updated `max_tokens` parameter in `LLM` class to accept `float` in addition to `int`.
- Modified `create_llm` function to ensure consistent type hints and return types, now returning `LLM | BaseLLM | None`.
- Adjusted type hints for various parameters in `create_llm` and `_llm_via_environment_or_fallback` functions for improved clarity and type safety.
- Enhanced test cases to reflect changes in type handling and ensure proper instantiation of LLM instances.

* fix agent_tests

* fix litellm tests and usagemetrics fix

* drop print

* Refactor LLM event handling and improve test coverage

- Removed commented-out event emission for LLM call failures in `llm.py`.
- Added `from_agent` parameter to `CrewAgentExecutor` for better context in LLM responses.
- Enhanced test for LLM call failure to simulate OpenAI API failure and updated assertions for clarity.
- Updated agent and task ID assertions in tests to ensure they are consistently treated as strings.

* fix test_converter

* fixed tests/agents/test_agent.py

* Refactor LLM context length exception handling and improve provider integration

- Renamed `LLMContextLengthExceededException` to `LLMContextLengthExceededExceptionError` for clarity and consistency.
- Updated LLM class to pass the provider parameter correctly during initialization.
- Enhanced error handling in various LLM provider implementations to raise the new exception type.
- Adjusted tests to reflect the updated exception name and ensure proper error handling in context length scenarios.

* Enhance LLM context window handling across providers

- Introduced CONTEXT_WINDOW_USAGE_RATIO to adjust context window sizes dynamically for Anthropic, Azure, Gemini, and OpenAI LLMs.
- Added validation for context window sizes in Azure and Gemini providers to ensure they fall within acceptable limits.
- Updated context window size calculations to use the new ratio, improving consistency and adaptability across different models.
- Removed hardcoded context window sizes in favor of ratio-based calculations for better flexibility.
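The ratio-based sizing these bullets describe, reduced to a sketch (the constant's real value and the table entries are assumptions):

    CONTEXT_WINDOW_USAGE_RATIO = 0.75  # assumed value

    CONTEXT_WINDOW_SIZES = {"gpt-4.1-mini": 1_047_576}  # illustrative entry

    def usable_context_window(model: str, default_size: int = 128_000) -> int:
        # Reserve headroom by using only a fraction of the advertised window.
        return int(CONTEXT_WINDOW_SIZES.get(model, default_size) * CONTEXT_WINDOW_USAGE_RATIO)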

* fix test agent again

* fix test agent

* feat: add native LLM providers for Anthropic, Azure, and Gemini

- Introduced new completion implementations for Anthropic, Azure, and Gemini, integrating their respective SDKs.
- Added utility functions for tool validation and extraction to support function calling across LLM providers.
- Enhanced context window management and token usage extraction for each provider.
- Created a common utility module for shared functionality among LLM providers.

* chore: update dependencies and improve context management

- Removed direct dependency on `litellm` from the main dependencies and added it under extras for better modularity.
- Updated the `litellm` dependency specification to allow for greater flexibility in versioning.
- Refactored context length exception handling across various LLM providers to use a consistent error class.
- Enhanced platform-specific dependency markers for NVIDIA packages to ensure compatibility across different systems.

* refactor(tests): update LLM instantiation to include is_litellm flag in test cases

- Modified multiple test cases in test_llm.py to set the is_litellm parameter to True when instantiating the LLM class.
- This change ensures that the tests are aligned with the latest LLM configuration requirements and improves consistency across test scenarios.
- Adjusted relevant assertions and comments to reflect the updated LLM behavior.

* linter

* linted

* revert constants

* fix(tests): correct type hint in expected model description

- Updated the expected description in the test_generate_model_description_dict_field function to use 'Dict' instead of 'dict' for consistency with type hinting conventions.
- This change ensures that the test accurately reflects the expected output format for model descriptions.

* refactor(llm): enhance LLM instantiation and error handling

- Updated the LLM class to include validation for the model parameter, ensuring it is a non-empty string.
- Improved error handling by logging warnings when the native SDK fails, allowing for a fallback to LiteLLM.
- Adjusted the instantiation of LLM in test cases to consistently include the is_litellm flag, aligning with recent changes in LLM configuration.
- Modified relevant tests to reflect these updates, ensuring better coverage and accuracy in testing scenarios.

* fixed test

* refactor(llm): enhance token usage tracking and add copy methods

- Updated the LLM class to track token usage and log callbacks in streaming mode, improving monitoring capabilities.
- Introduced shallow and deep copy methods for the LLM instance, allowing for better management of LLM configurations and parameters.
- Adjusted test cases to instantiate LLM with the is_litellm flag, ensuring alignment with recent changes in LLM configuration.

* refactor(tests): reorganize imports and enhance error messages in test cases

- Cleaned up import statements in test_crew.py for better organization and readability.
- Enhanced error messages in test cases to use `re.escape` for improved regex matching, ensuring more robust error handling.
- Adjusted comments for clarity and consistency across test scenarios.
- Ensured that all necessary modules are imported correctly to avoid potential runtime issues.

* feat: add base devtooling

* fix: ensure dep refs are updated for devtools

* fix: allow pre-release

* feat: allow release after tag

* feat: bump versions to 1.0.0a3 

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>

* fix: match tag and release title, ignore devtools build for pypi

* fix: allow failed pypi publish

* feat: introduce trigger listing and execution commands for local development (#3643)

* chore: exclude tests from ruff linting

* chore: exclude tests from GitHub Actions linter

* fix: replace print statements with logger in agent and memory handling

* chore: add noqa for intentional print in printer utility

* fix: resolve linting errors across codebase

* feat: update docs with new approach to consume Platform Actions (#3675)

* fix: remove duplicate line and add explicit env var

* feat: bump versions to 1.0.0a4 (#3686)

* Update triggers docs (#3678)

* docs: introduce triggers list & triggers run command

* docs: add KO triggers docs

* docs: ensure CREWAI_PLATFORM_INTEGRATION_TOKEN is mentioned on docs (#3687)

* Lorenze/bedrock llm (#3693)

* feat: add AWS Bedrock support and update dependencies

- Introduced BedrockCompletion class for AWS Bedrock integration in LLM.
- Added boto3 as a new dependency in both pyproject.toml and uv.lock.
- Updated LLM class to support Bedrock provider.
- Created new files for Bedrock provider implementation.

* using converse api

* converse

* linted

* refactor: update BedrockCompletion class to improve parameter handling

- Changed max_tokens from a fixed integer to an optional integer.
- Simplified model ID assignment by removing the inference profile mapping method.
- Cleaned up comments and unnecessary code related to tool specifications and model-specific parameters.

* feat: improve event bus thread safety and async support

Add thread-safe, async-compatible event bus with read–write locking and
handler dependency ordering. Remove blinker dependency and implement
direct dispatch. Improve type safety, error handling, and deterministic
event synchronization.

Refactor tests to auto-wait for async handlers, ensure clean teardown,
and add comprehensive concurrency coverage. Replace thread-local state
in AgentEvaluator with instance-based locking for correct cross-thread
access. Enhance tracing reliability and event finalization.
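A minimal read–write lock of the kind this message names (illustrative, not the crewai implementation): any number of concurrent readers, writers exclusive.

    import threading

    class RWLock:
        def __init__(self) -> None:
            self._cond = threading.Condition()
            self._readers = 0

        def acquire_read(self) -> None:
            with self._cond:
                self._readers += 1

        def release_read(self) -> None:
            with self._cond:
                self._readers -= 1
                if self._readers == 0:
                    self._cond.notify_all()

        def acquire_write(self) -> None:
            self._cond.acquire()          # blocks new readers and other writers
            while self._readers > 0:
                self._cond.wait()         # wait for in-flight readers to drain

        def release_write(self) -> None:
            self._cond.release()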

* feat: enhance OpenAICompletion class with additional client parameters (#3701)

* feat: enhance OpenAICompletion class with additional client parameters

- Added support for default_headers, default_query, and client_params in the OpenAICompletion class.
- Refactored client initialization to use a dedicated method for client parameter retrieval.
- Introduced new test cases to validate the correct usage of OpenAICompletion with various parameters.

* fix: correct test case for unsupported OpenAI model

- Updated the test_openai.py to ensure that the LLM instance is created before calling the method, maintaining proper error handling for unsupported models.
- This change ensures that the test accurately checks for the NotFoundError when an invalid model is specified.

* fix: enhance error handling in OpenAICompletion class

- Added specific exception handling for NotFoundError and APIConnectionError in the OpenAICompletion class to provide clearer error messages and improve logging.
- Updated the test case for unsupported models to ensure it raises a ValueError with the appropriate message when a non-existent model is specified.
- This change improves the robustness of the OpenAI API integration and enhances the clarity of error reporting.

* fix: improve test for unsupported OpenAI model handling

- Refactored the test case in test_openai.py to create the LLM instance after mocking the OpenAI client, ensuring proper error handling for unsupported models.
- This change enhances the clarity of the test by accurately checking for ValueError when a non-existent model is specified, aligning with recent improvements in error handling for the OpenAICompletion class.

* feat: bump versions to 1.0.0b1 (#3706)

* Lorenze/tools drop litellm (#3710)

* completely drop litellm and correctly pass config for qdrant

* feat: add support for additional embedding models in EmbeddingService

- Expanded the list of supported embedding models to include Google Vertex, Hugging Face, Jina, Ollama, OpenAI, Roboflow, Watson X, custom embeddings, Sentence Transformers, Text2Vec, OpenClip, and Instructor.
- This enhancement improves the versatility of the EmbeddingService by allowing integration with a wider range of embedding providers.

* fix: update collection parameter handling in CrewAIRagAdapter

- Changed the condition for setting vectors_config in the CrewAIRagAdapter to check for QdrantConfig instance instead of using hasattr. This improves type safety and ensures proper configuration handling for Qdrant integration.

* moved stagehand as optional dep (#3712)

* feat: bump versions to 1.0.0b2 (#3713)

* feat: enhance AnthropicCompletion class with additional client parame… (#3707)

* feat: enhance AnthropicCompletion class with additional client parameters and tool handling

- Added support for client_params in the AnthropicCompletion class to allow for additional client configuration.
- Refactored client initialization to use a dedicated method for retrieving client parameters.
- Implemented a new method to handle tool use conversation flow, ensuring proper execution and response handling.
- Introduced comprehensive test cases to validate the functionality of the AnthropicCompletion class, including tool use scenarios and parameter handling.

* drop print statements

* test: add fixture to mock ANTHROPIC_API_KEY for tests

- Introduced a pytest fixture to automatically mock the ANTHROPIC_API_KEY environment variable for all tests in the test_anthropic.py module.
- This change ensures that tests can run without requiring a real API key, improving test isolation and reliability.

* refactor: streamline streaming message handling in AnthropicCompletion class

- Removed the 'stream' parameter from the API call as it is set internally by the SDK.
- Simplified the handling of tool use events and response construction by extracting token usage from the final message.
- Enhanced the flow for managing tool use conversation, ensuring proper integration with the streaming API response.

* fix streaming here too

* fix: improve error handling in tool conversion for AnthropicCompletion class

- Enhanced exception handling during tool conversion by catching KeyError and ValueError.
- Added logging for conversion errors to aid in debugging and maintain robustness in tool integration.

* feat: enhance GeminiCompletion class with client parameter support (#3717)

* feat: enhance GeminiCompletion class with client parameter support

- Added support for client_params in the GeminiCompletion class to allow for additional client configuration.
- Refactored client initialization into a dedicated method for improved parameter handling.
- Introduced a new method to retrieve client parameters, ensuring compatibility with the base class.
- Enhanced error handling during client initialization to provide clearer messages for missing configuration.
- Updated documentation to reflect the changes in client parameter usage.

* add optional dependencies

* refactor: update test fixture to mock GOOGLE_API_KEY

- Renamed the fixture from `mock_anthropic_api_key` to `mock_google_api_key` to reflect the change in the environment variable being mocked.
- This update ensures that all tests in the module can run with a mocked GOOGLE_API_KEY, improving test isolation and reliability.

* fix tests

* feat: enhance BedrockCompletion class with advanced features

* feat: enhance BedrockCompletion class with advanced features and error handling

- Added support for guardrail configuration, additional model request fields, and custom response field paths in the BedrockCompletion class.
- Improved error handling for AWS exceptions and added token usage tracking with stop reason logging.
- Enhanced streaming response handling with comprehensive event management, including tool use and content block processing.
- Updated documentation to reflect new features and initialization parameters.
- Introduced a new test suite for BedrockCompletion to validate functionality and ensure robust integration with AWS Bedrock APIs.

* chore: add boto typing

* fix: use typing_extensions.Required for Python 3.10 compatibility

---------

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* feat: azure native tests

* feat: add Azure AI Inference support and related tests

- Introduced the `azure-ai-inference` package with version `1.0.0b9` and its dependencies in `uv.lock` and `pyproject.toml`.
- Added new test files for Azure LLM functionality, including tests for Azure completion and tool handling.
- Implemented comprehensive test cases to validate Azure-specific behavior and integration with the CrewAI framework.
- Enhanced the testing framework to mock Azure credentials and ensure proper isolation during tests.

* feat: enhance AzureCompletion class with Azure OpenAI support

- Added support for the Azure OpenAI endpoint in the AzureCompletion class, allowing for flexible endpoint configurations.
- Implemented endpoint validation and correction to ensure proper URL formats for Azure OpenAI deployments.
- Enhanced error handling to provide clearer messages for common HTTP errors, including authentication and rate limit issues.
- Updated tests to validate the new endpoint handling and error messaging, ensuring robust integration with Azure AI Inference.
- Refactored parameter preparation to conditionally include the model parameter based on the endpoint type.

* refactor: convert project module to metaclass with full typing

* Lorenze/OpenAI base url backwards support (#3723)

* fix: enhance OpenAICompletion class base URL handling

- Updated the base URL assignment in the OpenAICompletion class to prioritize the new `api_base` attribute and fallback to the environment variable `OPENAI_BASE_URL` if both are not set.
- Added `api_base` to the list of parameters in the OpenAICompletion class to ensure proper configuration and flexibility in API endpoint management.

* feat: enhance OpenAICompletion class with api_base support

- Added the `api_base` parameter to the OpenAICompletion class to allow for flexible API endpoint configuration.
- Updated the `_get_client_params` method to prioritize `base_url` over `api_base`, ensuring correct URL handling.
- Introduced comprehensive tests to validate the behavior of `api_base` and `base_url` in various scenarios, including environment variable fallback.
- Enhanced test coverage for client parameter retrieval, ensuring robust integration with the OpenAI API.
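The resolution order described in these two entries, as one line (attribute and env-var names from the message):

    import os

    def resolve_base_url(base_url: str | None, api_base: str | None) -> str | None:
        # Explicit base_url wins, then api_base, then OPENAI_BASE_URL, else None.
        return base_url or api_base or os.environ.get("OPENAI_BASE_URL") or None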

* fix: improve OpenAICompletion class configuration handling

- Added a debug print statement to log the client configuration parameters during initialization for better traceability.
- Updated the base URL assignment logic to ensure it defaults to None if no valid base URL is provided, enhancing robustness in API endpoint configuration.
- Refined the retrieval of the `api_base` environment variable to streamline the configuration process.

* drop print

* feat: improvements on import native sdk support (#3725)

* feat: add support for Anthropic provider and enhance logging

- Introduced the `anthropic` package with version `0.69.0` in `pyproject.toml` and `uv.lock`, allowing for integration with the Anthropic API.
- Updated logging in the LLM class to provide clearer error messages when importing native providers, enhancing debugging capabilities.
- Improved error handling in the AnthropicCompletion class to guide users on installation via the updated error message format.
- Refactored import error handling in other provider classes to maintain consistency in error messaging and installation instructions.

* feat: enhance LLM support with Bedrock provider and update dependencies

- Added support for the `bedrock` provider in the LLM class, allowing integration with AWS Bedrock APIs.
- Updated `uv.lock` to replace `boto3` with `bedrock` in the dependencies, reflecting the new provider structure.
- Introduced `SUPPORTED_NATIVE_PROVIDERS` to include `bedrock` and ensure proper error handling when instantiating native providers.
- Enhanced error handling in the LLM class to raise informative errors when native provider instantiation fails.
- Added tests to validate the behavior of the new Bedrock provider and ensure fallback mechanisms work correctly for unsupported providers.

* test: update native provider fallback tests to expect ImportError

* adjust the test with the expected behavior - raising ImportError

* this is expecting the litellm format, all gemini native tests are in test_google.py

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>

* fix: remove stdout prints, improve test determinism, and update trace handling

Removed `print` statements from the `LLMStreamChunkEvent` handler to prevent
LLM response chunks from being written directly to stdout. The listener now
only tracks chunks internally.

Fixes #3715

Added explicit return statements for trace-related tests.

Updated cassette for `test_failed_evaluation` to reflect new behavior where
an empty trace dict is used instead of returning early.

Ensured deterministic cleanup order in test fixtures by making
`clear_event_bus_handlers` depend on `setup_test_environment`. This guarantees
event bus shutdown and file handle cleanup occur before temporary directory
deletion, resolving intermittent “Directory not empty” errors in CI.

* chore: remove lib/crewai exclusion from pre-commit hooks

* feat: enhance task guardrail functionality and validation

* feat: enhance task guardrail functionality and validation

- Introduced support for multiple guardrails in the Task class, allowing for sequential processing of guardrails.
- Added a new `guardrails` field to the Task model to accept a list of callable guardrails or string descriptions.
- Implemented validation to ensure guardrails are processed correctly, including handling of retries and error messages.
- Enhanced the `_invoke_guardrail_function` method to manage guardrail execution and integrate with existing task output processing.
- Updated tests to cover various scenarios involving multiple guardrails, including success, failure, and retry mechanisms.

This update improves the flexibility and robustness of task execution by allowing for more complex validation scenarios.

* refactor: enhance guardrail type handling in Task model

- Updated the Task class to improve guardrail type definitions, introducing GuardrailType and GuardrailsType for better clarity and type safety.
- Simplified the validation logic for guardrails, ensuring that both single and multiple guardrails are processed correctly.
- Enhanced error messages for guardrail validation to provide clearer feedback when incorrect types are provided.
- This refactor improves the maintainability and robustness of task execution by standardizing guardrail handling.

* feat: implement per-guardrail retry tracking in Task model

- Introduced a new private attribute `_guardrail_retry_counts` to the Task class for tracking retry attempts on a per-guardrail basis.
- Updated the guardrail processing logic to utilize the new retry tracking, allowing for independent retry counts for each guardrail.
- Enhanced error handling to provide clearer feedback when guardrails fail validation after exceeding retry limits.
- Modified existing tests to validate the new retry tracking behavior, ensuring accurate assertions on guardrail retries.

This update improves the robustness and flexibility of task execution by allowing for more granular control over guardrail validation and retry mechanisms.
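Illustrative shape of per-guardrail retry tracking (names assumed, apart from the per-guardrail counter the message describes): each guardrail gets its own budget instead of sharing one.

    from collections import defaultdict

    class GuardrailRetryTracker:
        def __init__(self, max_retries: int) -> None:
            self.max_retries = max_retries
            self._retry_counts: defaultdict[int, int] = defaultdict(int)

        def should_retry(self, guardrail_index: int) -> bool:
            # Independent budget per guardrail, keyed by its position.
            if self._retry_counts[guardrail_index] >= self.max_retries:
                return False
            self._retry_counts[guardrail_index] += 1
            return True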

* chore: 1.0.0b3 bump (#3734)

* chore: full ruff and mypy

improved linting, pre-commit setup, and internal architecture. Configured Ruff to respect .gitignore, added stricter rules, and introduced a lock pre-commit hook with virtualenv activation. Fixed type shadowing in EXASearchTool using a type_ alias to avoid PEP 563 conflicts and resolved circular imports in agent executor and guardrail modules. Removed agent-ops attributes, deprecated watson alias, and dropped crewai-enterprise tools with corresponding test updates. Refactored cache and memoization for thread safety and cleaned up structured output adapters and related logic.

New MCP DSL (#3738)

* Adding MCP implementation

* New tests for MCP implementation

* fix tests

* update docs

* Revert "New tests for MCP implementation"

This reverts commit 0bbe6dee90.

* linter

* linter

* fix

* verify mcp package exists

* adjust docs to be clear only remote servers are supported

* reverted

* ensure args schema generated properly

* properly close out

---------

Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* feat: a2a experimental

experimental a2a support

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
Co-authored-by: Mike Plachta <mplachta@users.noreply.github.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2025-10-20 14:10:19 -07:00
Greyson LaLonde
42f2b4d551 fix: preserve nested condition structure in Flow decorators
Fixes nested boolean conditions being flattened in @listen, @start, and @router decorators. The or_() and and_() combinators now preserve their nested structure using a "conditions" key instead of flattening to a list. Added recursive evaluation logic to properly handle complex patterns like or_(and_(A, B), and_(C, D)).
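A runnable sketch of the recursive evaluation this describes (the dict layout with a "conditions" key comes from the message; everything else is assumed):

    def or_(*conditions):
        return {"type": "OR", "conditions": list(conditions)}

    def and_(*conditions):
        return {"type": "AND", "conditions": list(conditions)}

    def evaluate(condition, fired: set[str]) -> bool:
        if isinstance(condition, str):        # leaf: a listener/method name
            return condition in fired
        results = [evaluate(c, fired) for c in condition["conditions"]]
        return any(results) if condition["type"] == "OR" else all(results)

    # or_(and_("A", "B"), and_("C", "D")) fires once A and B have both fired:
    assert evaluate(or_(and_("A", "B"), and_("C", "D")), {"A", "B"})
    assert not evaluate(or_(and_("A", "B"), and_("C", "D")), {"A", "C"})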
2025-10-17 17:06:19 -04:00
Greyson LaLonde
0229390ad1 fix: add standard print parameters to Printer.print method
- Adds sep, end, file, and flush parameters to match Python's built-in print function signature.
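Sketch of the resulting signature (class shape assumed), mirroring builtins.print so Printer stays a drop-in replacement:

    import sys
    from typing import IO

    class Printer:
        def print(self, *values, sep: str = " ", end: str = "\n",
                  file: IO[str] | None = None, flush: bool = False) -> None:
            print(*values, sep=sep, end=end, file=file or sys.stdout, flush=flush)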
2025-10-17 15:27:22 -04:00
Vidit Ostwal
f0fb349ddf Fixing copy and adding NOT_SPECIFIED check in task.py (#3690)
* Fixing copy and adding NOT_SPECIFIED check:

* Fixed mypy issues

* Added test Cases

* added linting checks

* Removed the docs bot folder

* Fixed ruff checks

* Remove secret_folder from tracking

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-10-14 09:52:39 -07:00
João Moura
bf2e2a42da fix: don't error out if there is no input() available
- Specific to jupyter notebooks
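A typical guard for environments without usable stdin, sketched as an assumption:

    def safe_input(prompt: str) -> str | None:
        try:
            return input(prompt)
        except (EOFError, OSError):
            # e.g. notebook kernels or daemonized processes without a tty
            return None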
2025-10-13 22:36:19 -04:00
Lorenze Jay
814c962196 chore: update crewAI version to 0.203.1 in multiple templates (#3699)
- Bumped the `crewai` version in `__init__.py` to 0.203.1.
- Updated the dependency versions in the crew, flow, and tool templates' `pyproject.toml` files to reflect the new `crewai` version.
2025-10-13 11:46:22 -07:00
Heitor Carvalho
2ebb2e845f fix: add a leeway of 10s when decoding jwt (#3698) 2025-10-13 12:42:03 -03:00
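With PyJWT this is a one-parameter change; a hedged sketch of the call shape (key and algorithm are illustrative):

    import jwt

    def decode_token(token: str, key: str) -> dict:
        # leeway=10 tolerates up to 10s of clock skew on exp/nbf/iat claims.
        return jwt.decode(token, key, algorithms=["HS256"], leeway=10)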
Greyson LaLonde
7b550ebfe8 fix: inject tool repository credentials in crewai run command
2025-10-10 15:00:04 -04:00
Greyson LaLonde
29919c2d81 fix: revert bad cron sched
This reverts commit b71c88814f.
2025-10-09 13:52:25 -04:00
Greyson LaLonde
b71c88814f fix: correct cron schedule to run every 5 days at specific dates 2025-10-09 13:10:45 -04:00
Rip&Tear
cb8bcfe214 docs: update security policy for vulnerability reporting
- Revised the security policy to clarify the reporting process for vulnerabilities.
- Added detailed sections on scope, reporting requirements, and our commitment to addressing reported issues.
- Emphasized the importance of not disclosing vulnerabilities publicly and provided guidance on how to report them securely.
- Included a new section on coordinated disclosure and safe harbor provisions for ethical reporting.

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-09 00:57:57 -04:00
Lorenze Jay
13a514f8be chore: update crewAI and crewAI-tools dependencies to version 0.203.0 and 0.76.0 respectively (#3674)
- Updated the `crewai-tools` dependency in `pyproject.toml` and `uv.lock` to version 0.76.0.
- Updated the `crewai` version in `__init__.py` to 0.203.0.
- Updated the dependency versions in the crew, flow, and tool templates to reflect the new `crewai` version.
2025-10-08 14:34:51 -07:00
Lorenze Jay
316b1cea69 docs: add guide for capturing telemetry logs in CrewAI AMP (#3673)
- Introduced a new documentation page detailing how to capture telemetry logs from CrewAI AMP deployments.
- Updated the main documentation to include the new guide in the enterprise section.
- Added prerequisites and step-by-step instructions for configuring OTEL collector setup.
- Included an example image for OTEL log collection capture to Datadog.
2025-10-08 14:06:10 -07:00
Lorenze Jay
6f2e39c0dd feat: enhance knowledge and guardrail event handling in Agent class (#3672)
* feat: enhance knowledge event handling in Agent class

- Updated the Agent class to include task context in knowledge retrieval events.
- Emitted new events for knowledge retrieval and query processes, capturing task and agent details.
- Refactored knowledge event classes to inherit from a base class for better structure and maintainability.
- Added tracing for knowledge events in the TraceCollectionListener to improve observability.

This change improves the tracking and management of knowledge queries and retrievals, facilitating better debugging and performance monitoring.

* refactor: remove task_id from knowledge event emissions in Agent class

- Removed the task_id parameter from various knowledge event emissions in the Agent class to streamline event handling.
- This change simplifies the event structure and focuses on the essential context of knowledge retrieval and query processes.

This refactor enhances the clarity of knowledge events and aligns with the recent improvements in event handling.

* surface association for guardrail events

* fix: improve LLM selection logic in converter

- Updated the logic for selecting the LLM in the convert_with_instructions function to handle cases where the agent may not have a function_calling_llm attribute.
- This change ensures that the converter can still function correctly by falling back to the standard LLM if necessary, enhancing robustness and preventing potential errors.

This fix improves the reliability of the conversion process when working with different agent configurations.

* fix test

* fix: enforce valid LLM instance requirement in converter

- Updated the convert_with_instructions function to ensure that a valid LLM instance is provided by the agent.
- If neither function_calling_llm nor the standard llm is available, a ValueError is raised, enhancing error handling and robustness.
- Improved error messaging for conversion failures to provide clearer feedback on issues encountered during the conversion process.

This change strengthens the reliability of the conversion process by ensuring that agents are properly configured with a valid LLM.
2025-10-08 11:53:13 -07:00
Lucas Gomide
8d93361cb3 docs: add missing /resume files (#3661)
2025-10-07 12:21:27 -04:00
Lucas Gomide
54ec245d84 docs: clarify webhook URL parameter in HITL workflows (#3660) 2025-10-07 12:06:11 -04:00
Vidit Ostwal
f589ab9b80 chore: load json tool input before console output
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-07 10:18:28 -04:00
Greyson LaLonde
fadb59e0f0 chore: add scheduled cache rebuild to prevent expiration
2025-10-06 11:45:28 -04:00
Greyson LaLonde
1a60848425 chore: remove crewAI.excalidraw file 2025-10-06 11:03:55 -04:00
Greyson LaLonde
0135163040 chore: remove mkdocs cache directory
Remove obsolete .cache directory from mkdocs-material social plugin as the project no longer uses mkdocs for documentation.
2025-10-05 21:41:09 -04:00
Greyson LaLonde
dac5d6d664 fix: use system PATH for Docker binary instead of hardcoded path 2025-10-05 21:36:05 -04:00
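The standard-library way to do this, shown as an illustration:

    import shutil

    docker = shutil.which("docker")   # searches PATH instead of a hardcoded location
    if docker is None:
        raise RuntimeError("docker binary not found on PATH")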
Rip&Tear
f0f94f2540 fix: add CodeQL configuration to properly exclude template directories (#3641) 2025-10-06 08:21:51 +08:00
Tony Kipkemboi
bf9e0423f2 chore(docs): bring AMP doc refresh from release/v1.0.0 into main (#3637)
* WIP: v1 docs (#3626)

(cherry picked from commit d46e20fa09bcd2f5916282f5553ddeb7183bd92c)

* docs: parity for all translations

* docs: full name of acronym AMP

* docs: fix lingering unused code

* docs: expand contextual options in docs.json

* docs: add contextual action to request feature on GitHub

* chore: tidy docs formatting
2025-10-02 11:36:04 -04:00
Lorenze Jay
f47e0c82c4 Add tracing documentation and enable tracing feature in CrewAI
- Introduced a new documentation page for CrewAI Tracing, detailing setup and usage.
- Updated the main documentation to include the new tracing page in the observability section.
- Added example code snippets for enabling tracing in both Crews and Flows.
- Included instructions for global tracing configuration via environment variables.
- Added a new image for the CrewAI Tracing interface.
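The documented usage looks roughly like the sketch below; the `tracing` flag is named in these commits, while the agent/task setup and the environment variable name are illustrative assumptions:

```python
from crewai import Agent, Crew, Task

researcher = Agent(
    role="Researcher",
    goal="Summarize a topic",
    backstory="A concise analyst.",
)
summary = Task(
    description="Explain why execution tracing helps debugging.",
    expected_output="A short paragraph.",
    agent=researcher,
)

# Opt in to tracing for this crew via the constructor flag.
crew = Crew(agents=[researcher], tasks=[summary], tracing=True)

# Global opt-in via an environment variable (name assumed):
#   export CREWAI_TRACING_ENABLED=true
```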
2025-10-02 07:33:18 -04:00
Doug Guthrie
eabced321c Add braintrust docs (#3628)
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Add braintrust docs

* Add more things

* fix eval command

* Add missing crewai-tools import

* Allow for dynamic inputs
2025-10-01 14:38:22 -04:00
Greyson LaLonde
b77074e48e docs: add HITL webhook authentication examples
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-09-29 23:51:39 -04:00
Lorenze Jay
7d5cd4d3e2 chore: bump CrewAI version to 0.201.1 and update dependencies in project templates (#3605)
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Update version in __init__.py to 0.201.1
- Modify dependency versions in pyproject.toml for crew, flow, and tool templates to require CrewAI 0.201.1
2025-09-26 09:58:00 -07:00
Greyson LaLonde
73e932bfee fix: update embedding functions to inherit from chromadb callable 2025-09-26 12:25:19 -04:00
Greyson LaLonde
12fa7e2ff1 fix: rename watson to watsonx embedding provider and prefix env vars
- prefix provider env vars with embeddings_  
- rename watson → watsonx in providers  
- add deprecation warning and alias for legacy 'watson' key (to be removed in v1.0.0)
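A minimal sketch of the deprecation alias, assuming a simple name-resolution helper (the registry shape is not from the commit):

```python
import warnings

def resolve_embedding_provider(name: str) -> str:
    # Legacy 'watson' key still works but warns until v1.0.0.
    if name == "watson":
        warnings.warn(
            "'watson' is deprecated and will be removed in v1.0.0; "
            "use 'watsonx' instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return "watsonx"
    return name
```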
2025-09-26 10:57:18 -04:00
Greyson LaLonde
091d1267d8 fix: prefix embedding provider env vars with EMBEDDINGS_ 2025-09-26 10:50:45 -04:00
Lorenze Jay
b5b10a8cde chore: update version and dependencies to 0.201.0 (#3593)
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
Build uv cache / build-cache (3.10) (push) Has been cancelled
Build uv cache / build-cache (3.11) (push) Has been cancelled
Build uv cache / build-cache (3.12) (push) Has been cancelled
Build uv cache / build-cache (3.13) (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Bump CrewAI version to 0.201.0 in __init__.py
- Update dependency versions in pyproject.toml for crew, flow, and tool templates to require CrewAI 0.201.0
- Remove unnecessary blank line in pyproject.toml
2025-09-25 18:04:12 -07:00
Greyson LaLonde
2485ed93d6 feat: upgrade chromadb to v1.1.0, improve types
- update imports and include handling for chromadb v1.1.0  
- fix mypy and typing_compat issues (required, typeddict, voyageai)  
- refine embedderconfig typing and allow base provider instances  
- handle mem0 as special case for external memory storage  
- bump tools and clean up redundant deps
2025-09-25 20:48:37 -04:00
Greyson LaLonde
ce5ea9be6f feat: add custom embedding types and migrate providers
- introduce baseembeddingsprovider and helper for embedding functions  
- add core embedding types and migrate providers, factory, and storage modules  
- remove unused type aliases and fix pydantic schema error  
- update providers with env var support and related fixes
2025-09-25 18:28:39 -04:00
Greyson LaLonde
e070c1400c feat: update pydantic, add pydantic-settings, migrate to dependency-groups
Some checks failed
Build uv cache / build-cache (3.10) (push) Has been cancelled
Build uv cache / build-cache (3.11) (push) Has been cancelled
Build uv cache / build-cache (3.12) (push) Has been cancelled
Build uv cache / build-cache (3.13) (push) Has been cancelled
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Add pydantic-settings>=2.10.1 dependency for configuration management
- Update pydantic to 2.11.9 and python-dotenv to 1.1.1
- Migrate from deprecated tool.uv.dev-dependencies to dependency-groups.dev format
- Remove unnecessary dev dependencies: pillow, cairosvg
- Update all dev tooling to latest versions
- Remove duplicate python-dotenv from dev dependencies
2025-09-24 14:42:18 -04:00
Greyson LaLonde
6537e3737d fix: correct directory name in quickstart documentation 2025-09-24 11:41:33 -04:00
Greyson LaLonde
346faf229f feat: add pydantic-compatible import validation and deprecate old utilities 2025-09-24 11:36:02 -04:00
Lorenze Jay
a0b757a12c Lorenze/traces mark as failed (#3586)
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* mark trace batch as failed if it fails

* fix test
2025-09-23 22:02:27 -07:00
Greyson LaLonde
1dbe8aab52 fix: add batch_size support to prevent embedder token limit errors
- add batch_size field to baseragconfig (default=100)  
- update chromadb/qdrant clients and factories to use batch_size  
- extract and filter batch_size from embedder config in knowledgestorage  
- fix large csv files exceeding embedder token limits (#3574)  
- remove unneeded conditional for type  
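A minimal sketch of the batching behavior, assuming an illustrative client API (only the `batch_size` default of 100 comes from the commit):

```python
def add_documents(client, collection: str, documents: list[str],
                  batch_size: int = 100) -> None:
    # Upsert in fixed-size batches so large files (e.g. big CSVs)
    # never exceed the embedder's per-request token limit.
    for start in range(0, len(documents), batch_size):
        batch = documents[start:start + batch_size]
        client.upsert(collection=collection, documents=batch)
```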

Co-authored-by: Vini Brasil <vini@hey.com>
2025-09-24 00:05:43 -04:00
Greyson LaLonde
4ac65eb0a6 fix: support nested config format for embedder configuration
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
- support nested config format with embedderconfig typeddict  
- fix parsing for model/model_name compatibility  
- add validation, typing_extensions, and improved type hints  
- enhance embedding factory with env var injection and provider support  
- add tests for openai, azure, and all embedding providers  
- misc fixes: test file rename, updated mocking patterns
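The nested format described above looks roughly like this; the provider and model values are illustrative:

```python
embedder_config = {
    "provider": "openai",
    "config": {
        "model": "text-embedding-3-small",
        # "model_name" is also accepted per the model/model_name
        # compatibility fix noted above.
    },
}
```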
2025-09-23 11:57:46 -04:00
Greyson LaLonde
3e97393f58 chore: improve typing and consolidate utilities
- add type annotations across utility modules  
- refactor printer system, agent utils, and imports for consistency  
- remove unused modules, constants, and redundant patterns  
- improve runtime type checks, exception handling, and guardrail validation  
- standardize warning suppression and logging utilities  
- fix llm typing, threading/typing edge cases, and test behavior
2025-09-23 11:33:46 -04:00
Heitor Carvalho
34bed359a6 feat: add crewai uv wrapper for uv commands (#3581) 2025-09-23 10:55:15 -04:00
Tony Kipkemboi
feeed505bb docs(changelog): add releases 0.193.2, 0.193.1, 0.193.0, 0.186.1, 0.186.0 across en/ko/pt-BR (#3577)
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-09-22 16:19:55 -07:00
Greyson LaLonde
cb0efd05b4 chore: fix ruff linting issues in tools module
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
linting, args_schema default, and validator check
2025-09-22 13:13:23 -04:00
Greyson LaLonde
db5f565dea fix: apply ruff linting fixes to tasks module 2025-09-22 13:09:53 -04:00
Greyson LaLonde
58413b663a chore: fix ruff linting issues in rag module
linting, list embedding handling, and test update
2025-09-22 13:06:22 -04:00
Greyson LaLonde
37636f0dd7 chore: fix ruff linting and mypy issues in flow module 2025-09-22 13:03:06 -04:00
Greyson LaLonde
0e370593f1 chore: resolve all ruff and mypy issues in experimental module
resolve linting, typing, and import issues; update Okta test
2025-09-22 12:56:28 -04:00
Vini Brasil
aa8dc9d77f Add source to LLM Guardrail events (#3572)
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
This commit adds the source attribute to LLM Guardrail event calls to
identify the Lite Agent or Task that executed the guardrail.
2025-09-22 11:58:00 +09:00
Jonathan Hill
9c1096dbdc fix: Make 'ready' parameter optional in _create_reasoning_plan function (#3561)
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* fix: Make 'ready' parameter optional in _create_reasoning_plan function

This PR fixes Issue #3466 where the _create_reasoning_plan function was missing
the 'ready' parameter when called by the LLM. The fix makes the 'ready' parameter
optional with a default value of False, which allows the function to be called
with only the 'plan' argument.

Fixes #3466

* Change default value of 'ready' parameter to True
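The resulting signature, per the two changes above, is roughly (return shape assumed):

```python
def _create_reasoning_plan(plan: str, ready: bool = True) -> dict:
    # 'ready' is optional so the LLM may call with only 'plan'.
    return {"plan": plan, "ready": ready}
```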

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2025-09-20 22:57:18 -03:00
João Moura
47044450c0 Adding fallback to crew settings (#3562)
* Adding fallback to crew settings

* fix: resolve ruff and mypy issues in cli/config.py

---------

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
2025-09-20 22:54:36 -03:00
João Moura
0ee438c39d fix version (#3557) 2025-09-20 17:14:28 -03:00
Joao Moura
cbb9965bf7 preparing new version 2025-09-20 12:27:25 -07:00
João Moura
4951d30dd9 Fix issues with getting id (#3556)
* fix issues with getting id

* ignore linter

* fix: resolve ruff linting issues in tracing utils

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-09-20 15:29:25 -03:00
Greyson LaLonde
7426969736 chore: apply ruff linting fixes and type annotations to memory module
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-09-19 22:20:13 -04:00
Greyson LaLonde
d879be8b66 chore: fix ruff linting issues in agents module
fix(agents): linting, import paths, cache key alignment, and static method
2025-09-19 22:11:21 -04:00
Greyson LaLonde
24b84a4b68 chore: apply ruff linting fixes to crews module 2025-09-19 22:02:22 -04:00
Greyson LaLonde
8e571ea8a7 chore: fix ruff linting and mypy issues in knowledge module
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
Build uv cache / build-cache (3.10) (push) Has been cancelled
Build uv cache / build-cache (3.11) (push) Has been cancelled
Build uv cache / build-cache (3.12) (push) Has been cancelled
Build uv cache / build-cache (3.13) (push) Has been cancelled
2025-09-19 21:39:15 -04:00
Greyson LaLonde
2cfc4d37b8 chore: apply ruff linting fixes to events module
fix: apply ruff linting to events
2025-09-19 20:10:55 -04:00
Greyson LaLonde
f4abc41235 chore: apply ruff linting fixes to CLI module
fix: apply ruff fixes to CLI and update Okta provider test
2025-09-19 19:55:55 -04:00
Greyson LaLonde
de5d3c3ad1 chore: add pydantic.mypy plugin for better type checking 2025-09-19 19:23:33 -04:00
Lorenze Jay
c062826779 chore: update dependencies and versioning for CrewAI 0.193.0 (#3542)
* chore: update dependencies and versioning for CrewAI

- Bump `crewai-tools` dependency version from `0.71.0` to `0.73.0` in `pyproject.toml`.
- Update CrewAI version from `0.186.1` to `0.193.0` in `__init__.py`.
- Adjust dependency versions in CLI templates for crew, flow, and tool to reflect the new CrewAI version.

This update ensures compatibility with the latest features and improvements in CrewAI.

* remove embedchain mock

* fix: remove last embedchain mocks

* fix: remove langchain_openai from tests

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-09-19 16:01:55 -03:00
João Moura
9491fe8334 Adding Ability for user to get deeper observability (#3541)
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* feat(tracing): enhance first-time trace display and auto-open browser

* avoiding line breaking

* set tracing if user enables it

* linted

---------

Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
2025-09-18 21:47:09 -03:00
Greyson LaLonde
6f2ea013a7 docs: update RagTool references from EmbedChain to CrewAI native RAG (#3537)
* docs: update RagTool references from EmbedChain to CrewAI native RAG

* change ref to qdrant

* docs: update RAGTool to use Qdrant and add embedding_model example
2025-09-18 16:06:44 -07:00
Greyson LaLonde
39e8792ae5 fix: add l2 distance metric support for backward compatibility (#3540) 2025-09-18 18:36:33 -04:00
Lorenze Jay
2f682e1564 feat: update ChromaDB embedding function to use OpenAI API (#3538)
- Refactor the default embedding function to utilize OpenAI's embedding function with API key support.
- Import necessary OpenAI embedding function and configure it with the environment variable for the API key.
- Ensure compatibility with existing ChromaDB configuration model.
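A minimal sketch using ChromaDB's bundled OpenAI embedding function; the model choice is illustrative:

```python
import os
from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction

default_ef = OpenAIEmbeddingFunction(
    api_key=os.environ["OPENAI_API_KEY"],  # keyed from the environment
    model_name="text-embedding-3-small",
)
```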
2025-09-18 14:50:35 -07:00
Greyson LaLonde
d4aa676195 feat: add configurable search parameters for RAG, knowledge, and memory (#3531)
- Add limit and score_threshold to BaseRagConfig, propagate to clients  
- Update default search params in RAG storage, knowledge, and memory (limit=5, threshold=0.6)  
- Fix linting (ruff, mypy, PERF203) and refactor save logic  
- Update tests for new defaults and ChromaDB behavior
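A minimal sketch of the new defaults; where the fields live on `BaseRagConfig` is assumed:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SearchParams:
    limit: int = 5                # max results per query
    score_threshold: float = 0.6  # drop matches scoring below this
```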
2025-09-18 16:58:03 -04:00
Lorenze Jay
578fa8c2e4 Lorenze/ephemeral trace ask (#3530)
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
* feat(tracing): implement first-time trace handling and improve event management

- Added FirstTimeTraceHandler for managing first-time user trace collection and display.
- Enhanced TraceBatchManager to support ephemeral trace URLs and improved event buffering.
- Updated TraceCollectionListener to utilize the new FirstTimeTraceHandler.
- Refactored type annotations across multiple files for consistency and clarity.
- Improved error handling and logging for trace-related operations.
- Introduced utility functions for trace viewing prompts and first execution checks.

* brought back crew finalize batch events

* refactor(trace): move instance variables to __init__ in TraceBatchManager

- Refactored TraceBatchManager to initialize instance variables in the constructor instead of as class variables.
- Improved clarity and encapsulation of the class state.

* fix(tracing): improve error handling in user data loading and saving

- Enhanced error handling in _load_user_data and _save_user_data functions to log warnings for JSON decoding and file access issues.
- Updated documentation for trace usage to clarify the addition of tracing parameters in Crew and Flow initialization.
- Refined state management in Flow class to ensure proper handling of state IDs when persistence is enabled.

* add some tests

* fix test

* fix tests

* refactor(tracing): enhance user input handling for trace viewing

- Replaced signal-based timeout handling with threading for user input in the prompt_user_for_trace_viewing function (see the sketch after this list).
- Improved user experience by allowing a configurable timeout for viewing execution traces.
- Updated tests to mock threading behavior and verify timeout handling correctly.

* fix(tracing): improve machine ID retrieval with error handling

- Added error handling to the _get_machine_id function to log warnings when retrieving the machine ID fails.
- Ensured that the function continues to provide a stable, privacy-preserving machine fingerprint even in case of errors.

* refactor(flow): streamline state ID assignment in Flow class

- Replaced direct attribute assignment with setattr for improved flexibility in handling state IDs.
- Enhanced code readability by simplifying the logic for setting the state ID when persistence is enabled.
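A minimal sketch of the threading-based prompt timeout mentioned in this list (only the function name comes from the commit; the rest is illustrative):

```python
import threading

def prompt_user_for_trace_viewing(timeout: float = 30.0) -> bool:
    answer: list[str] = []

    def read_input() -> None:
        answer.append(input("Open the execution trace? [y/N]: "))

    thread = threading.Thread(target=read_input, daemon=True)
    thread.start()
    thread.join(timeout)  # wait up to `timeout` seconds for a reply
    return bool(answer) and answer[0].strip().lower() == "y"
```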
2025-09-18 10:17:34 -07:00
Rip&Tear
6f5af2b27c Update CodeQL workflow to ignore specific paths (#3534)
CodeQL, when configured through the GUI, does not allow for advanced configuration. This PR upgrades to an advanced file-based config, which allows us to exclude certain paths.
2025-09-18 23:26:15 +08:00
Greyson LaLonde
8ee3cf4874 test: fix flaky agent repeated tool usage test (#3533)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Make assertion resilient to race condition with max iterations in CI  
- Add investigation notes and TODOs for deterministic executor flow
2025-09-17 22:00:32 -04:00
Greyson LaLonde
f2d3fd0c0f fix(events): add missing event exports to __init__.py (#3532) 2025-09-17 21:50:27 -04:00
Greyson LaLonde
f28e78c5ba refactor: unify rag storage with instance-specific client support (#3455)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
Build uv cache / build-cache (3.10) (push) Has been cancelled
Build uv cache / build-cache (3.11) (push) Has been cancelled
Build uv cache / build-cache (3.12) (push) Has been cancelled
Build uv cache / build-cache (3.13) (push) Has been cancelled
- ignore line length errors globally
- migrate knowledge/memory and crew query_knowledge to `SearchResult`
- remove legacy chromadb utils; fix empty metadata handling
- restore openai as default embedding provider; support instance-specific clients
- update and fix tests for `SearchResult` migration and rag changes
2025-09-17 14:46:54 -04:00
Greyson LaLonde
81bd81e5f5 fix: handle model parameter in OpenAI adapter initialization (#3510)
Some checks failed
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-09-12 17:31:53 -04:00
Vidit Ostwal
1b00cc71ef Dropping messages from metadata in Mem0 Storage (#3390)
* Dropped messages from metadata and added user-assistant interaction directly

* Fixed test cases for this

* Fixed static type checking issue

* Changed logic to take latest user and assistant messages

* Added default value to be string

* Linting checks

* Removed duplication of tool calling

* Fixed Linting Changes

* Ruff check

* Removed console formatter file from commit

* Linting fixed

* Linting checks

* Ignoring missing imports error

* Added suggested changes

* Fixed import untyped error
2025-09-12 15:25:29 -04:00
Greyson LaLonde
45d0c9912c chore: add type annotations and docstrings to openai agent adapters (#3505) 2025-09-12 10:41:39 -04:00
Greyson LaLonde
1f1ab14b07 fix: resolve test duration cache issues in CI workflows (#3506)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
2025-09-12 08:38:47 -04:00
Lucas Gomide
1a70f1698e feat: add thread-safe platform context management (#3502)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Update Test Durations / update-durations (3.10) (push) Has been cancelled
Update Test Durations / update-durations (3.11) (push) Has been cancelled
Update Test Durations / update-durations (3.12) (push) Has been cancelled
Update Test Durations / update-durations (3.13) (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-09-11 17:32:51 -04:00
Greyson LaLonde
8883fb656b feat(tests): add duration caching for pytest-split
- Cache test durations for optimized splitting
2025-09-11 15:16:05 -04:00
Greyson LaLonde
79d65e55a1 chore: add type annotations and docstrings to langgraph adapters (#3503) 2025-09-11 13:06:44 -04:00
Lorenze Jay
dde76bfec5 chore: bump CrewAI version to 0.186.1 and update dependencies in CLI templates (#3499)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Build uv cache / build-cache (3.10) (push) Has been cancelled
Build uv cache / build-cache (3.11) (push) Has been cancelled
Build uv cache / build-cache (3.12) (push) Has been cancelled
Build uv cache / build-cache (3.13) (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Updated CrewAI version from 0.186.0 to 0.186.1 in `__init__.py`.
- Updated `crewai[tools]` dependency version in `pyproject.toml` for crew, flow, and tool templates to reflect the new CrewAI version.
2025-09-10 17:01:19 -07:00
Lorenze Jay
f554123af6 fix (#3498) 2025-09-10 16:55:25 -07:00
Lorenze Jay
4336e945b8 chore: update dependencies and version for CrewAI (#3497)
- Updated `crewai-tools` dependency from version 0.69.0 to 0.71.0 in `pyproject.toml`.
- Bumped CrewAI version from 0.177.0 to 0.186.0 in `__init__.py`.
- Updated dependency versions in CLI templates for crew, flow, and tool to reflect the new CrewAI version.
2025-09-10 16:03:58 -07:00
Lorenze Jay
75b916c85a Lorenze/fix tool call twice (#3495)
* test: add test to ensure tool is called only once during crew execution

- Introduced a new test case to validate that the counting_tool is executed exactly once during crew execution.
- Created a CountingTool class to track execution counts and log call history.
- Enhanced the test suite with a YAML cassette for consistent tool behavior verification.

* ensure tool function called only once

* refactor: simplify error handling in CrewStructuredTool

- Removed unnecessary try-except block around the tool function call to streamline execution flow.
- Ensured that the tool function is called directly, improving readability and maintainability.

* linted

* need to ignore for now as we can't infer the complex generic type within pydantic create_model_func

* fix tests
2025-09-10 15:20:21 -07:00
Greyson LaLonde
01be26ce2a chore: add build-cache, update jobs, remove redundant security check
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
- Build and cache uv dependencies; update type-checker, tests, and linter to use cache  
- Remove separate security-checker
- Add explicit workflow permissions for compliance  
- Remove pull_request trigger from build-cache workflow
2025-09-10 13:02:24 -04:00
Greyson LaLonde
c3ad5887ef chore: add type annotations to utilities module (#3484)
- Update to Python 3.10+ typing across LLM, callbacks, storage, and errors
- Complete typing updates for crew_chat and hitl
- Add stop attr to mock LLM, suppress test warnings
- Add type-ignore for aisuite import
2025-09-10 10:56:17 -04:00
Lucas Gomide
260b49c10a fix: support defining MCP connection timeout on CrewBase instance (#3465)
* fix: support defining MCP connection timeout on CrewBase instance

* fix: resolve linter issues

* chore: ignore specific rule N802 on CrewBase class

* fix: ignore untyped import
2025-09-10 09:58:46 -04:00
Greyson LaLonde
1dc4f2e897 chore: add typing and docstrings to base_token_process module (#3486)
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-09-10 09:23:39 -04:00
Greyson LaLonde
b126ab22dd chore: refactor telemetry module with utility functions and modern typing (#3485)
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-09-10 09:18:21 -04:00
Greyson LaLonde
079cb72f6e chore: update typing in types module to Python 3.10+ syntax (#3482)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-09-10 09:07:36 -04:00
Greyson LaLonde
83682d511f chore: modernize LLM interface typing and add constants (#3483)
* chore: update LLM interfaces to Python 3.10+ typing

* fix: add missing stop attribute to mock LLM and improve test infrastructure

* fix: correct type ignore comment for aisuite import
2025-09-10 08:30:49 -04:00
Samarth Rawat
6676d94ba1 Doc Fix: fixed number of memory types (#3288)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Update memory.mdx

* Update memory.mdx

---------

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-09-09 14:11:56 -04:00
Greyson LaLonde
d5126d159b chore: improve typing and docs in agents leaf files (#3461)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Add typing and Google-style docstrings to agents leaf files
- Add TODO notes
2025-09-08 11:57:34 -04:00
Greyson LaLonde
fa06aea8d5 chore: modernize security module typing (#3469)
- Disable E501, apply Ruff formatting
- Update typing (Self, BeforeValidator), remove dead code
- Convert Fingerprint to Pydantic dataclass and fix serialization/copy behavior
- Add TODO for dynamic namespace config
2025-09-08 11:52:59 -04:00
Greyson LaLonde
f936e0f69b chore: enhance typing and documentation in tasks module (#3467)
- Disable E501 line length linting rule
- Add Google-style docstrings to tasks leaf file
- Modernize typing and docs in task_output.py
- Improve typing and documentation in conditional_task.py
2025-09-08 11:42:23 -04:00
Greyson LaLonde
37c5e88d02 ci: configure pre-commit hooks and github actions to use uv run (#3479) 2025-09-08 11:30:28 -04:00
Kim
1a96ed7b00 fix: rebranding of Azure AI Studio (Azure OpenAI Studio) to Azure AI Foundry (#3424)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-09-05 20:42:05 -04:00
Tony Kipkemboi
1a1bb0ca3d docs: Docs updates (#3459)
* docs(cli): document device-code login and config reset guidance; renumber sections

* docs(cli): fix duplicate numbering (renumber Login/API Keys/Configuration sections)

* docs: Fix webhook documentation to include meta dict in all webhook payloads

- Add note explaining that meta objects from kickoff requests are included in all webhook payloads
- Update webhook examples to show proper payload structure including meta field
- Fix webhook examples to match actual API implementation
- Apply changes to English, Korean, and Portuguese documentation

Resolves the documentation gap where meta dict passing to webhooks was not documented despite being implemented in the API.

* WIP: CrewAI docs theme, changelog, GEO, localization

* docs(cli): fix merge markers; ensure mode: "wide"; convert ASCII tables to Markdown (en/pt-BR/ko)

* docs: add group icons across locales; split Automation/Integrations; update tools overviews and links
2025-09-05 17:40:11 -04:00
Mike Plachta
99b79ab20d docs: move Bedrock tool docs to integration folder and add CrewAI automation tool docs (#3403)
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-09-05 15:12:35 -04:00
Mike Plachta
80974fec6c docs: expand webhook event types with detailed categorization and descriptions (#3369)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-09-05 14:57:01 -04:00
Greyson LaLonde
30b9cdd944 chore: expand ruff rules with comprehensive linting (#3453) 2025-09-05 14:38:56 -04:00
Greyson LaLonde
610c1f70c0 chore: relax mypy configuration and exclude tests from CI (#3452) 2025-09-05 10:00:05 -04:00
Greyson LaLonde
ab82da02f9 refactor: cleanup crew agent executor (#3440)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
refactor: cleanup crew agent executor & add docs

- Remove dead code, unused imports, and obsolete methods
- Modernize with updated type hints and static _format_prompt
- Add docstrings for clarity
2025-09-04 15:32:47 -04:00
Lorenze Jay
f0def350a4 chore: update crewAI and tools dependencies to latest versions (#3444)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Updated `crewai-tools` dependency from version 0.65.0 to 0.69.0 in `pyproject.toml` and `uv.lock`.
- Bumped crewAI version from 0.175.0 to 0.177.0 in `__init__.py`.
- Updated dependency versions in CLI templates for crew, flow, and tool projects to reflect the new crewAI version.
2025-09-03 17:27:05 -07:00
Lorenze Jay
f4f32b5f7f fix: suppress Pydantic deprecation warnings in initialization (#3443)
* fix: suppress Pydantic deprecation warnings in initialization

- Implemented a function to filter out Pydantic deprecation warnings, enhancing the user experience by preventing unnecessary warning messages during execution.
- Removed the previous warning filter setup to streamline the warning suppression process.
- Updated the User-Agent header formatting for consistency.

* fix type check

* dropped

* fix: update type-checker workflow and suppress warnings

- Updated the Python version matrix in the type-checker workflow to use double quotes for consistency.
- Added the `# type: ignore[assignment]` comment to the warning suppression assignment in `__init__.py` to address type checking issues.
- Ensured that the mypy command in the workflow allows for untyped calls and generics, enhancing type checking flexibility.

* better
2025-09-03 16:36:50 -07:00
Tony Kipkemboi
49a5ae0e16 Docs/release 0.175.0 docs (#3441)
* docs(install): note OpenAI SDK requirement openai>=1.13.3 for 0.175.0

* docs(cli): document device-code login and config reset guidance; renumber sections

* docs(flows): document conditional @start and resumable execution semantics

* docs(tasks): move max_retries to deprecation note under attributes table

* docs: provider-neutral RAG client config; entity memory batching; trigger payload note; tracing batch manager

* docs(cli): fix duplicate numbering (renumber Login/API Keys/Configuration sections)
2025-09-03 17:27:11 -04:00
Lucas Gomide
d31ffdbb90 docs: update Enterprise Action Auth Token section docs (#3437)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-09-02 17:36:28 -04:00
Greyson LaLonde
4555ada91e fix(ruff): remove Python 3.12+ only rules for compatibility (#3436) 2025-09-02 14:15:25 -04:00
Greyson LaLonde
92d71f7f06 chore: migrate CI workflows to uv and update dev tooling (#3426)
chore(dev): update tooling & CI workflows

- Upgrade ruff, mypy (strict), pre-commit; add hooks, stubs, config consolidation
- Add bandit to dev deps and update uv.lock
- Enhance ruff rules (modern Python style, B006 for mutable defaults)
- Update workflows to use uv, matrix strategy, and changed-file type checking
- Include tests in type checking; fix job names and add summary job for branch protection
2025-09-02 12:35:02 -04:00
ZhangYier
dada9f140f fix: README.md example link 404 (#3432)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-09-02 10:29:40 -04:00
Greyson LaLonde
878c1a649a refactor: Move events module to crewai.events (#3425)
refactor(events): relocate events module & update imports

- Move events from utilities/ to top-level events/ with types/, listeners/, utils/ structure
- Update all source/tests/docs to new import paths
- Add backwards compatibility stubs in crewai.utilities.events with deprecation warnings
- Restore test mocks and fix related test imports
2025-09-02 10:06:42 -04:00
Greyson LaLonde
1b1a8fdbf4 fix: replace mutable default arguments with None (#3429)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
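The generic pattern behind this fix (a sketch, not code from the repo):

```python
# Buggy: the default list is created once and shared across calls.
def add_item_buggy(item, items=[]):
    items.append(item)
    return items  # a second call keeps the first call's items

# Fixed: default to None and build a fresh list per call.
def add_item(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items
```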
2025-08-31 18:57:45 -04:00
Lorenze Jay
2633b33afc fix: enhance LLM event handling with task and agent metadata (#3422)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* fix: enhance LLM event handling with task and agent metadata

- Added `from_task` and `from_agent` parameters to LLM event emissions for improved traceability.
- Updated `_send_events_to_backend` method in TraceBatchManager to return status codes for better error handling.
- Modified `CREWAI_BASE_URL` to remove trailing slash for consistency.
- Improved logging and graceful failure handling in event sending process.

* drop print
2025-08-29 13:48:49 -07:00
Greyson LaLonde
e4c4b81e63 chore: refactor parser & constants, improve tools_handler, update tests
- Move parser constants to dedicated module with pre-compiled regex
- Refactor CrewAgentParser to module functions; remove unused params
- Improve tools_handler with instance attributes
- Update tests to use module-level parser functions
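A minimal sketch of the constants-module pattern (the pattern and function names are illustrative, not the parser's actual code):

```python
import re

FINAL_ANSWER_PATTERN = re.compile(r"Final Answer:\s*(.*)", re.DOTALL)

def parse_final_answer(text: str) -> str | None:
    match = FINAL_ANSWER_PATTERN.search(text)
    return match.group(1).strip() if match else None
```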
2025-08-29 14:35:08 -04:00
Greyson LaLonde
ec1eff02a8 fix: achieve parity between rag package and current impl (#3418)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Sanitize ChromaDB collection names and use original dir naming
- Add persistent client with file locking to the ChromaDB factory
- Add upsert support to the ChromaDB client
- Suppress ChromaDB deprecation warnings for `model_fields`
- Extract `suppress_logging` into shared `logger_utils`
- Update tests to reflect upsert behavior
- Docs: add additional note
2025-08-28 11:22:36 -04:00
Lorenze Jay
0f1b764c3e chore: update crewAI version and dependencies to 0.175.0 and tools to 0.65.0 (#3417)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Bump crewAI version from 0.165.1 to 0.175.0 in __init__.py.
* Update tools dependency from 0.62.1 to 0.65.0 in pyproject.toml and uv.lock files.
* Reflect changes in CLI templates for crew, flow, and tool configurations.
2025-08-27 19:33:32 -07:00
Lorenze Jay
6ee9db1d4a fix: enhance PlusAPI and TraceBatchManager with timeout handling and graceful failure logging (#3416)
* Added timeout parameters to PlusAPI trace event methods for improved reliability.
* Updated TraceBatchManager to handle None responses gracefully, logging warnings instead of errors.
* Improved logging messages to provide clearer context during trace batch initialization and event sending failures.
2025-08-27 18:43:03 -07:00
Greyson LaLonde
109de91d08 fix: batch entity memory items to reduce redundant operations (#3409)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* fix: batch save entity memory items to reduce redundant operations

* test: update memory event count after entity batch save implementation
2025-08-27 10:47:20 -04:00
Erika Shorten
92b70e652d Add hybrid search alpha parameter to the docs (#3397)
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-08-27 10:36:39 -04:00
Heitor Carvalho
fc3f2c49d2 chore: remove auth0 and the need to type the email on 'crewai login' (#3408)
* Remove the need to type the email on 'crewai login'

* Remove auth0 constants, update tests
2025-08-27 10:12:57 -04:00
Lucas Gomide
88d2968fd5 chore: add deprecation notices to Task.max_retries (#3379)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-08-26 17:24:58 -04:00
Lorenze Jay
7addda9398 Lorenze/better tracing events (#3382)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* feat: implement tool usage limit exception handling

- Introduced `ToolUsageLimitExceeded` exception to manage maximum usage limits for tools.
- Enhanced `CrewStructuredTool` to check and raise this exception when the usage limit is reached.
- Updated `_run` and `_execute` methods to include usage limit checks and handle exceptions appropriately, improving reliability and user feedback.

* feat: enhance PlusAPI and ToolUsage with task metadata

- Removed the `send_trace_batch` method from PlusAPI to streamline the API.
- Added timeout parameters to trace event methods in PlusAPI for improved reliability.
- Updated ToolUsage to include task metadata (task name and ID) in event emissions, enhancing traceability and context during tool usage.
- Refactored event handling in LLM and ToolUsage events to ensure task information is consistently captured.

* feat: enhance memory and event handling with task and agent metadata

- Added task and agent metadata to various memory and event classes, improving traceability and context during memory operations.
- Updated the `ContextualMemory` and `Memory` classes to associate tasks and agents, allowing for better context management.
- Enhanced event emissions in `LLM`, `ToolUsage`, and memory events to include task and agent information, facilitating improved debugging and monitoring.
- Refactored event handling to ensure consistent capture of task and agent details across the system.

* drop

* refactor: clean up unused imports in memory and event modules

- Removed unused TYPE_CHECKING imports from long_term_memory.py to streamline the code.
- Eliminated unnecessary import from memory_events.py, enhancing clarity and maintainability.

* fix memory tests

* fix task_completed payload

* fix: remove unused test agent variable in external memory tests

* refactor: remove unused agent parameter from Memory class save method

- Eliminated the agent parameter from the save method in the Memory class to streamline the code and improve clarity.
- Updated the TraceBatchManager class by moving initialization of attributes into the constructor for better organization and readability.

* refactor: enhance ExecutionState and ReasoningEvent classes with optional task and agent identifiers

- Added optional `current_agent_id` and `current_task_id` attributes to the `ExecutionState` class for better tracking of agent and task states.
- Updated the `from_task` attribute in the `ReasoningEvent` class to use `Optional[Any]` instead of a specific type, improving flexibility in event handling.

* refactor: update ExecutionState class by removing unused agent and task identifiers

- Removed the `current_agent_id` and `current_task_id` attributes from the `ExecutionState` class to simplify the code and enhance clarity.
- Adjusted the import statements to include `Optional` for better type handling.

* refactor: streamline LLM event handling in LiteAgent

- Removed unused LLM event emissions (LLMCallStartedEvent, LLMCallCompletedEvent, LLMCallFailedEvent) from the LiteAgent class to simplify the code and improve performance.
- Adjusted the flow of LLM response handling by eliminating unnecessary event bus interactions, enhancing clarity and maintainability.

* fix flow ownership and stop emitting events once a crew is done

* refactor: remove unused agent parameter from ShortTermMemory save method

- Eliminated the agent parameter from the save method in the ShortTermMemory class to streamline the code and improve clarity.
- This change enhances the maintainability of the memory management system by reducing unnecessary complexity.

* runtime type-check fix

* fixing tests

* fix lints

* fix: update event assertions in test_llm_emits_event_with_lite_agent

- Adjusted the expected counts for completed and started events in the test to reflect the correct behavior of the LiteAgent.
- Updated assertions for agent roles and IDs to match the expected values after recent changes in event handling.

* fix: update task name assertions in event tests

- Modified assertions in `test_stream_llm_emits_event_with_task_and_agent_info` and `test_llm_emits_event_with_task_and_agent_info` to use `task.description` as a fallback for `task.name`. This ensures that the tests correctly validate the task name even when it is not explicitly set.

* fix: update test assertions for output values and improve readability

- Updated assertions in `test_output_json_dict_hierarchical` to reflect the correct expected score value.
- Enhanced readability of assertions in `test_output_pydantic_to_another_task` and `test_key` by formatting the error messages for clarity.
- These changes ensure that the tests accurately validate the expected outputs and improve overall code quality.

* test fixes

* fix crew_test

* added another fixture

* fix: ensure agent and task assignments in contextual memory are conditional

- Updated the ContextualMemory class to check for the existence of short-term, long-term, external, and extended memory before assigning agent and task attributes. This prevents potential attribute errors when memory types are not initialized.
2025-08-26 09:09:46 -07:00
Greyson LaLonde
4b4a119a9f refactor: simplify rag client initialization (#3401)
* Simplified Qdrant and ChromaDB client initialization
* Refactored factory structure and updated tests accordingly
2025-08-26 08:54:51 -04:00
Greyson LaLonde
869bb115c8 Qdrant RAG Provider Support (#3400)
* Added Qdrant provider support with factory, config, and protocols
* Improved default embeddings and type definitions
* Fixed ChromaDB factory embedding assignment
2025-08-26 08:44:02 -04:00
Greyson LaLonde
7ac482c7c9 feat: rag configuration with optional dependency support (#3394)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
### RAG Config System

* Added ChromaDB client creation via config with sensible defaults
* Introduced optional imports and shared RAG config utilities/schema
* Enabled embedding function support with ChromaDB provider integration
* Refactored configs for immutability and stronger type safety
* Removed unused code and expanded test coverage
2025-08-26 00:00:22 -04:00
Greyson LaLonde
2e4bd3f49d feat: qdrant generic client (#3377)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
### Qdrant Client

* Add core client with collection, search, and document APIs (sync + async)
* Refactor utilities, types, and vector params (default 384-dim)
* Improve error handling with `ClientMethodMismatchError`
* Add score normalization, async embeddings, and optional `qdrant-client` dep
* Expand tests and type safety throughout
2025-08-25 16:02:25 -04:00
Greyson LaLonde
c02997d956 Add import utilities for optional dependencies (#3389)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-08-24 22:57:44 -04:00
Heitor Carvalho
f96b779df5 feat: reset tokens on crewai config reset (#3365)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-08-22 16:16:42 -04:00
Greyson LaLonde
842bed4e9c feat: chromadb generic client (#3374)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
Add ChromaDB client implementation with async support

- Implement core collection operations (create, get_or_create, delete)
- Add search functionality with cosine similarity scoring
- Include both sync and async method variants
- Add type safety with NamedTuples and TypeGuards
- Extract utility functions to separate modules
- Default to cosine distance metric for text similarity
- Add comprehensive test coverage

TODO:
- l2, ip score calculations are not settled on
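For reference, the cosine default corresponds to plain ChromaDB usage like the following; the generic client wraps equivalents of these calls (collection name and data are illustrative):

```python
import chromadb

client = chromadb.Client()
collection = client.get_or_create_collection(
    name="docs",
    metadata={"hnsw:space": "cosine"},  # cosine distance for text
)
collection.add(ids=["1"], documents=["hello world"])
results = collection.query(query_texts=["greeting"], n_results=1)
```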
2025-08-21 18:18:46 -04:00
Lucas Gomide
1217935b31 feat: add docs about Automation triggers (#3375)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-08-20 22:02:47 -04:00
Greyson LaLonde
641c156c17 fix: address flaky tests (#3363)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
fix: resolve flaky tests and race conditions in test suite

- Fix telemetry/event tests by patching class methods instead of instances
- Use unique temp files/directories to prevent CI race conditions
- Reset singleton state between tests
- Mock embedchain.Client.setup() to prevent JSON corruption
- Rename test files to test_*.py convention
- Move agent tests to tests/agents directory
- Fix repeated tool usage detection
- Remove database-dependent tools causing initialization errors
2025-08-20 13:34:09 -04:00
Tony Kipkemboi
7fdf9f9290 docs: fix API Reference OpenAPI sources and redirects (#3368)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* docs: fix API Reference OpenAPI sources and redirects; clarify training data usage; add Mermaid diagram; correct CLI usage and notes

* docs(mintlify): use explicit openapi {source, directory} with absolute paths to fix branch deployment routing

* docs(mintlify): add explicit endpoint MDX pages and include in nav; keep OpenAPI auto-gen as fallback

* docs(mintlify): remove OpenAPI Endpoints groups; add localized MDX endpoint pages for pt-BR and ko
2025-08-20 11:55:35 -04:00
Greyson LaLonde
c0d2bf4c12 fix: flow listener resumability for HITL and cyclic flows (#3322)
* fix: flow listener resumability for HITL and cyclic flows

- Add resumption context flag to distinguish HITL resumption from cyclic execution
- Skip method re-execution only during HITL resumption, not for cyclic flows
- Ensure cyclic flows like test_cyclic_flow continue to work correctly

* fix: prevent duplicate execution of conditional start methods in flows

* fix: resolve type error in flow.py line 1040 assignment
2025-08-20 10:06:18 -04:00
Greyson LaLonde
ed187b495b feat: centralize embedding types and create base client (#3246)
feat: add RAG system foundation with generic vector store support

- Add BaseClient protocol for vector stores
- Move BaseRAGStorage to rag/core
- Centralize embedding types in embeddings/types.py
- Remove unused storage models
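A minimal sketch of a vector-store protocol in the spirit of `BaseClient` (method names here are assumptions, not the shipped interface):

```python
from typing import Any, Protocol

class VectorStoreClient(Protocol):
    def get_or_create_collection(self, name: str) -> Any: ...
    def add_documents(self, collection: str, documents: list[str]) -> None: ...
    def search(self, collection: str, query: str, limit: int = 5) -> list[dict[str, Any]]: ...
```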
2025-08-20 09:35:27 -04:00
Wajeeh ul Hassan
2773996b49 fix: revert pin openai<1.100.0 to openai>=1.13.3 (#3364) 2025-08-20 09:16:26 -04:00
Damian Silbergleith
95923b78c6 feat: display task name in verbose output (#3308)
* feat: display task name in verbose output

- Modified event_listener.py to pass task names to the formatter
- Updated console_formatter.py to display task names when available
- Maintains backward compatibility by showing UUID for tasks without names
- Makes verbose output more informative and readable

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove unnecessary f-string prefixes in console formatter

Remove extraneous f prefixes from string literals without placeholders
in console_formatter.py to resolve ruff F541 linting errors.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-20 08:43:05 -04:00
Lucas Gomide
7065ad4336 feat: adding additional parameter to Flow's start methods (#3356)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* feat: adding additional parameter to Flow's start methods

When the `crewai_trigger_payload` parameter exists in the Flow's inputs, it is injected as a parameter into the start methods (see the sketch below)

* fix: support crewai_trigger_payload in async Flow start methods
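A minimal sketch of the injected parameter (import path and kickoff shape assumed from current CrewAI conventions):

```python
from crewai.flow.flow import Flow, start

class MyFlow(Flow):
    @start()
    def begin(self, crewai_trigger_payload: dict | None = None):
        # Populated when the flow inputs include crewai_trigger_payload.
        if crewai_trigger_payload:
            return f"triggered with: {crewai_trigger_payload}"
        return "started without a trigger payload"

result = MyFlow().kickoff(
    inputs={"crewai_trigger_payload": {"event": "demo"}}
)
```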
2025-08-19 17:32:19 -04:00
Lorenze Jay
d6254918fd Lorenze/max retry defaults tools (#3362)
* feat: enhance BaseTool and CrewStructuredTool with usage tracking

This commit introduces a mechanism to track the usage count of tools within the CrewAI framework. The `BaseTool` class now includes a `_increment_usage_count` method that updates the current usage count, which is also reflected in the associated `CrewStructuredTool`. Additionally, a new test has been added to ensure that the maximum usage count is respected when invoking tools, enhancing the overall reliability and functionality of the tool system.

* feat: add max usage count feature to tools documentation

This commit introduces a new section in the tools overview documentation that explains the maximum usage count feature for tools within the CrewAI framework. Users can now set a limit on how many times a tool can be used, enhancing control over tool usage. An example of implementing the `FileReadTool` with a maximum usage count is also provided, improving the clarity and usability of the documentation. (See the sketch after these notes.)

* undo field string
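A minimal sketch of the cap described in these notes (the `max_usage_count` parameter name is taken from this feature; the tool choice is illustrative):

```python
from crewai_tools import FileReadTool

# Allow at most 3 reads; further calls are rejected by the framework.
file_tool = FileReadTool(max_usage_count=3)
```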
2025-08-19 10:44:55 -07:00
Heitor Carvalho
95e3d6db7a fix: add 'tool' section migration when running crewai update (#3341)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
2025-08-19 08:11:30 -04:00
Lorenze Jay
d7f8002baa chore: update crewAI version to 0.165.1 and tools dependency in templates (#3359)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-08-19 00:06:31 -03:00
Lorenze Jay
d743e12a06 refactor: streamline tracing condition checks and clean up deprecated warnings (#3358)
This commit simplifies the conditions for enabling tracing in both the Crew and Flow classes by removing the redundant call to `on_first_execution_tracing_confirmation()`. Additionally, it removes deprecated warning filters related to Pydantic in the KnowledgeStorage and RAGStorage classes, improving code clarity and maintainability.
2025-08-18 19:56:00 -07:00
Lorenze Jay
6068fe941f chore: update crewAI version to 0.165.0 and tools dependency to 0.62.1 (#3357) 2025-08-18 18:25:59 -07:00
Lucas Gomide
2a0cefc98b feat: pin openai<1.100.0 due to ResponseTextConfigParam import issue (#3355)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
2025-08-18 18:31:18 -04:00
Lucas Gomide
a4f65e4870 chore: renaming inject_trigger_input to allow_crewai_trigger_context (#3353)
* chore: renaming inject_trigger_input to allow_crewai_trigger_context

* test: add missing cassettes
2025-08-18 17:57:21 -04:00
Lorenze Jay
a1b3edd79c Refactor tracing logic to consolidate conditions for enabling tracing… (#3347)
* Refactor tracing logic to consolidate conditions for enabling tracing in Crew class and update TraceBatchManager to handle ephemeral batches more effectively. Added tests for trace listener handling of both ephemeral and authenticated user batches.

* drop print

* linted

* refactor: streamline ephemeral handling in TraceBatchManager

This commit removes the ephemeral parameter from the _send_events_to_backend and _finalize_backend_batch methods, replacing it with internal logic that checks the current batch's ephemeral status. This change simplifies the method signatures and enhances the clarity of the code by directly using the is_current_batch_ephemeral attribute for conditional logic.
2025-08-18 14:16:51 -07:00
Lucas Gomide
80b3d9689a Auto inject crewai_trigger_payload (#3351)
* feat: add props to inject trigger payload

* feat: auto-inject trigger_input in the first crew task
2025-08-18 16:36:08 -04:00
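Read together with the `allow_crewai_trigger_context` rename above, usage might look like this sketch; placing the flag on `Task` is an assumption based on the commit titles:

```
from crewai import Agent, Task

researcher = Agent(role="Researcher", goal="Track events", backstory="...")

first_task = Task(
    description="Summarize the trigger event",
    expected_output="A short summary",
    agent=researcher,
    allow_crewai_trigger_context=True,  # opt in to the auto-injected payload
)
```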
Vini Brasil
ec03a53121 Add example to Tool Repository docs (#3352) 2025-08-18 13:19:35 -07:00
Vini Brasil
2fdf3f3a6a Move Chroma lockfile to db/ (#3342)
This commit fixes an issue where using Chroma would spam lockfiles over
the root path of the crew.
2025-08-18 11:00:50 -07:00
Greyson LaLonde
1d3d7ebf5e fix: convert XMLSearchTool config values to strings for configparser compatibility (#3344) 2025-08-18 13:23:58 -04:00
Gabe Milani
2c2196f415 fix: flaky test with PytestUnraisableExceptionWarning (#3346) 2025-08-18 14:07:51 -03:00
Gabe Milani
c9f30b175c chore: ignore deprecation warning from chromadb (#3328)
* chore: ignore deprecation warning from chromadb

* adding TODO: in the comment
2025-08-18 13:24:11 -03:00
Greyson LaLonde
a17b93a7f8 Mock telemetry in pytest tests (#3340)
* Add telemetry mocking for pytest tests

- Mock telemetry by default for all tests except telemetry-specific tests
- Add @pytest.mark.telemetry marker for real telemetry tests
- Reduce test overhead and improve isolation

* Fix telemetry test isolation

- Properly isolate telemetry tests from mocking environment
- Preserve API keys and other necessary environment variables
- Ensure telemetry tests can run with real telemetry instances
2025-08-18 11:55:30 -04:00
namho kim
0d3e462791 fix: Revised Korean translation and sentence structure improvement (#3337)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
2025-08-18 10:46:13 -04:00
Greyson LaLonde
947c9552f0 chore: remove AgentOps integration (#3334)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-08-17 23:07:41 -04:00
Lorenze Jay
04a03d332f Lorenze/emphemeral tracing (#3323)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* for ephemeral traces

* default false

* simpler and consolidated

* keep raising the exception but catch it and continue if it's for trace batches

* cleanup

* more cleanup

* not using logger

* refactor: rename TEMP_TRACING_RESOURCE to EPHEMERAL_TRACING_RESOURCE for clarity and consistency in PlusAPI; update related method calls accordingly

* default true

* drop print
2025-08-15 13:37:16 -07:00
Vidit Ostwal
992e093610 Update Docs: Added Mem0 integration with Short Term and Entity Memory (#3293)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Added Mem0 integration with Short Term and Entity Memory

* Flaky test case of telemetry
2025-08-14 22:50:24 -04:00
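A minimal sketch of the Mem0 wiring the updated docs describe, using CrewAI's documented `memory_config` shape; the ids are placeholders:

```
from crewai import Agent, Crew, Task

assistant = Agent(role="Assistant", goal="Help the user", backstory="...")
answer = Task(description="Answer the question", expected_output="An answer", agent=assistant)

crew = Crew(
    agents=[assistant],
    tasks=[answer],
    memory=True,  # short-term + entity memory
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john"},  # placeholder id
    },
)
```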
Lucas Gomide
07f8e73958 feat: include exchanged agent messages into ExternalMemory metadata (#3290)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
2025-08-14 09:41:09 -04:00
Lorenze Jay
66c2fa1623 chore: update crewAI and tools dependencies to version 0.159.0 and 0.62.0 respectively (#3318)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Bump crewAI version from 0.157.0 to 0.159.0
- Update tools dependency from 0.60.0 to 0.62.0 in pyproject.toml and uv.lock
- Ensure compatibility with the latest features and improvements in the tools package
2025-08-13 16:52:58 -07:00
Greyson LaLonde
7a52cc9667 fix: comment out listener resumability check (#3316) 2025-08-13 19:04:16 -04:00
Greyson LaLonde
8b686fb0c6 feat: add flow resumability support (#3312)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
- Add reload() method to restore flow state from execution data
- Add FlowExecutionData type definitions
- Track completed methods for proper flow resumption
- Support OpenTelemetry baggage context for flow inputs
2025-08-13 13:45:08 -04:00
Tony Kipkemboi
dc6771ae95 docs: Fix API Reference, add RBAC, revamp Examples/Cookbooks (EN/PT-BR/KO) (#3314)
* docs: add RBAC docs and other chores

* docs: fix API Reference rendering; per-locale OpenAPI; add Enterprise RBAC; restructure Examples (EN/PT-BR/KO) + Cookbooks; update nav and links

* docs(i18n): add RBAC docs for pt-BR and ko; update Enterprise Features nav
2025-08-13 13:13:24 -04:00
Tony Kipkemboi
e9b1e5a8f6 docs: add RBAC docs and other chores (#3313) 2025-08-13 12:08:42 -04:00
rishiraj
57c787f919 Docs update/add truefoundry (#3245)
* Add TrueFoundry observability integration documentation

- Added comprehensive TrueFoundry integration guide for CrewAI
- Included AI Gateway overview with key features
- Added technical architecture details for Traceloop SDK integration
- Provided step-by-step setup instructions
- Added advanced configuration examples
- Included tracing dashboard screenshot
- Added support contact and documentation links

* Update TrueFoundry integration documentation

Major improvements and fixes:
- Fixed integration pattern to follow LLM provider approach (base_url + api_key)
- Added technical architecture details showing LLM provider and observability flows
- Updated model names to use correct TrueFoundry format (openai-main/gpt-4o, anthropic/claude-3.5-sonnet)
- Added unified-code-tfy.png image for visual code example
- Reorganized document structure with better section placement
- Moved Additional Tracing section to a better position
- Added link to TrueFoundry quick start guide
- Added comprehensive observability details and dashboard explanation
- Removed complex tracing setup in favor of simpler LLM provider integration

* Finalize TrueFoundry integration documentation

Key improvements:
- Updated base_url references to use placeholder from code snippet
- Added gateway-metrics.png image for observability dashboard
- Formatted metrics description with proper bullet points and bold headers
- Added link to TrueFoundry tracing overview documentation
- Improved readability and consistency throughout the documentation
- Updated Portuguese translation (pt-BR) version

* added truefoundry.mdx

* updated tfy mdx

* Update docs/en/observability/truefoundry.mdx

Co-authored-by: Nikhil Popli <97437109+nikp1172@users.noreply.github.com>

* Update truefoundry.mdx

* Update truefoundry.mdx

-minor updates

* Update truefoundry.mdx

* updated truefoundry.mdx PT-BR

---------

Co-authored-by: Nikhil Popli <97437109+nikp1172@users.noreply.github.com>
2025-08-13 10:08:15 -04:00
Daniel Barreto
a0eadf783b Add Korean translations (#3307)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-08-12 15:58:12 -07:00
Lorenze Jay
251ae00b8b Lorenze/tracing-improvements-cleanup (#3291)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* feat: add tracing support to Crew and Flow classes

- Introduced a new `tracing` optional field in both the `Crew` and `Flow` classes to enable tracing functionality.
- Updated the initialization logic to conditionally set up the `TraceCollectionListener` based on the `tracing` flag or the `CREWAI_TRACING_ENABLED` environment variable.
- Removed the obsolete `interfaces.py` file related to tracing.
- Enhanced the `TraceCollectionListener` to accept a `tracing` parameter and adjusted its internal logic accordingly.
- Added tests to verify the correct setup of the trace listener when tracing is enabled.

This change improves the observability of the crew execution process and allows for better debugging and performance monitoring.

* fix flow name

* refactor: replace _send_batch method with finalize_batch calls in TraceCollectionListener

- Updated the TraceCollectionListener to use the batch_manager's finalize_batch method instead of the deprecated _send_batch method for handling trace events.
- This change improves the clarity of the code and ensures that batch finalization is consistently managed through the batch manager.
- Removed the obsolete _send_batch method to streamline the listener's functionality.

* removed comments

* refactor: enhance tracing functionality by introducing utility for tracing checks

- Added a new utility function `is_tracing_enabled` to streamline the logic for checking if tracing is enabled based on the `CREWAI_TRACING_ENABLED` environment variable.
- Updated the `Crew` and `Flow` classes to utilize this utility for improved readability and maintainability.
- Refactored the `TraceCollectionListener` to simplify tracing checks and ensure consistent behavior across components.
- Introduced a new module for tracing utilities to encapsulate related functions, enhancing code organization.

* refactor: remove unused imports from crew and flow modules

- Removed unnecessary `os` imports from both `crew.py` and `flow.py` files to enhance code cleanliness and maintainability.
2025-08-08 13:42:25 -07:00
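The utility described above can be as small as this sketch; the env var name is from the commit, the accepted truthy values are an assumption:

```
import os

def is_tracing_enabled() -> bool:
    """Single source of truth for the CREWAI_TRACING_ENABLED check."""
    return os.getenv("CREWAI_TRACING_ENABLED", "false").lower() in {"1", "true", "yes"}
```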
Tony Kipkemboi
a221295394 WIP: docs updates (#3296) 2025-08-08 13:05:43 -07:00
Lucas Gomide
a92211f0ba fix: use correct endpoint to get auth/parameters on enterprise configuration (#3295)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
2025-08-08 10:56:44 -04:00
Lucas Gomide
f9481cf10d feat: add enterprise configure command (#3289)
* feat: add enterprise configure command

* refactor: renaming EnterpriseCommand to EnterpriseConfigureCommand
2025-08-08 08:50:01 -04:00
633WHU
915857541e feat: improve LLM message formatting performance (#3251)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* optimize: improve LLM message formatting performance

Replace inefficient copy+append operations with list concatenation
in _format_messages_for_provider method. This optimization reduces
memory allocation and improves performance for large conversation
histories.

**Changes:**
- Mistral models: Use list concatenation instead of copy() + append()
- Ollama models: Use list concatenation instead of copy() + append()
- Add comprehensive performance tests to verify improvements

**Performance impact:**
- Reduces memory allocations for large message lists
- Improves processing speed by 2-25% depending on message list size
- Maintains exact same functionality with better efficiency

cliu_whu@yeah.net

* remove useless comment

---------

Co-authored-by: chiliu <chiliu@paypal.com>
2025-08-07 09:07:47 -04:00
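The optimization in isolation, with illustrative message values:

```
system = {"role": "system", "content": "You are helpful."}
history = [{"role": "user", "content": "hi"}]

# Before: copy the whole history, then mutate the copy
messages = history.copy()
messages.append(system)

# After: one concatenation, no intermediate mutation
messages = history + [system]
```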
Lorenze Jay
7c162411b7 chore: update crewai to 0.157.0 and crewai-tools dependency to version 0.60.0 (#3281)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* chore: update crewai-tools dependency to version 0.60.0

- Updated the `pyproject.toml` and `uv.lock` files to reflect the new version of `crewai-tools`.
- This change ensures compatibility with the latest features and improvements in the tools package.

* chore: bump CrewAI version to 0.157.0

- Updated the version in `__init__.py` to reflect the new release.
- Adjusted dependency versions in `pyproject.toml` files for crew, flow, and tool templates to ensure compatibility with the latest features and improvements in CrewAI.
- This change maintains consistency across the project and prepares for upcoming enhancements.
2025-08-06 14:47:50 -07:00
Lorenze Jay
8f4a6cc61c Lorenze/tracing v1 (#3279)
* initial setup

* feat: enhance CrewKickoffCompletedEvent to include total token usage

- Added total_tokens attribute to CrewKickoffCompletedEvent for better tracking of token usage during crew execution.
- Updated Crew class to emit total token usage upon kickoff completion.
- Removed obsolete context handler and execution context tracker files to streamline event handling.

* cleanup

* remove print statements for loggers

* feat: add CrewAI base URL and improve logging in tracing

- Introduced `CREWAI_BASE_URL` constant for easy access to the CrewAI application URL.
- Replaced print statements with logging in the `TraceSender` class for better error tracking.
- Enhanced the `TraceBatchManager` to provide default values for flow names and removed unnecessary comments.
- Implemented singleton pattern in `TraceCollectionListener` to ensure a single instance is used.
- Added a new test case to verify that the trace listener correctly collects events during crew execution.

* clear

* fix: update datetime serialization in tracing interfaces

- Removed the 'Z' suffix from datetime serialization in TraceSender and TraceEvent to ensure consistent ISO format.
- Added new test cases to validate the functionality of the TraceBatchManager and event collection during crew execution.
- Introduced fixtures to clear event bus listeners before each test to maintain isolation.

* test: enhance tracing tests with mock authentication token

- Added a mock authentication token to the tracing tests to ensure proper setup and event collection.
- Updated test methods to include the mock token, improving isolation and reliability of tests related to the TraceListener and BatchManager.
- Ensured that the tests validate the correct behavior of event collection during crew execution.

* test: refactor tracing tests to improve mock usage

- Moved the mock authentication token patching inside the test class to enhance readability and maintainability.
- Updated test methods to remove unnecessary mock parameters, streamlining the test signatures.
- Ensured that the tests continue to validate the correct behavior of event collection during crew execution while improving isolation.

* test: refactor tracing tests for improved mock usage and consistency

- Moved mock authentication token patching into individual test methods for better clarity and maintainability.
- Corrected the backstory string in the `Agent` instantiation to fix a typo.
- Ensured that all tests validate the correct behavior of event collection during crew execution while enhancing isolation and readability.

* test: add new tracing test for disabled trace listener

- Introduced a new test case to verify that the trace listener does not make HTTP calls when tracing is disabled via environment variables.
- Enhanced existing tests by mocking PlusAPI HTTP calls to avoid authentication and network requests, improving test isolation and reliability.
- Updated the test setup to ensure proper initialization of the trace listener and its components during crew execution.

* refactor: update LLM class to utilize new completion function and improve cost calculation

- Replaced direct calls to `litellm.completion` with a new import for better clarity and maintainability.
- Introduced a new optional attribute `completion_cost` in the LLM class to track the cost of completions.
- Updated the handling of completion responses to ensure accurate cost calculations and improved error handling.
- Removed outdated test cassettes for gemini models to streamline test suite and avoid redundancy.
- Enhanced existing tests to reflect changes in the LLM class and ensure proper functionality.

* test: enhance tracing tests with additional request and response scenarios

- Added new test cases to validate the behavior of the trace listener and batch manager when handling 404 responses from the tracing API.
- Updated existing test cassettes to include detailed request and response structures, ensuring comprehensive coverage of edge cases.
- Improved mock setup to avoid unnecessary network calls and enhance test reliability.
- Ensured that the tests validate the correct behavior of event collection during crew execution, particularly in scenarios where the tracing service is unavailable.

* feat: enable conditional tracing based on environment variable

- Added support for enabling or disabling the trace listener based on the `CREWAI_TRACING_ENABLED` environment variable.
- Updated the `Crew` class to conditionally set up the trace listener only when tracing is enabled, improving performance and resource management.
- Refactored test cases to ensure proper cleanup of event bus listeners before and after each test, enhancing test reliability and isolation.
- Improved mock setup in tracing tests to validate the behavior of the trace listener when tracing is disabled.

* fix: downgrade litellm version from 1.74.9 to 1.74.3

- Updated the `pyproject.toml` and `uv.lock` files to reflect the change in the `litellm` dependency version.
- This downgrade addresses compatibility issues and ensures stability in the project environment.

* refactor: improve tracing test setup by moving mock authentication token patching

- Removed the module-level patch for the authentication token and implemented a fixture to mock the token for all tests in the class, enhancing test isolation and readability.
- Updated the event bus clearing logic to ensure original handlers are restored after tests, improving reliability of the test environment.
- This refactor streamlines the test setup and ensures consistent behavior across tracing tests.

* test: enhance tracing test setup with comprehensive mock authentication

- Expanded the mock authentication token patching to cover all instances where `get_auth_token` is used across different modules, ensuring consistent behavior in tests.
- Introduced a new fixture to reset tracing singleton instances between tests, improving test isolation and reliability.
- This update enhances the overall robustness of the tracing tests by ensuring that all necessary components are properly mocked and reset, leading to more reliable test outcomes.

* just drop the test for now

* refactor: comment out completion-related code in LLM and LLM event classes

- Commented out the `completion` and `completion_cost` imports and their usage in the `LLM` class to prevent potential issues during execution.
- Updated the `LLMCallCompletedEvent` class to comment out the `response_cost` attribute, ensuring consistency with the changes in the LLM class.
- This refactor aims to streamline the code and prepare for future updates without affecting current functionality.

* refactor: update LLM response handling in LiteAgent

- Commented out the `response_cost` attribute in the LLM response handling to align with recent refactoring in the LLM class.
- This change aims to maintain consistency in the codebase and prepare for future updates without affecting current functionality.

* refactor: remove commented-out response cost attributes in LLM and LiteAgent

- Commented out the `response_cost` attribute in both the `LiteAgent` and `LLM` classes to maintain consistency with recent refactoring efforts.
- This change aligns with previous updates aimed at streamlining the codebase and preparing for future enhancements without impacting current functionality.

* bring back litellm upgrade version
2025-08-06 14:05:14 -07:00
633WHU
7dc86dc79a perf: optimize string operations with partition() over split()[0] (#3255)
Replace inefficient split()[0] operations with partition()[0] for better performance
when extracting the first part of a string before a delimiter.

Key improvements:
• Agent role processing: 29% faster with partition()
• Model provider extraction: 16% faster
• Console formatting: Improved responsiveness
• Better readability and explicit intent

Changes:
- agent_utils.py: Use partition('\n')[0] for agent role extraction
- console_formatter.py: Optimize agent role processing in logging
- llm_utils.py: Improve model provider parsing
- llm.py: Optimize model name parsing

Performance impact: 15-30% improvement in string processing operations
that are frequently used in agent execution and console output.

cliu_whu@yeah.net

Co-authored-by: chiliu <chiliu@paypal.com>
2025-08-06 15:04:53 -04:00
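The pattern in isolation: `partition()` stops at the first delimiter, while `split()` scans the whole string and builds a list just to keep element zero:

```
role = "Senior Researcher\nBio: loves benchmarks"

first = role.split("\n")[0]      # allocates a list of every segment
first = role.partition("\n")[0]  # allocates only a 3-tuple
assert first == "Senior Researcher"
```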
Vidit Ostwal
7ce20cfcc6 Dropping User Memory (#3225)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* Dropping User Memory

* Dropping checks for user memory

* changed memory.mdx documentation; removed user memory

* Flaky Test Case Maybe

* Drop memory_config

* Fixed test cases

* Fixed some test cases

* Changed docs

* Changed BR docs

* Docs fixing

* Fix minor doc

* Fix minor doc

* Fix minor doc

* Added fallback mechanism in Mem0
2025-08-06 13:08:10 -04:00
Mrunmay Shelar
1d9523c98f docs: add LangDB integration documentation (#3228)
docs: update LangDB links in observability documentation

- Removed references to the AI Gateway features in both English and Portuguese documentation.
- Updated the Model Catalog links to point to the new app.langdb.ai domain.
- Ensured consistency across both language versions of the documentation.
2025-08-06 11:13:58 -04:00
Lucas Gomide
9f1d7d1aa9 fix: allow persist Flow state with BaseModel entries (#3276)
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-08-06 09:04:59 -04:00
Lucas Gomide
79b375f6fa build: bump LiteLLM to 1.74.9 (#3278)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-08-05 17:10:23 -04:00
Lucas Gomide
75752479c2 docs: add CLI config docs (#3275) 2025-08-05 15:24:34 -04:00
Lucas Gomide
477bc1f09e feat: add default value for crew.name (#3252)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-08-05 12:25:50 -04:00
Lucas Gomide
66567bdc2f Support Device authorization with Okta (#3271)
* feat: support oauth2 config for authentication

* refactor: improve OAuth2 settings management

The CLI now supports seamless integration with other authentication providers, since the client_id, issuer, and domain are now managed by the user

* feat: support okta Device Authorization flow

* chore: resolve linter issues

* test: fix tests

* test: adding tests for auth providers

* test: fix broken test

* refactor: adding WorkOS parameters as default auth settings

* chore: improve oauth2 attributes description

* refactor: simplify WorkOS getting values

* fix: ensure Auth0 parameters are set when overriding default auth provider

* chore: remove TODO Auth0 no longer provides default values

---------

Co-authored-by: Heitor Carvalho <heitor.scz@gmail.com>
2025-08-05 12:16:21 -04:00
Lucas Gomide
0b31bbe957 fix: enable word wrapping for long input tool (#3274) 2025-08-05 11:05:38 -04:00
Lucas Gomide
246cf588cd docs: updating MCP docs with connect_timeout attribute (#3273) 2025-08-05 10:27:18 -04:00
Heitor Carvalho
88ed91561f feat: add crewai config command group and tests (#3206)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-07-31 10:38:51 -04:00
Lorenze Jay
9a347ad458 chore: update crewai-tools dependency to version 0.59.0 and bump CrewAI version to 0.152.0 (#3244)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Updated `crewai-tools` dependency from `0.58.0` to `0.59.0` in `pyproject.toml` and `uv.lock`.
- Bumped the version of the CrewAI library from `0.150.0` to `0.152.0` in `__init__.py`.
- Updated dependency versions in CLI templates for crew, flow, and tool projects to reflect the new CrewAI version.
2025-07-30 14:38:24 -07:00
Lucas Gomide
34c3075fdb fix: support to add memories to Mem0 with agent_id (#3217)
* fix: support to add memories to Mem0 with agent_id

* feat: removing memory_type checks from Mem0Storage

* feat: ensure agent_id is always present while saving memory into Mem0

* fix: use OR operator when querying Mem0 memories with both user_id and agent_id
2025-07-30 11:56:46 -04:00
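A hedged sketch of the final fix: the filter shape follows Mem0's documented v2 search filters, and its exact use inside `Mem0Storage` is an assumption:

```
# Match memories saved under either id instead of requiring both.
filters = {"OR": [{"user_id": "john"}, {"agent_id": "researcher"}]}
# results = mem0_client.search(query, version="v2", filters=filters)
```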
Vidit Ostwal
498e8dc6e8 Changed the import error to show missing module files (#2423)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* Fix issue #2421: Handle missing google.genai dependency gracefully

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix import sorting in test file

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix import sorting with ruff

Co-Authored-By: Joe Moura <joao@crewai.com>

* Removed unwanted test case

* Added dynamic catching for all the embedder functions

* Dropped the comment

* Added test case

* Fixed Linting Issue

* Flaky test case in 3.13

* Test Case fixed

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-07-30 10:01:17 -04:00
Lorenze Jay
cb522cf500 Enhance Flow class to support custom flow names (#3234)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Added an optional `name` attribute to the Flow class for better identification.
- Updated event emissions to utilize the new `name` attribute, ensuring accurate flow naming in events.
- Added tests to verify the correct flow name is set and emitted during flow execution.
2025-07-29 15:41:30 -07:00
Vini Brasil
017acc74f5 Add timezone to event timestamps (#3231)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
Events were lacking timezone information, making them naive datetimes,
which can be ambiguous.
2025-07-28 17:09:06 -03:00
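The fix pattern in plain Python:

```
from datetime import datetime, timezone

naive = datetime.now()              # ambiguous: which zone is this?
aware = datetime.now(timezone.utc)  # explicit UTC, unambiguous
print(aware.isoformat())            # e.g. 2025-07-28T17:09:06+00:00
```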
Greyson LaLonde
fab86d197a Refactor: Move RAG components to dedicated top-level module (#3222)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Move RAG components to top-level module

- Create src/crewai/rag directory structure
- Move embeddings configurator from utilities to rag module
- Update imports across codebase and documentation
- Remove deprecated embedding files

* Remove empty knowledge/embedder directory
2025-07-25 10:55:31 -04:00
Vidit Ostwal
864e9bfb76 Changed the default value in Mem0 config (#3216)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Changed the default value in Mem0 config

* Added regression test for this

* Fixed Linting issues
2025-07-24 13:20:18 -04:00
Lucas Gomide
d3b45d197c fix: remove crewai signup references, replaced by crewai login (#3213) 2025-07-24 07:47:35 -04:00
Manuka Yasas
579153b070 docs: fix incorrect model naming in Google Vertex AI documentation (#3189)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Change model format from "gemini/gemini-1.5-pro-latest" to "gemini-1.5-pro-latest"
  in Vertex AI section examples
- Update both English and Portuguese documentation files
- Fixes incorrect provider prefix usage for Vertex AI models
- Ensures consistency with Vertex AI provider requirements

Files changed:
- docs/en/concepts/llms.mdx (line 272)
- docs/pt-BR/concepts/llms.mdx (line 270)

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-07-23 16:58:57 -04:00
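Per the commit, the Vertex AI examples now use the bare model name; a minimal sketch:

```
from crewai import LLM

llm = LLM(model="gemini-1.5-pro-latest")           # Vertex AI: no prefix
# llm = LLM(model="gemini/gemini-1.5-pro-latest")  # wrong for Vertex AI
```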
Lorenze Jay
b1fdcdfa6e chore: update dependencies and version in project files (#3212)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
- Updated `crewai-tools` dependency from `0.55.0` to `0.58.0` in `pyproject.toml` and `uv.lock`.
- Added new packages `anthropic`, `browserbase`, `playwright`, `pyee`, and `stagehand` with their respective versions in `uv.lock`.
- Bumped the version of the CrewAI library from `0.148.0` to `0.150.0` in `__init__.py`.
- Updated dependency versions in CLI templates for crew, flow, and tool projects to reflect the new CrewAI version.
2025-07-23 11:03:50 -07:00
Mike Plachta
18d76a270c docs: add SerperScrapeWebsiteTool documentation and reorganize SerperDevTool setup instructions (#3211) 2025-07-23 12:12:59 -04:00
Vidit Ostwal
30541239ad Changed Mem0 Storage v1.1 -> v2 (#2893)
* Changed v1.1 -> v2

* Fixed Test Cases:

* Fixed linting issues

* Changed docs

* Refactored the storage

* Fixed test cases

* Fixing run-time checks

* Fixed Test Case

* Updated docs and added test case for custom categories

* Add the TODO back

* Minor Changes

* Added output_format in search

* Minor changes

* Added output_format and version in both search and save

* Small change

* Minor bugs

* Fixed test cases

* Changed docs

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-07-23 08:30:52 -04:00
Tony Kipkemboi
9a65573955 Feature/update docs (#3205)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* docs: add create_directory parameter

* docs: remove string guardrails to focus on function guardrails

* docs: remove get help from docs.json

* docs: update pt-BR docs.json changes
2025-07-22 13:55:27 -04:00
Lucas Gomide
27623a1d01 feat: remove duplicate print on LLM call error (#3183)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
By improving litellm handler error / outputs

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-07-21 22:08:07 -04:00
João Moura
2593242234 Adding Support to adhoc tool calling using the internal LLM class (#3195)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* Adding Support to adhoc tool calling using the internal LLM class

* fix type
2025-07-21 19:36:48 -03:00
Greyson LaLonde
2ab6c31544 chore: add deprecation notices to UserMemory (#3201)
- Mark UserMemory and UserMemoryItem for removal in v0.156.0 or 2025-08-04
- Update all references with deprecation warnings
- Users should migrate to ExternalMemory
2025-07-21 15:26:34 -04:00
Lucas Gomide
3c55c8a22a fix: append user message when last message is from assistant when using Ollama models (#3200)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Ollama doesn't support the last message being 'assistant'
We can drop this commit after merging https://github.com/BerriAI/litellm/pull/10917
2025-07-21 13:30:40 -04:00
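A minimal sketch of the workaround; the nudge text is illustrative:

```
def ensure_user_last(messages: list[dict]) -> list[dict]:
    """Ollama rejects conversations that end on an assistant turn."""
    if messages and messages[-1].get("role") == "assistant":
        return messages + [{"role": "user", "content": "Please continue."}]
    return messages
```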
Ranuga Disansa
424433ff58 docs: Add Tavily Search & Extractor tools to Search-Research suite (#3146)
* docs: Add Tavily Search and Extractor tools documentation

* docs: Add Tavily Search and Extractor tools to the documentation

---------

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-07-21 12:01:29 -04:00
Lucas Gomide
2fd99503ed build: upgrade LiteLLM to 1.74.3 (#3199) 2025-07-21 09:58:47 -04:00
Vidit Ostwal
942014962e fixed save method, changed the test cases (#3187)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* fixed save method, changed the test cases

* Linting fixed
2025-07-18 15:10:26 -04:00
Lucas Gomide
2ab79a7dd5 feat: drop unsupported stop parameter for LLM models automatically (#3184) 2025-07-18 13:54:28 -04:00
Lucas Gomide
27c449c9c4 test: remove workaround related to SQLite without FTS5 (#3179)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
For more details check out [here](actions/runner-images#12576)
2025-07-18 09:37:15 -04:00
Vini Brasil
9737333ffd Use file lock around Chroma client initialization (#3181)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
This commit fixes a bug with concurrent processes and Chroma where
`table collections already exists` (and similar errors) were raised.

https://cookbook.chromadb.dev/core/system_constraints/
2025-07-17 11:50:45 -03:00
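A sketch of the pattern, assuming the `filelock` package and illustrative paths:

```
import chromadb
from filelock import FileLock

# Only one process at a time may initialize the client, so concurrent
# crews no longer race on collection creation.
with FileLock("./db/chroma.lock"):
    client = chromadb.PersistentClient(path="./db")
    collection = client.get_or_create_collection("crew_memory")
```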
Lucas Gomide
bf248d5118 docs: fix neatlogs documentation (#3171)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-07-16 21:18:04 -04:00
Lorenze Jay
2490e8cd46 Update CrewAI version to 0.148.0 in project templates and dependencies (#3172)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* Update CrewAI version to 0.148.0 in project templates and dependencies

* Update crewai-tools dependency to version 0.55.0 in pyproject.toml and uv.lock for improved functionality and performance.
2025-07-16 12:36:43 -07:00
Lucas Gomide
9b67e5a15f Emit events about Agent eval (#3168)
* feat: emit events about Agent Eval

We are triggering events when an evaluation has started/completed/failed

* style: fix type checking issues
2025-07-16 13:18:59 -04:00
Lucas Gomide
6ebb6c9b63 Supporting eval single Agent/LiteAgent (#3167)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* refactor: rely on task completion event to evaluate agents

* feat: remove Crew dependency to evaluate agent

* feat: drop execution_context in AgentEvaluator

* chore: drop experimental Agent Eval feature from stable crew.test

* feat: support eval LiteAgent

* resolve linter issues
2025-07-15 09:22:41 -04:00
Lucas Gomide
53f674be60 chore: remove evaluation folder (#3159)
This folder was moved to `experimental` folder
2025-07-15 08:30:20 -04:00
Paras Sakarwal
11717a5213 docs: added integration with neatlogs (#3138)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-07-14 11:08:24 -04:00
Lucas Gomide
b6d699f764 Implement thread-safe AgentEvaluator (#3157)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* refactor: implement thread-safe AgentEvaluator with hybrid state management

* chore: remove useless comments
2025-07-14 10:05:42 -04:00
Lucas Gomide
5b15061b87 test: add test helper to assert Agent Experiments (#3156) 2025-07-14 09:24:49 -04:00
Lucas Gomide
1b6b2b36d9 Introduce Evaluator Experiment (#3133)
* feat: add exchanged messages in LLMCallCompletedEvent

* feat: add GoalAlignment metric for Agent evaluation

* feat: add SemanticQuality metric for Agent evaluation

* feat: add Tool Metrics for Agent evaluation

* feat: add Reasoning Metrics for Agent evaluation, still in progress

* feat: add AgentEvaluator class

This class will evaluate Agents' results and report them to the user

* fix: do not evaluate Agent by default

This is an experimental feature; we still need to refine it further

* test: add Agent eval tests

* fix: render all feedback per iteration

* style: resolve linter issues

* style: fix mypy issues

* fix: allow messages to be empty on LLMCallCompletedEvent

* feat: add Experiment evaluation framework with baseline comparison

* fix: reset evaluator for each experiment iteration

* fix: fix tracking of new test cases

* chore: split Experimental evaluation classes

* refactor: remove unused method

* refactor: isolate Console print in a dedicated class

* fix: make crew required to run an experiment

* fix: use time-aware to define experiment result

* test: add tests for Evaluator Experiment

* style: fix linter issues

* fix: encode string before hashing

* style: resolve linter issues

* feat: add experimental folder for beta features (#3141)

* test: move tests to experimental folder
2025-07-14 09:06:45 -04:00
devin-ai-integration[bot]
3ada4053bd Fix #3149: Add missing create_directory parameter to Task class (#3150)
* Fix #3149: Add missing create_directory parameter to Task class

- Add create_directory field with default value True for backward compatibility
- Update _save_file method to respect create_directory parameter
- Add comprehensive tests covering all scenarios
- Maintain existing behavior when create_directory=True (default)

The create_directory parameter was documented but missing from implementation.
Users can now control directory creation behavior:
- create_directory=True (default): Creates directories if they don't exist
- create_directory=False: Raises RuntimeError if directory doesn't exist

Fixes issue where users got TypeError when trying to use the documented
create_directory parameter.

Co-Authored-By: João <joao@crewai.com>

* Fix lint: Remove unused import os from test_create_directory_true

- Removes F401 lint error: 'os' imported but unused
- All lint checks should now pass

Co-Authored-By: João <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: João <joao@crewai.com>
2025-07-14 08:15:41 -04:00
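The parameter as documented and now implemented:

```
from crewai import Task

task = Task(
    description="Write the report",
    expected_output="A markdown report",
    output_file="reports/out.md",
    create_directory=False,  # raise RuntimeError if reports/ is missing
)
```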
Vidit Ostwal
e7a5747c6b Comparing BaseLLM class instead of LLM (#3120)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Comparing BaseLLM class instead of LLM

* Fixed test cases

* Fixed Linting Issues

* removed last line

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-07-11 20:50:36 -04:00
Vidit Ostwal
eec1262d4f Fix agent knowledge (#2831)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* Added add_sources()

* Fixed the agent knowledge querying

* Added test cases

* Fixed linting issue

* Fixed logic

* Seems like a flaky test case

* Minor changes

* Added knowledge attribute to the crew documentation

* Flaky test

* fixed spaces

* Flaky Test Case

* Seems like a flaky test case

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-07-11 13:52:26 -04:00
Tony Kipkemboi
c6caa763d7 docs: Add guardrail attribute documentation and examples (#3139)
- Document string-based guardrails in tasks
- Add guardrail examples to YAML configuration
- Fix Python code formatting in PT-BR CLI docs
2025-07-11 13:32:59 -04:00
Lucas Gomide
08fa3797ca Introducing Agent evaluation (#3130)
* feat: add exchanged messages in LLMCallCompletedEvent

* feat: add GoalAlignment metric for Agent evaluation

* feat: add SemanticQuality metric for Agent evaluation

* feat: add Tool Metrics for Agent evaluation

* feat: add Reasoning Metrics for Agent evaluation, still in progress

* feat: add AgentEvaluator class

This class will evaluate Agents' results and report them to the user

* fix: do not evaluate Agent by default

This is an experimental feature; we still need to refine it further

* test: add Agent eval tests

* fix: render all feedback per iteration

* style: resolve linter issues

* style: fix mypy issues

* fix: allow messages to be empty on LLMCallCompletedEvent
2025-07-11 13:18:03 -04:00
Greyson LaLonde
bf8fa3232b Add SQLite FTS5 support to test workflow (#3140)
* Add SQLite FTS5 support to test workflow

* Add explanatory comment for SQLite FTS5 workaround
2025-07-11 12:01:25 -04:00
Heitor Carvalho
a6e60a5d42 fix: use production workos environment id (#3129)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-07-09 17:09:01 -04:00
Lorenze Jay
7b0f3aabd9 chore: update crewAI and dependencies to version 0.141.0 and 0.51.0 (#3128)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
- Bump crewAI version to 0.141.0 in __init__.py for alignment with updated dependencies.
- Update `crewai-tools` dependency version to 0.51.0 in pyproject.toml and related template files.
- Add new testing dependencies: pytest-split and pytest-xdist for improved test execution.
- Ensure compatibility with the latest package versions in uv.lock and template files.
2025-07-09 10:37:06 -07:00
Lucas Gomide
f071966951 docs: add docs about Agent.kickoff usage (#3121)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-07-08 16:15:40 -04:00
Lucas Gomide
318310bb7a docs: add docs about Agent repository (#3122) 2025-07-08 15:56:08 -04:00
Greyson LaLonde
34a03f882c feat: add crew context tracking for LLM guardrail events (#3111)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
Add crew context tracking using OpenTelemetry baggage for thread-safe propagation. Context is set during kickoff and cleaned up in finally block. Added thread safety tests with mocked agent execution.
2025-07-07 16:33:07 -04:00
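A hedged sketch of the propagation pattern named here; the baggage key is illustrative:

```
from opentelemetry import baggage, context

token = context.attach(baggage.set_baggage("crew_id", "crew-123"))
try:
    # Anything running under this context can read the crew context back.
    current = baggage.get_baggage("crew_id")
finally:
    context.detach(token)  # cleanup mirrors the commit's finally block
```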
Greyson LaLonde
a0fcc0c8d1 Speed up GitHub Actions tests with parallelization (#3107)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Add pytest-xdist and pytest-split to dev dependencies for parallel test execution
- Split tests into 8 parallel groups per Python version for better distribution
- Enable CPU-level parallelization with -n auto to maximize resource usage
- Add fail-fast strategy and maxfail=3 to stop early on failures
- Add job name to match branch protection rules
- Reduce test timeout from default to 30s for faster failure detection
- Remove redundant cache configuration
2025-07-03 21:08:00 -04:00
Lorenze Jay
748c25451c Lorenze/new version 0.140.0 (#3106)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* fix: clean up whitespace and update dependencies

* Removed unnecessary whitespace in multiple files for consistency.
* Updated `crewai-tools` dependency version to `0.49.0` in `pyproject.toml` and related template files.
* Bumped CrewAI version to `0.140.0` in `__init__.py` for alignment with updated dependencies.

* chore: update pyproject.toml to exclude documentation from build targets

* Added exclusions for the `docs` directory in both wheel and sdist build targets to streamline the build process and reduce unnecessary file inclusion.

* chore: update uv.lock for dependency resolution and Python version compatibility

* Incremented revision to 2.
* Updated resolution markers to include support for Python 3.13 and adjusted platform checks for better compatibility.
* Added new wheel URLs for zstandard version 0.23.0 to ensure availability across various platforms.

* chore: pin json-repair dependency version in pyproject.toml and uv.lock

* Updated json-repair dependency from a range to a specific version (0.25.2) for consistency and to avoid potential compatibility issues.
* Adjusted related entries in uv.lock to reflect the pinned version, ensuring alignment across project files.

* chore: pin agentops dependency version in pyproject.toml and uv.lock

* Updated agentops dependency from a range to a specific version (0.3.18) for consistency and to avoid potential compatibility issues.
* Adjusted related entries in uv.lock to reflect the pinned version, ensuring alignment across project files.

* test: enhance cache call assertions in crew tests

* Improved the test for cache hitting between agents by filtering mock calls to ensure they include the expected 'tool' and 'input' keywords.
* Added assertions to verify the number of cache calls and their expected arguments, enhancing the reliability of the test.
* Cleaned up whitespace and improved readability in various test cases for better maintainability.
2025-07-02 15:22:18 -07:00
Heitor Carvalho
a77dcdd419 feat: add multiple provider support (#3089)
* Remove `crewai signup` command, update docs

* Add `Settings.clear()` and clear settings before each login

* Add pyjwt

* Remove print statement from ToolCommand.login()

* Remove auth0 dependency

* Update docs
2025-07-02 16:44:47 -04:00
Greyson LaLonde
68f5bdf0d9 feat: add console logging for LLM guardrail events (#3105)
* feat: add console logging for memory events

* fix: emit guardrail events in correct order and handle exceptions

* fix: remove unreachable elif conditions in guardrail event listener

* fix: resolve mypy type errors in guardrail event handler
2025-07-02 16:19:22 -04:00
Irineu Brito
7f83947020 fix: correct code example language inconsistency in pt-BR docs (#3088)
* fix: correct code example language inconsistency in pt-BR docs

* fix: fully standardize code example language and naming in pt-BR docs

* fix: fully standardize code example language and naming in pt-BR docs; fixed variables

* fix: fully standardize code example language and naming in pt-BR docs; fixed params

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-07-02 12:18:32 -04:00
Lucas Gomide
ceb310bcde docs: add docs about Memory Events (#3104)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
2025-07-02 11:10:45 -04:00
Lucas Gomide
ae57e5723c feat: add console logging for memory system usage (#3103) 2025-07-02 11:00:26 -04:00
Lucas Gomide
ab39753a75 Introduce MemoryEvents to monitor their usage (#3098)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* feat: emit events about memory usage

* test: add tests about memory events usage

* fixed linter issues

* test: use scoped_handlers while listener Memory events
2025-07-01 22:50:39 -04:00
Tony Kipkemboi
640e1a7bc2 Add docs redirects and development tools (#3096)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* Add Reo.dev tracking script to documentation

* Comprehensive docs improvements and development tools

- Add comprehensive .cursorrules with CrewAI and Flow development patterns
- Add redirect rules for old doc links without /en/ prefix
- Replace changelog pages with direct GitHub releases links
- Fix installation page directory tree rendering issue
- Fix broken Visual Studio Build Tools link formatting
- Remove obsolete changelog files to reduce maintenance overhead

These changes improve developer experience and ensure all old documentation links continue working.
2025-07-01 14:41:34 -04:00
Lorenze Jay
e544ff8ba3 refactor: streamline collection handling in RAGStorage (#3097)
Replaced the try-except block for collection retrieval with a single call to get_or_create_collection, simplifying the code and improving readability. Added logging to confirm whether the collection was found or created.
2025-07-01 10:14:39 -07:00
Lucas Gomide
49c0144154 feat: improve data training for models up to 7B parameters (#3085)
* feat: improve data training for models up to 7B parameters.

* docs: training considerations for small models to the documentation
2025-07-01 11:47:47 -04:00
Tony Kipkemboi
2ab002a5bf Add Reo.dev tracking script to documentation (#3094)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
2025-07-01 10:29:28 -04:00
Lucas Gomide
b7bf15681e feat: add capability to track LLM calls by task and agent (#3087)
* feat: add capability to track LLM calls by task and agent

This makes it possible to filter or scope LLM events by specific agents or tasks, which can be very useful for debugging or analytics in real-time application

* feat: add docs about LLM tracking by Agents and Tasks

* fix incompatible BaseLLM.call method signature

* feat: support to filter LLM Events from Lite Agent
2025-07-01 09:30:16 -04:00
Tony Kipkemboi
af9c01f5d3 Add Scarf analytics tracking (#3086)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Add Scarf analytics tracking

* Fix bandit security warning for urlopen

* Fix linting errors

* Refactor telemetry: reuse existing logic and simplify exceptions
2025-06-30 17:48:45 -04:00
Irineu Brito
5a12b51ba2 fix: Correct typo 'depployments' to 'deployments' in documentation 'instalation' (#3081) 2025-06-30 12:19:31 -04:00
Michael Juliano
576b8ff836 Updated LiteLLM dependency. (#3047)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Updated LiteLLM dependency.

This moves to the latest stable release. Critically, this includes a fix
from https://github.com/BerriAI/litellm/pull/11563 which is required to
use grok-3-mini with crewAI.

* Ran `uv sync` as requested.
2025-06-27 09:54:12 -04:00
Lucas Gomide
b35c3e8024 fix: ensure env-vars are written in upper case (#3072)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
When creating a Crew via the CLI and selecting the Azure provider, the generated .env file had environment variables in lowercase.
This commit ensures that all environment variables are written in uppercase.
2025-06-26 12:29:06 -04:00
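The fix in miniature:

```
def format_env_line(key: str, value: str) -> str:
    return f"{key.upper()}={value}"

assert format_env_line("azure_api_key", "sk-...") == "AZURE_API_KEY=sk-..."
```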
Mr. Ånand
b09796cd3f Adding Nebius to docs (#3070)
* Adding Nebius to docs

Submitting this PR on behalf of Nebius AI Studio to add Nebius models to the CrewAI documentation.

I tested with the latest CrewAI + Nebius setup to ensure compatibility.

cc @tonykipkemboi

* updated LiteLLM page

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-26 11:10:19 -04:00
devin-ai-integration[bot]
e0b46492fa Fix: Normalize project names by stripping trailing slashes in crew creation (#3060)
* fix: normalize project names by stripping trailing slashes in crew creation

- Strip trailing slashes from project names in create_folder_structure
- Add comprehensive tests for trailing slash scenarios
- Fixes #3059

The issue occurred because trailing slashes in project names like 'hello/'
were directly incorporated into pyproject.toml, creating invalid package
names and script entries. This fix silently normalizes project names by
stripping trailing slashes before processing, maintaining backward
compatibility while fixing the invalid template generation.

Co-Authored-By: João <joao@crewai.com>

* trigger CI re-run to check for flaky test issue

Co-Authored-By: João <joao@crewai.com>

* fix: resolve circular import in CLI authentication module

- Move ToolCommand import to be local inside _poll_for_token method
- Update test mock to patch ToolCommand at correct location
- Resolves Python 3.11 test collection failure in CI

Co-Authored-By: João <joao@crewai.com>

* feat: add comprehensive class name validation for Python identifiers

- Ensure generated class names are always valid Python identifiers
- Handle edge cases: names starting with numbers, special characters, keywords, built-ins
- Add sanitization logic to remove invalid characters and prefix with 'Crew' when needed
- Add comprehensive test coverage for class name validation edge cases
- Addresses GitHub PR comment from lucasgomide about class name validity

Fixes include:
- Names starting with numbers: '123project' -> 'Crew123Project'
- Python built-ins: 'True' -> 'TrueCrew', 'False' -> 'FalseCrew'
- Special characters: 'hello@world' -> 'HelloWorld'
- Empty/whitespace: '   ' -> 'DefaultCrew'
- All generated class names pass isidentifier() and keyword checks

Co-Authored-By: João <joao@crewai.com>

* refactor: change class name validation to raise errors instead of generating defaults

- Remove default value generation (Crew prefix/suffix, DefaultCrew fallback)
- Raise ValueError with descriptive messages for invalid class names
- Update tests to expect validation errors instead of default corrections
- Addresses GitHub comment feedback from lucasgomide about strict validation

Co-Authored-By: João <joao@crewai.com>

* fix: add working directory safety checks to prevent test interference

Co-Authored-By: João <joao@crewai.com>

* fix: standardize working directory handling in tests to prevent corruption

Co-Authored-By: João <joao@crewai.com>

* fix: eliminate os.chdir() usage in tests to prevent working directory corruption

- Replace os.chdir() with parent_folder parameter for create_folder_structure tests
- Mock create_folder_structure directly for create_crew tests to avoid directory changes
- All 12 tests now pass locally without working directory corruption
- Should resolve the 103 failing tests in Python 3.12 CI

Co-Authored-By: João <joao@crewai.com>

* fix: remove unused os import to resolve lint failure

- Remove unused 'import os' statement from test_create_crew.py
- All tests still pass locally after removing unused import
- Should resolve F401 lint error in CI

Co-Authored-By: João <joao@crewai.com>

* feat: add folder name validation for Python module names

- Implement validation to ensure folder_name is valid Python identifier
- Check that folder names don't start with digits
- Validate folder names are not Python keywords
- Sanitize invalid characters from folder names
- Raise ValueError with descriptive messages for invalid cases
- Update tests to validate both folder and class name requirements
- Addresses GitHub comment requiring folder names to be valid Python module names

Co-Authored-By: João <joao@crewai.com>

* fix: correct folder name validation logic to match test expectations

- Fix validation regex to catch names starting with invalid characters like '@#/'
- Ensure validation properly raises ValueError for cases expected by tests
- Maintain support for valid cases like 'my.project/' -> 'myproject'
- Address lucasgomide's comment about valid Python module names

Co-Authored-By: João <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: João <joao@crewai.com>
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-26 10:11:16 -04:00
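A hedged reconstruction of the validation described above (raise on invalid names rather than inventing defaults); the helper name is illustrative:

```
import keyword
import re

def validate_class_name(name: str) -> str:
    candidate = re.sub(r"[^0-9a-zA-Z_]", "", name.rstrip("/"))
    if not candidate or not candidate.isidentifier() or keyword.iskeyword(candidate):
        raise ValueError(f"{name!r} cannot produce a valid Python class name")
    return candidate

# validate_class_name("123project")  -> ValueError (starts with a digit)
# validate_class_name("True")        -> ValueError (Python keyword)
```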
Greyson LaLonde
ece13fbda0 refactor: implement PEP 621 dynamic versioning (#3068) 2025-06-26 10:02:26 -04:00
kilavvy
94a62d84e1 Update test_lite_agent.py (#3040)
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-26 09:55:53 -04:00
Lucas Gomide
cdf8388b18 docs: update CLI LLM's documentation (#3071)
This change aims to be more generic, so we don’t have to constantly reflect all available LLM options suggested by the CLI when creating a crew.
2025-06-26 09:31:43 -04:00
Lorenze Jay
0f861338ef chore: update version to 0.134.0 across project files (#3067)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-06-25 16:06:43 -07:00
Lucas Gomide
4d1aabf620 feat: enhance CrewBase MCP tools support to allow selecting multiple tools per agent (#3065)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* feat: enhance CrewBase MCP tools support to allow selecting multiple tools per agent

* docs: clarify how to access MCP tools

* build: upgrade crewai-tools
2025-06-25 14:59:55 -04:00
Daniel Barreto
a50fae3a4b Add pt-BR docs translation (#3039)
* docs: add pt-br translations

Powered by a CrewAI Flow https://github.com/danielfsbarreto/docs_translator

* Update mcp/overview.mdx brazilian docs

Its en-US counterpart was updated after I did a pass,
so now it includes the new section about @CrewBase
2025-06-25 11:52:33 -04:00
Lucas Gomide
f6dfec61d6 feat: add official way to use MCP Tools within a CrewBase (#3058)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-06-24 15:14:59 -04:00
Akshit Madan
060c486948 Updated Docs for maxim observability (#3003)
* docs: added Maxim support for Agent Observability

* enhanced the maxim integration doc page as per the github PR reviewer bot suggestions

* Update maxim-observability.mdx

* Update maxim-observability.mdx

- Fixed Python version, >=3.10
- added expected_output field in Task
- Removed marketing links and added github link

* added maxim in observability

* updated the maxim docs page

* fixed image paths

* removed demo link

---------

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-24 14:36:51 -04:00
Lucas Gomide
8b176d0598 feat: improve Crew search while resetting their memories (#3057)
* test: add tests to test get_crews

* feat: improve Crew search while resetting their memories

Some memories couldn't be reset due to their reliance on relative external sources like `PDFKnowledge`. This was caused by the need to run the reset memories command from the `src` directory, which could break when external files weren't accessible from that path.

This commit allows the reset command to be executed from the root of the project — the same location typically used to run a crew — improving compatibility and reducing friction.

* feat: skip cli/templates folder while looking for Crew

* refactor: use console.print instead of print
2025-06-24 11:48:59 -04:00
Rostyslav Borovyk
c96d4a6823 Add Oxylabs Web Scraping tools (#2905)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Add Oxylabs tools

* Review updates

* Review updates

---------

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-23 13:58:16 -04:00
Lucas Gomide
59032817c7 docs: update recommendation filters for MCP and Enterprise tools (#3041)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
2025-06-20 13:35:26 -04:00
Lucas Gomide
e9d8a853ea feat: support to initialize a tool from defined Tool attributes (#3023)
* feat: support to initialize a tool from defined Tool attributes

* fix: ensure Agent is able to load a list of Tools dynamically
2025-06-20 10:53:37 -04:00
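For context, crewAI tools declare their metadata as class attributes on `BaseTool`; a sketch of the kind of attribute-driven initialization the commit refers to (illustrative, not the commit's exact code):

```python
from crewai.tools import BaseTool

class WordCounterTool(BaseTool):
    # These declared attributes are enough to initialize the tool.
    name: str = "word_counter"
    description: str = "Counts the words in a piece of text."

    def _run(self, text: str) -> str:
        return str(len(text.split()))

tools = [WordCounterTool()]  # an Agent can now load a list of such tools dynamically
```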
Vidit Ostwal
463ea2b97f Fixed type annotation in task (#3021)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Added Union of List of Task, None, NotSpecified

* Seems like a flaky test

* Fixed run time issue

* Fixed Linting issues

* fix pydantic error

* aesthetic changes

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-19 14:37:46 -04:00
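Roughly what the first bullet above describes, using a stand-in sentinel (the real crewAI types may differ):

```python
from typing import List, Union

class _NotSpecified:
    """Sentinel distinguishing 'argument not passed' from an explicit None."""

NOT_SPECIFIED = _NotSpecified()

class Task:  # stand-in for crewai.Task
    ...

# The fixed annotation: a Union of a list of Task, None, and the sentinel.
ContextType = Union[List[Task], None, _NotSpecified]
```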
Jannik Maierhöfer
ec2903e5ee fix: upgrade langfuse code examples to langfuse python sdk v3 (#3030)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-19 12:18:33 -04:00
Daniel Barreto
4364585ebc Remove mkdocs from project dependencies (#3036)
CrewAI has been using https://mintlify.com/
to serve its docs
2025-06-19 11:21:08 -04:00
Lorenze Jay
0a6b7c655b docs: add comprehensive integration documentation for various services (#2999)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Introduced detailed documentation for integrations including Asana, Box, ClickUp, GitHub, Gmail, Google Calendar, Google Sheets, HubSpot, Jira, Linear, Notion, Salesforce, Shopify, Slack, Stripe, and Zendesk.
- Updated main docs.json to include a new "Integration Docs" section, organizing the documentation for easy access.
- Each integration includes setup instructions, available actions, and example tasks to streamline user onboarding and usage.
2025-06-18 10:21:18 -04:00
Lucas Gomide
db1e9e9b9a fix: fix pydantic support to 2.7.x (#3016)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
Pydantic 2.7.x does not support a second parameter in model validators with mode="after"
2025-06-16 16:20:10 -04:00
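A sketch of the constraint, assuming standard Pydantic v2 semantics:

```python
from pydantic import BaseModel, model_validator

class TaskConfig(BaseModel):
    name: str

    # Works on Pydantic 2.7.x: an "after" model validator taking only self.
    # Adding a second parameter (e.g. `info: ValidationInfo`) requires a
    # newer 2.x release, which is what this fix accounts for.
    @model_validator(mode="after")
    def check_name(self):
        if not self.name.strip():
            raise ValueError("name must not be blank")
        return self
```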
Lucas Gomide
d92382b6cf fix: SSL error while getting LLM data from GH (#3014)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
When running behind cloud-based security proxies, users struggle to download LLM data from GitHub. Usually the following error is raised:

```
SSL certificate verification failed: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /BerriAI/litellm/main/model_prices_and_context_window.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1010)')))
Current CA bundle path: /usr/local/etc///.pem
```

This commit ensures the SSL config is being provided when requesting the data.
2025-06-16 11:34:04 -04:00
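One common way to supply the SSL config explicitly (a sketch, not necessarily the exact fix; the use of `certifi` is an assumption):

```python
import certifi
import requests

URL = ("https://raw.githubusercontent.com/BerriAI/litellm/main/"
       "model_prices_and_context_window.json")

# Pass an explicit CA bundle rather than relying on a broken default path;
# corporate proxies often require pointing this at their own bundle.
response = requests.get(URL, verify=certifi.where(), timeout=30)
response.raise_for_status()
model_data = response.json()
```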
Lucas Gomide
7c8f2a1325 docs: add missing docs about LLMGuardrail events (#3013) 2025-06-16 11:05:36 -04:00
2182 changed files with 303425 additions and 50177 deletions

[52 binary image files changed (previews not shown); recorded "Before" sizes range from 17 KiB to 55 KiB.]
1429
.cursorrules Normal file

File diff suppressed because it is too large

28
.github/codeql/codeql-config.yml vendored Normal file
View File

@@ -0,0 +1,28 @@
name: "CodeQL Config"
paths-ignore:
# Ignore template files - these are boilerplate code that shouldn't be analyzed
- "lib/crewai/src/crewai/cli/templates/**"
# Ignore test cassettes - these are test fixtures/recordings
- "lib/crewai/tests/cassettes/**"
- "lib/crewai-tools/tests/cassettes/**"
# Ignore cache and build artifacts
- ".cache/**"
# Ignore documentation build artifacts
- "docs/.cache/**"
# Ignore experimental code
- "lib/crewai/src/crewai/experimental/a2a/**"
paths:
# Include all Python source code from workspace packages
- "lib/crewai/src/**"
- "lib/crewai-tools/src/**"
- "lib/devtools/src/**"
# Include tests (but exclude cassettes via paths-ignore)
- "lib/crewai/tests/**"
- "lib/crewai-tools/tests/**"
- "lib/devtools/tests/**"
# Configure specific queries or packs if needed
# queries:
# - uses: security-and-quality

63
.github/security.md vendored
View File

@@ -1,27 +1,50 @@
## CrewAI Security Vulnerability Reporting Policy
## CrewAI Security Policy
CrewAI prioritizes the security of our software products, services, and GitHub repositories. To promptly address vulnerabilities, follow these steps for reporting security issues:
We are committed to protecting the confidentiality, integrity, and availability of the CrewAI ecosystem. This policy explains how to report potential vulnerabilities and what you can expect from us when you do.
### Reporting Process
Do **not** report vulnerabilities via public GitHub issues.
### Scope
Email all vulnerability reports directly to:
**security@crewai.com**
We welcome reports for vulnerabilities that could impact:
### Required Information
To help us quickly validate and remediate the issue, your report must include:
- CrewAI-maintained source code and repositories
- CrewAI-operated infrastructure and services
- Official CrewAI releases, packages, and distributions
- **Vulnerability Type:** Clearly state the vulnerability type (e.g., SQL injection, XSS, privilege escalation).
- **Affected Source Code:** Provide full file paths and direct URLs (branch, tag, or commit).
- **Reproduction Steps:** Include detailed, step-by-step instructions. Screenshots are recommended.
- **Special Configuration:** Document any special settings or configurations required to reproduce.
- **Proof-of-Concept (PoC):** Provide exploit or PoC code (if available).
- **Impact Assessment:** Clearly explain the severity and potential exploitation scenarios.
Issues affecting clearly unaffiliated third-party services or user-generated content are out of scope, unless you can demonstrate a direct impact on CrewAI systems or customers.
### Our Response
- We will acknowledge receipt of your report promptly via your provided email.
- Confirmed vulnerabilities will receive priority remediation based on severity.
- Patches will be released as swiftly as possible following verification.
### How to Report
### Reward Notice
Currently, we do not offer a bug bounty program. Rewards, if issued, are discretionary.
- **Please do not** disclose vulnerabilities via public GitHub issues, pull requests, or social media.
- Email detailed reports to **security@crewai.com** with the subject line `Security Report`.
- If you need to share large files or sensitive artifacts, mention it in your email and we will coordinate a secure transfer method.
### What to Include
Providing comprehensive information enables us to validate the issue quickly:
- **Vulnerability overview** — a concise description and classification (e.g., RCE, privilege escalation)
- **Affected components** — repository, branch, tag, or deployed service along with relevant file paths or endpoints
- **Reproduction steps** — detailed, step-by-step instructions; include logs, screenshots, or screen recordings when helpful
- **Proof-of-concept** — exploit details or code that demonstrates the impact (if available)
- **Impact analysis** — severity assessment, potential exploitation scenarios, and any prerequisites or special configurations
### Our Commitment
- **Acknowledgement:** We aim to acknowledge your report within two business days.
- **Communication:** We will keep you informed about triage results, remediation progress, and planned release timelines.
- **Resolution:** Confirmed vulnerabilities will be prioritized based on severity and fixed as quickly as possible.
- **Recognition:** We currently do not run a bug bounty program; any rewards or recognition are issued at CrewAI's discretion.
### Coordinated Disclosure
We ask that you allow us a reasonable window to investigate and remediate confirmed issues before any public disclosure. We will coordinate publication timelines with you whenever possible.
### Safe Harbor
We will not pursue or support legal action against individuals who, in good faith:
- Follow this policy and refrain from violating any applicable laws
- Avoid privacy violations, data destruction, or service disruption
- Limit testing to systems in scope and respect rate limits and terms of service
If you are unsure whether your testing is covered, please contact us at **security@crewai.com** before proceeding.

48
.github/workflows/build-uv-cache.yml vendored Normal file
View File

@@ -0,0 +1,48 @@
name: Build uv cache
on:
push:
branches:
- main
paths:
- "uv.lock"
- "pyproject.toml"
schedule:
- cron: "0 0 */5 * *" # Run every 5 days at midnight UTC to prevent cache expiration
workflow_dispatch:
permissions:
contents: read
jobs:
build-cache:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.10", "3.11", "3.12", "3.13"]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: ${{ matrix.python-version }}
enable-cache: false
- name: Install dependencies and populate cache
run: |
echo "Building global UV cache for Python ${{ matrix.python-version }}..."
uv sync --all-groups --all-extras --no-install-project
echo "Cache populated successfully"
- name: Save uv caches
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}

103
.github/workflows/codeql.yml vendored Normal file
View File

@@ -0,0 +1,103 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL Advanced"
on:
push:
branches: [ "main" ]
paths-ignore:
- "lib/crewai/src/crewai/cli/templates/**"
pull_request:
branches: [ "main" ]
paths-ignore:
- "lib/crewai/src/crewai/cli/templates/**"
jobs:
analyze:
name: Analyze (${{ matrix.language }})
# Runner size impacts CodeQL analysis time. To learn more, please see:
# - https://gh.io/recommended-hardware-resources-for-running-codeql
# - https://gh.io/supported-runners-and-hardware-resources
# - https://gh.io/using-larger-runners (GitHub.com only)
# Consider using larger runners or machines with greater resources for possible analysis time improvements.
runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
permissions:
# required for all workflows
security-events: write
# required to fetch internal or private CodeQL packs
packages: read
# only required for workflows in private repositories
actions: read
contents: read
strategy:
fail-fast: false
matrix:
include:
- language: actions
build-mode: none
- language: python
build-mode: none
# CodeQL supports the following values keywords for 'language': 'actions', 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'rust', 'swift'
# Use `c-cpp` to analyze code written in C, C++ or both
# Use 'java-kotlin' to analyze code written in Java, Kotlin or both
# Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
# To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
# see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
# If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
# your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
steps:
- name: Checkout repository
uses: actions/checkout@v4
# Add any setup steps before running the `github/codeql-action/init` action.
# This includes steps like installing compilers or runtimes (`actions/setup-node`
# or others). This is typically only required for manual builds.
# - name: Setup runtime (example)
# uses: actions/setup-example@v1
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
config-file: ./.github/codeql/codeql-config.yml
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# If the analyze step fails for one of the languages you are analyzing with
# "We were unable to automatically build your code", modify the matrix above
# to set the build mode to "manual" for that language. Then modify this step
# to build your code.
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
- if: matrix.build-mode == 'manual'
shell: bash
run: |
echo 'If you are using a "manual" build mode for one or more of the' \
'languages you are analyzing, replace this with the commands to build' \
'your code, for example:'
echo ' make bootstrap'
echo ' make release'
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:${{matrix.language}}"

View File

@@ -2,6 +2,9 @@ name: Lint
on: [pull_request]
permissions:
contents: read
jobs:
lint:
runs-on: ubuntu-latest
@@ -15,8 +18,27 @@ jobs:
- name: Fetch Target Branch
run: git fetch origin $TARGET_BRANCH --depth=1
- name: Install Ruff
run: pip install ruff
- name: Restore global uv cache
id: cache-restore
uses: actions/cache/restore@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py3.11-${{ hashFiles('uv.lock') }}
restore-keys: |
uv-main-py3.11-
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: "3.11"
enable-cache: false
- name: Install dependencies
run: uv sync --all-groups --all-extras --no-install-project
- name: Get Changed Python Files
id: changed-files
@@ -30,7 +52,18 @@ jobs:
- name: Run Ruff on Changed Files
if: ${{ steps.changed-files.outputs.files != '' }}
run: |
echo "${{ steps.changed-files.outputs.files }}" \
| tr ' ' '\n' \
| grep -v 'src/crewai/cli/templates/' \
| xargs -I{} ruff check "{}"
echo "${{ steps.changed-files.outputs.files }}" \
| tr ' ' '\n' \
| grep -v 'src/crewai/cli/templates/' \
| grep -v '/tests/' \
| xargs -I{} uv run ruff check "{}"
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py3.11-${{ hashFiles('uv.lock') }}

View File

@@ -1,45 +0,0 @@
name: Deploy MkDocs
on:
release:
types: [published]
permissions:
contents: write
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v5
with:
python-version: '3.10'
- name: Calculate requirements hash
id: req-hash
run: echo "::set-output name=hash::$(sha256sum requirements-doc.txt | awk '{print $1}')"
- name: Setup cache
uses: actions/cache@v4
with:
key: mkdocs-material-${{ steps.req-hash.outputs.hash }}
path: .cache
restore-keys: |
mkdocs-material-
- name: Install Requirements
run: |
sudo apt-get update &&
sudo apt-get install pngquant &&
pip install mkdocs-material mkdocs-material-extensions pillow cairosvg
env:
GH_TOKEN: ${{ secrets.GH_TOKEN }}
- name: Build and deploy MkDocs
run: mkdocs gh-deploy --force

81
.github/workflows/publish.yml vendored Normal file
View File

@@ -0,0 +1,81 @@
name: Publish to PyPI
on:
release:
types: [ published ]
workflow_dispatch:
jobs:
build:
name: Build packages
runs-on: ubuntu-latest
permissions:
contents: read
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install uv
uses: astral-sh/setup-uv@v4
- name: Build packages
run: |
uv build --all-packages
rm dist/.gitignore
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: dist
path: dist/
publish:
name: Publish to PyPI
needs: build
runs-on: ubuntu-latest
environment:
name: pypi
url: https://pypi.org/p/crewai
permissions:
id-token: write
contents: read
steps:
- uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: "3.12"
enable-cache: false
- name: Download artifacts
uses: actions/download-artifact@v4
with:
name: dist
path: dist
- name: Publish to PyPI
env:
UV_PUBLISH_TOKEN: ${{ secrets.PYPI_API_TOKEN }}
run: |
failed=0
for package in dist/*; do
if [[ "$package" == *"crewai_devtools"* ]]; then
echo "Skipping private package: $package"
continue
fi
echo "Publishing $package"
if ! uv publish "$package"; then
echo "Failed to publish $package"
failed=1
fi
done
if [ $failed -eq 1 ]; then
echo "Some packages failed to publish"
exit 1
fi

View File

@@ -1,23 +0,0 @@
name: Security Checker
on: [pull_request]
jobs:
security-check:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11.9"
- name: Install dependencies
run: pip install bandit
- name: Run Bandit
run: bandit -c pyproject.toml -r src/ -ll

View File

@@ -3,32 +3,116 @@ name: Run Tests
on: [pull_request]
permissions:
contents: write
contents: read
env:
OPENAI_API_KEY: fake-api-key
PYTHONUNBUFFERED: 1
BRAVE_API_KEY: fake-brave-key
SNOWFLAKE_USER: fake-snowflake-user
SNOWFLAKE_PASSWORD: fake-snowflake-password
SNOWFLAKE_ACCOUNT: fake-snowflake-account
SNOWFLAKE_WAREHOUSE: fake-snowflake-warehouse
SNOWFLAKE_DATABASE: fake-snowflake-database
SNOWFLAKE_SCHEMA: fake-snowflake-schema
EMBEDCHAIN_DB_URI: sqlite:///test.db
jobs:
tests:
name: tests (${{ matrix.python-version }})
runs-on: ubuntu-latest
timeout-minutes: 15
strategy:
fail-fast: true
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
group: [1, 2, 3, 4, 5, 6, 7, 8]
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Fetch all history for proper diff
- name: Restore global uv cache
id: cache-restore
uses: actions/cache/restore@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
restore-keys: |
uv-main-py${{ matrix.python-version }}-
- name: Install uv
uses: astral-sh/setup-uv@v3
uses: astral-sh/setup-uv@v6
with:
enable-cache: true
- name: Set up Python ${{ matrix.python-version }}
run: uv python install ${{ matrix.python-version }}
version: "0.8.4"
python-version: ${{ matrix.python-version }}
enable-cache: false
- name: Install the project
run: uv sync --dev --all-extras
run: uv sync --all-groups --all-extras
- name: Run tests
run: uv run pytest --block-network --timeout=60 -vv
- name: Restore test durations
uses: actions/cache/restore@v4
with:
path: .test_durations_py*
key: test-durations-py${{ matrix.python-version }}
- name: Run tests (group ${{ matrix.group }} of 8)
run: |
PYTHON_VERSION_SAFE=$(echo "${{ matrix.python-version }}" | tr '.' '_')
DURATION_FILE="../../.test_durations_py${PYTHON_VERSION_SAFE}"
# Temporarily always skip cached durations to fix test splitting
# When durations don't match, pytest-split runs duplicate tests instead of splitting
echo "Using even test splitting (duration cache disabled until fix merged)"
DURATIONS_ARG=""
# Original logic (disabled temporarily):
# if [ ! -f "$DURATION_FILE" ]; then
# echo "No cached durations found, tests will be split evenly"
# DURATIONS_ARG=""
# elif git diff origin/${{ github.base_ref }}...HEAD --name-only 2>/dev/null | grep -q "^tests/.*\.py$"; then
# echo "Test files have changed, skipping cached durations to avoid mismatches"
# DURATIONS_ARG=""
# else
# echo "No test changes detected, using cached test durations for optimal splitting"
# DURATIONS_ARG="--durations-path=${DURATION_FILE}"
# fi
cd lib/crewai && uv run pytest \
--block-network \
--timeout=30 \
-vv \
--splits 8 \
--group ${{ matrix.group }} \
$DURATIONS_ARG \
--durations=10 \
-n auto \
--maxfail=3
- name: Run tool tests (group ${{ matrix.group }} of 8)
run: |
cd lib/crewai-tools && uv run pytest \
--block-network \
--timeout=30 \
-vv \
--splits 8 \
--group ${{ matrix.group }} \
--durations=10 \
-n auto \
--maxfail=3
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}

View File

@@ -3,24 +3,99 @@ name: Run Type Checks
on: [pull_request]
permissions:
contents: write
contents: read
jobs:
type-checker:
type-checker-matrix:
name: type-checker (${{ matrix.python-version }})
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: ["3.10", "3.11", "3.12", "3.13"]
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v5
with:
python-version: "3.11.9"
fetch-depth: 0 # Fetch all history for proper diff
- name: Install Requirements
- name: Restore global uv cache
id: cache-restore
uses: actions/cache/restore@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
restore-keys: |
uv-main-py${{ matrix.python-version }}-
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: ${{ matrix.python-version }}
enable-cache: false
- name: Install dependencies
run: uv sync --all-groups --all-extras
- name: Get changed Python files
id: changed-files
run: |
pip install mypy
# Get the list of changed Python files compared to the base branch
echo "Fetching changed files..."
git diff --name-only --diff-filter=ACMRT origin/${{ github.base_ref }}...HEAD -- '*.py' > changed_files.txt
- name: Run type checks
run: mypy src
# Filter for files in src/ directory only (excluding tests/)
grep -E "^src/" changed_files.txt > filtered_changed_files.txt || true
# Check if there are any changed files
if [ -s filtered_changed_files.txt ]; then
echo "Changed Python files in src/:"
cat filtered_changed_files.txt
echo "has_changes=true" >> $GITHUB_OUTPUT
# Convert newlines to spaces for mypy command
echo "files=$(cat filtered_changed_files.txt | tr '\n' ' ')" >> $GITHUB_OUTPUT
else
echo "No Python files changed in src/"
echo "has_changes=false" >> $GITHUB_OUTPUT
fi
- name: Run type checks on changed files
if: steps.changed-files.outputs.has_changes == 'true'
run: |
echo "Running mypy on changed files with Python ${{ matrix.python-version }}..."
uv run mypy ${{ steps.changed-files.outputs.files }}
- name: No files to check
if: steps.changed-files.outputs.has_changes == 'false'
run: echo "No Python files in src/ were modified - skipping type checks"
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
# Summary job to provide single status for branch protection
type-checker:
name: type-checker
runs-on: ubuntu-latest
needs: type-checker-matrix
if: always()
steps:
- name: Check matrix results
run: |
if [ "${{ needs.type-checker-matrix.result }}" == "success" ] || [ "${{ needs.type-checker-matrix.result }}" == "skipped" ]; then
echo "✅ All type checks passed"
else
echo "❌ Type checks failed"
exit 1
fi

View File

@@ -0,0 +1,71 @@
name: Update Test Durations
on:
push:
branches:
- main
paths:
- 'tests/**/*.py'
workflow_dispatch:
permissions:
contents: read
jobs:
update-durations:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
env:
OPENAI_API_KEY: fake-api-key
PYTHONUNBUFFERED: 1
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Restore global uv cache
id: cache-restore
uses: actions/cache/restore@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
restore-keys: |
uv-main-py${{ matrix.python-version }}-
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
python-version: ${{ matrix.python-version }}
enable-cache: false
- name: Install the project
run: uv sync --all-groups --all-extras
- name: Run all tests and store durations
run: |
PYTHON_VERSION_SAFE=$(echo "${{ matrix.python-version }}" | tr '.' '_')
uv run pytest --store-durations --durations-path=.test_durations_py${PYTHON_VERSION_SAFE} -n auto
continue-on-error: true
- name: Save durations to cache
if: always()
uses: actions/cache/save@v4
with:
path: .test_durations_py*
key: test-durations-py${{ matrix.python-version }}
- name: Save uv caches
if: steps.cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
path: |
~/.cache/uv
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}

5
.gitignore vendored
View File

@@ -2,7 +2,6 @@
.pytest_cache
__pycache__
dist/
lib/
.env
assets/*
.idea
@@ -21,9 +20,9 @@ crew_tasks_output.json
.mypy_cache
.ruff_cache
.venv
agentops.log
test_flow.html
crewairules.mdc
plan.md
conceptual_plan.md
build_image
build_image
chromadb-*.lock

View File

@@ -1,7 +1,27 @@
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.8.2
- repo: local
hooks:
- id: ruff
args: ["--fix"]
name: ruff
entry: bash -c 'source .venv/bin/activate && uv run ruff check --config pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
- id: ruff-format
name: ruff-format
entry: bash -c 'source .venv/bin/activate && uv run ruff format --config pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
- id: mypy
name: mypy
entry: bash -c 'source .venv/bin/activate && uv run mypy --config-file pyproject.toml "$@"' --
language: system
pass_filenames: true
types: [python]
exclude: ^(lib/crewai/src/crewai/cli/templates/|lib/crewai/tests/|lib/crewai-tools/tests/)
- repo: https://github.com/astral-sh/uv-pre-commit
rev: 0.9.3
hooks:
- id: uv-lock

View File

@@ -1,4 +0,0 @@
exclude = [
"templates",
"__init__.py",
]

View File

@@ -62,9 +62,9 @@
With over 100,000 developers certified through our community courses at [learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
standard for enterprise-ready AI automation.
# CrewAI Enterprise Suite
# CrewAI AMP Suite
CrewAI Enterprise Suite is a comprehensive bundle tailored for organizations that require secure, scalable, and easy-to-manage agent-driven automation.
CrewAI AMP Suite is a comprehensive bundle tailored for organizations that require secure, scalable, and easy-to-manage agent-driven automation.
You can try one part of the suite the [Crew Control Plane for free](https://app.crewai.com)
@@ -76,9 +76,9 @@ You can try one part of the suite the [Crew Control Plane for free](https://app.
- **Advanced Security**: Built-in robust security and compliance measures ensuring safe deployment and management.
- **Actionable Insights**: Real-time analytics and reporting to optimize performance and decision-making.
- **24/7 Support**: Dedicated enterprise support to ensure uninterrupted operation and quick resolution of issues.
- **On-premise and Cloud Deployment Options**: Deploy CrewAI Enterprise on-premise or in the cloud, depending on your security and compliance requirements.
- **On-premise and Cloud Deployment Options**: Deploy CrewAI AMP on-premise or in the cloud, depending on your security and compliance requirements.
CrewAI Enterprise is designed for enterprises seeking a powerful, reliable solution to transform complex business processes into efficient,
CrewAI AMP is designed for enterprises seeking a powerful, reliable solution to transform complex business processes into efficient,
intelligent automations.
## Table of contents
@@ -418,10 +418,10 @@ Choose CrewAI to easily build powerful, adaptable, and production-ready AI autom
You can test different real life examples of AI crews in the [CrewAI-examples repo](https://github.com/crewAIInc/crewAI-examples?tab=readme-ov-file):
- [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/landing_page_generator)
- [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/landing_page_generator)
- [Having Human input on the execution](https://docs.crewai.com/how-to/Human-Input-on-Execution)
- [Trip Planner](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner)
- [Stock Analysis](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis)
- [Trip Planner](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/trip_planner)
- [Stock Analysis](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/stock_analysis)
### Quick Tutorial
@@ -429,19 +429,19 @@ You can test different real life examples of AI crews in the [CrewAI-examples re
### Write Job Descriptions
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/job-posting) or watch a video below:
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/job-posting) or watch a video below:
[![Jobs postings](https://img.youtube.com/vi/u98wEMz-9to/maxresdefault.jpg)](https://www.youtube.com/watch?v=u98wEMz-9to "Jobs postings")
### Trip Planner
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner) or watch a video below:
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/trip_planner) or watch a video below:
[![Trip Planner](https://img.youtube.com/vi/xis7rWp-hjs/maxresdefault.jpg)](https://www.youtube.com/watch?v=xis7rWp-hjs "Trip Planner")
### Stock Analysis
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis) or watch a video below:
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/stock_analysis) or watch a video below:
[![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/maxresdefault.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")
@@ -674,9 +674,9 @@ CrewAI is released under the [MIT License](https://github.com/crewAIInc/crewAI/b
### Enterprise Features
- [What additional features does CrewAI Enterprise offer?](#q-what-additional-features-does-crewai-enterprise-offer)
- [Is CrewAI Enterprise available for cloud and on-premise deployments?](#q-is-crewai-enterprise-available-for-cloud-and-on-premise-deployments)
- [Can I try CrewAI Enterprise for free?](#q-can-i-try-crewai-enterprise-for-free)
- [What additional features does CrewAI AMP offer?](#q-what-additional-features-does-crewai-amp-offer)
- [Is CrewAI AMP available for cloud and on-premise deployments?](#q-is-crewai-amp-available-for-cloud-and-on-premise-deployments)
- [Can I try CrewAI AMP for free?](#q-can-i-try-crewai-amp-for-free)
### Q: What exactly is CrewAI?
@@ -732,17 +732,17 @@ A: Check out practical examples in the [CrewAI-examples repository](https://gith
A: Contributions are warmly welcomed! Fork the repository, create your branch, implement your changes, and submit a pull request. See the Contribution section of the README for detailed guidelines.
### Q: What additional features does CrewAI Enterprise offer?
### Q: What additional features does CrewAI AMP offer?
A: CrewAI Enterprise provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.
A: CrewAI AMP provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.
### Q: Is CrewAI Enterprise available for cloud and on-premise deployments?
### Q: Is CrewAI AMP available for cloud and on-premise deployments?
A: Yes, CrewAI Enterprise supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.
A: Yes, CrewAI AMP supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.
### Q: Can I try CrewAI Enterprise for free?
### Q: Can I try CrewAI AMP for free?
A: Yes, you can explore part of the CrewAI Enterprise Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.
A: Yes, you can explore part of the CrewAI AMP Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.
### Q: Does CrewAI support fine-tuning or training custom models?
@@ -762,7 +762,7 @@ A: CrewAI is highly scalable, supporting simple automations and large-scale ente
### Q: Does CrewAI offer debugging and monitoring tools?
A: Yes, CrewAI Enterprise includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.
A: Yes, CrewAI AMP includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.
### Q: What programming languages does CrewAI support?

File diff suppressed because it is too large

View File

@@ -1,473 +0,0 @@
---
title: Changelog
description: View the latest updates and changes to CrewAI
icon: timeline
---
<Update label="2024-05-22" description="v0.121.0" tags={["Latest"]}>
## Release Highlights
<Frame>
<img src="/images/releases/v01210.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.121.0">View on GitHub</a>
</div>
**Core Improvements & Fixes**
- Fixed encoding error when creating tools
- Fixed failing llama test
- Updated logging configuration for consistency
- Enhanced telemetry initialization and event handling
**New Features & Enhancements**
- Added **markdown attribute** to the Task class
- Added **reasoning attribute** to the Agent class
- Added **inject_date flag** to Agent for automatic date injection
- Implemented **HallucinationGuardrail** (no-op with test coverage)
**Documentation & Guides**
- Added documentation for **StagehandTool** and improved MDX structure
- Added documentation for **MCP integration** and updated enterprise docs
- Documented knowledge events and updated reasoning docs
- Added stop parameter documentation
- Fixed import references in doc examples (before_kickoff, after_kickoff)
- General docs updates and restructuring for clarity
</Update>
<Update label="2024-05-15" description="v0.120.1">
## Release Highlights
<Frame>
<img src="/images/releases/v01201.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.120.1">View on GitHub</a>
</div>
**Core Improvements & Fixes**
- Fixed **interpolation with hyphens**
</Update>
<Update label="2024-05-14" description="v0.120.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01200.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.120.0">View on GitHub</a>
</div>
**Core Improvements & Fixes**
- Enabled **full Ruff rule set** by default for stricter linting
- Addressed race condition in FilteredStream using context managers
- Fixed agent knowledge reset issue
- Refactored agent fetching logic into utility module
**New Features & Enhancements**
- Added support for **loading an Agent directly from a repository**
- Enabled setting an empty context for Task
- Enhanced Agent repository feedback and fixed Tool auto-import behavior
- Introduced direct initialization of knowledge (bypassing knowledge_sources)
**Documentation & Guides**
- Updated security.md for current security practices
- Cleaned up Google setup section for clarity
- Added link to AI Studio when entering Gemini key
- Updated Arize Phoenix observability guide
- Refreshed flow documentation
</Update>
<Update label="2024-05-08" description="v0.119.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01190.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.119.0">View on GitHub</a>
</div>
**Core Improvements & Fixes**
- Improved test reliability by enhancing pytest handling for flaky tests
- Fixed memory reset crash when embedding dimensions mismatch
- Enabled parent flow identification for Crew and LiteAgent
- Prevented telemetry-related crashes when unavailable
- Upgraded **LiteLLM version** for better compatibility
- Fixed llama converter tests by removing skip_external_api
**New Features & Enhancements**
- Introduced **knowledge retrieval prompt re-writing** in Agent for improved tracking and debugging
- Made LLM setup and quickstart guides model-agnostic
**Documentation & Guides**
- Added advanced configuration docs for the RAG tool
- Updated Windows troubleshooting guide
- Refined documentation examples for better clarity
- Fixed typos across docs and config files
</Update>
<Update label="2024-04-28" description="v0.118.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01180.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.118.0">View on GitHub</a>
</div>
**Core Improvements & Fixes**
- Fixed issues with missing prompt or system templates
- Removed global logging configuration to avoid unintended overrides
- Renamed **TaskGuardrail to LLMGuardrail** for improved clarity
- Downgraded litellm to version 1.167.1 for compatibility
- Added missing init.py files to ensure proper module initialization
**New Features & Enhancements**
- Added support for **no-code Guardrail creation** to simplify AI behavior controls
**Documentation & Guides**
- Removed CrewStructuredTool from public documentation to reflect internal usage
- Updated enterprise documentation and YouTube embed for improved onboarding experience
</Update>
<Update label="2024-04-20" description="v0.117.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01170.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.117.0">View on GitHub</a>
</div>
**New Features & Enhancements**
- Added `result_as_answer` parameter support in `@tool` decorator.
- Introduced support for new language models: GPT-4.1, Gemini-2.0, and Gemini-2.5 Pro.
- Enhanced knowledge management capabilities.
- Added Huggingface provider option in CLI.
- Improved compatibility and CI support for Python 3.10+.
**Core Improvements & Fixes**
- Fixed issues with incorrect template parameters and missing inputs.
- Improved asynchronous flow handling with coroutine condition checks.
- Enhanced memory management with isolated configuration and correct memory object copying.
- Fixed initialization of lite agents with correct references.
- Addressed Python type hint issues and removed redundant imports.
- Updated event placement for improved tool usage tracking.
- Raised explicit exceptions when flows fail.
- Removed unused code and redundant comments from various modules.
- Updated GitHub App token action to v2.
**Documentation & Guides**
- Enhanced documentation structure, including enterprise deployment instructions.
- Automatically create output folders for documentation generation.
- Fixed broken link in WeaviateVectorSearchTool documentation.
- Fixed guardrail documentation usage and import paths for JSON search tools.
- Updated documentation for CodeInterpreterTool.
- Improved SEO, contextual navigation, and error handling for documentation pages.
</Update>
<Update label="2024-04-25" description="v0.117.1">
## Release Highlights
<Frame>
<img src="/images/releases/v01171.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.117.1">View on GitHub</a>
</div>
**Core Improvements & Fixes**
- Upgraded **crewai-tools** to latest version
- Upgraded **liteLLM** to latest version
- Fixed **Mem0 OSS**
</Update>
<Update label="2024-04-07" description="v0.114.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01140.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.114.0">View on GitHub</a>
</div>
**New Features & Enhancements**
- Agents as an atomic unit. (`Agent(...).kickoff()`)
- Support for [Custom LLM implementations](https://docs.crewai.com/guides/advanced/custom-llm).
- Integrated External Memory and [Opik observability](https://docs.crewai.com/how-to/opik-observability).
- Enhanced YAML extraction.
- Multimodal agent validation.
- Added Secure fingerprints for agents and crews.
**Core Improvements & Fixes**
- Improved serialization, agent copying, and Python compatibility.
- Added wildcard support to `emit()`
- Added support for additional router calls and context window adjustments.
- Fixed typing issues, validation, and import statements.
- Improved method performance.
- Enhanced agent task handling, event emissions, and memory management.
- Fixed CLI issues, conditional tasks, cloning behavior, and tool outputs.
**Documentation & Guides**
- Improved documentation structure, theme, and organization.
- Added guides for Local NVIDIA NIM with WSL2, W&B Weave, and Arize Phoenix.
- Updated tool configuration examples, prompts, and observability docs.
- Guide on using singular agents within Flows.
</Update>
<Update label="2024-03-17" description="v0.108.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01080.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.108.0">View on GitHub</a>
</div>
**New Features & Enhancements**
- Converted tabs to spaces in `crew.py` template
- Enhanced LLM Streaming Response Handling and Event System
- Included `model_name`
- Enhanced Event Listener with rich visualization and improved logging
- Added fingerprints
**Bug Fixes**
- Fixed Mistral issues
- Fixed a bug in documentation
- Fixed type check error in fingerprint property
**Documentation Updates**
- Improved tool documentation
- Updated installation guide for the `uv` tool package
- Added instructions for upgrading crewAI with the `uv` tool
- Added documentation for `ApifyActorsTool`
</Update>
<Update label="2024-03-10" description="v0.105.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01050.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.105.0">View on GitHub</a>
</div>
**Core Improvements & Fixes**
- Fixed issues with missing template variables and user memory configuration
- Improved async flow support and addressed agent response formatting
- Enhanced memory reset functionality and fixed CLI memory commands
- Fixed type issues, tool calling properties, and telemetry decoupling
**New Features & Enhancements**
- Added Flow state export and improved state utilities
- Enhanced agent knowledge setup with optional crew embedder
- Introduced event emitter for better observability and LLM call tracking
- Added support for Python 3.10 and ChatOllama from langchain_ollama
- Integrated context window size support for the o3-mini model
- Added support for multiple router calls
**Documentation & Guides**
- Improved documentation layout and hierarchical structure
- Added QdrantVectorSearchTool guide and clarified event listener usage
- Fixed typos in prompts and updated Amazon Bedrock model listings
</Update>
<Update label="2024-02-12" description="v0.102.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01020.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.102.0">View on GitHub</a>
</div>
**Core Improvements & Fixes**
- Enhanced LLM Support: Improved structured LLM output, parameter handling, and formatting for Anthropic models
- Crew & Agent Stability: Fixed issues with cloning agents/crews using knowledge sources, multiple task outputs in conditional tasks, and ignored Crew task callbacks
- Memory & Storage Fixes: Fixed short-term memory handling with Bedrock, ensured correct embedder initialization, and added a reset memories function in the crew class
- Training & Execution Reliability: Fixed broken training and interpolation issues with dict and list input types
**New Features & Enhancements**
- Advanced Knowledge Management: Improved naming conventions and enhanced embedding configuration with custom embedder support
- Expanded Logging & Observability: Added JSON format support for logging and integrated MLflow tracing documentation
- Data Handling Improvements: Updated excel_knowledge_source.py to process multi-tab files
- General Performance & Codebase Clean-Up: Streamlined enterprise code alignment and resolved linting issues
- Adding new tool: `QdrantVectorSearchTool`
**Documentation & Guides**
- Updated AI & Memory Docs: Improved Bedrock, Google AI, and long-term memory documentation
- Task & Workflow Clarity: Added "Human Input" row to Task Attributes, Langfuse guide, and FileWriterTool documentation
- Fixed Various Typos & Formatting Issues
</Update>
<Update label="2024-01-28" description="v0.100.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01000.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.100.0">View on GitHub</a>
</div>
**Features**
- Add Composio docs
- Add SageMaker as a LLM provider
**Fixes**
- Overall LLM connection issues
- Using safe accessors on training
- Add version check to crew_chat.py
**Documentation**
- New docs for crewai chat
- Improve formatting and clarity in CLI and Composio Tool docs
</Update>
<Update label="2024-01-20" description="v0.98.0">
## Release Highlights
<Frame>
<img src="/images/releases/v0980.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.98.0">View on GitHub</a>
</div>
**Features**
- Conversation crew v1
- Add unique ID to flow states
- Add @persist decorator with FlowPersistence interface
**Integrations**
- Add SambaNova integration
- Add NVIDIA NIM provider in cli
- Introducing VoyageAI
**Fixes**
- Fix API Key Behavior and Entity Handling in Mem0 Integration
- Fixed core invoke loop logic and relevant tests
- Make tool inputs actual objects and not strings
- Add important missing parts to creating tools
- Drop litellm version to prevent windows issue
- Before kickoff if inputs are none
- Fixed typos, nested pydantic model issue, and docling issues
</Update>
<Update label="2024-01-04" description="v0.95.0">
## Release Highlights
<Frame>
<img src="/images/releases/v0950.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.95.0">View on GitHub</a>
</div>
**New Features**
- Adding Multimodal Abilities to Crew
- Programmatic Guardrails
- HITL multiple rounds
- Gemini 2.0 Support
- CrewAI Flows Improvements
- Add Workflow Permissions
- Add support for langfuse with litellm
- Portkey Integration with CrewAI
- Add interpolate_only method and improve error handling
- Docling Support
- Weaviate Support
**Fixes**
- output_file not respecting system path
- disk I/O error when resetting short-term memory
- CrewJSONEncoder now accepts enums
- Python max version
- Interpolation for output_file in Task
- Handle coworker role name case/whitespace properly
- Add tiktoken as explicit dependency and document Rust requirement
- Include agent knowledge in planning process
- Change storage initialization to None for KnowledgeStorage
- Fix optional storage checks
- include event emitter in flows
- Docstring, Error Handling, and Type Hints Improvements
- Suppressed userWarnings from litellm pydantic issues
</Update>
<Update label="2024-12-05" description="v0.86.0">
## Release Highlights
<Frame>
<img src="/images/releases/v0860.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.86.0">View on GitHub</a>
</div>
**Changes**
- Remove all references to pipeline and pipeline router
- Add Nvidia NIM as provider in Custom LLM
- Add knowledge demo + improve knowledge docs
- Add HITL multiple rounds of followup
- New docs about yaml crew with decorators
- Simplify template crew
</Update>
<Update label="2024-12-04" description="v0.85.0">
## Release Highlights
<Frame>
<img src="/images/releases/v0850.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.85.0">View on GitHub</a>
</div>
**Features**
- Added knowledge to agent level
- Feat/remove langchain
- Improve typed task outputs
- Log in to Tool Repository on crewai login
**Fixes**
- Fixes issues with result as answer not properly exiting LLM loop
- Fix missing key name when running with ollama provider
- Fix spelling issue found
**Documentation**
- Update readme for running mypy
- Add knowledge to mint.json
- Update Github actions
- Update Agents docs to include two approaches for creating an agent
- Improvements to LLM Configuration and Usage
</Update>
<Update label="2024-11-25" description="v0.83.0">
**New Features**
- New before_kickoff and after_kickoff crew callbacks
- Support to pre-seed agents with Knowledge
- Add support for retrieving user preferences and memories using Mem0
**Fixes**
- Fix Async Execution
- Upgrade chroma and adjust embedder function generator
- Update CLI Watson supported models + docs
- Reduce level for Bandit
- Fixing all tests
**Documentation**
- Update Docs
</Update>
<Update label="2024-11-13" description="v0.80.0">
**Fixes**
- Fixing Tokens callback replacement bug
- Fixing Step callback issue
- Add cached prompt tokens info on usage metrics
- Fix crew_train_success test
</Update>

View File

@@ -1,67 +0,0 @@
---
title: Training
description: Learn how to train your CrewAI agents by giving them feedback early on and get consistent results.
icon: dumbbell
---
## Overview
The training feature in CrewAI allows you to train your AI agents using the command-line interface (CLI).
By running the command `crewai train -n <n_iterations>`, you can specify the number of iterations for the training process.
During training, CrewAI utilizes techniques to optimize the performance of your agents along with human feedback.
This helps the agents improve their understanding, decision-making, and problem-solving abilities.
### Training Your Crew Using the CLI
To use the training feature, follow these steps:
1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following command:
```shell
crewai train -n <n_iterations> <filename> (optional)
```
<Tip>
Replace `<n_iterations>` with the desired number of training iterations and `<filename>` with the appropriate filename ending with `.pkl`.
</Tip>
### Training Your Crew Programmatically
To train your crew programmatically, use the following steps:
1. Define the number of iterations for training.
2. Specify the input parameters for the training process.
3. Execute the training command within a try-except block to handle potential errors.
```python Code
n_iterations = 2
inputs = {"topic": "CrewAI Training"}
filename = "your_model.pkl"
try:
YourCrewName_Crew().crew().train(
n_iterations=n_iterations,
inputs=inputs,
filename=filename
)
except Exception as e:
raise Exception(f"An error occurred while training the crew: {e}")
```
### Key Points to Note
- **Positive Integer Requirement:** Ensure that the number of iterations (`n_iterations`) is a positive integer. The code will raise a `ValueError` if this condition is not met.
- **Filename Requirement:** Ensure that the filename ends with `.pkl`. The code will raise a `ValueError` if this condition is not met.
- **Error Handling:** The code handles subprocess errors and unexpected exceptions, providing error messages to the user; the sketch below shows how the validation errors surface.
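A minimal sketch of how these validations surface, reusing the hypothetical `YourCrewName_Crew` class and `inputs` from the example above:
```python Code
# Both calls below are expected to raise ValueError per the notes above
try:
    YourCrewName_Crew().crew().train(n_iterations=0, inputs=inputs, filename="your_model.pkl")
except ValueError as e:
    print(f"Invalid iteration count: {e}")

try:
    YourCrewName_Crew().crew().train(n_iterations=2, inputs=inputs, filename="your_model.txt")
except ValueError as e:
    print(f"Invalid filename: {e}")
```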
It is important to note that the training process may take some time, depending on the complexity of your agents, and will also require your feedback on each iteration.
Once the training is complete, your agents will be equipped with enhanced capabilities and knowledge, ready to tackle complex tasks and provide more consistent and valuable insights.
Remember to regularly update and retrain your agents to ensure they stay up-to-date with the latest information and advancements in the field.
Happy training with CrewAI! 🚀

File diff suppressed because it is too large.


@@ -0,0 +1,8 @@
---
title: "GET /inputs"
description: "Get required inputs for your crew"
openapi: "/enterprise-api.en.yaml GET /inputs"
mode: "wide"
---


@@ -1,28 +1,29 @@
---
title: "Introduction"
description: "Complete reference for the CrewAI Enterprise REST API"
description: "Complete reference for the CrewAI AMP REST API"
icon: "code"
mode: "wide"
---
# CrewAI Enterprise API
# CrewAI AMP API
Welcome to the CrewAI Enterprise API reference. This API allows you to programmatically interact with your deployed crews, enabling integration with your applications, workflows, and services.
Welcome to the CrewAI AMP API reference. This API allows you to programmatically interact with your deployed crews, enabling integration with your applications, workflows, and services.
## Quick Start
<Steps>
<Step title="Get Your API Credentials">
Navigate to your crew's detail page in the CrewAI Enterprise dashboard and copy your Bearer Token from the Status tab.
Navigate to your crew's detail page in the CrewAI AMP dashboard and copy your Bearer Token from the Status tab.
</Step>
<Step title="Discover Required Inputs">
Use the `GET /inputs` endpoint to see what parameters your crew expects.
</Step>
<Step title="Start a Crew Execution">
Call `POST /kickoff` with your inputs to start the crew execution and receive a `kickoff_id`.
</Step>
<Step title="Monitor Progress">
Use `GET /status/{kickoff_id}` to check execution status and retrieve results.
</Step>
@@ -45,7 +46,7 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
| **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |
<Tip>
You can find both token types in the Status tab of your crew's detail page in the CrewAI Enterprise dashboard.
You can find both token types in the Status tab of your crew's detail page in the CrewAI AMP dashboard.
</Tip>
## Base URL
@@ -61,7 +62,7 @@ Replace `your-crew-name` with your actual crew's URL from the dashboard.
## Typical Workflow
1. **Discovery**: Call `GET /inputs` to understand what your crew needs
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /status/{kickoff_id}` until completion
4. **Results**: Extract the final output from the completed response (see the sketch below)
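A minimal end-to-end sketch of this workflow in Python. The crew URL and token are hypothetical, and any response field names beyond `kickoff_id` are assumptions you should adapt to your deployment:
```python
import time

import requests

BASE_URL = "https://your-crew-name.crewai.com"  # hypothetical: your crew URL from the dashboard
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}  # hypothetical token

# 1. Discovery: what inputs does the crew expect?
required_inputs = requests.get(f"{BASE_URL}/inputs", headers=HEADERS).json()
print(required_inputs)

# 2. Execution: start the crew and capture the kickoff_id
kickoff = requests.post(f"{BASE_URL}/kickoff", headers=HEADERS, json={"inputs": {"topic": "AI"}})
kickoff_id = kickoff.json()["kickoff_id"]

# 3. Monitoring: poll the status endpoint until the run finishes
while True:
    status = requests.get(f"{BASE_URL}/status/{kickoff_id}", headers=HEADERS).json()
    if status.get("state") not in ("RUNNING", "PENDING"):  # assumed state values
        break
    time.sleep(5)

# 4. Results: the completed response carries the final output
print(status)
```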
@@ -81,12 +82,12 @@ The API uses standard HTTP status codes:
## Interactive Testing
<Info>
**Why no "Send" button?** Since each CrewAI Enterprise user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons.
**Why no "Send" button?** Since each CrewAI AMP user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons.
</Info>
Each endpoint page shows you:
- ✅ **Exact request format** with all parameters
- ✅ **Response examples** for success and error cases
- ✅ **Code samples** in multiple languages (cURL, Python, JavaScript, etc.)
- ✅ **Authentication examples** with proper Bearer token format
@@ -103,7 +104,7 @@ Each endpoint page shows you:
**Example workflow:**
1. **Copy this cURL example** from any endpoint page
2. **Replace `your-actual-crew-name.crewai.com`** with your real crew URL
3. **Replace the Bearer token** with your real token from the dashboard
4. **Run the request** in your terminal or API client


@@ -0,0 +1,8 @@
---
title: "POST /kickoff"
description: "Start a crew execution"
openapi: "/enterprise-api.en.yaml POST /kickoff"
mode: "wide"
---


@@ -0,0 +1,6 @@
---
title: "POST /resume"
description: "Resume crew execution with human feedback"
openapi: "/enterprise-api.en.yaml POST /resume"
mode: "wide"
---


@@ -0,0 +1,8 @@
---
title: "GET /status/{kickoff_id}"
description: "Get execution status"
openapi: "/enterprise-api.en.yaml GET /status/{kickoff_id}"
mode: "wide"
---

docs/en/changelog.mdx (new file, 1,917 lines)

File diff suppressed because it is too large.


@@ -2,6 +2,7 @@
title: Agents
description: Detailed guide on creating and managing agents within the CrewAI framework.
icon: robot
mode: "wide"
---
## Overview of an Agent
@@ -19,7 +20,7 @@ Think of an agent as a specialized team member with specific skills, expertise,
</Tip>
<Note type="info" title="Enterprise Enhancement: Visual Agent Builder">
CrewAI Enterprise includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time.
CrewAI AMP includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time.
![Visual Agent Builder Screenshot](/images/enterprise/crew-studio-interface.png)
@@ -71,7 +72,7 @@ There are two ways to create agents in CrewAI: using **YAML configuration (recom
Using YAML configuration provides a cleaner, more maintainable way to define agents. We strongly recommend using this approach in your CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/installation) section, navigate to the `src/latest_ai_development/config/agents.yaml` file and modify the template to match your requirements.
After creating your CrewAI project as outlined in the [Installation](/en/installation) section, navigate to the `src/latest_ai_development/config/agents.yaml` file and modify the template to match your requirements.
<Note>
Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew:
@@ -312,7 +313,7 @@ multimodal_agent = Agent(
<Note>
When using custom templates, ensure that both `system_template` and `prompt_template` are defined. The `response_template` is optional but recommended for consistent output formatting.
</Note>
<Note>
When using custom templates, you can use variables like `{role}`, `{goal}`, and `{backstory}` in your templates. These will be automatically populated during execution.
@@ -425,7 +426,7 @@ strict_agent = Agent(
```python Code
# Perfect for document processing
document_processor = Agent(
role="Document Analyst",
role="Document Analyst",
goal="Extract insights from large research papers",
backstory="Expert at analyzing extensive documentation",
respect_context_window=True, # Handle large documents gracefully
@@ -526,6 +527,103 @@ agent = Agent(
The context window management feature works automatically in the background. You don't need to call any special functions - just set `respect_context_window` to your preferred behavior and CrewAI handles the rest!
</Note>
## Direct Agent Interaction with `kickoff()`
Agents can be used directly without going through a task or crew workflow using the `kickoff()` method. This provides a simpler way to interact with an agent when you don't need the full crew orchestration capabilities.
### How `kickoff()` Works
The `kickoff()` method allows you to send messages directly to an agent and get a response, similar to how you would interact with an LLM but with all the agent's capabilities (tools, reasoning, etc.).
```python Code
from crewai import Agent
from crewai_tools import SerperDevTool
# Create an agent
researcher = Agent(
role="AI Technology Researcher",
goal="Research the latest AI developments",
tools=[SerperDevTool()],
verbose=True
)
# Use kickoff() to interact directly with the agent
result = researcher.kickoff("What are the latest developments in language models?")
# Access the raw response
print(result.raw)
```
### Parameters and Return Values
| Parameter | Type | Description |
| :---------------- | :---------------------------------- | :------------------------------------------------------------------------ |
| `messages` | `Union[str, List[Dict[str, str]]]` | Either a string query or a list of message dictionaries with role/content |
| `response_format` | `Optional[Type[Any]]` | Optional Pydantic model for structured output |
The method returns a `LiteAgentOutput` object with the following properties (see the sketch after this list):
- `raw`: String containing the raw output text
- `pydantic`: Parsed Pydantic model (if a `response_format` was provided)
- `agent_role`: Role of the agent that produced the output
- `usage_metrics`: Token usage metrics for the execution
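For example, reusing the `researcher` agent defined above (a sketch; the exact contents of `usage_metrics` depend on your run):
```python Code
result = researcher.kickoff("Give me three recent developments in language models")

print(result.raw)            # raw output text
print(result.agent_role)     # "AI Technology Researcher"
print(result.usage_metrics)  # token usage for this execution
print(result.pydantic)       # None here, since no response_format was passed
```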
### Structured Output
You can get structured output by providing a Pydantic model as the `response_format`:
```python Code
from pydantic import BaseModel
from typing import List
class ResearchFindings(BaseModel):
main_points: List[str]
key_technologies: List[str]
future_predictions: str
# Get structured output
result = researcher.kickoff(
"Summarize the latest developments in AI for 2025",
response_format=ResearchFindings
)
# Access structured data
print(result.pydantic.main_points)
print(result.pydantic.future_predictions)
```
### Multiple Messages
You can also provide a conversation history as a list of message dictionaries:
```python Code
messages = [
{"role": "user", "content": "I need information about large language models"},
{"role": "assistant", "content": "I'd be happy to help with that! What specifically would you like to know?"},
{"role": "user", "content": "What are the latest developments in 2025?"}
]
result = researcher.kickoff(messages)
```
### Async Support
An asynchronous version is available via `kickoff_async()` with the same parameters:
```python Code
import asyncio
async def main():
result = await researcher.kickoff_async("What are the latest developments in AI?")
print(result.raw)
asyncio.run(main())
```
<Note>
The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler execution flow while preserving all of the agent's configuration (role, goal, backstory, tools, etc.).
</Note>
## Important Considerations and Best Practices
### Security and Code Execution


@@ -2,8 +2,11 @@
title: CLI
description: Learn how to use the CrewAI CLI to interact with CrewAI.
icon: terminal
mode: "wide"
---
<Warning>Since release 0.140.0, CrewAI AMP has been migrating its login provider, and the CLI authentication flow was updated accordingly. Users who log in with Google, or who created their account after July 3rd, 2025, will be unable to log in with older versions of the `crewai` library.</Warning>
## Overview
The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews & flows.
@@ -86,7 +89,7 @@ crewai replay [OPTIONS]
- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks
Example:
```shell Terminal
crewai replay -t task_123456
```
@@ -132,7 +135,7 @@ crewai test [OPTIONS]
- `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")
Example:
```shell Terminal
crewai test -n 5 -m gpt-3.5-turbo
```
@@ -149,7 +152,7 @@ Starting from version 0.103.0, the `crewai run` command can be used to run both
</Note>
<Note>
Make sure to run these commands from the directory where your CrewAI project is set up.
Some commands may require additional configuration or setup within your project structure.
</Note>
@@ -183,13 +186,10 @@ def crew(self) -> Crew:
### 10. Deploy
Deploy the crew or flow to [CrewAI Enterprise](https://app.crewai.com).
Deploy the crew or flow to [CrewAI AMP](https://app.crewai.com).
- **Authentication**: You need to be authenticated to deploy to CrewAI Enterprise.
```shell Terminal
crewai signup
```
If you already have an account, you can login with:
- **Authentication**: You need to be authenticated to deploy to CrewAI AMP.
You can login or create an account with:
```shell Terminal
crewai login
```
@@ -203,7 +203,7 @@ Deploy the crew or flow to [CrewAI Enterprise](https://app.crewai.com).
### 11. Organization Management
Manage your CrewAI Enterprise organizations.
Manage your CrewAI AMP organizations.
```shell Terminal
crewai org [COMMAND] [OPTIONS]
@@ -227,17 +227,17 @@ crewai org switch <organization_id>
```
<Note>
You must be authenticated to CrewAI Enterprise to use these organization management commands.
You must be authenticated to CrewAI AMP to use these organization management commands.
</Note>
- **Create a deployment** (continued):
- Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI Enterprise.
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI AMP.
```shell Terminal
crewai deploy push
```
- Initiates the deployment process on the CrewAI Enterprise platform.
- Initiates the deployment process on the CrewAI AMP platform.
- Upon successful initiation, it will output the "Deployment created successfully!" message along with the Deployment Name and a unique Deployment ID (UUID).
- **Deployment Status**: You can check the status of your deployment with:
@@ -262,7 +262,7 @@ You must be authenticated to CrewAI Enterprise to use these organization managem
```shell Terminal
crewai deploy remove
```
This deletes the deployment from the CrewAI Enterprise platform.
This deletes the deployment from the CrewAI AMP platform.
- **Help Command**: You can get help with the CLI with:
```shell Terminal
@@ -270,28 +270,44 @@ You must be authenticated to CrewAI Enterprise to use these organization managem
```
This shows the help message for the CrewAI Deploy CLI.
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI Enterprise](http://app.crewai.com) using the CLI.
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI AMP](http://app.crewai.com) using the CLI.
<iframe
width="100%"
height="400"
className="w-full aspect-video rounded-xl"
src="https://www.youtube.com/embed/3EqSV-CYDZA"
title="CrewAI Deployment Guide"
frameborder="0"
style={{ borderRadius: '10px' }}
frameBorder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
allowFullScreen
></iframe>
### 11. API Keys
### 11. Login
When running the ```crewai create crew``` command, the CLI will first show you the top 5 most common LLM providers and ask you to select one.
Authenticate with CrewAI AMP using a secure device code flow (no email entry required).
Once you've selected an LLM provider, you will be prompted for API keys.
```shell Terminal
crewai login
```
#### Initial API key providers
What happens:
- A verification URL and short code are displayed in your terminal
- Your browser opens to the verification URL
- Enter/confirm the code to complete authentication
The CLI will initially prompt for API keys for the following services:
Notes:
- The OAuth2 provider and domain are configured via `crewai config` (defaults use `login.crewai.com`)
- After successful login, the CLI also attempts to authenticate to the Tool Repository automatically
- If you reset your configuration, run `crewai login` again to re-authenticate
### 12. API Keys
When running the ```crewai create crew``` command, the CLI will show you a list of available LLM providers to choose from, followed by model selection for your chosen provider.
Once you've selected an LLM provider and model, you will be prompted for API keys.
#### Available LLM Providers
Here's a list of the most popular LLM providers suggested by the CLI:
* OpenAI
* Groq
@@ -299,11 +315,11 @@ The CLI will initially prompt for API keys for the following services:
* Google Gemini
* SambaNova
When you select a provider, the CLI will prompt you to enter your API key.
When you select a provider, the CLI will then show you available models for that provider and prompt you to enter your API key.
#### Other Options
If you select option 6, you will be able to select from a list of LiteLLM supported providers.
If you select "other", you will be able to select from a list of LiteLLM supported providers.
When you select a provider, the CLI will prompt you to enter the Key name and the API key.
@@ -311,5 +327,85 @@ See the following link for each provider's key name:
* [LiteLLM Providers](https://docs.litellm.ai/docs/providers)
### 13. Configuration Management
Manage CLI configuration settings for CrewAI.
```shell Terminal
crewai config [COMMAND] [OPTIONS]
```
#### Commands:
- `list`: Display all CLI configuration parameters
```shell Terminal
crewai config list
```
- `set`: Set a CLI configuration parameter
```shell Terminal
crewai config set <key> <value>
```
- `reset`: Reset all CLI configuration parameters to default values
```shell Terminal
crewai config reset
```
#### Available Configuration Parameters
- `enterprise_base_url`: Base URL of the CrewAI AMP instance
- `oauth2_provider`: OAuth2 provider used for authentication (e.g., workos, okta, auth0)
- `oauth2_audience`: OAuth2 audience value, typically used to identify the target API or resource
- `oauth2_client_id`: OAuth2 client ID issued by the provider, used during authentication requests
- `oauth2_domain`: OAuth2 provider's domain (e.g., your-org.auth0.com) used for issuing tokens
#### Examples
Display current configuration:
```shell Terminal
crewai config list
```
Example output:
| Setting | Value | Description |
| :------------------ | :----------------------- | :---------------------------------------------------------- |
| enterprise_base_url | https://app.crewai.com | Base URL of the CrewAI AMP instance |
| org_name | Not set | Name of the currently active organization |
| org_uuid | Not set | UUID of the currently active organization |
| oauth2_provider | workos | OAuth2 provider (e.g., workos, okta, auth0) |
| oauth2_audience | client_01YYY | Audience identifying the target API/resource |
| oauth2_client_id | client_01XXX | OAuth2 client ID issued by the provider |
| oauth2_domain | login.crewai.com | Provider domain (e.g., your-org.auth0.com) |
Set the enterprise base URL:
```shell Terminal
crewai config set enterprise_base_url https://my-enterprise.crewai.com
```
Set OAuth2 provider:
```shell Terminal
crewai config set oauth2_provider auth0
```
Set OAuth2 domain:
```shell Terminal
crewai config set oauth2_domain my-company.auth0.com
```
Reset all configuration to defaults:
```shell Terminal
crewai config reset
```
<Tip>
After resetting configuration, re-run `crewai login` to authenticate again.
</Tip>
<Tip>
The CrewAI CLI handles authentication to the Tool Repository automatically when adding packages to your project. Just prefix any `uv` command with `crewai` to use it, e.g. `crewai uv add requests`. For more information, see the [Tool Repository](https://docs.crewai.com/enterprise/features/tool-repository) docs.
</Tip>
<Note>
Configuration settings are stored in `~/.config/crewai/settings.json`. Some settings like organization name and UUID are read-only and managed through authentication and organization commands. Tool repository related settings are hidden and cannot be set directly by users.
</Note>


@@ -2,6 +2,7 @@
title: Collaboration
description: How to enable agents to work together, delegate tasks, and communicate effectively within CrewAI teams.
icon: screen-users
mode: "wide"
---
## Overview


@@ -2,6 +2,7 @@
title: Crews
description: Understanding and utilizing crews in the crewAI framework with comprehensive attributes and functionalities.
icon: people-group
mode: "wide"
---
## Overview
@@ -20,8 +21,7 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** _(optional)_ | `config` | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Memory Config** _(optional)_ | `memory_config` | Configuration for the memory provider to be used by the crew. |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
| **Step Callback** _(optional)_ | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
@@ -32,6 +32,7 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | Knowledge sources available at the crew level, accessible to all the agents. |
<Tip>
**Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
@@ -45,7 +46,7 @@ There are two ways to create crews in CrewAI: using **YAML configuration (recomm
Using YAML configuration provides a cleaner, more maintainable way to define crews and is consistent with how agents and tasks are defined in CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/installation) section, you can define your crew in a class that inherits from `CrewBase` and uses decorators to define agents, tasks, and the crew itself.
After creating your CrewAI project as outlined in the [Installation](/en/installation) section, you can define your crew in a class that inherits from `CrewBase` and uses decorators to define agents, tasks, and the crew itself.
#### Example Crew Class with Decorators
@@ -66,8 +67,8 @@ class YourCrewName:
# To see an example agent and task defined in YAML, checkout the following:
# - Task: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
# - Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml'
@before_kickoff
def prepare_inputs(self, inputs):
@@ -111,7 +112,7 @@ class YourCrewName:
def crew(self) -> Crew:
return Crew(
agents=self.agents, # Automatically collected by the @agent decorator
tasks=self.tasks, # Automatically collected by the @task decorator.
process=Process.sequential,
verbose=True,
)


@@ -2,6 +2,7 @@
title: 'Event Listeners'
description: 'Tap into CrewAI events to build custom integrations and monitoring'
icon: spinner
mode: "wide"
---
## Overview
@@ -19,7 +20,7 @@ CrewAI uses an event bus architecture to emit events throughout the execution li
When specific actions occur in CrewAI (like a Crew starting execution, an Agent completing a task, or a tool being used), the system emits corresponding events. You can register handlers for these events to execute custom code when they occur.
<Note type="info" title="Enterprise Enhancement: Prompt Tracing">
CrewAI Enterprise provides a built-in Prompt Tracing feature that leverages the event system to track, store, and visualize all prompts, completions, and associated metadata. This provides powerful debugging capabilities and transparency into your agent operations.
CrewAI AMP provides a built-in Prompt Tracing feature that leverages the event system to track, store, and visualize all prompts, completions, and associated metadata. This provides powerful debugging capabilities and transparency into your agent operations.
![Prompt Tracing Dashboard](/images/enterprise/traces-overview.png)
@@ -44,12 +45,12 @@ To create a custom event listener, you need to:
Here's a simple example of a custom event listener class:
```python
from crewai.utilities.events import (
from crewai.events import (
CrewKickoffStartedEvent,
CrewKickoffCompletedEvent,
AgentExecutionCompletedEvent,
)
from crewai.utilities.events.base_event_listener import BaseEventListener
from crewai.events import BaseEventListener
class MyCustomListener(BaseEventListener):
def __init__(self):
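        super().__init__()

    def setup_listeners(self, crewai_event_bus):
        # Hedged sketch completing this truncated example, using the events
        # imported above; field names follow the notes later on this page
        @crewai_event_bus.on(CrewKickoffStartedEvent)
        def on_crew_started(source, event):
            print(f"Crew '{event.crew_name}' started")

        @crewai_event_bus.on(CrewKickoffCompletedEvent)
        def on_crew_completed(source, event):
            print(f"Crew '{event.crew_name}' finished with output: {event.output}")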
@@ -146,7 +147,7 @@ my_project/
```python
# my_custom_listener.py
from crewai.utilities.events.base_event_listener import BaseEventListener
from crewai.events import BaseEventListener
# ... import events ...
class MyCustomListener(BaseEventListener):
@@ -177,14 +178,7 @@ class MyCustomCrew:
# Your crew implementation...
```
This is exactly how CrewAI's built-in `agentops_listener` is registered. In the CrewAI codebase, you'll find:
```python
# src/crewai/utilities/events/third_party/__init__.py
from .agentops_listener import agentops_listener
```
This ensures the `agentops_listener` is loaded when the `crewai.utilities.events` package is imported.
This is how third-party event listeners are registered in the CrewAI codebase.
## Available Event Types
@@ -233,6 +227,11 @@ CrewAI provides a wide range of events that you can listen for:
- **KnowledgeQueryFailedEvent**: Emitted when a knowledge query fails
- **KnowledgeSearchQueryFailedEvent**: Emitted when a knowledge search query fails
### LLM Guardrail Events
- **LLMGuardrailStartedEvent**: Emitted when a guardrail validation starts. Contains details about the guardrail being applied and retry count.
- **LLMGuardrailCompletedEvent**: Emitted when a guardrail validation completes. Contains details about validation success/failure, results, and error messages if any.
### Flow Events
- **FlowCreatedEvent**: Emitted when a Flow is created
@@ -250,6 +249,17 @@ CrewAI provides a wide range of events that you can listen for:
- **LLMCallFailedEvent**: Emitted when an LLM call fails
- **LLMStreamChunkEvent**: Emitted for each chunk received during streaming LLM responses
### Memory Events
- **MemoryQueryStartedEvent**: Emitted when a memory query is started. Contains the query, limit, and optional score threshold.
- **MemoryQueryCompletedEvent**: Emitted when a memory query is completed successfully. Contains the query, results, limit, score threshold, and query execution time.
- **MemoryQueryFailedEvent**: Emitted when a memory query fails. Contains the query, limit, score threshold, and error message.
- **MemorySaveStartedEvent**: Emitted when a memory save operation is started. Contains the value to be saved, metadata, and optional agent role.
- **MemorySaveCompletedEvent**: Emitted when a memory save operation is completed successfully. Contains the saved value, metadata, agent role, and save execution time.
- **MemorySaveFailedEvent**: Emitted when a memory save operation fails. Contains the value, metadata, agent role, and error message.
- **MemoryRetrievalStartedEvent**: Emitted when memory retrieval for a task prompt starts. Contains the optional task ID.
- **MemoryRetrievalCompletedEvent**: Emitted when memory retrieval for a task prompt completes successfully. Contains the task ID, memory content, and retrieval execution time.
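These memory events plug into the same listener pattern as the rest of the event system. A minimal sketch, assuming the memory events are importable from `crewai.events` like the others and that attribute names match the descriptions above:
```python
from crewai.events import BaseEventListener, MemoryQueryCompletedEvent

class MemoryMonitorListener(BaseEventListener):
    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(MemoryQueryCompletedEvent)
        def on_memory_query_completed(source, event):
            # query, results, and execution time per the descriptions above
            print(f"Memory query {event.query!r} took {event.query_execution_time}s")
```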
## Event Handler Structure
Each event handler receives two parameters:
@@ -264,84 +274,13 @@ The structure of the event object depends on the event type, but all events inhe
Additional fields vary by event type. For example, `CrewKickoffCompletedEvent` includes `crew_name` and `output` fields.
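As a quick illustration of that two-parameter shape (a sketch reusing the event bus import shown elsewhere on this page):
```python
from crewai.events import crewai_event_bus, CrewKickoffCompletedEvent

@crewai_event_bus.on(CrewKickoffCompletedEvent)
def on_kickoff_completed(source, event):
    # source is the object that emitted the event; event carries the
    # type-specific fields, e.g. crew_name and output for this event type
    print(event.crew_name, event.output)
```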
## Real-World Example: Integration with AgentOps
CrewAI includes an example of a third-party integration with [AgentOps](https://github.com/AgentOps-AI/agentops), a monitoring and observability platform for AI agents. Here's how it's implemented:
```python
from typing import Optional
from crewai.utilities.events import (
CrewKickoffCompletedEvent,
ToolUsageErrorEvent,
ToolUsageStartedEvent,
)
from crewai.utilities.events.base_event_listener import BaseEventListener
from crewai.utilities.events.crew_events import CrewKickoffStartedEvent
from crewai.utilities.events.task_events import TaskEvaluationEvent
try:
import agentops
AGENTOPS_INSTALLED = True
except ImportError:
AGENTOPS_INSTALLED = False
class AgentOpsListener(BaseEventListener):
tool_event: Optional["agentops.ToolEvent"] = None
session: Optional["agentops.Session"] = None
def __init__(self):
super().__init__()
def setup_listeners(self, crewai_event_bus):
if not AGENTOPS_INSTALLED:
return
@crewai_event_bus.on(CrewKickoffStartedEvent)
def on_crew_kickoff_started(source, event: CrewKickoffStartedEvent):
self.session = agentops.init()
for agent in source.agents:
if self.session:
self.session.create_agent(
name=agent.role,
agent_id=str(agent.id),
)
@crewai_event_bus.on(CrewKickoffCompletedEvent)
def on_crew_kickoff_completed(source, event: CrewKickoffCompletedEvent):
if self.session:
self.session.end_session(
end_state="Success",
end_state_reason="Finished Execution",
)
@crewai_event_bus.on(ToolUsageStartedEvent)
def on_tool_usage_started(source, event: ToolUsageStartedEvent):
self.tool_event = agentops.ToolEvent(name=event.tool_name)
if self.session:
self.session.record(self.tool_event)
@crewai_event_bus.on(ToolUsageErrorEvent)
def on_tool_usage_error(source, event: ToolUsageErrorEvent):
agentops.ErrorEvent(exception=event.error, trigger_event=self.tool_event)
```
This listener initializes an AgentOps session when a Crew starts, registers agents with AgentOps, tracks tool usage, and ends the session when the Crew completes.
The AgentOps listener is registered in CrewAI's event system through the import in `src/crewai/utilities/events/third_party/__init__.py`:
```python
from .agentops_listener import agentops_listener
```
This ensures the `agentops_listener` is loaded when the `crewai.utilities.events` package is imported.
## Advanced Usage: Scoped Handlers
For temporary event handling (useful for testing or specific operations), you can use the `scoped_handlers` context manager:
```python
from crewai.utilities.events import crewai_event_bus, CrewKickoffStartedEvent
from crewai.events import crewai_event_bus, CrewKickoffStartedEvent
with crewai_event_bus.scoped_handlers():
@crewai_event_bus.on(CrewKickoffStartedEvent)
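    def temp_handler(source, event):
        # Hedged completion of the truncated snippet: this handler is active
        # only inside the scoped_handlers block and is removed on exit
        print(f"Temporary handler saw crew '{event.crew_name}' start")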


@@ -2,6 +2,7 @@
title: Flows
description: Learn how to create and manage AI workflows using CrewAI Flows.
icon: arrow-progress
mode: "wide"
---
## Overview
@@ -97,7 +98,13 @@ The state's unique ID and stored data can be useful for tracking flow executions
### @start()
The `@start()` decorator is used to mark a method as the starting point of a Flow. When a Flow is started, all the methods decorated with `@start()` are executed in parallel. You can have multiple start methods in a Flow, and they will all be executed when the Flow is started.
The `@start()` decorator marks entry points for a Flow. You can:
- Declare multiple unconditional starts: `@start()`
- Gate a start on a prior method or router label: `@start("method_or_label")`
- Provide a callable condition to control when a start should fire
All satisfied `@start()` methods will execute (often in parallel) when the Flow begins or resumes, as in the sketch below.
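A minimal sketch of these variants (the gated form follows the bullets above; the import path assumes the usual flow module layout):
```python Code
from crewai.flow.flow import Flow, start

class PipelineFlow(Flow):
    @start()
    def fetch(self):
        # Unconditional entry point: runs whenever the flow begins
        return "data"

    @start("fetch")
    def process(self):
        # Gated entry point: fires once `fetch` (or a matching router label) completes
        return "processed"
```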
### @listen()
@@ -868,14 +875,13 @@ By exploring these examples, you can gain insights into how to leverage CrewAI F
Also, check out our YouTube video on how to use flows in CrewAI below!
<iframe
width="560"
height="315"
className="w-full aspect-video rounded-xl"
src="https://www.youtube.com/embed/MTb5my6VOT8"
title="YouTube video player"
frameborder="0"
title="CrewAI Flows overview"
frameBorder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerpolicy="strict-origin-when-cross-origin"
allowfullscreen
referrerPolicy="strict-origin-when-cross-origin"
allowFullScreen
></iframe>
## Running Flows


@@ -2,6 +2,7 @@
title: Knowledge
description: What is knowledge in CrewAI and how to use it.
icon: book
mode: "wide"
---
## Overview
@@ -24,6 +25,41 @@ For file-based Knowledge Sources, make sure to place your files in a `knowledge`
Also, use relative paths from the `knowledge` directory when creating the source.
</Tip>
### Vector store (RAG) client configuration
CrewAI exposes a provider-neutral RAG client abstraction for vector stores. The default provider is ChromaDB, and Qdrant is supported as well. You can switch providers using configuration utilities.
Supported today:
- ChromaDB (default)
- Qdrant
```python Code
from crewai.rag.config.utils import set_rag_config, get_rag_client, clear_rag_config
# ChromaDB (default)
from crewai.rag.chromadb.config import ChromaDBConfig
set_rag_config(ChromaDBConfig())
chromadb_client = get_rag_client()
# Qdrant
from crewai.rag.qdrant.config import QdrantConfig
set_rag_config(QdrantConfig())
qdrant_client = get_rag_client()
# Example operations (same API for any provider)
client = qdrant_client # or chromadb_client
client.create_collection(collection_name="docs")
client.add_documents(
collection_name="docs",
documents=[{"id": "1", "content": "CrewAI enables collaborative AI agents."}],
)
results = client.search(collection_name="docs", query="collaborative agents", limit=3)
clear_rag_config() # optional reset
```
This RAG client is separate from Knowledge's built-in storage. Use it when you need direct vector-store control or custom retrieval pipelines.
### Basic String Knowledge Example
```python Code
@@ -681,11 +717,11 @@ CrewAI emits events during the knowledge retrieval process that you can listen f
#### Example: Monitoring Knowledge Retrieval
```python
from crewai.utilities.events import (
from crewai.events import (
KnowledgeRetrievalStartedEvent,
KnowledgeRetrievalCompletedEvent,
BaseEventListener,
)
from crewai.utilities.events.base_event_listener import BaseEventListener
class KnowledgeMonitorListener(BaseEventListener):
def setup_listeners(self, crewai_event_bus):


@@ -2,11 +2,12 @@
title: 'LLMs'
description: 'A comprehensive guide to configuring and using Large Language Models (LLMs) in your CrewAI projects'
icon: 'microchip-ai'
mode: "wide"
---
## Overview
CrewAI integrates with multiple LLM providers through LiteLLM, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
CrewAI integrates with multiple LLM providers through each provider's native SDK, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
## What are LLMs?
@@ -112,44 +113,104 @@ In this section, you'll find detailed examples that help you select, configure,
<AccordionGroup>
<Accordion title="OpenAI">
Set the following environment variables in your `.env` file:
CrewAI provides native integration with OpenAI through the OpenAI Python SDK.
```toml Code
# Required
OPENAI_API_KEY=sk-...
# Optional
OPENAI_API_BASE=<custom-base-url>
OPENAI_ORGANIZATION=<your-org-id>
OPENAI_BASE_URL=<custom-base-url>
```
Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM
llm = LLM(
model="openai/gpt-4", # call model by provider/model_name
temperature=0.8,
max_tokens=150,
model="openai/gpt-4o",
api_key="your-api-key", # Or set OPENAI_API_KEY
temperature=0.7,
max_tokens=4000
)
```
**Advanced Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="openai/gpt-4o",
api_key="your-api-key",
base_url="https://api.openai.com/v1", # Optional custom endpoint
organization="org-...", # Optional organization ID
project="proj_...", # Optional project ID
temperature=0.7,
max_tokens=4000,
max_completion_tokens=4000, # For newer models
top_p=0.9,
frequency_penalty=0.1,
presence_penalty=0.1,
stop=["END"],
seed=42
seed=42, # For reproducible outputs
stream=True, # Enable streaming
timeout=60.0, # Request timeout in seconds
max_retries=3, # Maximum retry attempts
logprobs=True, # Return log probabilities
top_logprobs=5, # Number of most likely tokens
reasoning_effort="medium" # For o1 models: low, medium, high
)
```
OpenAI is one of the leading providers of LLMs with a wide range of models and features.
**Structured Outputs:**
```python Code
from pydantic import BaseModel
from crewai import LLM
class ResponseFormat(BaseModel):
name: str
age: int
summary: str
llm = LLM(
    model="openai/gpt-4o",
    response_format=ResponseFormat,  # responses are parsed into this Pydantic model
)
```
**Supported Environment Variables:**
- `OPENAI_API_KEY`: Your OpenAI API key (required)
- `OPENAI_BASE_URL`: Custom base URL for OpenAI API (optional)
**Features:**
- Native function calling support (except o1 models)
- Structured outputs with JSON schema
- Streaming support for real-time responses
- Token usage tracking
- Stop sequences support (except o1 models)
- Log probabilities for token-level insights
- Reasoning effort control for o1 models
**Supported Models:**
| Model | Context Window | Best For |
|---------------------|------------------|-----------------------------------------------|
| GPT-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
| GPT-4 Turbo | 128,000 tokens | Long-form content, document analysis |
| GPT-4o & GPT-4o-mini | 128,000 tokens | Cost-effective large context processing |
| o3-mini | 200,000 tokens | Fast reasoning, complex reasoning |
| o1-mini | 128,000 tokens | Fast reasoning, complex reasoning |
| o1-preview | 128,000 tokens | Fast reasoning, complex reasoning |
| o1 | 200,000 tokens | Fast reasoning, complex reasoning |
| gpt-4.1 | 1M tokens | Latest model with enhanced capabilities |
| gpt-4.1-mini | 1M tokens | Efficient version with large context |
| gpt-4.1-nano | 1M tokens | Ultra-efficient variant |
| gpt-4o | 128,000 tokens | Optimized for speed and intelligence |
| gpt-4o-mini | 200,000 tokens | Cost-effective with large context |
| gpt-4-turbo | 128,000 tokens | Long-form content, document analysis |
| gpt-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
| o1 | 200,000 tokens | Advanced reasoning, complex problem-solving |
| o1-preview | 128,000 tokens | Preview of reasoning capabilities |
| o1-mini | 128,000 tokens | Efficient reasoning model |
| o3-mini | 200,000 tokens | Lightweight reasoning model |
| o4-mini | 200,000 tokens | Next-gen efficient reasoning |
**Note:** To use OpenAI, install the required dependencies:
```bash
uv add "crewai[openai]"
```
</Accordion>
<Accordion title="Meta-Llama">
@@ -186,69 +247,186 @@ In this section, you'll find detailed examples that help you select, configure,
</Accordion>
<Accordion title="Anthropic">
CrewAI provides native integration with Anthropic through the Anthropic Python SDK.
```toml Code
# Required
ANTHROPIC_API_KEY=sk-ant-...
# Optional
ANTHROPIC_API_BASE=<custom-base-url>
```
Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM
llm = LLM(
model="anthropic/claude-3-sonnet-20240229-v1:0",
temperature=0.7
model="anthropic/claude-3-5-sonnet-20241022",
api_key="your-api-key", # Or set ANTHROPIC_API_KEY
max_tokens=4096 # Required for Anthropic
)
```
**Advanced Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="anthropic/claude-3-5-sonnet-20241022",
api_key="your-api-key",
base_url="https://api.anthropic.com", # Optional custom endpoint
temperature=0.7,
max_tokens=4096, # Required parameter
top_p=0.9,
stop_sequences=["END", "STOP"], # Anthropic uses stop_sequences
stream=True, # Enable streaming
timeout=60.0, # Request timeout in seconds
max_retries=3 # Maximum retry attempts
)
```
**Supported Environment Variables:**
- `ANTHROPIC_API_KEY`: Your Anthropic API key (required)
**Features:**
- Native tool use support for Claude 3+ models
- Streaming support for real-time responses
- Automatic system message handling
- Stop sequences for controlled output
- Token usage tracking
- Multi-turn tool use conversations
**Important Notes:**
- `max_tokens` is a **required** parameter for all Anthropic models
- Claude uses `stop_sequences` instead of `stop`
- System messages are handled separately from conversation messages
- First message must be from the user (automatically handled)
- Messages must alternate between user and assistant (see the sketch below)
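A small sketch of a message list that satisfies these rules, assuming `llm` is the Anthropic instance from Basic Usage above and that `LLM.call()` accepts a message list as elsewhere in these docs:
```python Code
messages = [
    {"role": "user", "content": "Summarize our discussion of stop sequences."},
    {"role": "assistant", "content": "You asked how stop_sequences bound Claude's output."},
    {"role": "user", "content": "Right, give me a one-line recap."},  # alternation ends on a user turn
]
response = llm.call(messages)
print(response)
```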
**Supported Models:**
| Model | Context Window | Best For |
|------------------------------|----------------|-----------------------------------------------|
| claude-3-7-sonnet | 200,000 tokens | Advanced reasoning and agentic tasks |
| claude-3-5-sonnet-20241022 | 200,000 tokens | Latest Sonnet with best performance |
| claude-3-5-haiku | 200,000 tokens | Fast, compact model for quick responses |
| claude-3-opus | 200,000 tokens | Most capable for complex tasks |
| claude-3-sonnet | 200,000 tokens | Balanced intelligence and speed |
| claude-3-haiku | 200,000 tokens | Fastest for simple tasks |
| claude-2.1 | 200,000 tokens | Extended context, reduced hallucinations |
| claude-2 | 100,000 tokens | Versatile model for various tasks |
| claude-instant | 100,000 tokens | Fast, cost-effective for everyday tasks |
**Note:** To use Anthropic, install the required dependencies:
```bash
uv add "crewai[anthropic]"
```
</Accordion>
<Accordion title="Google (Gemini API)">
Set your API key in your `.env` file. If you need a key, or need to find an
existing key, check [AI Studio](https://aistudio.google.com/apikey).
CrewAI provides native integration with Google Gemini through the Google Gen AI Python SDK.
Set your API key in your `.env` file. If you need a key, check [AI Studio](https://aistudio.google.com/apikey).
```toml .env
# https://ai.google.dev/gemini-api/docs/api-key
# Required (one of the following)
GOOGLE_API_KEY=<your-api-key>
GEMINI_API_KEY=<your-api-key>
# Optional - for Vertex AI
GOOGLE_CLOUD_PROJECT=<your-project-id>
GOOGLE_CLOUD_LOCATION=<location> # Defaults to us-central1
GOOGLE_GENAI_USE_VERTEXAI=true # Set to use Vertex AI
```
Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM
llm = LLM(
model="gemini/gemini-2.0-flash",
temperature=0.7,
api_key="your-api-key", # Or set GOOGLE_API_KEY/GEMINI_API_KEY
temperature=0.7
)
```
### Gemini models
**Advanced Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="gemini/gemini-2.5-flash",
api_key="your-api-key",
temperature=0.7,
top_p=0.9,
top_k=40, # Top-k sampling parameter
max_output_tokens=8192,
stop_sequences=["END", "STOP"],
stream=True, # Enable streaming
safety_settings={
"HARM_CATEGORY_HARASSMENT": "BLOCK_NONE",
"HARM_CATEGORY_HATE_SPEECH": "BLOCK_NONE"
}
)
```
**Vertex AI Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="gemini/gemini-1.5-pro",
project="your-gcp-project-id",
location="us-central1" # GCP region
)
```
**Supported Environment Variables:**
- `GOOGLE_API_KEY` or `GEMINI_API_KEY`: Your Google API key (required for Gemini API)
- `GOOGLE_CLOUD_PROJECT`: Google Cloud project ID (for Vertex AI)
- `GOOGLE_CLOUD_LOCATION`: GCP location (defaults to `us-central1`)
- `GOOGLE_GENAI_USE_VERTEXAI`: Set to `true` to use Vertex AI
**Features:**
- Native function calling support for Gemini 1.5+ and 2.x models
- Streaming support for real-time responses
- Multimodal capabilities (text, images, video)
- Safety settings configuration
- Support for both Gemini API and Vertex AI
- Automatic system instruction handling
- Token usage tracking
**Gemini Models:**
Google offers a range of powerful models optimized for different use cases.
| Model | Context Window | Best For |
|--------------------------------|----------------|-------------------------------------------------------------------|
| gemini-2.5-flash-preview-04-17 | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro-preview-05-06 | 1M tokens | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking, and realtime streaming |
| gemini-2.5-flash | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro | 1M tokens | Enhanced thinking and reasoning, multimodal understanding |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking |
| gemini-2.0-flash-thinking | 32,768 tokens | Advanced reasoning with thinking process |
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
| gemini-1.5-pro | 2M tokens | Best performing, logical reasoning, coding |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
| gemini-1.5-flash-8b | 1M tokens | Fastest, most cost-efficient |
| gemini-1.0-pro | 32,768 tokens | Earlier generation model |
**Gemma Models:**
The Gemini API also supports [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.
| Model | Context Window | Best For |
|----------------|----------------|------------------------------------|
| gemma-3-1b | 32,000 tokens | Ultra-lightweight tasks |
| gemma-3-4b | 128,000 tokens | Efficient general-purpose tasks |
| gemma-3-12b | 128,000 tokens | Balanced performance and efficiency|
| gemma-3-27b | 128,000 tokens | High-performance tasks |
**Note:** To use Google Gemini, install the required dependencies:
```bash
uv add "crewai[google-genai]"
```
The full list of models is available in the [Gemini model docs](https://ai.google.dev/gemini-api/docs/models).
### Gemma
The Gemini API also allows you to use your API key to access [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.
| Model | Context Window |
|----------------|----------------|
| gemma-3-1b-it | 32k tokens |
| gemma-3-4b-it | 32k tokens |
| gemma-3-12b-it | 32k tokens |
| gemma-3-27b-it | 128k tokens |
</Accordion>
<Accordion title="Google (Vertex AI)">
Get credentials from your Google Cloud Console and save them to a JSON file, then load them with the following code:
@@ -270,7 +448,7 @@ In this section, you'll find detailed examples that help you select, configure,
from crewai import LLM
llm = LLM(
model="gemini/gemini-1.5-pro-latest",
model="gemini-1.5-pro-latest", # or vertex_ai/gemini-1.5-pro-latest
temperature=0.7,
vertex_credentials=vertex_credentials_json
)
@@ -290,43 +468,146 @@ In this section, you'll find detailed examples that help you select, configure,
</Accordion>
<Accordion title="Azure">
CrewAI provides native integration with Azure AI Inference and Azure OpenAI through the Azure AI Inference Python SDK.
```toml Code
# Required
AZURE_API_KEY=<your-api-key>
AZURE_API_BASE=<your-resource-url>
AZURE_API_VERSION=<api-version>
AZURE_ENDPOINT=<your-endpoint-url>
# Optional
AZURE_AD_TOKEN=<your-azure-ad-token>
AZURE_API_TYPE=<your-azure-api-type>
AZURE_API_VERSION=<api-version> # Defaults to 2024-06-01
```
Example usage in your CrewAI project:
**Endpoint URL Formats:**
For Azure OpenAI deployments:
```
https://<resource-name>.openai.azure.com/openai/deployments/<deployment-name>
```
For Azure AI Inference endpoints:
```
https://<resource-name>.inference.azure.com
```
**Basic Usage:**
```python Code
from crewai import LLM

llm = LLM(
model="azure/gpt-4",
api_version="2023-05-15"
api_key="<your-api-key>", # Or set AZURE_API_KEY
endpoint="<your-endpoint-url>",
api_version="2024-06-01"
)
```
**Advanced Configuration:**
```python Code
llm = LLM(
model="azure/gpt-4o",
temperature=0.7,
max_tokens=4000,
top_p=0.9,
frequency_penalty=0.0,
presence_penalty=0.0,
stop=["END"],
stream=True,
timeout=60.0,
max_retries=3
)
```
**Supported Environment Variables:**
- `AZURE_API_KEY`: Your Azure API key (required)
- `AZURE_ENDPOINT`: Your Azure endpoint URL (required, also checks `AZURE_OPENAI_ENDPOINT` and `AZURE_API_BASE`)
- `AZURE_API_VERSION`: API version (optional, defaults to `2024-06-01`)
**Features:**
- Native function calling support for Azure OpenAI models (gpt-4, gpt-4o, gpt-3.5-turbo, etc.)
- Streaming support for real-time responses
- Automatic endpoint URL validation and correction
- Comprehensive error handling with retry logic
- Token usage tracking
**Note:** To use Azure AI Inference, install the required dependencies:
```bash
uv add "crewai[azure-ai-inference]"
```
</Accordion>
<Accordion title="AWS Bedrock">
CrewAI provides native integration with AWS Bedrock through the boto3 SDK using the Converse API.
```toml Code
# Required
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=<your-region>
# Optional
AWS_SESSION_TOKEN=<your-session-token> # For temporary credentials
AWS_DEFAULT_REGION=<your-region> # Defaults to us-east-1
```
Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM
llm = LLM(
model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
region_name="us-east-1"
)
```
Before using Amazon Bedrock, make sure you have boto3 installed in your environment
**Advanced Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
aws_access_key_id="your-access-key", # Or set AWS_ACCESS_KEY_ID
aws_secret_access_key="your-secret-key", # Or set AWS_SECRET_ACCESS_KEY
aws_session_token="your-session-token", # For temporary credentials
region_name="us-east-1",
temperature=0.7,
max_tokens=4096,
top_p=0.9,
top_k=250, # For Claude models
stop_sequences=["END", "STOP"],
stream=True, # Enable streaming
guardrail_config={ # Optional content filtering
"guardrailIdentifier": "your-guardrail-id",
"guardrailVersion": "1"
},
additional_model_request_fields={ # Model-specific parameters
"top_k": 250
}
)
```
**Supported Environment Variables:**
- `AWS_ACCESS_KEY_ID`: AWS access key (required)
- `AWS_SECRET_ACCESS_KEY`: AWS secret key (required)
- `AWS_SESSION_TOKEN`: AWS session token for temporary credentials (optional)
- `AWS_DEFAULT_REGION`: AWS region (defaults to `us-east-1`)
**Features:**
- Native tool calling support via Converse API
- Streaming and non-streaming responses
- Comprehensive error handling with retry logic
- Guardrail configuration for content filtering
- Model-specific parameters via `additional_model_request_fields`
- Token usage tracking and stop reason logging
- Support for all Bedrock foundation models
- Automatic conversation format handling
**Important Notes:**
- Uses the modern Converse API for unified model access
- Automatic handling of model-specific conversation requirements
- System messages are handled separately from conversation
- First message must be from user (automatically handled)
- Some models (like Cohere) require conversation to end with user message
[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API.
| Model | Context Window | Best For |
|-------------------------|----------------------|-------------------------------------------------------------------|
@@ -356,7 +637,12 @@ In this section, you'll find detailed examples that help you select, configure,
| Jamba-Instruct | Up to 256k tokens | Model with extended context window optimized for cost-effective text generation, summarization, and Q&A. |
| Mistral 7B Instruct | Up to 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| Mistral 8x7B Instruct | Up to 32k tokens | An MOE LLM that follows instructions, completes requests, and generates creative text. |
| DeepSeek R1 | 32,768 tokens | Advanced reasoning model |
**Note:** To use AWS Bedrock, install the required dependencies:
```bash
uv add "crewai[bedrock]"
```
</Accordion>
<Accordion title="Amazon SageMaker">
@@ -684,6 +970,28 @@ In this section, you'll find detailed examples that help you select, configure,
- openrouter/deepseek/deepseek-chat
</Info>
</Accordion>
<Accordion title="Nebius AI Studio">
Set the following environment variables in your `.env` file:
```toml Code
NEBIUS_API_KEY=<your-api-key>
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
model="nebius/Qwen/Qwen3-30B-A3B"
)
```
<Info>
Nebius AI Studio features:
- Large collection of open source models
- Higher rate limits
- Competitive pricing
- Good balance of speed and quality
</Info>
</Accordion>
</AccordionGroup>
## Streaming Responses
@@ -711,10 +1019,10 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
CrewAI emits events for each chunk received during streaming:
```python
from crewai.utilities.events import (
from crewai.events import (
LLMStreamChunkEvent
)
from crewai.utilities.events.base_event_listener import BaseEventListener
from crewai.events import BaseEventListener
class MyCustomListener(BaseEventListener):
def setup_listeners(self, crewai_event_bus):
@@ -727,9 +1035,58 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
```
<Tip>
[Click here](https://docs.crewai.com/concepts/event-listener#event-listeners) for more details
</Tip>
</Tab>
<Tab title="Agent & Task Tracking">
All LLM events in CrewAI include agent and task information, allowing you to track and filter LLM interactions by specific agents or tasks:
```python
from crewai import LLM, Agent, Task, Crew
from crewai.events import LLMStreamChunkEvent
from crewai.events import BaseEventListener
class MyCustomListener(BaseEventListener):
def setup_listeners(self, crewai_event_bus):
@crewai_event_bus.on(LLMStreamChunkEvent)
def on_llm_stream_chunk(source, event):
if researcher.id == event.agent_id:
print("\n==============\n Got event:", event, "\n==============\n")
my_listener = MyCustomListener()
llm = LLM(model="gpt-4o-mini", temperature=0, stream=True)
researcher = Agent(
role="About User",
goal="You know everything about the user.",
backstory="""You are a master at understanding people and their preferences.""",
llm=llm,
)
search = Task(
description="Answer the following questions about the user: {question}",
expected_output="An answer to the question.",
agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[search])
result = crew.kickoff(
inputs={"question": "..."}
)
```
<Info>
This feature is particularly useful for:
- Debugging specific agent behaviors
- Logging LLM usage by task type
- Auditing which agents are making what types of LLM calls
- Performance monitoring of specific tasks
</Info>
</Tab>
</Tabs>
## Structured LLM Calls
@@ -825,9 +1182,9 @@ Learn how to get the most out of your LLM configuration:
Remember to regularly monitor your token usage and adjust your configuration as needed to optimize costs and performance.
</Info>
</Accordion>
<Accordion title="Drop Additional Parameters">
CrewAI internally uses native provider SDKs for LLM calls, which allows you to drop additional parameters that are not needed for your specific use case. This can help simplify your code and reduce the complexity of your LLM configuration.
For example, if you don't need to send the <code>stop</code> parameter, you can simply omit it from your LLM call:
```python
from crewai import LLM

# Simply omit any parameter you don't need - no `stop` is passed here
llm = LLM(
    model="openai/gpt-4o",
    temperature=0.7
)
```
</Accordion>
<Accordion title="Transport Interceptors">
CrewAI provides message interceptors for several providers, allowing you to hook into request/response cycles at the transport layer.
**Supported Providers:**
- ✅ OpenAI
- ✅ Anthropic
**Basic Usage:**
```python
import httpx
from crewai import LLM
from crewai.llms.hooks import BaseInterceptor
class CustomInterceptor(BaseInterceptor[httpx.Request, httpx.Response]):
    """Custom interceptor to modify requests and responses."""

    def on_outbound(self, request: httpx.Request) -> httpx.Request:
        """Print request before sending to the LLM provider."""
        print(request)
        return request

    def on_inbound(self, response: httpx.Response) -> httpx.Response:
        """Process response after receiving from the LLM provider."""
        print(f"Status: {response.status_code}")
        print(f"Response time: {response.elapsed}")
        return response

# Use the interceptor with an LLM
llm = LLM(
    model="openai/gpt-4o",
    interceptor=CustomInterceptor()
)
```
**Important Notes:**
- Both methods must return an object of the same type they received.
- Modifying received objects may result in unexpected behavior or application crashes.
- Not all providers support interceptors; check the supported providers list above.
<Info>
Interceptors operate at the transport layer. This is particularly useful for:
- Message transformation and filtering
- Debugging API interactions
</Info>
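As a sketch of the message-transformation use case, an interceptor can tag outbound requests for log correlation (the `x-trace-id` header name here is illustrative, not a CrewAI convention; note the caveats above about modifying requests):
```python
import httpx
from crewai.llms.hooks import BaseInterceptor

class TracingInterceptor(BaseInterceptor[httpx.Request, httpx.Response]):
    """Tags outbound requests with a correlation header for log tracing."""

    def on_outbound(self, request: httpx.Request) -> httpx.Request:
        # Illustrative header; keep mutations minimal to avoid breaking the request
        request.headers["x-trace-id"] = "my-run-42"
        return request

    def on_inbound(self, response: httpx.Response) -> httpx.Response:
        return response
```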
</Accordion>
</AccordionGroup>
## Common Issues and Solutions

---
title: Memory
description: Leveraging memory systems in the CrewAI framework to enhance agent capabilities.
icon: database
mode: "wide"
---
## Overview
The CrewAI framework provides a sophisticated memory system designed to significantly enhance AI agent capabilities. CrewAI offers **two distinct memory approaches** that serve different use cases:

1. **Basic Memory System** - Built-in short-term, long-term, and entity memory
2. **External Memory** - Standalone external memory providers
## Memory System Components
| **Short-Term Memory**| Temporarily stores recent interactions and outcomes using `RAG`, enabling agents to recall and utilize information relevant to their current context during the current execution.|
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
| **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, `ExternalMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
## 1. Basic Memory System (Recommended)
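Enabling it is a single flag on the crew; a minimal sketch:
```python
from crewai import Crew

# memory=True switches on short-term, long-term, and entity memory together
crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True
)
```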
By default, CrewAI uses the `appdirs` library to determine storage locations following platform conventions:
```
~/Library/Application Support/CrewAI/{project_name}/
├── knowledge/ # Knowledge base ChromaDB files
├── short_term_memory/ # Short-term memory ChromaDB files
├── long_term_memory/ # Long-term memory ChromaDB files
├── entities/ # Entity memory ChromaDB files
└── long_term_memory_storage.db # SQLite database
```

```python
crew = Crew(
    agents=[agent],
    tasks=[task],
    memory=True,
    embedder={
        "provider": "anthropic",  # Match your LLM provider
        "config": {
            "api_key": "your-anthropic-key",
            "model": "text-embedding-3-small"
        }
    }
)
```

```python
import os
import chromadb
from crewai.utilities.paths import db_storage_path

storage_path = db_storage_path()
chroma_path = os.path.join(storage_path, "knowledge")
if os.path.exists(chroma_path):
    client = chromadb.PersistentClient(path=chroma_path)
    collections = client.list_collections()

    print("ChromaDB Collections:")
    for collection in collections:
        print(f"  - {collection.name}: {collection.count()} documents")
```

```python
crew = Crew(agents=[...], tasks=[...], memory=True)
# Reset specific memory types
crew.reset_memories(command_type='short') # Short-term memory
crew.reset_memories(command_type='long')   # Long-term memory
crew.reset_memories(command_type='entity') # Entity memory
crew.reset_memories(command_type='knowledge') # Knowledge storage
```
### Mem0 Provider
Short-Term Memory and Entity Memory both support tight integration with Mem0 OSS and the Mem0 Client as a provider. Here is how you can use Mem0 as a provider.
```python
from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.memory.entity.entity_memory import EntityMemory
mem0_oss_embedder_config = {
    "provider": "mem0",
    "config": {
        "user_id": "john",
        "local_mem0_config": {
            "vector_store": {"provider": "qdrant", "config": {"host": "localhost", "port": 6333}},
            "llm": {"provider": "openai", "config": {"api_key": "your-api-key", "model": "gpt-4"}},
            "embedder": {"provider": "openai", "config": {"api_key": "your-api-key", "model": "text-embedding-3-small"}}
        },
        "infer": True  # Optional, defaults to True
    },
}

mem0_client_embedder_config = {
    "provider": "mem0",
    "config": {
        "user_id": "john",
        "org_id": "my_org_id",  # Optional
        "project_id": "my_project_id",  # Optional
        "api_key": "custom-api-key",  # Optional - overrides env var
        "run_id": "my_run_id",  # Optional - for short-term memory
        "includes": "include1",  # Optional
        "excludes": "exclude1",  # Optional
        "infer": True,  # Optional, defaults to True
        "custom_categories": new_categories  # Optional - custom categories for user memory
    },
}
short_term_memory_mem0_oss = ShortTermMemory(embedder_config=mem0_oss_embedder_config)        # Short-Term Memory with Mem0 OSS
short_term_memory_mem0_client = ShortTermMemory(embedder_config=mem0_client_embedder_config)  # Short-Term Memory with Mem0 Client
entity_memory_mem0_oss = EntityMemory(embedder_config=mem0_oss_embedder_config)               # Entity Memory with Mem0 OSS
entity_memory_mem0_client = EntityMemory(embedder_config=mem0_client_embedder_config)         # Entity Memory with Mem0 Client
crew = Crew(
    memory=True,
    short_term_memory=short_term_memory_mem0_oss,  # or short_term_memory_mem0_client
    entity_memory=entity_memory_mem0_oss  # or entity_memory_mem0_client
)
```
### Choosing the Right Embedding Provider
When selecting an embedding provider, consider factors like performance, privacy, cost, and integration needs.
Below is a comparison to help you decide:
| Provider | Best For | Pros | Cons |
| -------------- | ------------------------------ | --------------------------------- | ------------------------- |
| **OpenAI** | General use, high reliability | High quality, widely tested | Paid service, API key required |
| **Ollama** | Privacy-focused, cost savings | Free, runs locally, fully private | Requires local installation/setup |
| **Google AI** | Integration in Google ecosystem| Strong performance, good support | Google account required |
| **Azure OpenAI** | Enterprise & compliance needs| Enterprise-grade features, security | More complex setup process |
| **Cohere** | Multilingual content handling | Excellent language support | More niche use cases |
| **VoyageAI** | Information retrieval & search | Optimized for retrieval tasks | Relatively new provider |
| **Mem0** | Per-user personalization | Search-optimized embeddings | Paid service, API key required |
### Environment Variable Configuration
```python
providers_to_test = [
    {
        "name": "Ollama",
        "config": {
            "provider": "ollama",
            "config": {"model": "mxbai-embed-large"}
        }
    }
]
for provider in providers_to_test:
    print(f"\nTesting {provider['name']} embeddings...")

    # Create crew with specific embedder
    crew = Crew(
        agents=[...],
        tasks=[...],
        memory=True,
        embedder=provider['config']
    )

    # Run your test and measure performance
    result = crew.kickoff()
    print(f"{provider['name']} completed successfully")
```
**Model not found errors:**
```python
# Verify model availability
from crewai.rag.embeddings.configurator import EmbeddingConfigurator
configurator = EmbeddingConfigurator()
try:
    # configure_embedder here is assumed from the import above; adapt to your version
    configurator.configure_embedder({
        "provider": "ollama",
        "config": {"model": "mxbai-embed-large"}
    })
    print("Embedder configured successfully")
except Exception as e:
    print(f"Configuration error: {e}")
```

```python
import time
def test_embedding_performance(embedder_config, test_text="This is a test document"):
    start_time = time.time()

    crew = Crew(
        agents=[...],
        tasks=[...],
        memory=True,
        embedder=embedder_config
    )

    # Simulate memory operation
    crew.kickoff()

    end_time = time.time()
    return end_time - start_time
openai_time = test_embedding_performance({
    "provider": "openai",
    "config": {"model": "text-embedding-3-small"}
})
ollama_time = test_embedding_performance({
    "provider": "ollama",
    "config": {"model": "mxbai-embed-large"}
})
print(f"OpenAI: {openai_time:.2f}s")
print(f"Ollama: {ollama_time:.2f}s")
```
### Entity Memory batching behavior

Entity Memory supports batching when saving multiple entities at once. When you pass a list of `EntityMemoryItem`, the system:

- Emits a single MemorySaveStartedEvent with `entity_count`
- Saves each entity internally, collecting any partial errors
- Emits MemorySaveCompletedEvent with aggregate metadata (saved count, errors)
- Raises a partial-save exception if some entities failed (includes counts)

This improves performance and observability when writing many entities in one operation.
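As a rough sketch of what a batched save looks like (the import path for `EntityMemoryItem` and its fields are assumptions based on the current API; verify against your installed version):
```python
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.entity.entity_memory_item import EntityMemoryItem

entity_memory = EntityMemory()

# Passing a list emits one MemorySaveStartedEvent (entity_count=2),
# saves each item individually, then emits one MemorySaveCompletedEvent
entity_memory.save([
    EntityMemoryItem(
        name="Acme Corp",
        type="organization",
        description="Customer discussed in the research task",
        relationships="supplier of Beta Inc",
    ),
    EntityMemoryItem(
        name="Beta Inc",
        type="organization",
        description="Partner company",
        relationships="buys from Acme Corp",
    ),
])
```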
## 2. External Memory

External Memory provides a standalone memory system that operates independently from the crew's built-in memory. This is ideal for specialized memory providers or cross-application memory sharing.

### Basic External Memory with Mem0
```python
import os

from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory

os.environ["MEM0_API_KEY"] = "your-api-key"

# Create external memory instance
external_memory = ExternalMemory(
    embedder_config={
        "provider": "mem0",
        "config": {"user_id": "U-123"}
    }
)

crew = Crew(
    agents=[...],
    tasks=[...],
    external_memory=external_memory,  # Separate from basic memory
    process=Process.sequential,
    verbose=True
)
```
### Local Mem0 Configuration
```python
# Create external memory instance with local Mem0 configuration
external_memory = ExternalMemory(
    embedder_config={
        "provider": "mem0",
        "config": {
            "user_id": "john",
            "local_mem0_config": {
                "vector_store": {"provider": "qdrant", "config": {"host": "localhost", "port": 6333}},
                "llm": {"provider": "openai", "config": {"api_key": "your-api-key", "model": "gpt-4"}},
                "embedder": {
"provider": "openai",
"config": {"api_key": "your-api-key", "model": "text-embedding-3-small"}
}
}
},
"infer": True # Optional defaults to True
},
"user_memory": {}
}
)
```
### Advanced External Memory with Mem0 Client
When using the Mem0 Client, you can customize the memory configuration further with parameters such as `includes`, `excludes`, `custom_categories`, `infer`, and `run_id` (the latter applies only to short-term memory).
You can find more details in the [Mem0 documentation](https://docs.mem0.ai/).
```python
import os

from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory

new_categories = [
    {"lifestyle_management_concerns": "Tracks daily routines, habits, hobbies and interests including cooking, time management and work-life balance"},
    {"seeking_structure": "Documents goals around creating routines, schedules, and organized systems in various life areas"},
    {"personal_information": "Basic information about the user including name, preferences, and personality traits"}
]

os.environ["MEM0_API_KEY"] = "your-api-key"

# Create external memory instance with Mem0 Client
external_memory = ExternalMemory(
    embedder_config={
        "provider": "mem0",
        "config": {
            "user_id": "john",
            "org_id": "my_org_id",  # Optional
            "project_id": "my_project_id",  # Optional
            "api_key": "custom-api-key",  # Optional - overrides env var
            "run_id": "my_run_id",  # Optional - for short-term memory
            "includes": "include1",  # Optional
            "excludes": "exclude1",  # Optional
            "infer": True,  # Optional, defaults to True
            "custom_categories": new_categories  # Optional - custom categories for user memory
        },
    }
)
crew = Crew(
    agents=[...],
    tasks=[...],
    external_memory=external_memory,  # Separate from basic memory
    process=Process.sequential,
    verbose=True
)
```
### Custom Storage Implementation

You can also implement the `Storage` interface to plug in your own backend:

```python
from crewai.memory.storage.interface import Storage

class CustomStorage(Storage):
    def __init__(self):
        self.memories = []

    def save(self, value, metadata=None, agent=None):
        self.memories.append({
            "value": value,
            "metadata": metadata,
            "agent": agent
        })
```
## 🧠 Memory System Comparison
| **Category** | **Feature** | **Basic Memory** | **External Memory** |
|---------------------|------------------------|-----------------------------|------------------------------|
| **Ease of Use** | Setup Complexity | Simple | Moderate |
| | Integration | Built-in (contextual) | Standalone |
| **Persistence** | Storage | Local files | Custom / Mem0 |
| | Cross-session Support | ✅ | ✅ |
| **Personalization** | User-specific Memory | ❌ | ✅ |
| | Custom Providers | Limited | Any provider |
| **Use Case Fit** | Recommended For | Most general use cases | Specialized / custom needs |
## Supported Embedding Providers
- 🫡 **Enhanced Personalization:** Memory enables agents to remember user preferences and historical interactions, leading to personalized experiences.
- 🧠 **Improved Problem Solving:** Access to a rich memory store aids agents in making more informed decisions, drawing on past learnings and contextual insights.
## Memory Events
CrewAI's event system provides powerful insights into memory operations. By leveraging memory events, you can monitor, debug, and optimize your memory system's performance and behavior.
### Available Memory Events
CrewAI emits the following memory-related events:
| Event | Description | Key Properties |
| :---- | :---------- | :------------- |
| **MemoryQueryStartedEvent** | Emitted when a memory query begins | `query`, `limit`, `score_threshold` |
| **MemoryQueryCompletedEvent** | Emitted when a memory query completes successfully | `query`, `results`, `limit`, `score_threshold`, `query_time_ms` |
| **MemoryQueryFailedEvent** | Emitted when a memory query fails | `query`, `limit`, `score_threshold`, `error` |
| **MemorySaveStartedEvent** | Emitted when a memory save operation begins | `value`, `metadata`, `agent_role` |
| **MemorySaveCompletedEvent** | Emitted when a memory save operation completes successfully | `value`, `metadata`, `agent_role`, `save_time_ms` |
| **MemorySaveFailedEvent** | Emitted when a memory save operation fails | `value`, `metadata`, `agent_role`, `error` |
| **MemoryRetrievalStartedEvent** | Emitted when memory retrieval for a task prompt starts | `task_id` |
| **MemoryRetrievalCompletedEvent** | Emitted when memory retrieval completes successfully | `task_id`, `memory_content`, `retrieval_time_ms` |
### Practical Applications
#### 1. Memory Performance Monitoring
Track memory operation timing to optimize your application:
```python
from crewai.events import (
    BaseEventListener,
    MemoryQueryCompletedEvent,
    MemorySaveCompletedEvent
)
import time

class MemoryPerformanceMonitor(BaseEventListener):
    def __init__(self):
        super().__init__()
        self.query_times = []
        self.save_times = []

    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(MemoryQueryCompletedEvent)
        def on_memory_query_completed(source, event: MemoryQueryCompletedEvent):
            self.query_times.append(event.query_time_ms)
            print(f"Memory query completed in {event.query_time_ms:.2f}ms. Query: '{event.query}'")
            print(f"Average query time: {sum(self.query_times)/len(self.query_times):.2f}ms")

        @crewai_event_bus.on(MemorySaveCompletedEvent)
        def on_memory_save_completed(source, event: MemorySaveCompletedEvent):
            self.save_times.append(event.save_time_ms)
            print(f"Memory save completed in {event.save_time_ms:.2f}ms")
            print(f"Average save time: {sum(self.save_times)/len(self.save_times):.2f}ms")

# Create an instance of your listener
memory_monitor = MemoryPerformanceMonitor()
```
#### 2. Memory Content Logging
Log memory operations for debugging and insights:
```python
from crewai.events import (
    BaseEventListener,
    MemorySaveStartedEvent,
    MemoryQueryStartedEvent,
    MemoryRetrievalCompletedEvent
)
import logging

# Configure logging
logger = logging.getLogger('memory_events')

class MemoryLogger(BaseEventListener):
    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(MemorySaveStartedEvent)
        def on_memory_save_started(source, event: MemorySaveStartedEvent):
            if event.agent_role:
                logger.info(f"Agent '{event.agent_role}' saving memory: {event.value[:50]}...")
            else:
                logger.info(f"Saving memory: {event.value[:50]}...")

        @crewai_event_bus.on(MemoryQueryStartedEvent)
        def on_memory_query_started(source, event: MemoryQueryStartedEvent):
            logger.info(f"Memory query started: '{event.query}' (limit: {event.limit})")

        @crewai_event_bus.on(MemoryRetrievalCompletedEvent)
        def on_memory_retrieval_completed(source, event: MemoryRetrievalCompletedEvent):
            if event.task_id:
                logger.info(f"Memory retrieved for task {event.task_id} in {event.retrieval_time_ms:.2f}ms")
            else:
                logger.info(f"Memory retrieved in {event.retrieval_time_ms:.2f}ms")
            logger.debug(f"Memory content: {event.memory_content}")

# Create an instance of your listener
memory_logger = MemoryLogger()
```
#### 3. Error Tracking and Notifications
Capture and respond to memory errors:
```python
from crewai.events import (
    BaseEventListener,
    MemorySaveFailedEvent,
    MemoryQueryFailedEvent
)
import logging
from typing import Optional

# Configure logging
logger = logging.getLogger('memory_errors')

class MemoryErrorTracker(BaseEventListener):
    def __init__(self, notify_email: Optional[str] = None):
        super().__init__()
        self.notify_email = notify_email
        self.error_count = 0

    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(MemorySaveFailedEvent)
        def on_memory_save_failed(source, event: MemorySaveFailedEvent):
            self.error_count += 1
            agent_info = f"Agent '{event.agent_role}'" if event.agent_role else "Unknown agent"
            error_message = f"Memory save failed: {event.error}. {agent_info}"
            logger.error(error_message)

            if self.notify_email and self.error_count % 5 == 0:
                self._send_notification(error_message)

        @crewai_event_bus.on(MemoryQueryFailedEvent)
        def on_memory_query_failed(source, event: MemoryQueryFailedEvent):
            self.error_count += 1
            error_message = f"Memory query failed: {event.error}. Query: '{event.query}'"
            logger.error(error_message)

            if self.notify_email and self.error_count % 5 == 0:
                self._send_notification(error_message)

    def _send_notification(self, message):
        # Implement your notification system (email, Slack, etc.)
        print(f"[NOTIFICATION] Would send to {self.notify_email}: {message}")

# Create an instance of your listener
error_tracker = MemoryErrorTracker(notify_email="admin@example.com")
```
### Integrating with Analytics Platforms
Memory events can be forwarded to analytics and monitoring platforms to track performance metrics, detect anomalies, and visualize memory usage patterns:
```python
from crewai.events import (
    BaseEventListener,
    MemoryQueryCompletedEvent,
    MemorySaveCompletedEvent
)

class MemoryAnalyticsForwarder(BaseEventListener):
    def __init__(self, analytics_client):
        super().__init__()
        self.client = analytics_client

    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(MemoryQueryCompletedEvent)
        def on_memory_query_completed(source, event: MemoryQueryCompletedEvent):
            # Forward query metrics to analytics platform
            self.client.track_metric({
                "event_type": "memory_query",
                "query": event.query,
                "duration_ms": event.query_time_ms,
                "result_count": len(event.results) if hasattr(event.results, "__len__") else 0,
                "timestamp": event.timestamp
            })

        @crewai_event_bus.on(MemorySaveCompletedEvent)
        def on_memory_save_completed(source, event: MemorySaveCompletedEvent):
            # Forward save metrics to analytics platform
            self.client.track_metric({
                "event_type": "memory_save",
                "agent_role": event.agent_role,
                "duration_ms": event.save_time_ms,
                "timestamp": event.timestamp
            })
```
### Best Practices for Memory Event Listeners
1. **Keep handlers lightweight**: Avoid complex processing in event handlers to prevent performance impacts
2. **Use appropriate logging levels**: Use INFO for normal operations, DEBUG for details, ERROR for issues
3. **Batch metrics when possible**: Accumulate metrics before sending to external systems
4. **Handle exceptions gracefully**: Ensure your event handlers don't crash due to unexpected data
5. **Consider memory consumption**: Be mindful of storing large amounts of event data
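For item 3, a forwarder can buffer metrics and flush them in groups; a minimal sketch (the `flush_batch` method on the analytics client is hypothetical):
```python
from crewai.events import BaseEventListener, MemoryQueryCompletedEvent

class BatchingForwarder(BaseEventListener):
    def __init__(self, analytics_client, batch_size: int = 20):
        super().__init__()
        self.client = analytics_client
        self.batch_size = batch_size
        self.buffer = []

    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(MemoryQueryCompletedEvent)
        def on_query_completed(source, event: MemoryQueryCompletedEvent):
            self.buffer.append({"query": event.query, "duration_ms": event.query_time_ms})
            if len(self.buffer) >= self.batch_size:
                # flush_batch is a hypothetical method on your analytics client
                self.client.flush_batch(self.buffer)
                self.buffer.clear()
```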
## Conclusion
Integrating CrewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations,
you can quickly empower your agents with the ability to remember, reason, and learn from their interactions, unlocking new levels of intelligence and capability.

---
title: Planning
description: Learn how to add planning to your CrewAI Crew and improve their performance.
icon: ruler-combined
mode: "wide"
---
## Overview

---
title: Processes
description: Detailed guide on workflow management through processes in CrewAI, with updated implementation details.
icon: bars-staggered
mode: "wide"
---
## Overview

---
title: Reasoning
description: "Learn how to enable and use agent reasoning to improve task execution."
icon: brain
mode: "wide"
---
## Overview
