Compare commits

...

44 Commits

Author SHA1 Message Date
João Moura
4f6054d439 new version
2025-04-28 07:39:38 -07:00
Dev Khant
a86a1213c7 Fix Mem0 OSS (#2604)
* Fix Mem0 OSS

* add test

* fix lint and tests

* fix

* add tests

* drop test

* changed to class comparison

* fixed test cases

* Update src/crewai/memory/storage/mem0_storage.py

* Update src/crewai/memory/storage/mem0_storage.py

* fix

* fix lock file

---------

Co-authored-by: Vidit-Ostwal <viditostwal@gmail.com>
2025-04-28 10:37:31 -04:00
Lucas Gomide
566935fb94 upgrade liteLLM to latest version (#2684)
* build(litellm): upgrade LiteLLM to latest version

* fix: update filtered logs from LiteLLM

* Fix for a missing backtick

---------

Co-authored-by: Mike Plachta <mike@crewai.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-04-28 09:46:40 -04:00
Lucas Gomide
3a66746a99 build: upgrade crewai-tools (#2705)
* build: upgrade crewai-tools

* build: prepare new version
2025-04-28 06:38:56 -07:00
João Moura
337a6d5719 preparing new version
2025-04-27 23:56:22 -07:00
Tony Kipkemboi
51eb5e9998 docs: add CrewAI Enterprise docs (#2691)
* Add enterprise deployment documentation to CLI docs

* Update CrewAI Enterprise documentation with comprehensive guides for Traces, Tool Repository, Webhook Streaming, and FAQ structure

* Add Enterprise documentation images

* Update Enterprise introduction with visual CardGroups and Steps components
2025-04-25 13:59:44 -07:00
Lucas Gomide
b2969e9441 style: fix linter issue (#2686)
2025-04-25 09:34:00 -04:00
João Moura
5b9606e8b6 fix context window 2025-04-24 23:09:23 -07:00
Kunal Lunia
685d20f46c added gpt-4.1 models and gemini-2.0 and 2.5 pro models (#2609)
* added gpt-4.1 models and gemini 2.0 and 2.5 models

* added flash model

* Updated test function to cover all models

* Added Gemma3 test cases and passed all Google test cases

* added gemini 2.5 flash

* test: add missing cassettes

* test: ignore authorization key from gemini/gemma3 request

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-04-23 11:20:32 -07:00
Lucas Gomide
9ebf3aa043 docs(CodeInterpreterTool): update docs (#2675) 2025-04-23 10:27:25 -07:00
Tony Kipkemboi
2e4c97661a Add enterprise deployment documentation to CLI docs (#2670)
2025-04-22 13:27:58 -07:00
Tony Kipkemboi
16eb4df556 docs: update docs.json with contextual options, SEO, and 404 redirect (#2654)
* docs: 0.114.0 release notes, navigation restructure, new guides, deploy video, and cleanup

- Add v0.114.0 release notes with highlights image and doc links
- Restructure docs navigation (Strategy group, Releases tab, navbar links)
- Update quickstart with deployment video and clearer instructions
- Add/rename guides (Custom Manager Agent, Custom LLM)
- Remove legacy concept/tool docs
- Add new images and tool docs
- Minor formatting and content improvements throughout

* docs: update docs.json with contextual options, SEO indexing, and 404 redirect settings
2025-04-22 09:52:27 -07:00
Vini Brasil
3d9000495c Change CLI tool publish message (#2662) 2025-04-22 13:09:30 -03:00
Tony Kipkemboi
6d0039b117 docs: 0.114.0 release notes, navigation restructure, new guides, deploy video, and cleanup (#2653)
- Add v0.114.0 release notes with highlights image and doc links
- Restructure docs navigation (Strategy group, Releases tab, navbar links)
- Update quickstart with deployment video and clearer instructions
- Add/rename guides (Custom Manager Agent, Custom LLM)
- Remove legacy concept/tool docs
- Add new images and tool docs
- Minor formatting and content improvements throughout
2025-04-21 19:18:21 -04:00
Lorenze Jay
311a078ca6 Enhance knowledge management in CrewAI (#2637)
* Enhance knowledge management in CrewAI

- Added `KnowledgeConfig` class to configure knowledge retrieval parameters such as `limit` and `score_threshold`.
- Updated `Agent` and `Crew` classes to utilize the new knowledge configuration for querying knowledge sources.
- Enhanced documentation to clarify the addition of knowledge sources at both agent and crew levels.
- Introduced new tips in documentation to guide users on knowledge source management and configuration.

* Refactor knowledge configuration parameters in CrewAI

- Renamed `limit` to `results_limit` in `KnowledgeConfig`, `query_knowledge`, and `query` methods for consistency and clarity.
- Updated related documentation to reflect the new parameter name, ensuring users understand the configuration options for knowledge retrieval.

* Refactor agent tests to utilize mock knowledge storage

- Updated test cases in `agent_test.py` to use `KnowledgeStorage` for mocking knowledge sources, enhancing test reliability and clarity.
- Renamed `limit` to `results_limit` in `KnowledgeConfig` for consistency with recent changes.
- Ensured that knowledge queries are properly mocked to return expected results during tests.

* Add VCR support for agent tests with query limits and score thresholds

- Introduced `@pytest.mark.vcr` decorator in `agent_test.py` for tests involving knowledge sources, ensuring consistent recording of HTTP interactions.
- Added new YAML cassette files for `test_agent_with_knowledge_sources_with_query_limit_and_score_threshold` and `test_agent_with_knowledge_sources_with_query_limit_and_score_threshold_default`, capturing the expected API responses for these tests.
- Enhanced test reliability by utilizing VCR to manage external API calls during testing.

* Update documentation to format parameter names in code style

- Changed the formatting of `results_limit` and `score_threshold` in the documentation to use code style for better clarity and emphasis.
- Ensured consistency in documentation presentation to enhance user understanding of configuration options.

* Enhance KnowledgeConfig with field descriptions

- Updated `results_limit` and `score_threshold` in `KnowledgeConfig` to use Pydantic's `Field` for improved documentation and clarity.
- Added descriptions to both parameters to provide better context for their usage in knowledge retrieval configuration.

* docstrings added
2025-04-18 18:33:04 -07:00
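A minimal sketch of the resulting API, mirroring the documented usage added in this PR (the agent fields are illustrative):

```python
from crewai import Agent
from crewai.knowledge.knowledge_config import KnowledgeConfig

# Return up to 10 documents per query; require a relevance score of at least 0.5.
knowledge_config = KnowledgeConfig(results_limit=10, score_threshold=0.5)

agent = Agent(
    role="Support Analyst",
    goal="Answer questions from the product knowledge base",
    backstory="Knows the documentation inside out.",
    knowledge_config=knowledge_config,
)
```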
Vidit Ostwal
371f19f3cd Support setting max_execution_time on Agent (#2610)
* Fixed fake max_execution_time parameter
---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-04-17 16:03:00 -04:00
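A minimal sketch of what this enables, assuming `max_execution_time` is a per-execution timeout in seconds (the agent fields are illustrative):

```python
from crewai import Agent

researcher = Agent(
    role="Researcher",
    goal="Summarize recent AI developments",
    backstory="A fast, focused research assistant.",
    max_execution_time=60,  # assumed unit: seconds; execution is aborted past this
)
```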
Lorenze Jay
870dffbb89 Feat/byoa (#2523)
* feat: add OpenAI agent adapter implementation

- Introduced OpenAIAgentAdapter class to facilitate interaction with OpenAI Assistants.
- Implemented methods for task execution, tool configuration, and response processing.
- Added support for converting CrewAI tools to OpenAI format and handling delegation tools.

* created an adapter for the delegate and ask_question tools

* delegate and ask_question tools work and delegate to crewai agents

* refactor: introduce OpenAIAgentToolAdapter for tool management

- Created OpenAIAgentToolAdapter class to encapsulate tool configuration and conversion for OpenAI Assistant.
- Removed tool configuration logic from OpenAIAgentAdapter and integrated it into the new adapter.
- Enhanced the tool conversion process to ensure compatibility with OpenAI's requirements.

* feat: implement BaseAgentAdapter for agent integration

- Introduced BaseAgentAdapter as an abstract base class for agent adapters in CrewAI.
- Defined common interface and methods for configuring tools and structured output.
- Updated OpenAIAgentAdapter to inherit from BaseAgentAdapter, enhancing its structure and functionality.

* feat: add LangGraph agent and tool adapter for CrewAI integration

- Introduced LangGraphAgentAdapter to facilitate interaction with LangGraph agents.
- Implemented methods for task execution, context handling, and tool configuration.
- Created LangGraphToolAdapter to convert CrewAI tools into LangGraph-compatible format.
- Enhanced error handling and logging for task execution and streaming processes.

* feat: enhance LangGraphToolAdapter and improve conversion instructions

- Added type hints for better clarity and type checking in LangGraphToolAdapter.
- Updated conversion instructions to ensure compatibility with optional LLM checks.

* feat: integrate structured output handling in LangGraph and OpenAI agents

- Added LangGraphConverterAdapter for managing structured output in LangGraph agents.
- Enhanced LangGraphAgentAdapter to utilize the new converter for system prompt and task execution.
- Updated LangGraphToolAdapter to use StructuredTool for better compatibility.
- Introduced OpenAIConverterAdapter for structured output management in OpenAI agents.
- Improved task execution flow in OpenAIAgentAdapter to incorporate structured output configuration and post-processing.

* feat: implement BaseToolAdapter for tool integration

- Introduced BaseToolAdapter as an abstract base class for tool adapters in CrewAI.
- Updated LangGraphToolAdapter and OpenAIAgentToolAdapter to inherit from BaseToolAdapter, enhancing their structure and functionality.
- Improved tool configuration methods to support better integration with various frameworks.
- Added type hints and documentation for clarity and maintainability.

* feat: enhance OpenAIAgentAdapter with configurable agent properties

- Refactored OpenAIAgentAdapter to accept agent configuration as an argument.
- Introduced a method to build a system prompt for the OpenAI agent, improving task execution context.
- Updated initialization to utilize role, goal, and backstory from kwargs, enhancing flexibility in agent setup.
- Improved tool handling and integration within the adapter.

* feat: enhance agent adapters with structured output support

- Introduced BaseConverterAdapter as an abstract class for structured output handling.
- Implemented LangGraphConverterAdapter and OpenAIConverterAdapter to manage structured output in their respective agents.
- Updated BaseAgentAdapter to accept an agent configuration dictionary during initialization.
- Enhanced LangGraphAgentAdapter to utilize the new converter and improved tool handling.
- Added methods for configuring structured output and enhancing system prompts in converter adapters.

* refactor: remove _parse_tools method from OpenAIAgentAdapter and BaseAgent

- Eliminated the _parse_tools method from OpenAIAgentAdapter and its abstract declaration in BaseAgent.
- Cleaned up related test code in MockAgent to reflect the removal of the method.

* also removed _parse_tools here as not used

* feat: add dynamic import handling for LangGraph dependencies

- Implemented conditional imports for LangGraph components to handle ImportError gracefully.
- Updated LangGraphAgentAdapter initialization to check for LangGraph availability and raise an informative error if dependencies are missing.
- Enhanced the agent adapter's robustness by ensuring it only initializes components when the required libraries are present.

* fix: improve error handling for agent adapters

- Updated LangGraphAgentAdapter to raise an ImportError with a clear message if LangGraph dependencies are not installed.
- Refactored OpenAIAgentAdapter to include a similar check for OpenAI dependencies, ensuring robust initialization and user guidance for missing libraries.
- Enhanced overall error handling in agent adapters to prevent runtime issues when dependencies are unavailable.

* refactor: enhance tool handling in agent adapters

- Updated BaseToolAdapter to initialize original and converted tools in the constructor.
- Renamed method `all_tools` to `tools` for clarity in BaseToolAdapter.
- Added `sanitize_tool_name` method to ensure tool names are API compatible.
- Modified LangGraphAgentAdapter to utilize the updated tool handling and ensure proper tool configuration.
- Refactored LangGraphToolAdapter to streamline tool conversion and ensure consistent naming conventions.

* feat: emit AgentExecutionCompletedEvent in agent adapters

- Added emission of AgentExecutionCompletedEvent in both LangGraphAgentAdapter and OpenAIAgentAdapter to signal task completion.
- Enhanced event handling to include agent, task, and output details for better tracking of execution results.

* docs: Enhance BaseConverterAdapter documentation

- Added a detailed docstring to the BaseConverterAdapter class, outlining its purpose and the expected functionality for all converter adapters.
- Updated the post_process_result method's docstring to specify the expected format of the result as a string.

* docs: Add comprehensive guide for bringing custom agents into CrewAI

- Introduced a new documentation file detailing the process of integrating custom agents using the BaseAgentAdapter, BaseToolAdapter, and BaseConverter.
- Included step-by-step instructions for creating custom adapters, configuring tools, and handling structured output.
- Provided examples for implementing adapters for various frameworks, enhancing the usability of CrewAI for developers.

* feat: Introduce adapted_agent flag in BaseAgent and update BaseAgentAdapter initialization

- Added an `adapted_agent` boolean field to the BaseAgent class to indicate if the agent is adapted.
- Updated the BaseAgentAdapter's constructor to pass `adapted_agent=True` to the superclass, ensuring proper initialization of the new field.

* feat: Enhance LangGraphAgentAdapter to support optional agent configuration

- Updated LangGraphAgentAdapter to conditionally apply agent configuration when creating the agent graph, allowing for more flexible initialization.
- Modified LangGraphToolAdapter to ensure only instances of BaseTool are converted, improving tool compatibility and handling.

* feat: Introduce OpenAIConverterAdapter for structured output handling

- Added OpenAIConverterAdapter to manage structured output conversion for OpenAI agents, enhancing their ability to process and format results.
- Updated OpenAIAgentAdapter to utilize the new converter for configuring structured output and post-processing results.
- Removed the deprecated get_output_converter method from OpenAIAgentAdapter.
- Added unit tests for BaseAgentAdapter and BaseToolAdapter to ensure proper functionality and integration of new features.

* feat: Enhance tool adapters to support asynchronous execution

- Updated LangGraphToolAdapter and OpenAIAgentToolAdapter to handle asynchronous tool execution by checking if the output is awaitable.
- Introduced `inspect` import to facilitate the awaitability check.
- Refactored tool wrapper functions to ensure proper handling of both synchronous and asynchronous tool results.

* fix: Correct method definition syntax and enhance tool adapter implementation

- Updated the method definition for `configure_structured_output` to include the `def` keyword for clarity.
- Added an asynchronous tool wrapper to ensure tools can operate in both synchronous and asynchronous contexts.
- Modified the constructor of the custom converter adapter to directly assign the agent adapter, improving clarity and functionality.

* linted

* refactor: Improve tool processing logic in BaseAgent

- Added a check to return an empty list if no tools are provided.
- Simplified the tool attribute validation by using a list of required attributes.
- Removed commented-out abstract method definition for clarity.

* refactor: Simplify tool handling in agent adapters

- Changed default value of `tools` parameter in LangGraphAgentAdapter to None for better handling of empty tool lists.
- Updated tool initialization in both LangGraphAgentAdapter and OpenAIAgentAdapter to directly pass the `tools` parameter, removing unnecessary list handling.
- Cleaned up commented-out code in OpenAIConverterAdapter to improve readability.

* refactor: Remove unused stream_task method from LangGraphAgentAdapter

- Deleted the `stream_task` method from LangGraphAgentAdapter to streamline the code and eliminate unnecessary complexity.
- This change enhances maintainability by focusing on essential functionalities within the agent adapter.
2025-04-17 09:22:48 -07:00
Lucas Gomide
ced3c8f0e0 Unblock LLM(stream=True) to work with tools (#2582)
* feat: unblock LLM(stream=True) to work with tools

* feat: replace pytest-vcr by pytest-recording

1. pytest-vcr does not support httpx - which LiteLLM uses for streaming responses.
2. pytest-vcr is no longer maintained, last commit 6 years ago
3. pytest-recording supports modern request libraries (including httpx) and is actively maintained

* refactor: remove @skip_streaming_in_ci

Since we have fixed streaming response issue we can remove this @skip_streaming_in_ci

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-04-17 11:58:52 -04:00
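A sketch of the combination this unblocks; the adder tool is a stand-in, and the point is only that `stream=True` and `tools` can now coexist:

```python
from crewai import Agent, LLM
from crewai.tools import tool

@tool("Adder")
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

llm = LLM(model="gpt-4o", stream=True)  # streaming no longer conflicts with tools

agent = Agent(
    role="Calculator",
    goal="Answer arithmetic questions accurately",
    backstory="A precise, tool-using assistant.",
    llm=llm,
    tools=[add],
)
```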
Greyson LaLonde
8e555149f7 fix: docs import path for json search tool (#2631)
- updated import path to crewai-tools
- removed old comment
2025-04-17 07:51:20 -07:00
Lucas Gomide
a96a27f064 docs: fix guardrail documentation usage (#2630) 2025-04-17 10:34:50 -04:00
Vidit Ostwal
a2f3566cd9 Pr branch (#2312)
* Adjust checking for callable crew object.

Changes back to how it was being done before.
Fixes #2307

* Fix specific memory reset errors.

When not initiated, the function should raise
the "memory system is not initialized" RuntimeError.

* Remove print statement

* Fixes test case

---------

Co-authored-by: Carlos Souza <carloshrsouza@gmail.com>
2025-04-17 08:59:15 -04:00
Greyson LaLonde
e655412aca refactor: create constants.py & use in telemetry (#2627)
- created `constants.py` for telemetry base url and service name
- updated `telemetry.py` to reflect changes
- ran ruff --fix to apply lint fixes
2025-04-16 12:46:15 -07:00
Lorenze Jay
1d91ab5d1b fix: pass original agent reference to lite agent initialization (#2625)
2025-04-16 10:05:09 -07:00
Vini Brasil
37359a34f0 Remove redundant comment from sqlite.py (#2622) 2025-04-16 11:25:41 -03:00
Vini Brasil
6eb4045339 Update .github/workflows/notify-downstream.yml (#2621) 2025-04-16 10:39:51 -03:00
Vini Brasil
aebbc75dea Notify downstream repo of changes (#2615)
* Notify downstream repo of changes

* Add permissions block
2025-04-16 10:18:26 -03:00
Lucas Gomide
bc91e94f03 fix: add type hints and ignore type checks for config access (#2603) 2025-04-14 16:58:09 -04:00
devin-ai-integration[bot]
d659151dca Fix #2551: Add Huggingface to provider list in CLI (#2552)
* Fix #2551: Add Huggingface to provider list in CLI

Co-Authored-By: Joe Moura <joao@crewai.com>

* Update Huggingface API key name to HF_TOKEN and remove base URL prompt

Co-Authored-By: Joe Moura <joao@crewai.com>

* Update Huggingface API key name to HF_TOKEN in documentation

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix import sorting in test_constants.py

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix import order in test_constants.py

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix import formatting in test_constants.py

Co-Authored-By: Joe Moura <joao@crewai.com>

* Skip failing tests in Python 3.11 due to VCR cassette issues

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix import order in knowledge_test.py

Co-Authored-By: Joe Moura <joao@crewai.com>

* Revert skip decorators to check if tests are flaky

Co-Authored-By: Joe Moura <joao@crewai.com>

* Restore skip decorators for tests with VCR cassette issues in Python 3.11

Co-Authored-By: Joe Moura <joao@crewai.com>

* revert skip pytest decorators

* Remove import sys and skip decorators from test files

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-04-14 16:28:04 -04:00
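Per the documentation diff further down, Hugging Face configuration after this change looks roughly like this (the token value is a placeholder):

```python
import os
from crewai import LLM

os.environ["HF_TOKEN"] = "<your-api-key>"  # replaces HUGGINGFACE_API_KEY

# The base URL prompt was removed; the model id alone is now enough.
llm = LLM(model="huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct")
```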
Lucas Gomide
9dffd42e6d feat: Enhance memory system with isolated memory configuration (#2597)
* feat: support defining any memory in an isolated way

This change makes it easier to use a specific memory type without unintentionally enabling all others.

Previously, setting memory=True would implicitly configure all available memories (like LTM and STM), which might not be ideal in all cases. For example, when building a chatbot that only needs an external memory, users were forced to also configure LTM and STM — which rely on default OpenAI embeddings — even if they weren’t needed.

With this update, users can now define a single memory in isolation, making the configuration process simpler and more flexible.

* feat: add tests to ensure we are able to use contextual memory by setting individual memories

* docs: enhance memory documentation

* feat: warn when long-term memory is defined but entity memory is not
2025-04-14 15:48:48 -04:00
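A sketch of the isolated setup this enables, adapted from the memory documentation diff below: only the external memory is defined, so short-term and long-term memory (and their default OpenAI embeddings) are never instantiated.

```python
import os
from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory

os.environ["MEM0_API_KEY"] = "YOUR-API-KEY"

agent = Agent(
    role="Travel planner",
    goal="Plan a vacation for the user",
    backstory="You are a helpful assistant.",
)
task = Task(description="Plan a weekend trip.", expected_output="An itinerary.", agent=agent)

crew = Crew(
    agents=[agent],
    tasks=[task],
    process=Process.sequential,
    # No memory=True here: only the external memory below is configured.
    external_memory=ExternalMemory(
        embedder_config={"provider": "mem0", "config": {"user_id": "U-123"}}
    ),
)
```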
devin-ai-integration[bot]
88455cd52c fix: Correctly copy memory objects during crew training (fixes #2593) (#2594)
* fix: Correctly copy memory objects during crew training (#2593)

Co-Authored-By: Joe Moura <joao@crewai.com>

* style: Fix import order in tests/crew_test.py

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: Rely on validator for memory copy, update test assertions

Removes manual deep copy of memory objects in Crew.copy().
The Pydantic model_validator 'create_crew_memory' handles the
initialization of new memory instances for the copied crew.

Updates test_crew_copy_with_memory assertions to verify that
the private memory attributes (_short_term_memory, etc.) are
correctly initialized as new instances in the copied crew.

Co-Authored-By: Joe Moura <joao@crewai.com>

* Revert "fix: Rely on validator for memory copy, update test assertions"

This reverts commit 8702bf1e34.

* fix: Re-add manual deep copy for all memory types in Crew.copy

Addresses feedback on PR #2594 to ensure all memory objects
(short_term, long_term, entity, external, user) are correctly
deep copied using model_copy(deep=True).

Also simplifies the test case to directly verify the copy behavior
instead of relying on the train method.

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
2025-04-14 14:59:12 -04:00
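The heart of the final fix is Pydantic's deep copy; a standalone illustration of why `deep=True` matters (the model here is a stand-in for CrewAI's memory objects):

```python
from pydantic import BaseModel

class FakeMemory(BaseModel):
    items: list[str] = []

original = FakeMemory(items=["remembered fact"])

copied = original.model_copy(deep=True)  # deep=True: nested state is not shared
copied.items.append("training-only fact")

assert original.items == ["remembered fact"]  # the source crew's memory is untouched
```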
Alexandre Gindre
6a1eb10830 fix(crew template): fix wrong parameter name and missing input (#2387) 2025-04-14 11:09:59 -04:00
devin-ai-integration[bot]
10edde100e Fix: Use mem0_local_config instead of config in Memory.from_config (#2588)
* fix: use mem0_local_config instead of config in Memory.from_config (#2587)

Co-Authored-By: Joe Moura <joao@crewai.com>

* refactor: consolidate tests as per PR feedback

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
2025-04-14 08:55:23 -04:00
Eduardo Chiarotti
40a441f30e feat: remove unused code and change ToolUsageStarted event place (#2581)
* feat: remove unused code and change ToolUsageStarted event place

* feat: run lint

* feat: add agent reference inside liteagent

* feat: remove unused logic

* feat: Remove not needed event

* feat: remove test from tool execution error

* feat: remove cassette
2025-04-11 14:26:59 -04:00
Vidit Ostwal
ea5ae9086a added condition to check whether _run function returns a coroutine ob… (#2570)
* added condition to check whether _run function returns a coroutine object

* Cleaned the code

* Fixed the test modules, Class -> Functions
2025-04-11 12:56:37 -04:00
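A self-contained sketch of the added condition; the names are illustrative, not the actual crewAI internals:

```python
import asyncio
import inspect

async def run_async() -> str:
    return "async result"

def run_sync() -> str:
    return "sync result"

def execute(run_fn) -> str:
    result = run_fn()
    # The added check: if _run handed back a coroutine object, resolve it
    # instead of returning the coroutine itself.
    if inspect.iscoroutine(result):
        result = asyncio.run(result)
    return result

print(execute(run_sync))   # -> sync result
print(execute(run_async))  # -> async result
```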
Cypher Pepe
0cd524af86 fixed broken link in docs/tools/weaviatevectorsearchtool.mdx (#2569) 2025-04-11 11:58:01 -04:00
Jesse R Weigel
4bff5408d8 Create output folder if it doesn't exist (#2573)
When running this project, I got an error because the output folder had not been created. 

I added a line to check if the output folder exists and create it if needed.
2025-04-11 09:14:05 -04:00
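The fix presumably boils down to the standard idempotent-directory pattern, something like:

```python
from pathlib import Path

output_dir = Path("output")
output_dir.mkdir(parents=True, exist_ok=True)  # create the folder only if missing
(output_dir / "report.md").write_text("# Report\n")
```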
Lucas Gomide
d2caf11191 Support Python 3.10+ (on CI) and remove redundant Self imports (#2553)
* ci(workflows): add Python version matrix (3.10-3.12) for tests

* refactor: remove explicit Self import from typing

Python 3.10+ natively supports Self type annotation without explicit imports

* chore: rename external_memory file test

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-04-10 14:37:24 -04:00
Vini Brasil
37979a0ca1 Raise exception when flow fails (#2579) 2025-04-10 13:08:32 -04:00
devin-ai-integration[bot]
c9f47e6a37 Add result_as_answer parameter to @tool decorator (Fixes #2561) (#2562)
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
2025-04-10 09:01:26 -04:00
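A sketch of the decorator usage this adds, assuming it mirrors the existing `result_as_answer` behavior on tool classes (the tool itself is hypothetical):

```python
from crewai.tools import tool

@tool("Fetch raw data", result_as_answer=True)
def fetch_raw_data(query: str) -> str:
    """Return the payload verbatim as the agent's final answer."""
    return f"raw payload for {query!r}"
```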
x1x2
5780c3147a fix: correct parameter name in crew template test function (#2567)
This commit resolves an issue in the crew template generator where the test() 
function incorrectly uses 'openai_model_name' as a parameter name when calling 
Crew.test(), while the actual implementation expects 'eval_llm'.

The mismatch causes a TypeError when users run the generated test command:
"Crew.test() got an unexpected keyword argument 'openai_model_name'"

This change ensures that templates generated with 'crewai create crew' will 
produce code that aligns with the framework's API.
2025-04-10 08:51:10 -04:00
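Assuming a project generated by `crewai create crew` (the `MyProjectCrew` name is a placeholder), the corrected template call looks roughly like:

```python
from my_project.crew import MyProjectCrew  # hypothetical generated module

def test() -> None:
    """Test the crew execution and return the results."""
    inputs = {"topic": "AI LLMs"}
    # Was: openai_model_name="gpt-4o", which raised a TypeError.
    MyProjectCrew().crew().test(n_iterations=2, eval_llm="gpt-4o", inputs=inputs)
```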
João Moura
98ccbeb4bd new version 2025-04-09 18:13:41 -07:00
Tony Kipkemboi
fbb156b9de Docs: Alphabetize sections, add YouTube video, improve layout (#2560) 2025-04-09 14:14:03 -07:00
Lorenze Jay
b73960cebe KISS: Refactor LiteAgent integration in flows to use Agents instead. … (#2556)
* KISS: Refactor LiteAgent integration in flows to use Agents instead. Update documentation and examples to reflect changes in class usage, including async support and structured output handling. Enhance tests for Agent functionality and ensure compatibility with new features.

* lint fix

* dropped for clarity
2025-04-09 11:54:45 -07:00
Lucas Gomide
10328f3db4 chore: remove unsupported crew attributes from docs (#2557) 2025-04-09 11:34:49 -07:00
164 changed files with 18222 additions and 4952 deletions

.github/workflows/notify-downstream.yml

@@ -0,0 +1,33 @@
name: Notify Downstream

on:
  push:
    branches:
      - main

permissions:
  contents: read

jobs:
  notify-downstream:
    runs-on: ubuntu-latest
    steps:
      - name: Generate GitHub App token
        id: app-token
        uses: tibdex/github-app-token@v2
        with:
          app_id: ${{ secrets.OSS_SYNC_APP_ID }}
          private_key: ${{ secrets.OSS_SYNC_APP_PRIVATE_KEY }}

      - name: Notify Repo B
        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ steps.app-token.outputs.token }}
          repository: ${{ secrets.OSS_SYNC_DOWNSTREAM_REPO }}
          event-type: upstream-commit
          client-payload: |
            {
              "commit_sha": "${{ github.sha }}"
            }


@@ -12,6 +12,9 @@
  tests:
    runs-on: ubuntu-latest
    timeout-minutes: 15
+   strategy:
+     matrix:
+       python-version: ['3.10', '3.11', '3.12']
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
@@ -21,9 +24,8 @@
        with:
          enable-cache: true
-     - name: Set up Python
-       run: uv python install 3.12.8
+     - name: Set up Python ${{ matrix.python-version }}
+       run: uv python install ${{ matrix.python-version }}
      - name: Install the project
        run: uv sync --dev --all-extras


@@ -257,10 +257,14 @@ reporting_task:
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool
+from crewai.agents.agent_builder.base_agent import BaseAgent
+from typing import List

@CrewBase
class LatestAiDevelopmentCrew():
    """LatestAiDevelopment crew"""

+    agents: List[BaseAgent]
+    tasks: List[Task]

    @agent
    def researcher(self) -> Agent:


@@ -4,6 +4,36 @@ description: View the latest updates and changes to CrewAI
icon: timeline
---
<Update label="2025-04-07" description="v0.114.0">
## Release Highlights
<Frame>
<img src="/images/v01140.png" />
</Frame>
**New Features & Enhancements**
- Agents as an atomic unit. (`Agent(...).kickoff()`)
- Support for [Custom LLM implementations](https://docs.crewai.com/guides/advanced/custom-llm).
- Integrated External Memory and [Opik observability](https://docs.crewai.com/how-to/opik-observability).
- Enhanced YAML extraction.
- Multimodal agent validation.
- Added Secure fingerprints for agents and crews.
**Core Improvements & Fixes**
- Improved serialization, agent copying, and Python compatibility.
- Added wildcard support to `emit()`
- Added support for additional router calls and context window adjustments.
- Fixed typing issues, validation, and import statements.
- Improved method performance.
- Enhanced agent task handling, event emissions, and memory management.
- Fixed CLI issues, conditional tasks, cloning behavior, and tool outputs.
**Documentation & Guides**
- Improved documentation structure, theme, and organization.
- Added guides for Local NVIDIA NIM with WSL2, W&B Weave, and Arize Phoenix.
- Updated tool configuration examples, prompts, and observability docs.
- Guide on using singular agents within Flows.
</Update>
<Update label="2025-03-17" description="v0.108.0">
**Features**
- Converted tabs to spaces in `crew.py` template


@@ -118,7 +118,7 @@ class LatestAiDevelopmentCrew():
    @agent
    def researcher(self) -> Agent:
        return Agent(
-           config=self.agents_config['researcher'],
+           config=self.agents_config['researcher'], # type: ignore[index]
            verbose=True,
            tools=[SerperDevTool()]
        )
@@ -126,7 +126,7 @@ class LatestAiDevelopmentCrew():
    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
-           config=self.agents_config['reporting_analyst'],
+           config=self.agents_config['reporting_analyst'], # type: ignore[index]
            verbose=True
        )
```


@@ -179,7 +179,78 @@ def crew(self) -> Crew:
```
</Note>
-### 10. API Keys
+### 10. Deploy
Deploy the crew or flow to [CrewAI Enterprise](https://app.crewai.com).
- **Authentication**: You need to be authenticated to deploy to CrewAI Enterprise.
```shell Terminal
crewai signup
```
If you already have an account, you can login with:
```shell Terminal
crewai login
```
- **Create a deployment**: Once you are authenticated, you can create a deployment for your crew or flow from the root of your local project.
```shell Terminal
crewai deploy create
```
- Reads your local project configuration.
- Prompts you to confirm the environment variables (like `OPENAI_API_KEY`, `SERPER_API_KEY`) found locally. These will be securely stored with the deployment on the Enterprise platform. Ensure your sensitive keys are correctly configured locally (e.g., in a `.env` file) before running this.
- Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI Enterprise.
```shell Terminal
crewai deploy push
```
- Initiates the deployment process on the CrewAI Enterprise platform.
- Upon successful initiation, it will output the `Deployment created successfully!` message along with the Deployment Name and a unique Deployment ID (UUID).
- **Deployment Status**: You can check the status of your deployment with:
```shell Terminal
crewai deploy status
```
This fetches the latest deployment status of your most recent deployment attempt (e.g., `Building Images for Crew`, `Deploy Enqueued`, `Online`).
- **Deployment Logs**: You can check the logs of your deployment with:
```shell Terminal
crewai deploy logs
```
This streams the deployment logs to your terminal.
- **List deployments**: You can list all your deployments with:
```shell Terminal
crewai deploy list
```
This lists all your deployments.
- **Delete a deployment**: You can delete a deployment with:
```shell Terminal
crewai deploy remove
```
This deletes the deployment from the CrewAI Enterprise platform.
- **Help Command**: You can get help with the CLI with:
```shell Terminal
crewai deploy --help
```
This shows the help message for the CrewAI Deploy CLI.
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI Enterprise](http://app.crewai.com) using the CLI.
<iframe
width="100%"
height="400"
src="https://www.youtube.com/embed/3EqSV-CYDZA"
title="CrewAI Deployment Guide"
frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
></iframe>
### 11. API Keys
When running the `crewai create crew` command, the CLI will first show you the top 5 most common LLM providers and ask you to select one.


@@ -23,8 +23,7 @@ The `Crew` class has been enriched with several attributes to support advanced f
| **Process Flow** (`process`) | Defines execution logic (e.g., sequential, hierarchical) for task distribution. |
| **Verbose Logging** (`verbose`) | Provides detailed logging for monitoring and debugging. Accepts integer and boolean values to control verbosity level. |
| **Rate Limiting** (`max_rpm`) | Limits requests per minute to optimize resource usage. Setting guidelines depend on task complexity and load. |
-| **Internationalization / Customization** (`language`, `prompt_file`) | Supports prompt customization for global usability. [Example of file](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json) |
-| **Execution and Output Handling** (`full_output`) | Controls output granularity, distinguishing between full and final outputs. |
+| **Internationalization / Customization** (`prompt_file`) | Supports prompt customization for global usability. [Example of file](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json) |
| **Callback and Telemetry** (`step_callback`, `task_callback`) | Enables step-wise and task-level execution monitoring and telemetry for performance analytics. |
| **Crew Sharing** (`share_crew`) | Allows sharing crew data with CrewAI for model improvement. Privacy implications and benefits should be considered. |
| **Usage Metrics** (`usage_metrics`) | Logs all LLM usage metrics during task execution for performance insights. |
@@ -49,4 +48,4 @@ Consider a crew with a researcher agent tasked with data gathering and a writer
## Conclusion
The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.


@@ -20,13 +20,10 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** _(optional)_ | `config` | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
-| **Language** _(optional)_ | `language` | Language used for the crew, defaults to English. |
-| **Language File** _(optional)_ | `language_file` | Path to the language file to be used for the crew. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Memory Config** _(optional)_ | `memory_config` | Configuration for the memory provider to be used by the crew. |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
-| **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. Defaults to `False`. |
| **Step Callback** _(optional)_ | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** _(optional)_ | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
@@ -55,12 +52,16 @@ After creating your CrewAI project as outlined in the [Installation](/installati
```python code
from crewai import Agent, Crew, Task, Process
from crewai.project import CrewBase, agent, task, crew, before_kickoff, after_kickoff
+from crewai.agents.agent_builder.base_agent import BaseAgent
+from typing import List
@CrewBase
class YourCrewName:
"""Description of your crew"""
+    agents: List[BaseAgent]
+    tasks: List[Task]
# Paths to your YAML configuration files
# To see an example agent and task defined in YAML, checkout the following:
# - Task: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
@@ -83,27 +84,27 @@ class YourCrewName:
    @agent
    def agent_one(self) -> Agent:
        return Agent(
-           config=self.agents_config['agent_one'],
+           config=self.agents_config['agent_one'], # type: ignore[index]
            verbose=True
        )

    @agent
    def agent_two(self) -> Agent:
        return Agent(
-           config=self.agents_config['agent_two'],
+           config=self.agents_config['agent_two'], # type: ignore[index]
            verbose=True
        )

    @task
    def task_one(self) -> Task:
        return Task(
-           config=self.tasks_config['task_one']
+           config=self.tasks_config['task_one'] # type: ignore[index]
        )

    @task
    def task_two(self) -> Task:
        return Task(
-           config=self.tasks_config['task_two']
+           config=self.tasks_config['task_two'] # type: ignore[index]
        )
@crew


@@ -545,16 +545,20 @@ The `third_method` and `fourth_method` listen to the output of the `second_metho
When you run this Flow, the output will change based on the random boolean value generated by the `start_method`.
-## Adding LiteAgent to Flows
+## Adding Agents to Flows

-LiteAgents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use a LiteAgent within a flow to perform market research:
+Agents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use an Agent within a flow to perform market research:
```python
from typing import List, cast
from crewai_tools.tools.website_search.website_search_tool import WebsiteSearchTool
import asyncio
from typing import Any, Dict, List
from crewai_tools import SerperDevTool
from pydantic import BaseModel, Field
from crewai.agent import Agent
from crewai.flow.flow import Flow, listen, start
from crewai.lite_agent import LiteAgent
# Define a structured output format
class MarketAnalysis(BaseModel):
@@ -562,28 +566,30 @@ class MarketAnalysis(BaseModel):
market_size: str = Field(description="Estimated market size")
competitors: List[str] = Field(description="Major competitors in the space")
# Define flow state
class MarketResearchState(BaseModel):
product: str = ""
analysis: MarketAnalysis | None = None
# Create a flow class
class MarketResearchFlow(Flow[MarketResearchState]):
@start()
def initialize_research(self):
def initialize_research(self) -> Dict[str, Any]:
print(f"Starting market research for {self.state.product}")
return {"product": self.state.product}
@listen(initialize_research)
def analyze_market(self):
# Create a LiteAgent for market research
analyst = LiteAgent(
async def analyze_market(self) -> Dict[str, Any]:
# Create an Agent for market research
analyst = Agent(
role="Market Research Analyst",
goal=f"Analyze the market for {self.state.product}",
backstory="You are an experienced market analyst with expertise in "
"identifying market trends and opportunities.",
llm="gpt-4o",
tools=[WebsiteSearchTool()],
tools=[SerperDevTool()],
verbose=True,
response_format=MarketAnalysis,
)
# Define the research query
@@ -592,49 +598,65 @@ class MarketResearchFlow(Flow[MarketResearchState]):
1. Key market trends
2. Market size
3. Major competitors
Format your response according to the specified structure.
"""
# Execute the analysis
result = analyst.kickoff(query)
self.state.analysis = cast(MarketAnalysis, result.pydantic)
return result.pydantic
# Execute the analysis with structured output format
result = await analyst.kickoff_async(query, response_format=MarketAnalysis)
if result.pydantic:
print("result", result.pydantic)
else:
print("result", result)
# Return the analysis to update the state
return {"analysis": result.pydantic}
@listen(analyze_market)
def present_results(self):
analysis = self.state.analysis
if analysis is None:
print("No analysis results available")
return
def present_results(self, analysis) -> None:
print("\nMarket Analysis Results")
print("=====================")
print("\nKey Market Trends:")
for trend in analysis.key_trends:
print(f"- {trend}")
if isinstance(analysis, dict):
# If we got a dict with 'analysis' key, extract the actual analysis object
market_analysis = analysis.get("analysis")
else:
market_analysis = analysis
print(f"\nMarket Size: {analysis.market_size}")
if market_analysis and isinstance(market_analysis, MarketAnalysis):
print("\nKey Market Trends:")
for trend in market_analysis.key_trends:
print(f"- {trend}")
print(f"\nMarket Size: {market_analysis.market_size}")
print("\nMajor Competitors:")
for competitor in market_analysis.competitors:
print(f"- {competitor}")
else:
print("No structured analysis data available.")
print("Raw analysis:", analysis)
print("\nMajor Competitors:")
for competitor in analysis.competitors:
print(f"- {competitor}")
# Usage example
flow = MarketResearchFlow()
result = flow.kickoff(inputs={"product": "AI-powered chatbots"})
async def run_flow():
flow = MarketResearchFlow()
result = await flow.kickoff_async(inputs={"product": "AI-powered chatbots"})
return result
# Run the flow
if __name__ == "__main__":
asyncio.run(run_flow())
```
-This example demonstrates several key features of using LiteAgents in flows:
+This example demonstrates several key features of using Agents in flows:
1. **Structured Output**: Using Pydantic models to define the expected output format (`MarketAnalysis`) ensures type safety and structured data throughout the flow.
2. **State Management**: The flow state (`MarketResearchState`) maintains context between steps and stores both inputs and outputs.
-3. **Tool Integration**: LiteAgents can use tools (like `WebsiteSearchTool`) to enhance their capabilities.
-If you want to learn more about LiteAgents, check out the [LiteAgent](/concepts/lite-agent) page.
+3. **Tool Integration**: Agents can use tools (like `WebsiteSearchTool`) to enhance their capabilities.
## Adding Crews to Flows


@@ -42,6 +42,16 @@ CrewAI supports various types of knowledge sources out of the box:
| `collection_name` | **str** | No | Name of the collection where the knowledge will be stored. Used to identify different sets of knowledge. Defaults to "knowledge" if not provided. |
| `storage` | **Optional[KnowledgeStorage]** | No | Custom storage configuration for managing how the knowledge is stored and retrieved. If not provided, a default storage will be created. |
<Tip>
Unlike retrieval from a vector database using a tool, agents preloaded with knowledge will not need a retrieval persona or task.
Simply add the relevant knowledge sources your agent or crew needs to function.
Knowledge sources can be added at the agent or crew level.
Crew level knowledge sources will be used by **all agents** in the crew.
Agent level knowledge sources will be used by the **specific agent** that is preloaded with the knowledge.
</Tip>
## Quickstart Example
<Tip>
@@ -146,6 +156,26 @@ result = crew.kickoff(
)
```
## Knowledge Configuration
You can configure how knowledge is retrieved for the crew or agent.
```python Code
from crewai.knowledge.knowledge_config import KnowledgeConfig
knowledge_config = KnowledgeConfig(results_limit=10, score_threshold=0.5)
agent = Agent(
...
knowledge_config=knowledge_config
)
```
<Tip>
`results_limit`: the number of relevant documents to return. Default is 3.
`score_threshold`: the minimum score for a document to be considered relevant. Default is 0.35.
</Tip>
## More Examples
Here are examples of how to use different types of knowledge sources:


@@ -1,242 +0,0 @@
---
title: LiteAgent
description: A lightweight, single-purpose agent for simple autonomous tasks within the CrewAI framework.
icon: feather
---
## Overview
A `LiteAgent` is a streamlined version of CrewAI's Agent, designed for simpler, standalone tasks that don't require the full complexity of a crew-based workflow. It's perfect for quick automations, single-purpose tasks, or when you need a lightweight solution.
<Tip>
Think of a LiteAgent as a specialized worker that excels at individual tasks.
While regular Agents are team players in a crew, LiteAgents are solo
performers optimized for specific operations.
</Tip>
## LiteAgent Attributes
| Attribute | Parameter | Type | Description |
| :------------------------------- | :---------------- | :--------------------- | :-------------------------------------------------------------- |
| **Role** | `role` | `str` | Defines the agent's function and expertise. |
| **Goal** | `goal` | `str` | The specific objective that guides the agent's actions. |
| **Backstory** | `backstory` | `str` | Provides context and personality to the agent. |
| **LLM** _(optional)_ | `llm` | `Union[str, LLM, Any]` | Language model powering the agent. Defaults to "gpt-4". |
| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | Capabilities available to the agent. Defaults to an empty list. |
| **Verbose** _(optional)_ | `verbose` | `bool` | Enable detailed execution logs. Default is False. |
| **Response Format** _(optional)_ | `response_format` | `Type[BaseModel]` | Pydantic model for structured output. Optional. |
## Creating a LiteAgent
Here's a simple example of creating and using a standalone LiteAgent:
```python
from typing import List, cast
from crewai_tools import SerperDevTool
from pydantic import BaseModel, Field
from crewai.lite_agent import LiteAgent
# Define a structured output format
class MovieReview(BaseModel):
title: str = Field(description="The title of the movie")
rating: float = Field(description="Rating out of 10")
pros: List[str] = Field(description="List of positive aspects")
cons: List[str] = Field(description="List of negative aspects")
# Create a LiteAgent
critic = LiteAgent(
role="Movie Critic",
goal="Provide insightful movie reviews",
backstory="You are an experienced film critic known for balanced, thoughtful reviews.",
tools=[SerperDevTool()],
verbose=True,
response_format=MovieReview,
)
# Use the agent
query = """
Review the movie 'Inception'. Include:
1. Your rating out of 10
2. Key positive aspects
3. Areas that could be improved
"""
result = critic.kickoff(query)
# Access the structured output
review = cast(MovieReview, result.pydantic)
print(f"\nMovie Review: {review.title}")
print(f"Rating: {review.rating}/10")
print("\nPros:")
for pro in review.pros:
print(f"- {pro}")
print("\nCons:")
for con in review.cons:
print(f"- {con}")
```
This example demonstrates the core features of a LiteAgent:
- Structured output using Pydantic models
- Tool integration with WebSearchTool
- Simple execution with `kickoff()`
- Easy access to both raw and structured results
## Using LiteAgent in a Flow
For more complex scenarios, you can integrate LiteAgents into a Flow. Here's an example of a market research flow:
````python
from typing import List
from pydantic import BaseModel, Field
from crewai.flow.flow import Flow, start, listen
from crewai.lite_agent import LiteAgent
from crewai.tools import WebSearchTool
# Define a structured output format
class MarketAnalysis(BaseModel):
key_trends: List[str] = Field(description="List of identified market trends")
market_size: str = Field(description="Estimated market size")
competitors: List[str] = Field(description="Major competitors in the space")
# Define flow state
class MarketResearchState(BaseModel):
product: str = ""
analysis: MarketAnalysis = None
# Create a flow class
class MarketResearchFlow(Flow[MarketResearchState]):
@start()
def initialize_research(self, product: str):
print(f"Starting market research for {product}")
self.state.product = product
@listen(initialize_research)
async def analyze_market(self):
# Create a LiteAgent for market research
analyst = LiteAgent(
role="Market Research Analyst",
goal=f"Analyze the market for {self.state.product}",
backstory="You are an experienced market analyst with expertise in "
"identifying market trends and opportunities.",
tools=[WebSearchTool()],
verbose=True,
response_format=MarketAnalysis
)
# Define the research query
query = f"""
Research the market for {self.state.product}. Include:
1. Key market trends
2. Market size
3. Major competitors
Format your response according to the specified structure.
"""
# Execute the analysis
result = await analyst.kickoff_async(query)
self.state.analysis = result.pydantic
return result.pydantic
@listen(analyze_market)
def present_results(self):
analysis = self.state.analysis
print("\nMarket Analysis Results")
print("=====================")
print("\nKey Market Trends:")
for trend in analysis.key_trends:
print(f"- {trend}")
print(f"\nMarket Size: {analysis.market_size}")
print("\nMajor Competitors:")
for competitor in analysis.competitors:
print(f"- {competitor}")
# Usage example
import asyncio
async def run_flow():
flow = MarketResearchFlow()
result = await flow.kickoff(inputs={"product": "AI-powered chatbots"})
return result
# Run the flow
if __name__ == "__main__":
asyncio.run(run_flow())
## Key Features
### 1. Simplified Setup
Unlike regular Agents, LiteAgents are designed for quick setup and standalone operation. They don't require crew configuration or task management.
### 2. Structured Output
LiteAgents support Pydantic models for response formatting, making it easy to get structured, type-safe data from your agent's operations.
### 3. Tool Integration
Just like regular Agents, LiteAgents can use tools to enhance their capabilities:
```python
from crewai.tools import SerperDevTool, CalculatorTool
agent = LiteAgent(
role="Research Assistant",
goal="Find and analyze information",
tools=[SerperDevTool(), CalculatorTool()],
verbose=True
)
````
### 4. Async Support
LiteAgents support asynchronous execution through the `kickoff_async` method, making them suitable for non-blocking operations in your application.
## Response Formatting
LiteAgents support structured output through Pydantic models using the `response_format` parameter. This feature ensures type safety and consistent output structure, making it easier to work with agent responses in your application.
### Basic Usage
```python
from pydantic import BaseModel, Field
class SearchResult(BaseModel):
title: str = Field(description="The title of the found content")
summary: str = Field(description="A brief summary of the content")
relevance_score: float = Field(description="Relevance score from 0 to 1")
agent = LiteAgent(
role="Search Specialist",
goal="Find and summarize relevant information",
response_format=SearchResult
)
result = await agent.kickoff_async("Find information about quantum computing")
print(f"Title: {result.pydantic.title}")
print(f"Summary: {result.pydantic.summary}")
print(f"Relevance: {result.pydantic.relevance_score}")
```
### Handling Responses
When using `response_format`, the agent's response will be available in two forms:
1. **Raw Response**: Access the unstructured string response
```python
result = await agent.kickoff_async("Analyze the market")
print(result.raw) # Original LLM response
```
2. **Structured Response**: Access the parsed Pydantic model
```python
print(result.pydantic) # Parsed response as Pydantic model
print(result.pydantic.dict()) # Convert to dictionary
```


@@ -1,71 +0,0 @@
---
title: Using LlamaIndex Tools
description: Learn how to integrate LlamaIndex tools with CrewAI agents to enhance search-based queries and more.
icon: toolbox
---
## Using LlamaIndex Tools
<Info>
CrewAI seamlessly integrates with LlamaIndex's comprehensive toolkit for RAG (Retrieval-Augmented Generation) and agentic pipelines, enabling advanced search-based queries and more.
</Info>
Here are the available built-in tools offered by LlamaIndex.
```python Code
from crewai import Agent
from crewai_tools import LlamaIndexTool
# Example 1: Initialize from FunctionTool
from llama_index.core.tools import FunctionTool
your_python_function = lambda ...: ...
og_tool = FunctionTool.from_defaults(
your_python_function,
name="<name>",
description='<description>'
)
tool = LlamaIndexTool.from_tool(og_tool)
# Example 2: Initialize from LlamaHub Tools
from llama_index.tools.wolfram_alpha import WolframAlphaToolSpec
wolfram_spec = WolframAlphaToolSpec(app_id="<app_id>")
wolfram_tools = wolfram_spec.to_tool_list()
tools = [LlamaIndexTool.from_tool(t) for t in wolfram_tools]
# Example 3: Initialize Tool from a LlamaIndex Query Engine
query_engine = index.as_query_engine()
query_tool = LlamaIndexTool.from_query_engine(
query_engine,
name="Uber 2019 10K Query Tool",
description="Use this tool to lookup the 2019 Uber 10K Annual Report"
)
# Create and assign the tools to an agent
agent = Agent(
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[tool, *tools, query_tool]
)
# rest of the code ...
```
## Steps to Get Started
To effectively use the LlamaIndexTool, follow these steps:
<Steps>
<Step title="Package Installation">
Make sure that `crewai[tools]` package is installed in your Python environment:
<CodeGroup>
```shell Terminal
pip install 'crewai[tools]'
```
</CodeGroup>
</Step>
<Step title="Install and Use LlamaIndex">
Follow the LlamaIndex documentation [LlamaIndex Documentation](https://docs.llamaindex.ai/) to set up a RAG/agent pipeline.
</Step>
</Steps>


@@ -438,7 +438,7 @@ In this section, you'll find detailed examples that help you select, configure,
    @agent
    def researcher(self) -> Agent:
        return Agent(
-           config=self.agents_config['researcher'],
+           config=self.agents_config['researcher'], # type: ignore[index]
            llm=local_nvidia_nim_llm
        )
@@ -535,14 +535,13 @@ In this section, you'll find detailed examples that help you select, configure,
<Accordion title="Hugging Face">
Set the following environment variables in your `.env` file:
```toml Code
-HUGGINGFACE_API_KEY=<your-api-key>
+HF_TOKEN=<your-api-key>
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
model="huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct",
base_url="your_api_endpoint"
model="huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct"
)
```
</Accordion>

View File

@@ -145,6 +145,7 @@ from crewai.memory import LongTermMemory
# Simple memory configuration
crew = Crew(memory=True) # Uses default storage locations
```
Note that External Memory won't be defined when `memory=True` is set, as we can't infer which external memory would be suitable for your case.
### Custom Storage Configuration
```python
@@ -278,15 +279,19 @@ crew = Crew(
### Using External Memory
External Memory is a powerful feature that allows you to integrate external memory systems with your CrewAI applications. This is particularly useful when you want to use specialized memory providers or maintain memory across different applications.
Since it's an external memory, we're not able to add a default value for it, unlike with Long Term and Short Term memory.
#### Basic Usage with Mem0
The most common way to use External Memory is with Mem0 as the provider:
```python
import os
from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory
os.environ["MEM0_API_KEY"] = "YOUR-API-KEY"
agent = Agent(
role="You are a helpful assistant",
goal="Plan a vacation for the user",
@@ -304,7 +309,6 @@ crew = Crew(
tasks=[task],
verbose=True,
process=Process.sequential,
memory=True,
external_memory=ExternalMemory(
embedder_config={"provider": "mem0", "config": {"user_id": "U-123"}} # you can provide an entire Mem0 configuration
),
@@ -363,7 +367,6 @@ crew = Crew(
tasks=[task],
verbose=True,
process=Process.sequential,
memory=True,
external_memory=external_memory,
)

View File

@@ -113,7 +113,7 @@ class LatestAiDevelopmentCrew():
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'],
config=self.agents_config['researcher'], # type: ignore[index]
verbose=True,
tools=[SerperDevTool()]
)
@@ -121,20 +121,20 @@ class LatestAiDevelopmentCrew():
@agent
def reporting_analyst(self) -> Agent:
return Agent(
config=self.agents_config['reporting_analyst'],
config=self.agents_config['reporting_analyst'], # type: ignore[index]
verbose=True
)
@task
def research_task(self) -> Task:
return Task(
config=self.tasks_config['research_task']
config=self.tasks_config['research_task'] # type: ignore[index]
)
@task
def reporting_task(self) -> Task:
return Task(
config=self.tasks_config['reporting_task']
config=self.tasks_config['reporting_task'] # type: ignore[index]
)
@crew
@@ -288,26 +288,20 @@ To add a guardrail to a task, provide a validation function through the `guardra
```python Code
from typing import Tuple, Union, Dict, Any
from crewai import TaskOutput
def validate_blog_content(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
def validate_blog_content(result: TaskOutput) -> Tuple[bool, Any]:
"""Validate blog content meets requirements."""
try:
# Check word count
word_count = len(result.split())
if word_count > 200:
return (False, {
"error": "Blog content exceeds 200 words",
"code": "WORD_COUNT_ERROR",
"context": {"word_count": word_count}
})
return (False, "Blog content exceeds 200 words")
# Additional validation logic here
return (True, result.strip())
except Exception as e:
return (False, {
"error": "Unexpected error during validation",
"code": "SYSTEM_ERROR"
})
return (False, "Unexpected error during validation")
blog_task = Task(
description="Write a blog post about AI",
@@ -325,29 +319,24 @@ blog_task = Task(
- Type hints are recommended but optional
2. **Return Values**:
- Success: Return `(True, validated_result)`
- Failure: Return `(False, error_details)`
- On success: return a tuple of `(bool, Any)`, for example `(True, validated_result)`
- On failure: return a tuple of `(bool, str)`, for example `(False, "Error message explaining the failure")`
### Error Handling Best Practices
1. **Structured Error Responses**:
```python Code
def validate_with_context(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
from crewai import TaskOutput
def validate_with_context(result: TaskOutput) -> Tuple[bool, Any]:
try:
# Main validation logic
validated_data = perform_validation(result)
return (True, validated_data)
except ValidationError as e:
return (False, {
"error": str(e),
"code": "VALIDATION_ERROR",
"context": {"input": result}
})
return (False, f"VALIDATION_ERROR: {str(e)}")
except Exception as e:
return (False, {
"error": "Unexpected error",
"code": "SYSTEM_ERROR"
})
return (False, str(e))
```
2. **Error Categories**:
@@ -358,28 +347,25 @@ def validate_with_context(result: str) -> Tuple[bool, Union[Dict[str, Any], str]
3. **Validation Chain**:
```python Code
from typing import Any, Dict, List, Tuple, Union
from crewai import TaskOutput
def complex_validation(result: str) -> Tuple[bool, Union[str, Dict[str, Any]]]:
def complex_validation(result: TaskOutput) -> Tuple[bool, Any]:
"""Chain multiple validation steps."""
# Step 1: Basic validation
if not result:
return (False, {"error": "Empty result", "code": "EMPTY_INPUT"})
return (False, "Empty result")
# Step 2: Content validation
try:
validated = validate_content(result)
if not validated:
return (False, {"error": "Invalid content", "code": "CONTENT_ERROR"})
return (False, "Invalid content")
# Step 3: Format validation
formatted = format_output(validated)
return (True, formatted)
except Exception as e:
return (False, {
"error": str(e),
"code": "VALIDATION_ERROR",
"context": {"step": "content_validation"}
})
return (False, str(e))
```
### Handling Guardrail Results
@@ -394,19 +380,16 @@ When a guardrail returns `(False, error)`:
Example with retry handling:
```python Code
import json
from typing import Any, Tuple
from crewai import TaskOutput, Task
def validate_json_output(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
def validate_json_output(result: TaskOutput) -> Tuple[bool, Any]:
"""Validate and parse JSON output."""
try:
# Try to parse as JSON
data = json.loads(result)
return (True, data)
except json.JSONDecodeError as e:
return (False, {
"error": "Invalid JSON format",
"code": "JSON_ERROR",
"context": {"line": e.lineno, "column": e.colno}
})
return (False, "Invalid JSON format")
task = Task(
description="Generate a JSON report",

View File

@@ -8,25 +8,27 @@
"dark": "#C94C3C"
},
"favicon": "favicon.svg",
"contextual": {
"options": ["copy", "view", "chatgpt", "claude"]
},
"navigation": {
"tabs": [
{
"tab": "Get Started",
"tab": "Documentation",
"groups": [
{
"group": "Get Started",
"pages": [
"introduction",
"installation",
"quickstart",
"changelog"
"quickstart"
]
},
{
"group": "Guides",
"pages": [
{
"group": "Concepts",
"group": "Strategy",
"pages": [
"guides/concepts/evaluating-use-cases"
]
@@ -66,7 +68,6 @@
"concepts/tasks",
"concepts/crews",
"concepts/flows",
"concepts/lite-agent",
"concepts/knowledge",
"concepts/llms",
"concepts/processes",
@@ -77,42 +78,7 @@
"concepts/testing",
"concepts/cli",
"concepts/tools",
"concepts/event-listener",
"concepts/langchain-tools",
"concepts/llamaindex-tools"
]
},
{
"group": "How to Guides",
"pages": [
"how-to/create-custom-tools",
"how-to/sequential-process",
"how-to/hierarchical-process",
"how-to/custom-manager-agent",
"how-to/llm-connections",
"how-to/customizing-agents",
"how-to/multimodal-agents",
"how-to/coding-agents",
"how-to/force-tool-output-as-result",
"how-to/human-input-on-execution",
"how-to/kickoff-async",
"how-to/kickoff-for-each",
"how-to/replay-tasks-from-latest-crew-kickoff",
"how-to/conditional-tasks"
]
},
{
"group": "Agent Monitoring & Observability",
"pages": [
"how-to/agentops-observability",
"how-to/arize-phoenix-observability",
"how-to/langfuse-observability",
"how-to/langtrace-observability",
"how-to/mlflow-observability",
"how-to/openlit-observability",
"how-to/opik-observability",
"how-to/portkey-observability",
"how-to/weave-integration"
"concepts/event-listener"
]
},
{
@@ -142,6 +108,7 @@
"tools/hyperbrowserloadtool",
"tools/linkupsearchtool",
"tools/llamaindextool",
"tools/langchaintool",
"tools/serperdevtool",
"tools/s3readertool",
"tools/s3writertool",
@@ -171,6 +138,40 @@
"tools/youtubevideosearchtool"
]
},
{
"group": "Agent Monitoring & Observability",
"pages": [
"how-to/agentops-observability",
"how-to/arize-phoenix-observability",
"how-to/langfuse-observability",
"how-to/langtrace-observability",
"how-to/mlflow-observability",
"how-to/openlit-observability",
"how-to/opik-observability",
"how-to/portkey-observability",
"how-to/weave-integration"
]
},
{
"group": "Learn",
"pages": [
"how-to/conditional-tasks",
"how-to/coding-agents",
"how-to/create-custom-tools",
"how-to/custom-llm",
"how-to/custom-manager-agent",
"how-to/customizing-agents",
"how-to/force-tool-output-as-result",
"how-to/hierarchical-process",
"how-to/human-input-on-execution",
"how-to/kickoff-async",
"how-to/kickoff-for-each",
"how-to/llm-connections",
"how-to/multimodal-agents",
"how-to/replay-tasks-from-latest-crew-kickoff",
"how-to/sequential-process"
]
},
{
"group": "Telemetry",
"pages": [
@@ -179,6 +180,42 @@
}
]
},
{
"tab": "Enterprise",
"groups": [
{
"group": "Getting Started",
"pages": [
"enterprise/introduction"
]
},
{
"group": "How-To Guides",
"pages": [
"enterprise/guides/build-crew",
"enterprise/guides/deploy-crew",
"enterprise/guides/kickoff-crew",
"enterprise/guides/update-crew",
"enterprise/guides/use-crew-api",
"enterprise/guides/enable-crew-studio"
]
},
{
"group": "Features",
"pages": [
"enterprise/features/tool-repository",
"enterprise/features/webhook-streaming",
"enterprise/features/traces"
]
},
{
"group": "Resources",
"pages": [
"enterprise/resources/frequently-asked-questions"
]
}
]
},
{
"tab": "Examples",
"groups": [
@@ -189,14 +226,35 @@
]
}
]
},
{
"tab": "Releases",
"groups": [
{
"group": "Releases",
"pages": [
"changelog"
]
}
]
}
],
"global": {
"anchors": [
{
"anchor": "Community",
"anchor": "Website",
"href": "https://crewai.com",
"icon": "globe"
},
{
"anchor": "Forum",
"href": "https://community.crewai.com",
"icon": "discourse"
},
{
"anchor": "Get Help",
"href": "mailto:support@crewai.com",
"icon": "headset"
}
]
}
@@ -210,6 +268,12 @@
"strict": false
},
"navbar": {
"links": [
{
"label": "Start Free Trial",
"href": "https://app.crewai.com"
}
],
"primary": {
"type": "github",
"href": "https://github.com/crewAIInc/crewAI"
@@ -219,7 +283,12 @@
"prompt": "Search CrewAI docs"
},
"seo": {
"indexing": "navigable"
"indexing": "all"
},
"errors": {
"404": {
"redirect": true
}
},
"footer": {
"socials": {
@@ -231,4 +300,4 @@
"reddit": "https://www.reddit.com/r/crewAIInc/"
}
}
}
}

View File

@@ -0,0 +1,106 @@
---
title: Tool Repository
description: "Using the Tool Repository to manage your tools"
icon: "toolbox"
---
## Overview
The Tool Repository is a package manager for CrewAI tools. It allows users to publish, install, and manage tools that integrate with CrewAI crews and flows.
Tools can be:
- **Private**: accessible only within your organization (default)
- **Public**: accessible to all CrewAI users if published with the `--public` flag
The repository is not a version control system. Use Git to track code changes and enable collaboration.
## Prerequisites
Before using the Tool Repository, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account
- [CrewAI CLI](https://docs.crewai.com/concepts/cli#cli) installed
- [Git](https://git-scm.com) installed and configured
- Access permissions to publish or install tools in your CrewAI Enterprise organization
## Installing Tools
To install a tool:
```bash
crewai tool install <tool-name>
```
This installs the tool and adds it to `pyproject.toml`.
## Creating and Publishing Tools
To create a new tool project:
```bash
crewai tool create <tool-name>
```
This generates a scaffolded tool project locally.
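At the heart of the scaffold is a tool class. As a rough sketch of what such a class looks like (the exact import path and fields come from the tools docs; the class below is a hypothetical example):
```python
# A minimal sketch of a custom tool, assuming BaseTool is exported by crewai.tools
from crewai.tools import BaseTool

class MySearchTool(BaseTool):  # hypothetical example tool
    name: str = "My Search Tool"
    description: str = "Searches internal documents for a query."

    def _run(self, query: str) -> str:
        # Replace with your tool's real logic
        return f"Top results for: {query}"
```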
After making changes, initialize a Git repository and commit the code:
```bash
git init
git add .
git commit -m "Initial version"
```
To publish the tool:
```bash
crewai tool publish
```
By default, tools are published as private. To make a tool public:
```bash
crewai tool publish --public
```
For more details on how to build tools, see [Creating your own tools](https://docs.crewai.com/concepts/tools#creating-your-own-tools).
## Updating Tools
To update a published tool:
1. Modify the tool locally
2. Update the version in `pyproject.toml` (e.g., from `0.1.0` to `0.1.1`)
3. Commit the changes and publish
```bash
git commit -m "Update version to 0.1.1"
crewai tool publish
```
## Deleting Tools
To delete a tool:
1. Go to [CrewAI Enterprise](https://app.crewai.com)
2. Navigate to **Tools**
3. Select the tool
4. Click **Delete**
<Warning>
Deletion is permanent. Deleted tools cannot be restored or re-installed.
</Warning>
## Security Checks
Every published version undergoes automated security checks and is only available to install after it passes.
You can check the security check status of a tool at:
`CrewAI Enterprise > Tools > Your Tool > Versions`
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with API integration or troubleshooting.
</Card>

View File

@@ -0,0 +1,146 @@
---
title: Traces
description: "Using Traces to monitor your Crews"
icon: "timeline"
---
## Overview
Traces provide comprehensive visibility into your crew executions, helping you monitor performance, debug issues, and optimize your AI agent workflows.
## What are Traces?
Traces in CrewAI Enterprise are detailed execution records that capture every aspect of your crew's operation, from initial inputs to final outputs. They record:
- Agent thoughts and reasoning
- Task execution details
- Tool usage and outputs
- Token consumption metrics
- Execution times
- Cost estimates
<Frame>
![Traces Overview](/images/enterprise/traces-overview.png)
</Frame>
## Accessing Traces
<Steps>
<Step title="Navigate to the Traces Tab">
Once in your CrewAI Enterprise dashboard, click on the **Traces** tab to view all execution records.
</Step>
<Step title="Select an Execution">
You'll see a list of all crew executions, sorted by date. Click on any execution to view its detailed trace.
</Step>
</Steps>
## Understanding the Trace Interface
The trace interface is divided into several sections, each providing different insights into your crew's execution:
### 1. Execution Summary
The top section displays high-level metrics about the execution:
- **Total Tokens**: Number of tokens consumed across all tasks
- **Prompt Tokens**: Tokens used in prompts to the LLM
- **Completion Tokens**: Tokens generated in LLM responses
- **Requests**: Number of API calls made
- **Execution Time**: Total duration of the crew run
- **Estimated Cost**: Approximate cost based on token usage
<Frame>
![Execution Summary](/images/enterprise/trace-summary.png)
</Frame>
### 2. Tasks & Agents
This section shows all tasks and agents that were part of the crew execution:
- Task name and agent assignment
- Agents and LLMs used for each task
- Status (completed/failed)
- Individual execution time of the task
<Frame>
![Task List](/images/enterprise/trace-tasks.png)
</Frame>
### 3. Final Output
Displays the final result produced by the crew after all tasks are completed.
<Frame>
![Final Output](/images/enterprise/final-output.png)
</Frame>
### 4. Execution Timeline
A visual representation of when each task started and ended, helping you identify bottlenecks or parallel execution patterns.
<Frame>
![Execution Timeline](/images/enterprise/trace-timeline.png)
</Frame>
### 5. Detailed Task View
When you click on a specific task in the timeline or task list, you'll see:
<Frame>
![Detailed Task View](/images/enterprise/trace-detailed-task.png)
</Frame>
- **Task Key**: Unique identifier for the task
- **Task ID**: Technical identifier in the system
- **Status**: Current state (completed/running/failed)
- **Agent**: Which agent performed the task
- **LLM**: Language model used for this task
- **Start/End Time**: When the task began and completed
- **Execution Time**: Duration of this specific task
- **Task Description**: What the agent was instructed to do
- **Expected Output**: What output format was requested
- **Input**: Any input provided to this task from previous tasks
- **Output**: The actual result produced by the agent
## Using Traces for Debugging
Traces are invaluable for troubleshooting issues with your crews:
<Steps>
<Step title="Identify Failure Points">
When a crew execution doesn't produce the expected results, examine the trace to find where things went wrong. Look for:
- Failed tasks
- Unexpected agent decisions
- Tool usage errors
- Misinterpreted instructions
<Frame>
![Failure Points](/images/enterprise/failure.png)
</Frame>
</Step>
<Step title="Optimize Performance">
Use execution metrics to identify performance bottlenecks:
- Tasks that took longer than expected
- Excessive token usage
- Redundant tool operations
- Unnecessary API calls
</Step>
<Step title="Improve Cost Efficiency">
Analyze token usage and cost estimates to optimize your crew's efficiency:
- Consider using smaller models for simpler tasks
- Refine prompts to be more concise
- Cache frequently accessed information
- Structure tasks to minimize redundant operations
</Step>
</Steps>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with trace analysis or any other CrewAI Enterprise features.
</Card>

View File

@@ -0,0 +1,82 @@
---
title: Webhook Streaming
description: "Using Webhook Streaming to stream events to your webhook"
icon: "webhook"
---
## Overview
Enterprise Event Streaming lets you receive real-time webhook updates about your crews and flows deployed to
CrewAI Enterprise, such as model calls, tool usage, and flow steps.
## Usage
When using the Kickoff API, include a `webhooks` object in your request, for example:
```json
{
  "inputs": {"foo": "bar"},
  "webhooks": {
    "events": ["crew_kickoff_started", "llm_call_started"],
    "url": "https://your.endpoint/webhook",
    "realtime": false,
    "authentication": {
      "strategy": "bearer",
      "token": "my-secret-token"
    }
  }
}
```
If `realtime` is set to `true`, each event is delivered individually and immediately, at the cost of crew/flow performance.
## Webhook Format
Each webhook sends a list of events:
```json
{
  "events": [
    {
      "id": "event-id",
      "execution_id": "crew-run-id",
      "timestamp": "2025-02-16T10:58:44.965Z",
      "type": "llm_call_started",
      "data": {
        "model": "gpt-4",
        "messages": [
          {"role": "system", "content": "You are an assistant."},
          {"role": "user", "content": "Summarize this article."}
        ]
      }
    }
  ]
}
```
The `data` object structure varies by event type. Refer to the [event list](https://github.com/crewAIInc/crewAI/tree/main/src/crewai/utilities/events) on GitHub.
As requests are sent over HTTP, the order of events can't be guaranteed. If you need ordering, use the `timestamp` field.
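For example, a receiving endpoint can re-sort each batch before processing it. Below is a minimal sketch, assuming a FastAPI server (any web framework works; the endpoint name and handling logic are illustrative):
```python
# A minimal sketch of a webhook receiver that restores event order by timestamp
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/webhook")
async def receive_events(request: Request):
    payload = await request.json()
    # ISO-8601 timestamps sort chronologically as plain strings
    events = sorted(payload["events"], key=lambda e: e["timestamp"])
    for event in events:
        print(event["type"], event["timestamp"])
    return {"ok": True}
```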
## Supported Events
CrewAI supports both system events and custom events in Enterprise Event Streaming. These events are sent to your configured webhook endpoint during crew and flow execution.
- `crew_kickoff_started`
- `crew_step_started`
- `crew_step_completed`
- `crew_execution_completed`
- `llm_call_started`
- `llm_call_completed`
- `tool_usage_started`
- `tool_usage_completed`
- `crew_test_failed`
- *...and others*
Event names match the internal event bus. See [GitHub source](https://github.com/crewAIInc/crewAI/tree/main/src/crewai/utilities/events) for the full list.
You can emit your own custom events, and they will be delivered through the webhook stream alongside system events.
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with webhook integration or troubleshooting.
</Card>

View File

@@ -0,0 +1,43 @@
---
title: "Build Crew"
description: "A Crew is a group of agents that work together to complete a task."
icon: "people-arrows"
---
<Tip>
[CrewAI Enterprise](https://app.crewai.com) streamlines the process of **creating**, **deploying**, and **managing** your AI agents in production environments.
</Tip>
## Getting Started
<iframe
width="100%"
height="400"
src="https://www.youtube.com/embed/d1Yp8eeknDk?si=tIxnTRI5UlyCp3z_"
title="Building Crews with CrewAI CLI"
frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
></iframe>
### Installation and Setup
<Card title="Follow Standard Installation" icon="wrench" href="/installation">
Follow our standard installation guide to set up CrewAI CLI and create your first project.
</Card>
### Building Your Crew
<Card title="Quickstart Tutorial" icon="rocket" href="/quickstart">
Follow our quickstart guide to create your first agent crew using YAML configuration.
</Card>
## Support and Resources
For Enterprise-specific support or questions, contact our dedicated support team at [support@crewai.com](mailto:support@crewai.com).
<Card title="Schedule a Demo" icon="calendar" href="mailto:support@crewai.com">
Book time with our team to learn more about Enterprise features and how they can benefit your organization.
</Card>

View File

@@ -0,0 +1,216 @@
---
title: "Deploy Crew"
description: "Deploy your local CrewAI project to the Enterprise platform"
icon: "cloud-arrow-up"
---
## Option 1: CLI Deployment
<Tip>
This video tutorial walks you through the process of deploying your locally developed CrewAI project to the CrewAI Enterprise platform,
transforming it into a production-ready API endpoint.
</Tip>
<iframe
width="100%"
height="400"
src="https://www.youtube.com/embed/3EqSV-CYDZA"
title="Deploying a Crew to CrewAI Enterprise"
frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
></iframe>
## Prerequisites
Before starting the deployment process, make sure you have:
- A CrewAI project built locally ([follow our quickstart guide](/quickstart) if you haven't created one yet)
- Your code pushed to a GitHub repository
- The latest version of the CrewAI CLI installed (`uv tool install crewai`)
<Note>
For a quick reference project, you can clone our example repository at [github.com/tonykipkemboi/crewai-latest-ai-development](https://github.com/tonykipkemboi/crewai-latest-ai-development).
</Note>
### Step 1: Authenticate with the Enterprise Platform
First, you need to authenticate your CLI with the CrewAI Enterprise platform:
```bash
# If you already have a CrewAI Enterprise account
crewai login
# If you're creating a new account
crewai signup
```
When you run either command, the CLI will:
1. Display a URL and a unique device code
2. Open your browser to the authentication page
3. Prompt you to confirm the device
4. Complete the authentication process
Upon successful authentication, you'll see a confirmation message in your terminal!
### Step 2: Create a Deployment
From your project directory, run:
```bash
crewai deploy create
```
This command will:
1. Detect your GitHub repository information
2. Identify environment variables in your local `.env` file
3. Securely transfer these variables to the Enterprise platform
4. Create a new deployment with a unique identifier
On successful creation, you'll see a message like:
```shell
Deployment created successfully!
Name: your_project_name
Deployment ID: 01234567-89ab-cdef-0123-456789abcdef
Current Status: Deploy Enqueued
```
### Step 3: Monitor Deployment Progress
Track the deployment status with:
```bash
crewai deploy status
```
For detailed logs of the build process:
```bash
crewai deploy logs
```
<Tip>
The first deployment typically takes 10-15 minutes as it builds the container images. Subsequent deployments are much faster.
</Tip>
### Additional CLI Commands
The CrewAI CLI offers several commands to manage your deployments:
```bash
# List all your deployments
crewai deploy list
# Get the status of your deployment
crewai deploy status
# View the logs of your deployment
crewai deploy logs
# Push updates after code changes
crewai deploy push
# Remove a deployment
crewai deploy remove <deployment_id>
```
## Option 2: Deploy Directly via Web Interface
You can also deploy your crews directly through the CrewAI Enterprise web interface by connecting your GitHub account. This approach doesn't require using the CLI on your local machine.
### Step 1: Pushing to GitHub
First, you need to push your crew to a GitHub repository. If you haven't created a crew yet, you can [follow this tutorial](/quickstart).
### Step 2: Connecting GitHub to CrewAI Enterprise
1. Log in to [CrewAI Enterprise](https://app.crewai.com)
2. Click on the "Connect GitHub" button
<Frame>
![Connect GitHub Button](/images/enterprise/connect-github.png)
</Frame>
### Step 3: Select the Repository
After connecting your GitHub account, you'll be able to select which repository to deploy:
<Frame>
![Select Repository](/images/enterprise/select-repo.png)
</Frame>
### Step 4: Set Environment Variables
Before deploying, you'll need to set up your environment variables to connect to your LLM provider or other services:
1. You can add variables individually or in bulk
2. Enter your environment variables in `KEY=VALUE` format (one per line)
<Frame>
![Set Environment Variables](/images/enterprise/set-env-variables.png)
</Frame>
### Step 5: Deploy Your Crew
1. Click the "Deploy" button to start the deployment process
2. You can monitor the progress through the progress bar
3. The first deployment typically takes around 10-15 minutes; subsequent deployments will be faster
<Frame>
![Deploy Progress](/images/enterprise/deploy-progress.png)
</Frame>
Once deployment is complete, you'll see:
- Your crew's unique URL
- A Bearer token to protect your crew API
- A "Delete" button if you need to remove the deployment
### Interact with Your Deployed Crew
Once deployment is complete, you can access your crew through:
1. **REST API**: The platform generates a unique HTTPS endpoint with these key routes:
- `/inputs`: Lists the required input parameters
- `/kickoff`: Initiates an execution with provided inputs
- `/status/{kickoff_id}`: Checks the execution status
2. **Web Interface**: Visit [app.crewai.com](https://app.crewai.com) to access:
- **Status tab**: View deployment information, API endpoint details, and authentication token
- **Run tab**: Visual representation of your crew's structure
- **Executions tab**: History of all executions
- **Metrics tab**: Performance analytics
- **Traces tab**: Detailed execution insights
### Trigger an Execution
From the Enterprise dashboard, you can:
1. Click on your crew's name to open its details
2. Select "Trigger Crew" from the management interface
3. Enter the required inputs in the modal that appears
4. Monitor progress as the execution moves through the pipeline
## Monitoring and Analytics
The Enterprise platform provides comprehensive observability features:
- **Execution Management**: Track active and completed runs
- **Traces**: Detailed breakdowns of each execution
- **Metrics**: Token usage, execution times, and costs
- **Timeline View**: Visual representation of task sequences
## Advanced Features
The Enterprise platform also offers:
- **Environment Variables Management**: Securely store and manage API keys
- **LLM Connections**: Configure integrations with various LLM providers
- **Custom Tools Repository**: Create, share, and install tools
- **Crew Studio**: Build crews through a chat interface without writing code
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with deployment issues or questions about the Enterprise platform.
</Card>

View File

@@ -0,0 +1,166 @@
---
title: "Enable Crew Studio"
description: "Enabling Crew Studio on CrewAI Enterprise"
icon: "comments"
---
<Tip>
Crew Studio is a powerful **no-code/low-code** tool that allows you to quickly scaffold or build Crews through a conversational interface.
</Tip>
## What is Crew Studio?
Crew Studio is an innovative way to create AI agent crews without writing code.
<Frame>
![Crew Studio Interface](/images/enterprise/crew-studio-interface.png)
</Frame>
With Crew Studio, you can:
- Chat with the Crew Assistant to describe your problem
- Automatically generate agents and tasks
- Select appropriate tools
- Configure necessary inputs
- Generate downloadable code for customization
- Deploy directly to the CrewAI Enterprise platform
## Configuration Steps
Before you can start using Crew Studio, you need to configure your LLM connections:
<Steps>
<Step title="Set Up LLM Connection">
Go to the **LLM Connections** tab in your CrewAI Enterprise dashboard and create a new LLM connection.
<Note>
Feel free to use any LLM provider you want that is supported by CrewAI.
</Note>
Configure your LLM connection:
- Enter a `Connection Name` (e.g., `OpenAI`)
- Select your model provider: `openai` or `azure`
- Select models you'd like to use in your Studio-generated Crews
- We recommend at least `gpt-4o`, `o1-mini`, and `gpt-4o-mini`
- Add your API key as an environment variable:
- For OpenAI: Add `OPENAI_API_KEY` with your API key
- For Azure OpenAI: Refer to [this article](https://blog.crewai.com/configuring-azure-openai-with-crewai-a-comprehensive-guide/) for configuration details
- Click `Add Connection` to save your configuration
<Frame>
![LLM Connection Configuration](/images/enterprise/llm-connection-config.png)
</Frame>
</Step>
<Step title="Verify Connection Added">
Once you complete the setup, you'll see your new connection added to the list of available connections.
<Frame>
![Connection Added](/images/enterprise/connection-added.png)
</Frame>
</Step>
<Step title="Configure LLM Defaults">
In the main menu, go to **Settings → Defaults** and configure the LLM Defaults settings:
- Select default models for agents and other components
- Set default configurations for Crew Studio
Click `Save Settings` to apply your changes.
<Frame>
![LLM Defaults Configuration](/images/enterprise/llm-defaults.png)
</Frame>
</Step>
</Steps>
## Using Crew Studio
Now that you've configured your LLM connection and default settings, you're ready to start using Crew Studio!
<Steps>
<Step title="Access Studio">
Navigate to the **Studio** section in your CrewAI Enterprise dashboard.
</Step>
<Step title="Start a Conversation">
Start a conversation with the Crew Assistant by describing the problem you want to solve:
```md
I need a crew that can research the latest AI developments and create a summary report.
```
The Crew Assistant will ask clarifying questions to better understand your requirements.
</Step>
<Step title="Review Generated Crew">
Review the generated crew configuration, including:
- Agents and their roles
- Tasks to be performed
- Required inputs
- Tools to be used
This is your opportunity to refine the configuration before proceeding.
</Step>
<Step title="Deploy or Download">
Once you're satisfied with the configuration, you can:
- Download the generated code for local customization
- Deploy the crew directly to the CrewAI Enterprise platform
- Modify the configuration and regenerate the crew
</Step>
<Step title="Test Your Crew">
After deployment, test your crew with sample inputs to ensure it performs as expected.
</Step>
</Steps>
<Tip>
For best results, provide clear, detailed descriptions of what you want your crew to accomplish. Include specific inputs and expected outputs in your description.
</Tip>
## Example Workflow
Here's a typical workflow for creating a crew with Crew Studio:
<Steps>
<Step title="Describe Your Problem">
Start by describing your problem:
```md
I need a crew that can analyze financial news and provide investment recommendations
```
</Step>
<Step title="Answer Questions">
Respond to clarifying questions from the Crew Assistant to refine your requirements.
</Step>
<Step title="Review the Plan">
Review the generated crew plan, which might include:
- A Research Agent to gather financial news
- An Analysis Agent to interpret the data
- A Recommendations Agent to provide investment advice
</Step>
<Step title="Approve or Modify">
Approve the plan or request changes if necessary.
</Step>
<Step title="Download or Deploy">
Download the code for customization or deploy directly to the platform.
</Step>
<Step title="Test and Refine">
Test your crew with sample inputs and refine as needed.
</Step>
</Steps>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Crew Studio or any other CrewAI Enterprise features.
</Card>

View File

@@ -0,0 +1,186 @@
---
title: "Kickoff Crew"
description: "Kickoff a Crew on CrewAI Enterprise"
icon: "flag-checkered"
---
# Kickoff a Crew on CrewAI Enterprise
Once you've deployed your crew to the CrewAI Enterprise platform, you can kickoff executions through the web interface or the API. This guide covers both approaches.
## Method 1: Using the Web Interface
### Step 1: Navigate to Your Deployed Crew
1. Log in to [CrewAI Enterprise](https://app.crewai.com)
2. Click on the crew name from your projects list
3. You'll be taken to the crew's detail page
<Frame>
![Crew Dashboard](/images/enterprise/crew-dashboard.png)
</Frame>
### Step 2: Initiate Execution
From your crew's detail page, you have two options to kickoff an execution:
#### Option A: Quick Kickoff
1. Click the `Kickoff` link in the Test Endpoints section
2. Enter the required input parameters for your crew in the JSON editor
3. Click the `Send Request` button
<Frame>
![Kickoff Endpoint](/images/enterprise/kickoff-endpoint.png)
</Frame>
#### Option B: Using the Visual Interface
1. Click the `Run` tab in the crew detail page
2. Enter the required inputs in the form fields
3. Click the `Run Crew` button
<Frame>
![Run Crew](/images/enterprise/run-crew.png)
</Frame>
### Step 3: Monitor Execution Progress
After initiating the execution:
1. You'll receive a response containing a `kickoff_id` - **copy this ID**
2. This ID is essential for tracking your execution
<Frame>
![Copy Task ID](/images/enterprise/copy-task-id.png)
</Frame>
### Step 4: Check Execution Status
To monitor the progress of your execution:
1. Click the "Status" endpoint in the Test Endpoints section
2. Paste the `kickoff_id` into the designated field
3. Click the "Get Status" button
<Frame>
![Get Status](/images/enterprise/get-status.png)
</Frame>
The status response will show:
- Current execution state (`running`, `completed`, etc.)
- Details about which tasks are in progress
- Any outputs produced so far
### Step 5: View Final Results
Once execution is complete:
1. The status will change to `completed`
2. You can view the full execution results and outputs
3. For a more detailed view, check the `Executions` tab in the crew detail page
## Method 2: Using the API
You can also kickoff crews programmatically using the CrewAI Enterprise REST API.
### Authentication
All API requests require a bearer token for authentication:
```bash
curl -H "Authorization: Bearer YOUR_CREW_TOKEN" https://your-crew-url.crewai.com
```
Your bearer token is available on the Status tab of your crew's detail page.
### Checking Crew Health
Before executing operations, you can verify that your crew is running properly:
```bash
curl -H "Authorization: Bearer YOUR_CREW_TOKEN" https://your-crew-url.crewai.com
```
A successful response will return a message indicating the crew is operational:
```
Healthy
```
### Step 1: Retrieve Required Inputs
First, determine what inputs your crew requires:
```bash
curl -X GET \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
https://your-crew-url.crewai.com/inputs
```
The response will be a JSON object containing an array of required input parameters, for example:
```json
{"inputs":["topic","current_year"]}
```
This example shows that this particular crew requires two inputs: `topic` and `current_year`.
### Step 2: Kickoff Execution
Initiate execution by providing the required inputs:
```bash
curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
-d '{"inputs": {"topic": "AI Agent Frameworks", "current_year": "2025"}}' \
https://your-crew-url.crewai.com/kickoff
```
The response will include a `kickoff_id` that you'll need for tracking:
```json
{"kickoff_id":"abcd1234-5678-90ef-ghij-klmnopqrstuv"}
```
### Step 3: Check Execution Status
Monitor the execution progress using the `kickoff_id`:
```bash
curl -X GET \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
https://your-crew-url.crewai.com/status/abcd1234-5678-90ef-ghij-klmnopqrstuv
```
## Handling Executions
### Long-Running Executions
For executions that may take a long time:
1. Consider implementing a polling mechanism to check status periodically (see the sketch after this list)
2. Use webhooks (if available) for notification when execution completes
3. Implement error handling for potential timeouts
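A minimal polling sketch with exponential backoff (the URL, token, and `kickoff_id` are placeholders for your own deployment's values):
```python
import time
import requests

def wait_for_completion(crew_url: str, token: str, kickoff_id: str, max_wait: int = 600) -> dict:
    """Poll the /status endpoint with exponential backoff until the run finishes."""
    headers = {"Authorization": f"Bearer {token}"}
    delay, waited = 2, 0
    while waited < max_wait:
        data = requests.get(f"{crew_url}/status/{kickoff_id}", headers=headers).json()
        if data["status"] in ("completed", "error"):
            return data
        time.sleep(delay)
        waited += delay
        delay = min(delay * 2, 60)  # back off, capped at 60 seconds between checks
    raise TimeoutError(f"Execution {kickoff_id} still running after {max_wait}s")
```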
### Execution Context
The execution context includes:
- Inputs provided at kickoff
- Environment variables configured during deployment
- Any state maintained between tasks
### Debugging Failed Executions
If an execution fails:
1. Check the "Executions" tab for detailed logs
2. Review the "Traces" tab for step-by-step execution details
3. Look for LLM responses and tool usage in the trace details
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with execution issues or questions about the Enterprise platform.
</Card>

View File

@@ -0,0 +1,89 @@
---
title: "Update Crew"
description: "Updating a Crew on CrewAI Enterprise"
icon: "pencil"
---
<Note>
After deploying your crew to CrewAI Enterprise, you may need to make updates to the code, security settings, or configuration.
This guide explains how to perform these common update operations.
</Note>
## Why Update Your Crew?
CrewAI won't automatically pick up GitHub updates by default, so unless you checked the `Auto-update` option when deploying your crew, you'll need to trigger updates manually.
There are several reasons you might want to update your crew deployment:
- You want to update the code with the latest commit you pushed to GitHub
- You want to reset the bearer token for security reasons
- You want to update environment variables
## 1. Updating Your Crew Code with the Latest Commit
When you've pushed new commits to your GitHub repository and want to update your deployment:
1. Navigate to your crew in the CrewAI Enterprise platform
2. Click on the `Re-deploy` button on your crew details page
<Frame>
![Re-deploy Button](/images/enterprise/redeploy-button.png)
</Frame>
This will trigger an update that you can track using the progress bar. The system will pull the latest code from your repository and rebuild your deployment.
## 2. Resetting Bearer Token
If you need to generate a new bearer token (for example, if you suspect the current token might have been compromised):
1. Navigate to your crew in the CrewAI Enterprise platform
2. Find the `Bearer Token` section
3. Click the `Reset` button next to your current token
<Frame>
![Reset Token](/images/enterprise/reset-token.png)
</Frame>
<Warning>
Resetting your bearer token will invalidate the previous token immediately. Make sure to update any applications or scripts that are using the old token.
</Warning>
## 3. Updating Environment Variables
To update the environment variables for your crew:
1. First access the deployment page by clicking on your crew's name
<Frame>
![Environment Variables Button](/images/enterprise/env-vars-button.png)
</Frame>
2. Locate the `Environment Variables` section (you will need to click the `Settings` icon to access it)
3. Edit the existing variables or add new ones in the fields provided
4. Click the `Update` button next to each variable you modify
<Frame>
![Update Environment Variables](/images/enterprise/update-env-vars.png)
</Frame>
5. Finally, click the `Update Deployment` button at the bottom of the page to apply the changes
<Note>
Updating environment variables will trigger a new deployment, but this will only update the environment configuration and not the code itself.
</Note>
## After Updating
After performing any update:
1. The system will rebuild and redeploy your crew
2. You can monitor the deployment progress in real-time
3. Once complete, test your crew to ensure the changes are working as expected
<Tip>
If you encounter any issues after updating, you can view deployment logs in the platform or contact support for assistance.
</Tip>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with updating your crew or troubleshooting deployment issues.
</Card>

View File

@@ -0,0 +1,319 @@
---
title: "Trigger Deployed Crew API"
description: "Using your deployed crew's API on CrewAI Enterprise"
icon: "arrow-up-right-from-square"
---
Once you have deployed your crew to CrewAI Enterprise, it automatically becomes available as a REST API. This guide explains how to interact with your crew programmatically.
## API Basics
Your deployed crew exposes several endpoints that allow you to:
1. Discover required inputs
2. Start crew executions
3. Monitor execution status
4. Receive results
### Authentication
All API requests require a bearer token for authentication, which is generated when you deploy your crew:
```bash
curl -H "Authorization: Bearer YOUR_CREW_TOKEN" https://your-crew-url.crewai.com/...
```
<Tip>
You can find your bearer token in the Status tab of your crew's detail page in the CrewAI Enterprise dashboard.
</Tip>
<Frame>
![Bearer Token](/images/enterprise/bearer-token.png)
</Frame>
## Available Endpoints
Your crew API provides three main endpoints:
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/inputs` | GET | Lists all required inputs for crew execution |
| `/kickoff` | POST | Starts a crew execution with provided inputs |
| `/status/{kickoff_id}` | GET | Retrieves the status and results of an execution |
## GET /inputs
The inputs endpoint allows you to discover what parameters your crew requires:
```bash
curl -X GET \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
https://your-crew-url.crewai.com/inputs
```
### Response
```json
{
"inputs": ["budget", "interests", "duration", "age"]
}
```
This response indicates that your crew expects four input parameters: `budget`, `interests`, `duration`, and `age`.
## POST /kickoff
The kickoff endpoint starts a new crew execution:
```bash
curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
-d '{
  "inputs": {
    "budget": "1000 USD",
    "interests": "games, tech, ai, relaxing hikes, amazing food",
    "duration": "7 days",
    "age": "35"
  }
}' \
https://your-crew-url.crewai.com/kickoff
```
### Request Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `inputs` | Object | Yes | Key-value pairs of all required inputs |
| `meta` | Object | No | Additional metadata to pass to the crew |
| `taskWebhookUrl` | String | No | Callback URL executed after each task |
| `stepWebhookUrl` | String | No | Callback URL executed after each agent thought |
| `crewWebhookUrl` | String | No | Callback URL executed when the crew finishes |
### Example with Webhooks
```json
{
"inputs": {
"budget": "1000 USD",
"interests": "games, tech, ai, relaxing hikes, amazing food",
"duration": "7 days",
"age": "35"
},
"meta": {
"requestId": "user-request-12345",
"source": "mobile-app"
},
"taskWebhookUrl": "https://your-server.com/webhooks/task",
"stepWebhookUrl": "https://your-server.com/webhooks/step",
"crewWebhookUrl": "https://your-server.com/webhooks/crew"
}
```
### Response
```json
{
"kickoff_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv"
}
```
The `kickoff_id` is used to track and retrieve the execution results.
## GET /status/{kickoff_id}
The status endpoint allows you to check the progress and results of a crew execution:
```bash
curl -X GET \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
https://your-crew-url.crewai.com/status/abcd1234-5678-90ef-ghij-klmnopqrstuv
```
### Response Structure
The response structure will vary depending on the execution state:
#### In Progress
```json
{
  "status": "running",
  "current_task": "research_task",
  "progress": {
    "completed_tasks": 0,
    "total_tasks": 2
  }
}
```
#### Completed
```json
{
  "status": "completed",
  "result": {
    "output": "Comprehensive travel itinerary...",
    "tasks": [
      {
        "task_id": "research_task",
        "output": "Research findings...",
        "agent": "Researcher",
        "execution_time": 45.2
      },
      {
        "task_id": "planning_task",
        "output": "7-day itinerary plan...",
        "agent": "Trip Planner",
        "execution_time": 62.8
      }
    ]
  },
  "execution_time": 108.5
}
```
## Webhook Integration
When you provide webhook URLs in your kickoff request, the system will make POST requests to those URLs at specific points in the execution:
### taskWebhookUrl
Called when each task completes:
```json
{
  "kickoff_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
  "task_id": "research_task",
  "status": "completed",
  "output": "Research findings...",
  "agent": "Researcher",
  "execution_time": 45.2
}
```
### stepWebhookUrl
Called after each agent thought or action:
```json
{
  "kickoff_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
  "task_id": "research_task",
  "agent": "Researcher",
  "step_type": "thought",
  "content": "I should first search for popular destinations that match these interests..."
}
```
### crewWebhookUrl
Called when the entire crew execution completes:
```json
{
  "kickoff_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
  "status": "completed",
  "result": {
    "output": "Comprehensive travel itinerary...",
    "tasks": [
      {
        "task_id": "research_task",
        "output": "Research findings...",
        "agent": "Researcher",
        "execution_time": 45.2
      },
      {
        "task_id": "planning_task",
        "output": "7-day itinerary plan...",
        "agent": "Trip Planner",
        "execution_time": 62.8
      }
    ]
  },
  "execution_time": 108.5,
  "meta": {
    "requestId": "user-request-12345",
    "source": "mobile-app"
  }
}
```
## Best Practices
### Handling Long-Running Executions
Crew executions can take anywhere from seconds to minutes depending on their complexity. Consider these approaches:
1. **Webhooks (Recommended)**: Set up webhook endpoints to receive notifications when the execution completes
2. **Polling**: Implement a polling mechanism with exponential backoff
3. **Client-Side Timeout**: Set appropriate timeouts for your API requests
### Error Handling
The API may return various error codes:
| Code | Description | Recommended Action |
|------|-------------|-------------------|
| 401 | Unauthorized | Check your bearer token |
| 404 | Not Found | Verify your crew URL and kickoff_id |
| 422 | Validation Error | Ensure all required inputs are provided |
| 500 | Server Error | Contact support with the error details |
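As a sketch of how a client might map these codes to actionable errors (assumes the `requests` library; the helper below is illustrative, not part of the API):
```python
import requests

def kickoff_safely(crew_url: str, token: str, inputs: dict) -> str:
    """Start an execution and surface API error codes as Python exceptions."""
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
    response = requests.post(f"{crew_url}/kickoff", headers=headers, json={"inputs": inputs})
    if response.status_code == 401:
        raise PermissionError("Unauthorized: check your bearer token")
    if response.status_code == 422:
        raise ValueError(f"Validation error: are all required inputs provided? {response.text}")
    response.raise_for_status()  # surface 404s, 500s, etc.
    return response.json()["kickoff_id"]
```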
### Sample Code
Here's a complete Python example for interacting with your crew API:
```python
import requests
import time

# Configuration
CREW_URL = "https://your-crew-url.crewai.com"
BEARER_TOKEN = "your-crew-token"
HEADERS = {
    "Authorization": f"Bearer {BEARER_TOKEN}",
    "Content-Type": "application/json"
}

# 1. Get required inputs
response = requests.get(f"{CREW_URL}/inputs", headers=HEADERS)
required_inputs = response.json()["inputs"]
print(f"Required inputs: {required_inputs}")

# 2. Start crew execution
payload = {
    "inputs": {
        "budget": "1000 USD",
        "interests": "games, tech, ai, relaxing hikes, amazing food",
        "duration": "7 days",
        "age": "35"
    }
}
response = requests.post(f"{CREW_URL}/kickoff", headers=HEADERS, json=payload)
kickoff_id = response.json()["kickoff_id"]
print(f"Execution started with ID: {kickoff_id}")

# 3. Poll for results
MAX_RETRIES = 30
POLL_INTERVAL = 10  # seconds

for i in range(MAX_RETRIES):
    print(f"Checking status (attempt {i+1}/{MAX_RETRIES})...")
    response = requests.get(f"{CREW_URL}/status/{kickoff_id}", headers=HEADERS)
    data = response.json()
    if data["status"] == "completed":
        print("Execution completed!")
        print(f"Result: {data['result']['output']}")
        break
    elif data["status"] == "error":
        print(f"Execution failed: {data.get('error', 'Unknown error')}")
        break
    else:
        print(f"Status: {data['status']}, waiting {POLL_INTERVAL} seconds...")
        time.sleep(POLL_INTERVAL)
```
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with API integration or troubleshooting.
</Card>

View File

@@ -0,0 +1,67 @@
---
title: "CrewAI Enterprise"
description: "Deploy, monitor, and scale your AI agent workflows"
icon: "globe"
---
## Introduction
CrewAI Enterprise provides a platform for deploying, monitoring, and scaling your crews and agents in a production environment.
CrewAI Enterprise extends the power of the open-source framework with features designed for production deployments, collaboration, and scalability. Deploy your crews to a managed infrastructure and monitor their execution in real-time.
## Key Features
<CardGroup cols={2}>
<Card title="Crew Deployments" icon="rocket">
Deploy your crews to a managed infrastructure with a few clicks
</Card>
<Card title="API Access" icon="code">
Access your deployed crews via REST API for integration with existing systems
</Card>
<Card title="Observability" icon="chart-line">
Monitor your crews with detailed execution traces and logs
</Card>
<Card title="Tool Repository" icon="toolbox">
Publish and install tools to enhance your crews' capabilities
</Card>
<Card title="Webhook Streaming" icon="webhook">
Stream real-time events and updates to your systems
</Card>
<Card title="Crew Studio" icon="paintbrush">
Create and customize crews using a no-code/low-code interface
</Card>
</CardGroup>
## Deployment Options
<CardGroup cols={3}>
<Card title="GitHub Integration" icon="github">
Connect directly to your GitHub repositories to deploy code
</Card>
<Card title="Crew Studio" icon="palette">
Deploy crews created through the no-code Crew Studio interface
</Card>
<Card title="CLI Deployment" icon="terminal">
Use the CrewAI CLI for more advanced deployment workflows
</Card>
</CardGroup>
## Getting Started
<Steps>
<Step title="Sign up for an account">
Create your account at [app.crewai.com](https://app.crewai.com)
</Step>
<Step title="Create your first crew">
Use code or Crew Studio to create your crew
</Step>
<Step title="Deploy your crew">
Deploy your crew to the Enterprise platform
</Step>
<Step title="Access your crew">
Integrate with your crew via the generated API endpoints
</Step>
</Steps>
For detailed instructions, check out our [deployment guide](/enterprise/guides/deploy-crew) or click the button below to get started.

View File

@@ -0,0 +1,181 @@
---
title: FAQs
description: "Frequently asked questions about CrewAI Enterprise"
icon: "code"
---
<AccordionGroup>
<Accordion title="How is task execution handled in the hierarchical process?">
In the hierarchical process, a manager agent is automatically created and coordinates the workflow, delegating tasks and validating outcomes for
streamlined and effective execution. The manager agent utilizes tools to facilitate task delegation and execution by agents under the manager's guidance.
The manager LLM is crucial for the hierarchical process and must be set up correctly for proper function.
</Accordion>
<Accordion title="Where can I get the latest CrewAI documentation?">
The most up-to-date documentation for CrewAI is available on our official documentation website: https://docs.crewai.com/
<Card href="https://docs.crewai.com/" icon="books">CrewAI Docs</Card>
</Accordion>
<Accordion title="What are the key differences between Hierarchical and Sequential Processes in CrewAI?">
#### Hierarchical Process:
- Tasks are delegated and executed based on a structured chain of command.
- A manager language model (`manager_llm`) must be specified for the manager agent.
- The manager agent oversees task execution, planning, delegation, and validation.
- Tasks are not pre-assigned; the manager allocates tasks to agents based on their capabilities.
#### Sequential Process:
- Tasks are executed one after another, ensuring an orderly progression.
- The output of one task serves as context for the next.
- Task execution follows the predefined order in the task list.
#### Which Process is Better Suited for Complex Projects?
The hierarchical process is better suited for complex projects because it allows for:
- **Dynamic task allocation and delegation**: Manager agent can assign tasks based on agent capabilities, allowing for efficient resource utilization.
- **Structured validation and oversight**: Manager agent reviews task outputs and ensures task completion, increasing reliability and accuracy.
- **Complex task management**: Assigning tools at the agent level allows for precise control over tool availability, facilitating the execution of intricate tasks.
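As a minimal sketch (agents, tasks, and the model name are placeholders; note that tasks are not pre-assigned to agents):
```python
from crewai import Agent, Crew, Process, Task

researcher = Agent(role="Researcher", goal="Gather facts", backstory="A thorough analyst.")
writer = Agent(role="Writer", goal="Summarize findings", backstory="A clear communicator.")

# No agent= on the tasks: the manager delegates them based on capabilities
research = Task(description="Research the topic", expected_output="Key findings")
write = Task(description="Write a summary", expected_output="A short report")

crew = Crew(
    agents=[researcher, writer],
    tasks=[research, write],
    process=Process.hierarchical,
    manager_llm="gpt-4o",  # required for the hierarchical process
)
```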
</Accordion>
<Accordion title="What are the benefits of using memory in the CrewAI framework?">
- **Adaptive Learning**: Crews become more efficient over time, adapting to new information and refining their approach to tasks.
- **Enhanced Personalization**: Memory enables agents to remember user preferences and historical interactions, leading to personalized experiences.
- **Improved Problem Solving**: Access to a rich memory store aids agents in making more informed decisions, drawing on past learnings and contextual insights.
</Accordion>
<Accordion title="What is the purpose of setting a maximum RPM limit for an agent?">
Setting a maximum RPM limit for an agent prevents the agent from making too many requests to external services, which can help to avoid rate limits and improve performance.
</Accordion>
<Accordion title="What role does human input play in the execution of tasks within a CrewAI crew?">
It allows agents to request additional information or clarification when necessary.
This feature is crucial in complex decision-making processes or when agents require more details to complete a task effectively.
To integrate human input into agent execution, set the `human_input` flag in the task definition. When enabled, the agent prompts the user for input before delivering its final answer.
This input can provide extra context, clarify ambiguities, or validate the agent's output.
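A minimal sketch of enabling it on a task (the agent and task fields are placeholders):
```python
from crewai import Agent, Task

support_agent = Agent(
    role="Support Agent",
    goal="Draft helpful replies",
    backstory="Friendly and precise.",
)

review_task = Task(
    description="Draft a reply to the customer",
    expected_output="A ready-to-send reply",
    agent=support_agent,
    human_input=True,  # the agent prompts the user before delivering its final answer
)
```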
</Accordion>
<Accordion title="What advanced customization options are available for tailoring and enhancing agent behavior and capabilities in CrewAI?">
CrewAI provides a range of advanced customization options to tailor and enhance agent behavior and capabilities:
- **Language Model Customization**: Agents can be customized with specific language models (`llm`) and function-calling language models (`function_calling_llm`), offering advanced control over their processing and decision-making abilities.
- **Performance and Debugging Settings**: Adjust an agent's performance and monitor its operations for efficient task execution.
- **Verbose Mode**: Enables detailed logging of an agent's actions, useful for debugging and optimization.
- **RPM Limit**: Sets the maximum number of requests per minute (`max_rpm`).
- **Maximum Iterations for Task Execution**: The `max_iter` attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions.
- **Delegation and Autonomy**: Control an agent's ability to delegate or ask questions, tailoring its autonomy and collaborative dynamics within the CrewAI framework. By default, the `allow_delegation` attribute is set to True, enabling agents to seek assistance or delegate tasks as needed. This default behavior promotes collaborative problem-solving and efficiency within the CrewAI ecosystem. If needed, delegation can be disabled to suit specific operational requirements.
- **Human Input in Agent Execution**: Human input is critical in several agent execution scenarios, allowing agents to request additional information or clarification when necessary. This feature is especially useful in complex decision-making processes or when agents require more details to complete a task effectively.
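A minimal sketch pulling several of these knobs together (role, goal, and backstory are placeholders):
```python
from crewai import Agent

agent = Agent(
    role="Researcher",
    goal="Find accurate, current information",
    backstory="A meticulous analyst.",
    verbose=True,            # detailed logging for debugging
    max_rpm=10,              # cap requests per minute to external services
    max_iter=15,             # cap iterations for a single task
    allow_delegation=False,  # disable delegation when tighter control is needed
)
```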
</Accordion>
<Accordion title="In what scenarios is human input particularly useful in agent execution?">
Human input is particularly useful in agent execution when:
- **Agents require additional information or clarification**: When agents encounter ambiguity or incomplete data, human input can provide the necessary context to complete the task effectively.
- **Agents need to make complex or sensitive decisions**: Human input can assist agents in ethical or nuanced decision-making, ensuring responsible and informed outcomes.
- **Oversight and validation of agent output**: Human input can help validate the results generated by agents, ensuring accuracy and preventing any misinterpretation or errors.
- **Customizing agent behavior**: Human input can provide feedback on agent responses, allowing users to refine the agent's behavior and responses over time.
- **Identifying and resolving errors or limitations**: Human input can help identify and address any errors or limitations in the agent's capabilities, enabling continuous improvement and optimization.
</Accordion>
<Accordion title="What are the different types of memory available in CrewAI?">
The different types of memory available in CrewAI are:
- `short-term memory`
- `long-term memory`
- `entity memory`
- `contextual memory`
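A minimal sketch enabling memory on a crew (assumes agents and tasks defined elsewhere):
```python
from crewai import Crew

crew = Crew(
    agents=[agent],
    tasks=[task],
    memory=True,  # enables short-term, long-term, and entity memory
)
```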
Learn more about the different types of memory here:
<Card href="https://docs.crewai.com/concepts/memory" icon="brain">CrewAI Memory</Card>
</Accordion>
<Accordion title="How can I create custom tools for my CrewAI agents?">
You can create custom tools by subclassing the `BaseTool` class provided by CrewAI or by using the tool decorator. Subclassing involves defining a new class that inherits from `BaseTool`, specifying the name, description, and the `_run` method for the operational logic. The tool decorator lets you create a `Tool` object directly from a function with the required attributes.
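A minimal sketch of both approaches (the example tools are illustrative):
```python
from crewai.tools import BaseTool, tool

# Option 1: subclass BaseTool
class WordCountTool(BaseTool):
    name: str = "Word Counter"
    description: str = "Counts the words in a piece of text."

    def _run(self, text: str) -> int:
        return len(text.split())

# Option 2: use the @tool decorator
@tool("Character Counter")
def character_counter(text: str) -> int:
    """Counts the characters in a piece of text."""
    return len(text)
```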
Click here for more details:
<Card href="https://docs.crewai.com/how-to/create-custom-tools" icon="code">CrewAI Tools</Card>
</Accordion>
<Accordion title="How do I use Output Pydantic in a Task?">
To use Output Pydantic in a task, you need to define the expected output of the task as a Pydantic model. Here's an example:
<Steps>
<Step title="Define a Pydantic model">
First, you need to define a Pydantic model. For instance, let's create a simple model for a user:
```python
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int
```
</Step>
<Step title="Then, when creating a task, set the output_pydantic attribute to this model:">
```python
from crewai import Task

# Import the User model
from my_models import User

# Create a task with Output Pydantic
task = Task(
    description="Create a user with the provided name and age",
    expected_output="A user object with a name and an age",
    output_pydantic=User,  # This is the Pydantic model
    agent=agent,
    tools=[tool1, tool2]
)
```
</Step>
<Step title="Define the agent that will perform the task:">
```python
from crewai import Agent

# Create the agent; the structured output is configured on the task,
# so no special attribute is needed on the agent
agent = Agent(
    role='User Creator',
    goal='Create users',
    backstory='I am skilled in creating user accounts',
    tools=[tool1, tool2]
)
```
</Step>
<Step title="When executing the crew, the structured output is available as a User object:">
```python
from crewai import Crew

# Create a crew with the agent and task
crew = Crew(agents=[agent], tasks=[task])

# Kick off the crew
result = crew.kickoff()

# The structured output of the final task is a User object
print(result.pydantic)
```
</Step>
</Steps>
Here's a tutorial on how to consistently get structured outputs from your agents:
<Frame>
<iframe
height="400"
width="100%"
src="https://www.youtube.com/embed/dNpKQk5uxHw"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen></iframe>
</Frame>
</Accordion>
</AccordionGroup>

View File

@@ -185,15 +185,20 @@ Let's modify the `crew.py` file:
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List
@CrewBase
class ResearchCrew():
"""Research crew for comprehensive topic analysis and reporting"""
agents: List[BaseAgent]
tasks: List[Task]
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'],
config=self.agents_config['researcher'], # type: ignore[index]
verbose=True,
tools=[SerperDevTool()]
)
@@ -201,20 +206,20 @@ class ResearchCrew():
@agent
def analyst(self) -> Agent:
return Agent(
config=self.agents_config['analyst'],
config=self.agents_config['analyst'], # type: ignore[index]
verbose=True
)
@task
def research_task(self) -> Task:
return Task(
config=self.tasks_config['research_task']
config=self.tasks_config['research_task'] # type: ignore[index]
)
@task
def analysis_task(self) -> Task:
return Task(
config=self.tasks_config['analysis_task'],
config=self.tasks_config['analysis_task'], # type: ignore[index]
output_file='output/report.md'
)
@@ -387,4 +392,4 @@ Now that you've built your first crew, you can:
<Check>
Congratulations! You've successfully built your first CrewAI crew that can research and analyze any topic you provide. This foundational experience has equipped you with the skills to create increasingly sophisticated AI systems that can tackle complex, multi-stage problems through collaborative intelligence.
</Check>
</Check>

View File

@@ -203,35 +203,40 @@ These task definitions provide detailed instructions to our agents, ensuring the
# src/guide_creator_flow/crews/content_crew/content_crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List
@CrewBase
class ContentCrew():
"""Content writing crew"""
agents: List[BaseAgent]
tasks: List[Task]
@agent
def content_writer(self) -> Agent:
return Agent(
config=self.agents_config['content_writer'],
config=self.agents_config['content_writer'], # type: ignore[index]
verbose=True
)
@agent
def content_reviewer(self) -> Agent:
return Agent(
config=self.agents_config['content_reviewer'],
config=self.agents_config['content_reviewer'], # type: ignore[index]
verbose=True
)
@task
def write_section_task(self) -> Task:
return Task(
config=self.tasks_config['write_section_task']
config=self.tasks_config['write_section_task'] # type: ignore[index]
)
@task
def review_section_task(self) -> Task:
return Task(
config=self.tasks_config['review_section_task'],
config=self.tasks_config['review_section_task'], # type: ignore[index]
context=[self.write_section_task()]
)
@@ -263,6 +268,7 @@ Let's create our flow in the `main.py` file:
```python
#!/usr/bin/env python
import json
import os
from typing import List, Dict
from pydantic import BaseModel, Field
from crewai import LLM
@@ -341,6 +347,9 @@ class GuideCreatorFlow(Flow[GuideCreatorState]):
outline_dict = json.loads(response)
self.state.guide_outline = GuideOutline(**outline_dict)
# Ensure output directory exists before saving
os.makedirs("output", exist_ok=True)
# Save the outline to a file
with open("output/guide_outline.json", "w") as f:
json.dump(outline_dict, f, indent=2)

View File

@@ -0,0 +1,443 @@
---
title: Bring your own agent
description: Learn how to bring your own agents that work within a Crew.
icon: robots
---
Interoperability is a core concept in CrewAI. This guide will show you how to bring your own agents that work within a Crew.
## Adapter Guide for Bringing Your Own Agents (LangGraph Agents, OpenAI Agents, etc.)
Three adapters are required to make an agent from another framework work within a Crew:
1. BaseAgentAdapter
2. BaseToolAdapter
3. BaseConverter
## BaseAgentAdapter
This abstract class defines the common interface and functionality that all
agent adapters must implement. It extends BaseAgent to maintain compatibility
with the CrewAI framework while adding adapter-specific requirements.
Required Methods:
1. `def configure_tools`
2. `def configure_structured_output`
## Creating your own Adapter
To integrate an agent from a different framework (e.g., LangGraph, Autogen, OpenAI Assistants) into CrewAI, you need to create a custom adapter by inheriting from `BaseAgentAdapter`. This adapter acts as a compatibility layer, translating between the CrewAI interfaces and the specific requirements of your external agent.
Here's how you implement your custom adapter:
1. **Inherit from `BaseAgentAdapter`**:
```python
from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
from crewai.tools import BaseTool
from typing import List, Optional, Any, Dict

class MyCustomAgentAdapter(BaseAgentAdapter):
    # ... implementation details ...
```
2. **Implement `__init__`**:
The constructor should call the parent class constructor `super().__init__(**kwargs)` and perform any initialization specific to your external agent. You can use the optional `agent_config` dictionary passed during CrewAI's `Agent` initialization to configure your adapter and the underlying agent.
```python
def __init__(self, agent_config: Optional[Dict[str, Any]] = None, **kwargs: Any):
    super().__init__(agent_config=agent_config, **kwargs)
    # Initialize your external agent here, possibly using agent_config
    # Example: self.external_agent = initialize_my_agent(agent_config)
    print(f"Initializing MyCustomAgentAdapter with config: {agent_config}")
```
3. **Implement `configure_tools`**:
This abstract method is crucial. It receives a list of CrewAI `BaseTool` instances. Your implementation must convert or adapt these tools into the format expected by your external agent framework. This might involve wrapping them, extracting specific attributes, or registering them with the external agent instance.
```python
def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
    if tools:
        adapted_tools = []
        for tool in tools:
            # Adapt CrewAI BaseTool to the format your agent expects
            # Example: adapted_tool = adapt_to_my_framework(tool)
            # adapted_tools.append(adapted_tool)
            pass  # Replace with your actual adaptation logic

        # Configure the external agent with the adapted tools
        # Example: self.external_agent.set_tools(adapted_tools)
        print(f"Configuring tools for MyCustomAgentAdapter: {adapted_tools}")  # Placeholder
    else:
        # Handle the case where no tools are provided
        # Example: self.external_agent.set_tools([])
        print("No tools provided for MyCustomAgentAdapter.")
```
4. **Implement `configure_structured_output`**:
This method is called when the CrewAI `Agent` is configured with structured output requirements (e.g., `output_json` or `output_pydantic`). Your adapter needs to ensure the external agent is set up to comply with these requirements. This might involve setting specific parameters on the external agent or ensuring its underlying model supports the requested format. If the external agent doesn't support structured output in a way compatible with CrewAI's expectations, you might need to handle the conversion or raise an appropriate error.
```python
def configure_structured_output(self, structured_output: Any) -> None:
    # Configure your external agent to produce output in the specified format
    # Example: self.external_agent.set_output_format(structured_output)
    self.adapted_structured_output = True  # Signal that structured output is handled
    print(f"Configuring structured output for MyCustomAgentAdapter: {structured_output}")
```
By implementing these methods, your `MyCustomAgentAdapter` will allow your custom agent implementation to function correctly within a CrewAI crew, interacting with tasks and tools seamlessly. Remember to replace the example comments and print statements with your actual adaptation logic specific to the external agent framework you are integrating.
## BaseToolAdapter implementation
The `BaseToolAdapter` class is responsible for converting CrewAI's native `BaseTool` objects into a format that your specific external agent framework can understand and utilize. Different agent frameworks (like LangGraph, OpenAI Assistants, etc.) have their own unique ways of defining and handling tools, and the `BaseToolAdapter` acts as the translator.
Here's how you implement your custom tool adapter:
1. **Inherit from `BaseToolAdapter`**:
```python
from crewai.agents.agent_adapters.base_tool_adapter import BaseToolAdapter
from crewai.tools import BaseTool
from typing import List, Any

class MyCustomToolAdapter(BaseToolAdapter):
    # ... implementation details ...
```
2. **Implement `configure_tools`**:
This is the core abstract method you must implement. It receives a list of CrewAI `BaseTool` instances provided to the agent. Your task is to iterate through this list, adapt each `BaseTool` into the format expected by your external framework, and store the converted tools in the `self.converted_tools` list (which is initialized in the base class constructor).
```python
import inspect

def configure_tools(self, tools: List[BaseTool]) -> None:
    """Configure and convert CrewAI tools for the specific implementation."""
    self.converted_tools = []  # Reset in case it's called multiple times
    for tool in tools:
        # Sanitize the tool name if required by the target framework
        sanitized_name = self.sanitize_tool_name(tool.name)

        # --- Your Conversion Logic Goes Here ---
        # Example: Convert BaseTool to a dictionary format for LangGraph
        # converted_tool = {
        #     "name": sanitized_name,
        #     "description": tool.description,
        #     "parameters": tool.args_schema.schema() if tool.args_schema else {},
        #     # Add any other framework-specific fields
        # }
        # Example: Convert BaseTool to an OpenAI function definition
        # converted_tool = {
        #     "type": "function",
        #     "function": {
        #         "name": sanitized_name,
        #         "description": tool.description,
        #         "parameters": tool.args_schema.schema() if tool.args_schema else {"type": "object", "properties": {}},
        #     }
        # }
        # --- Replace the examples above with your actual adaptation ---
        converted_tool = self.adapt_tool_to_my_framework(tool, sanitized_name)  # Placeholder
        self.converted_tools.append(converted_tool)
        print(f"Adapted tool '{tool.name}' to '{sanitized_name}' for MyCustomToolAdapter")  # Placeholder

    print(f"MyCustomToolAdapter finished configuring tools: {len(self.converted_tools)} adapted.")  # Placeholder

# --- Helper method for adaptation (Example) ---
def adapt_tool_to_my_framework(self, tool: BaseTool, sanitized_name: str) -> Any:
    # Replace this with the actual logic to convert a CrewAI BaseTool
    # to the format needed by your specific external agent framework.
    # This will vary greatly depending on the target framework.

    # Ensure the tool works both sync and async
    async def async_tool_wrapper(*args, **kwargs):
        output = tool.run(*args, **kwargs)
        if inspect.isawaitable(output):
            return await output
        return output

    # `MyFrameworkTool` is a hypothetical class from your target framework
    adapted_tool = MyFrameworkTool(
        name=sanitized_name,
        description=tool.description,
        inputs=tool.args_schema.schema() if tool.args_schema else None,
        implementation_reference=async_tool_wrapper,
    )
    return adapted_tool
```
3. **Using the Adapter**:
Typically, you would instantiate your `MyCustomToolAdapter` within your `MyCustomAgentAdapter`'s `configure_tools` method and use it to process the tools before configuring your external agent.
```python
# Inside MyCustomAgentAdapter.configure_tools
def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
    if tools:
        tool_adapter = MyCustomToolAdapter()  # Instantiate your tool adapter
        tool_adapter.configure_tools(tools)   # Convert the tools
        adapted_tools = tool_adapter.tools()  # Get the converted tools

        # Now configure your external agent with the adapted_tools
        # Example: self.external_agent.set_tools(adapted_tools)
        print(f"Configuring external agent with adapted tools: {adapted_tools}")  # Placeholder
    else:
        # Handle no tools case
        print("No tools provided for MyCustomAgentAdapter.")
```
By creating a `BaseToolAdapter`, you decouple the tool conversion logic from the agent adaptation, making the integration cleaner and more modular. Remember to replace the placeholder examples with the actual conversion logic required by your specific external agent framework.
## BaseConverter
The `BaseConverterAdapter` plays a crucial role when a CrewAI `Task` requires an agent to return its final output in a specific structured format, such as JSON or a Pydantic model. It bridges the gap between CrewAI's structured output requirements and the capabilities of your external agent.
Its primary responsibilities are:
1. **Configuring the Agent for Structured Output:** Based on the `Task`'s requirements (`output_json` or `output_pydantic`), it instructs the associated `BaseAgentAdapter` (and indirectly, the external agent) on what format is expected.
2. **Enhancing the System Prompt:** It modifies the agent's system prompt to include clear instructions on *how* to generate the output in the required structure.
3. **Post-processing the Result:** It takes the raw output from the agent and attempts to parse, validate, and format it according to the required structure, ultimately returning a string representation (e.g., a JSON string).
Here's how you implement your custom converter adapter:
1. **Inherit from `BaseConverterAdapter`**:
```python
from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter
# Assuming you have your MyCustomAgentAdapter defined
# from .my_custom_agent_adapter import MyCustomAgentAdapter
from crewai.task import Task
from typing import Any

class MyCustomConverterAdapter(BaseConverterAdapter):
    # Store the expected output type (e.g., 'json', 'pydantic', 'text')
    _output_type: str = 'text'
    _output_schema: Any = None  # Store JSON schema or Pydantic model

    # ... implementation details ...
```
2. **Implement `__init__`**:
The constructor must accept the corresponding `agent_adapter` instance it will work with.
```python
def __init__(self, agent_adapter: Any):  # Use your specific AgentAdapter type hint
    self.agent_adapter = agent_adapter
    print(f"Initializing MyCustomConverterAdapter for agent adapter: {type(agent_adapter).__name__}")
```
3. **Implement `configure_structured_output`**:
This method receives the CrewAI `Task` object. You need to check the task's `output_json` and `output_pydantic` attributes to determine the required output structure. Store this information (e.g., in `_output_type` and `_output_schema`) and potentially call configuration methods on your `self.agent_adapter` if the external agent needs specific setup for structured output (which might have been partially handled in the agent adapter's `configure_structured_output` already).
```python
def configure_structured_output(self, task: Task) -> None:
    """Configure the expected structured output based on the task."""
    if task.output_pydantic:
        self._output_type = 'pydantic'
        self._output_schema = task.output_pydantic
        print(f"Converter: Configured for Pydantic output: {self._output_schema.__name__}")
    elif task.output_json:
        self._output_type = 'json'
        self._output_schema = task.output_json
        print(f"Converter: Configured for JSON output with schema: {self._output_schema}")
    else:
        self._output_type = 'text'
        self._output_schema = None
        print("Converter: Configured for standard text output.")

    # Optionally, inform the agent adapter if needed
    # self.agent_adapter.set_output_mode(self._output_type, self._output_schema)
```
4. **Implement `enhance_system_prompt`**:
This method takes the agent's base system prompt string and should append instructions tailored to the currently configured `_output_type` and `_output_schema`. The goal is to guide the LLM powering the agent to produce output in the correct format.
```python
import json

def enhance_system_prompt(self, base_prompt: str) -> str:
    """Enhance the system prompt with structured output instructions."""
    if self._output_type == 'text':
        return base_prompt  # No enhancement needed for plain text

    instructions = "\n\nYour final answer MUST be formatted as "
    if self._output_type == 'json':
        schema_str = json.dumps(self._output_schema, indent=2)
        instructions += f"a JSON object conforming to the following schema:\n```json\n{schema_str}\n```"
    elif self._output_type == 'pydantic':
        schema_str = json.dumps(self._output_schema.model_json_schema(), indent=2)
        instructions += f"a JSON object conforming to the Pydantic model '{self._output_schema.__name__}' with the following schema:\n```json\n{schema_str}\n```"

    instructions += "\nEnsure your entire response is ONLY the valid JSON object, without any introductory text, explanations, or concluding remarks."
    print(f"Converter: Enhancing prompt for {self._output_type} output.")
    return base_prompt + instructions
```
*Note: The exact prompt engineering might need tuning based on the agent/LLM being used.*
5. **Implement `post_process_result`**:
This method receives the raw string output from the agent. If structured output was requested (`json` or `pydantic`), you should attempt to parse the string into the expected format. Handle potential parsing errors (e.g., log them, attempt simple fixes, or raise an exception). Crucially, the method must **always return a string**, even if the intermediate format was a dictionary or Pydantic object (e.g., by serializing it back to a JSON string).
```python
import json
from pydantic import ValidationError

def post_process_result(self, result: str) -> str:
    """Post-process the agent's result to ensure it matches the expected format."""
    print(f"Converter: Post-processing result for {self._output_type} output.")
    if self._output_type == 'json':
        try:
            # Attempt to parse and re-serialize to ensure validity and consistent format
            parsed_json = json.loads(result)
            # Optional: Validate against self._output_schema if it's a JSON schema dictionary
            # from jsonschema import validate
            # validate(instance=parsed_json, schema=self._output_schema)
            return json.dumps(parsed_json)
        except json.JSONDecodeError as e:
            print(f"Error: Failed to parse JSON output: {e}\nRaw output:\n{result}")
            # Handle error: return raw, raise exception, or try to fix
            return result  # Example: return raw output on failure
        # except Exception as e:  # Catch validation errors if using jsonschema
        #     print(f"Error: JSON output failed schema validation: {e}\nRaw output:\n{result}")
        #     return result
    elif self._output_type == 'pydantic':
        try:
            # Attempt to parse into the Pydantic model
            model_instance = self._output_schema.model_validate_json(result)
            # Return the model serialized back to JSON
            return model_instance.model_dump_json()
        except ValidationError as e:
            print(f"Error: Failed to validate Pydantic output: {e}\nRaw output:\n{result}")
            # Handle error
            return result  # Example: return raw output on failure
        except json.JSONDecodeError as e:
            print(f"Error: Failed to parse JSON for Pydantic model: {e}\nRaw output:\n{result}")
            return result
    else:  # 'text'
        return result  # No processing needed for plain text
```
By implementing these methods, your `MyCustomConverterAdapter` ensures that structured output requests from CrewAI tasks are correctly handled by your integrated external agent, improving the reliability and usability of your custom agent within the CrewAI framework.
## Out of the Box Adapters
We provide out-of-the-box adapters for the following frameworks:
1. LangGraph
2. OpenAI Agents
## Kicking off a crew with adapted agents
```python
import json
from typing import List

from crewai_tools import SerperDevTool
from crewai import Agent, Crew, Task
from langchain_openai import ChatOpenAI
from pydantic import BaseModel

from crewai.agents.agent_adapters.langgraph.langgraph_adapter import (
    LangGraphAgentAdapter,
)
from crewai.agents.agent_adapters.openai_agents.openai_adapter import OpenAIAgentAdapter

# CrewAI Agent
code_helper_agent = Agent(
    role="Code Helper",
    goal="Help users solve coding problems effectively and provide clear explanations.",
    backstory="You are an experienced programmer with deep knowledge across multiple programming languages and frameworks. You specialize in solving complex coding challenges and explaining solutions clearly.",
    allow_delegation=False,
    verbose=True,
)

# OpenAI Agent Adapter
link_finder_agent = OpenAIAgentAdapter(
    role="Link Finder",
    goal="Find the most relevant and high-quality resources for coding tasks.",
    backstory="You are a research specialist with a talent for finding the most helpful resources. You're skilled at using search tools to discover documentation, tutorials, and examples that directly address the user's coding needs.",
    tools=[SerperDevTool()],
    allow_delegation=False,
    verbose=True,
)

# LangGraph Agent Adapter
reporter_agent = LangGraphAgentAdapter(
    role="Reporter",
    goal="Report the results of the tasks.",
    backstory="You are a reporter who reports the results of the other tasks.",
    llm=ChatOpenAI(model="gpt-4o"),
    allow_delegation=True,
    verbose=True,
)

class Code(BaseModel):
    code: str

task = Task(
    description="Give an answer to the coding question: {task}",
    expected_output="A thorough answer to the coding question: {task}",
    agent=code_helper_agent,
    output_json=Code,
)

task2 = Task(
    description="Find links to resources that can help with coding tasks. Use the serper tool to find resources that can help.",
    expected_output="A list of links to resources that can help with coding tasks",
    agent=link_finder_agent,
)

class Report(BaseModel):
    code: str
    links: List[str]

task3 = Task(
    description="Report the results of the tasks.",
    expected_output="A report of the results of the tasks. This is the code produced and then the links to the resources that can help with the coding task.",
    agent=reporter_agent,
    output_json=Report,
)

# Use in CrewAI
crew = Crew(
    agents=[code_helper_agent, link_finder_agent, reporter_agent],
    tasks=[task, task2, task3],
    verbose=True,
)

result = crew.kickoff(
    inputs={"task": "How do you implement an abstract class in python?"}
)

# Print raw result first
print("Raw result:", result)

# Handle result based on its type
if hasattr(result, "json_dict") and result.json_dict:
    json_result = result.json_dict
    print("\nStructured JSON result:")
    print(f"{json.dumps(json_result, indent=2)}")

    # Access fields safely
    if isinstance(json_result, dict):
        if "code" in json_result:
            print("\nCode:")
            print(
                json_result["code"][:200] + "..."
                if len(json_result["code"]) > 200
                else json_result["code"]
            )
        if "links" in json_result:
            print("\nLinks:")
            for link in json_result["links"][:5]:  # Print first 5 links
                print(f"- {link}")
            if len(json_result["links"]) > 5:
                print(f"...and {len(json_result['links']) - 5} more links")
elif hasattr(result, "pydantic") and result.pydantic:
    print("\nPydantic model result:")
    print(result.pydantic.model_dump_json(indent=2))
else:
    # Fallback to raw output
    print("\nNo structured result available, using raw output:")
    print(result.raw[:500] + "..." if len(result.raw) > 500 else result.raw)
```

View File

@@ -1,9 +1,13 @@
# Custom LLM Implementations
---
title: Custom LLM Implementation
description: Learn how to create custom LLM implementations in CrewAI.
icon: code
---
## Custom LLM Implementations
CrewAI now supports custom LLM implementations through the `BaseLLM` abstract base class. This allows you to create your own LLM implementations that don't rely on litellm's authentication mechanism.
## Using Custom LLM Implementations
To create a custom LLM implementation, you need to:
1. Inherit from the `BaseLLM` abstract base class
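For illustration, here is a minimal sketch of such a subclass. It assumes `BaseLLM` is importable from `crewai.llm`, accepts a `model` argument in its constructor, and requires implementing a `call` method that receives the conversation messages; adjust to the actual interface in your version.
```python
from typing import Any, Dict, List, Union

from crewai.llm import BaseLLM

class MyCustomLLM(BaseLLM):
    """A stub LLM that forwards calls to a custom backend (hypothetical)."""

    def __init__(self, model: str = "my-model"):
        super().__init__(model=model)  # assumes BaseLLM accepts a `model` argument

    def call(self, messages: Union[str, List[Dict[str, str]]], **kwargs: Any) -> str:
        # Replace with a request to your own inference endpoint.
        if isinstance(messages, str):
            return f"Echo: {messages}"
        return f"Echo: {messages[-1]['content']}"
```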

View File

@@ -1,5 +1,5 @@
---
title: Create Your Own Manager Agent
title: Custom Manager Agent
description: Learn how to set a custom agent as the manager in CrewAI, providing more control over task management and coordination.
icon: user-shield
---

View File

@@ -20,10 +20,8 @@ Here's an example of how to replay from a task:
To use the replay feature, follow these steps:
<Steps>
<Step title="Open your terminal or command prompt.">
</Step>
<Step title="Navigate to the directory where your CrewAI project is located.">
</Step>
<Step title="Open your terminal or command prompt."></Step>
<Step title="Navigate to the directory where your CrewAI project is located."></Step>
<Step title="Run the following commands:">
To view the latest kickoff task_ids use:

(Several binary image files added, including `docs/images/v01140.png`; previews not shown.)

View File

@@ -4,6 +4,21 @@ description: Get started with CrewAI - Install, configure, and build your first
icon: wrench
---
## Video Tutorial
Watch this video tutorial for a step-by-step demonstration of the installation process:
<iframe
width="100%"
height="400"
src="https://www.youtube.com/embed/-kSOTtYzgEw"
title="CrewAI Installation Guide"
frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
></iframe>
## Text Tutorial
<Note>
**Python Version Requirements**

View File

@@ -87,15 +87,20 @@ Follow the steps below to get Crewing! 🚣‍♂️
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List
@CrewBase
class LatestAiDevelopmentCrew():
"""LatestAiDevelopment crew"""
agents: List[BaseAgent]
tasks: List[Task]
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'],
config=self.agents_config['researcher'], # type: ignore[index]
verbose=True,
tools=[SerperDevTool()]
)
@@ -103,20 +108,20 @@ Follow the steps below to get Crewing! 🚣‍♂️
@agent
def reporting_analyst(self) -> Agent:
return Agent(
config=self.agents_config['reporting_analyst'],
config=self.agents_config['reporting_analyst'], # type: ignore[index]
verbose=True
)
@task
def research_task(self) -> Task:
return Task(
config=self.tasks_config['research_task'],
config=self.tasks_config['research_task'], # type: ignore[index]
)
@task
def reporting_task(self) -> Task:
return Task(
config=self.tasks_config['reporting_task'],
config=self.tasks_config['reporting_task'], # type: ignore[index]
output_file='output/report.md' # This is the file that will contain the final report.
)
@@ -331,9 +336,22 @@ email_summarizer_task:
- research_task
```
## Deploying Your Project
## Deploying Your Crew
The easiest way to deploy your crew is through [CrewAI Enterprise](http://app.crewai.com), where you can deploy your crew in a few clicks.
The easiest way to deploy your crew to production is through [CrewAI Enterprise](http://app.crewai.com).
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI Enterprise](http://app.crewai.com) using the CLI.
<iframe
width="100%"
height="400"
src="https://www.youtube.com/embed/3EqSV-CYDZA"
title="CrewAI Deployment Guide"
frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
></iframe>
<CardGroup cols={2}>
<Card

View File

@@ -8,11 +8,29 @@ icon: code-simple
## Description
The `CodeInterpreterTool` enables CrewAI agents to execute Python 3 code that they generate autonomously. The code is run in a secure, isolated Docker container, ensuring safety regardless of the content. This functionality is particularly valuable as it allows agents to create code, execute it, obtain the results, and utilize that information to inform subsequent decisions and actions.
The `CodeInterpreterTool` enables CrewAI agents to execute Python 3 code that they generate autonomously. This functionality is particularly valuable as it allows agents to create code, execute it, obtain the results, and utilize that information to inform subsequent decisions and actions.
## Requirements
There are several ways to use this tool:
### Docker Container (Recommended)
This is the primary option. The code runs in a secure, isolated Docker container, ensuring safety regardless of its content.
Make sure Docker is installed and running on your system. If you don't have it, you can install it from [here](https://docs.docker.com/get-docker/).
### Sandbox environment
If Docker is unavailable (either not installed or not accessible), the code is executed in a restricted Python environment called a sandbox.
This environment is very limited, with strict restrictions on many modules and built-in functions.
### Unsafe Execution
**NOT RECOMMENDED FOR PRODUCTION**
This mode allows execution of any Python code, including dangerous calls to modules such as `sys` and `os`. [Check out](/tools/codeinterpretertool#enabling-unsafe-mode) how to enable this mode.
## Logging
The `CodeInterpreterTool` logs the selected execution strategy to STDOUT.
- Docker must be installed and running on your system. If you don't have it, you can install it from [here](https://docs.docker.com/get-docker/).
## Installation
@@ -74,18 +92,32 @@ programmer_agent = Agent(
)
```
### Enabling `unsafe_mode`
```python Code
from crewai_tools import CodeInterpreterTool
code = """
import os
os.system("ls -la")
"""
CodeInterpreterTool(unsafe_mode=True).run(code=code)
```
## Parameters
The `CodeInterpreterTool` accepts the following parameters during initialization:
- **user_dockerfile_path**: Optional. Path to a custom Dockerfile to use for the code interpreter container.
- **user_docker_base_url**: Optional. URL to the Docker daemon to use for running the container.
- **unsafe_mode**: Optional. Whether to run code directly on the host machine instead of in a Docker container. Default is `False`. Use with caution!
- **unsafe_mode**: Optional. Whether to run code directly on the host machine instead of in a Docker container or sandbox. Default is `False`. Use with caution!
- **default_image_tag**: Optional. Default Docker image tag. Default is `code-interpreter:latest`
When using the tool with an agent, the agent will need to provide:
- **code**: Required. The Python 3 code to execute.
- **libraries_used**: Required. A list of libraries used in the code that need to be installed.
- **libraries_used**: Optional. A list of libraries used in the code that need to be installed. Default is `[]`
## Agent Integration Example
@@ -152,7 +184,7 @@ class CodeInterpreterTool(BaseTool):
if self.unsafe_mode:
return self.run_code_unsafe(code, libraries_used)
else:
return self.run_code_in_docker(code, libraries_used)
return self.run_code_safety(code, libraries_used)
```
The tool performs the following steps:
@@ -168,8 +200,9 @@ The tool performs the following steps:
By default, the `CodeInterpreterTool` runs code in an isolated Docker container, which provides a layer of security. However, there are still some security considerations to keep in mind:
1. The Docker container has access to the current working directory, so sensitive files could potentially be accessed.
2. The `unsafe_mode` parameter allows code to be executed directly on the host machine, which should only be used in trusted environments.
3. Be cautious when allowing agents to install arbitrary libraries, as they could potentially include malicious code.
2. If the Docker container is unavailable and the code needs to run safely, it will be executed in a sandbox environment. For security reasons, installing arbitrary libraries is not allowed.
3. The `unsafe_mode` parameter allows code to be executed directly on the host machine, which should only be used in trusted environments.
4. Be cautious when allowing agents to install arbitrary libraries, as they could potentially include malicious code.
## Conclusion

View File

@@ -30,7 +30,7 @@ pip install 'crewai[tools]'
Here are updated examples on how to utilize the JSONSearchTool effectively for searching within JSON files. These examples take into account the current implementation and usage patterns identified in the codebase.
```python Code
from crewai.json_tools import JSONSearchTool # Updated import path
from crewai_tools import JSONSearchTool
# General JSON content search
# This approach is suitable when the JSON path is either known beforehand or can be dynamically identified.

View File

@@ -1,10 +1,10 @@
---
title: Using LangChain Tools
description: Learn how to integrate LangChain tools with CrewAI agents to enhance search-based queries and more.
title: LangChain Tool
description: The `LangChainTool` is a wrapper for LangChain tools and query engines.
icon: link
---
## Using LangChain Tools
## `LangChainTool`
<Info>
CrewAI seamlessly integrates with LangChain's comprehensive [list of tools](https://python.langchain.com/docs/integrations/tools/), all of which can be used with CrewAI.

View File

@@ -25,7 +25,7 @@ uv add weaviate-client
To effectively use the `WeaviateVectorSearchTool`, follow these steps:
1. **Package Installation**: Confirm that the `crewai[tools]` and `weaviate-client` packages are installed in your Python environment.
2. **Weaviate Setup**: Set up a Weaviate cluster. You can follow the [Weaviate documentation](https://weaviate.io/developers/wcs/connect) for instructions.
2. **Weaviate Setup**: Set up a Weaviate cluster. You can follow the [Weaviate documentation](https://weaviate.io/developers/wcs/manage-clusters/connect) for instructions.
3. **API Keys**: Obtain your Weaviate cluster URL and API key.
4. **OpenAI API Key**: Ensure you have an OpenAI API key set in your environment variables as `OPENAI_API_KEY`.
@@ -161,4 +161,4 @@ rag_agent = Agent(
## Conclusion
The `WeaviateVectorSearchTool` provides a powerful way to search for semantically similar documents in a Weaviate vector database. By leveraging vector embeddings, it enables more accurate and contextually relevant search results compared to traditional keyword-based searches. This tool is particularly useful for applications that require finding information based on meaning rather than exact matches.
The `WeaviateVectorSearchTool` provides a powerful way to search for semantically similar documents in a Weaviate vector database. By leveraging vector embeddings, it enables more accurate and contextually relevant search results compared to traditional keyword-based searches. This tool is particularly useful for applications that require finding information based on meaning rather than exact matches.

View File

@@ -1,6 +1,6 @@
[project]
name = "crewai"
version = "0.108.0"
version = "0.117.1"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<3.13"
@@ -11,7 +11,7 @@ dependencies = [
# Core Dependencies
"pydantic>=2.4.2",
"openai>=1.13.3",
"litellm==1.60.2",
"litellm==1.67.2",
"instructor>=1.3.3",
# Text Processing
"pdfplumber>=0.11.4",
@@ -45,7 +45,7 @@ Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = ["crewai-tools~=0.40.1"]
tools = ["crewai-tools~=0.42.2"]
embeddings = [
"tiktoken~=0.7.0"
]
@@ -60,7 +60,7 @@ pandas = [
openpyxl = [
"openpyxl>=3.1.5",
]
mem0 = ["mem0ai>=0.1.29"]
mem0 = ["mem0ai>=0.1.94"]
docling = [
"docling>=2.12.0",
]
@@ -81,10 +81,10 @@ dev-dependencies = [
"pillow>=10.2.0",
"cairosvg>=2.7.1",
"pytest>=8.0.0",
"pytest-vcr>=1.0.2",
"python-dotenv>=1.0.0",
"pytest-asyncio>=0.23.7",
"pytest-subprocess>=1.5.2",
"pytest-recording>=0.13.2",
]
[project.scripts]

View File

@@ -17,7 +17,7 @@ warnings.filterwarnings(
category=UserWarning,
module="pydantic.main",
)
__version__ = "0.108.0"
__version__ = "0.117.1"
__all__ = [
"Agent",
"Crew",

View File

@@ -1,7 +1,6 @@
import re
import shutil
import subprocess
from typing import Any, Dict, List, Literal, Optional, Sequence, Union
from typing import Any, Dict, List, Literal, Optional, Sequence, Type, Union
from pydantic import Field, InstanceOf, PrivateAttr, model_validator
@@ -11,6 +10,7 @@ from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.knowledge.utils.knowledge_utils import extract_knowledge_context
from crewai.lite_agent import LiteAgent, LiteAgentOutput
from crewai.llm import BaseLLM
from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.security import Fingerprint
@@ -114,6 +114,14 @@ class Agent(BaseAgent):
default=None,
description="Embedder configuration for the agent.",
)
agent_knowledge_context: Optional[str] = Field(
default=None,
description="Knowledge context for the agent.",
)
crew_knowledge_context: Optional[str] = Field(
default=None,
description="Knowledge context for the crew.",
)
@model_validator(mode="after")
def post_init_setup(self):
@@ -156,11 +164,28 @@ class Agent(BaseAgent):
except (TypeError, ValueError) as e:
raise ValueError(f"Invalid Knowledge Configuration: {str(e)}")
def _is_any_available_memory(self) -> bool:
"""Check if any memory is available."""
if not self.crew:
return False
memory_attributes = [
"memory",
"memory_config",
"_short_term_memory",
"_long_term_memory",
"_entity_memory",
"_user_memory",
"_external_memory",
]
return any(getattr(self.crew, attr) for attr in memory_attributes)
def execute_task(
self,
task: Task,
context: Optional[str] = None,
tools: Optional[List[BaseTool]] = None,
tools: Optional[List[BaseTool]] = None
) -> str:
"""Execute a task with the agent.
@@ -171,6 +196,11 @@ class Agent(BaseAgent):
Returns:
Output of the agent
Raises:
TimeoutError: If execution exceeds the maximum execution time.
ValueError: If the max execution time is not a positive integer.
RuntimeError: If the agent execution fails for other reasons.
"""
if self.tools_handler:
self.tools_handler.last_used_tool = {} # type: ignore # Incompatible types in assignment (expression has type "dict[Never, Never]", variable has type "ToolCalling")
@@ -200,7 +230,7 @@ class Agent(BaseAgent):
task=task_prompt, context=context
)
if self.crew and self.crew.memory:
if self._is_any_available_memory():
contextual_memory = ContextualMemory(
self.crew.memory_config,
self.crew._short_term_memory,
@@ -212,22 +242,30 @@ class Agent(BaseAgent):
memory = contextual_memory.build_context_for_task(task, context)
if memory.strip() != "":
task_prompt += self.i18n.slice("memory").format(memory=memory)
knowledge_config = (
self.knowledge_config.model_dump() if self.knowledge_config else {}
)
if self.knowledge:
agent_knowledge_snippets = self.knowledge.query([task.prompt()])
agent_knowledge_snippets = self.knowledge.query(
[task.prompt()], **knowledge_config
)
if agent_knowledge_snippets:
agent_knowledge_context = extract_knowledge_context(
self.agent_knowledge_context = extract_knowledge_context(
agent_knowledge_snippets
)
if agent_knowledge_context:
task_prompt += agent_knowledge_context
if self.agent_knowledge_context:
task_prompt += self.agent_knowledge_context
if self.crew:
knowledge_snippets = self.crew.query_knowledge([task.prompt()])
knowledge_snippets = self.crew.query_knowledge(
[task.prompt()], **knowledge_config
)
if knowledge_snippets:
crew_knowledge_context = extract_knowledge_context(knowledge_snippets)
if crew_knowledge_context:
task_prompt += crew_knowledge_context
self.crew_knowledge_context = extract_knowledge_context(
knowledge_snippets
)
if self.crew_knowledge_context:
task_prompt += self.crew_knowledge_context
tools = tools or self.tools or []
self.create_agent_executor(tools=tools, task=task)
@@ -247,14 +285,26 @@ class Agent(BaseAgent):
task=task,
),
)
result = self.agent_executor.invoke(
{
"input": task_prompt,
"tool_names": self.agent_executor.tools_names,
"tools": self.agent_executor.tools_description,
"ask_for_human_input": task.human_input,
}
)["output"]
# Determine execution method based on timeout setting
if self.max_execution_time is not None:
if not isinstance(self.max_execution_time, int) or self.max_execution_time <= 0:
raise ValueError("Max Execution time must be a positive integer greater than zero")
result = self._execute_with_timeout(task_prompt, task, self.max_execution_time)
else:
result = self._execute_without_timeout(task_prompt, task)
except TimeoutError as e:
# Propagate TimeoutError without retry
crewai_event_bus.emit(
self,
event=AgentExecutionErrorEvent(
agent=self,
task=task,
error=str(e),
),
)
raise e
except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
@@ -295,6 +345,66 @@ class Agent(BaseAgent):
)
return result
def _execute_with_timeout(
self,
task_prompt: str,
task: Task,
timeout: int
) -> str:
"""Execute a task with a timeout.
Args:
task_prompt: The prompt to send to the agent.
task: The task being executed.
timeout: Maximum execution time in seconds.
Returns:
The output of the agent.
Raises:
TimeoutError: If execution exceeds the timeout.
RuntimeError: If execution fails for other reasons.
"""
import concurrent.futures
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(
self._execute_without_timeout,
task_prompt=task_prompt,
task=task
)
try:
return future.result(timeout=timeout)
except concurrent.futures.TimeoutError:
future.cancel()
raise TimeoutError(f"Task '{task.description}' execution timed out after {timeout} seconds. Consider increasing max_execution_time or optimizing the task.")
except Exception as e:
future.cancel()
raise RuntimeError(f"Task execution failed: {str(e)}")
def _execute_without_timeout(
self,
task_prompt: str,
task: Task
) -> str:
"""Execute a task without a timeout.
Args:
task_prompt: The prompt to send to the agent.
task: The task being executed.
Returns:
The output of the agent.
"""
return self.agent_executor.invoke(
{
"input": task_prompt,
"tool_names": self.agent_executor.tools_names,
"tools": self.agent_executor.tools_description,
"ask_for_human_input": task.human_input,
}
)["output"]
def create_agent_executor(
self, tools: Optional[List[BaseTool]] = None, task=None
) -> None:
@@ -449,3 +559,76 @@ class Agent(BaseAgent):
def set_fingerprint(self, fingerprint: Fingerprint):
self.security_config.fingerprint = fingerprint
def kickoff(
self,
messages: Union[str, List[Dict[str, str]]],
response_format: Optional[Type[Any]] = None,
) -> LiteAgentOutput:
"""
Execute the agent with the given messages using a LiteAgent instance.
This method is useful when you want to use the Agent configuration but
with the simpler and more direct execution flow of LiteAgent.
Args:
messages: Either a string query or a list of message dictionaries.
If a string is provided, it will be converted to a user message.
If a list is provided, each dict should have 'role' and 'content' keys.
response_format: Optional Pydantic model for structured output.
Returns:
LiteAgentOutput: The result of the agent execution.
"""
lite_agent = LiteAgent(
role=self.role,
goal=self.goal,
backstory=self.backstory,
llm=self.llm,
tools=self.tools or [],
max_iterations=self.max_iter,
max_execution_time=self.max_execution_time,
respect_context_window=self.respect_context_window,
verbose=self.verbose,
response_format=response_format,
i18n=self.i18n,
original_agent=self,
)
return lite_agent.kickoff(messages)
async def kickoff_async(
self,
messages: Union[str, List[Dict[str, str]]],
response_format: Optional[Type[Any]] = None,
) -> LiteAgentOutput:
"""
Execute the agent asynchronously with the given messages using a LiteAgent instance.
This is the async version of the kickoff method.
Args:
messages: Either a string query or a list of message dictionaries.
If a string is provided, it will be converted to a user message.
If a list is provided, each dict should have 'role' and 'content' keys.
response_format: Optional Pydantic model for structured output.
Returns:
LiteAgentOutput: The result of the agent execution.
"""
lite_agent = LiteAgent(
role=self.role,
goal=self.goal,
backstory=self.backstory,
llm=self.llm,
tools=self.tools or [],
max_iterations=self.max_iter,
max_execution_time=self.max_execution_time,
respect_context_window=self.respect_context_window,
verbose=self.verbose,
response_format=response_format,
i18n=self.i18n,
original_agent=self,
)
return await lite_agent.kickoff_async(messages)

View File

@@ -0,0 +1,42 @@
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional

from pydantic import PrivateAttr

from crewai.agent import BaseAgent
from crewai.tools import BaseTool


class BaseAgentAdapter(BaseAgent, ABC):
    """Base class for all agent adapters in CrewAI.

    This abstract class defines the common interface and functionality that all
    agent adapters must implement. It extends BaseAgent to maintain compatibility
    with the CrewAI framework while adding adapter-specific requirements.
    """

    adapted_structured_output: bool = False
    _agent_config: Optional[Dict[str, Any]] = PrivateAttr(default=None)
    model_config = {"arbitrary_types_allowed": True}

    def __init__(self, agent_config: Optional[Dict[str, Any]] = None, **kwargs: Any):
        super().__init__(adapted_agent=True, **kwargs)
        self._agent_config = agent_config

    @abstractmethod
    def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
        """Configure and adapt tools for the specific agent implementation.

        Args:
            tools: Optional list of BaseTool instances to be configured
        """
        pass

    def configure_structured_output(self, structured_output: Any) -> None:
        """Configure the structured output for the specific agent implementation.

        Args:
            structured_output: The structured output to be configured
        """
        pass

View File

@@ -0,0 +1,29 @@
from abc import ABC, abstractmethod


class BaseConverterAdapter(ABC):
    """Base class for all converter adapters in CrewAI.

    This abstract class defines the common interface and functionality that all
    converter adapters must implement for converting structured output.
    """

    def __init__(self, agent_adapter):
        self.agent_adapter = agent_adapter

    @abstractmethod
    def configure_structured_output(self, task) -> None:
        """Configure agents to return structured output.

        Must support json and pydantic output.
        """
        pass

    @abstractmethod
    def enhance_system_prompt(self, base_prompt: str) -> str:
        """Enhance the system prompt with structured output instructions."""
        pass

    @abstractmethod
    def post_process_result(self, result: str) -> str:
        """Post-process the result to ensure it matches the expected format: string."""
        pass

View File

@@ -0,0 +1,37 @@
from abc import ABC, abstractmethod
from typing import Any, List, Optional

from crewai.tools.base_tool import BaseTool


class BaseToolAdapter(ABC):
    """Base class for all tool adapters in CrewAI.

    This abstract class defines the common interface that all tool adapters
    must implement. It provides the structure for adapting CrewAI tools to
    different frameworks and platforms.
    """

    original_tools: List[BaseTool]
    converted_tools: List[Any]

    def __init__(self, tools: Optional[List[BaseTool]] = None):
        self.original_tools = tools or []
        self.converted_tools = []

    @abstractmethod
    def configure_tools(self, tools: List[BaseTool]) -> None:
        """Configure and convert tools for the specific implementation.

        Args:
            tools: List of BaseTool instances to be configured and converted
        """
        pass

    def tools(self) -> List[Any]:
        """Return all converted tools."""
        return self.converted_tools

    def sanitize_tool_name(self, tool_name: str) -> str:
        """Sanitize tool name for API compatibility."""
        return tool_name.replace(" ", "_")

View File

@@ -0,0 +1,226 @@
from typing import Any, AsyncIterable, Dict, List, Optional
from pydantic import Field, PrivateAttr
from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
from crewai.agents.agent_adapters.langgraph.langgraph_tool_adapter import (
LangGraphToolAdapter,
)
from crewai.agents.agent_adapters.langgraph.structured_output_converter import (
LangGraphConverterAdapter,
)
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.tools.base_tool import BaseTool
from crewai.utilities import Logger
from crewai.utilities.converter import Converter
from crewai.utilities.events import crewai_event_bus
from crewai.utilities.events.agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionErrorEvent,
AgentExecutionStartedEvent,
)
try:
from langchain_core.messages import ToolMessage
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent
LANGGRAPH_AVAILABLE = True
except ImportError:
LANGGRAPH_AVAILABLE = False
class LangGraphAgentAdapter(BaseAgentAdapter):
"""Adapter for LangGraph agents to work with CrewAI."""
model_config = {"arbitrary_types_allowed": True}
_logger: Logger = PrivateAttr(default_factory=lambda: Logger())
_tool_adapter: LangGraphToolAdapter = PrivateAttr()
_graph: Any = PrivateAttr(default=None)
_memory: Any = PrivateAttr(default=None)
_max_iterations: int = PrivateAttr(default=10)
function_calling_llm: Any = Field(default=None)
step_callback: Any = Field(default=None)
model: str = Field(default="gpt-4o")
verbose: bool = Field(default=False)
def __init__(
self,
role: str,
goal: str,
backstory: str,
tools: Optional[List[BaseTool]] = None,
llm: Any = None,
max_iterations: int = 10,
agent_config: Optional[Dict[str, Any]] = None,
**kwargs,
):
"""Initialize the LangGraph agent adapter."""
if not LANGGRAPH_AVAILABLE:
raise ImportError(
"LangGraph Agent Dependencies are not installed. Please install it using `uv add langchain-core langgraph`"
)
super().__init__(
role=role,
goal=goal,
backstory=backstory,
tools=tools,
llm=llm or self.model,
agent_config=agent_config,
**kwargs,
)
self._tool_adapter = LangGraphToolAdapter(tools=tools)
self._converter_adapter = LangGraphConverterAdapter(self)
self._max_iterations = max_iterations
self._setup_graph()
def _setup_graph(self) -> None:
"""Set up the LangGraph workflow graph."""
try:
self._memory = MemorySaver()
converted_tools: List[Any] = self._tool_adapter.tools()
if self._agent_config:
self._graph = create_react_agent(
model=self.llm,
tools=converted_tools,
checkpointer=self._memory,
debug=self.verbose,
**self._agent_config,
)
else:
self._graph = create_react_agent(
model=self.llm,
tools=converted_tools or [],
checkpointer=self._memory,
debug=self.verbose,
)
except ImportError as e:
self._logger.log(
"error", f"Failed to import LangGraph dependencies: {str(e)}"
)
raise
except Exception as e:
self._logger.log("error", f"Error setting up LangGraph agent: {str(e)}")
raise
def _build_system_prompt(self) -> str:
"""Build a system prompt for the LangGraph agent."""
base_prompt = f"""
You are {self.role}.
Your goal is: {self.goal}
Your backstory: {self.backstory}
When working on tasks, think step-by-step and use the available tools when necessary.
"""
return self._converter_adapter.enhance_system_prompt(base_prompt)
def execute_task(
self,
task: Any,
context: Optional[str] = None,
tools: Optional[List[BaseTool]] = None,
) -> str:
"""Execute a task using the LangGraph workflow."""
self.create_agent_executor(tools)
self.configure_structured_output(task)
try:
task_prompt = task.prompt() if hasattr(task, "prompt") else str(task)
if context:
task_prompt = self.i18n.slice("task_with_context").format(
task=task_prompt, context=context
)
crewai_event_bus.emit(
self,
event=AgentExecutionStartedEvent(
agent=self,
tools=self.tools,
task_prompt=task_prompt,
task=task,
),
)
session_id = f"task_{id(task)}"
config = {"configurable": {"thread_id": session_id}}
result = self._graph.invoke(
{
"messages": [
("system", self._build_system_prompt()),
("user", task_prompt),
]
},
config,
)
messages = result.get("messages", [])
last_message = messages[-1] if messages else None
final_answer = ""
if isinstance(last_message, dict):
final_answer = last_message.get("content", "")
elif hasattr(last_message, "content"):
final_answer = getattr(last_message, "content", "")
final_answer = (
self._converter_adapter.post_process_result(final_answer)
or "Task execution completed but no clear answer was provided."
)
crewai_event_bus.emit(
self,
event=AgentExecutionCompletedEvent(
agent=self, task=task, output=final_answer
),
)
return final_answer
except Exception as e:
self._logger.log("error", f"Error executing LangGraph task: {str(e)}")
crewai_event_bus.emit(
self,
event=AgentExecutionErrorEvent(
agent=self,
task=task,
error=str(e),
),
)
raise
def create_agent_executor(self, tools: Optional[List[BaseTool]] = None) -> None:
"""Configure the LangGraph agent for execution."""
self.configure_tools(tools)
def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
"""Configure tools for the LangGraph agent."""
if tools:
all_tools = list(self.tools or []) + list(tools or [])
self._tool_adapter.configure_tools(all_tools)
available_tools = self._tool_adapter.tools()
self._graph.tools = available_tools
def get_delegation_tools(self, agents: List[BaseAgent]) -> List[BaseTool]:
"""Implement delegation tools support for LangGraph."""
agent_tools = AgentTools(agents=agents)
return agent_tools.tools()
def get_output_converter(
self, llm: Any, text: str, model: Any, instructions: str
) -> Any:
"""Convert output format if needed."""
return Converter(llm=llm, text=text, model=model, instructions=instructions)
def configure_structured_output(self, task) -> None:
"""Configure the structured output for LangGraph."""
self._converter_adapter.configure_structured_output(task)
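For orientation, a minimal usage sketch of the adapter follows; the import path is inferred from this diff, and the example assumes `langgraph` and `langchain-core` are installed.

# Minimal sketch, not documented API; import path inferred from this diff.
from crewai import Crew, Task
from crewai.agents.agent_adapters.langgraph.langgraph_adapter import (
    LangGraphAgentAdapter,
)

researcher = LangGraphAgentAdapter(
    role="Research Analyst",
    goal="Summarize recent developments in open-source LLMs",
    backstory="A meticulous analyst who always cites sources.",
    llm="gpt-4o",
    max_iterations=5,
)

task = Task(
    description="Write a one-paragraph summary of open-source LLM trends.",
    expected_output="A single concise paragraph.",
    agent=researcher,
)

print(Crew(agents=[researcher], tasks=[task]).kickoff())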

View File

@@ -0,0 +1,61 @@
import inspect
from typing import Any, List, Optional
from crewai.agents.agent_adapters.base_tool_adapter import BaseToolAdapter
from crewai.tools.base_tool import BaseTool
class LangGraphToolAdapter(BaseToolAdapter):
"""Adapts CrewAI tools to LangGraph agent tool compatible format"""
def __init__(self, tools: Optional[List[BaseTool]] = None):
self.original_tools = tools or []
self.converted_tools = []
def configure_tools(self, tools: List[BaseTool]) -> None:
"""
Configure and convert CrewAI tools to LangGraph-compatible format.
LangGraph expects tools in langchain_core.tools format.
"""
from langchain_core.tools import BaseTool, StructuredTool
converted_tools = []
if self.original_tools:
all_tools = tools + self.original_tools
else:
all_tools = tools
for tool in all_tools:
if isinstance(tool, BaseTool):
converted_tools.append(tool)
continue
sanitized_name = self.sanitize_tool_name(tool.name)
async def tool_wrapper(*args, tool=tool, **kwargs):
output = None
if len(args) > 0 and isinstance(args[0], str):
output = tool.run(args[0])
elif "input" in kwargs:
output = tool.run(kwargs["input"])
else:
output = tool.run(**kwargs)
if inspect.isawaitable(output):
result = await output
else:
result = output
return result
converted_tool = StructuredTool(
name=sanitized_name,
description=tool.description,
                # The wrapper is async, so register it as the coroutine
                # implementation; `func` expects a synchronous callable.
                coroutine=tool_wrapper,
args_schema=tool.args_schema,
)
converted_tools.append(converted_tool)
self.converted_tools = converted_tools
def tools(self) -> List[Any]:
return self.converted_tools or []
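A short sketch of the tool adapter in isolation, with a toy CrewAI tool (the tool class is illustrative; langchain-core must be installed):

from crewai.tools import BaseTool

class EchoTool(BaseTool):
    # CrewAI derives args_schema from the _run signature when not provided.
    name: str = "Echo Tool"
    description: str = "Returns its input unchanged."

    def _run(self, text: str) -> str:
        return text

adapter = LangGraphToolAdapter()
adapter.configure_tools([EchoTool()])
print([t.name for t in adapter.tools()])  # names sanitized via sanitize_tool_name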

View File

@@ -0,0 +1,80 @@
import json
from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter
from crewai.utilities.converter import generate_model_description
class LangGraphConverterAdapter(BaseConverterAdapter):
"""Adapter for handling structured output conversion in LangGraph agents"""
def __init__(self, agent_adapter):
"""Initialize the converter adapter with a reference to the agent adapter"""
self.agent_adapter = agent_adapter
self._output_format = None
self._schema = None
self._system_prompt_appendix = None
def configure_structured_output(self, task) -> None:
"""Configure the structured output for LangGraph."""
if not (task.output_json or task.output_pydantic):
self._output_format = None
self._schema = None
self._system_prompt_appendix = None
return
if task.output_json:
self._output_format = "json"
self._schema = generate_model_description(task.output_json)
elif task.output_pydantic:
self._output_format = "pydantic"
self._schema = generate_model_description(task.output_pydantic)
self._system_prompt_appendix = self._generate_system_prompt_appendix()
def _generate_system_prompt_appendix(self) -> str:
"""Generate an appendix for the system prompt to enforce structured output"""
if not self._output_format or not self._schema:
return ""
return f"""
Important: Your final answer MUST be provided in the following structured format:
{self._schema}
DO NOT include any markdown code blocks, backticks, or other formatting around your response.
The output should be raw JSON that exactly matches the specified schema.
"""
def enhance_system_prompt(self, original_prompt: str) -> str:
"""Add structured output instructions to the system prompt if needed"""
if not self._system_prompt_appendix:
return original_prompt
return f"{original_prompt}\n{self._system_prompt_appendix}"
def post_process_result(self, result: str) -> str:
"""Post-process the result to ensure it matches the expected format"""
if not self._output_format:
return result
# Try to extract valid JSON if it's wrapped in code blocks or other text
if self._output_format in ["json", "pydantic"]:
try:
# First, try to parse as is
json.loads(result)
return result
except json.JSONDecodeError:
# Try to extract JSON from the text
import re
json_match = re.search(r"(\{.*\})", result, re.DOTALL)
if json_match:
try:
extracted = json_match.group(1)
# Validate it's proper JSON
json.loads(extracted)
return extracted
                    except json.JSONDecodeError:
                        pass
return result
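A standalone demonstration of the JSON-recovery path (setting the private format field directly, which configure_structured_output would normally do):

adapter = LangGraphConverterAdapter(agent_adapter=None)
adapter._output_format = "json"  # normally set via configure_structured_output

raw = 'Here is the result: {"title": "Report", "score": 0.9}'
print(adapter.post_process_result(raw))
# -> {"title": "Report", "score": 0.9}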

View File

@@ -0,0 +1,178 @@
from typing import Any, List, Optional
from pydantic import Field, PrivateAttr
from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
from crewai.agents.agent_adapters.openai_agents.structured_output_converter import (
OpenAIConverterAdapter,
)
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tools import BaseTool
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.utilities import Logger
from crewai.utilities.events import crewai_event_bus
from crewai.utilities.events.agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionErrorEvent,
AgentExecutionStartedEvent,
)
try:
from agents import Agent as OpenAIAgent # type: ignore
from agents import Runner, enable_verbose_stdout_logging # type: ignore
from .openai_agent_tool_adapter import OpenAIAgentToolAdapter
OPENAI_AVAILABLE = True
except ImportError:
OPENAI_AVAILABLE = False
class OpenAIAgentAdapter(BaseAgentAdapter):
"""Adapter for OpenAI Assistants"""
model_config = {"arbitrary_types_allowed": True}
_openai_agent: "OpenAIAgent" = PrivateAttr()
_logger: Logger = PrivateAttr(default_factory=lambda: Logger())
_active_thread: Optional[str] = PrivateAttr(default=None)
function_calling_llm: Any = Field(default=None)
step_callback: Any = Field(default=None)
_tool_adapter: "OpenAIAgentToolAdapter" = PrivateAttr()
_converter_adapter: OpenAIConverterAdapter = PrivateAttr()
def __init__(
self,
model: str = "gpt-4o-mini",
tools: Optional[List[BaseTool]] = None,
agent_config: Optional[dict] = None,
**kwargs,
):
        if not OPENAI_AVAILABLE:
            raise ImportError(
                "OpenAI agent dependencies are not installed. Please install them with `uv add openai-agents`"
            )
        role = kwargs.pop("role", None)
        goal = kwargs.pop("goal", None)
        backstory = kwargs.pop("backstory", None)
        super().__init__(
            role=role,
            goal=goal,
            backstory=backstory,
            tools=tools,
            agent_config=agent_config,
            **kwargs,
        )
        self._tool_adapter = OpenAIAgentToolAdapter(tools=tools)
        self.llm = model
        self._converter_adapter = OpenAIConverterAdapter(self)
def _build_system_prompt(self) -> str:
"""Build a system prompt for the OpenAI agent."""
base_prompt = f"""
You are {self.role}.
Your goal is: {self.goal}
Your backstory: {self.backstory}
When working on tasks, think step-by-step and use the available tools when necessary.
"""
return self._converter_adapter.enhance_system_prompt(base_prompt)
def execute_task(
self,
task: Any,
context: Optional[str] = None,
tools: Optional[List[BaseTool]] = None,
) -> str:
"""Execute a task using the OpenAI Assistant"""
self._converter_adapter.configure_structured_output(task)
self.create_agent_executor(tools)
if self.verbose:
enable_verbose_stdout_logging()
try:
task_prompt = task.prompt()
if context:
task_prompt = self.i18n.slice("task_with_context").format(
task=task_prompt, context=context
)
crewai_event_bus.emit(
self,
event=AgentExecutionStartedEvent(
agent=self,
tools=self.tools,
task_prompt=task_prompt,
task=task,
),
)
result = self.agent_executor.run_sync(self._openai_agent, task_prompt)
final_answer = self.handle_execution_result(result)
crewai_event_bus.emit(
self,
event=AgentExecutionCompletedEvent(
agent=self, task=task, output=final_answer
),
)
return final_answer
except Exception as e:
self._logger.log("error", f"Error executing OpenAI task: {str(e)}")
crewai_event_bus.emit(
self,
event=AgentExecutionErrorEvent(
agent=self,
task=task,
error=str(e),
),
)
raise
def create_agent_executor(self, tools: Optional[List[BaseTool]] = None) -> None:
"""
Configure the OpenAI agent for execution.
While OpenAI handles execution differently through Runner,
we can use this method to set up tools and configurations.
"""
all_tools = list(self.tools or []) + list(tools or [])
instructions = self._build_system_prompt()
self._openai_agent = OpenAIAgent(
name=self.role,
instructions=instructions,
model=self.llm,
**self._agent_config or {},
)
if all_tools:
self.configure_tools(all_tools)
self.agent_executor = Runner
def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
"""Configure tools for the OpenAI Assistant"""
if tools:
self._tool_adapter.configure_tools(tools)
if self._tool_adapter.converted_tools:
self._openai_agent.tools = self._tool_adapter.converted_tools
def handle_execution_result(self, result: Any) -> str:
"""Process OpenAI Assistant execution result converting any structured output to a string"""
return self._converter_adapter.post_process_result(result.final_output)
def get_delegation_tools(self, agents: List[BaseAgent]) -> List[BaseTool]:
"""Implement delegation tools support"""
agent_tools = AgentTools(agents=agents)
tools = agent_tools.tools()
return tools
def configure_structured_output(self, task) -> None:
"""Configure the structured output for the specific agent implementation.
        Args:
            task: The task containing output format requirements
        """
self._converter_adapter.configure_structured_output(task)
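A usage sketch for the OpenAI adapter, this time with a Pydantic output model so the converter path is exercised; the import path is inferred from this diff.

from pydantic import BaseModel

from crewai import Crew, Task
from crewai.agents.agent_adapters.openai_agents.openai_adapter import (
    OpenAIAgentAdapter,
)

class Summary(BaseModel):
    title: str
    bullet_points: list[str]

writer = OpenAIAgentAdapter(
    role="Technical Writer",
    goal="Produce structured summaries",
    backstory="Writes tight, well-organized prose.",
    model="gpt-4o-mini",
)

task = Task(
    description="Summarize the benefits of agent adapters.",
    expected_output="A structured summary.",
    output_pydantic=Summary,  # handled by OpenAIConverterAdapter
    agent=writer,
)

print(Crew(agents=[writer], tasks=[task]).kickoff())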

View File

@@ -0,0 +1,91 @@
import inspect
from typing import Any, List, Optional
from agents import FunctionTool, Tool
from crewai.agents.agent_adapters.base_tool_adapter import BaseToolAdapter
from crewai.tools import BaseTool
class OpenAIAgentToolAdapter(BaseToolAdapter):
"""Adapter for OpenAI Assistant tools"""
def __init__(self, tools: Optional[List[BaseTool]] = None):
self.original_tools = tools or []
def configure_tools(self, tools: List[BaseTool]) -> None:
"""Configure tools for the OpenAI Assistant"""
if self.original_tools:
all_tools = tools + self.original_tools
else:
all_tools = tools
if all_tools:
self.converted_tools = self._convert_tools_to_openai_format(all_tools)
def _convert_tools_to_openai_format(
self, tools: Optional[List[BaseTool]]
) -> List[Tool]:
"""Convert CrewAI tools to OpenAI Assistant tool format"""
if not tools:
return []
def sanitize_tool_name(name: str) -> str:
"""Convert tool name to match OpenAI's required pattern"""
import re
sanitized = re.sub(r"[^a-zA-Z0-9_-]", "_", name).lower()
return sanitized
def create_tool_wrapper(tool: BaseTool):
"""Create a wrapper function that handles the OpenAI function tool interface"""
async def wrapper(context_wrapper: Any, arguments: Any) -> Any:
# Get the parameter name from the schema
param_name = list(
tool.args_schema.model_json_schema()["properties"].keys()
)[0]
# Handle different argument types
if isinstance(arguments, dict):
args_dict = arguments
elif isinstance(arguments, str):
try:
import json
args_dict = json.loads(arguments)
except json.JSONDecodeError:
args_dict = {param_name: arguments}
else:
args_dict = {param_name: str(arguments)}
# Run the tool with the processed arguments
output = tool._run(**args_dict)
# Await if the tool returned a coroutine
if inspect.isawaitable(output):
result = await output
else:
result = output
# Ensure the result is JSON serializable
if isinstance(result, (dict, list, str, int, float, bool, type(None))):
return result
return str(result)
return wrapper
openai_tools = []
for tool in tools:
schema = tool.args_schema.model_json_schema()
schema.update({"additionalProperties": False, "type": "object"})
openai_tool = FunctionTool(
name=sanitize_tool_name(tool.name),
description=tool.description,
params_json_schema=schema,
on_invoke_tool=create_tool_wrapper(tool),
)
openai_tools.append(openai_tool)
return openai_tools
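The wrapper's argument normalization deserves a note: it accepts a dict, a JSON string, or a bare value, and coerces all three into keyword arguments. A standalone sketch of that logic (function name illustrative):

import json

def normalize_arguments(arguments, param_name: str) -> dict:
    # Mirrors the wrapper above: dicts pass through, JSON strings are parsed,
    # and anything else is coerced into the tool's first declared parameter.
    if isinstance(arguments, dict):
        return arguments
    if isinstance(arguments, str):
        try:
            return json.loads(arguments)
        except json.JSONDecodeError:
            return {param_name: arguments}
    return {param_name: str(arguments)}

assert normalize_arguments('{"text": "hi"}', "text") == {"text": "hi"}
assert normalize_arguments("hi", "text") == {"text": "hi"}
assert normalize_arguments(42, "text") == {"text": "42"}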

View File

@@ -0,0 +1,122 @@
import json
import re
from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter
from crewai.utilities.converter import generate_model_description
from crewai.utilities.i18n import I18N
class OpenAIConverterAdapter(BaseConverterAdapter):
"""
Adapter for handling structured output conversion in OpenAI agents.
This adapter enhances the OpenAI agent to handle structured output formats
and post-processes the results when needed.
Attributes:
_output_format: The expected output format (json, pydantic, or None)
_schema: The schema description for the expected output
_output_model: The Pydantic model for the output
"""
def __init__(self, agent_adapter):
"""Initialize the converter adapter with a reference to the agent adapter"""
self.agent_adapter = agent_adapter
self._output_format = None
self._schema = None
self._output_model = None
def configure_structured_output(self, task) -> None:
"""
Configure the structured output for OpenAI agent based on task requirements.
Args:
task: The task containing output format requirements
"""
# Reset configuration
self._output_format = None
self._schema = None
self._output_model = None
# If no structured output is required, return early
if not (task.output_json or task.output_pydantic):
return
# Configure based on task output format
if task.output_json:
self._output_format = "json"
self._schema = generate_model_description(task.output_json)
self.agent_adapter._openai_agent.output_type = task.output_json
self._output_model = task.output_json
elif task.output_pydantic:
self._output_format = "pydantic"
self._schema = generate_model_description(task.output_pydantic)
self.agent_adapter._openai_agent.output_type = task.output_pydantic
self._output_model = task.output_pydantic
def enhance_system_prompt(self, base_prompt: str) -> str:
"""
Enhance the base system prompt with structured output requirements if needed.
Args:
base_prompt: The original system prompt
Returns:
Enhanced system prompt with output format instructions if needed
"""
if not self._output_format:
return base_prompt
output_schema = (
I18N()
.slice("formatted_task_instructions")
.format(output_format=self._schema)
)
return f"{base_prompt}\n\n{output_schema}"
def post_process_result(self, result: str) -> str:
"""
Post-process the result to ensure it matches the expected format.
This method attempts to extract valid JSON from the result if necessary.
Args:
result: The raw result from the agent
Returns:
Processed result conforming to the expected output format
"""
if not self._output_format:
return result
# Try to extract valid JSON if it's wrapped in code blocks or other text
if isinstance(result, str) and self._output_format in ["json", "pydantic"]:
# First, try to parse as is
try:
json.loads(result)
return result
except json.JSONDecodeError:
# Try to extract JSON from markdown code blocks
code_block_pattern = r"```(?:json)?\s*([\s\S]*?)```"
code_blocks = re.findall(code_block_pattern, result)
for block in code_blocks:
try:
json.loads(block.strip())
return block.strip()
except json.JSONDecodeError:
continue
# Try to extract any JSON-like structure
json_pattern = r"(\{[\s\S]*\})"
json_matches = re.findall(json_pattern, result, re.DOTALL)
for match in json_matches:
try:
json.loads(match)
return match
except json.JSONDecodeError:
continue
# If all extraction attempts fail, return the original
return str(result)
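Unlike the LangGraph converter, this one also peels markdown code fences before falling back to brace matching; a quick standalone check:

adapter = OpenAIConverterAdapter(agent_adapter=None)
adapter._output_format = "json"  # normally set by configure_structured_output

fenced = 'Sure, here you go:\n```json\n{"ok": true}\n```'
print(adapter.post_process_result(fenced))  # -> {"ok": true}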

View File

@@ -19,6 +19,7 @@ from crewai.agents.agent_builder.utilities.base_token_process import TokenProces
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.tools_handler import ToolsHandler
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.knowledge_config import KnowledgeConfig
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.security.security_config import SecurityConfig
from crewai.tools.base_tool import BaseTool, Tool
@@ -62,8 +63,6 @@ class BaseAgent(ABC, BaseModel):
Abstract method to execute a task.
create_agent_executor(tools=None) -> None:
Abstract method to create an agent executor.
_parse_tools(tools: List[BaseTool]) -> List[Any]:
Abstract method to parse tools.
get_delegation_tools(agents: List["BaseAgent"]):
            Abstract method to set the agent's task tools for handling delegation and asking questions of other agents in the crew.
get_output_converter(llm, model, instructions):
@@ -154,6 +153,13 @@ class BaseAgent(ABC, BaseModel):
callbacks: List[Callable] = Field(
default=[], description="Callbacks to be used for the agent"
)
adapted_agent: bool = Field(
default=False, description="Whether the agent is adapted"
)
knowledge_config: Optional[KnowledgeConfig] = Field(
default=None,
description="Knowledge configuration for the agent such as limits and threshold",
)
@model_validator(mode="before")
@classmethod
@@ -170,15 +176,15 @@ class BaseAgent(ABC, BaseModel):
tool meets these criteria, it is processed and added to the list of
tools. Otherwise, a ValueError is raised.
"""
if not tools:
return []
processed_tools = []
required_attrs = ["name", "func", "description"]
for tool in tools:
if isinstance(tool, BaseTool):
processed_tools.append(tool)
elif (
hasattr(tool, "name")
and hasattr(tool, "func")
and hasattr(tool, "description")
):
elif all(hasattr(tool, attr) for attr in required_attrs):
# Tool has the required attributes, create a Tool instance
processed_tools.append(Tool.from_langchain(tool))
else:
@@ -260,13 +266,6 @@ class BaseAgent(ABC, BaseModel):
"""Set the task tools that init BaseAgenTools class."""
pass
@abstractmethod
def get_output_converter(
self, llm: Any, text: str, model: type[BaseModel] | None, instructions: str
) -> Converter:
"""Get the converter class for the agent to create json/pydantic outputs."""
pass
def copy(self: T) -> T: # type: ignore # Signature of "copy" incompatible with supertype "BaseModel"
"""Create a deep copy of the Agent."""
exclude = {

View File

@@ -72,7 +72,6 @@ class CrewAgentExecutorMixin:
"""Create and save long-term and entity memory items based on evaluation."""
if (
self.crew
and self.crew.memory
and self.crew._long_term_memory
and self.crew._entity_memory
and self.task
@@ -114,6 +113,15 @@ class CrewAgentExecutorMixin:
except Exception as e:
print(f"Failed to add to long term memory: {e}")
pass
elif (
self.crew
and self.crew._long_term_memory
and self.crew._entity_memory is None
):
self._printer.print(
content="Long term memory is enabled, but entity memory is not enabled. Please configure entity memory or set memory=True to automatically enable it.",
color="bold_yellow",
)
def _ask_human_input(self, final_answer: str) -> str:
"""Prompt human input with mode-appropriate messaging."""

View File

@@ -91,6 +91,12 @@ ENV_VARS = {
"key_name": "CEREBRAS_API_KEY",
},
],
"huggingface": [
{
"prompt": "Enter your Huggingface API key (HF_TOKEN) (press Enter to skip)",
"key_name": "HF_TOKEN",
},
],
"sambanova": [
{
"prompt": "Enter your SambaNovaCloud API key (press Enter to skip)",
@@ -106,6 +112,7 @@ PROVIDERS = [
"gemini",
"nvidia_nim",
"groq",
"huggingface",
"ollama",
"watson",
"bedrock",
@@ -115,7 +122,16 @@ PROVIDERS = [
]
MODELS = {
"openai": ["gpt-4", "gpt-4o", "gpt-4o-mini", "o1-mini", "o1-preview"],
"openai": [
"gpt-4",
"gpt-4.1",
"gpt-4.1-mini-2025-04-14",
"gpt-4.1-nano-2025-04-14",
"gpt-4o",
"gpt-4o-mini",
"o1-mini",
"o1-preview",
],
"anthropic": [
"claude-3-5-sonnet-20240620",
"claude-3-sonnet-20240229",
@@ -125,8 +141,17 @@ MODELS = {
"gemini": [
"gemini/gemini-1.5-flash",
"gemini/gemini-1.5-pro",
"gemini/gemini-2.0-flash-lite-001",
"gemini/gemini-2.0-flash-001",
"gemini/gemini-2.0-flash-thinking-exp-01-21",
"gemini/gemini-2.5-flash-preview-04-17",
"gemini/gemini-2.5-pro-exp-03-25",
"gemini/gemini-gemma-2-9b-it",
"gemini/gemini-gemma-2-27b-it",
"gemini/gemma-3-1b-it",
"gemini/gemma-3-4b-it",
"gemini/gemma-3-12b-it",
"gemini/gemma-3-27b-it",
],
"nvidia_nim": [
"nvidia_nim/nvidia/mistral-nemo-minitron-8b-8k-instruct",
@@ -270,6 +295,12 @@ MODELS = {
"bedrock/mistral.mistral-7b-instruct-v0:2",
"bedrock/mistral.mixtral-8x7b-instruct-v0:1",
],
"huggingface": [
"huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct",
"huggingface/mistralai/Mixtral-8x7B-Instruct-v0.1",
"huggingface/tiiuae/falcon-180B-chat",
"huggingface/google/gemma-7b-it",
],
"sambanova": [
"sambanova/Meta-Llama-3.3-70B-Instruct",
"sambanova/QwQ-32B-Preview",

View File

@@ -1,6 +1,7 @@
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List
# If you want to run a snippet of code before or after the crew starts,
# you can use the @before_kickoff and @after_kickoff decorators
# https://docs.crewai.com/concepts/crews#example-crew-class-with-decorators
@@ -9,25 +10,26 @@ from crewai.project import CrewBase, agent, crew, task
class {{crew_name}}():
"""{{crew_name}} crew"""
agents: List[BaseAgent]
tasks: List[Task]
# Learn more about YAML configuration files here:
# Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
# Tasks: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml'
# If you would like to add tools to your agents, you can learn more about it here:
# https://docs.crewai.com/concepts/agents#agent-tools
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'],
config=self.agents_config['researcher'], # type: ignore[index]
verbose=True
)
@agent
def reporting_analyst(self) -> Agent:
return Agent(
config=self.agents_config['reporting_analyst'],
config=self.agents_config['reporting_analyst'], # type: ignore[index]
verbose=True
)
@@ -37,13 +39,13 @@ class {{crew_name}}():
@task
def research_task(self) -> Task:
return Task(
config=self.tasks_config['research_task'],
config=self.tasks_config['research_task'], # type: ignore[index]
)
@task
def reporting_task(self) -> Task:
return Task(
config=self.tasks_config['reporting_task'],
config=self.tasks_config['reporting_task'], # type: ignore[index]
output_file='report.md'
)

View File

@@ -33,7 +33,8 @@ def train():
Train the crew for a given number of iterations.
"""
inputs = {
"topic": "AI LLMs"
"topic": "AI LLMs",
'current_year': str(datetime.now().year)
}
try:
{{crew_name}}().crew().train(n_iterations=int(sys.argv[1]), filename=sys.argv[2], inputs=inputs)
@@ -59,8 +60,9 @@ def test():
"topic": "AI LLMs",
"current_year": str(datetime.now().year)
}
try:
{{crew_name}}().crew().test(n_iterations=int(sys.argv[1]), openai_model_name=sys.argv[2], inputs=inputs)
{{crew_name}}().crew().test(n_iterations=int(sys.argv[1]), eval_llm=sys.argv[2], inputs=inputs)
except Exception as e:
raise Exception(f"An error occurred while testing the crew: {e}")

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.108.0,<1.0.0"
"crewai[tools]>=0.117.1,<1.0.0"
]
[project.scripts]

View File

@@ -1,5 +1,7 @@
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List
# If you want to run a snippet of code before or after the crew starts,
# you can use the @before_kickoff and @after_kickoff decorators
@@ -10,6 +12,9 @@ from crewai.project import CrewBase, agent, crew, task
class PoemCrew:
"""Poem Crew"""
agents: List[BaseAgent]
tasks: List[Task]
# Learn more about YAML configuration files here:
# Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
# Tasks: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
@@ -21,7 +26,7 @@ class PoemCrew:
@agent
def poem_writer(self) -> Agent:
return Agent(
config=self.agents_config["poem_writer"],
config=self.agents_config["poem_writer"], # type: ignore[index]
)
# To learn more about structured task outputs,
@@ -30,7 +35,7 @@ class PoemCrew:
@task
def write_poem(self) -> Task:
return Task(
config=self.tasks_config["write_poem"],
config=self.tasks_config["write_poem"], # type: ignore[index]
)
@crew

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.108.0,<1.0.0",
"crewai[tools]>=0.117.1,<1.0.0",
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.108.0"
"crewai[tools]>=0.117.1"
]
[tool.crewai]

View File

@@ -117,7 +117,9 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
published_handle = publish_response.json()["handle"]
console.print(
f"Successfully published {published_handle} ({project_version}).\nInstall it in other projects with crewai tool install {published_handle}",
f"Successfully published `{published_handle}` ({project_version}).\n\n"
+ "⚠️ Security checks are running in the background. Your tool will be available once these are complete.\n"
+ f"You can monitor the status or access your tool here:\nhttps://app.crewai.com/crewai_plus/tools/{published_handle}",
style="bold green",
)
@@ -153,8 +155,12 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
login_response_json = login_response.json()
settings = Settings()
settings.tool_repository_username = login_response_json["credential"]["username"]
settings.tool_repository_password = login_response_json["credential"]["password"]
settings.tool_repository_username = login_response_json["credential"][
"username"
]
settings.tool_repository_password = login_response_json["credential"][
"password"
]
settings.dump()
console.print(
@@ -179,7 +185,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
capture_output=False,
env=self._build_env_with_credentials(repository_handle),
text=True,
check=True
check=True,
)
if add_package_result.stderr:
@@ -204,7 +210,11 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
settings = Settings()
env = os.environ.copy()
env[f"UV_INDEX_{repository_handle}_USERNAME"] = str(settings.tool_repository_username or "")
env[f"UV_INDEX_{repository_handle}_PASSWORD"] = str(settings.tool_repository_password or "")
env[f"UV_INDEX_{repository_handle}_USERNAME"] = str(
settings.tool_repository_username or ""
)
env[f"UV_INDEX_{repository_handle}_PASSWORD"] = str(
settings.tool_repository_password or ""
)
return env

View File

@@ -273,11 +273,9 @@ def get_crew(crew_path: str = "crew.py", require: bool = False) -> Crew | None:
for attr_name in dir(module):
attr = getattr(module, attr_name)
try:
if isinstance(attr, Crew) and hasattr(attr, "kickoff"):
print(
f"Found valid crew object in attribute '{attr_name}' at {crew_os_path}."
)
return attr
if callable(attr) and hasattr(attr, "crew"):
crew_instance = attr().crew()
return crew_instance
except Exception as e:
print(f"Error processing attribute {attr_name}: {e}")

View File

@@ -275,46 +275,49 @@ class Crew(BaseModel):
return self
def _initialize_user_memory(self):
if (
self.memory_config
and "user_memory" in self.memory_config
and self.memory_config.get("provider") == "mem0"
): # Check for user_memory in config
user_memory_config = self.memory_config["user_memory"]
if isinstance(
user_memory_config, dict
): # Check if it's a configuration dict
self._user_memory = UserMemory(crew=self)
else:
raise TypeError("user_memory must be a configuration dictionary")
def _initialize_default_memories(self):
self._long_term_memory = self._long_term_memory or LongTermMemory()
self._short_term_memory = self._short_term_memory or ShortTermMemory(
crew=self,
embedder_config=self.embedder,
)
self._entity_memory = self.entity_memory or EntityMemory(
crew=self, embedder_config=self.embedder
)
@model_validator(mode="after")
def create_crew_memory(self) -> "Crew":
"""Set private attributes."""
"""Initialize private memory attributes."""
self._external_memory = (
            # External memory doesn't support a default value since it was designed to be managed entirely externally
self.external_memory.set_crew(self) if self.external_memory else None
)
self._long_term_memory = self.long_term_memory
self._short_term_memory = self.short_term_memory
self._entity_memory = self.entity_memory
        # UserMemory is going to be deprecated, but we have to initialize a default value for now
self._user_memory = None
if self.memory:
self._long_term_memory = (
self.long_term_memory if self.long_term_memory else LongTermMemory()
)
self._short_term_memory = (
self.short_term_memory
if self.short_term_memory
else ShortTermMemory(
crew=self,
embedder_config=self.embedder,
)
)
self._entity_memory = (
self.entity_memory
if self.entity_memory
else EntityMemory(crew=self, embedder_config=self.embedder)
)
self._external_memory = (
                # External memory doesn't support a default value since it was designed to be managed entirely externally
self.external_memory.set_crew(self)
if self.external_memory
else None
)
if (
self.memory_config
and "user_memory" in self.memory_config
and self.memory_config.get("provider") == "mem0"
): # Check for user_memory in config
user_memory_config = self.memory_config["user_memory"]
if isinstance(
user_memory_config, dict
): # Check if it's a configuration dict
self._user_memory = UserMemory(crew=self)
else:
raise TypeError("user_memory must be a configuration dictionary")
else:
self._user_memory = None # No user memory if not in config
self._initialize_default_memories()
self._initialize_user_memory()
return self
@model_validator(mode="after")
@@ -1131,9 +1134,13 @@ class Crew(BaseModel):
result = self._execute_tasks(self.tasks, start_index, True)
return result
def query_knowledge(self, query: List[str]) -> Union[List[Dict[str, Any]], None]:
def query_knowledge(
self, query: List[str], results_limit: int = 3, score_threshold: float = 0.35
) -> Union[List[Dict[str, Any]], None]:
if self.knowledge:
return self.knowledge.query(query)
return self.knowledge.query(
query, results_limit=results_limit, score_threshold=score_threshold
)
return None
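    # Usage sketch (assumes a crew built with knowledge sources attached):
    #
    #     hits = crew.query_knowledge(
    #         ["vector databases"],
    #         results_limit=5,       # up from the default of 3
    #         score_threshold=0.5,   # stricter than the 0.35 default
    #     )
    #     if hits:
    #         for hit in hits:
    #             print(hit)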
def fetch_inputs(self) -> Set[str]:
@@ -1214,6 +1221,20 @@ class Crew(BaseModel):
copied_data = self.model_dump(exclude=exclude)
copied_data = {k: v for k, v in copied_data.items() if v is not None}
if self.short_term_memory:
copied_data["short_term_memory"] = self.short_term_memory.model_copy(
deep=True
)
if self.long_term_memory:
copied_data["long_term_memory"] = self.long_term_memory.model_copy(
deep=True
)
if self.entity_memory:
copied_data["entity_memory"] = self.entity_memory.model_copy(deep=True)
if self.external_memory:
copied_data["external_memory"] = self.external_memory.model_copy(deep=True)
if self.user_memory:
copied_data["user_memory"] = self.user_memory.model_copy(deep=True)
copied_data.pop("agents", None)
copied_data.pop("tasks", None)
@@ -1383,12 +1404,15 @@ class Crew(BaseModel):
RuntimeError: If the specified memory system fails to reset
"""
reset_functions = {
"long": (self._long_term_memory, "long term"),
"short": (self._short_term_memory, "short term"),
"entity": (self._entity_memory, "entity"),
"knowledge": (self.knowledge, "knowledge"),
"kickoff_outputs": (self._task_output_handler, "task output"),
"external": (self._external_memory, "external"),
"long": (getattr(self, "_long_term_memory", None), "long term"),
"short": (getattr(self, "_short_term_memory", None), "short term"),
"entity": (getattr(self, "_entity_memory", None), "entity"),
"knowledge": (getattr(self, "knowledge", None), "knowledge"),
"kickoff_outputs": (
getattr(self, "_task_output_handler", None),
"task output",
),
"external": (getattr(self, "_external_memory", None), "external"),
}
memory_system, name = reset_functions[memory_type]

View File

@@ -1043,6 +1043,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
import traceback
traceback.print_exc()
raise
def _log_flow_event(
self, message: str, color: str = "yellow", level: str = "info"

View File

@@ -21,7 +21,7 @@ class SQLiteFlowPersistence(FlowPersistence):
moderate performance requirements.
"""
db_path: str # Type annotation for instance variable
db_path: str
def __init__(self, db_path: Optional[str] = None):
"""Initialize SQLite persistence.

View File

@@ -43,7 +43,9 @@ class Knowledge(BaseModel):
self.storage.initialize_knowledge_storage()
self._add_sources()
def query(self, query: List[str], limit: int = 3) -> List[Dict[str, Any]]:
def query(
self, query: List[str], results_limit: int = 3, score_threshold: float = 0.35
) -> List[Dict[str, Any]]:
"""
Query across all knowledge sources to find the most relevant information.
        Returns the `results_limit` most relevant chunks.
@@ -56,7 +58,8 @@ class Knowledge(BaseModel):
results = self.storage.search(
query,
limit,
limit=results_limit,
score_threshold=score_threshold,
)
return results

View File

@@ -0,0 +1,16 @@
from pydantic import BaseModel, Field
class KnowledgeConfig(BaseModel):
"""Configuration for knowledge retrieval.
    Attributes:
results_limit (int): The number of relevant documents to return.
score_threshold (float): The minimum score for a document to be considered relevant.
"""
results_limit: int = Field(default=3, description="The number of results to return")
score_threshold: float = Field(
default=0.35,
description="The minimum score for a result to be considered relevant",
)
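A sketch of how this configuration plugs into the knowledge_config field added to BaseAgent earlier in this diff (the Agent wiring is an assumption based on that field, not documented API):

from crewai import Agent
from crewai.knowledge.knowledge_config import KnowledgeConfig

analyst = Agent(
    role="Analyst",
    goal="Answer questions from the attached knowledge base",
    backstory="Detail-oriented and source-driven.",
    knowledge_config=KnowledgeConfig(results_limit=5, score_threshold=0.5),
)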

View File

@@ -4,7 +4,7 @@ import io
import logging
import os
import shutil
from typing import Any, Dict, List, Optional, Union, cast
from typing import Any, Dict, List, Optional, Union
import chromadb
import chromadb.errors

View File

@@ -1,6 +1,4 @@
import asyncio
import json
import re
import uuid
from datetime import datetime
from typing import Any, Callable, Dict, List, Optional, Type, Union, cast
@@ -49,11 +47,6 @@ from crewai.utilities.events.llm_events import (
LLMCallStartedEvent,
LLMCallType,
)
from crewai.utilities.events.tool_usage_events import (
ToolUsageErrorEvent,
ToolUsageFinishedEvent,
ToolUsageStartedEvent,
)
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.printer import Printer
from crewai.utilities.token_counter_callback import TokenCalcHandler
@@ -157,6 +150,10 @@ class LiteAgent(BaseModel):
default=[], description="Results of the tools used by the agent."
)
# Reference of Agent
original_agent: Optional[BaseAgent] = Field(
default=None, description="Reference to the agent that created this LiteAgent"
)
# Private Attributes
_parsed_tools: List[CrewStructuredTool] = PrivateAttr(default_factory=list)
_token_process: TokenProcess = PrivateAttr(default_factory=TokenProcess)
@@ -165,7 +162,7 @@ class LiteAgent(BaseModel):
_messages: List[Dict[str, str]] = PrivateAttr(default_factory=list)
_iterations: int = PrivateAttr(default=0)
_printer: Printer = PrivateAttr(default_factory=Printer)
@model_validator(mode="after")
def setup_llm(self):
"""Set up the LLM and other components after initialization."""
@@ -414,18 +411,6 @@ class LiteAgent(BaseModel):
formatted_answer = process_llm_response(answer, self.use_stop_words)
if isinstance(formatted_answer, AgentAction):
# Emit tool usage started event
crewai_event_bus.emit(
self,
event=ToolUsageStartedEvent(
agent_key=self.key,
agent_role=self.role,
tool_name=formatted_answer.tool,
tool_args=formatted_answer.tool_input,
tool_class=formatted_answer.tool,
),
)
try:
tool_result = execute_tool_and_check_finality(
agent_action=formatted_answer,
@@ -433,34 +418,9 @@ class LiteAgent(BaseModel):
i18n=self.i18n,
agent_key=self.key,
agent_role=self.role,
)
# Emit tool usage finished event
crewai_event_bus.emit(
self,
event=ToolUsageFinishedEvent(
agent_key=self.key,
agent_role=self.role,
tool_name=formatted_answer.tool,
tool_args=formatted_answer.tool_input,
tool_class=formatted_answer.tool,
started_at=datetime.now(),
finished_at=datetime.now(),
output=tool_result.result,
),
agent=self.original_agent,
)
except Exception as e:
# Emit tool usage error event
crewai_event_bus.emit(
self,
event=ToolUsageErrorEvent(
agent_key=self.key,
agent_role=self.role,
tool_name=formatted_answer.tool,
tool_args=formatted_answer.tool_input,
tool_class=formatted_answer.tool,
error=str(e),
),
)
raise e
formatted_answer = handle_agent_action_core(

View File

@@ -4,9 +4,12 @@ import os
import sys
import threading
import warnings
from collections import defaultdict
from contextlib import contextmanager
from types import SimpleNamespace
from typing import (
Any,
DefaultDict,
Dict,
List,
Literal,
@@ -18,7 +21,8 @@ from typing import (
)
from dotenv import load_dotenv
from pydantic import BaseModel
from litellm.types.utils import ChatCompletionDeltaToolCall
from pydantic import BaseModel, Field
from crewai.utilities.events.llm_events import (
LLMCallCompletedEvent,
@@ -33,6 +37,7 @@ with warnings.catch_warnings():
warnings.simplefilter("ignore", UserWarning)
import litellm
from litellm import Choices
from litellm.exceptions import ContextWindowExceededError
from litellm.litellm_core_utils.get_supported_openai_params import (
get_supported_openai_params,
)
@@ -60,7 +65,7 @@ class FilteredStream:
if (
"Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new"
in s
or "LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`"
or "LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()`"
in s
):
return 0
@@ -77,14 +82,26 @@ LLM_CONTEXT_WINDOW_SIZES = {
"gpt-4o": 128000,
"gpt-4o-mini": 128000,
"gpt-4-turbo": 128000,
"gpt-4.1": 1047576, # Based on official docs
"gpt-4.1-mini-2025-04-14": 1047576,
"gpt-4.1-nano-2025-04-14": 1047576,
"o1-preview": 128000,
"o1-mini": 128000,
"o3-mini": 200000, # Based on official o3-mini specifications
# gemini
"gemini-2.0-flash": 1048576,
"gemini-2.0-flash-thinking-exp-01-21": 32768,
"gemini-2.0-flash-lite-001": 1048576,
"gemini-2.0-flash-001": 1048576,
"gemini-2.5-flash-preview-04-17": 1048576,
"gemini-2.5-pro-exp-03-25": 1048576,
"gemini-1.5-pro": 2097152,
"gemini-1.5-flash": 1048576,
"gemini-1.5-flash-8b": 1048576,
"gemini/gemma-3-1b-it": 32000,
"gemini/gemma-3-4b-it": 128000,
"gemini/gemma-3-12b-it": 128000,
"gemini/gemma-3-27b-it": 128000,
# deepseek
"deepseek-chat": 128000,
# groq
@@ -219,6 +236,15 @@ class StreamingChoices(TypedDict):
finish_reason: Optional[str]
class FunctionArgs(BaseModel):
name: str = ""
arguments: str = ""
class AccumulatedToolArgs(BaseModel):
function: FunctionArgs = Field(default_factory=FunctionArgs)
class LLM(BaseLLM):
def __init__(
self,
@@ -371,6 +397,11 @@ class LLM(BaseLLM):
last_chunk = None
chunk_count = 0
usage_info = None
tool_calls = None
accumulated_tool_args: DefaultDict[int, AccumulatedToolArgs] = defaultdict(
AccumulatedToolArgs
)
# --- 2) Make sure stream is set to True and include usage metrics
params["stream"] = True
@@ -428,6 +459,20 @@ class LLM(BaseLLM):
if chunk_content is None and isinstance(delta, dict):
# Some models might send empty content chunks
chunk_content = ""
# Enable tool calls using streaming
if "tool_calls" in delta:
tool_calls = delta["tool_calls"]
if tool_calls:
result = self._handle_streaming_tool_calls(
tool_calls=tool_calls,
accumulated_tool_args=accumulated_tool_args,
available_functions=available_functions,
)
if result is not None:
chunk_content = result
except Exception as e:
logging.debug(f"Error extracting content from chunk: {e}")
logging.debug(f"Chunk format: {type(chunk)}, content: {chunk}")
@@ -442,7 +487,6 @@ class LLM(BaseLLM):
self,
event=LLMStreamChunkEvent(chunk=chunk_content),
)
# --- 4) Fallback to non-streaming if no content received
if not full_response.strip() and chunk_count == 0:
logging.warning(
@@ -501,7 +545,7 @@ class LLM(BaseLLM):
)
# --- 6) If still empty, raise an error instead of using a default response
if not full_response.strip():
if not full_response.strip() and len(accumulated_tool_args) == 0:
raise Exception(
"No content received from streaming response. Received empty chunks or failed to extract content."
)
@@ -533,8 +577,8 @@ class LLM(BaseLLM):
tool_calls = getattr(message, "tool_calls")
except Exception as e:
logging.debug(f"Error checking for tool calls: {e}")
# --- 8) If no tool calls or no available functions, return the text response directly
if not tool_calls or not available_functions:
# Log token usage if available in streaming mode
self._handle_streaming_callbacks(callbacks, usage_info, last_chunk)
@@ -554,6 +598,11 @@ class LLM(BaseLLM):
self._handle_emit_call_events(full_response, LLMCallType.LLM_CALL)
return full_response
except ContextWindowExceededError as e:
# Catch context window errors from litellm and convert them to our own exception type.
# This exception is handled by CrewAgentExecutor._invoke_loop() which can then
# decide whether to summarize the content or abort based on the respect_context_window flag.
raise LLMContextLengthExceededException(str(e))
except Exception as e:
logging.error(f"Error in streaming response: {str(e)}")
if full_response.strip():
@@ -568,6 +617,47 @@ class LLM(BaseLLM):
)
raise Exception(f"Failed to get streaming response: {str(e)}")
def _handle_streaming_tool_calls(
self,
tool_calls: List[ChatCompletionDeltaToolCall],
accumulated_tool_args: DefaultDict[int, AccumulatedToolArgs],
available_functions: Optional[Dict[str, Any]] = None,
) -> None | str:
for tool_call in tool_calls:
current_tool_accumulator = accumulated_tool_args[tool_call.index]
if tool_call.function.name:
current_tool_accumulator.function.name = tool_call.function.name
if tool_call.function.arguments:
current_tool_accumulator.function.arguments += (
tool_call.function.arguments
)
crewai_event_bus.emit(
self,
event=LLMStreamChunkEvent(
tool_call=tool_call.to_dict(),
chunk=tool_call.function.arguments,
),
)
if (
current_tool_accumulator.function.name
and current_tool_accumulator.function.arguments
and available_functions
):
try:
json.loads(current_tool_accumulator.function.arguments)
return self._handle_tool_call(
[current_tool_accumulator],
available_functions,
)
except json.JSONDecodeError:
continue
return None
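    # Illustration of the accumulation above (values illustrative): for a tool
    # call at index 0, streamed fragments '{"city": ' and '"Berlin"}' are
    # concatenated chunk by chunk; json.loads fails on the first fragment (so
    # accumulation continues) and succeeds on '{"city": "Berlin"}', at which
    # point _handle_tool_call runs exactly once with the completed arguments.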
def _handle_streaming_callbacks(
self,
callbacks: Optional[List[Any]],
@@ -627,7 +717,16 @@ class LLM(BaseLLM):
str: The response text
"""
# --- 1) Make the completion call
response = litellm.completion(**params)
try:
# Attempt to make the completion call, but catch context window errors
# and convert them to our own exception type for consistent handling
# across the codebase. This allows CrewAgentExecutor to handle context
# length issues appropriately.
response = litellm.completion(**params)
except ContextWindowExceededError as e:
# Convert litellm's context window error to our own exception type
# for consistent handling in the rest of the codebase
raise LLMContextLengthExceededException(str(e))
# --- 2) Extract response message and content
response_message = cast(Choices, cast(ModelResponse, response).choices)[
@@ -707,15 +806,6 @@ class LLM(BaseLLM):
function_name, lambda: None
) # Ensure fn is always a callable
logging.error(f"Error executing function '{function_name}': {e}")
crewai_event_bus.emit(
self,
event=ToolExecutionErrorEvent(
tool_name=function_name,
tool_args=function_args,
tool_class=fn,
error=str(e),
),
)
crewai_event_bus.emit(
self,
event=LLMCallFailedEvent(error=f"Tool execution error: {str(e)}"),
@@ -795,15 +885,17 @@ class LLM(BaseLLM):
params, callbacks, available_functions
)
except LLMContextLengthExceededException:
# Re-raise LLMContextLengthExceededException as it should be handled
# by the CrewAgentExecutor._invoke_loop method, which can then decide
# whether to summarize the content or abort based on the respect_context_window flag
raise
except Exception as e:
crewai_event_bus.emit(
self,
event=LLMCallFailedEvent(error=str(e)),
)
if not LLMContextLengthExceededException(
str(e)
)._is_context_limit_error(str(e)):
logging.error(f"LiteLLM call failed: {str(e)}")
logging.error(f"LiteLLM call failed: {str(e)}")
raise
def _handle_emit_call_events(self, response: Any, call_type: LLMCallType):

View File

@@ -53,6 +53,10 @@ class ContextualMemory:
Fetches recent relevant insights from STM related to the task's description and expected_output,
formatted as bullet points.
"""
if self.stm is None:
return ""
stm_results = self.stm.search(query)
formatted_results = "\n".join(
[
@@ -67,6 +71,10 @@ class ContextualMemory:
Fetches historical data or insights from LTM that are relevant to the task's description and expected_output,
formatted as bullet points.
"""
if self.ltm is None:
return ""
ltm_results = self.ltm.search(task, latest_n=2)
if not ltm_results:
return None
@@ -86,6 +94,9 @@ class ContextualMemory:
Fetches relevant entity information from Entity Memory related to the task's description and expected_output,
formatted as bullet points.
"""
if self.em is None:
return ""
em_results = self.em.search(query)
formatted_results = "\n".join(
[

View File

@@ -1,4 +1,4 @@
from typing import TYPE_CHECKING, Any, Dict, Optional, Self
from typing import TYPE_CHECKING, Any, Dict, Optional
from crewai.memory.external.external_memory_item import ExternalMemoryItem
from crewai.memory.memory import Memory
@@ -52,7 +52,7 @@ class ExternalMemory(Memory):
def reset(self) -> None:
self.storage.reset()
def set_crew(self, crew: Any) -> Self:
def set_crew(self, crew: Any) -> "ExternalMemory":
super().set_crew(crew)
if not self.storage:

View File

@@ -1,4 +1,4 @@
from typing import Any, Dict, List, Optional, Self
from typing import Any, Dict, List, Optional
from pydantic import BaseModel
@@ -38,6 +38,6 @@ class Memory(BaseModel):
query=query, limit=limit, score_threshold=score_threshold
)
def set_crew(self, crew: Any) -> Self:
def set_crew(self, crew: Any) -> "Memory":
self.crew = crew
return self

Some files were not shown because too many files have changed in this diff Show More