Compare commits

...

242 Commits

Author SHA1 Message Date
Lucas Gomide
ebcda8b2ec Merge branch 'main' into lg-support-set-task-context 2025-05-13 18:13:35 -03:00
Lucas Gomide
fed397f745 refactor: move logic to fetch agent to utilities file (#2822)
2025-05-13 09:51:21 -04:00
Lucas Gomide
d55e596800 feat: support to load an Agent from a repository (#2816)
* feat: support to load an Agent from a repository

* test: fix get_auth_token test
2025-05-12 16:08:57 -04:00
Lucas Gomide
f700e014c9 fix: address race condition in FilteredStream by using context managers (#2818)
During the `sys.stdout = FilteredStream(old_stdout)` assignment, if any code (including logging, print, or internal library output) writes to `sys.stdout` before `__init__` completes, the `write()` method is called on a not-fully-initialized object, so `_lock` does not exist yet.
2025-05-12 15:05:14 -04:00
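A minimal sketch of the pattern this fix describes, using illustrative names rather than the actual crewAI code: finish constructing the stream before installing it, and scope the swap with a context manager so `sys.stdout` is always restored.

```python
import sys
import threading
from contextlib import contextmanager

class FilteredStream:
    def __init__(self, original):
        # The lock is created before the stream is ever installed, so
        # write() can never run against a half-constructed object.
        self._lock = threading.Lock()
        self._original = original

    def write(self, text: str) -> int:
        with self._lock:
            return self._original.write(text)

    def flush(self) -> None:
        self._original.flush()

@contextmanager
def filtered_stdout():
    # Swap stdout only after FilteredStream is fully built, and always
    # restore it, even if the body raises.
    old_stdout = sys.stdout
    sys.stdout = FilteredStream(old_stdout)
    try:
        yield
    finally:
        sys.stdout = old_stdout
```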
Lucas Gomide
fced8ba47f style: fix linter issues 2025-05-12 11:53:57 -03:00
Lucas Gomide
7204910da4 Merge branch 'main' into lg-support-set-task-context 2025-05-12 09:47:52 -03:00
Vidit Ostwal
4e496d7a20 Added link to github issue (#2810)
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-05-12 08:27:18 -04:00
Lucas Gomide
8663c7e1c2 Enable ALL Ruff rules set by default (#2775)
* style: use Ruff default linter rules

* ci: check linter files over changed ones
2025-05-12 08:10:31 -04:00
Lucas Gomide
971a90f534 feat: support to set an empty context to the Task 2025-05-10 09:46:21 -03:00
Orce MARINKOVSKI
cb1a98cabf Update arize-phoenix-observability.mdx (#2595)
Adds the missing code to kick off monitoring for the crew.

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-05-08 13:25:10 -04:00
Mark McDonald
369e6d109c Adds link to AI Studio when entering Gemini key (#2780)
I used ai.dev as the alternate URL since it takes up less space, but if this
is likely to confuse users we can use the long form.

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-05-08 13:00:03 -04:00
Mark McDonald
2c011631f9 Clean up the Google setup section (#2785)
The Gemini & Vertex sections were conflated and a little hard to
distinguish, so I have put them in separate sections.

Also added the latest 2.5 and 2.0 flash-lite models, and added a note
that Gemma models work too.

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-05-08 12:24:38 -04:00
Rip&Tear
d3fc2b4477 Update security.md (#2779)
update policy for better readability
2025-05-08 09:00:41 -04:00
Lorenze Jay
516d45deaa chore: bump version to 0.119.0 and update dependencies (#2778)
This commit updates the project version to 0.119.0 and modifies the required version of the `crewai-tools` dependency to 0.44.0 across various configuration files. Additionally, the version number is reflected in the `__init__.py` file and the CLI templates for crew, flow, and tool projects.
2025-05-07 17:29:41 -07:00
Lorenze Jay
7ad51d9d05 feat: implement knowledge retrieval events in Agent (#2727)
* feat: implement knowledge retrieval events in Agent

This commit introduces a series of knowledge retrieval events in the Agent class, enhancing its ability to handle knowledge queries. New events include KnowledgeRetrievalStartedEvent, KnowledgeRetrievalCompletedEvent, KnowledgeQueryGeneratedEvent, KnowledgeQueryFailedEvent, and KnowledgeSearchQueryCompletedEvent. The Agent now emits these events during knowledge retrieval processes, allowing for better tracking and handling of knowledge queries. Additionally, the console formatter has been updated to handle these new events, providing visual feedback during knowledge retrieval operations.

* refactor: update knowledge query handling in Agent

This commit refines the knowledge query processing in the Agent class by renaming variables for clarity and optimizing the query rewriting logic. The system prompt has been updated in the translation file to enhance clarity and context for the query rewriting process. These changes aim to improve the overall readability and maintainability of the code.

* fix: add missing newline at end of en.json file

* fix broken tests

* refactor: rename knowledge query events and enhance retrieval handling

This commit renames the KnowledgeQueryGeneratedEvent to KnowledgeQueryStartedEvent to better reflect its purpose. It also updates the event handling in the EventListener and ConsoleFormatter classes to accommodate the new event structure. Additionally, the retrieval knowledge is now included in the KnowledgeRetrievalCompletedEvent, improving the overall knowledge retrieval process.

* docs for transparency

* refactor: improve error handling in knowledge query processing

This commit refactors the knowledge query handling in the Agent class by changing the order of checks for LLM compatibility. It now logs a warning and emits a failure event if the LLM is not an instance of BaseLLM before attempting to call the LLM. Additionally, the task_prompt attribute has been removed from the KnowledgeQueryFailedEvent, simplifying the event structure.

* test: add unit test for knowledge search query and VCR cassette

This commit introduces a new test, `test_get_knowledge_search_query`, to verify that the `_get_knowledge_search_query` method in the Agent class correctly interacts with the LLM using the appropriate prompts. Additionally, a VCR cassette is added to record the interactions with the OpenAI API for this test, ensuring consistent and reliable test results.
2025-05-07 11:55:42 -07:00
Mark McDonald
e3887ae36e Used model-agnostic examples in quickstart/firsts. (#2773)
Updated prereqs and setup steps to point to the now-more-model-agnostic
LLM setup guide, and updated the relevant text to go with it.

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-05-07 11:30:27 -04:00
omahs
e23bc2aaa7 Fix typos (#2774)
* fix typos

* fix typo

* fix typos

---------

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-05-07 11:11:57 -04:00
Lucas Gomide
7fc405408e test: fix llama converter tests to remove skip_external_api (#2770) 2025-05-07 08:33:41 -04:00
Tony Kipkemboi
cac06adc6c docs: update docxsearchtool.mdx (#2767)
- add `docx2txt` as a dependency requirement for the tool
2025-05-06 17:14:05 -04:00
leopardracer
c8ec03424a Fix typos in documentation and configuration files (#2712)
* Update test_lite_agent_structured_output.yaml

* Update install_crew.py

* Update llms.mdx

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-05-06 15:07:57 -04:00
Henrique Branco
bfea85d22c docs: added Windows bug solving to docs (#2764)
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-05-06 09:55:05 -04:00
Mark McDonald
836e9fc545 Removes model provider defaults from LLM Setup (#2766)
This removes any specific model from the "Setting up your LLM" guide,
but provides examples for the top-3 providers.

This section also conflated "model selection" with "model
configuration", where configuration is provider-specific, so I've
focused this first section on just model selection, deferring the config
to the "provider" section that follows.

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-05-06 09:27:14 -04:00
Vidit Ostwal
c3726092fd Added Advanced Configuration Docs for Rag Tool (#2713)
* Added Advanced Configuration Docs for Rag Tool

* Re-run test cases

* Change doc

* prepping new version (#2733)

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-05-06 09:07:52 -04:00
Lucas Gomide
dabf02a90d build(LiteLLM): upgrade LiteLLM version (#2757)
2025-05-05 17:07:29 -04:00
Lucas Gomide
2912c93d77 feat: prevent crash when Telemetry is not available (#2758)
* feat: prevent crash when Telemetry is not available

* tests: adding missing cassettes
2025-05-05 15:22:32 -04:00
Vini Brasil
17474a3a0c Identify parent_flow of Crew and LiteAgent (#2723)
This commit adds a new crew field called parent_flow, evaluated when the Crew
instance is instantiated. The stack is traversed to check whether the caller
is an instance of Flow, and if so, the field is filled in.

Other alternatives were considered, such as a global context or even a new
field to be filled manually; however, this was the most magical solution that
was thread-safe and did not require public API changes.
2025-05-02 14:40:39 -03:00
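A rough sketch of the stack-traversal idea described above; the `Flow` stand-in and the helper name are illustrative, not the actual crewAI code.

```python
import inspect

class Flow:  # stand-in for crewai.flow.Flow
    pass

def find_parent_flow():
    # Walk up the call stack; the first frame whose `self` is a Flow
    # instance is treated as the parent flow. Returns None otherwise.
    for frame_info in inspect.stack():
        candidate = frame_info.frame.f_locals.get("self")
        if isinstance(candidate, Flow):
            return candidate
    return None
```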
Lucas Gomide
f89c2bfb7e Fix crewai reset-memories when Embedding dimension mismatch (#2737)
* fix: support to reset memories after changing Crew's embedder

The sources must not be added while initializing the Knowledge; otherwise we cannot reset it

* chore: improve reset memory feedback

Previously, even when no memories were actually erased, we logged that they had been. From now on, the log will specify which memory has been reset.

* feat: improve get_crew discovery from a single file

Crew instances can now be discovered from any function or method with a return type annotation of `-> Crew`, as well as from module-level attributes assigned to a Crew instance (see the sketch after this entry). Additionally, crews can be retrieved from within a Flow

* refactor: make add_sources a public method from Knowledge
2025-05-02 12:40:42 -04:00
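A sketch of the discovery rule from the third bullet, assuming resolved (non-string) return annotations; this is illustrative, not the actual CLI code.

```python
import inspect
from types import ModuleType

from crewai import Crew

def discover_crews(module: ModuleType) -> list[Crew]:
    crews = []
    for obj in vars(module).values():
        if inspect.isfunction(obj):
            # Functions annotated `-> Crew` are invoked to build a crew.
            if inspect.signature(obj).return_annotation is Crew:
                crews.append(obj())
        elif isinstance(obj, Crew):
            # Module-level attributes already holding a Crew instance.
            crews.append(obj)
    return crews
```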
Lucas Gomide
2902201bfa pytest improvements to handle flaky test (#2726)
* build(dev): add pytest-randomly dependency

By randomizing the test execution order, this helps identify tests
that unintentionally depend on shared state or specific execution
order, which can lead to flaky or unreliable test behavior.

* build(dev): add pytest-timeout

This will prevent a test from running indefinitely

* test: block external requests in CI and set default 10s timeout per test

* test: adding missing cassettes

We noticed that these cassettes were missing after enabling block-network on CI

* test: increase tests timeout on CI

* test: fix flaky test ValueError: Circular reference detected (id repeated)

* fix: prevent crash when event handler raises exception

Previously, if a registered event handler raised an exception during execution,
it could crash the entire application or interrupt the event dispatch process.
This change wraps handler execution in a try/except block within the `emit` method,
ensuring that exceptions are caught and logged without affecting other handlers or flow.

This improves the resilience of the event bus, especially when handling third-party
or temporary listeners.
2025-05-01 15:48:29 -04:00
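A minimal sketch of the guarded dispatch described in the final bullet; the names are illustrative, not the actual crewAI event bus.

```python
import logging

logger = logging.getLogger(__name__)

class EventBus:
    def __init__(self):
        self._handlers = []

    def register(self, handler):
        self._handlers.append(handler)

    def emit(self, event):
        # A failing handler is logged and skipped, so it cannot
        # interrupt dispatch to the remaining handlers.
        for handler in self._handlers:
            try:
                handler(event)
            except Exception:
                logger.exception("Event handler %r failed", handler)
```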
Lorenze Jay
378dcc79bb prepping new version (#2733)
2025-04-30 14:57:54 -04:00
Lucas Gomide
d348d5f20e fix: renaming TaskGuardrail to LLMGuardrail (#2731) 2025-04-30 13:11:35 -04:00
Tony Kipkemboi
bc24bc64cd Update enterprise docs and change YouTube video embed (#2728)
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-04-30 08:46:37 -07:00
Lucas Gomide
015e1a41b2 Supporting no-code Guardrail creation (#2636)
* feat: support to define a guardrail task no-code

* feat: add auto-discovery for Guardrail code execution mode

* feat: handle malformed or invalid response from CodeInterpreterTool

* feat: allow to set unsafe_mode from Guardrail task

* feat: renaming GuardrailTask to TaskGuardrail

* feat: ensure guardrail is callable while initializing Task

* feat: remove Docker availability check from TaskGuardrail

The CodeInterpreterTool already ensures compliance with this requirement.

* refactor: replace if/raise with assert

For this use case, `assert` is the more appropriate choice

* test: remove useless or duplicated test

* fix: attempt to fix type-checker

* feat: support to define a task guardrail using YAML config

* refactor: simplify TaskGuardrail to use LLM for validation, no code generation

* docs: update TaskGuardrail doc strings

* refactor: drop task parameter from TaskGuardrail

This parameter was used to get the model from `task.agent`, which is redundant since we can propagate the llm directly
2025-04-30 10:47:58 -04:00
Lucas Gomide
94b1a6cfb8 docs: remove CrewStructuredTool from public documentation (#2707)
It is used internally and should not be recommended for building tools intended for Agent consumption
2025-04-30 09:37:05 -04:00
Lucas Gomide
1c2976c4d1 build: downgrade litellm to 1.167.1 (#2711)
The version 1.167.2 is not compatible with Windows
2025-04-30 09:23:14 -04:00
Greyson LaLonde
25c8155609 chore: add missing __init__.py files (#2719)
Add `__init__.py` files to 20 directories to conform with Python package standards. This ensures directories are properly recognized as packages, enabling cleaner imports.
2025-04-29 07:35:26 -07:00
Vini Brasil
55b07506c2 Remove logging setting from global context (#2720)
This commit fixes a bug where changing the logging level would be overridden
by `src/crewai/project/crew_base.py`. For example, the following snippet
at the top of a crew or flow would not work:

```python
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
```

Crews and flows should be able to set their own log level, without being
overridden by CrewAI library code.
2025-04-29 11:21:41 -03:00
Vidit Ostwal
59f34d900a Fixes missing prompt template or system template (#2408)
* Fix issue #2402: Handle missing templates gracefully

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix import sorting in test files

Co-Authored-By: Joe Moura <joao@crewai.com>

* Built on top of devin-ai integration

* Fixed test cases

* Fixed test cases

* fixed linting issue

* Added docs

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
2025-04-28 14:04:32 -04:00
João Moura
4f6054d439 new version
2025-04-28 07:39:38 -07:00
Dev Khant
a86a1213c7 Fix Mem0 OSS (#2604)
* Fix Mem0 OSS

* add test

* fix lint and tests

* fix

* add tests

* drop test

* changed to class comparison

* fixed test cases

* Update src/crewai/memory/storage/mem0_storage.py

* Update src/crewai/memory/storage/mem0_storage.py

* fix

* fix lock file

---------

Co-authored-by: Vidit-Ostwal <viditostwal@gmail.com>
2025-04-28 10:37:31 -04:00
Lucas Gomide
566935fb94 upgrade liteLLM to latest version (#2684)
* build(litellm): upgrade LiteLLM to latest version

* fix: update filtered logs from LiteLLM

* Fix for a missing backtick

---------

Co-authored-by: Mike Plachta <mike@crewai.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-04-28 09:46:40 -04:00
Lucas Gomide
3a66746a99 build: upgrade crewai-tools (#2705)
* build: upgrade crewai-tools

* build: prepare new version
2025-04-28 06:38:56 -07:00
João Moura
337a6d5719 preparing new version
2025-04-27 23:56:22 -07:00
Tony Kipkemboi
51eb5e9998 docs: add CrewAI Enterprise docs (#2691)
* Add enterprise deployment documentation to CLI docs

* Update CrewAI Enterprise documentation with comprehensive guides for Traces, Tool Repository, Webhook Streaming, and FAQ structure

* Add Enterprise documentation images

* Update Enterprise introduction with visual CardGroups and Steps components
2025-04-25 13:59:44 -07:00
Lucas Gomide
b2969e9441 style: fix linter issue (#2686)
2025-04-25 09:34:00 -04:00
João Moura
5b9606e8b6 fix context window 2025-04-24 23:09:23 -07:00
Kunal Lunia
685d20f46c added gpt-4.1 models and gemini-2.0 and 2.5 pro models (#2609)
* added gpt4.1 models and gemini 2.0 and 2.5 models

* added flash model

* Updated test function to cover all models

* Added Gemma3 test cases and passed all Google test cases

* added gemini 2.5 flash

* test: add missing cassettes

* test: ignore authorization key from gemini/gemma3 request

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-04-23 11:20:32 -07:00
Lucas Gomide
9ebf3aa043 docs(CodeInterpreterTool): update docs (#2675) 2025-04-23 10:27:25 -07:00
Tony Kipkemboi
2e4c97661a Add enterprise deployment documentation to CLI docs (#2670)
2025-04-22 13:27:58 -07:00
Tony Kipkemboi
16eb4df556 docs: update docs.json with contextual options, SEO, and 404 redirect (#2654)
* docs: 0.114.0 release notes, navigation restructure, new guides, deploy video, and cleanup

- Add v0.114.0 release notes with highlights image and doc links
- Restructure docs navigation (Strategy group, Releases tab, navbar links)
- Update quickstart with deployment video and clearer instructions
- Add/rename guides (Custom Manager Agent, Custom LLM)
- Remove legacy concept/tool docs
- Add new images and tool docs
- Minor formatting and content improvements throughout

* docs: update docs.json with contextual options, SEO indexing, and 404 redirect settings
2025-04-22 09:52:27 -07:00
Vini Brasil
3d9000495c Change CLI tool publish message (#2662) 2025-04-22 13:09:30 -03:00
Tony Kipkemboi
6d0039b117 docs: 0.114.0 release notes, navigation restructure, new guides, deploy video, and cleanup (#2653)
- Add v0.114.0 release notes with highlights image and doc links
- Restructure docs navigation (Strategy group, Releases tab, navbar links)
- Update quickstart with deployment video and clearer instructions
- Add/rename guides (Custom Manager Agent, Custom LLM)
- Remove legacy concept/tool docs
- Add new images and tool docs
- Minor formatting and content improvements throughout
2025-04-21 19:18:21 -04:00
Lorenze Jay
311a078ca6 Enhance knowledge management in CrewAI (#2637)
* Enhance knowledge management in CrewAI

- Added `KnowledgeConfig` class to configure knowledge retrieval parameters such as `limit` and `score_threshold`.
- Updated `Agent` and `Crew` classes to utilize the new knowledge configuration for querying knowledge sources.
- Enhanced documentation to clarify the addition of knowledge sources at both agent and crew levels.
- Introduced new tips in documentation to guide users on knowledge source management and configuration.

* Refactor knowledge configuration parameters in CrewAI

- Renamed `limit` to `results_limit` in `KnowledgeConfig`, `query_knowledge`, and `query` methods for consistency and clarity.
- Updated related documentation to reflect the new parameter name, ensuring users understand the configuration options for knowledge retrieval.

* Refactor agent tests to utilize mock knowledge storage

- Updated test cases in `agent_test.py` to use `KnowledgeStorage` for mocking knowledge sources, enhancing test reliability and clarity.
- Renamed `limit` to `results_limit` in `KnowledgeConfig` for consistency with recent changes.
- Ensured that knowledge queries are properly mocked to return expected results during tests.

* Add VCR support for agent tests with query limits and score thresholds

- Introduced `@pytest.mark.vcr` decorator in `agent_test.py` for tests involving knowledge sources, ensuring consistent recording of HTTP interactions.
- Added new YAML cassette files for `test_agent_with_knowledge_sources_with_query_limit_and_score_threshold` and `test_agent_with_knowledge_sources_with_query_limit_and_score_threshold_default`, capturing the expected API responses for these tests.
- Enhanced test reliability by utilizing VCR to manage external API calls during testing.

* Update documentation to format parameter names in code style

- Changed the formatting of `results_limit` and `score_threshold` in the documentation to use code style for better clarity and emphasis.
- Ensured consistency in documentation presentation to enhance user understanding of configuration options.

* Enhance KnowledgeConfig with field descriptions

- Updated `results_limit` and `score_threshold` in `KnowledgeConfig` to use Pydantic's `Field` for improved documentation and clarity.
- Added descriptions to both parameters to provide better context for their usage in knowledge retrieval configuration.

* docstrings added
2025-04-18 18:33:04 -07:00
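A sketch consistent with the bullets above, assuming Pydantic v2; the defaults and descriptions shown are assumptions, not the exact shipped values.

```python
from pydantic import BaseModel, Field

class KnowledgeConfig(BaseModel):
    results_limit: int = Field(
        default=3, description="How many knowledge documents to retrieve"
    )
    score_threshold: float = Field(
        default=0.35, description="Minimum relevance score for a match"
    )
```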
Vidit Ostwal
371f19f3cd Support set max_execution_time to Agent (#2610)
* Fixed fake max_execution_time parameter
---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-04-17 16:03:00 -04:00
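A minimal usage sketch, assuming `max_execution_time` is expressed in seconds; the agent fields shown are illustrative.

```python
from crewai import Agent

researcher = Agent(
    role="Researcher",
    goal="Summarize the latest findings",
    backstory="An analyst working under tight deadlines",
    max_execution_time=60,  # cap the agent's run time at 60 seconds
)
```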
Lorenze Jay
870dffbb89 Feat/byoa (#2523)
* feat: add OpenAI agent adapter implementation

- Introduced OpenAIAgentAdapter class to facilitate interaction with OpenAI Assistants.
- Implemented methods for task execution, tool configuration, and response processing.
- Added support for converting CrewAI tools to OpenAI format and handling delegation tools.

* created an adapter for the delegate and ask_question tools

* delegate and ask_question tools work and delegate to crewai agents

* refactor: introduce OpenAIAgentToolAdapter for tool management

- Created OpenAIAgentToolAdapter class to encapsulate tool configuration and conversion for OpenAI Assistant.
- Removed tool configuration logic from OpenAIAgentAdapter and integrated it into the new adapter.
- Enhanced the tool conversion process to ensure compatibility with OpenAI's requirements.

* feat: implement BaseAgentAdapter for agent integration

- Introduced BaseAgentAdapter as an abstract base class for agent adapters in CrewAI.
- Defined common interface and methods for configuring tools and structured output.
- Updated OpenAIAgentAdapter to inherit from BaseAgentAdapter, enhancing its structure and functionality.

* feat: add LangGraph agent and tool adapter for CrewAI integration

- Introduced LangGraphAgentAdapter to facilitate interaction with LangGraph agents.
- Implemented methods for task execution, context handling, and tool configuration.
- Created LangGraphToolAdapter to convert CrewAI tools into LangGraph-compatible format.
- Enhanced error handling and logging for task execution and streaming processes.

* feat: enhance LangGraphToolAdapter and improve conversion instructions

- Added type hints for better clarity and type checking in LangGraphToolAdapter.
- Updated conversion instructions to ensure compatibility with optional LLM checks.

* feat: integrate structured output handling in LangGraph and OpenAI agents

- Added LangGraphConverterAdapter for managing structured output in LangGraph agents.
- Enhanced LangGraphAgentAdapter to utilize the new converter for system prompt and task execution.
- Updated LangGraphToolAdapter to use StructuredTool for better compatibility.
- Introduced OpenAIConverterAdapter for structured output management in OpenAI agents.
- Improved task execution flow in OpenAIAgentAdapter to incorporate structured output configuration and post-processing.

* feat: implement BaseToolAdapter for tool integration

- Introduced BaseToolAdapter as an abstract base class for tool adapters in CrewAI.
- Updated LangGraphToolAdapter and OpenAIAgentToolAdapter to inherit from BaseToolAdapter, enhancing their structure and functionality.
- Improved tool configuration methods to support better integration with various frameworks.
- Added type hints and documentation for clarity and maintainability.

* feat: enhance OpenAIAgentAdapter with configurable agent properties

- Refactored OpenAIAgentAdapter to accept agent configuration as an argument.
- Introduced a method to build a system prompt for the OpenAI agent, improving task execution context.
- Updated initialization to utilize role, goal, and backstory from kwargs, enhancing flexibility in agent setup.
- Improved tool handling and integration within the adapter.

* feat: enhance agent adapters with structured output support

- Introduced BaseConverterAdapter as an abstract class for structured output handling.
- Implemented LangGraphConverterAdapter and OpenAIConverterAdapter to manage structured output in their respective agents.
- Updated BaseAgentAdapter to accept an agent configuration dictionary during initialization.
- Enhanced LangGraphAgentAdapter to utilize the new converter and improved tool handling.
- Added methods for configuring structured output and enhancing system prompts in converter adapters.

* refactor: remove _parse_tools method from OpenAIAgentAdapter and BaseAgent

- Eliminated the _parse_tools method from OpenAIAgentAdapter and its abstract declaration in BaseAgent.
- Cleaned up related test code in MockAgent to reflect the removal of the method.

* also removed _parse_tools here as not used

* feat: add dynamic import handling for LangGraph dependencies

- Implemented conditional imports for LangGraph components to handle ImportError gracefully.
- Updated LangGraphAgentAdapter initialization to check for LangGraph availability and raise an informative error if dependencies are missing.
- Enhanced the agent adapter's robustness by ensuring it only initializes components when the required libraries are present.

* fix: improve error handling for agent adapters

- Updated LangGraphAgentAdapter to raise an ImportError with a clear message if LangGraph dependencies are not installed.
- Refactored OpenAIAgentAdapter to include a similar check for OpenAI dependencies, ensuring robust initialization and user guidance for missing libraries.
- Enhanced overall error handling in agent adapters to prevent runtime issues when dependencies are unavailable.

* refactor: enhance tool handling in agent adapters

- Updated BaseToolAdapter to initialize original and converted tools in the constructor.
- Renamed method `all_tools` to `tools` for clarity in BaseToolAdapter.
- Added `sanitize_tool_name` method to ensure tool names are API compatible.
- Modified LangGraphAgentAdapter to utilize the updated tool handling and ensure proper tool configuration.
- Refactored LangGraphToolAdapter to streamline tool conversion and ensure consistent naming conventions.

* feat: emit AgentExecutionCompletedEvent in agent adapters

- Added emission of AgentExecutionCompletedEvent in both LangGraphAgentAdapter and OpenAIAgentAdapter to signal task completion.
- Enhanced event handling to include agent, task, and output details for better tracking of execution results.

* docs: Enhance BaseConverterAdapter documentation

- Added a detailed docstring to the BaseConverterAdapter class, outlining its purpose and the expected functionality for all converter adapters.
- Updated the post_process_result method's docstring to specify the expected format of the result as a string.

* docs: Add comprehensive guide for bringing custom agents into CrewAI

- Introduced a new documentation file detailing the process of integrating custom agents using the BaseAgentAdapter, BaseToolAdapter, and BaseConverter.
- Included step-by-step instructions for creating custom adapters, configuring tools, and handling structured output.
- Provided examples for implementing adapters for various frameworks, enhancing the usability of CrewAI for developers.

* feat: Introduce adapted_agent flag in BaseAgent and update BaseAgentAdapter initialization

- Added an `adapted_agent` boolean field to the BaseAgent class to indicate if the agent is adapted.
- Updated the BaseAgentAdapter's constructor to pass `adapted_agent=True` to the superclass, ensuring proper initialization of the new field.

* feat: Enhance LangGraphAgentAdapter to support optional agent configuration

- Updated LangGraphAgentAdapter to conditionally apply agent configuration when creating the agent graph, allowing for more flexible initialization.
- Modified LangGraphToolAdapter to ensure only instances of BaseTool are converted, improving tool compatibility and handling.

* feat: Introduce OpenAIConverterAdapter for structured output handling

- Added OpenAIConverterAdapter to manage structured output conversion for OpenAI agents, enhancing their ability to process and format results.
- Updated OpenAIAgentAdapter to utilize the new converter for configuring structured output and post-processing results.
- Removed the deprecated get_output_converter method from OpenAIAgentAdapter.
- Added unit tests for BaseAgentAdapter and BaseToolAdapter to ensure proper functionality and integration of new features.

* feat: Enhance tool adapters to support asynchronous execution

- Updated LangGraphToolAdapter and OpenAIAgentToolAdapter to handle asynchronous tool execution by checking if the output is awaitable.
- Introduced `inspect` import to facilitate the awaitability check.
- Refactored tool wrapper functions to ensure proper handling of both synchronous and asynchronous tool results.

* fix: Correct method definition syntax and enhance tool adapter implementation

- Updated the method definition for `configure_structured_output` to include the `def` keyword for clarity.
- Added an asynchronous tool wrapper to ensure tools can operate in both synchronous and asynchronous contexts.
- Modified the constructor of the custom converter adapter to directly assign the agent adapter, improving clarity and functionality.

* linted

* refactor: Improve tool processing logic in BaseAgent

- Added a check to return an empty list if no tools are provided.
- Simplified the tool attribute validation by using a list of required attributes.
- Removed commented-out abstract method definition for clarity.

* refactor: Simplify tool handling in agent adapters

- Changed default value of `tools` parameter in LangGraphAgentAdapter to None for better handling of empty tool lists.
- Updated tool initialization in both LangGraphAgentAdapter and OpenAIAgentAdapter to directly pass the `tools` parameter, removing unnecessary list handling.
- Cleaned up commented-out code in OpenAIConverterAdapter to improve readability.

* refactor: Remove unused stream_task method from LangGraphAgentAdapter

- Deleted the `stream_task` method from LangGraphAgentAdapter to streamline the code and eliminate unnecessary complexity.
- This change enhances maintainability by focusing on essential functionalities within the agent adapter.
2025-04-17 09:22:48 -07:00
Lucas Gomide
ced3c8f0e0 Unblock LLM(stream=True) to work with tools (#2582)
* feat: unblock LLM(stream=True) to work with tools

* feat: replace pytest-vcr by pytest-recording

1. pytest-vcr does not support httpx - which LiteLLM uses for streaming responses.
2. pytest-vcr is no longer maintained; its last commit was 6 years ago :fist::skin-tone-4:
3. pytest-recording supports modern request libraries (including httpx) and is actively maintained (see the sketch after this entry)

* refactor: remove @skip_streaming_in_ci

Since we have fixed streaming response issue we can remove this @skip_streaming_in_ci

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-04-17 11:58:52 -04:00
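A small sketch of pytest-recording in use: it keeps the familiar `vcr` marker, and unlike pytest-vcr it can capture httpx traffic. The URL is illustrative.

```python
import httpx
import pytest

@pytest.mark.vcr  # records a cassette on first run, replays afterwards
def test_streaming_response_is_recorded():
    with httpx.Client() as client:
        response = client.get("https://example.com")
    assert response.status_code == 200
```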
Greyson LaLonde
8e555149f7 fix: docs import path for json search tool (#2631)
- updated import path to crewai-tools
- removed old comment
2025-04-17 07:51:20 -07:00
Lucas Gomide
a96a27f064 docs: fix guardrail documentation usage (#2630) 2025-04-17 10:34:50 -04:00
Vidit Ostwal
a2f3566cd9 Pr branch (#2312)
* Adjust checking for callable crew object.

Changes back to how it was being done before.
Fixes #2307

* Fix specific memory reset errors.

When not initialized, the function should raise
the "memory system is not initialized" RuntimeError.

* Remove print statement

* Fixes test case

---------

Co-authored-by: Carlos Souza <carloshrsouza@gmail.com>
2025-04-17 08:59:15 -04:00
Greyson LaLonde
e655412aca refactor: create constants.py & use in telemetry (#2627)
- created `constants.py` for telemetry base url and service name
- updated `telemetry.py` to reflect changes
- ran ruff --fix to apply lint fixes
2025-04-16 12:46:15 -07:00
Lorenze Jay
1d91ab5d1b fix: pass original agent reference to lite agent initialization (#2625)
2025-04-16 10:05:09 -07:00
Vini Brasil
37359a34f0 Remove redundant comment from sqlite.py (#2622) 2025-04-16 11:25:41 -03:00
Vini Brasil
6eb4045339 Update .github/workflows/notify-downstream.yml (#2621) 2025-04-16 10:39:51 -03:00
Vini Brasil
aebbc75dea Notify downstream repo of changes (#2615)
* Notify downstream repo of changes

* Add permissions block
2025-04-16 10:18:26 -03:00
Lucas Gomide
bc91e94f03 fix: add type hints and ignore type checks for config access (#2603) 2025-04-14 16:58:09 -04:00
devin-ai-integration[bot]
d659151dca Fix #2551: Add Huggingface to provider list in CLI (#2552)
* Fix #2551: Add Huggingface to provider list in CLI

Co-Authored-By: Joe Moura <joao@crewai.com>

* Update Huggingface API key name to HF_TOKEN and remove base URL prompt

Co-Authored-By: Joe Moura <joao@crewai.com>

* Update Huggingface API key name to HF_TOKEN in documentation

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix import sorting in test_constants.py

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix import order in test_constants.py

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix import formatting in test_constants.py

Co-Authored-By: Joe Moura <joao@crewai.com>

* Skip failing tests in Python 3.11 due to VCR cassette issues

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix import order in knowledge_test.py

Co-Authored-By: Joe Moura <joao@crewai.com>

* Revert skip decorators to check if tests are flaky

Co-Authored-By: Joe Moura <joao@crewai.com>

* Restore skip decorators for tests with VCR cassette issues in Python 3.11

Co-Authored-By: Joe Moura <joao@crewai.com>

* revert skip pytest decorators

* Remove import sys and skip decorators from test files

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-04-14 16:28:04 -04:00
Lucas Gomide
9dffd42e6d feat: Enhance memory system with isolated memory configuration (#2597)
* feat: support defining any memory in an isolated way

This change makes it easier to use a specific memory type without unintentionally enabling all others.

Previously, setting memory=True would implicitly configure all available memories (like LTM and STM), which might not be ideal in all cases. For example, when building a chatbot that only needs an external memory, users were forced to also configure LTM and STM (which rely on default OpenAI embeddings) even if they weren't needed.

With this update, users can now define a single memory in isolation, making the configuration process simpler and more flexible (see the sketch after this entry).

* feat: add tests to ensure we are able to use contextual memory by setting individual memories

* docs: enhance memory documentation

* feat: warn when long-term memory is defined but entity memory is not
2025-04-14 15:48:48 -04:00
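A sketch of the isolated-memory configuration described above, assuming the `external_memory` parameter and the `ExternalMemory` import path from the external-memory PR; the mem0 embedder_config shown is an assumption based on the docs.

```python
from crewai import Agent, Crew, Task
from crewai.memory.external.external_memory import ExternalMemory

agent = Agent(role="Assistant", goal="Help the user", backstory="A helpful bot")
task = Task(description="Answer the question", expected_output="An answer", agent=agent)

# Only external memory is configured; LTM/STM stay disabled,
# unlike the implicit behavior of memory=True.
crew = Crew(
    agents=[agent],
    tasks=[task],
    external_memory=ExternalMemory(
        embedder_config={"provider": "mem0", "config": {"user_id": "john"}}
    ),
)
```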
devin-ai-integration[bot]
88455cd52c fix: Correctly copy memory objects during crew training (fixes #2593) (#2594)
* fix: Correctly copy memory objects during crew training (#2593)

Co-Authored-By: Joe Moura <joao@crewai.com>

* style: Fix import order in tests/crew_test.py

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: Rely on validator for memory copy, update test assertions

Removes manual deep copy of memory objects in Crew.copy().
The Pydantic model_validator 'create_crew_memory' handles the
initialization of new memory instances for the copied crew.

Updates test_crew_copy_with_memory assertions to verify that
the private memory attributes (_short_term_memory, etc.) are
correctly initialized as new instances in the copied crew.

Co-Authored-By: Joe Moura <joao@crewai.com>

* Revert "fix: Rely on validator for memory copy, update test assertions"

This reverts commit 8702bf1e34.

* fix: Re-add manual deep copy for all memory types in Crew.copy

Addresses feedback on PR #2594 to ensure all memory objects
(short_term, long_term, entity, external, user) are correctly
deep copied using model_copy(deep=True).

Also simplifies the test case to directly verify the copy behavior
instead of relying on the train method.

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
2025-04-14 14:59:12 -04:00
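A tiny illustration of the Pydantic deep-copy behavior the final fix relies on; `ShortTermMemory` here is a stand-in model, not the real class.

```python
from pydantic import BaseModel

class ShortTermMemory(BaseModel):
    entries: list[str] = []

original = ShortTermMemory(entries=["fact"])
copied = original.model_copy(deep=True)

copied.entries.append("new fact")
assert original.entries == ["fact"]  # deep copy leaves the original untouched
```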
Alexandre Gindre
6a1eb10830 fix(crew template): fix wrong parameter name and missing input (#2387) 2025-04-14 11:09:59 -04:00
devin-ai-integration[bot]
10edde100e Fix: Use mem0_local_config instead of config in Memory.from_config (#2588)
* fix: use mem0_local_config instead of config in Memory.from_config (#2587)

Co-Authored-By: Joe Moura <joao@crewai.com>

* refactor: consolidate tests as per PR feedback

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
2025-04-14 08:55:23 -04:00
Eduardo Chiarotti
40a441f30e feat: remove unused code and change ToolUsageStarted event place (#2581)
* feat: remove unused code and change ToolUsageStarted event place

* feat: run lint

* feat: add agent reference inside liteagent

* feat: remove unused logic

* feat: Remove not needed event

* feat: remove test from tool execution error

* feat: remove cassette
2025-04-11 14:26:59 -04:00
Vidit Ostwal
ea5ae9086a added condition to check whether _run function returns a coroutine ob… (#2570)
* added condition to check whether _run function returns a coroutine object

* Cleaned the code

* Fixed the test modules, Class -> Functions
2025-04-11 12:56:37 -04:00
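A sketch of the check described above, as an illustrative helper rather than the actual crewAI code: if `_run` returns a coroutine, it is resolved before the result is used.

```python
import asyncio
import inspect

def run_tool(tool, *args, **kwargs):
    result = tool._run(*args, **kwargs)
    if inspect.iscoroutine(result):
        # _run was an async def; resolve the coroutine synchronously.
        result = asyncio.run(result)
    return result
```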
Cypher Pepe
0cd524af86 fixed broken link in docs/tools/weaviatevectorsearchtool.mdx (#2569) 2025-04-11 11:58:01 -04:00
Jesse R Weigel
4bff5408d8 Create output folder if it doesn't exist (#2573)
When running this project, I got an error because the output folder had not been created. 

I added a line to check if the output folder exists and create it if needed.
2025-04-11 09:14:05 -04:00
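The guard amounts to one line with pathlib; the directory and file names here are illustrative.

```python
from pathlib import Path

output_dir = Path("output")
output_dir.mkdir(parents=True, exist_ok=True)  # no-op if it already exists
(output_dir / "report.md").write_text("# Report\n")
```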
Lucas Gomide
d2caf11191 Support Python 3.10+ (on CI) and remove redundant Self imports (#2553)
* ci(workflows): add Python version matrix (3.10-3.12) for tests

* refactor: remove explicit Self import from typing

Python 3.10+ natively supports Self type annotation without explicit imports

* chore: rename external_memory file test

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-04-10 14:37:24 -04:00
Vini Brasil
37979a0ca1 Raise exception when flow fails (#2579) 2025-04-10 13:08:32 -04:00
devin-ai-integration[bot]
c9f47e6a37 Add result_as_answer parameter to @tool decorator (Fixes #2561) (#2562)
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
2025-04-10 09:01:26 -04:00
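A usage sketch, assuming the decorator accepts the new keyword as shown; the tool itself is hypothetical.

```python
from crewai.tools import tool

@tool("Ticket lookup", result_as_answer=True)
def lookup_ticket(ticket_id: str) -> str:
    """Return the raw ticket record as the agent's final answer."""
    return f"Ticket {ticket_id}: open"
```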
x1x2
5780c3147a fix: correct parameter name in crew template test function (#2567)
This commit resolves an issue in the crew template generator where the test() 
function incorrectly uses 'openai_model_name' as a parameter name when calling 
Crew.test(), while the actual implementation expects 'eval_llm'.

The mismatch causes a TypeError when users run the generated test command:
"Crew.test() got an unexpected keyword argument 'openai_model_name'"

This change ensures that templates generated with 'crewai create crew' will 
produce code that aligns with the framework's API.
2025-04-10 08:51:10 -04:00
João Moura
98ccbeb4bd new version 2025-04-09 18:13:41 -07:00
Tony Kipkemboi
fbb156b9de Docs: Alphabetize sections, add YouTube video, improve layout (#2560) 2025-04-09 14:14:03 -07:00
Lorenze Jay
b73960cebe KISS: Refactor LiteAgent integration in flows to use Agents instead. … (#2556)
* KISS: Refactor LiteAgent integration in flows to use Agents instead. Update documentation and examples to reflect changes in class usage, including async support and structured output handling. Enhance tests for Agent functionality and ensure compatibility with new features.

* lint fix

* dropped for clarity
2025-04-09 11:54:45 -07:00
Lucas Gomide
10328f3db4 chore: remove unsupported crew attributes from docs (#2557) 2025-04-09 11:34:49 -07:00
devin-ai-integration[bot]
da42ec7eb9 Fix #2536: Add CREWAI_DISABLE_TELEMETRY environment variable (#2537)
* Fix #2536: Add CREWAI_DISABLE_TELEMETRY environment variable

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix import order in telemetry test file

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix telemetry implementation based on PR feedback

Co-Authored-By: Joe Moura <joao@crewai.com>

* Revert telemetry implementation changes while keeping CREWAI_DISABLE_TELEMETRY functionality

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
2025-04-09 13:20:34 -04:00
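A minimal sketch of setting the variable from Python, assuming the check reads the string "true" and runs at import time.

```python
import os

# Must be set before crewai is imported so the telemetry check sees it.
os.environ["CREWAI_DISABLE_TELEMETRY"] = "true"

import crewai  # noqa: E402  deliberate late import
```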
Vini Brasil
97d4439872 Bump crewai-tools to v0.40.1 (#2554) 2025-04-09 11:24:43 -04:00
Lucas Gomide
c3bb221fb3 Merge pull request #2548 from crewAIInc/devin/1744191265-fix-taskoutput-import
Fix #2547: Add TaskOutput and CrewOutput to public exports
2025-04-09 11:24:53 -03:00
Lucas Gomide
e68cad380e Merge remote-tracking branch 'origin/main' into devin/1744191265-fix-taskoutput-import 2025-04-09 11:21:16 -03:00
Lucas Gomide
96a78a97f0 Merge pull request #2336 from sakunkun/bug_fix
fix: retrieve function_calling_llm from registered LLMs in CrewBase
2025-04-09 09:59:38 -03:00
Lucas Gomide
337d2b634b Merge branch 'main' into bug_fix 2025-04-09 09:43:28 -03:00
Devin AI
475b704f95 Fix #2547: Add TaskOutput and CrewOutput to public exports
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-04-09 09:35:05 +00:00
João Moura
b992ee9d6b small comments 2025-04-08 10:27:02 -07:00
Lucas Gomide
d7fa8464c7 Add support for External Memory (the future replacement for UserMemory) (#2510)
* fix: surfacing properly supported types by Mem0Storage

* feat: prepare Mem0Storage to accept config parameter

We're planning to remove `memory_config` soon. This commit prepares this storage to accept the config provided directly

* feat: add external memory

* fix: cleanup Mem0 warning while adding messages to the memory

* feat: support setting the current crew in memory

This can be useful when a memory is initialized before the crew, but the crew might still be a very relevant attribute

* fix: allow to reset only an external_memory from crew

* test: add external memory test

* test: ensure the config takes precedence over memory_config when setting mem0

* fix: support to provide a custom storage to External Memory

* docs: add docs about external memory

* chore: add warning messages about the deprecation of UserMemory

* fix: fix typing check

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-04-07 10:40:35 -07:00
João Moura
918c0589eb adding new docs 2025-04-07 02:46:40 -04:00
sakunkun
c9d3eb7ccf fix ruff check error of project_test.py 2025-04-07 10:08:40 +08:00
Tony Kipkemboi
d216edb022 Merge pull request #2520 from exiao/main
Fix title and position in docs for Arize Phoenix
2025-04-05 18:01:20 -04:00
exiao
afa8783750 Update arize-phoenix-observability.mdx 2025-04-03 13:03:39 -04:00
exiao
a661050464 Merge branch 'crewAIInc:main' into main 2025-04-03 11:34:29 -04:00
exiao
c14f990098 Update docs.json 2025-04-03 11:33:51 -04:00
exiao
26ccaf78ec Update arize-phoenix-observability.mdx 2025-04-03 11:33:18 -04:00
exiao
12e98e1f3c Update and rename phoenix-observability.mdx to arize-phoenix-observability.mdx 2025-04-03 11:32:56 -04:00
Brandon Hancock (bhancock_ai)
efe27bd570 Feat/individual react agent (#2483)
* WIP

* WIP

* wip

* wip

* WIP

* More WIP

* It's working but needs a massive clean up

* output type works now

* Usage metrics fixed

* more testing

* WIP

* cleaning up

* Update logger

* 99% done. Need to make docs match new example

* cleanup

* drop hard coded examples

* docs

* Clean up

* Fix errors

* Trying to fix CI issues

* more type checker fixes

* More type checking fixes

* Update LiteAgent documentation for clarity and consistency; replace WebsiteSearchTool with SerperDevTool, and improve formatting in examples.

* fix fingerprinting issues

* fix type-checker

* Fix type-checker issue by adding type ignore comment for cache read in ToolUsage class

* Add optional agent parameter to CrewAgentParser and enhance action handling logic

* Remove unused parameters from ToolUsage instantiation in tests and clean up debug print statement in CrewAgentParser.

* Remove deprecated test files and examples for LiteAgent; add comprehensive tests for LiteAgent functionality, including tool usage and structured output handling.

* Remove unused variable 'result' from ToolUsage class to clean up code.

* Add initialization for 'result' variable in ToolUsage class to resolve type-checker warnings

* Refactor agent_utils.py by removing unused event imports and adding missing commas in function definitions. Update test_events.py to reflect changes in expected event counts and adjust assertions accordingly. Modify test_tools_emits_error_events.yaml to include new headers and update response content for consistency with recent API changes.

* Enhance tests in crew_test.py by verifying cache behavior in test_tools_with_custom_caching and ensuring proper agent initialization with added commas in test_crew_kickoff_for_each_works_with_manager_agent_copy.

* Update agent tests to reflect changes in expected call counts and improve response formatting in YAML cassette. Adjusted mock call count from 2 to 3 and refined interaction formats for clarity and consistency.

* Refactor agent tests to update model versions and improve response formatting in YAML cassettes. Changed model references from 'o1-preview' to 'o3-mini' and adjusted interaction formats for consistency. Enhanced error handling in context length tests and refined mock setups for better clarity.

* Update tool usage logging to ensure tool arguments are consistently formatted as strings. Adjust agent test cases to reflect changes in maximum iterations and expected outputs, enhancing clarity in assertions. Update YAML cassettes to align with new response formats and improve overall consistency across tests.

* Update YAML cassette for LLM tests to reflect changes in response structure and model version. Adjusted request and response headers, including updated content length and user agent. Enhanced token limits and request counts for improved testing accuracy.

* Update tool usage logging to store tool arguments as native types instead of strings, enhancing data integrity and usability.

* Refactor agent tests by removing outdated test cases and updating YAML cassettes to reflect changes in tool usage and response formats. Adjusted request and response headers, including user agent and content length, for improved accuracy in testing. Enhanced interaction formats for consistency across tests.

* Add Excalidraw diagram file for visual representation of input-output flow

Created a new Excalidraw file that includes a diagram illustrating the input box, database, and output box with connecting arrows. This visual aid enhances understanding of the data flow within the application.

* Remove redundant error handling for action and final answer in CrewAgentParser. Update tests to reflect this change by deleting the corresponding test case.

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
Co-authored-by: Lorenze Jay <lorenzejaytech@gmail.com>
2025-04-02 08:54:46 -07:00
Lucas Gomide
403ea385d7 Merge branch 'main' into bug_fix 2025-04-02 10:00:53 -03:00
Orce MARINKOVSKI
9b51e1174c fix expected output (#2498)
Fix expected output: a missing expected_output on a task throws errors

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-04-01 21:54:35 -07:00
Tony Kipkemboi
a3b5413f16 Merge pull request #2413 from exiao/main
Add Arize Phoenix docs and tutorials
2025-04-01 17:23:07 -04:00
exiao
bce4bb5c4e Update docs.json 2025-04-01 14:51:01 -04:00
Lorenze Jay
3f92e217f9 Merge branch 'main' into main 2025-04-01 10:35:26 -07:00
theadityarao
b0f9637662 fix documentation for "Using Crews and Flows Together" (#2490)
* Update README.md

* Update README.md

* Update README.md

* Update README.md

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-04-01 10:31:22 -07:00
Lucas Gomide
63ef3918dd feat: cleanup Pydantic warning (#2507)
Several warnings were addressed, following https://docs.pydantic.dev/2.10/migration
2025-04-01 08:45:45 -07:00
Lucas Gomide
3c24350306 fix: remove logs we don't need to see from UserMemory initialization (#2497) 2025-03-31 08:27:36 -07:00
Lucas Gomide
356d4d9729 Merge pull request #2495 from Vidit-Ostwal/fix-user-memory-config
Fix user memory config
2025-03-28 17:17:52 -03:00
Vidit-Ostwal
e290064ecc Fixes minor typo in memory docs 2025-03-28 22:39:17 +05:30
Vidit-Ostwal
77fa1b18c7 added early return 2025-03-28 22:30:32 +05:30
Vidit-Ostwal
08a6a82071 Minor Changes 2025-03-28 22:08:15 +05:30
Lucas Gomide
625748e462 Merge pull request #2492 from crewAIInc/bugfix-2409-pin-tools
chore(deps): pin crewai-tools to compatible version ~=0.38.0
2025-03-27 17:10:54 -03:00
lucasgomide
6e209d5d77 chore(deps): pin crewai-tools to compatible version ~=0.38.0
fixes [issue](https://github.com/crewAIInc/crewAI/issues/2390)
2025-03-27 16:36:08 -03:00
Vini Brasil
f845fac4da Refactor event base classes (#2491)
- Renamed `CrewEvent` to `BaseEvent` across the codebase for consistency
- Created a `CrewBaseEvent` that automatically identifies fingerprints for DRY
- Added a new `to_json()` method for serializing events
2025-03-27 15:42:11 -03:00
exiao
b6c32b014c Update phoenix-observability.mdx 2025-03-27 13:22:33 -04:00
exiao
06950921e9 Update phoenix-observability.mdx 2025-03-27 13:07:16 -04:00
Lucas Gomide
fc9da22c38 Merge pull request #2265 from Vidit-Ostwal/Branch_2260
Added .copy for manager agent and shallow copy for manager llm
2025-03-27 09:26:04 -03:00
Vidit-Ostwal
02f790ffcb Fixed Intent 2025-03-27 08:14:07 +05:30
Vidit-Ostwal
af7983be43 Fixed Intent 2025-03-27 08:12:47 +05:30
Vidit-Ostwal
a83661fd6e Merge branch 'main' into Branch_2260 2025-03-27 08:11:17 +05:30
João Moura
e1a73e0c44 Using fingerprints (#2456)
* using fingerprints

* passing fingerptins on tools

* fix

* update lock

* Fix type checker errors

---------

Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-03-26 14:54:23 -07:00
Eduardo Chiarotti
48983773f5 feat: add output to ToolUsageFinishedEvent (#2477)
* feat: add output to ToolUsageFinishedEvent

* feat: add type ignore

* feat: add tests
2025-03-26 16:50:09 -03:00
Lucas Gomide
73701fda1e Merge pull request #2476 from crewAIInc/devin/1742990927-fix-issue-2475
Fix multimodal agent validation errors with image processing
2025-03-26 16:40:23 -03:00
lucasgomide
3deeba4cab test: adding missing test to ensure multimodal content structures 2025-03-26 16:30:17 -03:00
Devin AI
e3dde17af0 docs: improve LLMCallStartedEvent docstring to clarify multimodal support
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-03-26 16:29:24 -03:00
Devin AI
49b8cc95ae fix: update LLMCallStartedEvent message type to support multimodal content (#2475)
fix: sort imports in test file to fix linting

fix: properly sort imports with ruff

Co-Authored-By: Joe Moura <joao@crewai.com>
2025-03-26 16:29:15 -03:00
Vidit-Ostwal
6145331ee4 Added test cases mentioned in the issue 2025-03-27 00:37:13 +05:30
Lucas Gomide
f1839bc6db Merge branch 'main' into Branch_2260 2025-03-26 14:24:03 -03:00
Tony Kipkemboi
0b58911153 Merge pull request #2482 from crewAIInc/docs/improve-observability
docs: update theme to mint and modify opik observability doc
2025-03-26 11:40:45 -04:00
Tony Kipkemboi
ee78446cc5 Merge branch 'main' into docs/improve-observability 2025-03-26 11:29:59 -04:00
Tony Kipkemboi
50fe5080e6 docs: update theme to mint and modify opik observability doc 2025-03-26 11:28:02 -04:00
Brandon Hancock (bhancock_ai)
e1b8394265 Fixed (#2481)
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-03-26 11:25:10 -04:00
Lorenze Jay
c23e8fbb02 Refactor type hints and clean up imports in crew.py (#2480)
- Removed unused import of BaseTool from langchain_core.tools.
- Updated type hints in crew.py to streamline code and improve readability.
- Cleaned up whitespace for better code formatting.

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-26 11:16:09 -04:00
Lucas Gomide
65aeb85e88 Merge pull request #2352 from crewAIInc/devin/1741797763-fix-long-role-name
Fix #2351: Sanitize collection names to meet ChromaDB requirements
2025-03-26 12:07:15 -03:00
Devin AI
6c003e0382 Address PR comment: Move import to top level in knowledge_storage.py
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-03-26 12:02:17 -03:00
lucasgomide
6b14ffcffb fix: delegate collection name sanitization to knowledge store 2025-03-26 12:02:17 -03:00
Devin AI
df25703cc2 Address PR review: Add constants, IPv4 validation, error handling, and expanded tests
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-03-26 12:02:17 -03:00
Devin AI
12a815e5db Fix #2351: Sanitize collection names to meet ChromaDB requirements
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-03-26 12:02:17 -03:00
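An illustrative sanitizer for ChromaDB's documented constraints (3-63 characters, alphanumeric first and last character); a sketch rather than the exact crewAI implementation, and it omits the IPv4 check mentioned above.

```python
import re

def sanitize_collection_name(name: str) -> str:
    # Replace disallowed characters, then enforce length and
    # alphanumeric boundary characters.
    sanitized = re.sub(r"[^a-zA-Z0-9._-]", "_", name) or "default"
    sanitized = sanitized[:63]
    if not sanitized[0].isalnum():
        sanitized = "a" + sanitized[1:]
    if not sanitized[-1].isalnum():
        sanitized = sanitized[:-1] + "z"
    return sanitized.ljust(3, "x")  # pad very short names up to 3 chars
```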
Tony Kipkemboi
102836a2c2 Merge pull request #2478 from anmorgan24/Add-Opik-to-docs
Add Opik to docs
2025-03-26 10:55:51 -04:00
Tony Kipkemboi
d38be25d33 Merge branch 'main' into Add-Opik-to-docs 2025-03-26 10:48:17 -04:00
Abby Morgan
ac848f9ff4 Update opik-observability.mdx
Changed icon to meteor as per tony's request
2025-03-26 10:46:59 -04:00
Vini Brasil
a25a27c3d3 Add exclude option to to_serializable() (#2479) 2025-03-26 11:35:12 -03:00
Abby Morgan
22c8e5f433 Update opik-observability.mdx
Fix typo
2025-03-26 10:06:36 -04:00
Abby Morgan
8df8255f18 Update opik-observability.mdx
Fix typo
2025-03-26 10:04:53 -04:00
Abby Morgan
66124d9afb Update opik-observability.mdx 2025-03-26 09:57:32 -04:00
Abby Morgan
7def3a8acc Update opik-observability.mdx
Add resources
2025-03-26 09:42:17 -04:00
Abby Morgan
5b7fed2cb6 Create opik-observability.mdx 2025-03-26 09:36:23 -04:00
Abby Morgan
838b3bc09d Add opik screenshot 2025-03-26 09:36:05 -04:00
Lucas Gomide
ebb585e494 Merge pull request #2461 from crewAIInc/bugfix-2392-kickoff-for-each-conditional-task
fix: properly clone ConditionalTask instances
2025-03-26 08:57:09 -03:00
sakunkun
7c67c2c6af fix project_test.py 2025-03-26 14:02:04 +08:00
sakunkun
e4f5c7cdf2 Merge branch 'crewAIInc:main' into bug_fix 2025-03-26 10:50:15 +08:00
Abby Morgan
f09238e512 Update docs.json
Add Opik to docs/docs.json
2025-03-25 15:52:29 -04:00
lucasgomide
da5f60e7f3 fix: properly clone ConditionalTask instances
Previously, copying a Task always returned an instance of Task even when cloning a subclass such as ConditionalTask.
This commit ensures that the clone preserves the original class type
2025-03-25 16:05:06 -03:00
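The core idea of the fix, as a minimal sketch (the real Task.copy handles many more fields):

```python
class Task:
    def __init__(self, description: str):
        self.description = description

    def copy(self) -> "Task":
        # type(self) keeps the concrete subclass instead of a hard-coded Task.
        return type(self)(description=self.description)


class ConditionalTask(Task):
    pass


# The clone is still a ConditionalTask, not a plain Task.
assert isinstance(ConditionalTask("check availability").copy(), ConditionalTask)
```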
devin-ai-integration[bot]
807c13e144 Add support for custom LLM implementations (#2277)
* Add support for custom LLM implementations

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix import sorting and type annotations

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix linting issues with import sorting

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix type errors in crew.py by updating tool-related methods to return List[BaseTool]

Co-Authored-By: Joe Moura <joao@crewai.com>

* Enhance custom LLM implementation with better error handling, documentation, and test coverage

Co-Authored-By: Joe Moura <joao@crewai.com>

* Refactor LLM module by extracting BaseLLM to a separate file

This commit moves the BaseLLM abstract base class from llm.py to a new file llms/base_llm.py to improve code organization. The changes include:

- Creating a new file src/crewai/llms/base_llm.py
- Moving the BaseLLM class to the new file
- Updating imports in __init__.py and llm.py to reflect the new location
- Updating test cases to use the new import path

The refactoring maintains the existing functionality while improving the project's module structure.

* Add AISuite LLM support and update dependencies

- Integrate AISuite as a new third-party LLM option
- Update pyproject.toml and uv.lock to include aisuite package
- Modify BaseLLM to support more flexible initialization
- Remove unnecessary LLM imports across multiple files
- Implement AISuiteLLM with basic chat completion functionality

* Update AISuiteLLM and LLM utility type handling

- Modify AISuiteLLM to support more flexible input types for messages
- Update type hints in AISuiteLLM to allow string or list of message dictionaries
- Enhance LLM utility function to support broader LLM type annotations
- Remove default `self.stop` attribute from BaseLLM initialization

* Update LLM imports and type hints across multiple files

- Modify imports in crew_chat.py to use LLM instead of BaseLLM
- Update type hints in llm_utils.py to use LLM type
- Add optional `stop` parameter to BaseLLM initialization
- Refactor type handling for LLM creation and usage

* Improve stop words handling in CrewAgentExecutor

- Add support for handling existing stop words in LLM configuration
- Ensure stop words are correctly merged and deduplicated
- Update type hints to support both LLM and BaseLLM types

* Remove abstract method set_callbacks from BaseLLM class

* Enhance CustomLLM and JWTAuthLLM initialization with model parameter

- Update CustomLLM to accept a model parameter during initialization
- Modify test cases to include the new model argument
- Ensure JWTAuthLLM and TimeoutHandlingLLM also utilize the model parameter in their constructors
- Update type hints in create_llm function to support both LLM and BaseLLM types

* Enhance create_llm function to support BaseLLM type

- Update the create_llm function to accept both LLM and BaseLLM instances
- Ensure compatibility with existing LLM handling logic

* Update type hint for initialize_chat_llm to support BaseLLM

- Modify the return type of initialize_chat_llm function to allow for both LLM and BaseLLM instances
- Ensure compatibility with recent changes in create_llm function

* Refactor AISuiteLLM to include tools parameter in completion methods

- Update the _prepare_completion_params method to accept an optional tools parameter
- Modify the chat completion method to utilize the new tools parameter for enhanced functionality
- Clean up print statements for better code clarity

* Remove unused tool_calls handling in AISuiteLLM chat completion method for cleaner code.

* Refactor Crew class and LLM hierarchy for improved type handling and code clarity

- Update Crew class methods to enhance readability with consistent formatting and type hints.
- Change LLM class to inherit from BaseLLM for better structure.
- Remove unnecessary type checks and streamline tool handling in CrewAgentExecutor.
- Adjust BaseLLM to provide default implementations for stop words and context window size methods.
- Clean up AISuiteLLM by removing unused methods related to stop words and context window size.

* Remove unused `stream` method from `BaseLLM` class to enhance code clarity and maintainability.

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: Lorenze Jay <lorenzejaytech@gmail.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-25 12:39:08 -04:00
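A rough sketch of the extension point this PR introduces; the real BaseLLM lives in crewai.llms.base_llm and its exact signature may differ:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional, Union


class BaseLLM(ABC):
    def __init__(self, model: str, stop: Optional[List[str]] = None):
        self.model = model
        self.stop = stop or []

    @abstractmethod
    def call(self, messages: Union[str, List[Dict[str, str]]], **kwargs: Any) -> str:
        """Return the completion text for the given messages."""

    def supports_stop_words(self) -> bool:
        return True  # default implementation, overridable by subclasses


class EchoLLM(BaseLLM):
    """A trivial custom LLM, here only to show the shape of a subclass."""

    def call(self, messages, **kwargs) -> str:
        text = messages if isinstance(messages, str) else messages[-1]["content"]
        return f"[{self.model}] {text}"


print(EchoLLM(model="echo-1").call("hello"))  # -> "[echo-1] hello"
```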
Tony Kipkemboi
3dea3d0183 docs: reorganize observability docs and update titles (#2467) 2025-03-25 08:14:52 -07:00
Tony Kipkemboi
35cb7fcf4d Merge pull request #2463 from ayulockin/main
docs: Add documentation for W&B Weave
2025-03-25 09:48:09 -04:00
ayulockin
d2a9a4a4e4 Revert "remove uv.lock"
This reverts commit e62e9c7401.
2025-03-25 19:05:58 +05:30
ayulockin
e62e9c7401 remove uv.lock 2025-03-25 19:04:51 +05:30
ayulockin
3c5031e711 docs.json 2025-03-25 19:04:14 +05:30
ayulockin
82e84c0f88 features and resources 2025-03-25 16:43:14 +05:30
ayulockin
2c550dc175 add weave docs 2025-03-25 15:46:41 +05:30
Tony Kipkemboi
bdc92deade docs: update changelog dates (#2437)
* docs: update changelog dates

* docs: add aws bedrock tools docs

* docs: fix incorrect respect_context_window parameter in Crew example
2025-03-24 12:06:50 -04:00
sakunkun
448d31cad9 Fix the failing test of project_test.py 2025-03-22 11:28:27 +08:00
Brandon Hancock (bhancock_ai)
ed1f009c64 Feat/improve yaml extraction (#2428)
* Support wildcard handling in `emit()`

Change `emit()` to call handlers registered for parent classes using
`isinstance()`. Ensures that base event handlers receive derived
events.

* Fix failing test

* Remove unused variable

* update interpolation to work with example response types in yaml docs

* make tests

* fix circular deps

* Fixing interpolation imports

* Improve test

---------

Co-authored-by: Vinicius Brasil <vini@hey.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-03-21 18:59:55 -07:00
Matisse
bb3829a9ed docs: Update model reference in LLM configuration (#2267)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-21 15:12:26 -04:00
Fernando Galves
0a116202f0 Update the context window size for Amazon Bedrock FM- llm.py (#2304)
Update the context window size for Amazon Bedrock Foundation Models.

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-03-21 14:48:25 -04:00
Stefano Baccianella
4daa88fa59 As explained in https://github.com/mangiucugna/json_repair?tab=readme-ov-file#performance-considerations we can skip a wasteful json.loads() here and save quite some time (#2397)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-03-21 14:25:19 -04:00
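For context, json_repair's performance notes suggest skipping the initial json.loads validation pass when the input is already known to be malformed; a minimal sketch, assuming the documented repair_json parameters:

```python
from json_repair import repair_json

broken = '{"action": "search", "input": "AI news",}'  # trailing comma

# skip_json_loads=True tells json_repair not to attempt json.loads first,
# saving a wasted parse on input we already expect to be broken.
fixed = repair_json(broken, skip_json_loads=True, return_objects=True)
print(fixed)  # {'action': 'search', 'input': 'AI news'}
```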
Parth Patel
53067f8b92 add Mem0 OSS support (#2429)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-21 13:57:24 -04:00
Saurabh Misra
d3a09c3180 ⚡️ Speed up method CrewAgentParser._clean_action by 427,565% (#2382)
Here is the optimized version of the program.

Co-authored-by: codeflash-ai[bot] <148906541+codeflash-ai[bot]@users.noreply.github.com>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-21 13:51:14 -04:00
Saurabh Misra
4d7aacb5f2 ⚡️ Speed up method Repository.is_git_repo by 72,270% (#2381)
Here is the optimized version of the `Repository` class.

Co-authored-by: codeflash-ai[bot] <148906541+codeflash-ai[bot]@users.noreply.github.com>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-21 13:43:48 -04:00
Julio Peixoto
6b1cf78e41 docs: add detailed docstrings to Telemetry class methods (#2377)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-21 13:34:16 -04:00
Patcher
80f1a88b63 Upgrade OTel SDK version to 1.30.0 (#2375)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-21 13:26:50 -04:00
Jorge Gonzalez
32da76a2ca Use task in the note about how method names need to match task names (#2355)
The note is about the task but mentions the agent incorrectly.

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-21 13:17:43 -04:00
Brandon Hancock (bhancock_ai)
b3667a8c09 Merge branch 'main' into bug_fix 2025-03-21 13:08:09 -04:00
Gustavo Satheler
3aa48dcd58 fix: move agent tools for a variable instead of use format (#2319)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-21 12:32:54 -04:00
Tony Kipkemboi
03f1d57463 Merge pull request #2430 from crewAIInc/update-llm-docs
docs: add documentation for Local NVIDIA NIM with WSL2
2025-03-20 12:57:37 -07:00
Tony Kipkemboi
4725d0de0d Merge branch 'main' into update-llm-docs 2025-03-20 12:50:06 -07:00
Arthur Chien
b766af75f2 fix the _extract_thought (#2398)
* fix the _extract_thought

the regex string should be the same as the prompt in en.json:129
...\nThought: I now know the final answer\nFinal Answer: the...

* fix Action match

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-20 15:44:44 -04:00
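A hedged sketch of the parsing rule the fix aligns with (the exact pattern in crewAI's parser may differ):

```python
import re

# Capture the Thought text that precedes an Action or Final Answer,
# mirroring the prompt wording referenced in en.json.
THOUGHT_PATTERN = re.compile(r"Thought:\s*(.*?)\n(?=Action|Final Answer)", re.DOTALL)


def extract_thought(text: str) -> str:
    match = THOUGHT_PATTERN.search(text)
    return match.group(1).strip() if match else ""


output = "Thought: I now know the final answer\nFinal Answer: 42"
assert extract_thought(output) == "I now know the final answer"
```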
Tony Kipkemboi
b2c8779f4c Add documentation for Local NVIDIA NIM with WSL2 2025-03-20 12:39:37 -07:00
Tony Kipkemboi
df266bda01 Update documentation: Add changelog, fix formatting issues, replace mint.json with docs.json (#2400) 2025-03-20 14:44:21 -04:00
Vidit-Ostwal
eed7919d72 Merge remote-tracking branch 'origin/Branch_2260' into Branch_2260 2025-03-20 22:49:51 +05:30
Vidit-Ostwal
1e49d1b592 Fixed doc string of copy function 2025-03-20 22:47:46 +05:30
Vidit-Ostwal
ded7197fcb Merge branch 'main' into Branch_2260 2025-03-20 22:46:30 +05:30
Lorenze Jay
2155acb3a3 docs: Update JSONSearchTool and RagTool configuration parameter from 'embedder' to 'embedding_model' (#2311)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-20 13:11:37 -04:00
Sir Qasim
794574957e Add note to create ./knowledge folder for source file management (#2297)
This update includes a note in the documentation instructing users to create a ./knowledge folder. All source files (such as .txt, .pdf, .xlsx, .json) should be placed in this folder for centralized management. This change aims to streamline file organization and improve accessibility across projects.

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-20 12:54:17 -04:00
Sir Qasim
66b19311a7 Fix crewai run Command Issue for Flow Projects and Cloud Deployment (#2291)
This PR addresses an issue with the crewai run command following the creation of a flow project. Previously, the update command interfered with execution, causing it not to work as expected. With these changes, the command now runs according to the instructions in the readme.md, and it also improves deployment support when using CrewAI Cloud.

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-20 12:48:02 -04:00
devin-ai-integration[bot]
9fc84fc1ac Fix incorrect import statement in memory examples documentation (fixes #2395) (#2396)
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-20 12:17:26 -04:00
Amine Saihi
f8f9df6d1d update doc SpaceNewsKnowledgeSource code snippet (#2275)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-20 12:06:21 -04:00
João Moura
6e94edb777 TYPO 2025-03-20 08:21:17 -07:00
Brandon Hancock (bhancock_ai)
5f2ac8c33e Merge branch 'main' into Branch_2260 2025-03-20 11:20:54 -04:00
Vini Brasil
bbe896d48c Support wildcard handling in emit() (#2424)
* Support wildcard handling in `emit()`

Change `emit()` to call handlers registered for parent classes using
`isinstance()`. Ensures that base event handlers receive derived
events.

* Fix failing test

* Remove unused variable
2025-03-20 09:59:17 -04:00
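A minimal sketch of the isinstance-based dispatch described above; names are illustrative, not the actual crewAI event bus:

```python
from collections import defaultdict
from typing import Callable, DefaultDict, List, Type


class BaseEvent: ...
class TaskEvent(BaseEvent): ...
class TaskCompletedEvent(TaskEvent): ...


class EventBus:
    def __init__(self) -> None:
        self._handlers: DefaultDict[Type[BaseEvent], List[Callable]] = defaultdict(list)

    def on(self, event_type: Type[BaseEvent], handler: Callable) -> None:
        self._handlers[event_type].append(handler)

    def emit(self, event: BaseEvent) -> None:
        for event_type, handlers in self._handlers.items():
            if isinstance(event, event_type):  # wildcard: base classes match too
                for handler in handlers:
                    handler(event)


bus = EventBus()
bus.on(BaseEvent, lambda e: print("saw", type(e).__name__))
bus.emit(TaskCompletedEvent())  # the base handler fires for the derived event
```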
Seyed Mostafa Meshkati
9298054436 docs: add base_url env for anthropic llm example (#2204)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-20 09:48:11 -04:00
Fernando Galves
90b7937796 Update documentation (#2199)
* Update llms.mdx

Update Amazon Bedrock section with more information about the foundation models available.

* Update llms.mdx

fix the description of Amazon Bedrock section

* Update llms.mdx

Remove the incorrect </tab> tag

* Update llms.mdx

Add Claude 3.7 Sonnet to the Amazon Bedrock list

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-20 09:42:23 -04:00
elda27
520933b4c5 Fix: More comfortable validation #2177 (#2178)
* Fix: More comfortable validation

* Fix: union type support

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-20 09:28:31 -04:00
exiao
9ea4fb8c82 Add Phoenix docs and tutorials 2025-03-20 02:23:13 -04:00
Vini Brasil
fe0813e831 Improve MethodExecutionFailedEvent.error typing (#2401) 2025-03-18 12:52:23 -04:00
Brandon Hancock (bhancock_ai)
33cebea15b spelling and tab fix (#2394) 2025-03-17 16:31:23 -04:00
João Moura
e723e5ca3f preparing new version 2025-03-17 09:13:21 -07:00
Jakub Kopecký
24f1a19310 feat: add docs for ApifyActorsTool (#2254)
* add docs for ApifyActorsTool

* improve readme, add link to template

* format

* improve tool docs

* improve readme

* Update apifyactorstool.mdx (#1)

* Update apifyactorstool.mdx

* Update apifyactorstool.mdx

* dans suggestions

* custom apify icon

* update descripton

* Update apifyactorstool.mdx

---------

Co-authored-by: Jan Čurn <jan.curn@gmail.com>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-16 12:29:57 -04:00
devin-ai-integration[bot]
d0959573dc Fix type check error: Remove duplicate @property decorator for fingerprint in Crew class (#2369)
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2025-03-14 03:08:55 -03:00
Vivek Soundrapandi
939afd5f82 Bug fix in document (#2370)
There is a bug in the document where the write-section task method is not invoked before being passed on as context. This results in an error, as the utilities expect a dict but a function gets passed.

this is discussed clearly here: https://community.crewai.com/t/attribute-error-str-object-has-no-attribute-get/1079/16
2025-03-14 03:02:38 -03:00
João Moura
d42e58e199 adding fingerprints (#2332)
* adding fingerprints

* fixed

* fix

* Fix Pydantic v2 compatibility in SecurityConfig and Fingerprint classes (#2335)

* Fix Pydantic v2 compatibility in SecurityConfig and Fingerprint classes

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix type-checker errors in fingerprint properties

Co-Authored-By: Joe Moura <joao@crewai.com>

* Enhance security validation in Fingerprint and SecurityConfig classes

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>

* incorporate small improvements / changes

* Expect different

* Remove redundant null check in Crew.fingerprint property (#2342)

* Remove redundant null check in Crew.fingerprint property and add security module

Co-Authored-By: Joe Moura <joao@crewai.com>

* Enhance security module with type hints, improved UUID namespace, metadata validation, and versioning

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>

---------

Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
2025-03-14 03:00:30 -03:00
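A hedged sketch of the fingerprint idea: a deterministic uuid5 identifier derived from an agent's or crew's defining fields. The function name and seed format here are assumptions:

```python
import uuid

# A fixed namespace makes uuid5 fingerprints deterministic across runs.
FINGERPRINT_NAMESPACE = uuid.NAMESPACE_DNS


def generate_fingerprint(seed: str) -> str:
    """Stable identifier for an agent/crew derived from its defining fields."""
    return str(uuid.uuid5(FINGERPRINT_NAMESPACE, seed))


fp_a = generate_fingerprint("researcher|Find and summarize AI news")
fp_b = generate_fingerprint("researcher|Find and summarize AI news")
assert fp_a == fp_b  # same definition -> same fingerprint
```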
Lorenze Jay
000bab4cf5 Enhance Event Listener with Rich Visualization and Improved Logging (#2321)
* Enhance Event Listener with Rich Visualization and Improved Logging

* Add verbose flag to EventListener for controlled logging

* Update crew test to set EventListener verbose flag

* Refactor EventListener logging and visualization with improved tool usage tracking

* Improve task logging with task ID display in EventListener

* Fix EventListener tool branch removal and type hinting

* Add type hints to EventListener class attributes

* Simplify EventListener import in Crew class

* Refactor EventListener tree node creation and remove unused method

* Refactor EventListener to utilize ConsoleFormatter for improved logging and visualization

* Enhance EventListener with property setters for crew, task, agent, tool, flow, and method branches to streamline state management

* Refactor crew test to instantiate EventListener and set verbose flags for improved clarity in logging

* Keep private parts private

* Remove unused import and clean up type hints in EventListener

* Enhance flow logging in EventListener and ConsoleFormatter by including flow ID in tree creation and status updates for better traceability.

---------

Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-13 11:07:32 -07:00
Tony Kipkemboi
8df1042180 docs: add instructions for upgrading crewAI with uv tool (#2363) 2025-03-13 10:38:32 -04:00
sakunkun
313038882c fix: retrieve function_calling_llm from registered LLMs in CrewBase 2025-03-11 11:40:33 +00:00
João Moura
41a670166a new docs 2025-03-10 17:59:35 -07:00
João Moura
a77496a217 new images 2025-03-10 17:35:51 -07:00
João Moura
430260c985 adding state docs 2025-03-10 16:53:23 -07:00
João Moura
334b0959b0 updates 2025-03-10 16:53:23 -07:00
João Moura
2b31e26ba5 update 2025-03-10 16:53:23 -07:00
Brandon Hancock (bhancock_ai)
7122a29a20 fix mistral issues (#2308) 2025-03-10 12:08:43 -04:00
João Moura
f3ddb430a7 fix image 2025-03-09 04:34:38 -07:00
João Moura
435bfca186 preparing new version 2025-03-09 04:24:05 -07:00
João Moura
2ef896bdd5 update readme 2025-03-08 20:39:15 -08:00
Brandon Hancock (bhancock_ai)
59c6c29706 include model_name (#2310) 2025-03-07 16:55:18 -05:00
Brandon Hancock (bhancock_ai)
a1f35e768f Enhance LLM Streaming Response Handling and Event System (#2266)
* Initial Stream working

* add tests

* adjust tests

* Update test for multiplication

* Update test for multiplication part 2

* max iter on new test

* streaming tool call test update

* Force pass

* another one

* give up on agent

* WIP

* Non-streaming working again

* stream working too

* fixing type check

* fix failing test

* fix failing test

* fix failing test

* Fix testing for CI

* Fix failing test

* Fix failing test

* Skip failing CI/CD tests

* too many logs

* working

* Trying to fix tests

* drop openai failing tests

* improve logic

* Implement LLM stream chunk event handling with in-memory text stream

* More event types

* Update docs

---------

Co-authored-by: Lorenze Jay <lorenzejaytech@gmail.com>
2025-03-07 12:54:32 -05:00
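A minimal sketch of the streaming pattern this PR describes: emit one event per chunk while still returning the full response. Event and function names are assumptions:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class LLMStreamChunkEvent:
    chunk: str


def consume_stream(chunks: Iterable[str], emit: Callable[[LLMStreamChunkEvent], None]) -> str:
    full_response: List[str] = []
    for chunk in chunks:
        emit(LLMStreamChunkEvent(chunk=chunk))  # observers see partial text live
        full_response.append(chunk)
    return "".join(full_response)  # callers still receive the complete answer


text = consume_stream(iter(["Hel", "lo"]), emit=lambda e: print(e.chunk, end=""))
assert text == "Hello"
```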
Tony Kipkemboi
00eede0d5d docs: Update installation guide to use uv tool package manager (#2196)
* docs: add Qdrant vector search tool documentation

* Update installation docs to use uv and improve quickstart guide

* docs: improve installation instructions and add structured outputs video

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-03 10:45:57 -05:00
Vidit-Ostwal
cf1864ce0f Added docstring 2025-03-03 21:12:21 +05:30
Thiago Moretto
a3d5c86218 Convert tab to spaces on crew.py template (#2190)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-03-03 10:39:11 -05:00
Tony Kipkemboi
60d13bf7e8 docs: Tool docs improvements (#2259)
* docs: add Qdrant vector search tool documentation

* Update installation docs to use uv and improve quickstart guide

* docs: improve installation instructions and add structured outputs video

* Update tool documentation with agent integration examples and consistent formatting
2025-03-03 10:29:37 -05:00
Vidit-Ostwal
52e0a84829 Added .copy for manager agent and shallow copy for manager llm 2025-03-03 20:57:41 +05:30
Tony Kipkemboi
86825e1769 docs: add Qdrant vector search tool documentation (#2184)
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-02-27 13:54:44 -05:00
Brandon Hancock (bhancock_ai)
7afc531fbb Improve hierarchical docs (#2244) 2025-02-27 13:38:21 -05:00
Brandon Hancock (bhancock_ai)
ed0490112b explain how to use event listener (#2245) 2025-02-27 13:32:16 -05:00
Brandon Hancock (bhancock_ai)
66c66e3d84 Update docs (#2226) 2025-02-26 15:21:36 -05:00
Brandon Hancock (bhancock_ai)
b9b625a70d Improve extract thought (#2223)
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-02-26 14:51:46 -05:00
Brandon Hancock (bhancock_ai)
b58253cacc Support multiple router calls and address issue #2175 (#2231)
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-02-26 13:42:17 -05:00
Brandon Hancock (bhancock_ai)
fbf8732784 Fix type issue (#2224)
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-02-26 13:27:41 -05:00
Brandon Hancock (bhancock_ai)
8fedbe49cb Add support for python 3.10 (#2230) 2025-02-26 13:24:31 -05:00
Lorenze Jay
1e8ee247ca feat: Enhance agent knowledge setup with optional crew embedder (#2232)
- Modify `Agent` class to add `set_knowledge` method
- Allow setting embedder from crew-level configuration
- Remove `_set_knowledge` method from initialization
- Update `Crew` class to set agent knowledge during agent setup
- Add default implementation in `BaseAgent` for compatibility
2025-02-26 12:10:43 -05:00
Fernando Galves
34d2993456 Update the constants.py file adding the list of foundation models available in Amazon Bedrock (#2170)
* Update constants.py

This PR updates the list of foundation models available in Amazon Bedrock to reflect the latest offerings.

* Update constants.py with inference profiles

Add the cross-region inference profiles to increase throughput and improve resiliency by routing your requests across multiple AWS Regions during peak utilization bursts.

* Update constants.py

Fix the model order

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-02-25 15:39:23 -05:00
devin-ai-integration[bot]
e3c5c174ee feat: add context window size for o3-mini model (#2192)
* feat: add context window size for o3-mini model

Fixes #2191

Co-Authored-By: Joe Moura <joao@crewai.com>

* feat: add context window validation and tests

- Add validation for context window size bounds (1024-2097152)
- Add test for context window validation
- Fix test import error

Co-Authored-By: Joe Moura <joao@crewai.com>

* style: fix import sorting in llm_test.py

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-02-25 15:32:14 -05:00
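The bounds below come straight from the commit message (1024 to 2097152); the surrounding structure is an illustrative sketch, not the actual module:

```python
MIN_CONTEXT_WINDOW = 1024
MAX_CONTEXT_WINDOW = 2_097_152

LLM_CONTEXT_WINDOW_SIZES = {
    "o3-mini": 200_000,  # the size this PR registers for o3-mini
}


def validate_context_window_size(size: int) -> int:
    if not (MIN_CONTEXT_WINDOW <= size <= MAX_CONTEXT_WINDOW):
        raise ValueError(
            f"Context window {size} must be between "
            f"{MIN_CONTEXT_WINDOW} and {MAX_CONTEXT_WINDOW}"
        )
    return size
```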
Brandon Hancock (bhancock_ai)
b4e2db0306 incorporating fix from @misrasaurabh1 with additional type fix (#2213)
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-02-25 15:29:21 -05:00
Shivtej Narake
9cc759ba32 [MINOR]support ChatOllama from langchain_ollama (#2158)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-02-25 15:19:36 -05:00
Vidit Ostwal
ac9f8b9d5a Fixed the issue 2123 around memory command with CLI (#2155)
* Fixed the issue 2123 around memory command with CLI

* Fixed typo, added the recommendations

* Fixed Typo

* Fixed lint issue

* Fixed the print statement to include path as well

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-02-25 12:29:33 -05:00
Victor Degliame
3d4a1e4b18 fix: typo in 'delegate_work' and 'ask_question' prompts (#2144)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-02-25 12:16:04 -05:00
nikolaidk
123f302744 Update kickoff-async.mdx (#2138)
Missing mandatory field expected_output on task in example

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-02-25 12:12:27 -05:00
Brandon Hancock (bhancock_ai)
5bae78639e Revert "feat: add prompt observability code (#2027)" (#2211)
* Revert "feat: add prompt observability code (#2027)"

This reverts commit 90f1bee602.

* Fix issues with flows post merge

* Decoupling telemetry and ensure tests  (#2212)

* feat: Enhance event listener and telemetry tracking

- Update event listener to improve telemetry span handling
- Add execution_span field to Task for better tracing
- Modify event handling in EventListener to use new span tracking
- Remove debug print statements
- Improve test coverage for crew and flow events
- Update cassettes to reflect new event tracking behavior

* Remove telemetry references from Crew class

- Remove Telemetry import and initialization from Crew class
- Delete _telemetry attribute from class configuration
- Clean up unused telemetry-related code

* test: Improve crew verbose output test with event log filtering

- Filter out event listener logs in verbose output test
- Ensure no output when verbose is set to False
- Enhance test coverage for crew logging behavior

* dropped comment

* refactor: Improve telemetry span tracking in EventListener

- Remove `execution_span` from Task class
- Add `execution_spans` dictionary to EventListener to track spans
- Update task event handlers to use new span tracking mechanism
- Simplify span management across task lifecycle events

* lint

* Fix failing test

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-02-24 16:30:16 -05:00
Lorenze Jay
5235442a5b Decoupling telemetry and ensure tests (#2212)
* feat: Enhance event listener and telemetry tracking

- Update event listener to improve telemetry span handling
- Add execution_span field to Task for better tracing
- Modify event handling in EventListener to use new span tracking
- Remove debug print statements
- Improve test coverage for crew and flow events
- Update cassettes to reflect new event tracking behavior

* Remove telemetry references from Crew class

- Remove Telemetry import and initialization from Crew class
- Delete _telemetry attribute from class configuration
- Clean up unused telemetry-related code

* test: Improve crew verbose output test with event log filtering

- Filter out event listener logs in verbose output test
- Ensure no output when verbose is set to False
- Enhance test coverage for crew logging behavior

* dropped comment

* refactor: Improve telemetry span tracking in EventListener

- Remove `execution_span` from Task class
- Add `execution_spans` dictionary to EventListener to track spans
- Update task event handlers to use new span tracking mechanism
- Simplify span management across task lifecycle events

* lint
2025-02-24 12:24:35 -08:00
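A minimal sketch of the span-tracking refactor: the listener owns a dict of in-flight telemetry spans keyed by task, instead of storing the span on the Task itself. Names are assumptions:

```python
from typing import Any, Dict


class EventListener:
    def __init__(self, telemetry: Any) -> None:
        self.telemetry = telemetry
        self.execution_spans: Dict[str, Any] = {}  # task id -> open span

    def on_task_started(self, event: Any) -> None:
        self.execution_spans[event.task.id] = self.telemetry.task_started(event.task)

    def on_task_completed(self, event: Any) -> None:
        # Close and discard the span for this task, if one was opened.
        span = self.execution_spans.pop(event.task.id, None)
        if span is not None:
            self.telemetry.task_ended(span, event.task)
```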
Lorenze Jay
c62fb615b1 feat: Add LLM call events for improved observability (#2214)
* feat: Add LLM call events for improved observability

- Introduce new LLM call events: LLMCallStartedEvent, LLMCallCompletedEvent, and LLMCallFailedEvent
- Emit events for LLM calls and tool calls to provide better tracking and debugging
- Add event handling in the LLM class to track call lifecycle
- Update event bus to support new LLM-related events
- Add test cases to validate LLM event emissions

* feat: Add event handling for LLM call lifecycle events

- Implement event listeners for LLM call events in EventListener
- Add logging for LLM call start, completion, and failure events
- Import and register new LLM-specific event types

* less log

* refactor: Update LLM event response type to support Any

* refactor: Simplify LLM call completed event emission

Remove unnecessary LLMCallType conversion when emitting LLMCallCompletedEvent

* refactor: Update LLM event docstrings for clarity

Improve docstrings for LLM call events to more accurately describe their purpose and lifecycle

* feat: Add LLMCallFailedEvent emission for tool execution errors

Enhance error handling by emitting a specific event when tool execution fails during LLM calls
2025-02-24 15:17:44 -05:00
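A hedged sketch of the call lifecycle these events wrap; field lists and signatures are assumptions based on the PR description:

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class LLMCallStartedEvent:
    messages: Any


@dataclass
class LLMCallCompletedEvent:
    response: Any  # widened to Any per the refactor bullet above


@dataclass
class LLMCallFailedEvent:
    error: str


def call_with_events(llm: Any, messages: Any, emit: Callable[[Any], None]) -> Any:
    emit(LLMCallStartedEvent(messages=messages))
    try:
        response = llm.call(messages)
    except Exception as exc:
        # Also emitted when a tool invoked during the call raises.
        emit(LLMCallFailedEvent(error=str(exc)))
        raise
    emit(LLMCallCompletedEvent(response=response))
    return response
```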
Brandon Hancock (bhancock_ai)
78797c64b0 fix reset memory issue (#2182)
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-02-24 14:51:58 -05:00
Brandon Hancock (bhancock_ai)
8a7584798b Better support async flows (#2193)
* Better support async

* Drop coroutine
2025-02-24 10:25:30 -05:00
408 changed files with 85625 additions and 19055 deletions

.github/security.md

@@ -1,19 +1,27 @@
-CrewAI takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organization.
-If you believe you have found a security vulnerability in any CrewAI product or service, please report it to us as described below.
+## CrewAI Security Vulnerability Reporting Policy
-## Reporting a Vulnerability
-Please do not report security vulnerabilities through public GitHub issues.
-To report a vulnerability, please email us at security@crewai.com.
-Please include the requested information listed below so that we can triage your report more quickly
+CrewAI prioritizes the security of our software products, services, and GitHub repositories. To promptly address vulnerabilities, follow these steps for reporting security issues:
-- Type of issue (e.g. SQL injection, cross-site scripting, etc.)
-- Full paths of source file(s) related to the manifestation of the issue
-- The location of the affected source code (tag/branch/commit or direct URL)
-- Any special configuration required to reproduce the issue
-- Step-by-step instructions to reproduce the issue (please include screenshots if needed)
-- Proof-of-concept or exploit code (if possible)
-- Impact of the issue, including how an attacker might exploit the issue
+### Reporting Process
+Do **not** report vulnerabilities via public GitHub issues.
-Once we have received your report, we will respond to you at the email address you provide. If the issue is confirmed, we will release a patch as soon as possible depending on the complexity of the issue.
+Email all vulnerability reports directly to:
+**security@crewai.com**
-At this time, we are not offering a bug bounty program. Any rewards will be at our discretion.
+### Required Information
+To help us quickly validate and remediate the issue, your report must include:
+- **Vulnerability Type:** Clearly state the vulnerability type (e.g., SQL injection, XSS, privilege escalation).
+- **Affected Source Code:** Provide full file paths and direct URLs (branch, tag, or commit).
+- **Reproduction Steps:** Include detailed, step-by-step instructions. Screenshots are recommended.
+- **Special Configuration:** Document any special settings or configurations required to reproduce.
+- **Proof-of-Concept (PoC):** Provide exploit or PoC code (if available).
+- **Impact Assessment:** Clearly explain the severity and potential exploitation scenarios.
+### Our Response
+- We will acknowledge receipt of your report promptly via your provided email.
+- Confirmed vulnerabilities will receive priority remediation based on severity.
+- Patches will be released as swiftly as possible following verification.
+### Reward Notice
+Currently, we do not offer a bug bounty program. Rewards, if issued, are discretionary.


@@ -5,12 +5,29 @@ on: [pull_request]
jobs:
lint:
runs-on: ubuntu-latest
+env:
+TARGET_BRANCH: ${{ github.event.pull_request.base.ref }}
steps:
- uses: actions/checkout@v4
+with:
+fetch-depth: 0
-- name: Install Requirements
+- name: Fetch Target Branch
+run: git fetch origin $TARGET_BRANCH --depth=1
+- name: Install Ruff
+run: pip install ruff
+- name: Get Changed Python Files
+id: changed-files
+run: |
-pip install ruff
+merge_base=$(git merge-base origin/"$TARGET_BRANCH" HEAD)
+changed_files=$(git diff --name-only --diff-filter=ACMRTUB "$merge_base" | grep '\.py$' || true)
+echo "files<<EOF" >> $GITHUB_OUTPUT
+echo "$changed_files" >> $GITHUB_OUTPUT
+echo "EOF" >> $GITHUB_OUTPUT
-- name: Run Ruff Linter
-run: ruff check
+- name: Run Ruff on Changed Files
+if: ${{ steps.changed-files.outputs.files != '' }}
+run: |
+echo "${{ steps.changed-files.outputs.files }}" | tr " " "\n" | xargs -I{} ruff check "{}"

.github/workflows/notify-downstream.yml (new file)

@@ -0,0 +1,33 @@
name: Notify Downstream
on:
push:
branches:
- main
permissions:
contents: read
jobs:
notify-downstream:
runs-on: ubuntu-latest
steps:
- name: Generate GitHub App token
id: app-token
uses: tibdex/github-app-token@v2
with:
app_id: ${{ secrets.OSS_SYNC_APP_ID }}
private_key: ${{ secrets.OSS_SYNC_APP_PRIVATE_KEY }}
- name: Notify Repo B
uses: peter-evans/repository-dispatch@v3
with:
token: ${{ steps.app-token.outputs.token }}
repository: ${{ secrets.OSS_SYNC_DOWNSTREAM_REPO }}
event-type: upstream-commit
client-payload: |
{
"commit_sha": "${{ github.sha }}"
}


@@ -12,6 +12,9 @@ jobs:
tests:
runs-on: ubuntu-latest
timeout-minutes: 15
+strategy:
+matrix:
+python-version: ['3.10', '3.11', '3.12']
steps:
- name: Checkout code
uses: actions/checkout@v4
@@ -21,12 +24,11 @@ jobs:
with:
enable-cache: true
-- name: Set up Python
-run: uv python install 3.12.8
+- name: Set up Python ${{ matrix.python-version }}
+run: uv python install ${{ matrix.python-version }}
- name: Install the project
run: uv sync --dev --all-extras
- name: Run tests
-run: uv run pytest tests -vv
+run: uv run pytest --block-network --timeout=60 -vv

.gitignore

@@ -22,4 +22,8 @@ crew_tasks_output.json
.ruff_cache
.venv
agentops.log
-test_flow.html
+test_flow.html
+crewairules.mdc
+plan.md
+conceptual_plan.md
+build_image


@@ -2,8 +2,3 @@ exclude = [
"templates",
"__init__.py",
]
-[lint]
-select = [
-"I", # isort rules
-]


@@ -1,4 +1,4 @@
-Copyright (c) 2018 The Python Packaging Authority
+Copyright (c) 2025 crewAI, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

README.md

@@ -2,21 +2,46 @@
![Logo of CrewAI](./docs/crewai_logo.png)
# **CrewAI**
-**CrewAI**: Production-grade framework for orchestrating sophisticated AI agent systems. From simple automations to complex real-world applications, CrewAI provides precise control and deep customization. By fostering collaborative intelligence through flexible, production-ready architecture, CrewAI empowers agents to work together seamlessly, tackling complex business challenges with predictable, consistent results.
</div>
-**CrewAI Enterprise**
-Want to plan, build (+ no code), deploy, monitor and interare your agents: [CrewAI Enterprise](https://www.crewai.com/enterprise). Designed for complex, real-world applications, our enterprise solution offers:
+### Fast and Flexible Multi-Agent Automation Framework
-- **Seamless Integrations**
-- **Scalable & Secure Deployment**
-- **Actionable Insights**
-- **24/7 Support**
+CrewAI is a lean, lightning-fast Python framework built entirely from
+scratch—completely **independent of LangChain or other agent frameworks**.
+It empowers developers with both high-level simplicity and precise low-level
+control, ideal for creating autonomous AI agents tailored to any scenario.
+- **CrewAI Crews**: Optimize for autonomy and collaborative intelligence.
+- **CrewAI Flows**: Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively
+With over 100,000 developers certified through our community courses at
+[learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
+standard for enterprise-ready AI automation.
+# CrewAI Enterprise Suite
+CrewAI Enterprise Suite is a comprehensive bundle tailored for organizations
+that require secure, scalable, and easy-to-manage agent-driven automation.
+You can try one part of the suite the [Crew Control Plane for free](https://app.crewai.com)
+## Crew Control Plane Key Features:
+- **Tracing & Observability**: Monitor and track your AI agents and workflows in real-time, including metrics, logs, and traces.
+- **Unified Control Plane**: A centralized platform for managing, monitoring, and scaling your AI agents and workflows.
+- **Seamless Integrations**: Easily connect with existing enterprise systems, data sources, and cloud infrastructure.
+- **Advanced Security**: Built-in robust security and compliance measures ensuring safe deployment and management.
+- **Actionable Insights**: Real-time analytics and reporting to optimize performance and decision-making.
+- **24/7 Support**: Dedicated enterprise support to ensure uninterrupted operation and quick resolution of issues.
+- **On-premise and Cloud Deployment Options**: Deploy CrewAI Enterprise on-premise or in the cloud, depending on your security and compliance requirements.
+CrewAI Enterprise is designed for enterprises seeking a powerful,
+reliable solution to transform complex business processes into efficient,
+intelligent automations.
<h3>
-[Homepage](https://www.crewai.com/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Examples](https://github.com/crewAIInc/crewAI-examples) | [Discourse](https://community.crewai.com)
+[Homepage](https://www.crewai.com/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Discourse](https://community.crewai.com)
</h3>
@@ -47,8 +72,19 @@ Want to plan, build (+ no code), deploy, monitor and interare your agents: [Crew
## Why CrewAI?
-The power of AI collaboration has too much to offer.
-CrewAI is a standalone framework, built from the ground up without dependencies on Langchain or other agent frameworks. It's designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
+<div align="center" style="margin-bottom: 30px;">
+<img src="docs/asset.png" alt="CrewAI Logo" width="100%">
+</div>
+CrewAI unlocks the true potential of multi-agent automation, delivering the best-in-class combination of speed, flexibility, and control with either Crews of AI Agents or Flows of Events:
+- **Standalone Framework**: Built from scratch, independent of LangChain or any other agent framework.
+- **High Performance**: Optimized for speed and minimal resource usage, enabling faster execution.
+- **Flexible Low Level Customization**: Complete freedom to customize at both high and low levels - from overall workflows and system architecture to granular agent behaviors, internal prompts, and execution logic.
+- **Ideal for Every Use Case**: Proven effective for both simple tasks and highly complex, real-world, enterprise-grade scenarios.
+- **Robust Community**: Backed by a rapidly growing community of over **100,000 certified** developers offering comprehensive support and resources.
+CrewAI empowers developers and enterprises to confidently build intelligent automations, bridging the gap between simplicity, flexibility, and performance.
## Getting Started
@@ -221,10 +257,14 @@ reporting_task:
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool
+from crewai.agents.agent_builder.base_agent import BaseAgent
+from typing import List
@CrewBase
class LatestAiDevelopmentCrew():
"""LatestAiDevelopment crew"""
+agents: List[BaseAgent]
+tasks: List[Task]
@agent
def researcher(self) -> Agent:
@@ -321,18 +361,16 @@ In addition to the sequential process, you can use the hierarchical process, whi
## Key Features
-**Note**: CrewAI is a standalone framework built from the ground up, without dependencies on Langchain or other agent frameworks.
+CrewAI stands apart as a lean, standalone, high-performance framework delivering simplicity, flexibility, and precise control—free from the complexity and limitations found in other agent frameworks.
-- **Deep Customization**: Build sophisticated agents with full control over the system - from overriding inner prompts to accessing low-level APIs. Customize roles, goals, tools, and behaviors while maintaining clean abstractions.
-- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enabling complex problem-solving in real-world scenarios.
-- **Flexible Task Management**: Define and customize tasks with granular control, from simple operations to complex multi-step processes.
-- **Production-Grade Architecture**: Support for both high-level abstractions and low-level customization, with robust error handling and state management.
-- **Predictable Results**: Ensure consistent, accurate outputs through programmatic guardrails, agent training capabilities, and flow-based execution control. See our [documentation on guardrails](https://docs.crewai.com/how-to/guardrails/) for implementation details.
-- **Model Flexibility**: Run your crew using OpenAI or open source models with production-ready integrations. See [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) for detailed configuration options.
-- **Event-Driven Flows**: Build complex, real-world workflows with precise control over execution paths, state management, and conditional logic.
-- **Process Orchestration**: Achieve any workflow pattern through flows - from simple sequential and hierarchical processes to complex, custom orchestration patterns with conditional branching and parallel execution.
+- **Standalone & Lean**: Completely independent from other frameworks like LangChain, offering faster execution and lighter resource demands.
+- **Flexible & Precise**: Easily orchestrate autonomous agents through intuitive [Crews](https://docs.crewai.com/concepts/crews) or precise [Flows](https://docs.crewai.com/concepts/flows), achieving perfect balance for your needs.
+- **Seamless Integration**: Effortlessly combine Crews (autonomy) and Flows (precision) to create complex, real-world automations.
+- **Deep Customization**: Tailor every aspect—from high-level workflows down to low-level internal prompts and agent behaviors.
+- **Reliable Performance**: Consistent results across simple tasks and complex, enterprise-level automations.
+- **Thriving Community**: Backed by robust documentation and over 100,000 certified developers, providing exceptional support and guidance.
![CrewAI Mind Map](./docs/crewAI-mindmap.png "CrewAI Mind Map")
+Choose CrewAI to easily build powerful, adaptable, and production-ready AI automations.
## Examples
@@ -367,11 +405,16 @@ You can test different real life examples of AI crews in the [CrewAI-examples re
### Using Crews and Flows Together
-CrewAI's power truly shines when combining Crews with Flows to create sophisticated automation pipelines. Here's how you can orchestrate multiple Crews within a Flow:
+CrewAI's power truly shines when combining Crews with Flows to create sophisticated automation pipelines.
+CrewAI flows support logical operators like `or_` and `and_` to combine multiple conditions. This can be used with `@start`, `@listen`, or `@router` decorators to create complex triggering conditions.
+- `or_`: Triggers when any of the specified conditions are met.
+- `and_`Triggers when all of the specified conditions are met.
+Here's how you can orchestrate multiple Crews within a Flow:
```python
-from crewai.flow.flow import Flow, listen, start, router
-from crewai import Crew, Agent, Task
+from crewai.flow.flow import Flow, listen, start, router, or_
+from crewai import Crew, Agent, Task, Process
from pydantic import BaseModel
# Define structured state for precise control
@@ -445,7 +488,7 @@ class AdvancedAnalysisFlow(Flow[MarketState]):
)
return strategy_crew.kickoff()
@listen("medium_confidence", "low_confidence")
@listen(or_("medium_confidence", "low_confidence"))
def request_additional_analysis(self):
self.state.recommendations.append("Gather more data")
return "Additional analysis required"
@@ -461,7 +504,7 @@ This example demonstrates how to:
CrewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
-Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring you agents' connections to models.
+Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models.
## How CrewAI Compares
@@ -563,13 +606,39 @@ Users can opt-in to Further Telemetry, sharing the complete telemetry data by se
CrewAI is released under the [MIT License](https://github.com/crewAIInc/crewAI/blob/main/LICENSE).
## Frequently Asked Questions (FAQ)
-### Q: What is CrewAI?
-A: CrewAI is a cutting-edge framework for orchestrating role-playing, autonomous AI agents. It enables agents to work together seamlessly, tackling complex tasks through collaborative intelligence.
+### General
+- [What exactly is CrewAI?](#q-what-exactly-is-crewai)
+- [How do I install CrewAI?](#q-how-do-i-install-crewai)
+- [Does CrewAI depend on LangChain?](#q-does-crewai-depend-on-langchain)
+- [Is CrewAI open-source?](#q-is-crewai-open-source)
+- [Does CrewAI collect data from users?](#q-does-crewai-collect-data-from-users)
+### Features and Capabilities
+- [Can CrewAI handle complex use cases?](#q-can-crewai-handle-complex-use-cases)
+- [Can I use CrewAI with local AI models?](#q-can-i-use-crewai-with-local-ai-models)
+- [What makes Crews different from Flows?](#q-what-makes-crews-different-from-flows)
+- [How is CrewAI better than LangChain?](#q-how-is-crewai-better-than-langchain)
+- [Does CrewAI support fine-tuning or training custom models?](#q-does-crewai-support-fine-tuning-or-training-custom-models)
+### Resources and Community
+- [Where can I find real-world CrewAI examples?](#q-where-can-i-find-real-world-crewai-examples)
+- [How can I contribute to CrewAI?](#q-how-can-i-contribute-to-crewai)
+### Enterprise Features
+- [What additional features does CrewAI Enterprise offer?](#q-what-additional-features-does-crewai-enterprise-offer)
+- [Is CrewAI Enterprise available for cloud and on-premise deployments?](#q-is-crewai-enterprise-available-for-cloud-and-on-premise-deployments)
+- [Can I try CrewAI Enterprise for free?](#q-can-i-try-crewai-enterprise-for-free)
+### Q: What exactly is CrewAI?
+A: CrewAI is a standalone, lean, and fast Python framework built specifically for orchestrating autonomous AI agents. Unlike frameworks like LangChain, CrewAI does not rely on external dependencies, making it leaner, faster, and simpler.
### Q: How do I install CrewAI?
-A: You can install CrewAI using pip:
+A: Install CrewAI using pip:
```shell
pip install crewai
```
@@ -577,27 +646,62 @@ For additional tools, use:
```shell
pip install 'crewai[tools]'
```
+### Q: Does CrewAI depend on LangChain?
+A: No. CrewAI is built entirely from the ground up, with no dependencies on LangChain or other agent frameworks. This ensures a lean, fast, and flexible experience.
-### Q: Can I use CrewAI with local models?
-A: Yes, CrewAI supports various LLMs, including local models. You can configure your agents to use local models via tools like Ollama & LM Studio. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details.
+### Q: Can CrewAI handle complex use cases?
+A: Yes. CrewAI excels at both simple and highly complex real-world scenarios, offering deep customization options at both high and low levels, from internal prompts to sophisticated workflow orchestration.
-### Q: What are the key features of CrewAI?
-A: Key features include role-based agent design, autonomous inter-agent delegation, flexible task management, process-driven execution, output saving as files, and compatibility with both open-source and proprietary models.
+### Q: Can I use CrewAI with local AI models?
+A: Absolutely! CrewAI supports various language models, including local ones. Tools like Ollama and LM Studio allow seamless integration. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details.
-### Q: How does CrewAI compare to other AI orchestration tools?
-A: CrewAI is designed with production in mind, offering flexibility similar to Autogen's conversational agents and structured processes like ChatDev, but with more adaptability for real-world applications.
+### Q: What makes Crews different from Flows?
+A: Crews provide autonomous agent collaboration, ideal for tasks requiring flexible decision-making and dynamic interaction. Flows offer precise, event-driven control, ideal for managing detailed execution paths and secure state management. You can seamlessly combine both for maximum effectiveness.
+### Q: How is CrewAI better than LangChain?
+A: CrewAI provides simpler, more intuitive APIs, faster execution speeds, more reliable and consistent results, robust documentation, and an active community—addressing common criticisms and limitations associated with LangChain.
### Q: Is CrewAI open-source?
-A: Yes, CrewAI is open-source and welcomes contributions from the community.
+A: Yes, CrewAI is open-source and actively encourages community contributions and collaboration.
-### Q: Does CrewAI collect any data?
-A: CrewAI uses anonymous telemetry to collect usage data for improvement purposes. No sensitive data (like prompts, task descriptions, or API calls) is collected. Users can opt-in to share more detailed data by setting `share_crew=True` on their Crews.
+### Q: Does CrewAI collect data from users?
+A: CrewAI collects anonymous telemetry data strictly for improvement purposes. Sensitive data such as prompts, tasks, or API responses are never collected unless explicitly enabled by the user.
-### Q: Where can I find examples of CrewAI in action?
-A: You can find various real-life examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.
-### Q: What is the difference between Crews and Flows?
-A: Crews and Flows serve different but complementary purposes in CrewAI. Crews are teams of AI agents working together to accomplish specific tasks through role-based collaboration, delivering accurate and predictable results. Flows, on the other hand, are event-driven workflows that can orchestrate both Crews and regular Python code, allowing you to build complex automation pipelines with secure state management and conditional execution paths.
+### Q: Where can I find real-world CrewAI examples?
+A: Check out practical examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), covering use cases like trip planners, stock analysis, and job postings.
### Q: How can I contribute to CrewAI?
-A: Contributions are welcome! You can fork the repository, create a new branch for your feature, add your improvement, and send a pull request. Check the Contribution section in the README for more details.
+A: Contributions are warmly welcomed! Fork the repository, create your branch, implement your changes, and submit a pull request. See the Contribution section of the README for detailed guidelines.
+### Q: What additional features does CrewAI Enterprise offer?
+A: CrewAI Enterprise provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.
+### Q: Is CrewAI Enterprise available for cloud and on-premise deployments?
+A: Yes, CrewAI Enterprise supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.
+### Q: Can I try CrewAI Enterprise for free?
+A: Yes, you can explore part of the CrewAI Enterprise Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.
+### Q: Does CrewAI support fine-tuning or training custom models?
+A: Yes, CrewAI can integrate with custom-trained or fine-tuned models, allowing you to enhance your agents with domain-specific knowledge and accuracy.
+### Q: Can CrewAI agents interact with external tools and APIs?
+A: Absolutely! CrewAI agents can easily integrate with external tools, APIs, and databases, empowering them to leverage real-world data and resources.
+### Q: Is CrewAI suitable for production environments?
+A: Yes, CrewAI is explicitly designed with production-grade standards, ensuring reliability, stability, and scalability for enterprise deployments.
+### Q: How scalable is CrewAI?
+A: CrewAI is highly scalable, supporting simple automations and large-scale enterprise workflows involving numerous agents and complex tasks simultaneously.
+### Q: Does CrewAI offer debugging and monitoring tools?
+A: Yes, CrewAI Enterprise includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.
+### Q: What programming languages does CrewAI support?
+A: CrewAI is primarily Python-based but easily integrates with services and APIs written in any programming language through its flexible API integration capabilities.
+### Q: Does CrewAI offer educational resources for beginners?
+A: Yes, CrewAI provides extensive beginner-friendly tutorials, courses, and documentation through learn.crewai.com, supporting developers at all skill levels.
+### Q: Can CrewAI automate human-in-the-loop workflows?
+A: Yes, CrewAI fully supports human-in-the-loop workflows, allowing seamless collaboration between human experts and AI agents for enhanced decision-making.

docs/asset.png (new binary file; 66 KiB image, not shown)

docs/changelog.mdx (new file)

@@ -0,0 +1,283 @@
---
title: Changelog
description: View the latest updates and changes to CrewAI
icon: timeline
---
<Update label="2025-04-30" description="v0.117.1">
## Release Highlights
<Frame>
<img src="/images/releases/v01171.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.117.1">View on GitHub</a>
</div>
**Core Improvements & Fixes**
- Upgraded **crewai-tools** to latest version
- Upgraded **liteLLM** to latest version
- Fixed **Mem0 OSS**
</Update>
<Update label="2025-04-28" description="v0.117.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01170.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.117.0">View on GitHub</a>
</div>
**New Features & Enhancements**
- Added `result_as_answer` parameter support in `@tool` decorator.
- Introduced support for new language models: GPT-4.1, Gemini-2.0, and Gemini-2.5 Pro.
- Enhanced knowledge management capabilities.
- Added Huggingface provider option in CLI.
- Improved compatibility and CI support for Python 3.10+.
**Core Improvements & Fixes**
- Fixed issues with incorrect template parameters and missing inputs.
- Improved asynchronous flow handling with coroutine condition checks.
- Enhanced memory management with isolated configuration and correct memory object copying.
- Fixed initialization of lite agents with correct references.
- Addressed Python type hint issues and removed redundant imports.
- Updated event placement for improved tool usage tracking.
- Raised explicit exceptions when flows fail.
- Removed unused code and redundant comments from various modules.
- Updated GitHub App token action to v2.
**Documentation & Guides**
- Enhanced documentation structure, including enterprise deployment instructions.
- Automatically create output folders for documentation generation.
- Fixed broken link in WeaviateVectorSearchTool documentation.
- Fixed guardrail documentation usage and import paths for JSON search tools.
- Updated documentation for CodeInterpreterTool.
- Improved SEO, contextual navigation, and error handling for documentation pages.
</Update>
<Update label="2025-04-07" description="v0.114.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01140.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.114.0">View on GitHub</a>
</div>
**New Features & Enhancements**
- Agents as an atomic unit. (`Agent(...).kickoff()`)
- Support for [Custom LLM implementations](https://docs.crewai.com/guides/advanced/custom-llm).
- Integrated External Memory and [Opik observability](https://docs.crewai.com/how-to/opik-observability).
- Enhanced YAML extraction.
- Multimodal agent validation.
- Added Secure fingerprints for agents and crews.
**Core Improvements & Fixes**
- Improved serialization, agent copying, and Python compatibility.
- Added wildcard support to `emit()`
- Added support for additional router calls and context window adjustments.
- Fixed typing issues, validation, and import statements.
- Improved method performance.
- Enhanced agent task handling, event emissions, and memory management.
- Fixed CLI issues, conditional tasks, cloning behavior, and tool outputs.
**Documentation & Guides**
- Improved documentation structure, theme, and organization.
- Added guides for Local NVIDIA NIM with WSL2, W&B Weave, and Arize Phoenix.
- Updated tool configuration examples, prompts, and observability docs.
- Guide on using singular agents within Flows.
</Update>
<Update label="2025-03-17" description="v0.108.0">
## Release Highlights
<Frame>
<img src="/images/releases/v01080.png" />
</Frame>
<div style={{ textAlign: 'center', marginBottom: '1rem' }}>
<a href="https://github.com/crewAIInc/crewAI/releases/tag/0.108.0">View on GitHub</a>
</div>
**New Features & Enhancements**
- Converted tabs to spaces in `crew.py` template
- Enhanced LLM Streaming Response Handling and Event System
- Included `model_name`
- Enhanced Event Listener with rich visualization and improved logging
- Added fingerprints
**Bug Fixes**
- Fixed Mistral issues
- Fixed a bug in documentation
- Fixed type check error in fingerprint property
**Documentation Updates**
- Improved tool documentation
- Updated installation guide for the `uv` tool package
- Added instructions for upgrading crewAI with the `uv` tool
- Added documentation for `ApifyActorsTool`
</Update>
<Update label="2025-03-10" description="v0.105.0">
**Core Improvements & Fixes**
- Fixed issues with missing template variables and user memory configuration
- Improved async flow support and addressed agent response formatting
- Enhanced memory reset functionality and fixed CLI memory commands
- Fixed type issues, tool calling properties, and telemetry decoupling
**New Features & Enhancements**
- Added Flow state export and improved state utilities
- Enhanced agent knowledge setup with optional crew embedder
- Introduced event emitter for better observability and LLM call tracking
- Added support for Python 3.10 and ChatOllama from langchain_ollama
- Integrated context window size support for the o3-mini model
- Added support for multiple router calls
**Documentation & Guides**
- Improved documentation layout and hierarchical structure
- Added QdrantVectorSearchTool guide and clarified event listener usage
- Fixed typos in prompts and updated Amazon Bedrock model listings
</Update>
<Update label="2025-02-12" description="v0.102.0">
**Core Improvements & Fixes**
- Enhanced LLM Support: Improved structured LLM output, parameter handling, and formatting for Anthropic models
- Crew & Agent Stability: Fixed issues with cloning agents/crews using knowledge sources, multiple task outputs in conditional tasks, and ignored Crew task callbacks
- Memory & Storage Fixes: Fixed short-term memory handling with Bedrock, ensured correct embedder initialization, and added a reset memories function in the crew class
- Training & Execution Reliability: Fixed broken training and interpolation issues with dict and list input types
**New Features & Enhancements**
- Advanced Knowledge Management: Improved naming conventions and enhanced embedding configuration with custom embedder support
- Expanded Logging & Observability: Added JSON format support for logging and integrated MLflow tracing documentation
- Data Handling Improvements: Updated excel_knowledge_source.py to process multi-tab files
- General Performance & Codebase Clean-Up: Streamlined enterprise code alignment and resolved linting issues
- Adding new tool: `QdrantVectorSearchTool`
**Documentation & Guides**
- Updated AI & Memory Docs: Improved Bedrock, Google AI, and long-term memory documentation
- Task & Workflow Clarity: Added "Human Input" row to Task Attributes, Langfuse guide, and FileWriterTool documentation
- Fixed Various Typos & Formatting Issues
</Update>
<Update label="2025-01-28" description="v0.100.0">
**Features**
- Add Composio docs
- Add SageMaker as a LLM provider
**Fixes**
- Overall LLM connection issues
- Using safe accessors on training
- Add version check to crew_chat.py
**Documentation**
- New docs for crewai chat
- Improve formatting and clarity in CLI and Composio Tool docs
</Update>
<Update label="2025-01-20" description="v0.98.0">
**Features**
- Conversation crew v1
- Add unique ID to flow states
- Add @persist decorator with FlowPersistence interface
**Integrations**
- Add SambaNova integration
- Add NVIDIA NIM provider in cli
- Introducing VoyageAI
**Fixes**
- Fix API Key Behavior and Entity Handling in Mem0 Integration
- Fixed core invoke loop logic and relevant tests
- Make tool inputs actual objects and not strings
- Add important missing parts to creating tools
- Drop litellm version to prevent windows issue
- Handle `None` inputs before kickoff
- Fixed typos, nested pydantic model issue, and docling issues
</Update>
<Update label="2025-01-04" description="v0.95.0">
**New Features**
- Adding Multimodal Abilities to Crew
- Programmatic Guardrails
- HITL multiple rounds
- Gemini 2.0 Support
- CrewAI Flows Improvements
- Add Workflow Permissions
- Add support for langfuse with litellm
- Portkey Integration with CrewAI
- Add interpolate_only method and improve error handling
- Docling Support
- Weaviate Support
**Fixes**
- output_file not respecting system path
- disk I/O error when resetting short-term memory
- CrewJSONEncoder now accepts enums
- Python max version
- Interpolation for output_file in Task
- Handle coworker role name case/whitespace properly
- Add tiktoken as explicit dependency and document Rust requirement
- Include agent knowledge in planning process
- Change storage initialization to None for KnowledgeStorage
- Fix optional storage checks
- include event emitter in flows
- Docstring, Error Handling, and Type Hints Improvements
- Suppressed userWarnings from litellm pydantic issues
</Update>
<Update label="2024-12-05" description="v0.86.0">
**Changes**
- Remove all references to pipeline and pipeline router
- Add Nvidia NIM as provider in Custom LLM
- Add knowledge demo + improve knowledge docs
- Add HITL multiple rounds of followup
- New docs about yaml crew with decorators
- Simplify template crew
</Update>
<Update label="2024-12-04" description="v0.85.0">
**Features**
- Added knowledge to agent level
- Feat/remove langchain
- Improve typed task outputs
- Log in to Tool Repository on crewai login
**Fixes**
- Fixes issues with result as answer not properly exiting LLM loop
- Fix missing key name when running with ollama provider
- Fix spelling issue found
**Documentation**
- Update readme for running mypy
- Add knowledge to mint.json
- Update Github actions
- Update Agents docs to include two approaches for creating an agent
- Improvements to LLM Configuration and Usage
</Update>
<Update label="2024-11-25" description="v0.83.0">
**New Features**
- New before_kickoff and after_kickoff crew callbacks
- Support to pre-seed agents with Knowledge
- Add support for retrieving user preferences and memories using Mem0
**Fixes**
- Fix Async Execution
- Upgrade chroma and adjust embedder function generator
- Update CLI Watson supported models + docs
- Reduce level for Bandit
- Fixing all tests
**Documentation**
- Update Docs
</Update>
<Update label="2024-11-13" description="v0.80.0">
**Fixes**
- Fixing Tokens callback replacement bug
- Fixing Step callback issue
- Add cached prompt tokens info on usage metrics
- Fix crew_train_success test
</Update>

(new binary image, 16 KiB)

View File

@@ -18,6 +18,18 @@ In the CrewAI framework, an `Agent` is an autonomous unit that can:
Think of an agent as a specialized team member with specific skills, expertise, and responsibilities. For example, a `Researcher` agent might excel at gathering and analyzing information, while a `Writer` agent might be better at creating content.
</Tip>
<Note type="info" title="Enterprise Enhancement: Visual Agent Builder">
CrewAI Enterprise includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time.
![Visual Agent Builder Screenshot](../images/enterprise/crew-studio-quickstart)
The Visual Agent Builder enables:
- Intuitive agent configuration with form-based interfaces
- Real-time testing and validation
- Template library with pre-configured agent types
- Easy customization of agent attributes and behaviors
</Note>
## Agent Attributes
| Attribute | Parameter | Type | Description |
@@ -106,7 +118,7 @@ class LatestAiDevelopmentCrew():
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'],
config=self.agents_config['researcher'], # type: ignore[index]
verbose=True,
tools=[SerperDevTool()]
)
@@ -114,7 +126,7 @@ class LatestAiDevelopmentCrew():
@agent
def reporting_analyst(self) -> Agent:
return Agent(
config=self.agents_config['reporting_analyst'],
config=self.agents_config['reporting_analyst'], # type: ignore[index]
verbose=True
)
```
@@ -233,7 +245,7 @@ custom_agent = Agent(
#### Code Execution
- `allow_code_execution`: Must be True to run code
- `code_execution_mode`:
- `code_execution_mode`:
- `"safe"`: Uses Docker (recommended for production)
- `"unsafe"`: Direct execution (use only in trusted environments)
@@ -243,7 +255,11 @@ custom_agent = Agent(
- `response_template`: Formats agent responses
<Note>
When using custom templates, you can use variables like `{role}`, `{goal}`, and `{input}` in your templates. These will be automatically populated during execution.
When using custom templates, ensure that both `system_template` and `prompt_template` are defined. The `response_template` is optional but recommended for consistent output formatting.
</Note>
<Note>
When using custom templates, you can use variables like `{role}`, `{goal}`, and `{backstory}` in your templates. These will be automatically populated during execution.
</Note>
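For illustration, here is a minimal sketch of an agent using custom templates with the variables above. The template bodies are assumptions to adapt to your model's chat format, not a required syntax:
```python
from crewai import Agent

agent = Agent(
    role="Researcher",
    goal="Summarize the latest AI developments",
    backstory="A meticulous analyst focused on primary sources.",
    # Illustrative template bodies; adjust them to your model's expected format.
    system_template="You are {role}. Your goal is {goal}. Background: {backstory}",
    prompt_template="Task: {input}",
    response_template="Final answer: {output}",  # optional; placeholder assumed
)
```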
## Agent Tools

View File

@@ -136,17 +136,21 @@ crewai test -n 5 -m gpt-3.5-turbo
### 8. Run
Run the crew.
Run the crew or flow.
```shell Terminal
crewai run
```
<Note>
Starting from version 0.103.0, the `crewai run` command can be used to run both standard crews and flows. For flows, it automatically detects the type from pyproject.toml and runs the appropriate command. This is now the recommended way to run both crews and flows.
</Note>
<Note>
Make sure to run these commands from the directory where your CrewAI project is set up.
Some commands may require additional configuration or setup within your project structure.
</Note>
### 9. Chat
Starting in version `0.98.0`, when you run the `crewai chat` command, you start an interactive session with your crew. The AI assistant will guide you by asking for necessary inputs to execute the crew. Once all inputs are provided, the crew will execute its tasks.
@@ -175,8 +179,78 @@ def crew(self) -> Crew:
```
</Note>
### 10. Deploy
### 10. API Keys
Deploy the crew or flow to [CrewAI Enterprise](https://app.crewai.com).
- **Authentication**: You need to be authenticated to deploy to CrewAI Enterprise.
```shell Terminal
crewai signup
```
If you already have an account, you can login with:
```shell Terminal
crewai login
```
- **Create a deployment**: Once you are authenticated, you can create a deployment for your crew or flow from the root of your local project.
```shell Terminal
crewai deploy create
```
- Reads your local project configuration.
- Prompts you to confirm the environment variables (like `OPENAI_API_KEY`, `SERPER_API_KEY`) found locally. These will be securely stored with the deployment on the Enterprise platform. Ensure your sensitive keys are correctly configured locally (e.g., in a `.env` file) before running this.
- Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI Enterprise.
```shell Terminal
crewai deploy push
```
- Initiates the deployment process on the CrewAI Enterprise platform.
- Upon successful initiation, it will output a `Deployment created successfully!` message along with the Deployment Name and a unique Deployment ID (UUID).
- **Deployment Status**: You can check the status of your deployment with:
```shell Terminal
crewai deploy status
```
This fetches the latest deployment status of your most recent deployment attempt (e.g., `Building Images for Crew`, `Deploy Enqueued`, `Online`).
- **Deployment Logs**: You can check the logs of your deployment with:
```shell Terminal
crewai deploy logs
```
This streams the deployment logs to your terminal.
- **List deployments**: You can list all your deployments with:
```shell Terminal
crewai deploy list
```
This lists all your deployments.
- **Delete a deployment**: You can delete a deployment with:
```shell Terminal
crewai deploy remove
```
This deletes the deployment from the CrewAI Enterprise platform.
- **Help Command**: You can get help with the CLI with:
```shell Terminal
crewai deploy --help
```
This shows the help message for the CrewAI Deploy CLI.
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI Enterprise](http://app.crewai.com) using the CLI.
<iframe
width="100%"
height="400"
src="https://www.youtube.com/embed/3EqSV-CYDZA"
title="CrewAI Deployment Guide"
frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
></iframe>
### 11. API Keys
When running the `crewai create crew` command, the CLI will first show you the top 5 most common LLM providers and ask you to select one.

View File

@@ -23,8 +23,7 @@ The `Crew` class has been enriched with several attributes to support advanced f
| **Process Flow** (`process`) | Defines execution logic (e.g., sequential, hierarchical) for task distribution. |
| **Verbose Logging** (`verbose`) | Provides detailed logging for monitoring and debugging. Accepts integer and boolean values to control verbosity level. |
| **Rate Limiting** (`max_rpm`) | Limits requests per minute to optimize resource usage. Setting guidelines depend on task complexity and load. |
| **Internationalization / Customization** (`language`, `prompt_file`) | Supports prompt customization for global usability. [Example of file](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json) |
| **Execution and Output Handling** (`full_output`) | Controls output granularity, distinguishing between full and final outputs. |
| **Internationalization / Customization** (`prompt_file`) | Supports prompt customization for global usability. [Example of file](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json) |
| **Callback and Telemetry** (`step_callback`, `task_callback`) | Enables step-wise and task-level execution monitoring and telemetry for performance analytics. |
| **Crew Sharing** (`share_crew`) | Allows sharing crew data with CrewAI for model improvement. Privacy implications and benefits should be considered. |
| **Usage Metrics** (`usage_metrics`) | Logs all LLM usage metrics during task execution for performance insights. |
@@ -49,4 +48,4 @@ Consider a crew with a researcher agent tasked with data gathering and a writer
## Conclusion
The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.
The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.

View File

@@ -20,17 +20,14 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** _(optional)_ | `config` | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
| **Language** _(optional)_ | `language` | Language used for the crew, defaults to English. |
| **Language File** _(optional)_ | `language_file` | Path to the language file to be used for the crew. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Memory Config** _(optional)_ | `memory_config` | Configuration for the memory provider to be used by the crew. |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
| **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. Defaults to `False`. |
| **Step Callback** _(optional)_ | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** _(optional)_ | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
| **Output Log File** _(optional)_ | `output_log_file` | Set to True to save logs as logs.txt in the current directory or provide a file path. Logs will be in JSON format if the filename ends in .json, otherwise .txt. Defautls to `None`. |
| **Output Log File** _(optional)_ | `output_log_file` | Set to True to save logs as logs.txt in the current directory or provide a file path. Logs will be in JSON format if the filename ends in .json, otherwise .txt. Defaults to `None`. |
| **Manager Agent** _(optional)_ | `manager_agent` | `manager` sets a custom agent that will be used as a manager. |
| **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
@@ -55,12 +52,16 @@ After creating your CrewAI project as outlined in the [Installation](/installati
```python code
from crewai import Agent, Crew, Task, Process
from crewai.project import CrewBase, agent, task, crew, before_kickoff, after_kickoff
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List
@CrewBase
class YourCrewName:
"""Description of your crew"""
agents: List[BaseAgent]
tasks: List[Task]
# Paths to your YAML configuration files
# To see an example agent and task defined in YAML, check out the following:
# - Task: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
@@ -83,27 +84,27 @@ class YourCrewName:
@agent
def agent_one(self) -> Agent:
return Agent(
config=self.agents_config['agent_one'],
config=self.agents_config['agent_one'], # type: ignore[index]
verbose=True
)
@agent
def agent_two(self) -> Agent:
return Agent(
config=self.agents_config['agent_two'],
config=self.agents_config['agent_two'], # type: ignore[index]
verbose=True
)
@task
def task_one(self) -> Task:
return Task(
config=self.tasks_config['task_one']
config=self.tasks_config['task_one'] # type: ignore[index]
)
@task
def task_two(self) -> Task:
return Task(
config=self.tasks_config['task_two']
config=self.tasks_config['task_two'] # type: ignore[index]
)
@crew
@@ -245,7 +246,7 @@ print(f"Token Usage: {crew_output.token_usage}")
You can see real-time logs of the crew execution by setting `output_log_file` to `True` (boolean) or a `file_name` (string). Events can be logged as both `file_name.txt` and `file_name.json`.
If set to `True` (boolean), logs will be saved as `logs.txt`.
In case of `output_log_file` is set as `False(Booelan)` or `None`, the logs will not be populated.
In case of `output_log_file` is set as `False(Boolean)` or `None`, the logs will not be populated.
```python Code
# Save crew logs (minimal sketch; agents and tasks elided)
my_crew = Crew(
    agents=[...],
    tasks=[...],
    output_log_file=True,  # saves logs.txt; pass "file.json" for JSON logs
)
```

View File

@@ -0,0 +1,365 @@
---
title: 'Event Listeners'
description: 'Tap into CrewAI events to build custom integrations and monitoring'
icon: spinner
---
# Event Listeners
CrewAI provides a powerful event system that allows you to listen for and react to various events that occur during the execution of your Crew. This feature enables you to build custom integrations, monitoring solutions, logging systems, or any other functionality that needs to be triggered based on CrewAI's internal events.
## How It Works
CrewAI uses an event bus architecture to emit events throughout the execution lifecycle. The event system is built on the following components:
1. **CrewAIEventsBus**: A singleton event bus that manages event registration and emission
2. **BaseEvent**: Base class for all events in the system
3. **BaseEventListener**: Abstract base class for creating custom event listeners
When specific actions occur in CrewAI (like a Crew starting execution, an Agent completing a task, or a tool being used), the system emits corresponding events. You can register handlers for these events to execute custom code when they occur.
<Note type="info" title="Enterprise Enhancement: Prompt Tracing">
CrewAI Enterprise provides a built-in Prompt Tracing feature that leverages the event system to track, store, and visualize all prompts, completions, and associated metadata. This provides powerful debugging capabilities and transparency into your agent operations.
![Prompt Tracing Dashboard](../images/enterprise/prompt-tracing.png)
With Prompt Tracing you can:
- View the complete history of all prompts sent to your LLM
- Track token usage and costs
- Debug agent reasoning failures
- Share prompt sequences with your team
- Compare different prompt strategies
- Export traces for compliance and auditing
</Note>
## Creating a Custom Event Listener
To create a custom event listener, you need to:
1. Create a class that inherits from `BaseEventListener`
2. Implement the `setup_listeners` method
3. Register handlers for the events you're interested in
4. Create an instance of your listener in the appropriate file
Here's a simple example of a custom event listener class:
```python
from crewai.utilities.events import (
CrewKickoffStartedEvent,
CrewKickoffCompletedEvent,
AgentExecutionCompletedEvent,
)
from crewai.utilities.events.base_event_listener import BaseEventListener
class MyCustomListener(BaseEventListener):
def __init__(self):
super().__init__()
def setup_listeners(self, crewai_event_bus):
@crewai_event_bus.on(CrewKickoffStartedEvent)
def on_crew_started(source, event):
print(f"Crew '{event.crew_name}' has started execution!")
@crewai_event_bus.on(CrewKickoffCompletedEvent)
def on_crew_completed(source, event):
print(f"Crew '{event.crew_name}' has completed execution!")
print(f"Output: {event.output}")
@crewai_event_bus.on(AgentExecutionCompletedEvent)
def on_agent_execution_completed(source, event):
print(f"Agent '{event.agent.role}' completed task")
print(f"Output: {event.output}")
```
## Properly Registering Your Listener
Simply defining your listener class isn't enough. You need to create an instance of it and ensure it's imported in your application. This ensures that:
1. The event handlers are registered with the event bus
2. The listener instance remains in memory (not garbage collected)
3. The listener is active when events are emitted
### Option 1: Import and Instantiate in Your Crew or Flow Implementation
The most important thing is to create an instance of your listener in the file where your Crew or Flow is defined and executed:
#### For Crew-based Applications
Create and import your listener at the top of your Crew implementation file:
```python
# In your crew.py file
from crewai import Agent, Crew, Task
from my_listeners import MyCustomListener
# Create an instance of your listener
my_listener = MyCustomListener()
class MyCustomCrew:
# Your crew implementation...
def crew(self):
return Crew(
agents=[...],
tasks=[...],
# ...
)
```
#### For Flow-based Applications
Create and import your listener at the top of your Flow implementation file:
```python
# In your main.py or flow.py file
from crewai.flow import Flow, listen, start
from my_listeners import MyCustomListener
# Create an instance of your listener
my_listener = MyCustomListener()
class MyCustomFlow(Flow):
# Your flow implementation...
@start()
def first_step(self):
# ...
```
This ensures that your listener is loaded and active when your Crew or Flow is executed.
### Option 2: Create a Package for Your Listeners
For a more structured approach, especially if you have multiple listeners:
1. Create a package for your listeners:
```
my_project/
├── listeners/
│ ├── __init__.py
│ ├── my_custom_listener.py
│ └── another_listener.py
```
2. In `my_custom_listener.py`, define your listener class and create an instance:
```python
# my_custom_listener.py
from crewai.utilities.events.base_event_listener import BaseEventListener
# ... import events ...
class MyCustomListener(BaseEventListener):
# ... implementation ...
# Create an instance of your listener
my_custom_listener = MyCustomListener()
```
3. In `__init__.py`, import the listener instances to ensure they're loaded:
```python
# __init__.py
from .my_custom_listener import my_custom_listener
from .another_listener import another_listener
# Optionally export them if you need to access them elsewhere
__all__ = ['my_custom_listener', 'another_listener']
```
4. Import your listeners package in your Crew or Flow file:
```python
# In your crew.py or flow.py file
import my_project.listeners # This loads all your listeners
class MyCustomCrew:
# Your crew implementation...
```
This is exactly how CrewAI's built-in `agentops_listener` is registered. In the CrewAI codebase, you'll find:
```python
# src/crewai/utilities/events/third_party/__init__.py
from .agentops_listener import agentops_listener
```
This ensures the `agentops_listener` is loaded when the `crewai.utilities.events` package is imported.
## Available Event Types
CrewAI provides a wide range of events that you can listen for:
### Crew Events
- **CrewKickoffStartedEvent**: Emitted when a Crew starts execution
- **CrewKickoffCompletedEvent**: Emitted when a Crew completes execution
- **CrewKickoffFailedEvent**: Emitted when a Crew fails to complete execution
- **CrewTestStartedEvent**: Emitted when a Crew starts testing
- **CrewTestCompletedEvent**: Emitted when a Crew completes testing
- **CrewTestFailedEvent**: Emitted when a Crew fails to complete testing
- **CrewTrainStartedEvent**: Emitted when a Crew starts training
- **CrewTrainCompletedEvent**: Emitted when a Crew completes training
- **CrewTrainFailedEvent**: Emitted when a Crew fails to complete training
### Agent Events
- **AgentExecutionStartedEvent**: Emitted when an Agent starts executing a task
- **AgentExecutionCompletedEvent**: Emitted when an Agent completes executing a task
- **AgentExecutionErrorEvent**: Emitted when an Agent encounters an error during execution
### Task Events
- **TaskStartedEvent**: Emitted when a Task starts execution
- **TaskCompletedEvent**: Emitted when a Task completes execution
- **TaskFailedEvent**: Emitted when a Task fails to complete execution
- **TaskEvaluationEvent**: Emitted when a Task is evaluated
### Tool Usage Events
- **ToolUsageStartedEvent**: Emitted when a tool execution is started
- **ToolUsageFinishedEvent**: Emitted when a tool execution is completed
- **ToolUsageErrorEvent**: Emitted when a tool execution encounters an error
- **ToolValidateInputErrorEvent**: Emitted when a tool input validation encounters an error
- **ToolExecutionErrorEvent**: Emitted when a tool execution encounters an error
- **ToolSelectionErrorEvent**: Emitted when there's an error selecting a tool
### Flow Events
- **FlowCreatedEvent**: Emitted when a Flow is created
- **FlowStartedEvent**: Emitted when a Flow starts execution
- **FlowFinishedEvent**: Emitted when a Flow completes execution
- **FlowPlotEvent**: Emitted when a Flow is plotted
- **MethodExecutionStartedEvent**: Emitted when a Flow method starts execution
- **MethodExecutionFinishedEvent**: Emitted when a Flow method completes execution
- **MethodExecutionFailedEvent**: Emitted when a Flow method fails to complete execution
### LLM Events
- **LLMCallStartedEvent**: Emitted when an LLM call starts
- **LLMCallCompletedEvent**: Emitted when an LLM call completes
- **LLMCallFailedEvent**: Emitted when an LLM call fails
- **LLMStreamChunkEvent**: Emitted for each chunk received during streaming LLM responses
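For instance, a lightweight streaming printer can hook the chunk event. This is a sketch; the top-level import and the `chunk` field name are assumptions based on the patterns in this guide:
```python
from crewai.utilities.events import crewai_event_bus, LLMStreamChunkEvent

@crewai_event_bus.on(LLMStreamChunkEvent)
def print_stream(source, event):
    # Print each partial LLM completion as it arrives (field name assumed)
    print(event.chunk, end="", flush=True)
```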
## Event Handler Structure
Each event handler receives two parameters:
1. **source**: The object that emitted the event
2. **event**: The event instance, containing event-specific data
The structure of the event object depends on the event type, but all events inherit from `BaseEvent` and include:
- **timestamp**: The time when the event was emitted
- **type**: A string identifier for the event type
Additional fields vary by event type. For example, `CrewKickoffCompletedEvent` includes `crew_name` and `output` fields.
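Putting that together, a minimal handler can read the shared `BaseEvent` fields alongside the event-specific ones:
```python
from crewai.utilities.events import crewai_event_bus, CrewKickoffCompletedEvent

@crewai_event_bus.on(CrewKickoffCompletedEvent)
def log_completion(source, event):
    # Fields shared by all events via BaseEvent
    print(f"[{event.timestamp}] type={event.type}")
    # Fields specific to CrewKickoffCompletedEvent
    print(f"Crew '{event.crew_name}' finished with output: {event.output}")
```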
## Real-World Example: Integration with AgentOps
CrewAI includes an example of a third-party integration with [AgentOps](https://github.com/AgentOps-AI/agentops), a monitoring and observability platform for AI agents. Here's how it's implemented:
```python
from typing import Optional
from crewai.utilities.events import (
CrewKickoffCompletedEvent,
ToolUsageErrorEvent,
ToolUsageStartedEvent,
)
from crewai.utilities.events.base_event_listener import BaseEventListener
from crewai.utilities.events.crew_events import CrewKickoffStartedEvent
from crewai.utilities.events.task_events import TaskEvaluationEvent
try:
import agentops
AGENTOPS_INSTALLED = True
except ImportError:
AGENTOPS_INSTALLED = False
class AgentOpsListener(BaseEventListener):
tool_event: Optional["agentops.ToolEvent"] = None
session: Optional["agentops.Session"] = None
def __init__(self):
super().__init__()
def setup_listeners(self, crewai_event_bus):
if not AGENTOPS_INSTALLED:
return
@crewai_event_bus.on(CrewKickoffStartedEvent)
def on_crew_kickoff_started(source, event: CrewKickoffStartedEvent):
self.session = agentops.init()
for agent in source.agents:
if self.session:
self.session.create_agent(
name=agent.role,
agent_id=str(agent.id),
)
@crewai_event_bus.on(CrewKickoffCompletedEvent)
def on_crew_kickoff_completed(source, event: CrewKickoffCompletedEvent):
if self.session:
self.session.end_session(
end_state="Success",
end_state_reason="Finished Execution",
)
@crewai_event_bus.on(ToolUsageStartedEvent)
def on_tool_usage_started(source, event: ToolUsageStartedEvent):
self.tool_event = agentops.ToolEvent(name=event.tool_name)
if self.session:
self.session.record(self.tool_event)
@crewai_event_bus.on(ToolUsageErrorEvent)
def on_tool_usage_error(source, event: ToolUsageErrorEvent):
agentops.ErrorEvent(exception=event.error, trigger_event=self.tool_event)
```
This listener initializes an AgentOps session when a Crew starts, registers agents with AgentOps, tracks tool usage, and ends the session when the Crew completes.
The AgentOps listener is registered in CrewAI's event system through the import in `src/crewai/utilities/events/third_party/__init__.py`:
```python
from .agentops_listener import agentops_listener
```
This ensures the `agentops_listener` is loaded when the `crewai.utilities.events` package is imported.
## Advanced Usage: Scoped Handlers
For temporary event handling (useful for testing or specific operations), you can use the `scoped_handlers` context manager:
```python
from crewai.utilities.events import crewai_event_bus, CrewKickoffStartedEvent
with crewai_event_bus.scoped_handlers():
@crewai_event_bus.on(CrewKickoffStartedEvent)
def temp_handler(source, event):
print("This handler only exists within this context")
# Do something that emits events
# Outside the context, the temporary handler is removed
```
## Use Cases
Event listeners can be used for a variety of purposes:
1. **Logging and Monitoring**: Track the execution of your Crew and log important events
2. **Analytics**: Collect data about your Crew's performance and behavior
3. **Debugging**: Set up temporary listeners to debug specific issues
4. **Integration**: Connect CrewAI with external systems like monitoring platforms, databases, or notification services
5. **Custom Behavior**: Trigger custom actions based on specific events
## Best Practices
1. **Keep Handlers Light**: Event handlers should be lightweight and avoid blocking operations
2. **Error Handling**: Include proper error handling in your event handlers to prevent exceptions from affecting the main execution (see the sketch after this list)
3. **Cleanup**: If your listener allocates resources, ensure they're properly cleaned up
4. **Selective Listening**: Only listen for events you actually need to handle
5. **Testing**: Test your event listeners in isolation to ensure they behave as expected
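As a minimal sketch of the error-handling advice above (assuming `TaskCompletedEvent` is exported from `crewai.utilities.events`, and using a hypothetical `send_to_monitoring` helper):
```python
from crewai.utilities.events import crewai_event_bus, TaskCompletedEvent

def send_to_monitoring(event):
    """Hypothetical downstream integration; replace with your own."""
    ...

@crewai_event_bus.on(TaskCompletedEvent)
def safe_task_handler(source, event):
    try:
        send_to_monitoring(event)
    except Exception as exc:
        # Swallow listener errors so they never interrupt crew execution
        print(f"Listener error ignored: {exc}")
```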
By leveraging CrewAI's event system, you can extend its functionality and integrate it seamlessly with your existing infrastructure.

View File

@@ -150,12 +150,12 @@ final_output = flow.kickoff()
print("---- Final Output ----")
print(final_output)
````
```
```text Output
---- Final Output ----
Second method received: Output from first_method
````
```
</CodeGroup>
@@ -545,6 +545,119 @@ The `third_method` and `fourth_method` listen to the output of the `second_metho
When you run this Flow, the output will change based on the random boolean value generated by the `start_method`.
## Adding Agents to Flows
Agents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use an Agent within a flow to perform market research:
```python
import asyncio
from typing import Any, Dict, List
from crewai_tools import SerperDevTool
from pydantic import BaseModel, Field
from crewai.agent import Agent
from crewai.flow.flow import Flow, listen, start
# Define a structured output format
class MarketAnalysis(BaseModel):
key_trends: List[str] = Field(description="List of identified market trends")
market_size: str = Field(description="Estimated market size")
competitors: List[str] = Field(description="Major competitors in the space")
# Define flow state
class MarketResearchState(BaseModel):
product: str = ""
analysis: MarketAnalysis | None = None
# Create a flow class
class MarketResearchFlow(Flow[MarketResearchState]):
@start()
def initialize_research(self) -> Dict[str, Any]:
print(f"Starting market research for {self.state.product}")
return {"product": self.state.product}
@listen(initialize_research)
async def analyze_market(self) -> Dict[str, Any]:
# Create an Agent for market research
analyst = Agent(
role="Market Research Analyst",
goal=f"Analyze the market for {self.state.product}",
backstory="You are an experienced market analyst with expertise in "
"identifying market trends and opportunities.",
tools=[SerperDevTool()],
verbose=True,
)
# Define the research query
query = f"""
Research the market for {self.state.product}. Include:
1. Key market trends
2. Market size
3. Major competitors
Format your response according to the specified structure.
"""
# Execute the analysis with structured output format
result = await analyst.kickoff_async(query, response_format=MarketAnalysis)
if result.pydantic:
print("result", result.pydantic)
else:
print("result", result)
# Return the analysis to update the state
return {"analysis": result.pydantic}
@listen(analyze_market)
def present_results(self, analysis) -> None:
print("\nMarket Analysis Results")
print("=====================")
if isinstance(analysis, dict):
# If we got a dict with 'analysis' key, extract the actual analysis object
market_analysis = analysis.get("analysis")
else:
market_analysis = analysis
if market_analysis and isinstance(market_analysis, MarketAnalysis):
print("\nKey Market Trends:")
for trend in market_analysis.key_trends:
print(f"- {trend}")
print(f"\nMarket Size: {market_analysis.market_size}")
print("\nMajor Competitors:")
for competitor in market_analysis.competitors:
print(f"- {competitor}")
else:
print("No structured analysis data available.")
print("Raw analysis:", analysis)
# Usage example
async def run_flow():
flow = MarketResearchFlow()
result = await flow.kickoff_async(inputs={"product": "AI-powered chatbots"})
return result
# Run the flow
if __name__ == "__main__":
asyncio.run(run_flow())
```
This example demonstrates several key features of using Agents in flows:
1. **Structured Output**: Using Pydantic models to define the expected output format (`MarketAnalysis`) ensures type safety and structured data throughout the flow.
2. **State Management**: The flow state (`MarketResearchState`) maintains context between steps and stores both inputs and outputs.
3. **Tool Integration**: Agents can use tools (like `SerperDevTool` in the example above) to enhance their capabilities.
## Adding Crews to Flows
Creating a flow with multiple crews in CrewAI is straightforward.
@@ -738,3 +851,34 @@ Also, check out our YouTube video on how to use flows in CrewAI below!
referrerpolicy="strict-origin-when-cross-origin"
allowfullscreen
></iframe>
## Running Flows
There are two ways to run a flow:
### Using the Flow API
You can run a flow programmatically by creating an instance of your flow class and calling the `kickoff()` method:
```python
flow = ExampleFlow()
result = flow.kickoff()
```
### Using the CLI
Starting from version 0.103.0, you can run flows using the `crewai run` command:
```shell
crewai run
```
This command automatically detects if your project is a flow (based on the `type = "flow"` setting in your pyproject.toml) and runs it accordingly. This is the recommended way to run flows from the command line.
For backward compatibility, you can also use:
```shell
crewai flow kickoff
```
However, the `crewai run` command is now the preferred method as it works for both crews and flows.

View File

@@ -42,6 +42,16 @@ CrewAI supports various types of knowledge sources out of the box:
| `collection_name` | **str** | No | Name of the collection where the knowledge will be stored. Used to identify different sets of knowledge. Defaults to "knowledge" if not provided. |
| `storage` | **Optional[KnowledgeStorage]** | No | Custom storage configuration for managing how the knowledge is stored and retrieved. If not provided, a default storage will be created. |
<Tip>
Unlike retrieval from a vector database using a tool, agents preloaded with knowledge will not need a retrieval persona or task.
Simply add the relevant knowledge sources your agent or crew needs to function.
Knowledge sources can be added at the agent or crew level.
Crew level knowledge sources will be used by **all agents** in the crew.
Agent level knowledge sources will be used by the **specific agent** that is preloaded with the knowledge.
</Tip>
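As a minimal sketch of the two levels (the `StringKnowledgeSource` import path is assumed; the crew's tasks are elided doc-style):
```python
from crewai import Agent, Crew
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

policy = StringKnowledgeSource(
    content="Refunds are processed within 5 business days."
)

# Agent-level: only this agent retrieves from `policy`.
support_agent = Agent(
    role="Support Agent",
    goal="Answer customer questions accurately",
    backstory="An experienced support specialist.",
    knowledge_sources=[policy],
)

# Crew-level: every agent in the crew can retrieve from `policy`.
crew = Crew(
    agents=[support_agent],
    tasks=[...],
    knowledge_sources=[policy],
)
```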
## Quickstart Example
<Tip>
@@ -146,10 +156,32 @@ result = crew.kickoff(
)
```
## Knowledge Configuration
You can configure how knowledge is retrieved for a crew or agent.
```python Code
from crewai.knowledge.knowledge_config import KnowledgeConfig
knowledge_config = KnowledgeConfig(results_limit=10, score_threshold=0.5)
agent = Agent(
...
knowledge_config=knowledge_config
)
```
<Tip>
`results_limit`: the number of relevant documents to return. Defaults to 3.
`score_threshold`: the minimum score for a document to be considered relevant. Defaults to 0.35.
</Tip>
## More Examples
Here are examples of how to use different types of knowledge sources:
Note: Please ensure that you create the `./knowledge` folder. All source files (e.g., `.txt`, `.pdf`, `.xlsx`, `.json`) should be placed in this folder for centralized management.
### Text File Knowledge Source
```python
from crewai.knowledge.source.text_file_knowledge_source import TextFileKnowledgeSource
@@ -365,6 +397,53 @@ result = crew.kickoff(inputs={"question": "What city does John live in and how o
John is 30 years old and lives in San Francisco.
```
</CodeGroup>
## Query Rewriting
CrewAI implements an intelligent query rewriting mechanism to optimize knowledge retrieval. When an agent needs to search through knowledge sources, the raw task prompt is automatically transformed into a more effective search query.
### How Query Rewriting Works
1. When an agent executes a task with knowledge sources available, the `_get_knowledge_search_query` method is triggered
2. The agent's LLM is used to transform the original task prompt into an optimized search query
3. This optimized query is then used to retrieve relevant information from knowledge sources
### Benefits of Query Rewriting
<CardGroup cols={2}>
<Card title="Improved Retrieval Accuracy" icon="bullseye-arrow">
By focusing on key concepts and removing irrelevant content, query rewriting helps retrieve more relevant information.
</Card>
<Card title="Context Awareness" icon="brain">
The rewritten queries are designed to be more specific and context-aware for vector database retrieval.
</Card>
</CardGroup>
### Implementation Details
Query rewriting happens transparently using a system prompt that instructs the LLM to:
- Focus on key words of the intended task
- Make the query more specific and context-aware
- Remove irrelevant content like output format instructions
- Generate only the rewritten query without preamble or postamble
<Tip>
This mechanism is fully automatic and requires no configuration from users. The agent's LLM is used to perform the query rewriting, so using a more capable LLM can improve the quality of rewritten queries.
</Tip>
### Example
```python
# Original task prompt
task_prompt = "Answer the following questions about the user's favorite movies: What movie did John watch last week? Format your answer in JSON."
# Behind the scenes, this might be rewritten as:
rewritten_query = "What movies did John watch last week?"
```
The rewritten query is more focused on the core information need and removes irrelevant instructions about output formatting.
## Clearing Knowledge
If you need to clear the knowledge stored in CrewAI, you can use the `crewai reset-memories` command with the `--knowledge` option.
@@ -460,12 +539,12 @@ class SpaceNewsKnowledgeSource(BaseKnowledgeSource):
data = response.json()
articles = data.get('results', [])
formatted_data = self._format_articles(articles)
formatted_data = self.validate_content(articles)
return {self.api_endpoint: formatted_data}
except Exception as e:
raise ValueError(f"Failed to fetch space news: {str(e)}")
def _format_articles(self, articles: list) -> str:
def validate_content(self, articles: list) -> str:
"""Format articles into readable text."""
formatted = "Space News Articles:\n\n"
for article in articles:
@@ -621,4 +700,11 @@ recent_news = SpaceNewsKnowledgeSource(
- Configure appropriate embedding models
- Consider using local embedding providers for faster processing
</Accordion>
<Accordion title="One Time Knowledge">
- With the typical file structure provided by CrewAI, knowledge sources are embedded every time `kickoff` is triggered.
- If the knowledge sources are large, this causes inefficiency and increased latency, as the same data is embedded on each run.
- To avoid this, initialize the `knowledge` parameter directly instead of the `knowledge_sources` parameter, as sketched below.
- See this [GitHub issue](https://github.com/crewAIInc/crewAI/issues/2755) for the full context.
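A hedged sketch of that approach follows; the `Knowledge` import path and constructor arguments are assumptions, so check the issue above for the current API:
```python
from crewai import Crew
from crewai.knowledge.knowledge import Knowledge  # import path assumed
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# Build the knowledge object once so repeated kickoffs reuse the
# already-embedded collection instead of re-embedding the sources.
knowledge = Knowledge(
    collection_name="support_docs",
    sources=[StringKnowledgeSource(content="Refunds take 5 business days.")],
)

crew = Crew(agents=[...], tasks=[...], knowledge=knowledge)
```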
</Accordion>
</AccordionGroup>

View File

@@ -1,71 +0,0 @@
---
title: Using LlamaIndex Tools
description: Learn how to integrate LlamaIndex tools with CrewAI agents to enhance search-based queries and more.
icon: toolbox
---
## Using LlamaIndex Tools
<Info>
CrewAI seamlessly integrates with LlamaIndex's comprehensive toolkit for RAG (Retrieval-Augmented Generation) and agentic pipelines, enabling advanced search-based queries and more.
</Info>
Here are the available built-in tools offered by LlamaIndex.
```python Code
from crewai import Agent
from crewai_tools import LlamaIndexTool
# Example 1: Initialize from FunctionTool
from llama_index.core.tools import FunctionTool
your_python_function = lambda ...: ...
og_tool = FunctionTool.from_defaults(
your_python_function,
name="<name>",
description='<description>'
)
tool = LlamaIndexTool.from_tool(og_tool)
# Example 2: Initialize from LlamaHub Tools
from llama_index.tools.wolfram_alpha import WolframAlphaToolSpec
wolfram_spec = WolframAlphaToolSpec(app_id="<app_id>")
wolfram_tools = wolfram_spec.to_tool_list()
tools = [LlamaIndexTool.from_tool(t) for t in wolfram_tools]
# Example 3: Initialize Tool from a LlamaIndex Query Engine
query_engine = index.as_query_engine()
query_tool = LlamaIndexTool.from_query_engine(
query_engine,
name="Uber 2019 10K Query Tool",
description="Use this tool to lookup the 2019 Uber 10K Annual Report"
)
# Create and assign the tools to an agent
agent = Agent(
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[tool, *tools, query_tool]
)
# rest of the code ...
```
## Steps to Get Started
To effectively use the LlamaIndexTool, follow these steps:
<Steps>
<Step title="Package Installation">
Make sure that `crewai[tools]` package is installed in your Python environment:
<CodeGroup>
```shell Terminal
pip install 'crewai[tools]'
```
</CodeGroup>
</Step>
<Step title="Install and Use LlamaIndex">
Follow the LlamaIndex documentation [LlamaIndex Documentation](https://docs.llamaindex.ai/) to set up a RAG/agent pipeline.
</Step>
</Steps>

View File

@@ -27,23 +27,19 @@ Large Language Models (LLMs) are the core intelligence behind CrewAI agents. The
</Card>
</CardGroup>
## Setting Up Your LLM
## Setting up your LLM
There are three ways to configure LLMs in CrewAI. Choose the method that best fits your workflow:
There are different places in CrewAI code where you can specify the model to use. Once you specify the model you are using, you will need to provide the configuration (like an API key) for each of the model providers you use. See the [provider configuration examples](#provider-configuration-examples) section for your provider.
<Tabs>
<Tab title="1. Environment Variables">
The simplest way to get started. Set these variables in your environment:
The simplest way to get started. Set the model in your environment directly, through an `.env` file or in your app code. If you used `crewai create` to bootstrap your project, it will be set already.
```bash
# Required: Your API key for authentication
OPENAI_API_KEY=<your-api-key>
```bash .env
MODEL=model-id # e.g. gpt-4o, gemini-2.0-flash, claude-3-sonnet-...
# Optional: Default model selection
OPENAI_MODEL_NAME=gpt-4o-mini # Default if not set
# Optional: Organization ID (if applicable)
OPENAI_ORGANIZATION_ID=<your-org-id>
# Be sure to set your API keys here too. See the Provider
# section below.
```
<Warning>
@@ -53,13 +49,13 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
<Tab title="2. YAML Configuration">
Create a YAML file to define your agent configurations. This method is great for version control and team collaboration:
```yaml
```yaml agents.yaml {6}
researcher:
role: Research Specialist
goal: Conduct comprehensive research and analysis
backstory: A dedicated research professional with years of experience
verbose: true
llm: openai/gpt-4o-mini # your model here
llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...
# (see provider configuration examples below for more)
```
@@ -74,23 +70,23 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
<Tab title="3. Direct Code">
For maximum flexibility, configure LLMs directly in your Python code:
```python
```python {4,8}
from crewai import LLM
# Basic configuration
llm = LLM(model="gpt-4")
llm = LLM(model="model-id-here") # gpt-4o, gemini-2.0-flash, anthropic/claude...
# Advanced configuration with detailed parameters
llm = LLM(
model="gpt-4o-mini",
model="model-id-here", # gpt-4o, gemini-2.0-flash, anthropic/claude...
temperature=0.7, # Higher for more creative outputs
timeout=120, # Seconds to wait for response
max_tokens=4000, # Maximum length of response
top_p=0.9, # Nucleus sampling parameter
frequency_penalty=0.1, # Reduce repetition
presence_penalty=0.1, # Encourage topic diversity
timeout=120, # Seconds to wait for response
max_tokens=4000, # Maximum length of response
top_p=0.9, # Nucleus sampling parameter
frequency_penalty=0.1,  # Reduce repetition
presence_penalty=0.1, # Encourage topic diversity
response_format={"type": "json"}, # For structured outputs
seed=42 # For reproducible results
seed=42 # For reproducible results
)
```
@@ -110,8 +106,7 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
## Provider Configuration Examples
CrewAI supports a multitude of LLM providers, each offering unique features, authentication methods, and model capabilities.
CrewAI supports a multitude of LLM providers, each offering unique features, authentication methods, and model capabilities.
In this section, you'll find detailed examples that help you select, configure, and optimize the LLM that best fits your project's needs.
<AccordionGroup>
@@ -121,7 +116,7 @@ In this section, you'll find detailed examples that help you select, configure,
```toml Code
# Required
OPENAI_API_KEY=sk-...
# Optional
OPENAI_API_BASE=<custom-base-url>
OPENAI_ORGANIZATION=<your-org-id>
@@ -158,7 +153,11 @@ In this section, you'll find detailed examples that help you select, configure,
<Accordion title="Anthropic">
```toml Code
# Required
ANTHROPIC_API_KEY=sk-ant-...
# Optional
ANTHROPIC_API_BASE=<custom-base-url>
```
Example usage in your CrewAI project:
@@ -170,19 +169,55 @@ In this section, you'll find detailed examples that help you select, configure,
```
</Accordion>
<Accordion title="Google">
Set the following environment variables in your `.env` file:
<Accordion title="Google (Gemini API)">
Set your API key in your `.env` file. If you need a key, or need to find an
existing key, check [AI Studio](https://aistudio.google.com/apikey).
```toml Code
# Option 1: Gemini accessed with an API key.
```toml .env
# https://ai.google.dev/gemini-api/docs/api-key
GEMINI_API_KEY=<your-api-key>
# Option 2: Vertex AI IAM credentials for Gemini, Anthropic, and Model Garden.
# https://cloud.google.com/vertex-ai/generative-ai/docs/overview
```
Get credentials from your Google Cloud Console and save it to a JSON file with the following code:
Example usage in your CrewAI project:
```python Code
from crewai import LLM
llm = LLM(
model="gemini/gemini-2.0-flash",
temperature=0.7,
)
```
### Gemini models
Google offers a range of powerful models optimized for different use cases.
| Model | Context Window | Best For |
|--------------------------------|----------------|-------------------------------------------------------------------|
| gemini-2.5-flash-preview-04-17 | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro-preview-05-06 | 1M tokens | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking, and realtime streaming |
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
The full list of models is available in the [Gemini model docs](https://ai.google.dev/gemini-api/docs/models).
### Gemma
The Gemini API also allows you to use your API key to access [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.
| Model | Context Window |
|----------------|----------------|
| gemma-3-1b-it | 32k tokens |
| gemma-3-4b-it | 32k tokens |
| gemma-3-12b-it | 32k tokens |
| gemma-3-27b-it | 128k tokens |
</Accordion>
<Accordion title="Google (Vertex AI)">
Get credentials from your Google Cloud Console and save them to a JSON file, then load them with the following code:
```python Code
import json
@@ -206,14 +241,18 @@ In this section, you'll find detailed examples that help you select, configure,
vertex_credentials=vertex_credentials_json
)
```
Google offers a range of powerful models optimized for different use cases:
| Model | Context Window | Best For |
|--------------------------------|----------------|-------------------------------------------------------------------|
| gemini-2.5-flash-preview-04-17 | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro-preview-05-06 | 1M tokens | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking, and realtime streaming |
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
</Accordion>
<Accordion title="Azure">
@@ -222,7 +261,7 @@ In this section, you'll find detailed examples that help you select, configure,
AZURE_API_KEY=<your-api-key>
AZURE_API_BASE=<your-resource-url>
AZURE_API_VERSION=<api-version>
# Optional
AZURE_AD_TOKEN=<your-azure-ad-token>
AZURE_API_TYPE=<your-azure-api-type>
@@ -250,8 +289,42 @@ In this section, you'll find detailed examples that help you select, configure,
model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
)
```
Before using Amazon Bedrock, make sure you have `boto3` installed in your environment.
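You can install it with pip:
```bash
pip install boto3
```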
[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API, enabling secure and responsible AI application development.
| Model | Context Window | Best For |
|-------------------------|----------------------|-------------------------------------------------------------------|
| Amazon Nova Pro | Up to 300k tokens | High-performance, model balancing accuracy, speed, and cost-effectiveness across diverse tasks. |
| Amazon Nova Micro | Up to 128k tokens | High-performance, cost-effective text-only model optimized for lowest latency responses. |
| Amazon Nova Lite | Up to 300k tokens | High-performance, affordable multimodal processing for images, video, and text with real-time capabilities. |
| Claude 3.7 Sonnet | Up to 128k tokens | High-performance, best for complex reasoning, coding & AI agents |
| Claude 3.5 Sonnet v2 | Up to 200k tokens | State-of-the-art model specialized in software engineering, agentic capabilities, and computer interaction at optimized cost. |
| Claude 3.5 Sonnet | Up to 200k tokens | High-performance model delivering superior intelligence and reasoning across diverse tasks with optimal speed-cost balance. |
| Claude 3.5 Haiku | Up to 200k tokens | Fast, compact multimodal model optimized for quick responses and seamless human-like interactions |
| Claude 3 Sonnet | Up to 200k tokens | Multimodal model balancing intelligence and speed for high-volume deployments. |
| Claude 3 Haiku | Up to 200k tokens | Compact, high-speed multimodal model optimized for quick responses and natural conversational interactions |
| Claude 3 Opus           | Up to 200k tokens    | Most advanced multimodal model excelling at complex tasks with human-like reasoning and superior contextual understanding. |
| Claude 2.1 | Up to 200k tokens | Enhanced version with expanded context window, improved reliability, and reduced hallucinations for long-form and RAG applications |
| Claude | Up to 100k tokens | Versatile model excelling in sophisticated dialogue, creative content, and precise instruction following. |
| Claude Instant | Up to 100k tokens | Fast, cost-effective model for everyday tasks like dialogue, analysis, summarization, and document Q&A |
| Llama 3.1 405B Instruct | Up to 128k tokens | Advanced LLM for synthetic data generation, distillation, and inference for chatbots, coding, and domain-specific tasks. |
| Llama 3.1 70B Instruct | Up to 128k tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| Llama 3.1 8B Instruct | Up to 128k tokens | Advanced state-of-the-art model with language understanding, superior reasoning, and text generation. |
| Llama 3 70B Instruct | Up to 8k tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| Llama 3 8B Instruct | Up to 8k tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| Titan Text G1 - Lite | Up to 4k tokens | Lightweight, cost-effective model optimized for English tasks and fine-tuning with focus on summarization and content generation. |
| Titan Text G1 - Express | Up to 8k tokens | Versatile model for general language tasks, chat, and RAG applications with support for English and 100+ languages. |
| Cohere Command | Up to 4k tokens | Model specialized in following user commands and delivering practical enterprise solutions. |
| Jurassic-2 Mid | Up to 8,191 tokens | Cost-effective model balancing quality and affordability for diverse language tasks like Q&A, summarization, and content generation. |
| Jurassic-2 Ultra | Up to 8,191 tokens | Model for advanced text generation and comprehension, excelling in complex tasks like analysis and content creation. |
| Jamba-Instruct | Up to 256k tokens | Model with extended context window optimized for cost-effective text generation, summarization, and Q&A. |
| Mistral 7B Instruct | Up to 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| Mistral 8x7B Instruct   | Up to 32k tokens     | A Mixture-of-Experts (MoE) LLM that follows instructions, completes requests, and generates creative text. |
</Accordion>
<Accordion title="Amazon SageMaker">
```toml Code
AWS_ACCESS_KEY_ID=<your-access-key>
@@ -345,7 +418,7 @@ In this section, you'll find detailed examples that help you select, configure,
| microsoft/phi-3-medium-4k-instruct | 4,096 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-medium-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3.5-mini-instruct | 128K tokens | Lightweight multilingual LLM powering AI applications in latency bound, memory/compute constrained environments |
| microsoft/phi-3.5-moe-instruct | 128K tokens | Advanced LLM based on Mixture of Experts architecture to deliver compute efficient content generation |
| microsoft/kosmos-2 | 1,024 tokens | Groundbreaking multimodal model designed to understand and reason about visual elements in images. |
| microsoft/phi-3-vision-128k-instruct | 128k tokens | Cutting-edge open multimodal model excelling in high-quality reasoning from images. |
| microsoft/phi-3.5-vision-instruct | 128k tokens | Cutting-edge open multimodal model excelling in high-quality reasoning from images. |
@@ -368,6 +441,46 @@ In this section, you'll find detailed examples that help you select, configure,
| baichuan-inc/baichuan2-13b-chat | 4,096 tokens | Support Chinese and English chat, coding, math, instruction following, solving quizzes |
</Accordion>
<Accordion title="Local NVIDIA NIM Deployed using WSL2">
NVIDIA NIM enables you to run powerful LLMs locally on your Windows machine using WSL2 (Windows Subsystem for Linux).
This approach allows you to leverage your NVIDIA GPU for private, secure, and cost-effective AI inference without relying on cloud services.
Perfect for development, testing, or production scenarios where data privacy or offline capabilities are required.
Here is a step-by-step guide to setting up a local NVIDIA NIM model:
1. Follow installation instructions from [NVIDIA Website](https://docs.nvidia.com/nim/wsl2/latest/getting-started.html)
2. Install the local model. For Llama 3.1-8b follow [instructions](https://build.nvidia.com/meta/llama-3_1-8b-instruct/deploy)
3. Configure your CrewAI local model:
```python Code
from crewai import Agent
from crewai.llm import LLM
from crewai.project import CrewBase, agent

local_nvidia_nim_llm = LLM(
    model="openai/meta/llama-3.1-8b-instruct",  # NIM exposes an OpenAI-API-compatible endpoint
    base_url="http://localhost:8000/v1",
    api_key="<your_api_key|any text if you have not configured it>",  # api_key is required, but you can use any text
)
# Then you can use it in your crew:
@CrewBase
class MyCrew():
    # ...

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],  # type: ignore[index]
            llm=local_nvidia_nim_llm
        )

    # ...
```
</Accordion>
<Accordion title="Groq">
Set the following environment variables in your `.env` file:
@@ -396,7 +509,7 @@ In this section, you'll find detailed examples that help you select, configure,
WATSONX_URL=<your-url>
WATSONX_APIKEY=<your-apikey>
WATSONX_PROJECT_ID=<your-project-id>
# Optional
WATSONX_TOKEN=<your-token>
WATSONX_DEPLOYMENT_SPACE_ID=<your-space-id>
@@ -413,7 +526,7 @@ In this section, you'll find detailed examples that help you select, configure,
<Accordion title="Ollama (Local LLMs)">
1. Install Ollama: [ollama.ai](https://ollama.ai/)
2. Run a model: `ollama run llama3`
3. Configure:
```python Code
@@ -457,14 +570,13 @@ In this section, you'll find detailed examples that help you select, configure,
<Accordion title="Hugging Face">
Set the following environment variables in your `.env` file:
```toml Code
HF_TOKEN=<your-api-key>
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
model="huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct",
base_url="your_api_endpoint"
model="huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct"
)
```
</Accordion>
@@ -522,7 +634,7 @@ In this section, you'll find detailed examples that help you select, configure,
```toml Code
OPENROUTER_API_KEY=<your-api-key>
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
@@ -540,6 +652,46 @@ In this section, you'll find detailed examples that help you select, configure,
</Accordion>
</AccordionGroup>
## Streaming Responses
CrewAI supports streaming responses from LLMs, allowing your application to receive and process outputs in real-time as they're generated.
<Tabs>
<Tab title="Basic Setup">
Enable streaming by setting the `stream` parameter to `True` when initializing your LLM:
```python
from crewai import LLM
# Create an LLM with streaming enabled
llm = LLM(
model="openai/gpt-4o",
stream=True # Enable streaming
)
```
When streaming is enabled, responses are delivered in chunks as they're generated, creating a more responsive user experience.
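A streaming LLM plugs into an agent like any other configuration. A minimal sketch (the role, goal, and backstory below are illustrative):
```python
from crewai import Agent, LLM

llm = LLM(
    model="openai/gpt-4o",
    stream=True  # chunks arrive incrementally instead of one final response
)

# Wire the streaming LLM into an agent as usual
analyst = Agent(
    role="Research Analyst",
    goal="Summarize findings clearly",
    backstory="An experienced analyst",
    llm=llm,
)
```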
</Tab>
<Tab title="Event Handling">
CrewAI emits events for each chunk received during streaming:
```python
from crewai import LLM
from crewai.utilities.events import EventHandler, LLMStreamChunkEvent
class MyEventHandler(EventHandler):
    def on_llm_stream_chunk(self, event: LLMStreamChunkEvent):
        # Process each chunk as it arrives
        print(f"Received chunk: {event.chunk}")

# Register the event handler
from crewai.utilities.events import crewai_event_bus
crewai_event_bus.register_handler(MyEventHandler())
```
</Tab>
</Tabs>
## Structured LLM Calls
CrewAI supports structured responses from LLM calls by allowing you to define a `response_format` using a Pydantic model. This enables the framework to automatically parse and validate the output, making it easier to integrate the response into your application without manual post-processing.
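As a minimal sketch (the Pydantic model and its fields are illustrative):
```python Code
from pydantic import BaseModel

from crewai import LLM

class Report(BaseModel):
    title: str
    summary: str

llm = LLM(
    model="openai/gpt-4o",
    response_format=Report  # output is parsed and validated against this model
)
```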
@@ -605,7 +757,7 @@ Learn how to get the most out of your LLM configuration:
- Small tasks (up to 4K tokens): Standard models
- Medium tasks (between 4K-32K): Enhanced models
- Large tasks (over 32K): Large context models
```python
# Configure model with appropriate settings
llm = LLM(
@@ -642,11 +794,11 @@ Learn how to get the most out of your LLM configuration:
<Warning>
Most authentication issues can be resolved by checking API key format and environment variable names.
</Warning>
```bash
# OpenAI
OPENAI_API_KEY=sk-...
# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
```
@@ -655,11 +807,11 @@ Learn how to get the most out of your LLM configuration:
<Check>
Always include the provider prefix in model names
</Check>
```python
# Correct
llm = LLM(model="openai/gpt-4")
# Incorrect
llm = LLM(model="gpt-4")
```
@@ -668,47 +820,10 @@ Learn how to get the most out of your LLM configuration:
<Tip>
Use larger context models for extensive tasks
</Tip>
```python
# Large context model
llm = LLM(model="openai/gpt-4o") # 128K tokens
```
</Tab>
</Tabs>
## Getting Help
If you need assistance, these resources are available:
<CardGroup cols={3}>
<Card
title="LiteLLM Documentation"
href="https://docs.litellm.ai/docs/"
icon="book"
>
Comprehensive documentation for LiteLLM integration and troubleshooting common issues.
</Card>
<Card
title="GitHub Issues"
href="https://github.com/joaomdmoura/crewAI/issues"
icon="bug"
>
Report bugs, request features, or browse existing issues for solutions.
</Card>
<Card
title="Community Forum"
href="https://community.crewai.com"
icon="comment-question"
>
Connect with other CrewAI users, share experiences, and get help from the community.
</Card>
</CardGroup>
<Note>
Best Practices for API Key Security:
- Use environment variables or secure vaults
- Never commit keys to version control
- Rotate keys regularly
- Use separate keys for development and production
- Monitor key usage for unusual patterns
</Note>


@@ -18,7 +18,8 @@ reason, and learn from past interactions.
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
| **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
| **External Memory** | Enables integration with external memory systems and providers (like Mem0), allowing for specialized memory storage and retrieval across different applications. Supports custom storage implementations for flexible memory management. |
| **User Memory** | ⚠️ **DEPRECATED**: This component is deprecated and will be removed in a future version. Please use [External Memory](#using-external-memory) instead. |
## How Memory Systems Empower Agents
@@ -60,7 +61,8 @@ my_crew = Crew(
```python Code
from crewai import Crew, Process
from crewai.memory import LongTermMemory, ShortTermMemory, EntityMemory
from crewai.memory.storage.rag_storage import RAGStorage
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
from typing import List, Optional
# Assemble your crew with memory capabilities
@@ -119,7 +121,7 @@ Example using environment variables:
import os
from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
# Configure storage path using environment variable
storage_path = os.getenv("CREWAI_STORAGE_DIR", "./storage")
@@ -143,12 +145,13 @@ from crewai.memory import LongTermMemory
# Simple memory configuration
crew = Crew(memory=True) # Uses default storage locations
```
Note that External Memory won't be defined when `memory=True` is set, as we can't infer which external memory would be suitable for your case.
### Custom Storage Configuration
```python
from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
# Configure custom storage paths
crew = Crew(
@@ -163,7 +166,10 @@ crew = Crew(
[Mem0](https://mem0.ai/) is a self-improving memory layer for LLM applications, enabling personalized AI experiences.
### Using Mem0 API platform
To include user-specific memory, get your API key [here](https://app.mem0.ai/dashboard/api-keys) and refer to the [docs](https://docs.mem0.ai/platform/quickstart#4-1-create-memories) for adding user preferences. In this case, `user_memory` is set to `MemoryClient` from Mem0.
```python Code
@@ -174,18 +180,7 @@ from mem0 import MemoryClient
# Set environment variables for Mem0
os.environ["MEM0_API_KEY"] = "m0-xx"
# Step 1: Create a Crew with User Memory
crew = Crew(
agents=[...],
@@ -196,11 +191,12 @@ crew = Crew(
memory_config={
"provider": "mem0",
"config": {"user_id": "john"},
"user_memory" : {} #Set user_memory explicitly to a dictionary, we are working on this issue.
},
)
```
#### Additional Memory Configuration Options
If you want to access a specific organization and project, you can set the `org_id` and `project_id` parameters in the memory configuration.
```python Code
@@ -214,10 +210,172 @@ crew = Crew(
memory_config={
"provider": "mem0",
"config": {"user_id": "john", "org_id": "my_org_id", "project_id": "my_project_id"},
"user_memory" : {} #Set user_memory explicitly to a dictionary, we are working on this issue.
},
)
```
### Using Local Mem0 memory
If you want to use local Mem0 memory with a custom configuration, you can set the `local_mem0_config` parameter in the config itself.
If both the `MEM0_API_KEY` environment variable and `local_mem0_config` are provided, the API platform takes priority over the local configuration.
See the [Mem0 local configuration docs](https://docs.mem0.ai/open-source/python-quickstart#run-mem0-locally) for more detail.
In this case, `user_memory` is set to `Memory` from Mem0.
```python Code
from crewai import Crew
#local mem0 config
config = {
"vector_store": {
"provider": "qdrant",
"config": {
"host": "localhost",
"port": 6333
}
},
"llm": {
"provider": "openai",
"config": {
"api_key": "your-api-key",
"model": "gpt-4"
}
},
"embedder": {
"provider": "openai",
"config": {
"api_key": "your-api-key",
"model": "text-embedding-3-small"
}
},
"graph_store": {
"provider": "neo4j",
"config": {
"url": "neo4j+s://your-instance",
"username": "neo4j",
"password": "password"
}
},
"history_db_path": "/path/to/history.db",
"version": "v1.1",
"custom_fact_extraction_prompt": "Optional custom prompt for fact extraction for memory",
"custom_update_memory_prompt": "Optional custom prompt for update memory"
}
crew = Crew(
agents=[...],
tasks=[...],
verbose=True,
memory=True,
memory_config={
"provider": "mem0",
"config": {"user_id": "john", 'local_mem0_config': config},
"user_memory" : {} #Set user_memory explicitly to a dictionary, we are working on this issue.
},
)
```
### Using External Memory
External Memory is a powerful feature that allows you to integrate external memory systems with your CrewAI applications. This is particularly useful when you want to use specialized memory providers or maintain memory across different applications.
Since it's an external memory, we're not able to add a default value to it, unlike with Long Term and Short Term memory.
#### Basic Usage with Mem0
The most common way to use External Memory is with Mem0 as the provider:
```python
import os
from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory
os.environ["MEM0_API_KEY"] = "YOUR-API-KEY"
agent = Agent(
role="You are a helpful assistant",
goal="Plan a vacation for the user",
backstory="You are a helpful assistant that can plan a vacation for the user",
verbose=True,
)
task = Task(
description="Give things related to the user's vacation",
expected_output="A plan for the vacation",
agent=agent,
)
crew = Crew(
agents=[agent],
tasks=[task],
verbose=True,
process=Process.sequential,
external_memory=ExternalMemory(
embedder_config={"provider": "mem0", "config": {"user_id": "U-123"}} # you can provide an entire Mem0 configuration
),
)
crew.kickoff(
inputs={"question": "which destination is better for a beach vacation?"}
)
```
#### Using External Memory with Custom Storage
You can also create custom storage implementations for External Memory. Here's an example of how to create a custom storage:
```python
from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory
from crewai.memory.storage.interface import Storage
class CustomStorage(Storage):
    def __init__(self):
        self.memories = []

    def save(self, value, metadata=None, agent=None):
        self.memories.append({"value": value, "metadata": metadata, "agent": agent})

    def search(self, query, limit=10, score_threshold=0.5):
        # Implement your search logic here
        return []

    def reset(self):
        self.memories = []
# Create external memory with custom storage
external_memory = ExternalMemory(
storage=CustomStorage(),
embedder_config={"provider": "mem0", "config": {"user_id": "U-123"}},
)
agent = Agent(
role="You are a helpful assistant",
goal="Plan a vacation for the user",
backstory="You are a helpful assistant that can plan a vacation for the user",
verbose=True,
)
task = Task(
description="Give things related to the user's vacation",
expected_output="A plan for the vacation",
agent=agent,
)
crew = Crew(
agents=[agent],
tasks=[task],
verbose=True,
process=Process.sequential,
external_memory=external_memory,
)
crew.kickoff(
inputs={"question": "which destination is better for a beach vacation?"}
)
```
## Additional Embedding Providers
### Using OpenAI embeddings (already default)
@@ -506,7 +664,7 @@ my_crew = Crew(
)
```
### Resetting Memory via CLI
```shell
crewai reset-memories [OPTIONS]
@@ -520,8 +678,46 @@ crewai reset-memories [OPTIONS]
| `-s`, `--short` | Reset SHORT TERM memory. | Flag (boolean) | False |
| `-e`, `--entities` | Reset ENTITIES memory. | Flag (boolean) | False |
| `-k`, `--kickoff-outputs` | Reset LATEST KICKOFF TASK OUTPUTS. | Flag (boolean) | False |
| `-kn`, `--knowledge` | Reset KNOWLEDGE storage. | Flag (boolean) | False |
| `-a`, `--all` | Reset ALL memories. | Flag (boolean) | False |
Note: To use the CLI command, your crew must be defined in a file called `crew.py` in the same directory.
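For example, to clear only the short-term and entity stores in one call:
```shell
crewai reset-memories --short --entities
```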
### Resetting Memory via crew object
```python
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "custom",
"config": {
"embedder": CustomEmbedder()
}
}
)
my_crew.reset_memories(command_type='all')  # Resets all the memories
```
#### Resetting Memory Options
| Command Type | Description |
| :----------------- | :------------------------------- |
| `long` | Reset LONG TERM memory. |
| `short` | Reset SHORT TERM memory. |
| `entities` | Reset ENTITIES memory. |
| `kickoff_outputs` | Reset LATEST KICKOFF TASK OUTPUTS. |
| `knowledge` | Reset KNOWLEDGE memory. |
| `all` | Reset ALL memories. |
## Benefits of Using CrewAI's Memory System


@@ -12,6 +12,18 @@ Tasks provide all necessary details for execution, such as a description, the ag
Tasks within CrewAI can be collaborative, requiring multiple agents to work together. This is managed through the task properties and orchestrated by the Crew's process, enhancing teamwork and efficiency.
<Note type="info" title="Enterprise Enhancement: Visual Task Builder">
CrewAI Enterprise includes a Visual Task Builder in Crew Studio that simplifies complex task creation and chaining. Design your task flows visually and test them in real-time without writing code.
![Task Builder Screenshot](../images/enterprise/crew-studio-quickstart.png)
The Visual Task Builder enables:
- Drag-and-drop task creation
- Visual task dependencies and flow
- Real-time testing and validation
- Easy sharing and collaboration
</Note>
### Task Execution Flow
Tasks can be executed in two ways:
@@ -101,7 +113,7 @@ class LatestAiDevelopmentCrew():
    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],  # type: ignore[index]
            verbose=True,
            tools=[SerperDevTool()]
        )
@@ -109,20 +121,20 @@ class LatestAiDevelopmentCrew():
    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],  # type: ignore[index]
            verbose=True
        )

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task']  # type: ignore[index]
        )

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config['reporting_task']  # type: ignore[index]
        )

    @crew
@@ -276,26 +288,20 @@ To add a guardrail to a task, provide a validation function through the `guardra
```python Code
from typing import Tuple, Union, Dict, Any
from crewai import TaskOutput
def validate_blog_content(result: TaskOutput) -> Tuple[bool, Any]:
    """Validate blog content meets requirements."""
    try:
        # Check word count
        word_count = len(result.split())
        if word_count > 200:
            return (False, "Blog content exceeds 200 words")

        # Additional validation logic here
        return (True, result.strip())
    except Exception as e:
        return (False, "Unexpected error during validation")
blog_task = Task(
description="Write a blog post about AI",
@@ -313,29 +319,28 @@ blog_task = Task(
- Type hints are recommended but optional
2. **Return Values**:
   - On success: return a tuple of `(bool, Any)`, for example `(True, validated_result)`
   - On failure: return a tuple of `(bool, str)`, for example `(False, "Error message explaining the failure")`
### LLMGuardrail
The `LLMGuardrail` class offers a robust mechanism for validating task outputs.
### Error Handling Best Practices
1. **Structured Error Responses**:
```python Code
from crewai import TaskOutput

def validate_with_context(result: TaskOutput) -> Tuple[bool, Any]:
    try:
        # Main validation logic
        validated_data = perform_validation(result)
        return (True, validated_data)
    except ValidationError as e:
        return (False, f"VALIDATION_ERROR: {str(e)}")
    except Exception as e:
        return (False, str(e))
```
2. **Error Categories**:
@@ -346,28 +351,25 @@ def validate_with_context(result: str) -> Tuple[bool, Union[Dict[str, Any], str]
3. **Validation Chain**:
```python Code
from typing import Any, Dict, List, Tuple, Union
from crewai import TaskOutput
def complex_validation(result: TaskOutput) -> Tuple[bool, Any]:
    """Chain multiple validation steps."""
    # Step 1: Basic validation
    if not result:
        return (False, "Empty result")

    # Step 2: Content validation
    try:
        validated = validate_content(result)
        if not validated:
            return (False, "Invalid content")

        # Step 3: Format validation
        formatted = format_output(validated)
        return (True, formatted)
    except Exception as e:
        return (False, str(e))
```
### Handling Guardrail Results
@@ -382,19 +384,16 @@ When a guardrail returns `(False, error)`:
Example with retry handling:
```python Code
import json
from typing import Any, Tuple

from crewai import Task, TaskOutput

def validate_json_output(result: TaskOutput) -> Tuple[bool, Any]:
    """Validate and parse JSON output."""
    try:
        # Try to parse as JSON
        data = json.loads(result)
        return (True, data)
    except json.JSONDecodeError as e:
        return (False, "Invalid JSON format")
task = Task(
description="Generate a JSON report",
@@ -414,7 +413,7 @@ It's also important to note that the output of the final task of a crew becomes
### Using `output_pydantic`
The `output_pydantic` property allows you to define a Pydantic model that the task output should conform to. This ensures that the output is not only structured but also validated according to the Pydantic model.
Here's an example demonstrating how to use `output_pydantic`:
```python Code
import json
@@ -495,7 +494,7 @@ In this example:
### Using `output_json`
The `output_json` property allows you to define the expected output in JSON format. This ensures that the task's output is a valid JSON structure that can be easily parsed and used in your application.
Here's an example demonstrating how to use `output_json`:
```python Code
import json
@@ -755,6 +754,8 @@ Task guardrails provide a powerful way to validate, transform, or filter task ou
### Basic Usage
#### Define your own logic to validate
```python Code
from typing import Tuple, Union
from crewai import Task
@@ -774,6 +775,57 @@ task = Task(
)
```
#### Leverage a no-code approach for validation
```python Code
from crewai import Task
task = Task(
description="Generate JSON data",
expected_output="Valid JSON object",
guardrail="Ensure the response is a valid JSON object"
)
```
#### Using YAML
```yaml
research_task:
...
guardrail: make sure each bullet contains a minimum of 100 words
...
```
```python Code
@CrewBase
class InternalCrew:
    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    ...

    @task
    def research_task(self):
        return Task(config=self.tasks_config["research_task"])  # type: ignore[index]

    ...
```
#### Use custom models for code generation
```python Code
from crewai import Task
from crewai.llm import LLM
from crewai.tasks.llm_guardrail import LLMGuardrail  # import path may vary by version
task = Task(
description="Generate JSON data",
expected_output="Valid JSON object",
guardrail=LLMGuardrail(
description="Ensure the response is a valid JSON object",
llm=LLM(model="gpt-4o-mini"),
)
)
```
### How Guardrails Work
1. **Optional Attribute**: Guardrails are an optional attribute at the task level, allowing you to add validation only where needed.
@@ -876,6 +928,19 @@ save_output_task = Task(
#...
```
Check out the video below to see how to use structured outputs in CrewAI:
<iframe
width="560"
height="315"
src="https://www.youtube.com/embed/dNpKQk5uxHw"
title="YouTube video player"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerpolicy="strict-origin-when-cross-origin"
allowfullscreen
></iframe>
## Conclusion
Tasks are the driving force behind the actions of agents in CrewAI.


@@ -15,6 +15,18 @@ A tool in CrewAI is a skill or function that agents can utilize to perform vario
This includes tools from the [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools),
enabling everything from simple searches to complex interactions and effective teamwork among agents.
<Note type="info" title="Enterprise Enhancement: Tools Repository">
CrewAI Enterprise provides a comprehensive Tools Repository with pre-built integrations for common business systems and APIs. Deploy agents with enterprise tools in minutes instead of days.
![Tools Repository Screenshot](../images/enterprise/tools-repository.png)
The Enterprise Tools Repository includes:
- Pre-built connectors for popular enterprise systems
- Custom tool creation interface
- Version control and sharing capabilities
- Security and compliance features
</Note>
## Key Characteristics of Tools
- **Utility**: Crafted for tasks such as web searching, data analysis, content generation, and agent collaboration.
@@ -79,7 +91,7 @@ research = Task(
)
write = Task(
    description="Write an engaging blog post about the AI industry, based on the research analyst's summary. Draw inspiration from the latest blog posts in the directory.",
expected_output='A 4-paragraph blog post formatted in markdown with engaging, informative, and accessible content, avoiding complex jargon.',
agent=writer,
output_file='blog-posts/new_post.md' # The final blog post will be saved here
@@ -106,6 +118,7 @@ Here is a list of the available tools and their descriptions:
| Tool | Description |
| :------------------------------- | :--------------------------------------------------------------------------------------------- |
| **ApifyActorsTool** | A tool that integrates Apify Actors with your workflows for web scraping and automation tasks. |
| **BrowserbaseLoadTool** | A tool for interacting with and extracting data from web browsers. |
| **CodeDocsSearchTool** | A RAG tool optimized for searching through code documentation and related technical documents. |
| **CodeInterpreterTool** | A tool for interpreting python code. |
@@ -140,7 +153,7 @@ Here is a list of the available tools and their descriptions:
## Creating your own Tools
<Tip>
    Developers can craft `custom tools` tailored for their agents' needs or
utilize pre-built options.
</Tip>
@@ -177,48 +190,6 @@ def my_tool(question: str) -> str:
return "Result from your custom tool"
```
### Structured Tools
The `CrewStructuredTool` class wraps functions as tools, providing flexibility and validation while reducing boilerplate. It supports custom schemas and dynamic logic for seamless integration of complex functionalities.
#### Example:
Using `StructuredTool.from_function`, you can wrap a function that interacts with an external API or system, providing a structured interface. This enables robust validation and consistent execution, making it easier to integrate complex functionalities into your applications as demonstrated in the following example:
```python
from crewai.tools.structured_tool import CrewStructuredTool
from pydantic import BaseModel
# Define the schema for the tool's input using Pydantic
class APICallInput(BaseModel):
    endpoint: str
    parameters: dict

# Wrapper function to execute the API call
def tool_wrapper(*args, **kwargs):
    # Here, you would typically call the API using the parameters
    # For demonstration, we'll return a placeholder string
    return f"Call the API at {kwargs['endpoint']} with parameters {kwargs['parameters']}"

# Create and return the structured tool
def create_structured_tool():
    return CrewStructuredTool.from_function(
        name='Wrapper API',
        description="A tool to wrap API calls with structured input.",
        args_schema=APICallInput,
        func=tool_wrapper,
    )

# Example usage
structured_tool = create_structured_tool()

# Execute the tool with structured input
result = structured_tool._run(**{
    "endpoint": "https://example.com/api",
    "parameters": {"key1": "value1", "key2": "value2"}
})
print(result)  # Output: Call the API at https://example.com/api with parameters {'key1': 'value1', 'key2': 'value2'}
```
### Custom Caching Mechanism
<Tip>

BIN docs/crews.png (binary file not shown; 29 KiB)

docs/docs.json Normal file (303 lines)

@@ -0,0 +1,303 @@
{
"$schema": "https://mintlify.com/docs.json",
"theme": "mint",
"name": "CrewAI",
"colors": {
"primary": "#EB6658",
"light": "#F3A78B",
"dark": "#C94C3C"
},
"favicon": "favicon.svg",
"contextual": {
"options": ["copy", "view", "chatgpt", "claude"]
},
"navigation": {
"tabs": [
{
"tab": "Documentation",
"groups": [
{
"group": "Get Started",
"pages": [
"introduction",
"installation",
"quickstart"
]
},
{
"group": "Guides",
"pages": [
{
"group": "Strategy",
"pages": [
"guides/concepts/evaluating-use-cases"
]
},
{
"group": "Agents",
"pages": [
"guides/agents/crafting-effective-agents"
]
},
{
"group": "Crews",
"pages": [
"guides/crews/first-crew"
]
},
{
"group": "Flows",
"pages": [
"guides/flows/first-flow",
"guides/flows/mastering-flow-state"
]
},
{
"group": "Advanced",
"pages": [
"guides/advanced/customizing-prompts",
"guides/advanced/fingerprinting"
]
}
]
},
{
"group": "Core Concepts",
"pages": [
"concepts/agents",
"concepts/tasks",
"concepts/crews",
"concepts/flows",
"concepts/knowledge",
"concepts/llms",
"concepts/processes",
"concepts/collaboration",
"concepts/training",
"concepts/memory",
"concepts/planning",
"concepts/testing",
"concepts/cli",
"concepts/tools",
"concepts/event-listener"
]
},
{
"group": "Tools",
"pages": [
"tools/aimindtool",
"tools/apifyactorstool",
"tools/bedrockinvokeagenttool",
"tools/bedrockkbretriever",
"tools/bravesearchtool",
"tools/browserbaseloadtool",
"tools/codedocssearchtool",
"tools/codeinterpretertool",
"tools/composiotool",
"tools/csvsearchtool",
"tools/dalletool",
"tools/directorysearchtool",
"tools/directoryreadtool",
"tools/docxsearchtool",
"tools/exasearchtool",
"tools/filereadtool",
"tools/filewritetool",
"tools/firecrawlcrawlwebsitetool",
"tools/firecrawlscrapewebsitetool",
"tools/firecrawlsearchtool",
"tools/githubsearchtool",
"tools/hyperbrowserloadtool",
"tools/linkupsearchtool",
"tools/llamaindextool",
"tools/langchaintool",
"tools/serperdevtool",
"tools/s3readertool",
"tools/s3writertool",
"tools/scrapegraphscrapetool",
"tools/scrapeelementfromwebsitetool",
"tools/jsonsearchtool",
"tools/mdxsearchtool",
"tools/mysqltool",
"tools/multiontool",
"tools/nl2sqltool",
"tools/patronustools",
"tools/pdfsearchtool",
"tools/pgsearchtool",
"tools/qdrantvectorsearchtool",
"tools/ragtool",
"tools/scrapewebsitetool",
"tools/scrapflyscrapetool",
"tools/seleniumscrapingtool",
"tools/snowflakesearchtool",
"tools/spidertool",
"tools/txtsearchtool",
"tools/visiontool",
"tools/weaviatevectorsearchtool",
"tools/websitesearchtool",
"tools/xmlsearchtool",
"tools/youtubechannelsearchtool",
"tools/youtubevideosearchtool"
]
},
{
"group": "Agent Monitoring & Observability",
"pages": [
"how-to/agentops-observability",
"how-to/arize-phoenix-observability",
"how-to/langfuse-observability",
"how-to/langtrace-observability",
"how-to/mlflow-observability",
"how-to/openlit-observability",
"how-to/opik-observability",
"how-to/portkey-observability",
"how-to/weave-integration"
]
},
{
"group": "Learn",
"pages": [
"how-to/conditional-tasks",
"how-to/coding-agents",
"how-to/create-custom-tools",
"how-to/custom-llm",
"how-to/custom-manager-agent",
"how-to/customizing-agents",
"how-to/force-tool-output-as-result",
"how-to/hierarchical-process",
"how-to/human-input-on-execution",
"how-to/kickoff-async",
"how-to/kickoff-for-each",
"how-to/llm-connections",
"how-to/multimodal-agents",
"how-to/replay-tasks-from-latest-crew-kickoff",
"how-to/sequential-process"
]
},
{
"group": "Telemetry",
"pages": [
"telemetry"
]
}
]
},
{
"tab": "Enterprise",
"groups": [
{
"group": "Getting Started",
"pages": [
"enterprise/introduction"
]
},
{
"group": "How-To Guides",
"pages": [
"enterprise/guides/build-crew",
"enterprise/guides/deploy-crew",
"enterprise/guides/kickoff-crew",
"enterprise/guides/update-crew",
"enterprise/guides/use-crew-api",
"enterprise/guides/enable-crew-studio"
]
},
{
"group": "Features",
"pages": [
"enterprise/features/tool-repository",
"enterprise/features/webhook-streaming",
"enterprise/features/traces"
]
},
{
"group": "Resources",
"pages": [
"enterprise/resources/frequently-asked-questions"
]
}
]
},
{
"tab": "Examples",
"groups": [
{
"group": "Examples",
"pages": [
"examples/example"
]
}
]
},
{
"tab": "Releases",
"groups": [
{
"group": "Releases",
"pages": [
"changelog"
]
}
]
}
],
"global": {
"anchors": [
{
"anchor": "Website",
"href": "https://crewai.com",
"icon": "globe"
},
{
"anchor": "Forum",
"href": "https://community.crewai.com",
"icon": "discourse"
},
{
"anchor": "Get Help",
"href": "mailto:support@crewai.com",
"icon": "headset"
}
]
}
},
"logo": {
"light": "crew_only_logo.png",
"dark": "crew_only_logo.png"
},
"appearance": {
"default": "dark",
"strict": false
},
"navbar": {
"links": [
{
"label": "Start Free Trial",
"href": "https://app.crewai.com"
}
],
"primary": {
"type": "github",
"href": "https://github.com/crewAIInc/crewAI"
}
},
"search": {
"prompt": "Search CrewAI docs"
},
"seo": {
"indexing": "all"
},
"errors": {
"404": {
"redirect": true
}
},
"footer": {
"socials": {
"website": "https://crewai.com",
"x": "https://x.com/crewAIInc",
"github": "https://github.com/crewAIInc/crewAI",
"linkedin": "https://www.linkedin.com/company/crewai-inc",
"youtube": "https://youtube.com/@crewAIInc",
"reddit": "https://www.reddit.com/r/crewAIInc/"
}
}
}


@@ -0,0 +1,106 @@
---
title: Tool Repository
description: "Using the Tool Repository to manage your tools"
icon: "toolbox"
---
## Overview
The Tool Repository is a package manager for CrewAI tools. It allows users to publish, install, and manage tools that integrate with CrewAI crews and flows.
Tools can be:
- **Private**: accessible only within your organization (default)
- **Public**: accessible to all CrewAI users if published with the `--public` flag
The repository is not a version control system. Use Git to track code changes and enable collaboration.
## Prerequisites
Before using the Tool Repository, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account
- [CrewAI CLI](https://docs.crewai.com/concepts/cli#cli) installed
- [Git](https://git-scm.com) installed and configured
- Access permissions to publish or install tools in your CrewAI Enterprise organization
## Installing Tools
To install a tool:
```bash
crewai tool install <tool-name>
```
This installs the tool and adds it to `pyproject.toml`.
## Creating and Publishing Tools
To create a new tool project:
```bash
crewai tool create <tool-name>
```
This generates a scaffolded tool project locally.
After making changes, initialize a Git repository and commit the code:
```bash
git init
git add .
git commit -m "Initial version"
```
To publish the tool:
```bash
crewai tool publish
```
By default, tools are published as private. To make a tool public:
```bash
crewai tool publish --public
```
For more details on how to build tools, see [Creating your own tools](https://docs.crewai.com/concepts/tools#creating-your-own-tools).
## Updating Tools
To update a published tool:
1. Modify the tool locally
2. Update the version in `pyproject.toml` (e.g., from `0.1.0` to `0.1.1`)
3. Commit the changes and publish
```bash
git commit -m "Update version to 0.1.1"
crewai tool publish
```
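For reference, the version bump in step 2 is a one-line change in `pyproject.toml` (the project name here is a placeholder):
```toml
[project]
name = "your-tool"
version = "0.1.1"
```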
## Deleting Tools
To delete a tool:
1. Go to [CrewAI Enterprise](https://app.crewai.com)
2. Navigate to **Tools**
3. Select the tool
4. Click **Delete**
<Warning>
Deletion is permanent. Deleted tools cannot be restored or re-installed.
</Warning>
## Security Checks
Every published version undergoes automated security checks and is only available to install after it passes.
You can check the security check status of a tool at:
`CrewAI Enterprise > Tools > Your Tool > Versions`
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with API integration or troubleshooting.
</Card>


@@ -0,0 +1,146 @@
---
title: Traces
description: "Using Traces to monitor your Crews"
icon: "timeline"
---
## Overview
Traces provide comprehensive visibility into your crew executions, helping you monitor performance, debug issues, and optimize your AI agent workflows.
## What are Traces?
Traces in CrewAI Enterprise are detailed execution records that capture every aspect of your crew's operation, from initial inputs to final outputs. They record:
- Agent thoughts and reasoning
- Task execution details
- Tool usage and outputs
- Token consumption metrics
- Execution times
- Cost estimates
<Frame>
![Traces Overview](/images/enterprise/traces-overview.png)
</Frame>
## Accessing Traces
<Steps>
<Step title="Navigate to the Traces Tab">
Once in your CrewAI Enterprise dashboard, click on the **Traces** tab to view all execution records.
</Step>
<Step title="Select an Execution">
You'll see a list of all crew executions, sorted by date. Click on any execution to view its detailed trace.
</Step>
</Steps>
## Understanding the Trace Interface
The trace interface is divided into several sections, each providing different insights into your crew's execution:
### 1. Execution Summary
The top section displays high-level metrics about the execution:
- **Total Tokens**: Number of tokens consumed across all tasks
- **Prompt Tokens**: Tokens used in prompts to the LLM
- **Completion Tokens**: Tokens generated in LLM responses
- **Requests**: Number of API calls made
- **Execution Time**: Total duration of the crew run
- **Estimated Cost**: Approximate cost based on token usage
<Frame>
![Execution Summary](/images/enterprise/trace-summary.png)
</Frame>
### 2. Tasks & Agents
This section shows all tasks and agents that were part of the crew execution:
- Task name and agent assignment
- Agents and LLMs used for each task
- Status (completed/failed)
- Individual execution time of the task
<Frame>
![Task List](/images/enterprise/trace-tasks.png)
</Frame>
### 3. Final Output
Displays the final result produced by the crew after all tasks are completed.
<Frame>
![Final Output](/images/enterprise/final-output.png)
</Frame>
### 4. Execution Timeline
A visual representation of when each task started and ended, helping you identify bottlenecks or parallel execution patterns.
<Frame>
![Execution Timeline](/images/enterprise/trace-timeline.png)
</Frame>
### 5. Detailed Task View
When you click on a specific task in the timeline or task list, you'll see:
<Frame>
![Detailed Task View](/images/enterprise/trace-detailed-task.png)
</Frame>
- **Task Key**: Unique identifier for the task
- **Task ID**: Technical identifier in the system
- **Status**: Current state (completed/running/failed)
- **Agent**: Which agent performed the task
- **LLM**: Language model used for this task
- **Start/End Time**: When the task began and completed
- **Execution Time**: Duration of this specific task
- **Task Description**: What the agent was instructed to do
- **Expected Output**: What output format was requested
- **Input**: Any input provided to this task from previous tasks
- **Output**: The actual result produced by the agent
## Using Traces for Debugging
Traces are invaluable for troubleshooting issues with your crews:
<Steps>
<Step title="Identify Failure Points">
When a crew execution doesn't produce the expected results, examine the trace to find where things went wrong. Look for:
- Failed tasks
- Unexpected agent decisions
- Tool usage errors
- Misinterpreted instructions
<Frame>
![Failure Points](/images/enterprise/failure.png)
</Frame>
</Step>
<Step title="Optimize Performance">
Use execution metrics to identify performance bottlenecks:
- Tasks that took longer than expected
- Excessive token usage
- Redundant tool operations
- Unnecessary API calls
</Step>
<Step title="Improve Cost Efficiency">
Analyze token usage and cost estimates to optimize your crew's efficiency:
- Consider using smaller models for simpler tasks
- Refine prompts to be more concise
- Cache frequently accessed information
- Structure tasks to minimize redundant operations
</Step>
</Steps>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with trace analysis or any other CrewAI Enterprise features.
</Card>


@@ -0,0 +1,82 @@
---
title: Webhook Streaming
description: "Using Webhook Streaming to stream events to your webhook"
icon: "webhook"
---
## Overview
Enterprise Event Streaming lets you receive real-time webhook updates about your crews and flows deployed to
CrewAI Enterprise, such as model calls, tool usage, and flow steps.
## Usage
When using the Kickoff API, include a `webhooks` object in your request, for example:
```json
{
"inputs": {"foo": "bar"},
"webhooks": {
"events": ["crew_kickoff_started", "llm_call_started"],
"url": "https://your.endpoint/webhook",
"realtime": false,
"authentication": {
"strategy": "bearer",
"token": "my-secret-token"
}
}
}
```
If `realtime` is set to `true`, each event is delivered individually and immediately, at the cost of crew/flow performance.
## Webhook Format
Each webhook sends a list of events:
```json
{
"events": [
{
"id": "event-id",
"execution_id": "crew-run-id",
"timestamp": "2025-02-16T10:58:44.965Z",
"type": "llm_call_started",
"data": {
"model": "gpt-4",
"messages": [
{"role": "system", "content": "You are an assistant."},
{"role": "user", "content": "Summarize this article."}
]
}
}
]
}
```
The `data` object structure varies by event type. Refer to the [event list](https://github.com/crewAIInc/crewAI/tree/main/src/crewai/utilities/events) on GitHub.
As requests are sent over HTTP, the order of events can't be guaranteed. If you need ordering, use the `timestamp` field.
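As a minimal receiver sketch, assuming a Flask endpoint and the `bearer` strategy from the kickoff example above (field names follow the format shown):
```python
from flask import Flask, abort, request

app = Flask(__name__)
EXPECTED_TOKEN = "my-secret-token"  # must match the token sent in the kickoff request

@app.route("/webhook", methods=["POST"])
def webhook():
    # Reject requests that don't carry the expected bearer token
    if request.headers.get("Authorization") != f"Bearer {EXPECTED_TOKEN}":
        abort(401)

    events = request.get_json().get("events", [])
    # HTTP delivery order isn't guaranteed, so order by the timestamp field
    for event in sorted(events, key=lambda e: e["timestamp"]):
        print(event["type"], event["id"])
    return "", 204
```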
## Supported Events
CrewAI supports both system events and custom events in Enterprise Event Streaming. These events are sent to your configured webhook endpoint during crew and flow execution.
- `crew_kickoff_started`
- `crew_step_started`
- `crew_step_completed`
- `crew_execution_completed`
- `llm_call_started`
- `llm_call_completed`
- `tool_usage_started`
- `tool_usage_completed`
- `crew_test_failed`
- *...and others*
Event names match the internal event bus. See [GitHub source](https://github.com/crewAIInc/crewAI/tree/main/src/crewai/utilities/events) for the full list.
You can emit your own custom events, and they will be delivered through the webhook stream alongside system events.
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with webhook integration or troubleshooting.
</Card>


@@ -0,0 +1,43 @@
---
title: "Build Crew"
description: "A Crew is a group of agents that work together to complete a task."
icon: "people-arrows"
---
## Overview
[CrewAI Enterprise](https://app.crewai.com) streamlines the process of **creating**, **deploying**, and **managing** your AI agents in production environments.
## Getting Started
<iframe
width="100%"
height="400"
src="https://www.youtube.com/embed/-kSOTtYzgEw"
title="Building Crews with CrewAI CLI"
frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
></iframe>
### Installation and Setup
<Card title="Follow Standard Installation" icon="wrench" href="/installation">
Follow our standard installation guide to set up CrewAI CLI and create your first project.
</Card>
### Building Your Crew
<Card title="Quickstart Tutorial" icon="rocket" href="/quickstart">
Follow our quickstart guide to create your first agent crew using YAML configuration.
</Card>
## Support and Resources
For Enterprise-specific support or questions, contact our dedicated support team at [support@crewai.com](mailto:support@crewai.com).
<Card title="Schedule a Demo" icon="calendar" href="mailto:support@crewai.com">
Book time with our team to learn more about Enterprise features and how they can benefit your organization.
</Card>


@@ -0,0 +1,237 @@
---
title: "Deploy Crew"
description: "Deploy your local CrewAI project to the Enterprise platform"
icon: "cloud-arrow-up"
---
## Overview
This guide will walk you through the process of deploying your locally developed CrewAI project to the CrewAI Enterprise platform,
transforming it into a production-ready API endpoint.
## Option 1: CLI Deployment
<iframe
width="100%"
height="400"
src="https://www.youtube.com/embed/3EqSV-CYDZA"
title="Deploying a Crew to CrewAI Enterprise"
frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
></iframe>
### Prerequisites
Before starting the deployment process, make sure you have:
- A CrewAI project built locally ([follow our quickstart guide](/quickstart) if you haven't created one yet)
- Your code pushed to a GitHub repository
- The latest version of the CrewAI CLI installed (`uv tool install crewai`)
<Note>
For a quick reference project, you can clone our example repository at [github.com/tonykipkemboi/crewai-latest-ai-development](https://github.com/tonykipkemboi/crewai-latest-ai-development).
</Note>
<Steps>
<Step title="Authenticate with the Enterprise Platform">
First, you need to authenticate your CLI with the CrewAI Enterprise platform:
```bash
# If you already have a CrewAI Enterprise account
crewai login
# If you're creating a new account
crewai signup
```
When you run either command, the CLI will:
1. Display a URL and a unique device code
2. Open your browser to the authentication page
3. Prompt you to confirm the device
4. Complete the authentication process
Upon successful authentication, you'll see a confirmation message in your terminal!
</Step>
<Step title="Create a Deployment">
From your project directory, run:
```bash
crewai deploy create
```
This command will:
1. Detect your GitHub repository information
2. Identify environment variables in your local `.env` file
3. Securely transfer these variables to the Enterprise platform
4. Create a new deployment with a unique identifier
On successful creation, you'll see a message like:
```shell
Deployment created successfully!
Name: your_project_name
Deployment ID: 01234567-89ab-cdef-0123-456789abcdef
Current Status: Deploy Enqueued
```
</Step>
<Step title="Monitor Deployment Progress">
Track the deployment status with:
```bash
crewai deploy status
```
For detailed logs of the build process:
```bash
crewai deploy logs
```
<Tip>
The first deployment typically takes 10-15 minutes as it builds the container images. Subsequent deployments are much faster.
</Tip>
</Step>
</Steps>
## Additional CLI Commands
The CrewAI CLI offers several commands to manage your deployments:
```bash
# List all your deployments
crewai deploy list
# Get the status of your deployment
crewai deploy status
# View the logs of your deployment
crewai deploy logs
# Push updates after code changes
crewai deploy push
# Remove a deployment
crewai deploy remove <deployment_id>
```
## Option 2: Deploy Directly via Web Interface
You can also deploy your crews directly through the CrewAI Enterprise web interface by connecting your GitHub account. This approach doesn't require using the CLI on your local machine.
<Steps>
<Step title="Pushing to GitHub">
You need to push your crew to a GitHub repository. If you haven't created a crew yet, you can [follow this tutorial](/quickstart).
</Step>
<Step title="Connecting GitHub to CrewAI Enterprise">
1. Log in to [CrewAI Enterprise](https://app.crewai.com)
2. Click on the button "Connect GitHub"
<Frame>
![Connect GitHub Button](/images/enterprise/connect-github.png)
</Frame>
</Step>
<Step title="Select the Repository">
After connecting your GitHub account, you'll be able to select which repository to deploy:
<Frame>
![Select Repository](/images/enterprise/select-repo.png)
</Frame>
</Step>
<Step title="Set Environment Variables">
Before deploying, you'll need to set up your environment variables to connect to your LLM provider or other services:
1. You can add variables individually or in bulk
2. Enter your environment variables in `KEY=VALUE` format (one per line)
<Frame>
![Set Environment Variables](/images/enterprise/set-env-variables.png)
</Frame>
</Step>
<Step title="Deploy Your Crew">
1. Click the "Deploy" button to start the deployment process
2. You can monitor the progress through the progress bar
3. The first deployment typically takes around 10-15 minutes; subsequent deployments will be faster
<Frame>
![Deploy Progress](/images/enterprise/deploy-progress.png)
</Frame>
Once deployment is complete, you'll see:
- Your crew's unique URL
- A Bearer token to protect your crew API
- A "Delete" button if you need to remove the deployment
</Step>
</Steps>
### Interact with Your Deployed Crew
Once deployment is complete, you can access your crew through:
1. **REST API**: The platform generates a unique HTTPS endpoint with these key routes:
- `/inputs`: Lists the required input parameters
- `/kickoff`: Initiates an execution with provided inputs
- `/status/{kickoff_id}`: Checks the execution status
2. **Web Interface**: Visit [app.crewai.com](https://app.crewai.com) to access:
- **Status tab**: View deployment information, API endpoint details, and authentication token
- **Run tab**: Visual representation of your crew's structure
- **Executions tab**: History of all executions
- **Metrics tab**: Performance analytics
- **Traces tab**: Detailed execution insights
### Trigger an Execution
From the Enterprise dashboard, you can:
1. Click on your crew's name to open its details
2. Select "Trigger Crew" from the management interface
3. Enter the required inputs in the modal that appears
4. Monitor progress as the execution moves through the pipeline
### Monitoring and Analytics
The Enterprise platform provides comprehensive observability features:
- **Execution Management**: Track active and completed runs
- **Traces**: Detailed breakdowns of each execution
- **Metrics**: Token usage, execution times, and costs
- **Timeline View**: Visual representation of task sequences
### Advanced Features
The Enterprise platform also offers:
- **Environment Variables Management**: Securely store and manage API keys
- **LLM Connections**: Configure integrations with various LLM providers
- **Custom Tools Repository**: Create, share, and install tools
- **Crew Studio**: Build crews through a chat interface without writing code
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with deployment issues or questions about the Enterprise platform.
</Card>


@@ -0,0 +1,166 @@
---
title: "Enable Crew Studio"
description: "Enabling Crew Studio on CrewAI Enterprise"
icon: "comments"
---
<Tip>
Crew Studio is a powerful **no-code/low-code** tool that allows you to quickly scaffold or build Crews through a conversational interface.
</Tip>
## What is Crew Studio?
Crew Studio is an innovative way to create AI agent crews without writing code.
<Frame>
![Crew Studio Interface](/images/enterprise/crew-studio-interface.png)
</Frame>
With Crew Studio, you can:
- Chat with the Crew Assistant to describe your problem
- Automatically generate agents and tasks
- Select appropriate tools
- Configure necessary inputs
- Generate downloadable code for customization
- Deploy directly to the CrewAI Enterprise platform
## Configuration Steps
Before you can start using Crew Studio, you need to configure your LLM connections:
<Steps>
<Step title="Set Up LLM Connection">
Go to the **LLM Connections** tab in your CrewAI Enterprise dashboard and create a new LLM connection.
<Note>
Feel free to use any LLM provider you want that is supported by CrewAI.
</Note>
Configure your LLM connection:
- Enter a `Connection Name` (e.g., `OpenAI`)
- Select your model provider: `openai` or `azure`
- Select models you'd like to use in your Studio-generated Crews
- We recommend at least `gpt-4o`, `o1-mini`, and `gpt-4o-mini`
- Add your API key as an environment variable:
- For OpenAI: Add `OPENAI_API_KEY` with your API key
- For Azure OpenAI: Refer to [this article](https://blog.crewai.com/configuring-azure-openai-with-crewai-a-comprehensive-guide/) for configuration details
- Click `Add Connection` to save your configuration
<Frame>
![LLM Connection Configuration](/images/enterprise/llm-connection-config.png)
</Frame>
</Step>
<Step title="Verify Connection Added">
Once you complete the setup, you'll see your new connection added to the list of available connections.
<Frame>
![Connection Added](/images/enterprise/connection-added.png)
</Frame>
</Step>
<Step title="Configure LLM Defaults">
In the main menu, go to **Settings → Defaults** and configure the LLM Defaults settings:
- Select default models for agents and other components
- Set default configurations for Crew Studio
Click `Save Settings` to apply your changes.
<Frame>
![LLM Defaults Configuration](/images/enterprise/llm-defaults.png)
</Frame>
</Step>
</Steps>
## Using Crew Studio
Now that you've configured your LLM connection and default settings, you're ready to start using Crew Studio!
<Steps>
<Step title="Access Studio">
Navigate to the **Studio** section in your CrewAI Enterprise dashboard.
</Step>
<Step title="Start a Conversation">
Start a conversation with the Crew Assistant by describing the problem you want to solve:
```md
I need a crew that can research the latest AI developments and create a summary report.
```
The Crew Assistant will ask clarifying questions to better understand your requirements.
</Step>
<Step title="Review Generated Crew">
Review the generated crew configuration, including:
- Agents and their roles
- Tasks to be performed
- Required inputs
- Tools to be used
This is your opportunity to refine the configuration before proceeding.
</Step>
<Step title="Deploy or Download">
Once you're satisfied with the configuration, you can:
- Download the generated code for local customization
- Deploy the crew directly to the CrewAI Enterprise platform
- Modify the configuration and regenerate the crew
</Step>
<Step title="Test Your Crew">
After deployment, test your crew with sample inputs to ensure it performs as expected.
</Step>
</Steps>
<Tip>
For best results, provide clear, detailed descriptions of what you want your crew to accomplish. Include specific inputs and expected outputs in your description.
</Tip>
## Example Workflow
Here's a typical workflow for creating a crew with Crew Studio:
<Steps>
<Step title="Describe Your Problem">
Start by describing your problem:
```md
I need a crew that can analyze financial news and provide investment recommendations
```
</Step>
<Step title="Answer Questions">
Respond to clarifying questions from the Crew Assistant to refine your requirements.
</Step>
<Step title="Review the Plan">
Review the generated crew plan, which might include:
- A Research Agent to gather financial news
- An Analysis Agent to interpret the data
- A Recommendations Agent to provide investment advice
</Step>
<Step title="Approve or Modify">
Approve the plan or request changes if necessary.
</Step>
<Step title="Download or Deploy">
Download the code for customization or deploy directly to the platform.
</Step>
<Step title="Test and Refine">
Test your crew with sample inputs and refine as needed.
</Step>
</Steps>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with Crew Studio or any other CrewAI Enterprise features.
</Card>


@@ -0,0 +1,186 @@
---
title: "Kickoff Crew"
description: "Kickoff a Crew on CrewAI Enterprise"
icon: "flag-checkered"
---
## Overview
Once you've deployed your crew to the CrewAI Enterprise platform, you can kickoff executions through the web interface or the API. This guide covers both approaches.
## Method 1: Using the Web Interface
### Step 1: Navigate to Your Deployed Crew
1. Log in to [CrewAI Enterprise](https://app.crewai.com)
2. Click on the crew name from your projects list
3. You'll be taken to the crew's detail page
<Frame>
![Crew Dashboard](/images/enterprise/crew-dashboard.png)
</Frame>
### Step 2: Initiate Execution
From your crew's detail page, you have two options to kickoff an execution:
#### Option A: Quick Kickoff
1. Click the `Kickoff` link in the Test Endpoints section
2. Enter the required input parameters for your crew in the JSON editor
3. Click the `Send Request` button
<Frame>
![Kickoff Endpoint](/images/enterprise/kickoff-endpoint.png)
</Frame>
#### Option B: Using the Visual Interface
1. Click the `Run` tab in the crew detail page
2. Enter the required inputs in the form fields
3. Click the `Run Crew` button
<Frame>
![Run Crew](/images/enterprise/run-crew.png)
</Frame>
### Step 3: Monitor Execution Progress
After initiating the execution:
1. You'll receive a response containing a `kickoff_id` - **copy this ID**
2. This ID is essential for tracking your execution
<Frame>
![Copy Task ID](/images/enterprise/copy-task-id.png)
</Frame>
### Step 4: Check Execution Status
To monitor the progress of your execution:
1. Click the "Status" endpoint in the Test Endpoints section
2. Paste the `kickoff_id` into the designated field
3. Click the "Get Status" button
<Frame>
![Get Status](/images/enterprise/get-status.png)
</Frame>
The status response will show:
- Current execution state (`running`, `completed`, etc.)
- Details about which tasks are in progress
- Any outputs produced so far
### Step 5: View Final Results
Once execution is complete:
1. The status will change to `completed`
2. You can view the full execution results and outputs
3. For a more detailed view, check the `Executions` tab in the crew detail page
## Method 2: Using the API
You can also kickoff crews programmatically using the CrewAI Enterprise REST API.
### Authentication
All API requests require a bearer token for authentication:
```bash
curl -H "Authorization: Bearer YOUR_CREW_TOKEN" https://your-crew-url.crewai.com
```
Your bearer token is available on the Status tab of your crew's detail page.
### Checking Crew Health
Before executing operations, you can verify that your crew is running properly:
```bash
curl -H "Authorization: Bearer YOUR_CREW_TOKEN" https://your-crew-url.crewai.com
```
A successful response returns a plain-text body indicating the crew is operational:
```
Healthy
```
### Step 1: Retrieve Required Inputs
First, determine what inputs your crew requires:
```bash
curl -X GET \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
https://your-crew-url.crewai.com/inputs
```
The response will be a JSON object containing an array of required input parameters, for example:
```json
{"inputs":["topic","current_year"]}
```
This example shows that this particular crew requires two inputs: `topic` and `current_year`.
### Step 2: Kickoff Execution
Initiate execution by providing the required inputs:
```bash
curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
-d '{"inputs": {"topic": "AI Agent Frameworks", "current_year": "2025"}}' \
https://your-crew-url.crewai.com/kickoff
```
The response will include a `kickoff_id` that you'll need for tracking:
```json
{"kickoff_id":"abcd1234-5678-90ef-ghij-klmnopqrstuv"}
```
### Step 3: Check Execution Status
Monitor the execution progress using the kickoff_id:
```bash
curl -X GET \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
https://your-crew-url.crewai.com/status/abcd1234-5678-90ef-ghij-klmnopqrstuv
```
## Handling Executions
### Long-Running Executions
For executions that may take a long time:
1. Consider implementing a polling mechanism to check status periodically (see the sketch after this list)
2. Use webhooks (if available) for notification when execution completes
3. Implement error handling for potential timeouts
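A minimal polling sketch along these lines, assuming the `/status/{kickoff_id}` route and bearer authentication shown above (the URL, token, and timing values are placeholders):
```python
import time
import requests

def wait_for_completion(crew_url: str, token: str, kickoff_id: str,
                        interval: float = 5.0, max_wait: float = 600.0) -> dict:
    """Poll the /status endpoint, backing off exponentially, until the run finishes."""
    headers = {"Authorization": f"Bearer {token}"}
    waited = 0.0
    while waited < max_wait:
        response = requests.get(f"{crew_url}/status/{kickoff_id}", headers=headers)
        response.raise_for_status()
        data = response.json()
        if data["status"] in ("completed", "error"):
            return data
        time.sleep(interval)
        waited += interval
        interval = min(interval * 2, 60.0)  # double the wait, capped at 60s
    raise TimeoutError(f"Execution {kickoff_id} did not finish within {max_wait}s")
```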
### Execution Context
The execution context includes:
- Inputs provided at kickoff
- Environment variables configured during deployment
- Any state maintained between tasks
### Debugging Failed Executions
If an execution fails:
1. Check the "Executions" tab for detailed logs
2. Review the "Traces" tab for step-by-step execution details
3. Look for LLM responses and tool usage in the trace details
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with execution issues or questions about the Enterprise platform.
</Card>


@@ -0,0 +1,89 @@
---
title: "Update Crew"
description: "Updating a Crew on CrewAI Enterprise"
icon: "pencil"
---
<Note>
After deploying your crew to CrewAI Enterprise, you may need to make updates to the code, security settings, or configuration.
This guide explains how to perform these common update operations.
</Note>
## Why Update Your Crew?
Unless you enabled the `Auto-update` option when deploying your crew, CrewAI won't automatically pick up new commits pushed to GitHub; you'll need to trigger updates manually.
There are several reasons you might want to update your crew deployment:
- You want to update the code with the latest commit you pushed to GitHub
- You want to reset the bearer token for security reasons
- You want to update environment variables
## 1. Updating Your Crew Code to the Latest Commit
When you've pushed new commits to your GitHub repository and want to update your deployment:
1. Navigate to your crew in the CrewAI Enterprise platform
2. Click on the `Re-deploy` button on your crew details page
<Frame>
![Re-deploy Button](/images/enterprise/redeploy-button.png)
</Frame>
This will trigger an update that you can track using the progress bar. The system will pull the latest code from your repository and rebuild your deployment.
## 2. Resetting Bearer Token
If you need to generate a new bearer token (for example, if you suspect the current token might have been compromised):
1. Navigate to your crew in the CrewAI Enterprise platform
2. Find the `Bearer Token` section
3. Click the `Reset` button next to your current token
<Frame>
![Reset Token](/images/enterprise/reset-token.png)
</Frame>
<Warning>
Resetting your bearer token will invalidate the previous token immediately. Make sure to update any applications or scripts that are using the old token.
</Warning>
## 3. Updating Environment Variables
To update the environment variables for your crew:
1. First access the deployment page by clicking on your crew's name
<Frame>
![Environment Variables Button](/images/enterprise/env-vars-button.png)
</Frame>
2. Locate the `Environment Variables` section (you will need to click the `Settings` icon to access it)
3. Edit the existing variables or add new ones in the fields provided
4. Click the `Update` button next to each variable you modify
<Frame>
![Update Environment Variables](/images/enterprise/update-env-vars.png)
</Frame>
5. Finally, click the `Update Deployment` button at the bottom of the page to apply the changes
<Note>
Updating environment variables will trigger a new deployment, but this will only update the environment configuration and not the code itself.
</Note>
## After Updating
After performing any update:
1. The system will rebuild and redeploy your crew
2. You can monitor the deployment progress in real-time
3. Once complete, test your crew to ensure the changes are working as expected
<Tip>
If you encounter any issues after updating, you can view deployment logs in the platform or contact support for assistance.
</Tip>
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with updating your crew or troubleshooting deployment issues.
</Card>


@@ -0,0 +1,319 @@
---
title: "Trigger Deployed Crew API"
description: "Using your deployed crew's API on CrewAI Enterprise"
icon: "arrow-up-right-from-square"
---
Once you have deployed your crew to CrewAI Enterprise, it automatically becomes available as a REST API. This guide explains how to interact with your crew programmatically.
## API Basics
Your deployed crew exposes several endpoints that allow you to:
1. Discover required inputs
2. Start crew executions
3. Monitor execution status
4. Receive results
### Authentication
All API requests require a bearer token for authentication, which is generated when you deploy your crew:
```bash
curl -H "Authorization: Bearer YOUR_CREW_TOKEN" https://your-crew-url.crewai.com/...
```
<Tip>
You can find your bearer token in the Status tab of your crew's detail page in the CrewAI Enterprise dashboard.
</Tip>
<Frame>
![Bearer Token](/images/enterprise/bearer-token.png)
</Frame>
## Available Endpoints
Your crew API provides three main endpoints:
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/inputs` | GET | Lists all required inputs for crew execution |
| `/kickoff` | POST | Starts a crew execution with provided inputs |
| `/status/{kickoff_id}` | GET | Retrieves the status and results of an execution |
## GET /inputs
The inputs endpoint allows you to discover what parameters your crew requires:
```bash
curl -X GET \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
https://your-crew-url.crewai.com/inputs
```
### Response
```json
{
  "inputs": ["budget", "interests", "duration", "age"]
}
```
This response indicates that your crew expects four input parameters: `budget`, `interests`, `duration`, and `age`.
## POST /kickoff
The kickoff endpoint starts a new crew execution:
```bash
curl -X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
-d '{
"inputs": {
"budget": "1000 USD",
"interests": "games, tech, ai, relaxing hikes, amazing food",
"duration": "7 days",
"age": "35"
}
}' \
https://your-crew-url.crewai.com/kickoff
```
### Request Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `inputs` | Object | Yes | Key-value pairs of all required inputs |
| `meta` | Object | No | Additional metadata to pass to the crew |
| `taskWebhookUrl` | String | No | Callback URL executed after each task |
| `stepWebhookUrl` | String | No | Callback URL executed after each agent thought |
| `crewWebhookUrl` | String | No | Callback URL executed when the crew finishes |
### Example with Webhooks
```json
{
  "inputs": {
    "budget": "1000 USD",
    "interests": "games, tech, ai, relaxing hikes, amazing food",
    "duration": "7 days",
    "age": "35"
  },
  "meta": {
    "requestId": "user-request-12345",
    "source": "mobile-app"
  },
  "taskWebhookUrl": "https://your-server.com/webhooks/task",
  "stepWebhookUrl": "https://your-server.com/webhooks/step",
  "crewWebhookUrl": "https://your-server.com/webhooks/crew"
}
```
### Response
```json
{
  "kickoff_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv"
}
```
The `kickoff_id` is used to track and retrieve the execution results.
## GET /status/{kickoff_id}
The status endpoint allows you to check the progress and results of a crew execution:
```bash
curl -X GET \
-H "Authorization: Bearer YOUR_CREW_TOKEN" \
https://your-crew-url.crewai.com/status/abcd1234-5678-90ef-ghij-klmnopqrstuv
```
### Response Structure
The response structure will vary depending on the execution state:
#### In Progress
```json
{
  "status": "running",
  "current_task": "research_task",
  "progress": {
    "completed_tasks": 0,
    "total_tasks": 2
  }
}
```
#### Completed
```json
{
  "status": "completed",
  "result": {
    "output": "Comprehensive travel itinerary...",
    "tasks": [
      {
        "task_id": "research_task",
        "output": "Research findings...",
        "agent": "Researcher",
        "execution_time": 45.2
      },
      {
        "task_id": "planning_task",
        "output": "7-day itinerary plan...",
        "agent": "Trip Planner",
        "execution_time": 62.8
      }
    ]
  },
  "execution_time": 108.5
}
```
## Webhook Integration
When you provide webhook URLs in your kickoff request, the system will make POST requests to those URLs at specific points in the execution (a sample receiver sketch follows the payload examples below):
### taskWebhookUrl
Called when each task completes:
```json
{
  "kickoff_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
  "task_id": "research_task",
  "status": "completed",
  "output": "Research findings...",
  "agent": "Researcher",
  "execution_time": 45.2
}
```
### stepWebhookUrl
Called after each agent thought or action:
```json
{
  "kickoff_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
  "task_id": "research_task",
  "agent": "Researcher",
  "step_type": "thought",
  "content": "I should first search for popular destinations that match these interests..."
}
```
### crewWebhookUrl
Called when the entire crew execution completes:
```json
{
  "kickoff_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
  "status": "completed",
  "result": {
    "output": "Comprehensive travel itinerary...",
    "tasks": [
      {
        "task_id": "research_task",
        "output": "Research findings...",
        "agent": "Researcher",
        "execution_time": 45.2
      },
      {
        "task_id": "planning_task",
        "output": "7-day itinerary plan...",
        "agent": "Trip Planner",
        "execution_time": 62.8
      }
    ]
  },
  "execution_time": 108.5,
  "meta": {
    "requestId": "user-request-12345",
    "source": "mobile-app"
  }
}
```
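If you want to inspect these callbacks during development, a small receiver is enough. The sketch below assumes Flask is installed and uses placeholder routes that you would pass as `taskWebhookUrl`, `stepWebhookUrl`, and `crewWebhookUrl`; note that the platform needs a publicly reachable URL to deliver them:
```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhooks/task", methods=["POST"])
def on_task():
    payload = request.get_json()
    print(f"Task {payload['task_id']} finished with status {payload['status']}")
    return "", 204

@app.route("/webhooks/step", methods=["POST"])
def on_step():
    payload = request.get_json()
    print(f"Step by {payload['agent']} ({payload['step_type']})")
    return "", 204

@app.route("/webhooks/crew", methods=["POST"])
def on_crew():
    payload = request.get_json()
    print(f"Crew run {payload['kickoff_id']}: {payload['status']}")
    return "", 204

if __name__ == "__main__":
    app.run(port=8000)
```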
## Best Practices
### Handling Long-Running Executions
Crew executions can take anywhere from seconds to minutes depending on their complexity. Consider these approaches:
1. **Webhooks (Recommended)**: Set up webhook endpoints to receive notifications when the execution completes
2. **Polling**: Implement a polling mechanism with exponential backoff
3. **Client-Side Timeout**: Set appropriate timeouts for your API requests
### Error Handling
The API may return various error codes (a handling sketch follows the table):
| Code | Description | Recommended Action |
|------|-------------|-------------------|
| 401 | Unauthorized | Check your bearer token |
| 404 | Not Found | Verify your crew URL and kickoff_id |
| 422 | Validation Error | Ensure all required inputs are provided |
| 500 | Server Error | Contact support with the error details |
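As a hedged illustration of acting on these codes in Python (the URL, token, and inputs are placeholders):
```python
import requests

CREW_URL = "https://your-crew-url.crewai.com"
HEADERS = {
    "Authorization": "Bearer YOUR_CREW_TOKEN",
    "Content-Type": "application/json",
}
payload = {
    "inputs": {
        "budget": "1000 USD",
        "interests": "games, tech, ai, relaxing hikes, amazing food",
        "duration": "7 days",
        "age": "35",
    }
}

response = requests.post(f"{CREW_URL}/kickoff", headers=HEADERS, json=payload)

if response.status_code == 401:
    raise RuntimeError("Unauthorized: check your bearer token")
if response.status_code == 404:
    raise RuntimeError("Not found: verify your crew URL and kickoff_id")
if response.status_code == 422:
    raise ValueError(f"Validation error, ensure all required inputs are provided: {response.text}")
if response.status_code >= 500:
    raise RuntimeError(f"Server error, contact support with the details: {response.text}")

kickoff_id = response.json()["kickoff_id"]
```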
### Sample Code
Here's a complete Python example for interacting with your crew API:
```python
import requests
import time

# Configuration
CREW_URL = "https://your-crew-url.crewai.com"
BEARER_TOKEN = "your-crew-token"
HEADERS = {
    "Authorization": f"Bearer {BEARER_TOKEN}",
    "Content-Type": "application/json"
}

# 1. Get required inputs
response = requests.get(f"{CREW_URL}/inputs", headers=HEADERS)
required_inputs = response.json()["inputs"]
print(f"Required inputs: {required_inputs}")

# 2. Start crew execution
payload = {
    "inputs": {
        "budget": "1000 USD",
        "interests": "games, tech, ai, relaxing hikes, amazing food",
        "duration": "7 days",
        "age": "35"
    }
}
response = requests.post(f"{CREW_URL}/kickoff", headers=HEADERS, json=payload)
kickoff_id = response.json()["kickoff_id"]
print(f"Execution started with ID: {kickoff_id}")

# 3. Poll for results
MAX_RETRIES = 30
POLL_INTERVAL = 10  # seconds

for i in range(MAX_RETRIES):
    print(f"Checking status (attempt {i+1}/{MAX_RETRIES})...")
    response = requests.get(f"{CREW_URL}/status/{kickoff_id}", headers=HEADERS)
    data = response.json()
    if data["status"] == "completed":
        print("Execution completed!")
        print(f"Result: {data['result']['output']}")
        break
    elif data["status"] == "error":
        print(f"Execution failed: {data.get('error', 'Unknown error')}")
        break
    else:
        print(f"Status: {data['status']}, waiting {POLL_INTERVAL} seconds...")
        time.sleep(POLL_INTERVAL)
```
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with API integration or troubleshooting.
</Card>


@@ -0,0 +1,99 @@
---
title: "CrewAI Enterprise"
description: "Deploy, monitor, and scale your AI agent workflows"
icon: "globe"
---
## Introduction
CrewAI Enterprise provides a platform for deploying, monitoring, and scaling your crews and agents in a production environment.
<Frame>
<img src="/images/enterprise/crewai-enterprise-dashboard.png" alt="CrewAI Enterprise Dashboard" />
</Frame>
CrewAI Enterprise extends the power of the open-source framework with features designed for production deployments, collaboration, and scalability. Deploy your crews to a managed infrastructure and monitor their execution in real-time.
## Key Features
<CardGroup cols={2}>
<Card title="Crew Deployments" icon="rocket">
Deploy your crews to a managed infrastructure with a few clicks
</Card>
<Card title="API Access" icon="code">
Access your deployed crews via REST API for integration with existing systems
</Card>
<Card title="Observability" icon="chart-line">
Monitor your crews with detailed execution traces and logs
</Card>
<Card title="Tool Repository" icon="toolbox">
Publish and install tools to enhance your crews' capabilities
</Card>
<Card title="Webhook Streaming" icon="webhook">
Stream real-time events and updates to your systems
</Card>
<Card title="Crew Studio" icon="paintbrush">
Create and customize crews using a no-code/low-code interface
</Card>
</CardGroup>
## Deployment Options
<CardGroup cols={3}>
<Card title="GitHub Integration" icon="github">
Connect directly to your GitHub repositories to deploy code
</Card>
<Card title="Crew Studio" icon="palette">
Deploy crews created through the no-code Crew Studio interface
</Card>
<Card title="CLI Deployment" icon="terminal">
Use the CrewAI CLI for more advanced deployment workflows
</Card>
</CardGroup>
## Getting Started
<Steps>
<Step title="Sign up for an account">
Create your account at [app.crewai.com](https://app.crewai.com)
<Card
title="Sign Up"
icon="user"
href="https://app.crewai.com/signup"
>
Sign Up
</Card>
</Step>
<Step title="Create your first crew">
Use code or Crew Studio to create your crew
<Card
title="Create Crew"
icon="paintbrush"
href="/enterprise/guides/create-crew"
>
Create Crew
</Card>
</Step>
<Step title="Deploy your crew">
Deploy your crew to the Enterprise platform
<Card
title="Deploy Crew"
icon="rocket"
href="/enterprise/guides/deploy-crew"
>
Deploy Crew
</Card>
</Step>
<Step title="Access your crew">
Integrate with your crew via the generated API endpoints
<Card
title="API Access"
icon="code"
href="/enterprise/guides/use-crew-api"
>
Use the Crew API
</Card>
</Step>
</Steps>
For detailed instructions, check out our [deployment guide](/enterprise/guides/deploy-crew) or click the button below to get started.


@@ -0,0 +1,964 @@
---
title: FAQs
description: "Frequently asked questions about CrewAI Enterprise"
icon: "code"
---
<AccordionGroup>
<Accordion title="How is task execution handled in the hierarchical process?">
In the hierarchical process, a manager agent is automatically created and coordinates the workflow, delegating tasks and validating outcomes for
streamlined and effective execution. The manager agent utilizes tools to facilitate task delegation and execution by agents under the manager's guidance.
The manager LLM is crucial for the hierarchical process and must be set up correctly for proper function.
</Accordion>
<Accordion title="Where can I get the latest CrewAI documentation?">
The most up-to-date documentation for CrewAI is available on our official documentation website: https://docs.crewai.com/
<Card href="https://docs.crewai.com/" icon="books">CrewAI Docs</Card>
</Accordion>
<Accordion title="What are the key differences between Hierarchical and Sequential Processes in CrewAI?">
#### Hierarchical Process:
Tasks are delegated and executed based on a structured chain of command.
A manager language model (`manager_llm`) must be specified for the manager agent; see the configuration sketch at the end of this answer.
The manager agent oversees task execution, planning, delegation, and validation.
Tasks are not pre-assigned; the manager allocates tasks to agents based on their capabilities.
#### Sequential Process:
Tasks are executed one after another, ensuring tasks are completed in an orderly progression.
Output of one task serves as context for the next.
Task execution follows the predefined order in the task list.
#### Which Process is Better Suited for Complex Projects?
The hierarchical process is better suited for complex projects because it allows for:
- **Dynamic task allocation and delegation**: Manager agent can assign tasks based on agent capabilities, allowing for efficient resource utilization.
- **Structured validation and oversight**: Manager agent reviews task outputs and ensures task completion, increasing reliability and accuracy.
- **Complex task management**: Assigning tools at the agent level allows for precise control over tool availability, facilitating the execution of intricate tasks.
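As a minimal configuration sketch (hedged: the agents, task, and model identifier are placeholders, and `manager_llm` also accepts an LLM instance in recent versions):
```python
from crewai import Agent, Crew, Process, Task

researcher = Agent(role="Researcher", goal="Gather facts", backstory="A thorough analyst")
writer = Agent(role="Writer", goal="Summarize findings", backstory="A clear technical writer")

report = Task(
    description="Research and summarize the latest AI developments",
    expected_output="A short report",
    # No agent assigned: in the hierarchical process the manager delegates it.
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[report],
    process=Process.hierarchical,
    manager_llm="gpt-4o",  # required for the hierarchical process
)
```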
</Accordion>
<Accordion title="What are the benefits of using memory in the CrewAI framework?">
- **Adaptive Learning**: Crews become more efficient over time, adapting to new information and refining their approach to tasks.
- **Enhanced Personalization**: Memory enables agents to remember user preferences and historical interactions, leading to personalized experiences.
- **Improved Problem Solving**: Access to a rich memory store aids agents in making more informed decisions, drawing on past learnings and contextual insights.
</Accordion>
<Accordion title="What is the purpose of setting a maximum RPM limit for an agent?">
Setting a maximum RPM limit for an agent prevents the agent from making too many requests to external services, which can help to avoid rate limits and improve performance.
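For illustration, a minimal sketch (the role, goal, backstory, and limit are placeholders) setting the cap through the agent's `max_rpm` attribute:
```python
from crewai import Agent

researcher = Agent(
    role="Researcher",
    goal="Gather sources without tripping provider rate limits",
    backstory="A methodical analyst",
    max_rpm=10,  # at most 10 requests per minute
)
```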
</Accordion>
<Accordion title="What role does human input play in the execution of tasks within a CrewAI crew?">
It allows agents to request additional information or clarification when necessary.
This feature is crucial in complex decision-making processes or when agents require more details to complete a task effectively.
To integrate human input into agent execution, set the `human_input` flag in the task definition. When enabled, the agent prompts the user for input before delivering its final answer.
This input can provide extra context, clarify ambiguities, or validate the agent's output.
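A minimal sketch of the flag in a task definition (the description, output, and agent are placeholders):
```python
from crewai import Task

review_task = Task(
    description="Draft an answer and confirm the key facts with the user",
    expected_output="A user-validated answer",
    agent=agent,  # assumes an Agent defined elsewhere
    human_input=True,  # prompt the user for input before the final answer
)
```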
</Accordion>
<Accordion title="What advanced customization options are available for tailoring and enhancing agent behavior and capabilities in CrewAI?">
CrewAI provides a range of advanced customization options to tailor and enhance agent behavior and capabilities (a combined sketch follows this list):
- **Language Model Customization**: Agents can be customized with specific language models (`llm`) and function-calling language models (`function_calling_llm`), offering advanced control over their processing and decision-making abilities.
- **Performance and Debugging Settings**: Adjust an agent's performance and monitor its operations for efficient task execution.
- **Verbose Mode**: Enables detailed logging of an agent's actions, useful for debugging and optimization.
- **RPM Limit**: Sets the maximum number of requests per minute (`max_rpm`).
- **Maximum Iterations for Task Execution**: The `max_iter` attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions.
- **Delegation and Autonomy**: Control an agent's ability to delegate or ask questions, tailoring its autonomy and collaborative dynamics within the CrewAI framework. In recent CrewAI versions the `allow_delegation` attribute defaults to `False`; enable it when you want an agent to seek assistance or delegate tasks as needed, which promotes collaborative problem-solving within the CrewAI ecosystem.
- **Human Input in Agent Execution**: Human input is critical in several agent execution scenarios, allowing agents to request additional information or clarification when necessary. This feature is especially useful in complex decision-making processes or when agents require more details to complete a task effectively.
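As a combined sketch of several of these options on a single agent (all values are illustrative):
```python
from crewai import Agent

analyst = Agent(
    role="Data Analyst",
    goal="Produce accurate, well-sourced analyses",
    backstory="A meticulous analyst",
    verbose=True,            # detailed logging of the agent's actions
    max_rpm=20,              # cap requests per minute
    max_iter=15,             # cap iterations per task
    allow_delegation=False,  # delegation disabled for this agent
)
```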
</Accordion>
<Accordion title="In what scenarios is human input particularly useful in agent execution?">
Human input is particularly useful in agent execution when:
- **Agents require additional information or clarification**: When agents encounter ambiguity or incomplete data, human input can provide the necessary context to complete the task effectively.
- **Agents need to make complex or sensitive decisions**: Human input can assist agents in ethical or nuanced decision-making, ensuring responsible and informed outcomes.
- **Oversight and validation of agent output**: Human input can help validate the results generated by agents, ensuring accuracy and preventing any misinterpretation or errors.
- **Customizing agent behavior**: Human input can provide feedback on agent responses, allowing users to refine the agent's behavior and responses over time.
- **Identifying and resolving errors or limitations**: Human input can help identify and address any errors or limitations in the agent's capabilities, enabling continuous improvement and optimization.
</Accordion>
<Accordion title="What are the different types of memory that are available in crewAI?">
The different types of memory available in CrewAI are (see the enabling sketch below):
- `short-term memory`
- `long-term memory`
- `entity memory`
- `contextual memory`
Learn more about the different types of memory here:
<Card href="https://docs.crewai.com/concepts/memory" icon="brain">CrewAI Memory</Card>
</Accordion>
<Accordion title="How do I use Output Pydantic in a Task?">
To use Output Pydantic in a task, you need to define the expected output of the task as a Pydantic model. Here's an example:
<Steps>
<Step title="Define a Pydantic model">
First, you need to define a Pydantic model. For instance, let's create a simple model for a user:
```python
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int
```
</Step>
<Step title="Then, when creating a task, specify the expected output as this Pydantic model:">
```python
from crewai import Task, Crew, Agent
# Import the User model
from my_models import User
# Create a task with Output Pydantic
task = Task(
description="Create a user with the provided name and age",
expected_output=User, # This is the Pydantic model
agent=agent,
tools=[tool1, tool2]
)
```
</Step>
<Step title="In your agent, make sure to set the output_pydantic attribute to the Pydantic model you're using:">
```python
from crewai import Agent
# Import the User model
from my_models import User
# Create an agent with Output Pydantic
agent = Agent(
role='User Creator',
goal='Create users',
backstory='I am skilled in creating user accounts',
tools=[tool1, tool2],
output_pydantic=User
)
```
</Step>
<Step title="When executing the crew, the output of the task will be a User object:">
```python
from crewai import Crew
# Create a crew with the agent and task
crew = Crew(agents=[agent], tasks=[task])
# Kick off the crew
result = crew.kickoff()
# The output of the task will be a User object
print(result.tasks[0].output)
```
</Step>
</Steps>
Here's a tutorial on how to consistently get structured outputs from your agents:
<Frame>
<iframe
height="400"
width="100%"
src="https://www.youtube.com/embed/dNpKQk5uxHw"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen></iframe>
</Frame>
</Accordion>
<Accordion title="How can I create custom tools for my CrewAI agents?">
You can create custom tools by subclassing the `BaseTool` class provided by CrewAI or by using the tool decorator. Subclassing involves defining a new class that inherits from `BaseTool`, specifying the name, description, and the `_run` method for the operational logic. The tool decorator lets you create a `Tool` object directly from a function, with the required attributes and functional logic.
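As a hedged sketch of both approaches (tool names and logic are illustrative; the `crewai.tools` import path reflects recent CrewAI versions):
```python
from crewai.tools import BaseTool, tool

class WordCountTool(BaseTool):
    name: str = "Word Counter"
    description: str = "Counts the words in a piece of text."

    def _run(self, text: str) -> int:
        # Operational logic of the tool
        return len(text.split())

@tool("Character Counter")
def character_counter(text: str) -> int:
    """Counts the characters in a piece of text."""
    return len(text)
```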
Click here for more details:
<Card href="https://docs.crewai.com/how-to/create-custom-tools" icon="code">CrewAI Tools</Card>
</Accordion>
<Accordion title="How to Kickoff a Crew from Slack">
This guide explains how to start a crew directly from Slack using the CrewAI integration.
**Prerequisites:**
- CrewAI integration installed and connected to your Slack workspace
- At least one crew configured in CrewAI
**Steps:**
<Steps>
<Step title="Ensure the CrewAI Slack integration is set up">
In the CrewAI dashboard, navigate to the **Integrations** section.
<Frame>
<img src="/images/enterprise/slack-integration.png" alt="CrewAI Slack Integration" />
</Frame>
Verify that Slack is listed and connected.
</Step>
<Step title="Open your Slack channel">
- Navigate to the channel where you want to kickoff the crew.
- Type the slash command "**/kickoff**" to initiate the crew kickoff process.
- You should see a "**Kickoff crew**" option appear as you type:
<Frame>
<img src="/images/enterprise/kickoff-slack-crew.png" alt="Kickoff crew" />
</Frame>
- Press Enter or select the "**Kickoff crew**" option. A dialog box titled "**Kickoff an AI Crew**" will appear.
</Step>
<Step title="Select the crew you want to start">
- In the dropdown menu labeled "**Select of the crews online:**", choose the crew you want to start.
- In the example below, "**prep-for-meeting**" is selected:
<Frame>
<img src="/images/enterprise/kickoff-slack-crew-dropdown.png" alt="Kickoff crew dropdown" />
</Frame>
- If your crew requires any inputs, click the "**Add Inputs**" button to provide them.
<Note>
The "**Add Inputs**" button is shown in the example above but is not yet clicked.
</Note>
</Step>
<Step title="Click Kickoff and wait for the crew to complete">
- Once you've selected the crew and added any necessary inputs, click "**Kickoff**" to start the crew.
<Frame>
<img src="/images/enterprise/kickoff-slack-crew-kickoff.png" alt="Kickoff crew" />
</Frame>
- The crew will start executing and you will see the results in the Slack channel.
<Frame>
<img src="/images/enterprise/kickoff-slack-crew-results.png" alt="Kickoff crew results" />
</Frame>
</Step>
</Steps>
<Tip>
- Make sure you have the necessary permissions to use the `/kickoff` command in your Slack workspace.
- If you don't see your desired crew in the dropdown, ensure it's properly configured and online in CrewAI.
</Tip>
</Accordion>
<Accordion title="How to export and use a React Component">
Click the ellipsis (the three dots to the right of your deployed crew), select the export option, and save the file locally. We will use `CrewLead.jsx` in our example.
<Frame>
<img src="/images/enterprise/export-react-component.png" alt="Export React Component" />
</Frame>
To run this React component locally, you'll need to set up a React development environment and integrate this component into a React project. Here's a step-by-step guide to get you started:
<Steps>
<Step title="Install Node.js">
- Download and install Node.js from the official website: https://nodejs.org/
- Choose the LTS (Long Term Support) version for stability.
</Step>
<Step title="Create a new React project">
- Open Command Prompt or PowerShell
- Navigate to the directory where you want to create your project
- Run the following command to create a new React project:
```bash
npx create-react-app my-crew-app
```
- Change into the project directory:
```bash
cd my-crew-app
```
</Step>
<Step title="Install necessary dependencies">
```bash
npm install react-dom
```
</Step>
<Step title="Create the CrewLead component">
- Move the downloaded `CrewLead.jsx` file into the `src` folder of your project.
</Step>
<Step title="Modify your `App.js` to use the `CrewLead` component">
- Open `src/App.js`
- Replace its contents with something like this:
```jsx
import React from 'react';
import CrewLead from './CrewLead';
function App() {
  return (
    <div className="App">
      <CrewLead baseUrl="YOUR_API_BASE_URL" bearerToken="YOUR_BEARER_TOKEN" />
    </div>
  );
}
export default App;
```
- Replace `YOUR_API_BASE_URL` and `YOUR_BEARER_TOKEN` with the actual values for your API.
</Step>
<Step title="Start the development server">
- In your project directory, run:
```bash
npm start
```
- This will start the development server, and your default web browser should open automatically to http://localhost:3000, where you'll see your React app running.
</Step>
</Steps>
You can then customize `CrewLead.jsx` to add a color, title, etc.
<Frame>
<img src="/images/enterprise/customise-react-component.png" alt="Customise React Component" />
</Frame>
<Frame>
<img src="/images/enterprise/customise-react-component-2.png" alt="Customise React Component" />
</Frame>
</Accordion>
<Accordion title="How to Invite Team Members to Your CrewAI Enterprise Organization">
As an administrator of a CrewAI Enterprise account, you can easily invite new team members to join your organization. This article will guide you through the process step-by-step.
<Steps>
<Step title="Access the Settings Page">
- Log in to your CrewAI Enterprise account
- Look for the gear icon (⚙️) in the top right corner of the dashboard
- Click on the gear icon to access the **Settings** page:
<Frame>
<img src="/images/enterprise/settings-page.png" alt="Settings Page" />
</Frame>
</Step>
<Step title="Navigate to the Members Section">
- On the Settings page, you'll see a `General configuration` header
- Below this, find and click on the `Members` tab
</Step>
<Step title="Invite New Members">
- In the Members section, you'll see a list of current members (including yourself)
- At the bottom of the list, locate the `Email` input field
- Enter the email address of the person you want to invite
- Click the `Invite` button next to the email field
</Step>
<Step title="Repeat as Needed">
- You can repeat this process to invite multiple team members
- Each invited member will receive an email invitation to join your organization
</Step>
<Step title="Important Notes">
- Only users with administrative privileges can invite new members
- Ensure you have the correct email addresses for your team members
- Invited members will need to accept the invitation to join your organization
- You may want to inform your team members to check their email (including spam folders) for the invitation
</Step>
</Steps>
By following these steps, you can easily expand your team and collaborate more effectively within your CrewAI Enterprise organization.
</Accordion>
<Accordion title="Using Webhooks in CrewAI Enterprise">
CrewAI Enterprise allows you to automate your workflow using webhooks.
This article will guide you through the process of setting up and using webhooks to kickoff your crew execution, with a focus on integration with ActivePieces,
a workflow automation platform similar to Zapier and Make.com. We will be setting up webhooks in the CrewAI Enterprise UI.
<Steps>
<Step title="Accessing the Kickoff Interface">
- Navigate to the CrewAI Enterprise dashboard
- Look for the `/kickoff` section, which is used to start the crew execution
<Frame>
<img src="/images/enterprise/kickoff-interface.png" alt="Kickoff Interface" />
</Frame>
</Step>
<Step title="Configuring the JSON Content">
In the JSON Content section, you'll need to provide the following information:
- **inputs**: A JSON object containing:
- `company`: The name of the company (e.g., "tesla")
- `product_name`: The name of the product (e.g., "crewai")
- `form_response`: The type of response (e.g., "financial")
- `icp_description`: A brief description of the Ideal Customer Profile
- `product_description`: A short description of the product
- `taskWebhookUrl`, `stepWebhookUrl`, `crewWebhookUrl`: URLs for various webhook endpoints (ActivePieces, Zapier, Make.com or another compatible platform)
</Step>
<Step title="Integrating with ActivePieces">
In this example, we will use ActivePieces; you can use other platforms such as Zapier and Make.com.
To integrate with ActivePieces:
1. Set up a new flow in ActivePieces
2. Add a trigger (e.g., `Every Day` schedule)
<Frame>
<img src="/images/enterprise/activepieces-trigger.png" alt="ActivePieces Trigger" />
</Frame>
3. Add an HTTP action step
- Set the action to `Send HTTP request`
- Use `POST` as the method
- Set the URL to your CrewAI Enterprise kickoff endpoint
- Add the necessary headers (e.g., an `Authorization: Bearer <token>` header)
<Frame>
<img src="/images/enterprise/activepieces-headers.png" alt="ActivePieces Headers" />
</Frame>
- In the body, include the JSON content as configured in step 2
<Frame>
<img src="/images/enterprise/activepieces-body.png" alt="ActivePieces Body" />
</Frame>
- The crew will then kick off at the pre-defined time.
</Step>
<Step title="Setting Up the Webhook">
1. Create a new flow in ActivePieces and name it
<Frame>
<img src="/images/enterprise/activepieces-flow.png" alt="ActivePieces Flow" />
</Frame>
2. Add a webhook step as the trigger:
- Select `Catch Webhook` as the trigger type
- This will generate a unique URL that will receive HTTP requests and trigger your flow
<Frame>
<img src="/images/enterprise/activepieces-webhook.png" alt="ActivePieces Webhook" />
</Frame>
- Configure the email to use crew webhook body text
<Frame>
<img src="/images/enterprise/activepieces-email.png" alt="ActivePieces Email" />
</Frame>
</Step>
<Step title="Generated output">
1. `stepWebhookUrl` - Callback that will be executed upon each agent inner thought
```json
{
  "action": "**Preliminary Research Report on the Financial Industry for crewai Enterprise Solution**\n1. Industry Overview and Trends\nThe financial industry in ....\nConclusion:\nThe financial industry presents a fertile ground for implementing AI solutions like crewai, particularly in areas such as digital customer engagement, risk management, and regulatory compliance. Further engagement with the lead is recommended to better tailor the crewai solution to their specific needs and scale.",
  "task_id": "97eba64f-958c-40a0-b61c-625fe635a3c0"
}
```
2. `taskWebhookUrl` - Callback that will be executed upon the end of each task
```json
{
  "description": "Using the information gathered from the lead's data, conduct preliminary research on the lead's industry, company background, and potential use cases for crewai. Focus on finding relevant data that can aid in scoring the lead and planning a strategy to pitch them crewai.The financial industry presents a fertile ground for implementing AI solutions like crewai, particularly in areas such as digital customer engagement, risk management, and regulatory compliance. Further engagement with the lead is recommended to better tailor the crewai solution to their specific needs and scale.",
  "task_id": "97eba64f-958c-40a0-b61c-625fe635a3c0"
}
```
3. `crewWebhookUrl` - Callback that will be executed upon the end of the crew execution
```json
{
  "task_id": "97eba64f-958c-40a0-b61c-625fe635a3c0",
  "result": {
    "lead_score": "Customer service enhancement, and compliance are particularly relevant.",
    "talking_points": [
      "Highlight how crewai's AI solutions can transform customer service with automated, personalized experiences and 24/7 support, improving both customer satisfaction and operational efficiency.",
      "Discuss crewai's potential to help the institution achieve its sustainability goals through better data analysis and decision-making, contributing to responsible investing and green initiatives.",
      "Emphasize crewai's ability to enhance compliance with evolving regulations through efficient data processing and reporting, reducing the risk of non-compliance penalties.",
      "Stress the adaptability of crewai to support both extensive multinational operations and smaller, targeted projects, ensuring the solution grows with the institution's needs."
    ]
  }
}
```
</Step>
</Steps>
</Accordion>
<Accordion title="How to use the crewai custom GPT to create a crew">
<Steps>
<Step title="Navigate to the CrewAI custom GPT">
Open https://chatgpt.com/g/g-qqTuUWsBY-crewai-assistant to access the CrewAI custom GPT
<Card href="https://chatgpt.com/g/g-qqTuUWsBY-crewai-assistant" icon="comments">CrewAI custom GPT</Card>
</Step>
<Step title="Describe your project idea">
For example:
```text
Suggest some agents and tasks to retrieve LinkedIn profile details for a given person and a domain.
```
</Step>
<Step title="The GPT will provide you with a list of suggested agents and tasks">
Here's an example of the response you will get:
<Frame>
<img src="/images/enterprise/crewai-custom-gpt-1.png" alt="CrewAI custom GPT 1" />
</Frame>
</Step>
<Step title="Create the project structure in your terminal by entering:">
```bash
crewai create crew linkedin-profile
```
This will create a new crew called `linkedin-profile` in the current directory.
Follow the full instructions in the quickstart guide at https://docs.crewai.com/quickstart to create a crew.
<Card href="https://docs.crewai.com/quickstart" icon="code">CrewAI Docs</Card>
</Step>
<Step title="Ask the GPT to convert the agents and tasks to YAML format.">
Here's an example of the final output you will have to save in the `agents.yaml` and `tasks.yaml` files:
<Frame>
<img src="/images/enterprise/crewai-custom-gpt-2.png" alt="CrewAI custom GPT 2" />
</Frame>
- Now replace the `agents.yaml` and `tasks.yaml` with the above code
- Ask GPT to create the custom LinkedIn Tool
- Ask the GPT to put everything together into the `crew.py` file
- You will now have a fully working crew.
</Step>
</Steps>
</Accordion>
<Accordion title="How to generate images using Dall-E">
CrewAI supports integration with OpenAI's DALL-E, allowing your AI agents to generate images as part of their tasks. This guide will walk you through how to set up and use the DALL-E tool in your CrewAI projects.
**Prerequisites**
- CrewAI installed (latest version)
- OpenAI API key with access to DALL-E
**Setting Up the DALL-E Tool**
To use the DALL-E tool in your CrewAI project, follow these steps:
<Steps>
<Step title="Import the DALL-E tool">
```python
from crewai_tools import DallETool
```
</Step>
<Step title="Add the DALL-E tool to your agent configuration">
```python
@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config['researcher'],
        tools=[SerperDevTool(), DallETool()],  # add DallETool to the list of tools (SerperDevTool is also imported from crewai_tools)
        allow_delegation=False,
        verbose=True
    )
```
</Step>
</Steps>
**Using the DALL-E Tool**
Once you've added the DALL-E tool to your agent, it can generate images based on text prompts.
The tool will return a URL to the generated image, which can be used in the agent's output or passed to other agents for further processing.
Example usage within an agent definition:
```yaml
role: >
  LinkedIn Profile Senior Data Researcher
goal: >
  Uncover detailed LinkedIn profiles based on provided name {name} and domain {domain}
  Generate a Dall-E image based on domain {domain}
backstory: >
  You're a seasoned researcher with a knack for uncovering the most relevant LinkedIn profiles.
  Known for your ability to navigate LinkedIn efficiently, you excel at gathering and presenting
  professional information clearly and concisely.
```
The agent with the DALL-E tool will be able to generate the image and provide a URL in its response. You can then download the image.
<Frame>
<img src="/images/enterprise/dall-e-image.png" alt="DALL-E Image" />
</Frame>
**Best Practices**
1. Be specific in your image generation prompts to get the best results.
2. Remember that image generation can take some time, so factor this into your task planning.
3. Always comply with OpenAI's usage policies when generating images.
**Troubleshooting**
1. Ensure your OpenAI API key has access to DALL-E.
2. Check that you're using the latest version of CrewAI and crewai-tools.
3. Verify that the DALL-E tool is correctly added to the agent's tool list.
</Accordion>
<Accordion title="How to use Annotations in crew.py">
This guide explains how to use annotations to properly reference **agents**, **tasks**, and other components in the `crew.py` file.
**Introduction**
Annotations in the framework are used to decorate classes and methods, providing metadata and functionality to various components of your crew.
These annotations help in organizing and structuring your code, making it more readable and maintainable.
**Available Annotations**
The CrewAI framework provides the following annotations:
- `@CrewBase`: Used to decorate the main crew class.
- `@agent`: Decorates methods that define and return Agent objects.
- `@task`: Decorates methods that define and return Task objects.
- `@crew`: Decorates the method that creates and returns the Crew object.
- `@llm`: Decorates methods that initialize and return Language Model objects.
- `@tool`: Decorates methods that initialize and return Tool objects.
- `@callback`: (Not shown in the example, but available) Used for defining callback methods.
- `@output_json`: (Not shown in the example, but available) Used for methods that output JSON data.
- `@output_pydantic`: (Not shown in the example, but available) Used for methods that output Pydantic models.
- `@cache_handler`: (Not shown in the example, but available) Used for defining cache handling methods.
**Usage Examples**
Let's go through examples of how to use these annotations based on the provided LinkedinProfileCrew class:
**1. Crew Base Class**
```python
@CrewBase
class LinkedinProfileCrew():
    """LinkedinProfile crew"""
    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'
```
The `@CrewBase` annotation is used to decorate the main crew class.
This class typically contains configurations and methods for creating agents, tasks, and the crew itself.
**2. Tool Definition**
```python
@tool
def myLinkedInProfileTool(self):
    return LinkedInProfileTool()
```
The `@tool` annotation is used to decorate methods that return tool objects. These tools can be used by agents to perform specific tasks.
**3. LLM Definition**
```python
import os
from langchain_groq import ChatGroq

@llm
def groq_llm(self):
    api_key = os.getenv('api_key')
    return ChatGroq(api_key=api_key, temperature=0, model_name="mixtral-8x7b-32768")
```
The `@llm` annotation is used to decorate methods that initialize and return Language Model objects. These LLMs are used by agents for natural language processing tasks.
**4. Agent Definition**
```python
@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config['researcher']
    )
```
The `@agent` annotation is used to decorate methods that define and return Agent objects.
**5. Task Definition**
```python
@task
def research_task(self) -> Task:
    return Task(
        config=self.tasks_config['research_linkedin_task'],
        agent=self.researcher()
    )
```
The `@task` annotation is used to decorate methods that define and return Task objects. These methods specify the task configuration and the agent responsible for the task.
**6. Crew Creation**
```python
@crew
def crew(self) -> Crew:
    """Creates the LinkedinProfile crew"""
    return Crew(
        agents=self.agents,
        tasks=self.tasks,
        process=Process.sequential,
        verbose=True
    )
```
The `@crew` annotation is used to decorate the method that creates and returns the `Crew` object. This method assembles all the components (agents and tasks) into a functional crew.
**YAML Configuration**
The agent configurations are typically stored in a YAML file. Here's an example of how the `agents.yaml` file might look for the researcher agent:
```yaml
researcher:
  role: >
    LinkedIn Profile Senior Data Researcher
  goal: >
    Uncover detailed LinkedIn profiles based on provided name {name} and domain {domain}
    Generate a Dall-E image based on domain {domain}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the most relevant LinkedIn profiles.
    Known for your ability to navigate LinkedIn efficiently, you excel at gathering and presenting
    professional information clearly and concisely.
  allow_delegation: False
  verbose: True
  llm: groq_llm
  tools:
    - myLinkedInProfileTool
    - mySerperDevTool
    - myDallETool
```
This YAML configuration corresponds to the researcher agent defined in the `LinkedinProfileCrew` class. The configuration specifies the agent's role, goal, backstory, and other properties such as the LLM and tools it uses.
Note how the `llm` and `tools` in the YAML file correspond to the methods decorated with `@llm` and `@tool` in the Python class. This connection allows for a flexible and modular design where you can easily update agent configurations without changing the core code.
**Best Practices**
- **Consistent Naming**: Use clear and consistent naming conventions for your methods. For example, agent methods could be named after their roles (e.g., researcher, reporting_analyst).
- **Environment Variables**: Use environment variables for sensitive information like API keys.
- **Flexibility**: Design your crew to be flexible by allowing easy addition or removal of agents and tasks.
- **YAML-Code Correspondence**: Ensure that the names and structures in your YAML files correspond correctly to the decorated methods in your Python code.
By following these guidelines and properly using annotations, you can create well-structured and maintainable crews using the CrewAI framework.
</Accordion>
<Accordion title="How to Integrate CrewAI Enterprise with Zapier">
This guide will walk you through the process of integrating CrewAI Enterprise with Zapier, allowing you to automate workflows between CrewAI Enterprise and other applications.
**Prerequisites**
- A CrewAI Enterprise account
- A Zapier account
- A Slack account (for this specific integration)
**Step-by-Step Guide**
<Steps>
<Step title="Set Up the Slack Trigger">
- In Zapier, create a new Zap.
<Frame>
<img src="/images/enterprise/zapier-1.png" alt="Zapier 1" />
</Frame>
</Step>
<Step title="Choose Slack as your trigger app.">
<Frame>
<img src="/images/enterprise/zapier-2.png" alt="Zapier 2" />
</Frame>
- Select `New Pushed Message` as the Trigger Event.
- Connect your Slack account if you haven't already.
</Step>
<Step title="Configure the CrewAI Enterprise Action">
- Add a new action step to your Zap.
- Choose `CrewAI+` as your action app and `Kickoff` as the Action Event.
<Frame>
<img src="/images/enterprise/zapier-3.png" alt="Zapier 5" />
</Frame>
</Step>
<Step title="Connect your CrewAI Enterprise account.">
- Select the appropriate Crew for your workflow.
<Frame>
<img src="/images/enterprise/zapier-4.png" alt="Zapier 6" />
</Frame>
- Configure the inputs for the Crew using the data from the Slack message.
</Step>
<Step title="Format the CrewAI Enterprise Output">
- Add another action step to format the text output from CrewAI Enterprise.
- Use Zapier's formatting tools to convert the Markdown output to HTML.
<Frame>
<img src="/images/enterprise/zapier-5.png" alt="Zapier 8" />
</Frame>
<Frame>
<img src="/images/enterprise/zapier-6.png" alt="Zapier 9" />
</Frame>
</Step>
<Step title="Send the Output via Email">
- Add a final action step to send the formatted output via email.
- Choose your preferred email service (e.g., Gmail, Outlook).
- Configure the email details, including recipient, subject, and body.
- Insert the formatted CrewAI Enterprise output into the email body.
<Frame>
<img src="/images/enterprise/zapier-7.png" alt="Zapier 7" />
</Frame>
</Step>
<Step title="Kick Off the crew from Slack">
- Enter your message text in the Slack channel.
<Frame>
<img src="/images/enterprise/zapier-7b.png" alt="Zapier 10" />
</Frame>
- Select the ellipsis (...) button, then choose `Push to Zapier`.
<Frame>
<img src="/images/enterprise/zapier-8.png" alt="Zapier 11" />
</Frame>
</Step>
<Step title="Select the crew and then Push to Kick Off">
<Frame>
<img src="/images/enterprise/zapier-9.png" alt="Zapier 12" />
</Frame>
</Step>
</Steps>
**Tips for Success**
- Ensure that your CrewAI Enterprise inputs are correctly mapped from the Slack message.
- Test your Zap thoroughly before turning it on to catch any potential issues.
- Consider adding error handling steps to manage potential failures in the workflow.
By following these steps, you'll have successfully integrated CrewAI Enterprise with Zapier, allowing for automated workflows triggered by Slack messages and resulting in email notifications with CrewAI Enterprise output.
</Accordion>
<Accordion title="How to Integrate CrewAI Enterprise with HubSpot">
This guide provides a step-by-step process to integrate CrewAI Enterprise with HubSpot, enabling you to initiate crews directly from HubSpot Workflows.
**Prerequisites**
- A CrewAI Enterprise account
- A HubSpot account with the [HubSpot Workflows](https://knowledge.hubspot.com/workflows/create-workflows) feature
**Step-by-Step Guide**
<Steps>
<Step title="Connect your HubSpot account with CrewAI Enterprise">
- Log in to your `CrewAI Enterprise account > Integrations`
- Select `HubSpot` from the list of available integrations
- Choose the HubSpot account you want to integrate with CrewAI Enterprise
- Follow the on-screen prompts to authorize CrewAI Enterprise access to your HubSpot account
- A confirmation message will appear once HubSpot is successfully linked with CrewAI Enterprise
</Step>
<Step title="Create a HubSpot Workflow">
- Log in to your `HubSpot account > Automations > Workflows > New workflow`
- Select the workflow type that fits your needs (e.g., Start from scratch)
- In the workflow builder, click the Plus (+) icon to add a new action.
- Choose `Integrated apps > CrewAI > Kickoff a Crew`.
- Select the Crew you want to initiate.
- Click `Save` to add the action to your workflow
<Frame>
<img src="/images/enterprise/hubspot-workflow-1.png" alt="HubSpot Workflow 1" />
</Frame>
</Step>
<Step title="Use Crew results with other actions">
- After the Kickoff a Crew step, click the Plus (+) icon to add a new action.
- For example, to send an internal email notification, choose `Communications > Send internal email notification`
- In the Body field, click `Insert data`, select `View properties or action outputs from > Action outputs > Crew Result` to include Crew data in the email
<Frame>
<img src="/images/enterprise/hubspot-workflow-2.png" alt="HubSpot Workflow 2" />
</Frame>
- Configure any additional actions as needed
- Review your workflow steps to ensure everything is set up correctly
- Activate the workflow
<Frame>
<img src="/images/enterprise/hubspot-workflow-3.png" alt="HubSpot Workflow 3" />
</Frame>
</Step>
</Steps>
For more detailed information on available actions and customization options, refer to the [HubSpot Workflows Documentation](https://knowledge.hubspot.com/workflows/create-workflows).
</Accordion>
<Accordion title="How to connect Azure OpenAI with Crew Studio?">
1. In Azure, go to `Azure AI Services > select your deployment > open Azure OpenAI Studio`.
2. On the left menu, click `Deployments`. If you don't have one, create a deployment with your desired model.
3. Once created, select your deployment and locate the `Target URI` and `Key` on the right side of the page. Keep this page open, as you'll need this information.
<Frame>
<img src="/images/enterprise/azure-openai-studio.png" alt="Azure OpenAI Studio" />
</Frame>
4. In another tab, open `CrewAI Enterprise > LLM Connections`. Name your LLM Connection, select Azure as the provider, and choose the same model you selected in Azure.
5. On the same page, add environment variables from step 3:
- One named `AZURE_DEPLOYMENT_TARGET_URL` (using the Target URI). The URL should look like this: https://your-deployment.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-08-01-preview
- Another named `AZURE_API_KEY` (using the Key).
6. Click `Add Connection` to save your LLM Connection.
7. In `CrewAI Enterprise > Settings > Defaults > Crew Studio LLM Settings`, set the new LLM Connection and model as defaults.
8. Ensure network access settings:
- In Azure, go to `Azure OpenAI > select your deployment`.
- Navigate to `Resource Management > Networking`.
- Ensure that `Allow access from all networks` is enabled. If this setting is restricted, CrewAI may be blocked from accessing your Azure OpenAI endpoint.
You're all set! Crew Studio will now use your Azure OpenAI connection.
</Accordion>
<Accordion title="How to use HITL?">
**Human-in-the-Loop (HITL) Instructions**
HITL is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. Follow these steps to implement HITL within CrewAI:
<Steps>
<Step title="Configure Your Task">
Set up your task with human input enabled:
<Frame>
<img src="/images/enterprise/crew-human-input.png" alt="Crew Human Input" />
</Frame>
</Step>
<Step title="Provide Webhook URL">
When kicking off your crew, include a webhook URL for human input:
<Frame>
<img src="/images/enterprise/crew-webhook-url.png" alt="Crew Webhook URL" />
</Frame>
</Step>
<Step title="Receive Webhook Notification">
Once the crew completes the task requiring human input, you'll receive a webhook notification containing:
- Execution ID
- Task ID
- Task output
</Step>
<Step title="Review Task Output">
The system will pause in the `Pending Human Input` state. Review the task output carefully.
</Step>
<Step title="Submit Human Feedback">
Call the resume endpoint of your crew with the following information:
<Frame>
<img src="/images/enterprise/crew-resume-endpoint.png" alt="Crew Resume Endpoint" />
</Frame>
<Warning>
**Feedback Impact on Task Execution**:
It's crucial to exercise care when providing feedback, as the entire feedback content will be incorporated as additional context for further task executions.
</Warning>
This means:
- All information in your feedback becomes part of the task's context.
- Irrelevant details may negatively influence the task's results.
- Concise, relevant feedback helps maintain task focus and efficiency.
- Always review your feedback carefully before submission to ensure it contains only pertinent information that will positively guide the task's execution.
</Step>
<Step title="Handle Negative Feedback">
If you provide negative feedback:
- The crew will retry the task with added context from your feedback.
- You'll receive another webhook notification for further review.
- Repeat steps 4-6 until satisfied.
</Step>
<Step title="Execution Continuation">
When you submit positive feedback, the execution will proceed to the next steps.
</Step>
</Steps>
</Accordion>
<Accordion title="How to configure Salesforce with CrewAI Enterprise">
**Salesforce Demo**
Salesforce is a leading customer relationship management (CRM) platform that helps businesses streamline their sales, service, and marketing operations.
<Frame>
<iframe width="100%" height="400" src="https://www.youtube.com/embed/oJunVqjjfu4" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</Frame>
</Accordion>
<Accordion title="How can you control the maximum number of requests per minute that the entire crew can perform?">
The `max_rpm` attribute sets the maximum number of requests per minute the entire crew can perform, which helps you avoid rate limits. When set, it overrides the `max_rpm` settings of individual agents.
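For example, a minimal sketch using the documented `max_rpm` parameters on `Agent` and `Crew`:
```python
from crewai import Agent, Crew, Process, Task

researcher = Agent(
    role="Researcher",
    goal="Gather background information",
    backstory="A meticulous researcher.",
    max_rpm=30  # ignored once the crew-level limit below is set
)

task = Task(
    description="Summarize recent developments in AI agents.",
    expected_output="A short bullet-point summary.",
    agent=researcher
)

crew = Crew(
    agents=[researcher],
    tasks=[task],
    process=Process.sequential,
    max_rpm=10  # the entire crew performs at most 10 requests per minute
)
```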
</Accordion>
</AccordionGroup>

BIN docs/flows.png (new binary file, 27 KiB; not shown)


@@ -0,0 +1,157 @@
---
title: Customizing Prompts
description: Dive deeper into low-level prompt customization for CrewAI, enabling super custom and complex use cases for different models and languages.
icon: message-pen
---
# Customizing Prompts at a Low Level
## Why Customize Prompts?
Although CrewAI's default prompts work well for many scenarios, low-level customization opens the door to significantly more flexible and powerful agent behavior. Here's why you might want to take advantage of this deeper control:
1. **Optimize for specific LLMs**: Different models (such as GPT-4, Claude, or Llama) thrive with prompt formats tailored to their unique architectures.
2. **Change the language**: Build agents that operate exclusively in languages beyond English, handling nuances with precision.
3. **Specialize for complex domains**: Adapt prompts for highly specialized industries like healthcare, finance, or legal.
4. **Adjust tone and style**: Make agents more formal, casual, creative, or analytical.
5. **Support super custom use cases**: Utilize advanced prompt structures and formatting to meet intricate, project-specific requirements.
This guide explores how to tap into CrewAI's prompts at a lower level, giving you fine-grained control over how agents think and interact.
## Understanding CrewAI's Prompt System
Under the hood, CrewAI employs a modular prompt system that you can customize extensively:
- **Agent templates**: Govern each agent's approach to their assigned role.
- **Prompt slices**: Control specialized behaviors such as tasks, tool usage, and output structure.
- **Error handling**: Direct how agents respond to failures, exceptions, or timeouts.
- **Tool-specific prompts**: Define detailed instructions for how tools are invoked or utilized.
Check out the [original prompt templates in CrewAI's repository](https://github.com/crewAIInc/crewAI/blob/main/src/crewai/translations/en.json) to see how these elements are organized. From there, you can override or adapt them as needed to unlock advanced behaviors.
## Best Practices for Managing Prompt Files
When engaging in low-level prompt customization, follow these guidelines to keep things organized and maintainable:
1. **Keep files separate**: Store your customized prompts in dedicated JSON files outside your main codebase.
2. **Version control**: Track changes within your repository, ensuring clear documentation of prompt adjustments over time.
3. **Organize by model or language**: Use naming schemes like `prompts_llama.json` or `prompts_es.json` to quickly identify specialized configurations.
4. **Document changes**: Provide comments or maintain a README detailing the purpose and scope of your customizations.
5. **Minimize alterations**: Only override the specific slices you genuinely need to adjust, keeping default functionality intact for everything else.
## The Simplest Way to Customize Prompts
One straightforward approach is to create a JSON file for the prompts you want to override and then point your Crew at that file:
1. Craft a JSON file with your updated prompt slices.
2. Reference that file via the `prompt_file` parameter in your Crew.
CrewAI then merges your customizations with the defaults, so you don't have to redefine every prompt. Here's how:
### Example: Basic Prompt Customization
Create a `custom_prompts.json` file with the prompts you want to modify. Make sure the file lists every top-level prompt it should contain, not just your changes:
```json
{
  "slices": {
    "format": "When responding, follow this structure:\n\nTHOUGHTS: Your step-by-step thinking\nACTION: Any tool you're using\nRESULT: Your final answer or conclusion"
  }
}
```
Then integrate it like so:
```python
from crewai import Agent, Crew, Task, Process

# Create agents and tasks as normal
researcher = Agent(
    role="Research Specialist",
    goal="Find information on quantum computing",
    backstory="You are a quantum physics expert",
    verbose=True
)

research_task = Task(
    description="Research quantum computing applications",
    expected_output="A summary of practical applications",
    agent=researcher
)

# Create a crew with your custom prompt file
crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    prompt_file="path/to/custom_prompts.json",
    verbose=True
)

# Run the crew
result = crew.kickoff()
```
With these few edits, you gain low-level control over how your agents communicate and solve tasks.
## Optimizing for Specific Models
Different models thrive on differently structured prompts. Making deeper adjustments can significantly boost performance by aligning your prompts with a model's nuances.
### Example: Llama 3.3 Prompting Template
For instance, when dealing with Meta's Llama 3.3, deeper-level customization may reflect the recommended structure described at:
https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1/#prompt-template
Here's an example to highlight how you might fine-tune an Agent to leverage Llama 3.3 in code:
```python
from crewai import Agent, Crew, Task, Process
from crewai_tools import DirectoryReadTool, FileReadTool

# Define templates for system, user (prompt), and assistant (response) messages
system_template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>{{ .System }}<|eot_id|>"""
prompt_template = """<|start_header_id|>user<|end_header_id|>{{ .Prompt }}<|eot_id|>"""
response_template = """<|start_header_id|>assistant<|end_header_id|>{{ .Response }}<|eot_id|>"""

# Create an Agent using Llama-specific layouts
principal_engineer = Agent(
    role="Principal Engineer",
    goal="Oversee AI architecture and make high-level decisions",
    backstory="You are the lead engineer responsible for critical AI systems",
    verbose=True,
    llm="groq/llama-3.3-70b-versatile",  # Using the Llama 3.3 model
    system_template=system_template,
    prompt_template=prompt_template,
    response_template=response_template,
    tools=[DirectoryReadTool(), FileReadTool()]
)

# Define a sample task
engineering_task = Task(
    description="Review AI implementation files for potential improvements",
    expected_output="A summary of key findings and recommendations",
    agent=principal_engineer
)

# Create a Crew for the task
llama_crew = Crew(
    agents=[principal_engineer],
    tasks=[engineering_task],
    process=Process.sequential,
    verbose=True
)

# Execute the crew
result = llama_crew.kickoff()
print(result.raw)
```
Through this deeper configuration, you can exercise comprehensive, low-level control over your Llama-based workflows without needing a separate JSON file.
## Conclusion
Low-level prompt customization in CrewAI opens the door to super custom, complex use cases. By establishing well-organized prompt files (or direct inline templates), you can accommodate various models, languages, and specialized domains. This level of flexibility ensures you can craft precisely the AI behavior you need, all while knowing CrewAI still provides reliable defaults when you don't override them.
<Check>
You now have the foundation for advanced prompt customizations in CrewAI. Whether you're adapting for model-specific structures or domain-specific constraints, this low-level approach lets you shape agent interactions in highly specialized ways.
</Check>


@@ -0,0 +1,135 @@
---
title: Fingerprinting
description: Learn how to use CrewAI's fingerprinting system to uniquely identify and track components throughout their lifecycle.
icon: fingerprint
---
# Fingerprinting in CrewAI
## Overview
Fingerprints in CrewAI provide a way to uniquely identify and track components throughout their lifecycle. Each `Agent`, `Crew`, and `Task` automatically receives a unique fingerprint when created, which cannot be manually overridden.
These fingerprints can be used for:
- Auditing and tracking component usage
- Ensuring component identity integrity
- Attaching metadata to components
- Creating a traceable chain of operations
## How Fingerprints Work
A fingerprint is an instance of the `Fingerprint` class from the `crewai.security` module. Each fingerprint contains:
- A UUID string: A unique identifier for the component that is automatically generated and cannot be manually set
- A creation timestamp: When the fingerprint was generated, automatically set and cannot be manually modified
- Metadata: A dictionary of additional information that can be customized
Fingerprints are automatically generated and assigned when a component is created. Each component exposes its fingerprint through a read-only property.
## Basic Usage
### Accessing Fingerprints
```python
from crewai import Agent, Crew, Task

# Create components - fingerprints are automatically generated
agent = Agent(
    role="Data Scientist",
    goal="Analyze data",
    backstory="Expert in data analysis"
)

crew = Crew(
    agents=[agent],
    tasks=[]
)

task = Task(
    description="Analyze customer data",
    expected_output="Insights from data analysis",
    agent=agent
)

# Access the fingerprints
agent_fingerprint = agent.fingerprint
crew_fingerprint = crew.fingerprint
task_fingerprint = task.fingerprint

# Print the UUID strings
print(f"Agent fingerprint: {agent_fingerprint.uuid_str}")
print(f"Crew fingerprint: {crew_fingerprint.uuid_str}")
print(f"Task fingerprint: {task_fingerprint.uuid_str}")
```
### Working with Fingerprint Metadata
You can add metadata to fingerprints for additional context:
```python
# Add metadata to the agent's fingerprint
agent.security_config.fingerprint.metadata = {
    "version": "1.0",
    "department": "Data Science",
    "project": "Customer Analysis"
}

# Access the metadata
print(f"Agent metadata: {agent.fingerprint.metadata}")
```
## Fingerprint Persistence
Fingerprints are designed to persist and remain unchanged throughout a component's lifecycle. If you modify a component, the fingerprint remains the same:
```python
original_fingerprint = agent.fingerprint.uuid_str
# Modify the agent
agent.goal = "New goal for analysis"
# The fingerprint remains unchanged
assert agent.fingerprint.uuid_str == original_fingerprint
```
## Deterministic Fingerprints
While you cannot directly set the UUID and creation timestamp, you can create deterministic fingerprints using the `generate` method with a seed:
```python
from crewai.security import Fingerprint
# Create a deterministic fingerprint using a seed string
deterministic_fingerprint = Fingerprint.generate(seed="my-agent-id")
# The same seed always produces the same fingerprint
same_fingerprint = Fingerprint.generate(seed="my-agent-id")
assert deterministic_fingerprint.uuid_str == same_fingerprint.uuid_str
# You can also set metadata
custom_fingerprint = Fingerprint.generate(
    seed="my-agent-id",
    metadata={"version": "1.0"}
)
```
## Advanced Usage
### Fingerprint Structure
Each fingerprint has the following structure:
```python
from crewai.security import Fingerprint
fingerprint = agent.fingerprint
# UUID string - the unique identifier (auto-generated)
uuid_str = fingerprint.uuid_str # e.g., "123e4567-e89b-12d3-a456-426614174000"
# Creation timestamp (auto-generated)
created_at = fingerprint.created_at # A datetime object
# Metadata - for additional information (can be customized)
metadata = fingerprint.metadata # A dictionary, defaults to {}
```


@@ -0,0 +1,454 @@
---
title: Crafting Effective Agents
description: Learn best practices for designing powerful, specialized AI agents that collaborate effectively to solve complex problems.
icon: robot
---
# Crafting Effective Agents
## The Art and Science of Agent Design
At the heart of CrewAI lies the agent - a specialized AI entity designed to perform specific roles within a collaborative framework. While creating basic agents is simple, crafting truly effective agents that produce exceptional results requires understanding key design principles and best practices.
This guide will help you master the art of agent design, enabling you to create specialized AI personas that collaborate effectively, think critically, and produce high-quality outputs tailored to your specific needs.
### Why Agent Design Matters
The way you define your agents significantly impacts:
1. **Output quality**: Well-designed agents produce more relevant, high-quality results
2. **Collaboration effectiveness**: Agents with complementary skills work together more efficiently
3. **Task performance**: Agents with clear roles and goals execute tasks more effectively
4. **System scalability**: Thoughtfully designed agents can be reused across multiple crews and contexts
Let's explore best practices for creating agents that excel in these dimensions.
## The 80/20 Rule: Focus on Tasks Over Agents
When building effective AI systems, remember this crucial principle: **80% of your effort should go into designing tasks, and only 20% into defining agents**.
Why? Because even the most perfectly defined agent will fail with poorly designed tasks, but well-designed tasks can elevate even a simple agent. This means:
- Spend most of your time writing clear task instructions
- Define detailed inputs and expected outputs
- Add examples and context to guide execution
- Dedicate the remaining time to agent role, goal, and backstory
This doesn't mean agent design isn't important - it absolutely is. But task design is where most execution failures occur, so prioritize accordingly.
## Core Principles of Effective Agent Design
### 1. The Role-Goal-Backstory Framework
The most powerful agents in CrewAI are built on a strong foundation of three key elements:
#### Role: The Agent's Specialized Function
The role defines what the agent does and their area of expertise. When crafting roles:
- **Be specific and specialized**: Instead of "Writer," use "Technical Documentation Specialist" or "Creative Storyteller"
- **Align with real-world professions**: Base roles on recognizable professional archetypes
- **Include domain expertise**: Specify the agent's field of knowledge (e.g., "Financial Analyst specializing in market trends")
**Examples of effective roles:**
```yaml
role: "Senior UX Researcher specializing in user interview analysis"
role: "Full-Stack Software Architect with expertise in distributed systems"
role: "Corporate Communications Director specializing in crisis management"
```
#### Goal: The Agent's Purpose and Motivation
The goal directs the agent's efforts and shapes their decision-making process. Effective goals should:
- **Be clear and outcome-focused**: Define what the agent is trying to achieve
- **Emphasize quality standards**: Include expectations about the quality of work
- **Incorporate success criteria**: Help the agent understand what "good" looks like
**Examples of effective goals:**
```yaml
goal: "Uncover actionable user insights by analyzing interview data and identifying recurring patterns, unmet needs, and improvement opportunities"
goal: "Design robust, scalable system architectures that balance performance, maintainability, and cost-effectiveness"
goal: "Craft clear, empathetic crisis communications that address stakeholder concerns while protecting organizational reputation"
```
#### Backstory: The Agent's Experience and Perspective
The backstory gives depth to the agent, influencing how they approach problems and interact with others. Good backstories:
- **Establish expertise and experience**: Explain how the agent gained their skills
- **Define working style and values**: Describe how the agent approaches their work
- **Create a cohesive persona**: Ensure all elements of the backstory align with the role and goal
**Examples of effective backstories:**
```yaml
backstory: "You have spent 15 years conducting and analyzing user research for top tech companies. You have a talent for reading between the lines and identifying patterns that others miss. You believe that good UX is invisible and that the best insights come from listening to what users don't say as much as what they do say."
backstory: "With 20+ years of experience building distributed systems at scale, you've developed a pragmatic approach to software architecture. You've seen both successful and failed systems and have learned valuable lessons from each. You balance theoretical best practices with practical constraints and always consider the maintenance and operational aspects of your designs."
backstory: "As a seasoned communications professional who has guided multiple organizations through high-profile crises, you understand the importance of transparency, speed, and empathy in crisis response. You have a methodical approach to crafting messages that address concerns while maintaining organizational credibility."
```
### 2. Specialists Over Generalists
Agents perform significantly better when given specialized roles rather than general ones. A highly focused agent delivers more precise, relevant outputs:
**Generic (Less Effective):**
```yaml
role: "Writer"
```
**Specialized (More Effective):**
```yaml
role: "Technical Blog Writer specializing in explaining complex AI concepts to non-technical audiences"
```
**Specialist Benefits:**
- Clearer understanding of expected output
- More consistent performance
- Better alignment with specific tasks
- Improved ability to make domain-specific judgments
### 3. Balancing Specialization and Versatility
Effective agents strike the right balance between specialization (doing one thing extremely well) and versatility (being adaptable to various situations):
- **Specialize in role, versatile in application**: Create agents with specialized skills that can be applied across multiple contexts
- **Avoid overly narrow definitions**: Ensure agents can handle variations within their domain of expertise
- **Consider the collaborative context**: Design agents whose specializations complement the other agents they'll work with
### 4. Setting Appropriate Expertise Levels
The expertise level you assign to your agent shapes how they approach tasks:
- **Novice agents**: Good for straightforward tasks, brainstorming, or initial drafts
- **Intermediate agents**: Suitable for most standard tasks with reliable execution
- **Expert agents**: Best for complex, specialized tasks requiring depth and nuance
- **World-class agents**: Reserved for critical tasks where exceptional quality is needed
Choose the appropriate expertise level based on task complexity and quality requirements. For most collaborative crews, a mix of expertise levels often works best, with higher expertise assigned to core specialized functions.
## Practical Examples: Before and After
Let's look at some examples of agent definitions before and after applying these best practices:
### Example 1: Content Creation Agent
**Before:**
```yaml
role: "Writer"
goal: "Write good content"
backstory: "You are a writer who creates content for websites."
```
**After:**
```yaml
role: "B2B Technology Content Strategist"
goal: "Create compelling, technically accurate content that explains complex topics in accessible language while driving reader engagement and supporting business objectives"
backstory: "You have spent a decade creating content for leading technology companies, specializing in translating technical concepts for business audiences. You excel at research, interviewing subject matter experts, and structuring information for maximum clarity and impact. You believe that the best B2B content educates first and sells second, building trust through genuine expertise rather than marketing hype."
```
### Example 2: Research Agent
**Before:**
```yaml
role: "Researcher"
goal: "Find information"
backstory: "You are good at finding information online."
```
**After:**
```yaml
role: "Academic Research Specialist in Emerging Technologies"
goal: "Discover and synthesize cutting-edge research, identifying key trends, methodologies, and findings while evaluating the quality and reliability of sources"
backstory: "With a background in both computer science and library science, you've mastered the art of digital research. You've worked with research teams at prestigious universities and know how to navigate academic databases, evaluate research quality, and synthesize findings across disciplines. You're methodical in your approach, always cross-referencing information and tracing claims to primary sources before drawing conclusions."
```
## Crafting Effective Tasks for Your Agents
While agent design is important, task design is critical for successful execution. Here are best practices for designing tasks that set your agents up for success:
### The Anatomy of an Effective Task
A well-designed task has two key components that serve different purposes:
#### Task Description: The Process
The description should focus on what to do and how to do it, including:
- Detailed instructions for execution
- Context and background information
- Scope and constraints
- Process steps to follow
#### Expected Output: The Deliverable
The expected output should define what the final result should look like:
- Format specifications (markdown, JSON, etc.)
- Structure requirements
- Quality criteria
- Examples of good outputs (when possible)
### Task Design Best Practices
#### 1. Single Purpose, Single Output
Tasks perform best when focused on one clear objective:
**Bad Example (Too Broad):**
```yaml
task_description: "Research market trends, analyze the data, and create a visualization."
```
**Good Example (Focused):**
```yaml
# Task 1
research_task:
  description: "Research the top 5 market trends in the AI industry for 2024."
  expected_output: "A markdown list of the 5 trends with supporting evidence."

# Task 2
analysis_task:
  description: "Analyze the identified trends to determine potential business impacts."
  expected_output: "A structured analysis with impact ratings (High/Medium/Low)."

# Task 3
visualization_task:
  description: "Create a visual representation of the analyzed trends."
  expected_output: "A description of a chart showing trends and their impact ratings."
```
#### 2. Be Explicit About Inputs and Outputs
Always clearly specify what inputs the task will use and what the output should look like:
**Example:**
```yaml
analysis_task:
  description: >
    Analyze the customer feedback data from the CSV file.
    Focus on identifying recurring themes related to product usability.
    Consider sentiment and frequency when determining importance.
  expected_output: >
    A markdown report with the following sections:
    1. Executive summary (3-5 bullet points)
    2. Top 3 usability issues with supporting data
    3. Recommendations for improvement
```
#### 3. Include Purpose and Context
Explain why the task matters and how it fits into the larger workflow:
**Example:**
```yaml
competitor_analysis_task:
  description: >
    Analyze our three main competitors' pricing strategies.
    This analysis will inform our upcoming pricing model revision.
    Focus on identifying patterns in how they price premium features
    and how they structure their tiered offerings.
```
#### 4. Use Structured Output Tools
For machine-readable outputs, specify the format clearly:
**Example:**
```yaml
data_extraction_task:
  description: "Extract key metrics from the quarterly report."
  expected_output: "JSON object with the following keys: revenue, growth_rate, customer_acquisition_cost, and retention_rate."
```
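Beyond describing the format in `expected_output`, you can enforce it in code. Here is a minimal sketch, assuming CrewAI's `output_pydantic` task parameter (the agent definition is illustrative):
```python
from crewai import Agent, Task
from pydantic import BaseModel

class QuarterlyMetrics(BaseModel):
    revenue: float
    growth_rate: float
    customer_acquisition_cost: float
    retention_rate: float

analyst = Agent(
    role="Financial Reporting Analyst",
    goal="Extract precise metrics from quarterly reports",
    backstory="You turn long financial documents into clean, structured data."
)

data_extraction_task = Task(
    description="Extract key metrics from the quarterly report.",
    expected_output="JSON object with revenue, growth_rate, customer_acquisition_cost, and retention_rate.",
    agent=analyst,
    output_pydantic=QuarterlyMetrics  # validates the result against the schema
)
```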
## Common Mistakes to Avoid
Based on lessons learned from real-world implementations, here are the most common pitfalls in agent and task design:
### 1. Unclear Task Instructions
**Problem:** Tasks lack sufficient detail, making it difficult for agents to execute effectively.
**Example of Poor Design:**
```yaml
research_task:
  description: "Research AI trends."
  expected_output: "A report on AI trends."
```
**Improved Version:**
```yaml
research_task:
  description: >
    Research the top emerging AI trends for 2024 with a focus on:
    1. Enterprise adoption patterns
    2. Technical breakthroughs in the past 6 months
    3. Regulatory developments affecting implementation
    For each trend, identify key companies, technologies, and potential business impacts.
  expected_output: >
    A comprehensive markdown report with:
    - Executive summary (5 bullet points)
    - 5-7 major trends with supporting evidence
    - For each trend: definition, examples, and business implications
    - References to authoritative sources
```
### 2. "God Tasks" That Try to Do Too Much
**Problem:** Tasks that combine multiple complex operations into one instruction set.
**Example of Poor Design:**
```yaml
comprehensive_task:
  description: "Research market trends, analyze competitor strategies, create a marketing plan, and design a launch timeline."
```
**Improved Version:**
Break this into sequential, focused tasks:
```yaml
# Task 1: Research
market_research_task:
  description: "Research current market trends in the SaaS project management space."
  expected_output: "A markdown summary of key market trends."

# Task 2: Competitive Analysis
competitor_analysis_task:
  description: "Analyze strategies of the top 3 competitors based on the market research."
  expected_output: "A comparison table of competitor strategies."
  context: [market_research_task]

# Continue with additional focused tasks...
```
### 3. Misaligned Description and Expected Output
**Problem:** The task description asks for one thing while the expected output specifies something different.
**Example of Poor Design:**
```yaml
analysis_task:
  description: "Analyze customer feedback to find areas of improvement."
  expected_output: "A marketing plan for the next quarter."
```
**Improved Version:**
```yaml
analysis_task:
  description: "Analyze customer feedback to identify the top 3 areas for product improvement."
  expected_output: "A report listing the 3 priority improvement areas with supporting customer quotes and data points."
```
### 4. Not Understanding the Process Yourself
**Problem:** Asking agents to execute tasks that you yourself don't fully understand.
**Solution:**
1. Try to perform the task manually first
2. Document your process, decision points, and information sources
3. Use this documentation as the basis for your task description
### 5. Premature Use of Hierarchical Structures
**Problem:** Creating unnecessarily complex agent hierarchies where sequential processes would work better.
**Solution:** Start with sequential processes and only move to hierarchical models when the workflow complexity truly requires it.
### 6. Vague or Generic Agent Definitions
**Problem:** Generic agent definitions lead to generic outputs.
**Example of Poor Design:**
```yaml
agent:
  role: "Business Analyst"
  goal: "Analyze business data"
  backstory: "You are good at business analysis."
```
**Improved Version:**
```yaml
agent:
  role: "SaaS Metrics Specialist focusing on growth-stage startups"
  goal: "Identify actionable insights from business data that can directly impact customer retention and revenue growth"
  backstory: "With 10+ years analyzing SaaS business models, you've developed a keen eye for the metrics that truly matter for sustainable growth. You've helped numerous companies identify the leverage points that turned around their business trajectory. You believe in connecting data to specific, actionable recommendations rather than general observations."
```
## Advanced Agent Design Strategies
### Designing for Collaboration
When creating agents that will work together in a crew, consider:
- **Complementary skills**: Design agents with distinct but complementary abilities
- **Handoff points**: Define clear interfaces for how work passes between agents
- **Constructive tension**: Sometimes, creating agents with slightly different perspectives can lead to better outcomes through productive dialogue
For example, a content creation crew might include:
```yaml
# Research Agent
role: "Research Specialist for technical topics"
goal: "Gather comprehensive, accurate information from authoritative sources"
backstory: "You are a meticulous researcher with a background in library science..."

# Writer Agent
role: "Technical Content Writer"
goal: "Transform research into engaging, clear content that educates and informs"
backstory: "You are an experienced writer who excels at explaining complex concepts..."

# Editor Agent
role: "Content Quality Editor"
goal: "Ensure content is accurate, well-structured, and polished while maintaining consistency"
backstory: "With years of experience in publishing, you have a keen eye for detail..."
```
### Creating Specialized Tool Users
Some agents can be designed specifically to leverage certain tools effectively:
```yaml
role: "Data Analysis Specialist"
goal: "Derive meaningful insights from complex datasets through statistical analysis"
backstory: "With a background in data science, you excel at working with structured and unstructured data..."
tools: [PythonREPLTool, DataVisualizationTool, CSVAnalysisTool]
```
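The tool names in the YAML above are illustrative placeholders. In Python, the same idea might look like this sketch, using tools that ship with `crewai_tools` (seen earlier in this document):
```python
from crewai import Agent
from crewai_tools import DirectoryReadTool, FileReadTool

data_analyst = Agent(
    role="Data Analysis Specialist",
    goal="Derive meaningful insights from complex datasets through statistical analysis",
    backstory="With a background in data science, you excel at working with structured and unstructured data.",
    tools=[DirectoryReadTool(), FileReadTool()]  # the tools this agent is designed around
)
```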
### Tailoring Agents to LLM Capabilities
Different LLMs have different strengths. Design your agents with these capabilities in mind:
```yaml
# For complex reasoning tasks
analyst:
  role: "Data Insights Analyst"
  goal: "..."
  backstory: "..."
  llm: openai/gpt-4o

# For creative content
writer:
  role: "Creative Content Writer"
  goal: "..."
  backstory: "..."
  llm: anthropic/claude-3-opus
```
## Testing and Iterating on Agent Design
Agent design is often an iterative process. Here's a practical approach:
1. **Start with a prototype**: Create an initial agent definition
2. **Test with sample tasks**: Evaluate performance on representative tasks
3. **Analyze outputs**: Identify strengths and weaknesses
4. **Refine the definition**: Adjust role, goal, and backstory based on observations
5. **Test in collaboration**: Evaluate how the agent performs in a crew setting
## Conclusion
Crafting effective agents is both an art and a science. By carefully defining roles, goals, and backstories that align with your specific needs, and combining them with well-designed tasks, you can create specialized AI collaborators that produce exceptional results.
Remember that agent and task design is an iterative process. Start with these best practices, observe your agents in action, and refine your approach based on what you learn. And always keep in mind the 80/20 rule - focus most of your effort on creating clear, focused tasks to get the best results from your agents.
<Check>
Congratulations! You now understand the principles and practices of effective agent design. Apply these techniques to create powerful, specialized agents that work together seamlessly to accomplish complex tasks.
</Check>
## Next Steps
- Experiment with different agent configurations for your specific use case
- Learn about [building your first crew](/guides/crews/first-crew) to see how agents work together
- Explore [CrewAI Flows](/guides/flows/first-flow) for more advanced orchestration


@@ -0,0 +1,505 @@
---
title: Evaluating Use Cases for CrewAI
description: Learn how to assess your AI application needs and choose the right approach between Crews and Flows based on complexity and precision requirements.
icon: scale-balanced
---
# Evaluating Use Cases for CrewAI
## Understanding the Decision Framework
When building AI applications with CrewAI, one of the most important decisions you'll make is choosing the right approach for your specific use case. Should you use a Crew? A Flow? A combination of both? This guide will help you evaluate your requirements and make informed architectural decisions.
At the heart of this decision is understanding the relationship between **complexity** and **precision** in your application:
<Frame caption="Complexity vs. Precision Matrix for CrewAI Applications">
<img src="../..//complexity_precision.png" alt="Complexity vs. Precision Matrix" />
</Frame>
This matrix helps visualize how different approaches align with varying requirements for complexity and precision. Let's explore what each quadrant means and how it guides your architectural choices.
## The Complexity-Precision Matrix Explained
### What is Complexity?
In the context of CrewAI applications, **complexity** refers to:
- The number of distinct steps or operations required
- The diversity of tasks that need to be performed
- The interdependencies between different components
- The need for conditional logic and branching
- The sophistication of the overall workflow
### What is Precision?
**Precision** in this context refers to:
- The accuracy required in the final output
- The need for structured, predictable results
- The importance of reproducibility
- The level of control needed over each step
- The tolerance for variation in outputs
### The Four Quadrants
#### 1. Low Complexity, Low Precision
**Characteristics:**
- Simple, straightforward tasks
- Tolerance for some variation in outputs
- Limited number of steps
- Creative or exploratory applications
**Recommended Approach:** Simple Crews with minimal agents
**Example Use Cases:**
- Basic content generation
- Idea brainstorming
- Simple summarization tasks
- Creative writing assistance
#### 2. Low Complexity, High Precision
**Characteristics:**
- Simple workflows that require exact, structured outputs
- Need for reproducible results
- Limited steps but high accuracy requirements
- Often involves data processing or transformation
**Recommended Approach:** Flows with direct LLM calls or simple Crews with structured outputs
**Example Use Cases:**
- Data extraction and transformation
- Form filling and validation
- Structured content generation (JSON, XML)
- Simple classification tasks
#### 3. High Complexity, Low Precision
**Characteristics:**
- Multi-stage processes with many steps
- Creative or exploratory outputs
- Complex interactions between components
- Tolerance for variation in final results
**Recommended Approach:** Complex Crews with multiple specialized agents
**Example Use Cases:**
- Research and analysis
- Content creation pipelines
- Exploratory data analysis
- Creative problem-solving
#### 4. High Complexity, High Precision
**Characteristics:**
- Complex workflows requiring structured outputs
- Multiple interdependent steps with strict accuracy requirements
- Need for both sophisticated processing and precise results
- Often mission-critical applications
**Recommended Approach:** Flows orchestrating multiple Crews with validation steps
**Example Use Cases:**
- Enterprise decision support systems
- Complex data processing pipelines
- Multi-stage document processing
- Regulated industry applications
## Choosing Between Crews and Flows
### When to Choose Crews
Crews are ideal when:
1. **You need collaborative intelligence** - Multiple agents with different specializations need to work together
2. **The problem requires emergent thinking** - The solution benefits from different perspectives and approaches
3. **The task is primarily creative or analytical** - The work involves research, content creation, or analysis
4. **You value adaptability over strict structure** - The workflow can benefit from agent autonomy
5. **The output format can be somewhat flexible** - Some variation in output structure is acceptable
```python
# Example: Research Crew for market analysis
from crewai import Agent, Crew, Process, Task

# Create specialized agents
researcher = Agent(
    role="Market Research Specialist",
    goal="Find comprehensive market data on emerging technologies",
    backstory="You are an expert at discovering market trends and gathering data."
)

analyst = Agent(
    role="Market Analyst",
    goal="Analyze market data and identify key opportunities",
    backstory="You excel at interpreting market data and spotting valuable insights."
)

# Define their tasks
research_task = Task(
    description="Research the current market landscape for AI-powered healthcare solutions",
    expected_output="Comprehensive market data including key players, market size, and growth trends",
    agent=researcher
)

analysis_task = Task(
    description="Analyze the market data and identify the top 3 investment opportunities",
    expected_output="Analysis report with 3 recommended investment opportunities and rationale",
    agent=analyst,
    context=[research_task]
)

# Create the crew
market_analysis_crew = Crew(
    agents=[researcher, analyst],
    tasks=[research_task, analysis_task],
    process=Process.sequential,
    verbose=True
)

# Run the crew
result = market_analysis_crew.kickoff()
```
### When to Choose Flows
Flows are ideal when:
1. **You need precise control over execution** - The workflow requires exact sequencing and state management
2. **The application has complex state requirements** - You need to maintain and transform state across multiple steps
3. **You need structured, predictable outputs** - The application requires consistent, formatted results
4. **The workflow involves conditional logic** - Different paths need to be taken based on intermediate results
5. **You need to combine AI with procedural code** - The solution requires both AI capabilities and traditional programming
```python
# Example: Customer Support Flow with structured processing
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel

# Define structured state
class SupportTicketState(BaseModel):
    ticket_id: str = ""
    customer_name: str = ""
    issue_description: str = ""
    category: str = ""
    priority: str = "medium"
    resolution: str = ""
    satisfaction_score: int = 0

class CustomerSupportFlow(Flow[SupportTicketState]):
    @start()
    def receive_ticket(self):
        # In a real app, this might come from an API
        self.state.ticket_id = "TKT-12345"
        self.state.customer_name = "Alex Johnson"
        self.state.issue_description = "Unable to access premium features after payment"
        return "Ticket received"

    @listen(receive_ticket)
    def categorize_ticket(self, _):
        # Use a direct LLM call for categorization
        from crewai import LLM
        llm = LLM(model="openai/gpt-4o-mini")

        prompt = f"""
        Categorize the following customer support issue into one of these categories:
        - Billing
        - Account Access
        - Technical Issue
        - Feature Request
        - Other

        Issue: {self.state.issue_description}

        Return only the category name.
        """
        self.state.category = llm.call(prompt).strip()
        return self.state.category

    @router(categorize_ticket)
    def route_by_category(self, category):
        # Route to different handlers based on category
        return category.lower().replace(" ", "_")

    @listen("billing")
    def handle_billing_issue(self):
        # Handle billing-specific logic
        self.state.priority = "high"
        # More billing-specific processing...
        return "Billing issue handled"

    @listen("account_access")
    def handle_access_issue(self):
        # Handle access-specific logic
        self.state.priority = "high"
        # More access-specific processing...
        return "Access issue handled"

    # Additional category handlers...

    @listen("billing", "account_access", "technical_issue", "feature_request", "other")
    def resolve_ticket(self, resolution_info):
        # Final resolution step
        self.state.resolution = f"Issue resolved: {resolution_info}"
        return self.state.resolution

# Run the flow
support_flow = CustomerSupportFlow()
result = support_flow.kickoff()
```
### When to Combine Crews and Flows
The most sophisticated applications often benefit from combining Crews and Flows:
1. **Complex multi-stage processes** - Use Flows to orchestrate the overall process and Crews for complex subtasks
2. **Applications requiring both creativity and structure** - Use Crews for creative tasks and Flows for structured processing
3. **Enterprise-grade AI applications** - Use Flows to manage state and process flow while leveraging Crews for specialized work
```python
# Example: Content Production Pipeline combining Crews and Flows
from crewai.flow.flow import Flow, listen, start
from crewai import Agent, Crew, Process, Task
from pydantic import BaseModel
from typing import Dict

class ContentState(BaseModel):
    topic: str = ""
    target_audience: str = ""
    content_type: str = ""
    outline: Dict = {}
    draft_content: str = ""
    final_content: str = ""
    seo_score: int = 0

class ContentProductionFlow(Flow[ContentState]):
    @start()
    def initialize_project(self):
        # Set initial parameters
        self.state.topic = "Sustainable Investing"
        self.state.target_audience = "Millennial Investors"
        self.state.content_type = "Blog Post"
        return "Project initialized"

    @listen(initialize_project)
    def create_outline(self, _):
        # Use a research crew to create an outline
        researcher = Agent(
            role="Content Researcher",
            goal=f"Research {self.state.topic} for {self.state.target_audience}",
            backstory="You are an expert researcher with deep knowledge of content creation."
        )

        outliner = Agent(
            role="Content Strategist",
            goal=f"Create an engaging outline for a {self.state.content_type}",
            backstory="You excel at structuring content for maximum engagement."
        )

        research_task = Task(
            description=f"Research {self.state.topic} focusing on what would interest {self.state.target_audience}",
            expected_output="Comprehensive research notes with key points and statistics",
            agent=researcher
        )

        outline_task = Task(
            description=f"Create an outline for a {self.state.content_type} about {self.state.topic}",
            expected_output="Detailed content outline with sections and key points",
            agent=outliner,
            context=[research_task]
        )

        outline_crew = Crew(
            agents=[researcher, outliner],
            tasks=[research_task, outline_task],
            process=Process.sequential,
            verbose=True
        )

        # Run the crew and store the result
        result = outline_crew.kickoff()

        # Parse the outline (in a real app, you might use a more robust parsing approach)
        import json
        try:
            self.state.outline = json.loads(result.raw)
        except json.JSONDecodeError:
            # Fallback if not valid JSON
            self.state.outline = {"sections": result.raw}

        return "Outline created"

    @listen(create_outline)
    def write_content(self, _):
        # Use a writing crew to create the content
        writer = Agent(
            role="Content Writer",
            goal=f"Write engaging content for {self.state.target_audience}",
            backstory="You are a skilled writer who creates compelling content."
        )

        editor = Agent(
            role="Content Editor",
            goal="Ensure content is polished, accurate, and engaging",
            backstory="You have a keen eye for detail and a talent for improving content."
        )

        writing_task = Task(
            description=f"Write a {self.state.content_type} about {self.state.topic} following this outline: {self.state.outline}",
            expected_output="Complete draft content in markdown format",
            agent=writer
        )

        editing_task = Task(
            description="Edit and improve the draft content for clarity, engagement, and accuracy",
            expected_output="Polished final content in markdown format",
            agent=editor,
            context=[writing_task]
        )

        writing_crew = Crew(
            agents=[writer, editor],
            tasks=[writing_task, editing_task],
            process=Process.sequential,
            verbose=True
        )

        # Run the crew and store the result
        result = writing_crew.kickoff()
        self.state.final_content = result.raw
        return "Content created"

    @listen(write_content)
    def optimize_for_seo(self, _):
        # Use a direct LLM call for SEO optimization
        from crewai import LLM
        llm = LLM(model="openai/gpt-4o-mini")

        prompt = f"""
        Analyze this content for SEO effectiveness for the keyword "{self.state.topic}".
        Rate it on a scale of 1-100 and provide 3 specific recommendations for improvement.

        Content: {self.state.final_content[:1000]}... (truncated for brevity)

        Format your response as JSON with the following structure:
        {{
            "score": 85,
            "recommendations": [
                "Recommendation 1",
                "Recommendation 2",
                "Recommendation 3"
            ]
        }}
        """
        seo_analysis = llm.call(prompt)

        # Parse the SEO analysis
        import json
        try:
            analysis = json.loads(seo_analysis)
            self.state.seo_score = analysis.get("score", 0)
            return analysis
        except json.JSONDecodeError:
            self.state.seo_score = 50
            return {"score": 50, "recommendations": ["Unable to parse SEO analysis"]}

# Run the flow
content_flow = ContentProductionFlow()
result = content_flow.kickoff()
```
## Practical Evaluation Framework
To determine the right approach for your specific use case, follow this step-by-step evaluation framework:
### Step 1: Assess Complexity
Rate your application's complexity on a scale of 1-10 by considering:
1. **Number of steps**: How many distinct operations are required?
   - 1-3 steps: Low complexity (1-3)
   - 4-7 steps: Medium complexity (4-7)
   - 8+ steps: High complexity (8-10)
2. **Interdependencies**: How interconnected are the different parts?
   - Few dependencies: Low complexity (1-3)
   - Some dependencies: Medium complexity (4-7)
   - Many complex dependencies: High complexity (8-10)
3. **Conditional logic**: How much branching and decision-making is needed?
   - Linear process: Low complexity (1-3)
   - Some branching: Medium complexity (4-7)
   - Complex decision trees: High complexity (8-10)
4. **Domain knowledge**: How specialized is the knowledge required?
   - General knowledge: Low complexity (1-3)
   - Some specialized knowledge: Medium complexity (4-7)
   - Deep expertise in multiple domains: High complexity (8-10)
Calculate your average score to determine overall complexity.
### Step 2: Assess Precision Requirements
Rate your precision requirements on a scale of 1-10 by considering:
1. **Output structure**: How structured must the output be?
   - Free-form text: Low precision (1-3)
   - Semi-structured: Medium precision (4-7)
   - Strictly formatted (JSON, XML): High precision (8-10)
2. **Accuracy needs**: How important is factual accuracy?
   - Creative content: Low precision (1-3)
   - Informational content: Medium precision (4-7)
   - Critical information: High precision (8-10)
3. **Reproducibility**: How consistent must results be across runs?
   - Variation acceptable: Low precision (1-3)
   - Some consistency needed: Medium precision (4-7)
   - Exact reproducibility required: High precision (8-10)
4. **Error tolerance**: What is the impact of errors?
   - Low impact: Low precision (1-3)
   - Moderate impact: Medium precision (4-7)
   - High impact: High precision (8-10)
Calculate your average score to determine overall precision requirements.
### Step 3: Map to the Matrix
Plot your complexity and precision scores on the matrix:
- **Low Complexity (1-4), Low Precision (1-4)**: Simple Crews
- **Low Complexity (1-4), High Precision (5-10)**: Flows with direct LLM calls
- **High Complexity (5-10), Low Precision (1-4)**: Complex Crews
- **High Complexity (5-10), High Precision (5-10)**: Flows orchestrating Crews
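As a worked example, this mapping can be expressed as a small helper function (the 5-and-above threshold mirrors the bands above; the sample ratings are illustrative):
```python
def recommend_approach(complexity: float, precision: float) -> str:
    """Map average complexity/precision scores (1-10) to a suggested architecture."""
    high_complexity = complexity >= 5
    high_precision = precision >= 5
    if high_complexity and high_precision:
        return "Flows orchestrating Crews"
    if high_complexity:
        return "Complex Crews"
    if high_precision:
        return "Flows with direct LLM calls"
    return "Simple Crews"

# Four ratings each from Step 1 and Step 2
complexity = sum([3, 4, 4, 5]) / 4  # 4.0 -> low complexity
precision = sum([8, 7, 9, 8]) / 4   # 8.0 -> high precision
print(recommend_approach(complexity, precision))  # Flows with direct LLM calls
```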
### Step 4: Consider Additional Factors
Beyond complexity and precision, consider:
1. **Development time**: Crews are often faster to prototype
2. **Maintenance needs**: Flows provide better long-term maintainability
3. **Team expertise**: Consider your team's familiarity with different approaches
4. **Scalability requirements**: Flows typically scale better for complex applications
5. **Integration needs**: Consider how the solution will integrate with existing systems
## Conclusion
Choosing between Crews and Flows—or combining them—is a critical architectural decision that impacts the effectiveness, maintainability, and scalability of your CrewAI application. By evaluating your use case along the dimensions of complexity and precision, you can make informed decisions that align with your specific requirements.
Remember that the best approach often evolves as your application matures. Start with the simplest solution that meets your needs, and be prepared to refine your architecture as you gain experience and your requirements become clearer.
<Check>
You now have a framework for evaluating CrewAI use cases and choosing the right approach based on complexity and precision requirements. This will help you build more effective, maintainable, and scalable AI applications.
</Check>
## Next Steps
- Learn more about [crafting effective agents](/guides/agents/crafting-effective-agents)
- Explore [building your first crew](/guides/crews/first-crew)
- Dive into [mastering flow state management](/guides/flows/mastering-flow-state)
- Check out the [core concepts](/concepts/agents) for deeper understanding

View File

@@ -0,0 +1,397 @@
---
title: Build Your First Crew
description: Step-by-step tutorial to create a collaborative AI team that works together to solve complex problems.
icon: users-gear
---
# Build Your First Crew
## Unleashing the Power of Collaborative AI
Imagine having a team of specialized AI agents working together seamlessly to solve complex problems, each contributing their unique skills to achieve a common goal. This is the power of CrewAI - a framework that enables you to create collaborative AI systems that can accomplish tasks far beyond what a single AI could achieve alone.
In this guide, we'll walk through creating a research crew that will help us research and analyze a topic, then create a comprehensive report. This practical example demonstrates how AI agents can collaborate to accomplish complex tasks, but it's just the beginning of what's possible with CrewAI.
### What You'll Build and Learn
By the end of this guide, you'll have:
1. **Created a specialized AI research team** with distinct roles and responsibilities
2. **Orchestrated collaboration** between multiple AI agents
3. **Automated a complex workflow** that involves gathering information, analysis, and report generation
4. **Built foundational skills** that you can apply to more ambitious projects
While we're building a simple research crew in this guide, the same patterns and techniques can be applied to create much more sophisticated teams for tasks like:
- Multi-stage content creation with specialized writers, editors, and fact-checkers
- Complex customer service systems with tiered support agents
- Autonomous business analysts that gather data, create visualizations, and generate insights
- Product development teams that ideate, design, and plan implementation
Let's get started building your first crew!
### Prerequisites
Before starting, make sure you have:
1. Installed CrewAI following the [installation guide](/installation)
2. Set up your LLM API key in your environment, following the [LLM setup
guide](/concepts/llms#setting-up-your-llm)
3. Basic understanding of Python
## Step 1: Create a New CrewAI Project
First, let's create a new CrewAI project using the CLI. This command will set up a complete project structure with all the necessary files, allowing you to focus on defining your agents and their tasks rather than setting up boilerplate code.
```bash
crewai create crew research_crew
cd research_crew
```
This will generate a project with the basic structure needed for your crew. The CLI automatically creates:
- A project directory with the necessary files
- Configuration files for agents and tasks
- A basic crew implementation
- A main script to run the crew
<Frame caption="CrewAI Framework Overview">
<img src="../../crews.png" alt="CrewAI Framework Overview" />
</Frame>
## Step 2: Explore the Project Structure
Let's take a moment to understand the project structure created by the CLI. CrewAI follows best practices for Python projects, making it easy to maintain and extend your code as your crews become more complex.
```
research_crew/
├── .gitignore
├── pyproject.toml
├── README.md
├── .env
└── src/
└── research_crew/
├── __init__.py
├── main.py
├── crew.py
├── tools/
│ ├── custom_tool.py
│ └── __init__.py
└── config/
├── agents.yaml
└── tasks.yaml
```
This structure follows best practices for Python projects and makes it easy to organize your code. The separation of configuration files (in YAML) from implementation code (in Python) makes it easy to modify your crew's behavior without changing the underlying code.
## Step 3: Configure Your Agents
Now comes the fun part - defining your AI agents! In CrewAI, agents are specialized entities with specific roles, goals, and backstories that shape their behavior. Think of them as characters in a play, each with their own personality and purpose.
For our research crew, we'll create two agents:
1. A **researcher** who excels at finding and organizing information
2. An **analyst** who can interpret research findings and create insightful reports
Let's modify the `agents.yaml` file to define these specialized agents. Be sure
to set `llm` to the provider you are using.
```yaml
# src/research_crew/config/agents.yaml
researcher:
role: >
Senior Research Specialist for {topic}
goal: >
Find comprehensive and accurate information about {topic}
with a focus on recent developments and key insights
backstory: >
You are an experienced research specialist with a talent for
finding relevant information from various sources. You excel at
organizing information in a clear and structured manner, making
complex topics accessible to others.
llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...
analyst:
role: >
Data Analyst and Report Writer for {topic}
goal: >
Analyze research findings and create a comprehensive, well-structured
report that presents insights in a clear and engaging way
backstory: >
You are a skilled analyst with a background in data interpretation
and technical writing. You have a talent for identifying patterns
and extracting meaningful insights from research data, then
communicating those insights effectively through well-crafted reports.
llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...
```
Notice how each agent has a distinct role, goal, and backstory. These elements aren't just descriptive - they actively shape how the agent approaches its tasks. By crafting these carefully, you can create agents with specialized skills and perspectives that complement each other.
## Step 4: Define Your Tasks
With our agents defined, we now need to give them specific tasks to perform. Tasks in CrewAI represent the concrete work that agents will perform, with detailed instructions and expected outputs.
For our research crew, we'll define two main tasks:
1. A **research task** for gathering comprehensive information
2. An **analysis task** for creating an insightful report
Let's modify the `tasks.yaml` file:
```yaml
# src/research_crew/config/tasks.yaml
research_task:
description: >
Conduct thorough research on {topic}. Focus on:
1. Key concepts and definitions
2. Historical development and recent trends
3. Major challenges and opportunities
4. Notable applications or case studies
5. Future outlook and potential developments
Make sure to organize your findings in a structured format with clear sections.
expected_output: >
A comprehensive research document with well-organized sections covering
all the requested aspects of {topic}. Include specific facts, figures,
and examples where relevant.
agent: researcher
analysis_task:
description: >
Analyze the research findings and create a comprehensive report on {topic}.
Your report should:
1. Begin with an executive summary
2. Include all key information from the research
3. Provide insightful analysis of trends and patterns
4. Offer recommendations or future considerations
5. Be formatted in a professional, easy-to-read style with clear headings
expected_output: >
A polished, professional report on {topic} that presents the research
findings with added analysis and insights. The report should be well-structured
with an executive summary, main sections, and conclusion.
agent: analyst
context:
- research_task
output_file: output/report.md
```
Note the `context` field in the analysis task - this is a powerful feature that allows the analyst to access the output of the research task. This creates a workflow where information flows naturally between agents, just as it would in a human team.
## Step 5: Configure Your Crew
Now it's time to bring everything together by configuring our crew. The crew is the container that orchestrates how agents work together to complete tasks.
Let's modify the `crew.py` file:
```python
# src/research_crew/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List
@CrewBase
class ResearchCrew():
"""Research crew for comprehensive topic analysis and reporting"""
agents: List[BaseAgent]
tasks: List[Task]
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'], # type: ignore[index]
verbose=True,
tools=[SerperDevTool()]
)
@agent
def analyst(self) -> Agent:
return Agent(
config=self.agents_config['analyst'], # type: ignore[index]
verbose=True
)
@task
def research_task(self) -> Task:
return Task(
config=self.tasks_config['research_task'] # type: ignore[index]
)
@task
def analysis_task(self) -> Task:
return Task(
config=self.tasks_config['analysis_task'], # type: ignore[index]
output_file='output/report.md'
)
@crew
def crew(self) -> Crew:
"""Creates the research crew"""
return Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
verbose=True,
)
```
In this code, we're:
1. Creating the researcher agent and equipping it with the SerperDevTool to search the web
2. Creating the analyst agent
3. Setting up the research and analysis tasks
4. Configuring the crew to run tasks sequentially (the analyst will wait for the researcher to finish)
This is where the magic happens - with just a few lines of code, we've defined a collaborative AI system where specialized agents work together in a coordinated process.
## Step 6: Set Up Your Main Script
Now, let's set up the main script that will run our crew. This is where we provide the specific topic we want our crew to research.
```python
#!/usr/bin/env python
# src/research_crew/main.py
import os
from research_crew.crew import ResearchCrew
# Create output directory if it doesn't exist
os.makedirs('output', exist_ok=True)
def run():
"""
Run the research crew.
"""
inputs = {
'topic': 'Artificial Intelligence in Healthcare'
}
# Create and run the crew
result = ResearchCrew().crew().kickoff(inputs=inputs)
# Print the result
print("\n\n=== FINAL REPORT ===\n\n")
print(result.raw)
print("\n\nReport has been saved to output/report.md")
if __name__ == "__main__":
run()
```
This script prepares the environment, specifies our research topic, and kicks off the crew's work. The power of CrewAI is evident in how simple this code is - all the complexity of managing multiple AI agents is handled by the framework.
## Step 7: Set Up Your Environment Variables
Create a `.env` file in your project root with your API keys:
```sh
SERPER_API_KEY=your_serper_api_key
# Add your provider's API key here too.
```
See the [LLM Setup guide](/concepts/llms#setting-up-your-llm) for details on configuring your provider of choice. You can get a Serper API key from [Serper.dev](https://serper.dev/).
## Step 8: Install Dependencies
Install the required dependencies using the CrewAI CLI:
```bash
crewai install
```
This command will:
1. Read the dependencies from your project configuration
2. Create a virtual environment if needed
3. Install all required packages
## Step 9: Run Your Crew
Now for the exciting moment - it's time to run your crew and see AI collaboration in action!
```bash
crewai run
```
When you run this command, you'll see your crew spring to life. The researcher will gather information about the specified topic, and the analyst will then create a comprehensive report based on that research. You'll see the agents' thought processes, actions, and outputs in real-time as they work together to complete their tasks.
## Step 10: Review the Output
Once the crew completes its work, you'll find the final report in the `output/report.md` file. The report will include:
1. An executive summary
2. Detailed information about the topic
3. Analysis and insights
4. Recommendations or future considerations
Take a moment to appreciate what you've accomplished - you've created a system where multiple AI agents collaborated on a complex task, each contributing their specialized skills to produce a result that's greater than what any single agent could achieve alone.
## Exploring Other CLI Commands
CrewAI offers several other useful CLI commands for working with crews:
```bash
# View all available commands
crewai --help
# Run the crew
crewai run
# Test the crew
crewai test
# Reset crew memories
crewai reset-memories
# Replay from a specific task
crewai replay -t <task_id>
```
## The Art of the Possible: Beyond Your First Crew
What you've built in this guide is just the beginning. The skills and patterns you've learned can be applied to create increasingly sophisticated AI systems. Here are some ways you could extend this basic research crew:
### Expanding Your Crew
You could add more specialized agents to your crew:
- A **fact-checker** to verify research findings
- A **data visualizer** to create charts and graphs
- A **domain expert** with specialized knowledge in a particular area
- A **critic** to identify weaknesses in the analysis
### Adding Tools and Capabilities
You could enhance your agents with additional tools:
- Web browsing tools for real-time research
- CSV/database tools for data analysis
- Code execution tools for data processing
- API connections to external services
### Creating More Complex Workflows
You could implement more sophisticated processes:
- Hierarchical processes where manager agents delegate to worker agents
- Iterative processes with feedback loops for refinement
- Parallel processes where multiple agents work simultaneously
- Dynamic processes that adapt based on intermediate results
### Applying to Different Domains
The same patterns can be applied to create crews for:
- **Content creation**: Writers, editors, fact-checkers, and designers working together
- **Customer service**: Triage agents, specialists, and quality control working together
- **Product development**: Researchers, designers, and planners collaborating
- **Data analysis**: Data collectors, analysts, and visualization specialists
## Next Steps
Now that you've built your first crew, you can:
1. Experiment with different agent configurations and personalities
2. Try more complex task structures and workflows
3. Implement custom tools to give your agents new capabilities
4. Apply your crew to different topics or problem domains
5. Explore [CrewAI Flows](/guides/flows/first-flow) for more advanced workflows with procedural programming
<Check>
Congratulations! You've successfully built your first CrewAI crew that can research and analyze any topic you provide. This foundational experience has equipped you with the skills to create increasingly sophisticated AI systems that can tackle complex, multi-stage problems through collaborative intelligence.
</Check>

View File

@@ -0,0 +1,624 @@
---
title: Build Your First Flow
description: Learn how to create structured, event-driven workflows with precise control over execution.
icon: diagram-project
---
# Build Your First Flow
## Taking Control of AI Workflows with Flows
CrewAI Flows represent the next level in AI orchestration - combining the collaborative power of AI agent crews with the precision and flexibility of procedural programming. While crews excel at agent collaboration, flows give you fine-grained control over exactly how and when different components of your AI system interact.
In this guide, we'll walk through creating a powerful CrewAI Flow that generates a comprehensive learning guide on any topic. This tutorial will demonstrate how Flows provide structured, event-driven control over your AI workflows by combining regular code, direct LLM calls, and crew-based processing.
### What Makes Flows Powerful
Flows enable you to:
1. **Combine different AI interaction patterns** - Use crews for complex collaborative tasks, direct LLM calls for simpler operations, and regular code for procedural logic
2. **Build event-driven systems** - Define how components respond to specific events and data changes
3. **Maintain state across components** - Share and transform data between different parts of your application
4. **Integrate with external systems** - Seamlessly connect your AI workflow with databases, APIs, and user interfaces
5. **Create complex execution paths** - Design conditional branches, parallel processing, and dynamic workflows
### What You'll Build and Learn
By the end of this guide, you'll have:
1. **Created a sophisticated content generation system** that combines user input, AI planning, and multi-agent content creation
2. **Orchestrated the flow of information** between different components of your system
3. **Implemented event-driven architecture** where each step responds to the completion of previous steps
4. **Built a foundation for more complex AI applications** that you can expand and customize
This guide creator flow demonstrates fundamental patterns that can be applied to create much more advanced applications, such as:
- Interactive AI assistants that combine multiple specialized subsystems
- Complex data processing pipelines with AI-enhanced transformations
- Autonomous agents that integrate with external services and APIs
- Multi-stage decision-making systems with human-in-the-loop processes
Let's dive in and build your first flow!
## Prerequisites
Before starting, make sure you have:
1. Installed CrewAI following the [installation guide](/installation)
2. Set up your LLM API key in your environment, following the [LLM setup
guide](/concepts/llms#setting-up-your-llm)
3. Basic understanding of Python
## Step 1: Create a New CrewAI Flow Project
First, let's create a new CrewAI Flow project using the CLI. This command sets up a scaffolded project with all the necessary directories and template files for your flow.
```bash
crewai create flow guide_creator_flow
cd guide_creator_flow
```
This will generate a project with the basic structure needed for your flow.
<Frame caption="CrewAI Framework Overview">
<img src="../../flows.png" alt="CrewAI Framework Overview" />
</Frame>
## Step 2: Understanding the Project Structure
The generated project has the following structure. Take a moment to familiarize yourself with it, as understanding this structure will help you create more complex flows in the future.
```
guide_creator_flow/
├── .gitignore
├── pyproject.toml
├── README.md
├── .env
├── main.py
├── crews/
│ └── poem_crew/
│ ├── config/
│ │ ├── agents.yaml
│ │ └── tasks.yaml
│ └── poem_crew.py
└── tools/
└── custom_tool.py
```
This structure provides a clear separation between different components of your flow:
- The main flow logic in the `main.py` file
- Specialized crews in the `crews` directory
- Custom tools in the `tools` directory
We'll modify this structure to create our guide creator flow, which will orchestrate the process of generating comprehensive learning guides.
## Step 3: Add a Content Writer Crew
Our flow will need a specialized crew to handle the content creation process. Let's use the CrewAI CLI to add a content writer crew:
```bash
crewai flow add-crew content-crew
```
This command automatically creates the necessary directories and template files for your crew. The content writer crew will be responsible for writing and reviewing sections of our guide, working within the overall flow orchestrated by our main application.
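After running it, the new crew's files should look something like this (exact names can vary slightly between CLI versions):
```
src/guide_creator_flow/crews/content_crew/
├── config/
│   ├── agents.yaml
│   └── tasks.yaml
└── content_crew.py
```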
## Step 4: Configure the Content Writer Crew
Now, let's modify the generated files for the content writer crew. We'll set up two specialized agents - a writer and a reviewer - that will collaborate to create high-quality content for our guide.
1. First, update the agents configuration file to define our content creation team:
Remember to set `llm` to the provider you are using.
```yaml
# src/guide_creator_flow/crews/content_crew/config/agents.yaml
content_writer:
role: >
Educational Content Writer
goal: >
Create engaging, informative content that thoroughly explains the assigned topic
and provides valuable insights to the reader
backstory: >
You are a talented educational writer with expertise in creating clear, engaging
content. You have a gift for explaining complex concepts in accessible language
and organizing information in a way that helps readers build their understanding.
llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...
content_reviewer:
role: >
Educational Content Reviewer and Editor
goal: >
Ensure content is accurate, comprehensive, well-structured, and maintains
consistency with previously written sections
backstory: >
You are a meticulous editor with years of experience reviewing educational
content. You have an eye for detail, clarity, and coherence. You excel at
improving content while maintaining the original author's voice and ensuring
consistent quality across multiple sections.
llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...
```
These agent definitions establish the specialized roles and perspectives that will shape how our AI agents approach content creation. Notice how each agent has a distinct purpose and expertise.
2. Next, update the tasks configuration file to define the specific writing and reviewing tasks:
```yaml
# src/guide_creator_flow/crews/content_crew/config/tasks.yaml
write_section_task:
description: >
Write a comprehensive section on the topic: "{section_title}"
Section description: {section_description}
Target audience: {audience_level} level learners
Your content should:
1. Begin with a brief introduction to the section topic
2. Explain all key concepts clearly with examples
3. Include practical applications or exercises where appropriate
4. End with a summary of key points
5. Be approximately 500-800 words in length
Format your content in Markdown with appropriate headings, lists, and emphasis.
Previously written sections:
{previous_sections}
Make sure your content maintains consistency with previously written sections
and builds upon concepts that have already been explained.
expected_output: >
A well-structured, comprehensive section in Markdown format that thoroughly
explains the topic and is appropriate for the target audience.
agent: content_writer
review_section_task:
description: >
Review and improve the following section on "{section_title}":
{draft_content}
Target audience: {audience_level} level learners
Previously written sections:
{previous_sections}
Your review should:
1. Fix any grammatical or spelling errors
2. Improve clarity and readability
3. Ensure content is comprehensive and accurate
4. Verify consistency with previously written sections
5. Enhance the structure and flow
6. Add any missing key information
Provide the improved version of the section in Markdown format.
expected_output: >
An improved, polished version of the section that maintains the original
structure but enhances clarity, accuracy, and consistency.
agent: content_reviewer
context:
- write_section_task
```
These task definitions provide detailed instructions to our agents, ensuring they produce content that meets our quality standards. Note how the `context` parameter in the review task creates a workflow where the reviewer has access to the writer's output.
3. Now, update the crew implementation file to define how our agents and tasks work together:
```python
# src/guide_creator_flow/crews/content_crew/content_crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List
@CrewBase
class ContentCrew():
"""Content writing crew"""
agents: List[BaseAgent]
tasks: List[Task]
@agent
def content_writer(self) -> Agent:
return Agent(
config=self.agents_config['content_writer'], # type: ignore[index]
verbose=True
)
@agent
def content_reviewer(self) -> Agent:
return Agent(
config=self.agents_config['content_reviewer'], # type: ignore[index]
verbose=True
)
@task
def write_section_task(self) -> Task:
return Task(
config=self.tasks_config['write_section_task'] # type: ignore[index]
)
@task
def review_section_task(self) -> Task:
return Task(
config=self.tasks_config['review_section_task'], # type: ignore[index]
context=[self.write_section_task()]
)
@crew
def crew(self) -> Crew:
"""Creates the content writing crew"""
return Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
verbose=True,
)
```
This crew definition establishes the relationship between our agents and tasks, setting up a sequential process where the content writer creates a draft and then the reviewer improves it. While this crew can function independently, in our flow it will be orchestrated as part of a larger system.
## Step 5: Create the Flow
Now comes the exciting part - creating the flow that will orchestrate the entire guide creation process. This is where we'll combine regular Python code, direct LLM calls, and our content creation crew into a cohesive system.
Our flow will:
1. Get user input for a topic and audience level
2. Make a direct LLM call to create a structured guide outline
3. Process each section sequentially using the content writer crew
4. Combine everything into a final comprehensive document
Let's create our flow in the `main.py` file:
```python
#!/usr/bin/env python
import json
import os
from typing import List, Dict, Optional
from pydantic import BaseModel, Field
from crewai import LLM
from crewai.flow.flow import Flow, listen, start
from guide_creator_flow.crews.content_crew.content_crew import ContentCrew
# Define our models for structured data
class Section(BaseModel):
title: str = Field(description="Title of the section")
description: str = Field(description="Brief description of what the section should cover")
class GuideOutline(BaseModel):
title: str = Field(description="Title of the guide")
introduction: str = Field(description="Introduction to the topic")
target_audience: str = Field(description="Description of the target audience")
sections: List[Section] = Field(description="List of sections in the guide")
conclusion: str = Field(description="Conclusion or summary of the guide")
# Define our flow state
class GuideCreatorState(BaseModel):
topic: str = ""
audience_level: str = ""
    guide_outline: Optional[GuideOutline] = None
sections_content: Dict[str, str] = {}
class GuideCreatorFlow(Flow[GuideCreatorState]):
"""Flow for creating a comprehensive guide on any topic"""
@start()
def get_user_input(self):
"""Get input from the user about the guide topic and audience"""
print("\n=== Create Your Comprehensive Guide ===\n")
# Get user input
self.state.topic = input("What topic would you like to create a guide for? ")
# Get audience level with validation
while True:
audience = input("Who is your target audience? (beginner/intermediate/advanced) ").lower()
if audience in ["beginner", "intermediate", "advanced"]:
self.state.audience_level = audience
break
print("Please enter 'beginner', 'intermediate', or 'advanced'")
print(f"\nCreating a guide on {self.state.topic} for {self.state.audience_level} audience...\n")
return self.state
@listen(get_user_input)
def create_guide_outline(self, state):
"""Create a structured outline for the guide using a direct LLM call"""
print("Creating guide outline...")
# Initialize the LLM
llm = LLM(model="openai/gpt-4o-mini", response_format=GuideOutline)
# Create the messages for the outline
messages = [
{"role": "system", "content": "You are a helpful assistant designed to output JSON."},
{"role": "user", "content": f"""
Create a detailed outline for a comprehensive guide on "{state.topic}" for {state.audience_level} level learners.
The outline should include:
1. A compelling title for the guide
2. An introduction to the topic
3. 4-6 main sections that cover the most important aspects of the topic
4. A conclusion or summary
For each section, provide a clear title and a brief description of what it should cover.
"""}
]
# Make the LLM call with JSON response format
response = llm.call(messages=messages)
# Parse the JSON response
outline_dict = json.loads(response)
self.state.guide_outline = GuideOutline(**outline_dict)
# Ensure output directory exists before saving
os.makedirs("output", exist_ok=True)
# Save the outline to a file
with open("output/guide_outline.json", "w") as f:
json.dump(outline_dict, f, indent=2)
print(f"Guide outline created with {len(self.state.guide_outline.sections)} sections")
return self.state.guide_outline
@listen(create_guide_outline)
def write_and_compile_guide(self, outline):
"""Write all sections and compile the guide"""
print("Writing guide sections and compiling...")
completed_sections = []
# Process sections one by one to maintain context flow
for section in outline.sections:
print(f"Processing section: {section.title}")
# Build context from previous sections
previous_sections_text = ""
if completed_sections:
previous_sections_text = "# Previously Written Sections\n\n"
for title in completed_sections:
previous_sections_text += f"## {title}\n\n"
previous_sections_text += self.state.sections_content.get(title, "") + "\n\n"
else:
previous_sections_text = "No previous sections written yet."
# Run the content crew for this section
result = ContentCrew().crew().kickoff(inputs={
"section_title": section.title,
"section_description": section.description,
"audience_level": self.state.audience_level,
"previous_sections": previous_sections_text,
"draft_content": ""
})
# Store the content
self.state.sections_content[section.title] = result.raw
completed_sections.append(section.title)
print(f"Section completed: {section.title}")
# Compile the final guide
guide_content = f"# {outline.title}\n\n"
guide_content += f"## Introduction\n\n{outline.introduction}\n\n"
# Add each section in order
for section in outline.sections:
section_content = self.state.sections_content.get(section.title, "")
guide_content += f"\n\n{section_content}\n\n"
# Add conclusion
guide_content += f"## Conclusion\n\n{outline.conclusion}\n\n"
# Save the guide
with open("output/complete_guide.md", "w") as f:
f.write(guide_content)
print("\nComplete guide compiled and saved to output/complete_guide.md")
return "Guide creation completed successfully"
def kickoff():
"""Run the guide creator flow"""
GuideCreatorFlow().kickoff()
print("\n=== Flow Complete ===")
print("Your comprehensive guide is ready in the output directory.")
print("Open output/complete_guide.md to view it.")
def plot():
"""Generate a visualization of the flow"""
flow = GuideCreatorFlow()
flow.plot("guide_creator_flow")
print("Flow visualization saved to guide_creator_flow.html")
if __name__ == "__main__":
kickoff()
```
Let's analyze what's happening in this flow:
1. We define Pydantic models for structured data, ensuring type safety and clear data representation
2. We create a state class to maintain data across different steps of the flow
3. We implement three main flow steps:
- Getting user input with the `@start()` decorator
- Creating a guide outline with a direct LLM call
- Processing sections with our content crew
4. We use the `@listen()` decorator to establish event-driven relationships between steps
This is the power of flows - combining different types of processing (user interaction, direct LLM calls, crew-based tasks) into a coherent, event-driven system.
## Step 6: Set Up Your Environment Variables
Create a `.env` file in your project root with your API keys. See the [LLM setup
guide](/concepts/llms#setting-up-your-llm) for details on configuring a provider.
```sh .env
OPENAI_API_KEY=your_openai_api_key
# or
GEMINI_API_KEY=your_gemini_api_key
# or
ANTHROPIC_API_KEY=your_anthropic_api_key
```
## Step 7: Install Dependencies
Install the required dependencies:
```bash
crewai install
```
## Step 8: Run Your Flow
Now it's time to see your flow in action! Run it using the CrewAI CLI:
```bash
crewai flow kickoff
```
When you run this command, you'll see your flow spring to life:
1. It will prompt you for a topic and audience level
2. It will create a structured outline for your guide
3. It will process each section, with the content writer and reviewer collaborating on each
4. Finally, it will compile everything into a comprehensive guide
This demonstrates the power of flows to orchestrate complex processes involving multiple components, both AI and non-AI.
## Step 9: Visualize Your Flow
One of the powerful features of flows is the ability to visualize their structure:
```bash
crewai flow plot
```
This will create an HTML file that shows the structure of your flow, including the relationships between different steps and the data that flows between them. This visualization can be invaluable for understanding and debugging complex flows.
## Step 10: Review the Output
Once the flow completes, you'll find two files in the `output` directory:
1. `guide_outline.json`: Contains the structured outline of the guide
2. `complete_guide.md`: The comprehensive guide with all sections
Take a moment to review these files and appreciate what you've built - a system that combines user input, direct AI interactions, and collaborative agent work to produce a complex, high-quality output.
## The Art of the Possible: Beyond Your First Flow
What you've learned in this guide provides a foundation for creating much more sophisticated AI systems. Here are some ways you could extend this basic flow:
### Enhancing User Interaction
You could create more interactive flows with:
- Web interfaces for input and output
- Real-time progress updates
- Interactive feedback and refinement loops
- Multi-stage user interactions
### Adding More Processing Steps
You could expand your flow with additional steps for:
- Research before outline creation
- Image generation for illustrations
- Code snippet generation for technical guides
- Final quality assurance and fact-checking
### Creating More Complex Flows
You could implement more sophisticated flow patterns:
- Conditional branching based on user preferences or content type
- Parallel processing of independent sections
- Iterative refinement loops with feedback
- Integration with external APIs and services
### Applying to Different Domains
The same patterns can be applied to create flows for:
- **Interactive storytelling**: Create personalized stories based on user input
- **Business intelligence**: Process data, generate insights, and create reports
- **Product development**: Facilitate ideation, design, and planning
- **Educational systems**: Create personalized learning experiences
## Key Features Demonstrated
This guide creator flow demonstrates several powerful features of CrewAI:
1. **User interaction**: The flow collects input directly from the user
2. **Direct LLM calls**: Uses the LLM class for efficient, single-purpose AI interactions
3. **Structured data with Pydantic**: Uses Pydantic models to ensure type safety
4. **Sequential processing with context**: Writes sections in order, providing previous sections for context
5. **Multi-agent crews**: Leverages specialized agents (writer and reviewer) for content creation
6. **State management**: Maintains state across different steps of the process
7. **Event-driven architecture**: Uses the `@listen` decorator to respond to events
## Understanding the Flow Structure
Let's break down the key components of flows to help you understand how to build your own:
### 1. Direct LLM Calls
Flows allow you to make direct calls to language models when you need simple, structured responses:
```python
llm = LLM(
model="model-id-here", # gpt-4o, gemini-2.0-flash, anthropic/claude...
response_format=GuideOutline
)
response = llm.call(messages=messages)
```
This is more efficient than using a crew when you need a specific, structured output.
### 2. Event-Driven Architecture
Flows use decorators to establish relationships between components:
```python
@start()
def get_user_input(self):
# First step in the flow
# ...
@listen(get_user_input)
def create_guide_outline(self, state):
# This runs when get_user_input completes
# ...
```
This creates a clear, declarative structure for your application.
### 3. State Management
Flows maintain state across steps, making it easy to share data:
```python
class GuideCreatorState(BaseModel):
topic: str = ""
audience_level: str = ""
    guide_outline: Optional[GuideOutline] = None
sections_content: Dict[str, str] = {}
```
This provides a type-safe way to track and transform data throughout your flow.
### 4. Crew Integration
Flows can seamlessly integrate with crews for complex collaborative tasks:
```python
result = ContentCrew().crew().kickoff(inputs={
"section_title": section.title,
# ...
})
```
This allows you to use the right tool for each part of your application - direct LLM calls for simple tasks and crews for complex collaboration.
## Next Steps
Now that you've built your first flow, you can:
1. Experiment with more complex flow structures and patterns
2. Try using `@router()` to create conditional branches in your flows
3. Explore the `and_` and `or_` functions for more complex parallel execution (see the sketch after this list)
4. Connect your flow to external APIs, databases, or user interfaces
5. Combine multiple specialized crews in a single flow
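For item 3, here's a minimal sketch of the `or_` helper (assuming it is imported from `crewai.flow.flow`; check the flows reference for the exact semantics):
```python
from crewai.flow.flow import Flow, listen, or_, start

class OrExampleFlow(Flow):
    @start()
    def first_method(self):
        return "output from first_method"

    @listen(first_method)
    def second_method(self, _):
        return "output from second_method"

    # Runs whenever either of the two methods emits an output
    @listen(or_(first_method, second_method))
    def logger(self, result):
        print(f"Logger: {result}")
```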
<Check>
Congratulations! You've successfully built your first CrewAI Flow that combines regular code, direct LLM calls, and crew-based processing to create a comprehensive guide. These foundational skills enable you to create increasingly sophisticated AI applications that can tackle complex, multi-stage problems through a combination of procedural control and collaborative intelligence.
</Check>

View File

@@ -0,0 +1,771 @@
---
title: Mastering Flow State Management
description: A comprehensive guide to managing, persisting, and leveraging state in CrewAI Flows for building robust AI applications.
icon: diagram-project
---
# Mastering Flow State Management
## Understanding the Power of State in Flows
State management is the backbone of any sophisticated AI workflow. In CrewAI Flows, the state system allows you to maintain context, share data between steps, and build complex application logic. Mastering state management is essential for creating reliable, maintainable, and powerful AI applications.
This guide will walk you through everything you need to know about managing state in CrewAI Flows, from basic concepts to advanced techniques, with practical code examples along the way.
### Why State Management Matters
Effective state management enables you to:
1. **Maintain context across execution steps** - Pass information seamlessly between different stages of your workflow
2. **Build complex conditional logic** - Make decisions based on accumulated data
3. **Create persistent applications** - Save and restore workflow progress
4. **Handle errors gracefully** - Implement recovery patterns for more robust applications
5. **Scale your applications** - Support complex workflows with proper data organization
6. **Enable conversational applications** - Store and access conversation history for context-aware AI interactions
Let's explore how to leverage these capabilities effectively.
## State Management Fundamentals
### The Flow State Lifecycle
In CrewAI Flows, the state follows a predictable lifecycle:
1. **Initialization** - When a flow is created, its state is initialized (either as an empty dictionary or a Pydantic model instance)
2. **Modification** - Flow methods access and modify the state as they execute
3. **Transmission** - State is passed automatically between flow methods
4. **Persistence** (optional) - State can be saved to storage and later retrieved
5. **Completion** - The final state reflects the cumulative changes from all executed methods
Understanding this lifecycle is crucial for designing effective flows.
### Two Approaches to State Management
CrewAI offers two ways to manage state in your flows:
1. **Unstructured State** - Using dictionary-like objects for flexibility
2. **Structured State** - Using Pydantic models for type safety and validation
Let's examine each approach in detail.
## Unstructured State Management
Unstructured state uses a dictionary-like approach, offering flexibility and simplicity for straightforward applications.
### How It Works
With unstructured state:
- You access state via `self.state` which behaves like a dictionary
- You can freely add, modify, or remove keys at any point
- All state is automatically available to all flow methods
### Basic Example
Here's a simple example of unstructured state management:
```python
from crewai.flow.flow import Flow, listen, start
class UnstructuredStateFlow(Flow):
@start()
def initialize_data(self):
print("Initializing flow data")
# Add key-value pairs to state
self.state["user_name"] = "Alex"
self.state["preferences"] = {
"theme": "dark",
"language": "English"
}
self.state["items"] = []
# The flow state automatically gets a unique ID
print(f"Flow ID: {self.state['id']}")
return "Initialized"
@listen(initialize_data)
def process_data(self, previous_result):
print(f"Previous step returned: {previous_result}")
# Access and modify state
user = self.state["user_name"]
print(f"Processing data for {user}")
# Add items to a list in state
self.state["items"].append("item1")
self.state["items"].append("item2")
# Add a new key-value pair
self.state["processed"] = True
return "Processed"
@listen(process_data)
def generate_summary(self, previous_result):
# Access multiple state values
user = self.state["user_name"]
theme = self.state["preferences"]["theme"]
items = self.state["items"]
processed = self.state.get("processed", False)
summary = f"User {user} has {len(items)} items with {theme} theme. "
summary += "Data is processed." if processed else "Data is not processed."
return summary
# Run the flow
flow = UnstructuredStateFlow()
result = flow.kickoff()
print(f"Final result: {result}")
print(f"Final state: {flow.state}")
```
### When to Use Unstructured State
Unstructured state is ideal for:
- Quick prototyping and simple flows
- Dynamically evolving state needs
- Cases where the structure may not be known in advance
- Flows with simple state requirements
While flexible, unstructured state lacks type checking and schema validation, which can lead to errors in complex applications.
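For instance, a misspelled key is not caught as an error; it silently creates a new entry:
```python
# With unstructured state, a typo'd key fails silently:
self.state["user_nmae"] = "Alex"        # creates a brand-new key
print(self.state.get("user_name"))      # -> None, no error raised
```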
## Structured State Management
Structured state uses Pydantic models to define a schema for your flow's state, providing type safety, validation, and better developer experience.
### How It Works
With structured state:
- You define a Pydantic model that represents your state structure
- You pass this model type to your Flow class as a type parameter
- You access state via `self.state`, which behaves like a Pydantic model instance
- All fields are validated according to their defined types
- You get IDE autocompletion and type checking support
### Basic Example
Here's how to implement structured state management:
```python
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel, Field
from typing import List, Dict, Optional
# Define your state model
class UserPreferences(BaseModel):
theme: str = "light"
language: str = "English"
class AppState(BaseModel):
user_name: str = ""
preferences: UserPreferences = UserPreferences()
items: List[str] = []
processed: bool = False
completion_percentage: float = 0.0
# Create a flow with typed state
class StructuredStateFlow(Flow[AppState]):
@start()
def initialize_data(self):
print("Initializing flow data")
# Set state values (type-checked)
self.state.user_name = "Taylor"
self.state.preferences.theme = "dark"
# The ID field is automatically available
print(f"Flow ID: {self.state.id}")
return "Initialized"
@listen(initialize_data)
def process_data(self, previous_result):
print(f"Processing data for {self.state.user_name}")
# Modify state (with type checking)
self.state.items.append("item1")
self.state.items.append("item2")
self.state.processed = True
self.state.completion_percentage = 50.0
return "Processed"
@listen(process_data)
def generate_summary(self, previous_result):
# Access state (with autocompletion)
summary = f"User {self.state.user_name} has {len(self.state.items)} items "
summary += f"with {self.state.preferences.theme} theme. "
summary += "Data is processed." if self.state.processed else "Data is not processed."
summary += f" Completion: {self.state.completion_percentage}%"
return summary
# Run the flow
flow = StructuredStateFlow()
result = flow.kickoff()
print(f"Final result: {result}")
print(f"Final state: {flow.state}")
```
### Benefits of Structured State
Using structured state provides several advantages:
1. **Type Safety** - Catch type errors at development time
2. **Self-Documentation** - The state model clearly documents what data is available
3. **Validation** - Automatic validation of data types and constraints
4. **IDE Support** - Get autocomplete and inline documentation
5. **Default Values** - Easily define fallbacks for missing data
### When to Use Structured State
Structured state is recommended for:
- Complex flows with well-defined data schemas
- Team projects where multiple developers work on the same code
- Applications where data validation is important
- Flows that need to enforce specific data types and constraints
## The Automatic State ID
Both unstructured and structured states automatically receive a unique identifier (UUID) to help track and manage state instances.
### How It Works
- For unstructured state, the ID is accessible as `self.state["id"]`
- For structured state, the ID is accessible as `self.state.id`
- This ID is generated automatically when the flow is created
- The ID remains the same throughout the flow's lifecycle
- The ID can be used for tracking, logging, and retrieving persisted states
This UUID is particularly valuable when implementing persistence or tracking multiple flow executions.
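A minimal sketch of reading the ID in both styles:
```python
from crewai.flow.flow import Flow, start

class TrackedFlow(Flow):  # unstructured state
    @start()
    def log_run(self):
        # Dictionary-style access for unstructured state
        print(f"Tracking flow run {self.state['id']}")
        return "Logged"

# With a structured flow (e.g. Flow[AppState]), the same ID is an attribute:
#     print(self.state.id)
```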
## Dynamic State Updates
Regardless of whether you're using structured or unstructured state, you can update state dynamically throughout your flow's execution.
### Passing Data Between Steps
Flow methods can return values that are then passed as arguments to listening methods:
```python
from crewai.flow.flow import Flow, listen, start
class DataPassingFlow(Flow):
@start()
def generate_data(self):
# This return value will be passed to listening methods
return "Generated data"
@listen(generate_data)
def process_data(self, data_from_previous_step):
print(f"Received: {data_from_previous_step}")
# You can modify the data and pass it along
processed_data = f"{data_from_previous_step} - processed"
# Also update state
self.state["last_processed"] = processed_data
return processed_data
@listen(process_data)
def finalize_data(self, processed_data):
print(f"Received processed data: {processed_data}")
# Access both the passed data and state
last_processed = self.state.get("last_processed", "")
return f"Final: {processed_data} (from state: {last_processed})"
```
This pattern allows you to combine direct data passing with state updates for maximum flexibility.
## Persisting Flow State
One of CrewAI's most powerful features is the ability to persist flow state across executions. This enables workflows that can be paused, resumed, and even recovered after failures.
### The @persist Decorator
The `@persist` decorator automates state persistence, saving your flow's state at key points in execution.
#### Class-Level Persistence
When applied at the class level, `@persist` saves state after every method execution:
```python
from crewai.flow.flow import Flow, listen, persist, start
from pydantic import BaseModel
class CounterState(BaseModel):
value: int = 0
@persist # Apply to the entire flow class
class PersistentCounterFlow(Flow[CounterState]):
@start()
def increment(self):
self.state.value += 1
print(f"Incremented to {self.state.value}")
return self.state.value
@listen(increment)
def double(self, value):
self.state.value = value * 2
print(f"Doubled to {self.state.value}")
return self.state.value
# First run
flow1 = PersistentCounterFlow()
result1 = flow1.kickoff()
print(f"First run result: {result1}")
# Second run - state is automatically loaded
flow2 = PersistentCounterFlow()
result2 = flow2.kickoff()
print(f"Second run result: {result2}") # Will be higher due to persisted state
```
#### Method-Level Persistence
For more granular control, you can apply `@persist` to specific methods:
```python
from crewai.flow.flow import Flow, listen, persist, start
class SelectivePersistFlow(Flow):
@start()
def first_step(self):
self.state["count"] = 1
return "First step"
@persist # Only persist after this method
@listen(first_step)
def important_step(self, prev_result):
self.state["count"] += 1
self.state["important_data"] = "This will be persisted"
return "Important step completed"
@listen(important_step)
def final_step(self, prev_result):
self.state["count"] += 1
return f"Complete with count {self.state['count']}"
```
## Advanced State Patterns
### State-Based Conditional Logic
You can use state to implement complex conditional logic in your flows:
```python
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel
class PaymentState(BaseModel):
amount: float = 0.0
is_approved: bool = False
retry_count: int = 0
class PaymentFlow(Flow[PaymentState]):
@start()
def process_payment(self):
# Simulate payment processing
self.state.amount = 100.0
self.state.is_approved = self.state.amount < 1000
return "Payment processed"
@router(process_payment)
def check_approval(self, previous_result):
if self.state.is_approved:
return "approved"
elif self.state.retry_count < 3:
return "retry"
else:
return "rejected"
@listen("approved")
def handle_approval(self):
return f"Payment of ${self.state.amount} approved!"
@listen("retry")
def handle_retry(self):
self.state.retry_count += 1
print(f"Retrying payment (attempt {self.state.retry_count})...")
# Could implement retry logic here
return "Retry initiated"
@listen("rejected")
def handle_rejection(self):
return f"Payment of ${self.state.amount} rejected after {self.state.retry_count} retries."
```
### Handling Complex State Transformations
For complex state transformations, you can create dedicated methods:
```python
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
from typing import List, Dict
class UserData(BaseModel):
name: str
active: bool = True
login_count: int = 0
class ComplexState(BaseModel):
users: Dict[str, UserData] = {}
active_user_count: int = 0
class TransformationFlow(Flow[ComplexState]):
@start()
def initialize(self):
# Add some users
self.add_user("alice", "Alice")
self.add_user("bob", "Bob")
self.add_user("charlie", "Charlie")
return "Initialized"
@listen(initialize)
def process_users(self, _):
# Increment login counts
for user_id in self.state.users:
self.increment_login(user_id)
# Deactivate one user
self.deactivate_user("bob")
# Update active count
self.update_active_count()
return f"Processed {len(self.state.users)} users"
# Helper methods for state transformations
def add_user(self, user_id: str, name: str):
self.state.users[user_id] = UserData(name=name)
self.update_active_count()
def increment_login(self, user_id: str):
if user_id in self.state.users:
self.state.users[user_id].login_count += 1
def deactivate_user(self, user_id: str):
if user_id in self.state.users:
self.state.users[user_id].active = False
self.update_active_count()
def update_active_count(self):
self.state.active_user_count = sum(
1 for user in self.state.users.values() if user.active
)
```
This pattern of creating helper methods keeps your flow methods clean while enabling complex state manipulations.
## State Management with Crews
One of the most powerful patterns in CrewAI is combining flow state management with crew execution.
### Passing State to Crews
You can use flow state to parameterize crews:
```python
from crewai.flow.flow import Flow, listen, start
from crewai import Agent, Crew, Process, Task
from pydantic import BaseModel
class ResearchState(BaseModel):
topic: str = ""
depth: str = "medium"
results: str = ""
class ResearchFlow(Flow[ResearchState]):
@start()
def get_parameters(self):
# In a real app, this might come from user input
self.state.topic = "Artificial Intelligence Ethics"
self.state.depth = "deep"
return "Parameters set"
@listen(get_parameters)
def execute_research(self, _):
# Create agents
researcher = Agent(
role="Research Specialist",
goal=f"Research {self.state.topic} in {self.state.depth} detail",
backstory="You are an expert researcher with a talent for finding accurate information."
)
writer = Agent(
role="Content Writer",
goal="Transform research into clear, engaging content",
backstory="You excel at communicating complex ideas clearly and concisely."
)
# Create tasks
research_task = Task(
description=f"Research {self.state.topic} with {self.state.depth} analysis",
expected_output="Comprehensive research notes in markdown format",
agent=researcher
)
writing_task = Task(
description=f"Create a summary on {self.state.topic} based on the research",
expected_output="Well-written article in markdown format",
agent=writer,
context=[research_task]
)
# Create and run crew
research_crew = Crew(
agents=[researcher, writer],
tasks=[research_task, writing_task],
process=Process.sequential,
verbose=True
)
# Run crew and store result in state
result = research_crew.kickoff()
self.state.results = result.raw
return "Research completed"
@listen(execute_research)
def summarize_results(self, _):
# Access the stored results
result_length = len(self.state.results)
return f"Research on {self.state.topic} completed with {result_length} characters of results."
```
### Handling Crew Outputs in State
When a crew completes, you can process its output and store it in your flow state:
```python
@listen(execute_crew)
def process_crew_results(self, _):
# Parse the raw results (assuming JSON output)
import json
try:
results_dict = json.loads(self.state.raw_results)
self.state.processed_results = {
"title": results_dict.get("title", ""),
"main_points": results_dict.get("main_points", []),
"conclusion": results_dict.get("conclusion", "")
}
return "Results processed successfully"
except json.JSONDecodeError:
self.state.error = "Failed to parse crew results as JSON"
return "Error processing results"
```
## Best Practices for State Management
### 1. Keep State Focused
Design your state to contain only what's necessary:
```python
from typing import Dict, List

from pydantic import BaseModel

# Too broad
class BloatedState(BaseModel):
user_data: Dict = {}
system_settings: Dict = {}
temporary_calculations: List = []
debug_info: Dict = {}
# ...many more fields
# Better: Focused state
class FocusedState(BaseModel):
user_id: str
preferences: Dict[str, str]
completion_status: Dict[str, bool]
```
### 2. Use Structured State for Complex Flows
As your flows grow in complexity, structured state becomes increasingly valuable:
```python
from datetime import datetime
from typing import Optional

from crewai.flow.flow import Flow, start
from pydantic import BaseModel, Field

# Simple flow can use unstructured state
class SimpleGreetingFlow(Flow):
    @start()
    def greet(self):
        self.state["name"] = "World"
        return f"Hello, {self.state['name']}!"

# Complex flow benefits from structured state
class UserRegistrationState(BaseModel):
    username: str
    email: str
    verification_status: bool = False
    registration_date: datetime = Field(default_factory=datetime.now)
    last_login: Optional[datetime] = None

class RegistrationFlow(Flow[UserRegistrationState]):
    # Methods with strongly-typed state access
    ...
```
### 3. Document State Transitions
For complex flows, document how state changes throughout the execution:
```python
@start()
def initialize_order(self):
"""
Initialize order state with empty values.
State before: {}
State after: {order_id: str, items: [], status: 'new'}
"""
self.state.order_id = str(uuid.uuid4())
self.state.items = []
self.state.status = "new"
return "Order initialized"
```
### 4. Handle State Errors Gracefully
Implement error handling for state access:
```python
@listen(previous_step)
def process_data(self, _):
    try:
        # Try to access a nested value that might not exist
        user_preference = self.state["preferences"]["theme"]
    except (KeyError, TypeError):
        # Handle the error gracefully and record it in state
        errors = self.state.get("errors", [])
        errors.append("Failed to access preferences")
        self.state["errors"] = errors
        user_preference = "default"
    return f"Used preference: {user_preference}"
```
### 5. Use State for Progress Tracking
Leverage state to track progress in long-running flows:
```python
class ProgressTrackingFlow(Flow):
@start()
def initialize(self):
self.state["total_steps"] = 3
self.state["current_step"] = 0
self.state["progress"] = 0.0
self.update_progress()
return "Initialized"
def update_progress(self):
"""Helper method to calculate and update progress"""
if self.state.get("total_steps", 0) > 0:
self.state["progress"] = (self.state.get("current_step", 0) /
self.state["total_steps"]) * 100
print(f"Progress: {self.state['progress']:.1f}%")
@listen(initialize)
def step_one(self, _):
# Do work...
self.state["current_step"] = 1
self.update_progress()
return "Step 1 complete"
# Additional steps...
```
### 6. Use Immutable Operations When Possible
Especially with structured state, prefer immutable operations for clarity:
```python
# Instead of modifying lists in place:
self.state.items.append(new_item)  # Mutable operation

# Consider creating new state:
from typing import List

from crewai.flow.flow import Flow, start
from pydantic import BaseModel

class ItemState(BaseModel):
    items: List[str] = []

class ImmutableFlow(Flow[ItemState]):
    @start()
    def add_item(self):
        # Create a new list with the added item
        self.state.items = [*self.state.items, "new item"]
        return "Item added"
```
## Debugging Flow State
### Logging State Changes
When developing, add logging to track state changes:
```python
import logging

logging.basicConfig(level=logging.INFO)

class LoggingFlow(Flow):
    def log_state(self, step_name):
        logging.info(f"State after {step_name}: {self.state}")

    @start()
    def initialize(self):
        self.state["counter"] = 0
        self.log_state("initialize")
        return "Initialized"

    @listen(initialize)
    def increment(self, _):
        self.state["counter"] += 1
        self.log_state("increment")
        return f"Incremented to {self.state['counter']}"
```
### State Visualization
You can add methods to visualize your state for debugging:
```python
def visualize_state(self):
    """Create a simple visualization of the current state"""
    import json
    from rich.console import Console
    from rich.panel import Panel

    console = Console()
    if hasattr(self.state, "model_dump"):
        # Pydantic v2
        state_dict = self.state.model_dump()
    elif hasattr(self.state, "dict"):
        # Pydantic v1
        state_dict = self.state.dict()
    else:
        # Unstructured state
        state_dict = dict(self.state)

    # Remove id for cleaner output
    if "id" in state_dict:
        state_dict.pop("id")

    state_json = json.dumps(state_dict, indent=2, default=str)
    console.print(Panel(state_json, title="Current Flow State"))
```
## Conclusion
Mastering state management in CrewAI Flows gives you the power to build sophisticated, robust AI applications that maintain context, make complex decisions, and deliver consistent results.
Whether you choose unstructured or structured state, implementing proper state management practices will help you create flows that are maintainable, extensible, and effective at solving real-world problems.
As you develop more complex flows, remember that good state management is about finding the right balance between flexibility and structure, making your code both powerful and easy to understand.
<Check>
You've now mastered the concepts and practices of state management in CrewAI Flows! With this knowledge, you can create robust AI workflows that effectively maintain context, share data between steps, and build sophisticated application logic.
</Check>
## Next Steps
- Experiment with both structured and unstructured state in your flows
- Try implementing state persistence for long-running workflows (see the sketch below)
- Explore [building your first crew](/guides/crews/first-crew) to see how crews and flows can work together
- Check out the [Flow reference documentation](/concepts/flows) for more advanced features
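As a starting point for persistence, recent CrewAI versions ship a `@persist` decorator; the following is a minimal sketch, assuming the `crewai.flow.persistence` module available in your installed version:
```python
from crewai.flow.flow import Flow, start
from crewai.flow.persistence import persist

@persist()  # Assumption: saves flow state (by default to a local SQLite store) between runs
class PersistentCounterFlow(Flow):
    @start()
    def increment(self):
        # Once persisted, unstructured state survives across executions
        self.state["count"] = self.state.get("count", 0) + 1
        return f"Count is now {self.state['count']}"
```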

View File

@@ -1,5 +1,5 @@
---
-title: Agent Monitoring with AgentOps
+title: AgentOps Integration
description: Understanding and logging your agent performance with AgentOps.
icon: paperclip
---

View File

@@ -0,0 +1,151 @@
---
title: Arize Phoenix
description: Arize Phoenix integration for CrewAI with OpenTelemetry and OpenInference
icon: magnifying-glass-chart
---
# Arize Phoenix Integration
This guide demonstrates how to integrate **Arize Phoenix** with **CrewAI** using OpenTelemetry via the [OpenInference](https://github.com/openinference/openinference) SDK. By the end of this guide, you will be able to trace and easily debug your CrewAI agents.
> **What is Arize Phoenix?** [Arize Phoenix](https://phoenix.arize.com) is an LLM observability platform that provides tracing and evaluation for AI applications.
[![Watch a Video Demo of Our Integration with Phoenix](https://storage.googleapis.com/arize-assets/fixtures/setup_crewai.png)](https://www.youtube.com/watch?v=Yc5q3l6F7Ww)
## Get Started
We'll walk through a simple example of using CrewAI and integrating it with Arize Phoenix via OpenTelemetry using OpenInference.
You can also access this guide on [Google Colab](https://colab.research.google.com/github/Arize-ai/phoenix/blob/main/tutorials/tracing/crewai_tracing_tutorial.ipynb).
### Step 1: Install Dependencies
```bash
pip install openinference-instrumentation-crewai crewai crewai-tools arize-phoenix-otel
```
### Step 2: Set Up Environment Variables
Set up Phoenix Cloud API keys and configure OpenTelemetry to send traces to Phoenix. Phoenix Cloud is a hosted version of Arize Phoenix, but it is not required to use this integration.
You can get your free Serper API key [here](https://serper.dev/).
```python
import os
from getpass import getpass
# Get your Phoenix Cloud credentials
PHOENIX_API_KEY = getpass("🔑 Enter your Phoenix Cloud API Key: ")
# Get API keys for services
OPENAI_API_KEY = getpass("🔑 Enter your OpenAI API key: ")
SERPER_API_KEY = getpass("🔑 Enter your Serper API key: ")
# Set environment variables
os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={PHOENIX_API_KEY}"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com" # Phoenix Cloud, change this to your own endpoint if you are using a self-hosted instance
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
os.environ["SERPER_API_KEY"] = SERPER_API_KEY
```
### Step 3: Initialize OpenTelemetry with Phoenix
Initialize the OpenInference OpenTelemetry instrumentation SDK to start capturing traces and send them to Phoenix.
```python
from phoenix.otel import register

tracer_provider = register(
    project_name="crewai-tracing-demo",
    auto_instrument=True,
)
```
### Step 4: Create a CrewAI Application
We'll create a CrewAI application where two agents collaborate to research and write a blog post about AI advancements.
```python
from crewai import Agent, Crew, Process, Task
from crewai_tools import SerperDevTool
from openinference.instrumentation.crewai import CrewAIInstrumentor
from phoenix.otel import register

# Set up monitoring for your crew
tracer_provider = register(
    endpoint="http://localhost:6006/v1/traces")
CrewAIInstrumentor().instrument(skip_dep_check=True, tracer_provider=tracer_provider)

search_tool = SerperDevTool()

# Define your agents with roles and goals
researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments in AI and data science",
    backstory="""You work at a leading tech think tank.
    Your expertise lies in identifying emerging trends.
    You have a knack for dissecting complex data and presenting actionable insights.""",
    verbose=True,
    allow_delegation=False,
    # You can pass an optional llm attribute specifying which model you want to use.
    # llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7),
    tools=[search_tool],
)
writer = Agent(
    role="Tech Content Strategist",
    goal="Craft compelling content on tech advancements",
    backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
    You transform complex concepts into compelling narratives.""",
    verbose=True,
    allow_delegation=True,
)

# Create tasks for your agents
task1 = Task(
    description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
    Identify key trends, breakthrough technologies, and potential industry impacts.""",
    expected_output="Full analysis report in bullet points",
    agent=researcher,
)
task2 = Task(
    description="""Using the insights provided, develop an engaging blog
    post that highlights the most significant AI advancements.
    Your post should be informative yet accessible, catering to a tech-savvy audience.
    Make it sound cool, avoid complex words so it doesn't sound like AI.""",
    expected_output="Full blog post of at least 4 paragraphs",
    agent=writer,
)

# Instantiate your crew with a sequential process
crew = Crew(
    agents=[researcher, writer], tasks=[task1, task2], verbose=True, process=Process.sequential
)

# Get your crew to work!
result = crew.kickoff()

print("######################")
print(result)
```
### Step 5: View Traces in Phoenix
After running the agent, you can view the traces generated by your CrewAI application in Phoenix. You should see detailed steps of the agent interactions and LLM calls, which can help you debug and optimize your AI agents.
Log into your Phoenix Cloud account and navigate to the project you specified in the `project_name` parameter. You'll see a timeline view of your trace with all the agent interactions, tool usages, and LLM calls.
![Example trace in Phoenix showing agent interactions](https://storage.googleapis.com/arize-assets/fixtures/crewai_traces.png)
### Version Compatibility Information
- Python 3.8+
- CrewAI >= 0.86.0
- Arize Phoenix >= 7.0.1
- OpenTelemetry SDK >= 1.31.0
### References
- [Phoenix Documentation](https://docs.arize.com/phoenix/) - Overview of the Phoenix platform.
- [CrewAI Documentation](https://docs.crewai.com/) - Overview of the CrewAI framework.
- [OpenTelemetry Docs](https://opentelemetry.io/docs/) - OpenTelemetry guide
- [OpenInference GitHub](https://github.com/openinference/openinference) - Source code for OpenInference SDK.

View File

@@ -0,0 +1,443 @@
---
title: Bring your own agent
description: Learn how to bring your own agents that work within a Crew.
icon: robots
---
Interoperability is a core concept in CrewAI. This guide will show you how to bring your own agents that work within a Crew.
## Adapter Guide for Bringing your own agents (LangGraph Agents, OpenAI Agents, etc.)
Three adapters are required to make an agent from another framework work within a Crew:
1. BaseAgentAdapter
2. BaseToolAdapter
3. BaseConverter
## BaseAgentAdapter
This abstract class defines the common interface and functionality that all
agent adapters must implement. It extends BaseAgent to maintain compatibility
with the CrewAI framework while adding adapter-specific requirements.
Required Methods:
1. `def configure_tools`
2. `def configure_structured_output`
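Before diving into the details, here is the smallest possible shape of such an adapter (a sketch only; the method bodies are filled in step by step below):
```python
from typing import Any, List, Optional

from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
from crewai.tools import BaseTool

class MinimalAgentAdapter(BaseAgentAdapter):
    def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
        # Translate CrewAI tools into your framework's tool format
        ...

    def configure_structured_output(self, structured_output: Any) -> None:
        # Prepare the wrapped agent to emit the requested output format
        ...
```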
## Creating your own Adapter
To integrate an agent from a different framework (e.g., LangGraph, Autogen, OpenAI Assistants) into CrewAI, you need to create a custom adapter by inheriting from `BaseAgentAdapter`. This adapter acts as a compatibility layer, translating between the CrewAI interfaces and the specific requirements of your external agent.
Here's how you implement your custom adapter:
1. **Inherit from `BaseAgentAdapter`**:
```python
from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
from crewai.tools import BaseTool
from typing import List, Optional, Any, Dict

class MyCustomAgentAdapter(BaseAgentAdapter):
    # ... implementation details ...
    ...
```
2. **Implement `__init__`**:
The constructor should call the parent class constructor `super().__init__(**kwargs)` and perform any initialization specific to your external agent. You can use the optional `agent_config` dictionary passed during CrewAI's `Agent` initialization to configure your adapter and the underlying agent.
```python
def __init__(self, agent_config: Optional[Dict[str, Any]] = None, **kwargs: Any):
    super().__init__(agent_config=agent_config, **kwargs)
    # Initialize your external agent here, possibly using agent_config
    # Example: self.external_agent = initialize_my_agent(agent_config)
    print(f"Initializing MyCustomAgentAdapter with config: {agent_config}")
```
3. **Implement `configure_tools`**:
This abstract method is crucial. It receives a list of CrewAI `BaseTool` instances. Your implementation must convert or adapt these tools into the format expected by your external agent framework. This might involve wrapping them, extracting specific attributes, or registering them with the external agent instance.
```python
def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
    if tools:
        adapted_tools = []
        for tool in tools:
            # Adapt CrewAI BaseTool to the format your agent expects
            # Example: adapted_tool = adapt_to_my_framework(tool)
            # adapted_tools.append(adapted_tool)
            pass  # Replace with your actual adaptation logic

        # Configure the external agent with the adapted tools
        # Example: self.external_agent.set_tools(adapted_tools)
        print(f"Configuring tools for MyCustomAgentAdapter: {adapted_tools}")  # Placeholder
    else:
        # Handle the case where no tools are provided
        # Example: self.external_agent.set_tools([])
        print("No tools provided for MyCustomAgentAdapter.")
```
4. **Implement `configure_structured_output`**:
This method is called when the CrewAI `Agent` is configured with structured output requirements (e.g., `output_json` or `output_pydantic`). Your adapter needs to ensure the external agent is set up to comply with these requirements. This might involve setting specific parameters on the external agent or ensuring its underlying model supports the requested format. If the external agent doesn't support structured output in a way compatible with CrewAI's expectations, you might need to handle the conversion or raise an appropriate error.
```python
def configure_structured_output(self, structured_output: Any) -> None:
    # Configure your external agent to produce output in the specified format
    # Example: self.external_agent.set_output_format(structured_output)
    self.adapted_structured_output = True  # Signal that structured output is handled
    print(f"Configuring structured output for MyCustomAgentAdapter: {structured_output}")
```
By implementing these methods, your `MyCustomAgentAdapter` will allow your custom agent implementation to function correctly within a CrewAI crew, interacting with tasks and tools seamlessly. Remember to replace the example comments and print statements with your actual adaptation logic specific to the external agent framework you are integrating.
## BaseToolAdapter implementation
The `BaseToolAdapter` class is responsible for converting CrewAI's native `BaseTool` objects into a format that your specific external agent framework can understand and utilize. Different agent frameworks (like LangGraph, OpenAI Assistants, etc.) have their own unique ways of defining and handling tools, and the `BaseToolAdapter` acts as the translator.
Here's how you implement your custom tool adapter:
1. **Inherit from `BaseToolAdapter`**:
```python
from crewai.agents.agent_adapters.base_tool_adapter import BaseToolAdapter
from crewai.tools import BaseTool
from typing import List, Any

class MyCustomToolAdapter(BaseToolAdapter):
    # ... implementation details ...
    ...
```
2. **Implement `configure_tools`**:
This is the core abstract method you must implement. It receives a list of CrewAI `BaseTool` instances provided to the agent. Your task is to iterate through this list, adapt each `BaseTool` into the format expected by your external framework, and store the converted tools in the `self.converted_tools` list (which is initialized in the base class constructor).
```python
import inspect

def configure_tools(self, tools: List[BaseTool]) -> None:
    """Configure and convert CrewAI tools for the specific implementation."""
    self.converted_tools = []  # Reset in case it's called multiple times
    for tool in tools:
        # Sanitize the tool name if required by the target framework
        sanitized_name = self.sanitize_tool_name(tool.name)

        # --- Your Conversion Logic Goes Here ---
        # Example: Convert BaseTool to a dictionary format for LangGraph
        # converted_tool = {
        #     "name": sanitized_name,
        #     "description": tool.description,
        #     "parameters": tool.args_schema.schema() if tool.args_schema else {},
        #     # Add any other framework-specific fields
        # }
        # Example: Convert BaseTool to an OpenAI function definition
        # converted_tool = {
        #     "type": "function",
        #     "function": {
        #         "name": sanitized_name,
        #         "description": tool.description,
        #         "parameters": tool.args_schema.schema() if tool.args_schema else {"type": "object", "properties": {}},
        #     }
        # }
        # --- Replace above examples with your actual adaptation ---

        converted_tool = self.adapt_tool_to_my_framework(tool, sanitized_name)  # Placeholder
        self.converted_tools.append(converted_tool)
        print(f"Adapted tool '{tool.name}' to '{sanitized_name}' for MyCustomToolAdapter")  # Placeholder

    print(f"MyCustomToolAdapter finished configuring tools: {len(self.converted_tools)} adapted.")  # Placeholder

# --- Helper method for adaptation (Example) ---
def adapt_tool_to_my_framework(self, tool: BaseTool, sanitized_name: str) -> Any:
    # Replace this with the actual logic to convert a CrewAI BaseTool
    # to the format needed by your specific external agent framework.
    # This will vary greatly depending on the target framework.
    adapted_representation = {
        "framework_specific_name": sanitized_name,
        "framework_specific_description": tool.description,
        "inputs": tool.args_schema.schema() if tool.args_schema else None,
        "implementation_reference": tool.run  # Or however the framework needs to call it
    }

    # Also ensure the tool works both sync and async
    async def async_tool_wrapper(*args, **kwargs):
        output = tool.run(*args, **kwargs)
        if inspect.isawaitable(output):
            return await output
        else:
            return output

    # If your framework has its own tool type (here a hypothetical `MyFrameworkTool`),
    # you would wrap everything in it instead and return that object:
    # adapted_tool = MyFrameworkTool(
    #     name=sanitized_name,
    #     description=tool.description,
    #     inputs=tool.args_schema.schema() if tool.args_schema else None,
    #     implementation_reference=async_tool_wrapper
    # )
    return adapted_representation
```
3. **Using the Adapter**:
Typically, you would instantiate your `MyCustomToolAdapter` within your `MyCustomAgentAdapter`'s `configure_tools` method and use it to process the tools before configuring your external agent.
```python
# Inside MyCustomAgentAdapter.configure_tools
def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
    if tools:
        tool_adapter = MyCustomToolAdapter()  # Instantiate your tool adapter
        tool_adapter.configure_tools(tools)   # Convert the tools
        adapted_tools = tool_adapter.converted_tools  # Get the converted tools

        # Now configure your external agent with the adapted_tools
        # Example: self.external_agent.set_tools(adapted_tools)
        print(f"Configuring external agent with adapted tools: {adapted_tools}")  # Placeholder
    else:
        # Handle no tools case
        print("No tools provided for MyCustomAgentAdapter.")
```
By creating a `BaseToolAdapter`, you decouple the tool conversion logic from the agent adaptation, making the integration cleaner and more modular. Remember to replace the placeholder examples with the actual conversion logic required by your specific external agent framework.
## BaseConverter
The `BaseConverterAdapter` plays a crucial role when a CrewAI `Task` requires an agent to return its final output in a specific structured format, such as JSON or a Pydantic model. It bridges the gap between CrewAI's structured output requirements and the capabilities of your external agent.
Its primary responsibilities are:
1. **Configuring the Agent for Structured Output:** Based on the `Task`'s requirements (`output_json` or `output_pydantic`), it instructs the associated `BaseAgentAdapter` (and indirectly, the external agent) on what format is expected.
2. **Enhancing the System Prompt:** It modifies the agent's system prompt to include clear instructions on *how* to generate the output in the required structure.
3. **Post-processing the Result:** It takes the raw output from the agent and attempts to parse, validate, and format it according to the required structure, ultimately returning a string representation (e.g., a JSON string).
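Conceptually, the three responsibilities run in this order for each task (a sketch with hypothetical glue code; the real call sites live inside the adapter machinery):
```python
# Hypothetical wiring to illustrate the order of operations
converter = MyCustomConverterAdapter(agent_adapter)

converter.configure_structured_output(task)            # 1. Read the task's output requirements
prompt = converter.enhance_system_prompt(base_prompt)  # 2. Add format instructions to the prompt
raw_output = run_external_agent(prompt)                # (hypothetical call into your agent adapter)
final_output = converter.post_process_result(raw_output)  # 3. Parse/validate, return a string
```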
Here's how you implement your custom converter adapter:
1. **Inherit from `BaseConverterAdapter`**:
```python
from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter
# Assuming you have your MyCustomAgentAdapter defined
# from .my_custom_agent_adapter import MyCustomAgentAdapter
from crewai.task import Task
from typing import Any

class MyCustomConverterAdapter(BaseConverterAdapter):
    # Store the expected output type (e.g., 'json', 'pydantic', 'text')
    _output_type: str = 'text'
    _output_schema: Any = None  # Store JSON schema or Pydantic model

    # ... implementation details ...
```
2. **Implement `__init__`**:
The constructor must accept the corresponding `agent_adapter` instance it will work with.
```python
def __init__(self, agent_adapter: Any):  # Use your specific AgentAdapter type hint
    self.agent_adapter = agent_adapter
    print(f"Initializing MyCustomConverterAdapter for agent adapter: {type(agent_adapter).__name__}")
```
3. **Implement `configure_structured_output`**:
This method receives the CrewAI `Task` object. You need to check the task's `output_json` and `output_pydantic` attributes to determine the required output structure. Store this information (e.g., in `_output_type` and `_output_schema`) and potentially call configuration methods on your `self.agent_adapter` if the external agent needs specific setup for structured output (which might have been partially handled in the agent adapter's `configure_structured_output` already).
```python
def configure_structured_output(self, task: Task) -> None:
    """Configure the expected structured output based on the task."""
    if task.output_pydantic:
        self._output_type = 'pydantic'
        self._output_schema = task.output_pydantic
        print(f"Converter: Configured for Pydantic output: {self._output_schema.__name__}")
    elif task.output_json:
        self._output_type = 'json'
        self._output_schema = task.output_json
        print(f"Converter: Configured for JSON output with schema: {self._output_schema}")
    else:
        self._output_type = 'text'
        self._output_schema = None
        print("Converter: Configured for standard text output.")

    # Optionally, inform the agent adapter if needed
    # self.agent_adapter.set_output_mode(self._output_type, self._output_schema)
```
4. **Implement `enhance_system_prompt`**:
This method takes the agent's base system prompt string and should append instructions tailored to the currently configured `_output_type` and `_output_schema`. The goal is to guide the LLM powering the agent to produce output in the correct format.
```python
import json

def enhance_system_prompt(self, base_prompt: str) -> str:
    """Enhance the system prompt with structured output instructions."""
    if self._output_type == 'text':
        return base_prompt  # No enhancement needed for plain text

    instructions = "\n\nYour final answer MUST be formatted as "
    if self._output_type == 'json':
        schema_str = json.dumps(self._output_schema, indent=2)
        instructions += f"a JSON object conforming to the following schema:\n```json\n{schema_str}\n```"
    elif self._output_type == 'pydantic':
        schema_str = json.dumps(self._output_schema.model_json_schema(), indent=2)
        instructions += f"a JSON object conforming to the Pydantic model '{self._output_schema.__name__}' with the following schema:\n```json\n{schema_str}\n```"

    instructions += "\nEnsure your entire response is ONLY the valid JSON object, without any introductory text, explanations, or concluding remarks."
    print(f"Converter: Enhancing prompt for {self._output_type} output.")
    return base_prompt + instructions
```
*Note: The exact prompt engineering might need tuning based on the agent/LLM being used.*
5. **Implement `post_process_result`**:
This method receives the raw string output from the agent. If structured output was requested (`json` or `pydantic`), you should attempt to parse the string into the expected format. Handle potential parsing errors (e.g., log them, attempt simple fixes, or raise an exception). Crucially, the method must **always return a string**, even if the intermediate format was a dictionary or Pydantic object (e.g., by serializing it back to a JSON string).
```python
import json
from pydantic import ValidationError

def post_process_result(self, result: str) -> str:
    """Post-process the agent's result to ensure it matches the expected format."""
    print(f"Converter: Post-processing result for {self._output_type} output.")
    if self._output_type == 'json':
        try:
            # Attempt to parse and re-serialize to ensure validity and consistent format
            parsed_json = json.loads(result)
            # Optional: Validate against self._output_schema if it's a JSON schema dictionary
            # from jsonschema import validate
            # validate(instance=parsed_json, schema=self._output_schema)
            return json.dumps(parsed_json)
        except json.JSONDecodeError as e:
            print(f"Error: Failed to parse JSON output: {e}\nRaw output:\n{result}")
            # Handle error: return raw, raise exception, or try to fix
            return result  # Example: return raw output on failure
        # except Exception as e:  # Catch validation errors if using jsonschema
        #     print(f"Error: JSON output failed schema validation: {e}\nRaw output:\n{result}")
        #     return result
    elif self._output_type == 'pydantic':
        try:
            # Attempt to parse into the Pydantic model
            model_instance = self._output_schema.model_validate_json(result)
            # Return the model serialized back to JSON
            return model_instance.model_dump_json()
        except ValidationError as e:
            print(f"Error: Failed to validate Pydantic output: {e}\nRaw output:\n{result}")
            # Handle error
            return result  # Example: return raw output on failure
        except json.JSONDecodeError as e:
            print(f"Error: Failed to parse JSON for Pydantic model: {e}\nRaw output:\n{result}")
            return result
    else:  # 'text'
        return result  # No processing needed for plain text
```
By implementing these methods, your `MyCustomConverterAdapter` ensures that structured output requests from CrewAI tasks are correctly handled by your integrated external agent, improving the reliability and usability of your custom agent within the CrewAI framework.
## Out of the Box Adapters
We provide out of the box adapters for the following frameworks:
1. LangGraph
2. OpenAI Agents
## Kicking off a crew with adapted agents:
```python
import json
import os
from typing import List

from crewai_tools import SerperDevTool
from crewai import Agent, Crew, Task
from langchain_openai import ChatOpenAI
from pydantic import BaseModel

from crewai.agents.agent_adapters.langgraph.langgraph_adapter import (
    LangGraphAgentAdapter,
)
from crewai.agents.agent_adapters.openai_agents.openai_adapter import OpenAIAgentAdapter

# CrewAI Agent
code_helper_agent = Agent(
    role="Code Helper",
    goal="Help users solve coding problems effectively and provide clear explanations.",
    backstory="You are an experienced programmer with deep knowledge across multiple programming languages and frameworks. You specialize in solving complex coding challenges and explaining solutions clearly.",
    allow_delegation=False,
    verbose=True,
)

# OpenAI Agent Adapter
link_finder_agent = OpenAIAgentAdapter(
    role="Link Finder",
    goal="Find the most relevant and high-quality resources for coding tasks.",
    backstory="You are a research specialist with a talent for finding the most helpful resources. You're skilled at using search tools to discover documentation, tutorials, and examples that directly address the user's coding needs.",
    tools=[SerperDevTool()],
    allow_delegation=False,
    verbose=True,
)

# LangGraph Agent Adapter
reporter_agent = LangGraphAgentAdapter(
    role="Reporter",
    goal="Report the results of the tasks.",
    backstory="You are a reporter who reports the results of the other tasks.",
    llm=ChatOpenAI(model="gpt-4o"),
    allow_delegation=True,
    verbose=True,
)

class Code(BaseModel):
    code: str

task = Task(
    description="Give an answer to the coding question: {task}",
    expected_output="A thorough answer to the coding question: {task}",
    agent=code_helper_agent,
    output_json=Code,
)

task2 = Task(
    description="Find links to resources that can help with coding tasks. Use the serper tool to find resources that can help.",
    expected_output="A list of links to resources that can help with coding tasks",
    agent=link_finder_agent,
)

class Report(BaseModel):
    code: str
    links: List[str]

task3 = Task(
    description="Report the results of the tasks.",
    expected_output="A report of the results of the tasks. This is the code produced and then the links to the resources that can help with the coding task.",
    agent=reporter_agent,
    output_json=Report,
)

# Use in CrewAI
crew = Crew(
    agents=[code_helper_agent, link_finder_agent, reporter_agent],
    tasks=[task, task2, task3],
    verbose=True,
)

result = crew.kickoff(
    inputs={"task": "How do you implement an abstract class in python?"}
)

# Print raw result first
print("Raw result:", result)

# Handle result based on its type
if hasattr(result, "json_dict") and result.json_dict:
    json_result = result.json_dict
    print("\nStructured JSON result:")
    print(f"{json.dumps(json_result, indent=2)}")

    # Access fields safely
    if isinstance(json_result, dict):
        if "code" in json_result:
            print("\nCode:")
            print(
                json_result["code"][:200] + "..."
                if len(json_result["code"]) > 200
                else json_result["code"]
            )

        if "links" in json_result:
            print("\nLinks:")
            for link in json_result["links"][:5]:  # Print first 5 links
                print(f"- {link}")
            if len(json_result["links"]) > 5:
                print(f"...and {len(json_result['links']) - 5} more links")
elif hasattr(result, "pydantic") and result.pydantic:
    print("\nPydantic model result:")
    print(result.pydantic.model_dump_json(indent=2))
else:
    # Fallback to raw output
    print("\nNo structured result available, using raw output:")
    print(result.raw[:500] + "..." if len(result.raw) > 500 else result.raw)
```

docs/how-to/custom-llm.mdx
View File

@@ -0,0 +1,646 @@
---
title: Custom LLM Implementation
description: Learn how to create custom LLM implementations in CrewAI.
icon: code
---
## Custom LLM Implementations
CrewAI now supports custom LLM implementations through the `BaseLLM` abstract base class. This allows you to create your own LLM implementations that don't rely on litellm's authentication mechanism.
To create a custom LLM implementation, you need to:
1. Inherit from the `BaseLLM` abstract base class
2. Implement the required methods:
- `call()`: The main method to call the LLM with messages
- `supports_function_calling()`: Whether the LLM supports function calling
- `supports_stop_words()`: Whether the LLM supports stop words
- `get_context_window_size()`: The context window size of the LLM
## Example: Basic Custom LLM
```python
from crewai import BaseLLM
from typing import Any, Dict, List, Optional, Union

class CustomLLM(BaseLLM):
    def __init__(self, api_key: str, endpoint: str):
        super().__init__()  # Initialize the base class to set default attributes
        if not api_key or not isinstance(api_key, str):
            raise ValueError("Invalid API key: must be a non-empty string")
        if not endpoint or not isinstance(endpoint, str):
            raise ValueError("Invalid endpoint URL: must be a non-empty string")
        self.api_key = api_key
        self.endpoint = endpoint
        self.stop = []  # You can customize stop words if needed

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        """Call the LLM with the given messages.

        Args:
            messages: Input messages for the LLM.
            tools: Optional list of tool schemas for function calling.
            callbacks: Optional list of callback functions.
            available_functions: Optional dict mapping function names to callables.

        Returns:
            Either a text response from the LLM or the result of a tool function call.

        Raises:
            TimeoutError: If the LLM request times out.
            RuntimeError: If the LLM request fails for other reasons.
            ValueError: If the response format is invalid.
        """
        # Implement your own logic to call the LLM
        # For example, using requests:
        import requests

        try:
            headers = {
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            }

            # Convert string message to proper format if needed
            if isinstance(messages, str):
                messages = [{"role": "user", "content": messages}]

            data = {
                "messages": messages,
                "tools": tools
            }

            response = requests.post(
                self.endpoint,
                headers=headers,
                json=data,
                timeout=30  # Set a reasonable timeout
            )
            response.raise_for_status()  # Raise an exception for HTTP errors
            return response.json()["choices"][0]["message"]["content"]
        except requests.Timeout:
            raise TimeoutError("LLM request timed out")
        except requests.RequestException as e:
            raise RuntimeError(f"LLM request failed: {str(e)}")
        except (KeyError, IndexError, ValueError) as e:
            raise ValueError(f"Invalid response format: {str(e)}")

    def supports_function_calling(self) -> bool:
        """Check if the LLM supports function calling.

        Returns:
            True if the LLM supports function calling, False otherwise.
        """
        # Return True if your LLM supports function calling
        return True

    def supports_stop_words(self) -> bool:
        """Check if the LLM supports stop words.

        Returns:
            True if the LLM supports stop words, False otherwise.
        """
        # Return True if your LLM supports stop words
        return True

    def get_context_window_size(self) -> int:
        """Get the context window size of the LLM.

        Returns:
            The context window size as an integer.
        """
        # Return the context window size of your LLM
        return 8192
```
## Error Handling Best Practices
When implementing custom LLMs, it's important to handle errors properly to ensure robustness and reliability. Here are some best practices:
### 1. Implement Try-Except Blocks for API Calls
Always wrap API calls in try-except blocks to handle different types of errors:
```python
def call(
    self,
    messages: Union[str, List[Dict[str, str]]],
    tools: Optional[List[dict]] = None,
    callbacks: Optional[List[Any]] = None,
    available_functions: Optional[Dict[str, Any]] = None,
) -> Union[str, Any]:
    try:
        # API call implementation
        response = requests.post(
            self.endpoint,
            headers=self.headers,
            json=self.prepare_payload(messages),
            timeout=30  # Set a reasonable timeout
        )
        response.raise_for_status()  # Raise an exception for HTTP errors
        return response.json()["choices"][0]["message"]["content"]
    except requests.Timeout:
        raise TimeoutError("LLM request timed out")
    except requests.RequestException as e:
        raise RuntimeError(f"LLM request failed: {str(e)}")
    except (KeyError, IndexError, ValueError) as e:
        raise ValueError(f"Invalid response format: {str(e)}")
```
### 2. Implement Retry Logic for Transient Failures
For transient failures like network issues or rate limiting, implement retry logic with exponential backoff:
```python
def call(
    self,
    messages: Union[str, List[Dict[str, str]]],
    tools: Optional[List[dict]] = None,
    callbacks: Optional[List[Any]] = None,
    available_functions: Optional[Dict[str, Any]] = None,
) -> Union[str, Any]:
    import time

    max_retries = 3
    retry_delay = 1  # seconds

    for attempt in range(max_retries):
        try:
            response = requests.post(
                self.endpoint,
                headers=self.headers,
                json=self.prepare_payload(messages),
                timeout=30
            )
            response.raise_for_status()
            return response.json()["choices"][0]["message"]["content"]
        except (requests.Timeout, requests.ConnectionError) as e:
            if attempt < max_retries - 1:
                time.sleep(retry_delay * (2 ** attempt))  # Exponential backoff
                continue
            raise TimeoutError(f"LLM request failed after {max_retries} attempts: {str(e)}")
        except requests.RequestException as e:
            raise RuntimeError(f"LLM request failed: {str(e)}")
```
### 3. Validate Input Parameters
Always validate input parameters to prevent runtime errors:
```python
def __init__(self, api_key: str, endpoint: str):
    super().__init__()
    if not api_key or not isinstance(api_key, str):
        raise ValueError("Invalid API key: must be a non-empty string")
    if not endpoint or not isinstance(endpoint, str):
        raise ValueError("Invalid endpoint URL: must be a non-empty string")
    self.api_key = api_key
    self.endpoint = endpoint
```
### 4. Handle Authentication Errors Gracefully
Provide clear error messages for authentication failures:
```python
def call(
    self,
    messages: Union[str, List[Dict[str, str]]],
    tools: Optional[List[dict]] = None,
    callbacks: Optional[List[Any]] = None,
    available_functions: Optional[Dict[str, Any]] = None,
) -> Union[str, Any]:
    try:
        response = requests.post(self.endpoint, headers=self.headers, json=data)
        if response.status_code == 401:
            raise ValueError("Authentication failed: Invalid API key or token")
        elif response.status_code == 403:
            raise ValueError("Authorization failed: Insufficient permissions")
        response.raise_for_status()
        # Process response
    except Exception as e:
        # Handle error
        raise
```
## Example: JWT-based Authentication
For services that use JWT-based authentication instead of API keys, you can implement a custom LLM like this:
```python
from crewai import BaseLLM, Agent, Task
from typing import Any, Dict, List, Optional, Union

class JWTAuthLLM(BaseLLM):
    def __init__(self, jwt_token: str, endpoint: str):
        super().__init__()  # Initialize the base class to set default attributes
        if not jwt_token or not isinstance(jwt_token, str):
            raise ValueError("Invalid JWT token: must be a non-empty string")
        if not endpoint or not isinstance(endpoint, str):
            raise ValueError("Invalid endpoint URL: must be a non-empty string")
        self.jwt_token = jwt_token
        self.endpoint = endpoint
        self.stop = []  # You can customize stop words if needed

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        """Call the LLM with JWT authentication.

        Args:
            messages: Input messages for the LLM.
            tools: Optional list of tool schemas for function calling.
            callbacks: Optional list of callback functions.
            available_functions: Optional dict mapping function names to callables.

        Returns:
            Either a text response from the LLM or the result of a tool function call.

        Raises:
            TimeoutError: If the LLM request times out.
            RuntimeError: If the LLM request fails for other reasons.
            ValueError: If the response format is invalid.
        """
        # Implement your own logic to call the LLM with JWT authentication
        import requests

        try:
            headers = {
                "Authorization": f"Bearer {self.jwt_token}",
                "Content-Type": "application/json"
            }

            # Convert string message to proper format if needed
            if isinstance(messages, str):
                messages = [{"role": "user", "content": messages}]

            data = {
                "messages": messages,
                "tools": tools
            }

            response = requests.post(
                self.endpoint,
                headers=headers,
                json=data,
                timeout=30  # Set a reasonable timeout
            )
            if response.status_code == 401:
                raise ValueError("Authentication failed: Invalid JWT token")
            elif response.status_code == 403:
                raise ValueError("Authorization failed: Insufficient permissions")
            response.raise_for_status()  # Raise an exception for HTTP errors
            return response.json()["choices"][0]["message"]["content"]
        except requests.Timeout:
            raise TimeoutError("LLM request timed out")
        except requests.RequestException as e:
            raise RuntimeError(f"LLM request failed: {str(e)}")
        except (KeyError, IndexError, ValueError) as e:
            raise ValueError(f"Invalid response format: {str(e)}")

    def supports_function_calling(self) -> bool:
        """Check if the LLM supports function calling.

        Returns:
            True if the LLM supports function calling, False otherwise.
        """
        return True

    def supports_stop_words(self) -> bool:
        """Check if the LLM supports stop words.

        Returns:
            True if the LLM supports stop words, False otherwise.
        """
        return True

    def get_context_window_size(self) -> int:
        """Get the context window size of the LLM.

        Returns:
            The context window size as an integer.
        """
        return 8192
```
## Troubleshooting
Here are some common issues you might encounter when implementing custom LLMs and how to resolve them:
### 1. Authentication Failures
**Symptoms**: 401 Unauthorized or 403 Forbidden errors
**Solutions**:
- Verify that your API key or JWT token is valid and not expired
- Check that you're using the correct authentication header format
- Ensure that your token has the necessary permissions
### 2. Timeout Issues
**Symptoms**: Requests taking too long or timing out
**Solutions**:
- Implement timeout handling as shown in the examples
- Use retry logic with exponential backoff
- Consider using a more reliable network connection
### 3. Response Parsing Errors
**Symptoms**: KeyError, IndexError, or ValueError when processing responses
**Solutions**:
- Validate the response format before accessing nested fields
- Implement proper error handling for malformed responses
- Check the API documentation for the expected response format
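For example, a small helper can validate an OpenAI-style response shape before indexing into it (a sketch; adjust the field names to your provider's actual schema):
```python
def extract_content(response_data: dict) -> str:
    """Defensively pull the message content out of an OpenAI-style response."""
    choices = response_data.get("choices")
    if not isinstance(choices, list) or not choices:
        raise ValueError(f"Invalid response format: missing 'choices' in {response_data!r}")
    message = choices[0].get("message") or {}
    if "content" not in message:
        raise ValueError(f"Invalid response format: missing 'message.content' in {response_data!r}")
    return message["content"]
```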
### 4. Rate Limiting
**Symptoms**: 429 Too Many Requests errors
**Solutions**:
- Implement rate limiting in your custom LLM
- Add exponential backoff for retries
- Consider using a token bucket algorithm for more precise rate control
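As an illustration of the last point, a minimal token bucket might look like this (`TokenBucket` is an illustrative sketch, not part of CrewAI):
```python
import time

class TokenBucket:
    """Allow short bursts while capping the sustained request rate."""

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec       # tokens added per second
        self.capacity = capacity       # maximum burst size
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def acquire(self) -> None:
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            # Refill proportionally to the elapsed time, up to capacity
            self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
            self.last_refill = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)
```
Calling `acquire()` at the top of your `call()` method smooths sustained traffic to roughly `rate_per_sec` requests per second while still allowing short bursts up to `capacity`.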
## Advanced Features
### Logging
Adding logging to your custom LLM can help with debugging and monitoring:
```python
import logging
from typing import Any, Dict, List, Optional, Union

from crewai import BaseLLM

class LoggingLLM(BaseLLM):
    def __init__(self, api_key: str, endpoint: str):
        super().__init__()
        self.api_key = api_key
        self.endpoint = endpoint
        self.logger = logging.getLogger("crewai.llm.custom")

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        self.logger.info(f"Calling LLM with {len(messages) if isinstance(messages, list) else 1} messages")
        try:
            # API call implementation
            response = self._make_api_call(messages, tools)
            self.logger.debug(f"LLM response received: {response[:100]}...")
            return response
        except Exception as e:
            self.logger.error(f"LLM call failed: {str(e)}")
            raise
```
### Rate Limiting
Implementing rate limiting can help avoid overwhelming the LLM API:
```python
import time
from typing import Any, Dict, List, Optional, Union

from crewai import BaseLLM

class RateLimitedLLM(BaseLLM):
    def __init__(
        self,
        api_key: str,
        endpoint: str,
        requests_per_minute: int = 60
    ):
        super().__init__()
        self.api_key = api_key
        self.endpoint = endpoint
        self.requests_per_minute = requests_per_minute
        self.request_times: List[float] = []

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        self._enforce_rate_limit()

        # Record this request time
        self.request_times.append(time.time())

        # Make the actual API call
        return self._make_api_call(messages, tools)

    def _enforce_rate_limit(self) -> None:
        """Enforce the rate limit by waiting if necessary."""
        now = time.time()
        # Remove request times older than 1 minute
        self.request_times = [t for t in self.request_times if now - t < 60]

        if len(self.request_times) >= self.requests_per_minute:
            # Calculate how long to wait
            oldest_request = min(self.request_times)
            wait_time = 60 - (now - oldest_request)
            if wait_time > 0:
                time.sleep(wait_time)
```
### Metrics Collection
Collecting metrics can help you monitor your LLM usage:
```python
import time
from typing import Any, Dict, List, Optional, Union

from crewai import BaseLLM

class MetricsCollectingLLM(BaseLLM):
    def __init__(self, api_key: str, endpoint: str):
        super().__init__()
        self.api_key = api_key
        self.endpoint = endpoint
        self.metrics: Dict[str, Any] = {
            "total_calls": 0,
            "total_tokens": 0,
            "errors": 0,
            "latency": []
        }

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        start_time = time.time()
        self.metrics["total_calls"] += 1

        try:
            response = self._make_api_call(messages, tools)
            # Estimate tokens (simplified)
            if isinstance(messages, str):
                token_estimate = len(messages) // 4
            else:
                token_estimate = sum(len(m.get("content", "")) // 4 for m in messages)
            self.metrics["total_tokens"] += token_estimate
            return response
        except Exception:
            self.metrics["errors"] += 1
            raise
        finally:
            latency = time.time() - start_time
            self.metrics["latency"].append(latency)

    def get_metrics(self) -> Dict[str, Any]:
        """Return the collected metrics."""
        avg_latency = sum(self.metrics["latency"]) / len(self.metrics["latency"]) if self.metrics["latency"] else 0
        return {
            **self.metrics,
            "avg_latency": avg_latency
        }
```
## Advanced Usage: Function Calling
If your LLM supports function calling, you can implement the function calling logic in your custom LLM:
```python
import json
from typing import Any, Dict, List, Optional, Union

def call(
    self,
    messages: Union[str, List[Dict[str, str]]],
    tools: Optional[List[dict]] = None,
    callbacks: Optional[List[Any]] = None,
    available_functions: Optional[Dict[str, Any]] = None,
) -> Union[str, Any]:
    import requests

    try:
        headers = {
            "Authorization": f"Bearer {self.jwt_token}",
            "Content-Type": "application/json"
        }

        # Convert string message to proper format if needed
        if isinstance(messages, str):
            messages = [{"role": "user", "content": messages}]

        data = {
            "messages": messages,
            "tools": tools
        }

        response = requests.post(
            self.endpoint,
            headers=headers,
            json=data,
            timeout=30
        )
        response.raise_for_status()
        response_data = response.json()

        # Check if the LLM wants to call a function
        if response_data["choices"][0]["message"].get("tool_calls"):
            tool_calls = response_data["choices"][0]["message"]["tool_calls"]

            # Process each tool call
            for tool_call in tool_calls:
                function_name = tool_call["function"]["name"]
                function_args = json.loads(tool_call["function"]["arguments"])

                if available_functions and function_name in available_functions:
                    function_to_call = available_functions[function_name]
                    function_response = function_to_call(**function_args)

                    # Add the function response to the messages
                    messages.append({
                        "role": "tool",
                        "tool_call_id": tool_call["id"],
                        "name": function_name,
                        "content": str(function_response)
                    })

            # Call the LLM again with the updated messages
            return self.call(messages, tools, callbacks, available_functions)

        # Return the text response if no function call
        return response_data["choices"][0]["message"]["content"]
    except requests.Timeout:
        raise TimeoutError("LLM request timed out")
    except requests.RequestException as e:
        raise RuntimeError(f"LLM request failed: {str(e)}")
    except (KeyError, IndexError, ValueError) as e:
        raise ValueError(f"Invalid response format: {str(e)}")
```
## Using Your Custom LLM with CrewAI
Once you've implemented your custom LLM, you can use it with CrewAI agents and crews:
```python
from crewai import Agent, Task, Crew
from typing import Dict, Any

# Create your custom LLM instance
jwt_llm = JWTAuthLLM(
    jwt_token="your.jwt.token",
    endpoint="https://your-llm-endpoint.com/v1/chat/completions"
)

# Use it with an agent
agent = Agent(
    role="Research Assistant",
    goal="Find information on a topic",
    backstory="You are a research assistant tasked with finding information.",
    llm=jwt_llm,
)

# Create a task for the agent
task = Task(
    description="Research the benefits of exercise",
    agent=agent,
    expected_output="A summary of the benefits of exercise",
)

# Execute the task
result = agent.execute_task(task)
print(result)

# Or use it with a crew
crew = Crew(
    agents=[agent],
    tasks=[task],
    manager_llm=jwt_llm,  # Use your custom LLM for the manager
)

# Run the crew
result = crew.kickoff()
print(result)
```
## Implementing Your Own Authentication Mechanism
The `BaseLLM` class allows you to implement any authentication mechanism you need, not just JWT or API keys. You can use:
- OAuth tokens
- Client certificates
- Custom headers
- Session-based authentication
- Any other authentication method required by your LLM provider
Simply implement the appropriate authentication logic in your custom LLM class.
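As one illustration, a signed custom-header scheme might look like the sketch below; `X-Api-Signature` and the `signer` object are hypothetical placeholders for whatever your provider actually requires:
```python
import requests
from typing import Any, Dict, List, Optional, Union

from crewai import BaseLLM

class CustomHeaderLLM(BaseLLM):
    def __init__(self, endpoint: str, signer: Any):
        super().__init__()
        self.endpoint = endpoint
        self.signer = signer  # Your own object that knows how to sign requests

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        if isinstance(messages, str):
            messages = [{"role": "user", "content": messages}]
        data = {"messages": messages, "tools": tools}
        # Hypothetical custom-header scheme: attach a per-request signature
        headers = {
            "Content-Type": "application/json",
            "X-Api-Signature": self.signer.sign_payload(data),  # hypothetical helper
        }
        response = requests.post(self.endpoint, headers=headers, json=data, timeout=30)
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]

    def supports_function_calling(self) -> bool:
        return False

    def supports_stop_words(self) -> bool:
        return False

    def get_context_window_size(self) -> int:
        return 8192
```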

View File

@@ -1,5 +1,5 @@
---
-title: Create Your Own Manager Agent
+title: Custom Manager Agent
description: Learn how to set a custom agent as the manager in CrewAI, providing more control over task management and coordination.
icon: user-shield
---

View File

@@ -48,7 +48,6 @@ Define a crew with a designated manager and establish a clear chain of command.
</Tip>
```python Code
-from langchain_openai import ChatOpenAI
from crewai import Crew, Process, Agent
# Agents are defined with attributes for backstory, cache, and verbose mode
@@ -56,38 +55,51 @@ researcher = Agent(
role='Researcher',
goal='Conduct in-depth analysis',
backstory='Experienced data analyst with a knack for uncovering hidden trends.',
cache=True,
verbose=False,
# tools=[] # This can be optionally specified; defaults to an empty list
use_system_prompt=True, # Enable or disable system prompts for this agent
max_rpm=30, # Limit on the number of requests per minute
max_iter=5 # Maximum number of iterations for a final answer
)
writer = Agent(
role='Writer',
goal='Create engaging content',
backstory='Creative writer passionate about storytelling in technical domains.',
cache=True,
verbose=False,
# tools=[] # Optionally specify tools; defaults to an empty list
use_system_prompt=True, # Enable or disable system prompts for this agent
max_rpm=30, # Limit on the number of requests per minute
max_iter=5 # Maximum number of iterations for a final answer
)
# Establishing the crew with a hierarchical process and additional configurations
project_crew = Crew(
tasks=[...], # Tasks to be delegated and executed under the manager's supervision
agents=[researcher, writer],
manager_llm=ChatOpenAI(temperature=0, model="gpt-4"), # Mandatory if manager_agent is not set
process=Process.hierarchical, # Specifies the hierarchical management approach
respect_context_window=True, # Enable respect of the context window for tasks
memory=True, # Enable memory usage for enhanced task execution
manager_agent=None, # Optional: explicitly set a specific agent as manager instead of the manager_llm
planning=True, # Enable planning feature for pre-execution strategy
manager_llm="gpt-4o", # Specify which LLM the manager should use
process=Process.hierarchical,
planning=True,
)
```
### Using a Custom Manager Agent
Alternatively, you can create a custom manager agent with specific attributes tailored to your project's management needs. This gives you more control over the manager's behavior and capabilities.
```python
# Define a custom manager agent
manager = Agent(
role="Project Manager",
goal="Efficiently manage the crew and ensure high-quality task completion",
backstory="You're an experienced project manager, skilled in overseeing complex projects and guiding teams to success.",
allow_delegation=True,
)
# Use the custom manager in your crew
project_crew = Crew(
tasks=[...],
agents=[researcher, writer],
manager_agent=manager, # Use your custom manager agent
process=Process.hierarchical,
planning=True,
)
```
<Tip>
For more details on creating and customizing a manager agent, check out the [Custom Manager Agent documentation](https://docs.crewai.com/how-to/custom-manager-agent#custom-manager-agent).
</Tip>
### Workflow in Action
1. **Task Assignment**: The manager assigns tasks strategically, considering each agent's capabilities and available tools.
@@ -97,4 +109,4 @@ project_crew = Crew(
## Conclusion
Adopting the hierarchical process in CrewAI, with the correct configurations and understanding of the system's capabilities, facilitates an organized and efficient approach to project management.
Utilize the advanced features and customizations to tailor the workflow to your specific needs, ensuring optimal task execution and project success.
Utilize the advanced features and customizations to tailor the workflow to your specific needs, ensuring optimal task execution and project success.
View File

@@ -54,7 +54,8 @@ coding_agent = Agent(
# Create a task that requires code execution
data_analysis_task = Task(
description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
-agent=coding_agent
+agent=coding_agent,
+expected_output="The average age of the participants."
)
# Create a crew and add the task
@@ -91,12 +92,14 @@ coding_agent = Agent(
# Create tasks that require code execution
task_1 = Task(
description="Analyze the first dataset and calculate the average age of participants. Ages: {ages}",
-agent=coding_agent
+agent=coding_agent,
+expected_output="The average age of the participants."
)
task_2 = Task(
description="Analyze the second dataset and calculate the average age of participants. Ages: {ages}",
-agent=coding_agent
+agent=coding_agent,
+expected_output="The average age of the participants."
)
# Create two crews and add tasks
@@ -116,4 +119,4 @@ async def async_multiple_crews():
# Run the async function
asyncio.run(async_multiple_crews())
```

View File

@@ -39,8 +39,7 @@ analysis_crew = Crew(
agents=[coding_agent],
tasks=[data_analysis_task],
verbose=True,
-memory=False,
-respect_context_window=True # enable by default
+memory=False
)
datasets = [

View File

@@ -1,7 +1,7 @@
---
-title: Agent Monitoring with Langfuse
+title: Langfuse Integration
description: Learn how to integrate Langfuse with CrewAI via OpenTelemetry using OpenLit
-icon: magnifying-glass-chart
+icon: vials
---
# Integrate Langfuse with CrewAI

View File

@@ -1,5 +1,5 @@
---
-title: Agent Monitoring with Langtrace
+title: Langtrace Integration
description: How to monitor cost, latency, and performance of CrewAI Agents using Langtrace, an external observability tool.
icon: chart-line
---

View File

@@ -1,5 +1,5 @@
---
-title: Agent Monitoring with MLflow
+title: MLflow Integration
description: Quickly start monitoring your Agents with MLflow.
icon: bars-staggered
---

View File

@@ -1,5 +1,5 @@
---
-title: Agent Monitoring with OpenLIT
+title: OpenLIT Integration
description: Quickly start monitoring your Agents in just a single line of code with OpenTelemetry.
icon: magnifying-glass-chart
---

View File

@@ -0,0 +1,129 @@
---
title: Opik Integration
description: Learn how to use Comet Opik to debug, evaluate, and monitor your CrewAI applications with comprehensive tracing, automated evaluations, and production-ready dashboards.
icon: meteor
---
# Opik Overview
With [Comet Opik](https://www.comet.com/docs/opik/), debug, evaluate, and monitor your LLM applications, RAG systems, and agentic workflows with comprehensive tracing, automated evaluations, and production-ready dashboards.
<Frame caption="Opik Agent Dashboard">
<img src="/images/opik-crewai-dashboard.png" alt="Opik agent monitoring example with CrewAI" />
</Frame>
Opik provides comprehensive support for every stage of your CrewAI application development:
- **Log Traces and Spans**: Automatically track LLM calls and application logic to debug and analyze development and production systems. Manually or programmatically annotate, view, and compare responses across projects.
- **Evaluate Your LLM Application's Performance**: Evaluate against a custom test set and run built-in evaluation metrics or define your own metrics in the SDK or UI.
- **Test Within Your CI/CD Pipeline**: Establish reliable performance baselines with Opik's LLM unit tests, built on PyTest. Run online evaluations for continuous monitoring in production.
- **Monitor & Analyze Production Data**: Understand your models' performance on unseen data in production and generate datasets for new dev iterations.
## Setup
Comet provides a hosted version of the Opik platform, or you can run the platform locally.
To use the hosted version, simply [create a free Comet account](https://www.comet.com/signup?utm_medium=github&utm_source=crewai_docs) and grab your API key.
To run the Opik platform locally, see our [installation guide](https://www.comet.com/docs/opik/self-host/overview/) for more information.
For this guide, we will use CrewAI's quickstart example.
<Steps>
<Step title="Install required packages">
```shell
pip install crewai crewai-tools opik --upgrade
```
</Step>
<Step title="Configure Opik">
```python
import opik
opik.configure(use_local=False)
```
</Step>
<Step title="Prepare environment">
First, we set up the API key for our LLM provider as an environment variable:
```python
import os
import getpass
if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")
```
</Step>
<Step title="Using CrewAI">
The first step is to create our project. We will use an example from CrewAI's documentation:
```python
from crewai import Agent, Crew, Task, Process

class YourCrewName:
    def agent_one(self) -> Agent:
        return Agent(
            role="Data Analyst",
            goal="Analyze data trends in the market",
            backstory="An experienced data analyst with a background in economics",
            verbose=True,
        )

    def agent_two(self) -> Agent:
        return Agent(
            role="Market Researcher",
            goal="Gather information on market dynamics",
            backstory="A diligent researcher with a keen eye for detail",
            verbose=True,
        )

    def task_one(self) -> Task:
        return Task(
            name="Collect Data Task",
            description="Collect recent market data and identify trends.",
            expected_output="A report summarizing key trends in the market.",
            agent=self.agent_one(),
        )

    def task_two(self) -> Task:
        return Task(
            name="Market Research Task",
            description="Research factors affecting market dynamics.",
            expected_output="An analysis of factors influencing the market.",
            agent=self.agent_two(),
        )

    def crew(self) -> Crew:
        return Crew(
            agents=[self.agent_one(), self.agent_two()],
            tasks=[self.task_one(), self.task_two()],
            process=Process.sequential,
            verbose=True,
        )
```
Now we can import Opik's tracker and run our crew:
```python
from opik.integrations.crewai import track_crewai
track_crewai(project_name="crewai-integration-demo")
my_crew = YourCrewName().crew()
result = my_crew.kickoff()
print(result)
```
After running your CrewAI application, visit the Opik app to view:
- LLM traces, spans, and their metadata
- Agent interactions and task execution flow
- Performance metrics like latency and token usage
- Evaluation metrics (built-in or custom)
</Step>
</Steps>
## Resources
- [🦉 Opik Documentation](https://www.comet.com/docs/opik/)
- [👉 Opik + CrewAI Colab](https://colab.research.google.com/github/comet-ml/opik/blob/main/apps/opik-documentation/documentation/docs/cookbook/crewai.ipynb)
- [🐦 X](https://x.com/cometml)
- [💬 Slack](https://slack.comet.com/)

View File

@@ -1,5 +1,5 @@
 ---
-title: Agent Monitoring with Portkey
+title: Portkey Integration
 description: How to use Portkey with CrewAI
 icon: key
 ---

View File

@@ -20,10 +20,8 @@ Here's an example of how to replay from a task:
 To use the replay feature, follow these steps:
 <Steps>
-<Step title="Open your terminal or command prompt.">
-</Step>
-<Step title="Navigate to the directory where your CrewAI project is located.">
-</Step>
+<Step title="Open your terminal or command prompt."></Step>
+<Step title="Navigate to the directory where your CrewAI project is located."></Step>
 <Step title="Run the following commands:">
 To view the latest kickoff task_ids use:
View File

@@ -0,0 +1,124 @@
---
title: Weave Integration
description: Learn how to use Weights & Biases (W&B) Weave to track, experiment with, evaluate, and improve your CrewAI applications.
icon: radar
---
# Weave Overview
[Weights & Biases (W&B) Weave](https://weave-docs.wandb.ai/) is a framework for tracking, experimenting with, evaluating, deploying, and improving LLM-based applications.
![Overview of W&B Weave CrewAI tracing usage](/images/weave-tracing.gif)
Weave provides comprehensive support for every stage of your CrewAI application development:
- **Tracing & Monitoring**: Automatically track LLM calls and application logic to debug and analyze production systems
- **Systematic Iteration**: Refine and iterate on prompts, datasets, and models
- **Evaluation**: Use custom or pre-built scorers to systematically assess and enhance agent performance
- **Guardrails**: Protect your agents with pre- and post-safeguards for content moderation and prompt safety
Weave automatically captures traces for your CrewAI applications, enabling you to monitor and analyze your agents' performance, interactions, and execution flow. This helps you build better evaluation datasets and optimize your agent workflows.
## Setup Instructions
<Steps>
<Step title="Install required packages">
```shell
pip install crewai weave
```
</Step>
<Step title="Set up W&B Account">
Sign up for a [Weights & Biases account](https://wandb.ai) if you haven't already. You'll need this to view your traces and metrics.
</Step>
<Step title="Initialize Weave in Your Application">
Add the following code to your application:
```python
import weave
# Initialize Weave with your project name
weave.init(project_name="crewai_demo")
```
After initialization, Weave will provide a URL where you can view your traces and metrics.
</Step>
<Step title="Create your Crews/Flows">
```python
from crewai import Agent, Task, Crew, LLM, Process

# Create an LLM with a temperature of 0 to ensure deterministic outputs
llm = LLM(model="gpt-4o", temperature=0)

# Create agents
researcher = Agent(
    role='Research Analyst',
    goal='Find and analyze the best investment opportunities',
    backstory='Expert in financial analysis and market research',
    llm=llm,
    verbose=True,
    allow_delegation=False,
)
writer = Agent(
    role='Report Writer',
    goal='Write clear and concise investment reports',
    backstory='Experienced in creating detailed financial reports',
    llm=llm,
    verbose=True,
    allow_delegation=False,
)

# Create tasks
research_task = Task(
    description='Deep research on the {topic}',
    expected_output='Comprehensive market data including key players, market size, and growth trends.',
    agent=researcher,
)
writing_task = Task(
    description='Write a detailed report based on the research',
    expected_output='The report should be easy to read and understand. Use bullet points where applicable.',
    agent=writer,
)

# Create a crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=True,
    process=Process.sequential,
)

# Run the crew
result = crew.kickoff(inputs={"topic": "AI in material science"})
print(result)
```
</Step>
<Step title="View Traces in Weave">
After running your CrewAI application, visit the Weave URL provided during initialization to view:
- LLM calls and their metadata
- Agent interactions and task execution flow
- Performance metrics like latency and token usage
- Any errors or issues that occurred during execution
<Frame caption="Weave Tracing Dashboard">
<img src="/images/weave-tracing.png" alt="Weave tracing example with CrewAI" />
</Frame>
</Step>
</Steps>
## Features
- Weave automatically captures all CrewAI operations: agent interactions and task executions; LLM calls with metadata and token usage; tool usage and results.
- The integration supports all CrewAI execution methods: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.
- Automatic tracing of all [crewAI-tools](https://github.com/crewAIInc/crewAI-tools).
- Flow feature support with decorator patching (`@start`, `@listen`, `@router`, `@or_`, `@and_`); see the first sketch below.
- Track custom guardrails passed to CrewAI `Task` with `@weave.op()`; see the second sketch below.
For detailed information on what's supported, visit the [Weave CrewAI documentation](https://weave-docs.wandb.ai/guides/integrations/crewai/#getting-started-with-flow).
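Two of the points above are easy to sketch. First, Flow tracing: a minimal Flow using the `Flow`/`@start`/`@listen` API from `crewai.flow.flow`; once `weave.init` has been called, these decorators are patched automatically. The steps themselves are hypothetical placeholders:
```python
import weave
from crewai.flow.flow import Flow, listen, start

weave.init(project_name="crewai_demo")


class GreetingFlow(Flow):
    @start()
    def pick_name(self):
        # Hypothetical first step; its return value is passed to the listener
        return "CrewAI"

    @listen(pick_name)
    def greet(self, name):
        return f"Hello, {name}!"


result = GreetingFlow().kickoff()
print(result)
```
Second, guardrail tracking: a CrewAI task guardrail is a callable that receives the `TaskOutput` and returns a `(passed, result)` tuple, and decorating it with `@weave.op()` surfaces it as its own op in the trace. The length check below is hypothetical, and `writer` is the agent from the earlier example:
```python
import weave
from crewai import Task
from crewai.tasks.task_output import TaskOutput


@weave.op()
def under_500_words(result: TaskOutput):
    # Hypothetical guardrail: accept only reports shorter than 500 words
    ok = len(result.raw.split()) < 500
    return (ok, result.raw if ok else "Report too long; rewrite it in under 500 words.")


writing_task = Task(
    description='Write a detailed report based on the research',
    expected_output='A readable report under 500 words.',
    agent=writer,
    guardrail=under_500_words,
)
```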
## Resources
- [📘 Weave Documentation](https://weave-docs.wandb.ai)
- [📊 Example Weave x CrewAI dashboard](https://wandb.ai/ayut/crewai_demo/weave/traces?cols=%7B%22wb_run_id%22%3Afalse%2C%22attributes.weave.client_version%22%3Afalse%2C%22attributes.weave.os_name%22%3Afalse%2C%22attributes.weave.os_release%22%3Afalse%2C%22attributes.weave.os_version%22%3Afalse%2C%22attributes.weave.source%22%3Afalse%2C%22attributes.weave.sys_version%22%3Afalse%7D&peekPath=%2Fayut%2Fcrewai_demo%2Fcalls%2F0195c838-38cb-71a2-8a15-651ecddf9d89)
- [🐦 X](https://x.com/weave_wb)
