Compare commits

...

39 Commits

Author SHA1 Message Date
Lorenze Jay
f3c17a249b feat: Introduce production-ready Flows and Crews architecture with ne… (#4003)
* feat: Introduce production-ready Flows and Crews architecture with new runner and updated documentation across multiple languages.

* ko and pt-br for tracing missing links

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-12-31 14:29:42 -08:00
Lorenze Jay
467ee2917e Improve EventListener and TraceCollectionListener for improved event… (#4160)
* Refactor EventListener and TraceCollectionListener for improved event handling

- Removed unused threading and method branches from EventListener to simplify the code.
- Updated event handling methods in EventListener to use new formatter methods for better clarity and consistency.
- Refactored TraceCollectionListener to eliminate unnecessary parameters in formatter calls, enhancing readability.
- Simplified ConsoleFormatter by removing outdated tree management methods and focusing on panel-based output for status updates.
- Enhanced ToolUsage to track run attempts for better tool usage metrics.

* clearer knowledge retrieval and dropped some redundancies

* Refactor EventListener and ConsoleFormatter for improved clarity and consistency

- Removed the MCPToolExecutionCompletedEvent handler from EventListener to streamline event processing.
- Updated ConsoleFormatter to enhance output formatting by adding line breaks for better readability in status content.
- Renamed status messages for MCP Tool execution to provide clearer context during tool operations.

* fix run attempt incrementation

* task name consistency

* memory events consistency

* ensure hitl works

* linting
2025-12-30 11:36:31 -08:00
Lorenze Jay
b9dd166a6b Lorenze/agent executor flow pattern (#3975)
* WIP gh pr refactor: update agent executor handling and introduce flow-based executor

* wip

* refactor: clean up comments and improve code clarity in agent executor flow

- Removed outdated comments and unnecessary explanations in  and  classes to enhance code readability.
- Simplified parameter updates in the agent executor to avoid confusion regarding executor recreation.
- Improved clarity in the  method to ensure proper handling of non-final answers without raising errors.

* bumping pytest-randomly numpy

* also bump versions of anthropic sdk

* ensure flow logs are not passed if it's on the executor

* revert anthropic bump

* fix

* refactor: update dependency markers in uv.lock for platform compatibility

- Enhanced dependency markers for , , , and others to ensure compatibility across different platforms (Linux, Darwin, and architecture-specific conditions).
- Removed unnecessary event emission in the  class during kickoff.
- Cleaned up commented-out code in the  class for better readability and maintainability.

* drop duplicate

* test: enhance agent executor creation and stop word assertions

- Added calls to create_agent_executor in multiple test cases to ensure proper agent execution setup.
- Updated assertions for stop words in the agent tests to remove unnecessary checks and improve clarity.
- Ensured consistency in task handling by invoking create_agent_executor with the appropriate task parameter.

* refactor: reorganize agent executor imports and introduce CrewAgentExecutorFlow

- Removed the old import of CrewAgentExecutorFlow and replaced it with the new import from the experimental module.
- Updated relevant references in the codebase to ensure compatibility with the new structure.
- Enhanced the organization of imports in core.py and base_agent.py for better clarity and maintainability.

* updating name

* dropped usage of printer here for rich console and dropped non-added value logging

* address i18n

* Enhance concurrency control in CrewAgentExecutorFlow by introducing a threading lock to prevent concurrent executions. This change ensures that the executor instance cannot be invoked while already running, improving stability and reliability during flow execution.

* string literal returns

* string literal returns

* Enhance CrewAgentExecutor initialization by allowing optional i18n parameter for improved internationalization support. This change ensures that the executor can utilize a provided i18n instance or fallback to the default, enhancing flexibility in multilingual contexts.

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-12-28 10:21:32 -08:00
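The concurrency-control change above is a standard non-blocking lock guard. A generic sketch of the pattern (illustrative only; not the actual `CrewAgentExecutorFlow` code):

```python
import threading


class GuardedExecutor:
    """Rejects re-entrant invocations instead of queuing them."""

    def __init__(self) -> None:
        self._lock = threading.Lock()

    def invoke(self) -> str:
        # Fail fast if this instance is already running in another thread.
        if not self._lock.acquire(blocking=False):
            raise RuntimeError("Executor is already running")
        try:
            return "result"  # stand-in for the real flow execution
        finally:
            self._lock.release()
```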
João Moura
c73b36a4c5 Adding HITL for Flows (#4143)
* feat: introduce human feedback events and decorator for flow methods

- Added HumanFeedbackRequestedEvent and HumanFeedbackReceivedEvent classes to handle human feedback interactions within flows.
- Implemented the @human_feedback decorator to facilitate human-in-the-loop workflows, allowing for feedback collection and routing based on responses.
- Enhanced Flow class to store human feedback history and manage feedback outcomes.
- Updated flow wrappers to preserve attributes from methods decorated with @human_feedback.
- Added integration and unit tests for the new human feedback functionality, ensuring proper validation and routing behavior.

* adding deployment docs

* New docs

* fix printer

* wrong change

* Adding Async Support
feat: enhance human feedback support in flows

- Updated the @human_feedback decorator to use 'message' parameter instead of 'request' for clarity.
- Introduced new FlowPausedEvent and MethodExecutionPausedEvent to handle flow and method pauses during human feedback.
- Added ConsoleProvider for synchronous feedback collection and integrated async feedback capabilities.
- Implemented SQLite persistence for managing pending feedback context.
- Expanded documentation to include examples of async human feedback usage and best practices.

* linter

* fix

* migrating off printer

* updating docs

* new tests

* doc update
2025-12-25 21:04:10 -03:00
Lucas Gomide
0c020991c4 docs: fix wrong trigger name in sample docs (#4147)
2025-12-23 08:41:51 -05:00
Heitor Carvalho
be70a04153 fix: correct error fetching for workos login polling (#4124)
2025-12-19 20:00:26 -03:00
Greyson LaLonde
0c359f4df8 feat: bump versions to 1.7.2
2025-12-19 15:47:00 -05:00
Lucas Gomide
fe288dbe73 Resolving some connection issues (#4129)
* fix: use CREWAI_PLUS_URL env var in precedence over PlusAPI configured value

* feat: bypass TLS certificate verification when calling platform

* test: fix test
2025-12-19 10:15:20 -05:00
Heitor Carvalho
dc63bc2319 chore: remove CREWAI_BASE_URL and fetch url from settings instead
2025-12-18 15:41:38 -03:00
Greyson LaLonde
8d0effafec chore: add commitizen pre-commit hook
2025-12-17 15:49:24 -05:00
Greyson LaLonde
1cdbe79b34 chore: add deployment action, trigger for releases 2025-12-17 08:40:14 -05:00
Lorenze Jay
84328d9311 fixed api-reference/status docs page (#4109)
2025-12-16 15:31:30 -08:00
Lorenze Jay
88d3c0fa97 feat: bump versions to 1.7.1 (#4092)
* feat: bump versions to 1.7.1

* bump projects
2025-12-15 21:51:53 -08:00
Matt Aitchison
75ff7dce0c feat: add --no-commit flag to bump command (#4087)
Allows updating version files without creating a commit, branch, or PR.
2025-12-15 15:32:37 -06:00
Greyson LaLonde
38b0b125d3 feat: use json schema for tool argument serialization
- Replace Python representation with JsonSchema for tool arguments
  - Remove deprecated PydanticSchemaParser in favor of direct schema generation
  - Add handling for VAR_POSITIONAL and VAR_KEYWORD parameters
  - Improve tool argument schema collection
2025-12-11 15:50:19 -05:00
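The schema change above replaces a Python repr of tool arguments with a JSON schema. A minimal sketch of the underlying idea using Pydantic's built-in generator (the args model is illustrative, not the project's code):

```python
from pydantic import BaseModel


class SearchArgs(BaseModel):
    """Illustrative args model for a search tool."""

    query: str
    limit: int = 10


# model_json_schema() returns a plain, serializable JSON Schema dict.
schema = SearchArgs.model_json_schema()
print(schema["required"])  # ['query']
```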
Vini Brasil
9bd8ad51f7 Add docs for AOP Deploy API (#4076)
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-12-11 15:58:17 -03:00
Heitor Carvalho
0632a054ca chore: display error message from response when tool repository login fails (#4075)
2025-12-11 14:56:00 -03:00
Dragos Ciupureanu
feec6b440e fix: gracefully terminate the future when executing a task async
* fix: gracefully terminate the future when executing a task async

* core: add unit test

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-12-11 12:03:33 -05:00
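A generic sketch of the graceful-termination pattern described above (illustrative; not the PR's exact code):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError


def run_with_timeout(fn, timeout: float):
    """Run fn in a worker thread, cancelling the future on timeout."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn)
        try:
            return future.result(timeout=timeout)
        except TimeoutError:
            # Best effort: only a still-pending future can be cancelled;
            # a running callable finishes before the pool shuts down.
            future.cancel()
            raise
```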
Greyson LaLonde
e43c7debbd fix: add idx for task ordering, tests 2025-12-11 10:18:15 -05:00
Greyson LaLonde
8ef9fe2cab fix: check platform compat for windows signals 2025-12-11 08:38:19 -05:00
Alex Larionov
807f97114f fix: set rpm controller timer as daemon to prevent process hang
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-12-11 02:59:55 -05:00
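The fix above is the classic daemon-timer pattern: a non-daemon `threading.Timer` thread keeps the interpreter alive until it fires. A generic illustration (the callback is hypothetical):

```python
import threading


def _reset_request_count() -> None:
    print("resetting RPM counter")


timer = threading.Timer(60.0, _reset_request_count)
timer.daemon = True  # daemon threads no longer block interpreter shutdown
timer.start()
```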
Greyson LaLonde
bdafe0fac7 fix: ensure token usage recording, validate response model on stream
2025-12-10 20:32:10 -05:00
Greyson LaLonde
8e99d490b0 chore: add translated docs for async
* chore: add translated docs for async

* chore: add missing pages
2025-12-10 14:17:10 -05:00
Gil Feig
34b909367b Add docs for the agent handler connector (#4012)
* Add docs for the agent handler connector

* Fix links

* Update docs
2025-12-09 15:49:52 -08:00
Greyson LaLonde
22684b513e chore: add docs on native async
2025-12-08 20:49:18 -05:00
Lorenze Jay
3e3b9df761 feat: bump versions to 1.7.0 (#4051)
* feat: bump versions to 1.7.0

* bump
2025-12-08 16:42:12 -08:00
Greyson LaLonde
177294f588 fix: ensure nonetypes are not passed to otel (#4052)
* fix: ensure nonetypes are not passed to otel

* fix: ensure attribute is always set in span
2025-12-08 16:27:42 -08:00
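The pattern behind this fix is filtering out `None` values before attaching span attributes, since OpenTelemetry attribute values must not be `None`. A generic illustration (the helper name is hypothetical):

```python
from typing import Any

from opentelemetry.trace import Span


def set_safe_attributes(span: Span, attributes: dict[str, Any]) -> None:
    """Attach only non-None values to the span."""
    for key, value in attributes.items():
        if value is not None:
            span.set_attribute(key, value)
```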
Greyson LaLonde
beef712646 fix: ensure token store file ops do not deadlock
* fix: ensure token store file ops do not deadlock
* chore: update test method reference
2025-12-08 19:04:21 -05:00
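A common way to keep file-backed token stores deadlock-free is to bound lock acquisition with a timeout. A generic sketch using the `filelock` package (illustrative; not necessarily this commit's implementation):

```python
from filelock import FileLock, Timeout

lock = FileLock("token_store.json.lock")
try:
    # A bounded wait turns a would-be deadlock into a recoverable error.
    with lock.acquire(timeout=5):
        ...  # read or write the token file
except Timeout:
    raise RuntimeError("Token store is locked by another process")
```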
Lorenze Jay
6125b866fd supporting thinking for anthropic models (#3978)
* supporting thinking for anthropic models

* drop comments here

* thinking and tool calling support

* fix: properly mock tool use and text block types in Anthropic tests

- Updated the test for the Anthropic tool use conversation flow to include type attributes for mocked ToolUseBlock and text blocks, ensuring accurate simulation of tool interactions during testing.

* feat: add AnthropicThinkingConfig for enhanced thinking capabilities

This update introduces the AnthropicThinkingConfig class to manage thinking parameters for the Anthropic completion model. The LLM and AnthropicCompletion classes have been updated to utilize this new configuration. Additionally, new test cassettes have been added to validate the functionality of thinking blocks across interactions.
2025-12-08 15:34:54 -08:00
Greyson LaLonde
f2f994612c fix: ensure otel span is closed
2025-12-05 13:23:26 -05:00
Greyson LaLonde
7fff2b654c fix: use HuggingFaceEmbeddingFunction for embeddings, update keys and add tests (#4005)
2025-12-04 15:05:50 -08:00
Greyson LaLonde
34e09162ba feat: async flow kickoff
Introduces akickoff alias to flows, improves tool decorator typing, ensures _run backward compatibility, updates docs and docstrings, adds tests, and removes duplicated logic.
2025-12-04 17:08:08 -05:00
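Based on the description above, `akickoff` is the native-async entry point for flows. A minimal usage sketch (the flow itself is illustrative):

```python
import asyncio

from crewai.flow.flow import Flow, start


class GreetFlow(Flow):
    @start()
    def say_hello(self) -> str:
        return "hello"


async def main() -> None:
    # Native async alias introduced by this commit
    result = await GreetFlow().akickoff()
    print(result)


asyncio.run(main())
```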
Greyson LaLonde
24d1fad7ab feat: async crew support
Native async crew execution. Improves tool decorator typing, ensures _run backward compatibility, updates docs and docstrings, adds tests, and removes duplicated logic.
2025-12-04 16:53:19 -05:00
Greyson LaLonde
9b8f31fa07 feat: async task support (#4024)
* feat: add async support for tools, add async tool tests

* chore: improve tool decorator typing

* fix: ensure _run backward compat

* chore: update docs

* chore: make docstrings a little more readable

* feat: add async execution support to agent executor

* chore: add tests

* feat: add aiosqlite dep; regenerate lockfile

* feat: add async ops to memory feat; create tests

* feat: async knowledge support; add tests

* feat: add async task support

* chore: dry out duplicate logic
2025-12-04 13:34:29 -08:00
Greyson LaLonde
d898d7c02c feat: async knowledge support (#4023)
* feat: add async support for tools, add async tool tests

* chore: improve tool decorator typing

* fix: ensure _run backward compat

* chore: update docs

* chore: make docstrings a little more readable

* feat: add async execution support to agent executor

* chore: add tests

* feat: add aiosqlite dep; regenerate lockfile

* feat: add async ops to memory feat; create tests

* feat: async knowledge support; add tests

* chore: regenerate lockfile
2025-12-04 10:27:52 -08:00
Greyson LaLonde
f04c40babf feat: async memory support
Adds async support for tools with tests, async execution in the agent executor, and async operations for memory (with aiosqlite). Improves tool decorator typing, ensures _run backward compatibility, updates docs and docstrings, adds tests, and regenerates lockfiles.
2025-12-04 12:54:49 -05:00
Lorenze Jay
c456e5c5fa Lorenze/ensure hooks work with lite agents flows (#3981)
* liteagent support hooks

* wip llm.call hooks work - needs tests for this

* fix tests

* fixed more

* more tool hooks test cassettes
2025-12-04 09:38:39 -08:00
Greyson LaLonde
633e279b51 feat: add async support for tools and agent executor; improve typing and docs
Introduces async tool support with new tests, adds async execution to the agent executor, improves tool decorator typing, ensures _run backward compatibility, updates docs and docstrings, and adds additional tests.
2025-12-03 20:13:03 -05:00
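Per the description above, the `@tool` decorator now accepts coroutine functions. A minimal sketch (the tool body is illustrative):

```python
import asyncio

from crewai.tools import tool


@tool("Fetch Greeting")
async def fetch_greeting(name: str) -> str:
    """Return a greeting for the given name."""
    await asyncio.sleep(0.1)  # stand-in for real async I/O
    return f"Hello, {name}!"
```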
Greyson LaLonde
a25778974d feat: a2a extensions API and async agent card caching; fix task propagation & streaming
Adds initial extensions API (with registry temporarily no-op), introduces aiocache for async caching, ensures reference task IDs propagate correctly, fixes streamed response model handling, updates streaming tests, and regenerates lockfiles.
2025-12-03 16:29:48 -05:00
498 changed files with 43068 additions and 69448 deletions

View File

@@ -1,9 +1,14 @@
name: Publish to PyPI

on:
  release:
    types: [ published ]
  repository_dispatch:
    types: [deployment-tests-passed]
  workflow_dispatch:
    inputs:
      release_tag:
        description: 'Release tag to publish'
        required: false
        type: string

jobs:
  build:
@@ -12,7 +17,21 @@ jobs:
    permissions:
      contents: read
    steps:
      - name: Determine release tag
        id: release
        run: |
          # Priority: workflow_dispatch input > repository_dispatch payload > default branch
          if [ -n "${{ inputs.release_tag }}" ]; then
            echo "tag=${{ inputs.release_tag }}" >> $GITHUB_OUTPUT
          elif [ -n "${{ github.event.client_payload.release_tag }}" ]; then
            echo "tag=${{ github.event.client_payload.release_tag }}" >> $GITHUB_OUTPUT
          else
            echo "tag=" >> $GITHUB_OUTPUT
          fi
      - uses: actions/checkout@v4
        with:
          ref: ${{ steps.release.outputs.tag || github.ref }}
      - name: Set up Python
        uses: actions/setup-python@v5

View File

@@ -0,0 +1,18 @@
name: Trigger Deployment Tests

on:
  release:
    types: [published]

jobs:
  trigger:
    name: Trigger deployment tests
    runs-on: ubuntu-latest
    steps:
      - name: Trigger deployment tests
        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.CREWAI_DEPLOYMENTS_PAT }}
          repository: ${{ secrets.CREWAI_DEPLOYMENTS_REPOSITORY }}
          event-type: crewai-release
          client-payload: '{"release_tag": "${{ github.event.release.tag_name }}", "release_name": "${{ github.event.release.name }}"}'

View File

@@ -24,4 +24,10 @@ repos:
    rev: 0.9.3
    hooks:
      - id: uv-lock
  - repo: https://github.com/commitizen-tools/commitizen
    rev: v4.10.1
    hooks:
      - id: commitizen
      - id: commitizen-branch
        stages: [ pre-push ]

View File

@@ -57,7 +57,7 @@
 > It empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario.
 - **CrewAI Crews**: Optimize for autonomy and collaborative intelligence.
-- **CrewAI Flows**: Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively
+- **CrewAI Flows**: The **enterprise and production architecture** for building and deploying multi-agent systems. Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively
 With over 100,000 developers certified through our community courses at [learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
 standard for enterprise-ready AI automation.
@@ -166,13 +166,13 @@ Ensure you have Python >=3.10 <3.14 installed on your system. CrewAI uses [UV](h
 First, install CrewAI:
 ```shell
-pip install crewai
+uv pip install crewai
 ```
 If you want to install the 'crewai' package along with its optional features that include additional tools for agents, you can do so by using the following command:
 ```shell
-pip install 'crewai[tools]'
+uv pip install 'crewai[tools]'
 ```
 The command above installs the basic package and also adds extra components which require more dependencies to function.
@@ -185,14 +185,14 @@ If you encounter issues during installation or usage, here are some common solut
 1. **ModuleNotFoundError: No module named 'tiktoken'**
-   - Install tiktoken explicitly: `pip install 'crewai[embeddings]'`
-   - If using embedchain or other tools: `pip install 'crewai[tools]'`
+   - Install tiktoken explicitly: `uv pip install 'crewai[embeddings]'`
+   - If using embedchain or other tools: `uv pip install 'crewai[tools]'`
 2. **Failed building wheel for tiktoken**
    - Ensure Rust compiler is installed (see installation steps above)
    - For Windows: Verify Visual C++ Build Tools are installed
-   - Try upgrading pip: `pip install --upgrade pip`
-   - If issues persist, use a pre-built wheel: `pip install tiktoken --prefer-binary`
+   - Try upgrading pip: `uv pip install --upgrade pip`
+   - If issues persist, use a pre-built wheel: `uv pip install tiktoken --prefer-binary`
 ### 2. Setting Up Your Crew with the YAML Configuration
@@ -611,7 +611,7 @@ uv build
 ### Installing Locally
 ```bash
-pip install dist/*.tar.gz
+uv pip install dist/*.tar.gz
 ```
 ## Telemetry
@@ -687,13 +687,13 @@ A: CrewAI is a standalone, lean, and fast Python framework built specifically fo
 A: Install CrewAI using pip:
 ```shell
-pip install crewai
+uv pip install crewai
 ```
 For additional tools, use:
 ```shell
-pip install 'crewai[tools]'
+uv pip install 'crewai[tools]'
 ```
 ### Q: Does CrewAI depend on LangChain?

View File

@@ -136,6 +136,10 @@ def _filter_request_headers(request: Request) -> Request: # type: ignore[no-any
def _filter_response_headers(response: dict[str, Any]) -> dict[str, Any]:
    """Filter sensitive headers from response before recording."""
    # Remove Content-Encoding to prevent decompression issues on replay
    for encoding_header in ["Content-Encoding", "content-encoding"]:
        response["headers"].pop(encoding_header, None)
    for header_name, replacement in HEADERS_TO_FILTER.items():
        for variant in [header_name, header_name.upper(), header_name.title()]:
            if variant in response["headers"]:

View File

@@ -116,6 +116,7 @@
"en/concepts/tasks",
"en/concepts/crews",
"en/concepts/flows",
"en/concepts/production-architecture",
"en/concepts/knowledge",
"en/concepts/llms",
"en/concepts/processes",
@@ -253,7 +254,8 @@
"pages": [
"en/tools/integration/overview",
"en/tools/integration/bedrockinvokeagenttool",
"en/tools/integration/crewaiautomationtool"
"en/tools/integration/crewaiautomationtool",
"en/tools/integration/mergeagenthandlertool"
]
},
{
@@ -307,6 +309,7 @@
"en/learn/hierarchical-process",
"en/learn/human-input-on-execution",
"en/learn/human-in-the-loop",
"en/learn/human-feedback-in-flows",
"en/learn/kickoff-async",
"en/learn/kickoff-for-each",
"en/learn/llm-connections",
@@ -557,6 +560,7 @@
"pt-BR/concepts/tasks",
"pt-BR/concepts/crews",
"pt-BR/concepts/flows",
"pt-BR/concepts/production-architecture",
"pt-BR/concepts/knowledge",
"pt-BR/concepts/llms",
"pt-BR/concepts/processes",
@@ -701,6 +705,7 @@
{
"group": "Observabilidade",
"pages": [
"pt-BR/observability/tracing",
"pt-BR/observability/overview",
"pt-BR/observability/arize-phoenix",
"pt-BR/observability/braintrust",
@@ -734,6 +739,7 @@
"pt-BR/learn/hierarchical-process",
"pt-BR/learn/human-input-on-execution",
"pt-BR/learn/human-in-the-loop",
"pt-BR/learn/human-feedback-in-flows",
"pt-BR/learn/kickoff-async",
"pt-BR/learn/kickoff-for-each",
"pt-BR/learn/llm-connections",
@@ -981,6 +987,7 @@
"ko/concepts/tasks",
"ko/concepts/crews",
"ko/concepts/flows",
"ko/concepts/production-architecture",
"ko/concepts/knowledge",
"ko/concepts/llms",
"ko/concepts/processes",
@@ -1137,6 +1144,7 @@
{
"group": "Observability",
"pages": [
"ko/observability/tracing",
"ko/observability/overview",
"ko/observability/arize-phoenix",
"ko/observability/braintrust",
@@ -1170,6 +1178,7 @@
"ko/learn/hierarchical-process",
"ko/learn/human-input-on-execution",
"ko/learn/human-in-the-loop",
"ko/learn/human-feedback-in-flows",
"ko/learn/kickoff-async",
"ko/learn/kickoff-for-each",
"ko/learn/llm-connections",

View File

@@ -16,16 +16,17 @@ Welcome to the CrewAI AOP API reference. This API allows you to programmatically
 Navigate to your crew's detail page in the CrewAI AOP dashboard and copy your Bearer Token from the Status tab.
 </Step>
 <Step title="Discover Required Inputs">
 Use the `GET /inputs` endpoint to see what parameters your crew expects.
 </Step>
 <Step title="Start a Crew Execution">
 Call `POST /kickoff` with your inputs to start the crew execution and receive a `kickoff_id`.
 </Step>
 <Step title="Monitor Progress">
-Use `GET /status/{kickoff_id}` to check execution status and retrieve results.
+Use `GET /{kickoff_id}/status` to check execution status and retrieve results.
 </Step>
 </Steps>
@@ -40,13 +41,14 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
 ### Token Types
 | Token Type | Scope | Use Case |
 | :-------------------- | :------------------------ | :----------------------------------------------------------- |
 | **Bearer Token** | Organization-level access | Full crew operations, ideal for server-to-server integration |
 | **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |
 <Tip>
 You can find both token types in the Status tab of your crew's detail page in the CrewAI AOP dashboard.
 </Tip>
 ## Base URL
@@ -63,29 +65,33 @@ Replace `your-crew-name` with your actual crew's URL from the dashboard.
 1. **Discovery**: Call `GET /inputs` to understand what your crew needs
 2. **Execution**: Submit inputs via `POST /kickoff` to start processing
-3. **Monitoring**: Poll `GET /status/{kickoff_id}` until completion
+3. **Monitoring**: Poll `GET /{kickoff_id}/status` until completion
 4. **Results**: Extract the final output from the completed response
 ## Error Handling
 The API uses standard HTTP status codes:
 | Code | Meaning |
 | ----- | :----------------------------------------- |
 | `200` | Success |
 | `400` | Bad Request - Invalid input format |
 | `401` | Unauthorized - Invalid bearer token |
 | `404` | Not Found - Resource doesn't exist |
 | `422` | Validation Error - Missing required inputs |
 | `500` | Server Error - Contact support |
 ## Interactive Testing
 <Info>
 **Why no "Send" button?** Since each CrewAI AOP user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons.
 </Info>
 Each endpoint page shows you:
 - ✅ **Exact request format** with all parameters
 - ✅ **Response examples** for success and error cases
 - ✅ **Code samples** in multiple languages (cURL, Python, JavaScript, etc.)
@@ -103,6 +109,7 @@ Each endpoint page shows you:
 </CardGroup>
 **Example workflow:**
 1. **Copy this cURL example** from any endpoint page
 2. **Replace `your-actual-crew-name.crewai.com`** with your real crew URL
 3. **Replace the Bearer token** with your real token from the dashboard
@@ -111,10 +118,18 @@ Each endpoint page shows you:
 ## Need Help?
 <CardGroup cols={2}>
 <Card title="Enterprise Support" icon="headset" href="mailto:support@crewai.com">
 Get help with API integration and troubleshooting
 </Card>
 <Card title="Enterprise Dashboard" icon="chart-line" href="https://app.crewai.com">
 Manage your crews and view execution logs
 </Card>
 </CardGroup>

View File

@@ -1,8 +1,6 @@
 ---
-title: "GET /status/{kickoff_id}"
+title: "GET /{kickoff_id}/status"
 description: "Get execution status"
-openapi: "/enterprise-api.en.yaml GET /status/{kickoff_id}"
+openapi: "/enterprise-api.en.yaml GET /{kickoff_id}/status"
 mode: "wide"
 ---

View File

@@ -307,12 +307,27 @@ print(result)
 ### Different Ways to Kick Off a Crew
-Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.
+Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process.
+#### Synchronous Methods
 - `kickoff()`: Starts the execution process according to the defined process flow.
 - `kickoff_for_each()`: Executes tasks sequentially for each provided input event or item in the collection.
-- `kickoff_async()`: Initiates the workflow asynchronously.
-- `kickoff_for_each_async()`: Executes tasks concurrently for each provided input event or item, leveraging asynchronous processing.
+#### Asynchronous Methods
+CrewAI offers two approaches for async execution:
+| Method | Type | Description |
+|--------|------|-------------|
+| `akickoff()` | Native async | True async/await throughout the entire execution chain |
+| `akickoff_for_each()` | Native async | Native async execution for each input in a list |
+| `kickoff_async()` | Thread-based | Wraps synchronous execution in `asyncio.to_thread` |
+| `kickoff_for_each_async()` | Thread-based | Thread-based async for each input in a list |
+<Note>
+For high-concurrency workloads, `akickoff()` and `akickoff_for_each()` are recommended as they use native async for task execution, memory operations, and knowledge retrieval.
+</Note>
 ```python Code
 # Start the crew's task execution
@@ -325,19 +340,30 @@ results = my_crew.kickoff_for_each(inputs=inputs_array)
 for result in results:
     print(result)
-# Example of using kickoff_async
+# Example of using native async with akickoff
 inputs = {'topic': 'AI in healthcare'}
+async_result = await my_crew.akickoff(inputs=inputs)
+print(async_result)
+# Example of using native async with akickoff_for_each
+inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
+async_results = await my_crew.akickoff_for_each(inputs=inputs_array)
+for async_result in async_results:
+    print(async_result)
+# Example of using thread-based kickoff_async
+inputs = {'topic': 'AI in healthcare'}
 async_result = await my_crew.kickoff_async(inputs=inputs)
 print(async_result)
-# Example of using kickoff_for_each_async
+# Example of using thread-based kickoff_for_each_async
 inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
 async_results = await my_crew.kickoff_for_each_async(inputs=inputs_array)
 for async_result in async_results:
     print(async_result)
 ```
-These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs.
+These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs. For detailed async examples, see the [Kickoff Crew Asynchronously](/en/learn/kickoff-async) guide.
### Streaming Crew Execution

View File

@@ -572,6 +572,55 @@ The `third_method` and `fourth_method` listen to the output of the `second_metho
When you run this Flow, the output will change based on the random boolean value generated by the `start_method`.
### Human in the Loop (human feedback)
The `@human_feedback` decorator enables human-in-the-loop workflows by pausing flow execution to collect feedback from a human. This is useful for approval gates, quality review, and decision points that require human judgment.
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult

class ReviewFlow(Flow):
    @start()
    @human_feedback(
        message="Do you approve this content?",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def generate_content(self):
        return "Content to be reviewed..."

    @listen("approved")
    def on_approval(self, result: HumanFeedbackResult):
        print(f"Approved! Feedback: {result.feedback}")

    @listen("rejected")
    def on_rejection(self, result: HumanFeedbackResult):
        print(f"Rejected. Reason: {result.feedback}")
```
When `emit` is specified, the human's free-form feedback is interpreted by an LLM and collapsed into one of the specified outcomes, which then triggers the corresponding `@listen` decorator.
You can also use `@human_feedback` without routing to simply collect feedback:
```python Code
@start()
@human_feedback(message="Any comments on this output?")
def my_method(self):
    return "Output for review"

@listen(my_method)
def next_step(self, result: HumanFeedbackResult):
    # Access feedback via result.feedback
    # Access original output via result.output
    pass
```
Access all feedback collected during a flow via `self.last_human_feedback` (most recent) or `self.human_feedback_history` (all feedback as a list).
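For example, a listener can read back what was collected so far (a minimal sketch, assuming the `ReviewFlow` above and that both accessors return `HumanFeedbackResult` entries):

```python Code
@listen("approved")
def audit_feedback(self, result: HumanFeedbackResult):
    print(self.last_human_feedback.feedback)  # most recent entry
    for entry in self.human_feedback_history:  # all entries, in order
        print(entry.feedback)
```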
For a complete guide on human feedback in flows, including **async/non-blocking feedback** with custom providers (Slack, webhooks, etc.), see [Human Feedback in Flows](/en/learn/human-feedback-in-flows).
## Adding Agents to Flows
Agents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use an Agent within a flow to perform market research:

View File

@@ -283,11 +283,54 @@ In this section, you'll find detailed examples that help you select, configure,
)
```
**Extended Thinking (Claude Sonnet 4 and Beyond):**
CrewAI supports Anthropic's Extended Thinking feature, which allows Claude to think through problems in a more human-like way before responding. This is particularly useful for complex reasoning, analysis, and problem-solving tasks.
```python Code
from crewai import LLM

# Enable extended thinking with default settings
llm = LLM(
    model="anthropic/claude-sonnet-4",
    thinking={"type": "enabled"},
    max_tokens=10000
)

# Configure thinking with budget control
llm = LLM(
    model="anthropic/claude-sonnet-4",
    thinking={
        "type": "enabled",
        "budget_tokens": 5000  # Limit thinking tokens
    },
    max_tokens=10000
)
```
**Thinking Configuration Options:**
- `type`: Set to `"enabled"` to activate extended thinking mode
- `budget_tokens` (optional): Maximum tokens to use for thinking (helps control costs)
**Models Supporting Extended Thinking:**
- `claude-sonnet-4` and newer models
- `claude-3-7-sonnet` (with extended thinking capabilities)
**When to Use Extended Thinking:**
- Complex reasoning and multi-step problem solving
- Mathematical calculations and proofs
- Code analysis and debugging
- Strategic planning and decision making
- Research and analytical tasks
**Note:** Extended thinking consumes additional tokens but can significantly improve response quality for complex tasks.
**Supported Environment Variables:**
- `ANTHROPIC_API_KEY`: Your Anthropic API key (required)
**Features:**
- Native tool use support for Claude 3+ models
- Extended Thinking support for Claude Sonnet 4+
- Streaming support for real-time responses
- Automatic system message handling
- Stop sequences for controlled output
@@ -305,6 +348,7 @@ In this section, you'll find detailed examples that help you select, configure,
| Model | Context Window | Best For |
|------------------------------|----------------|-----------------------------------------------|
| claude-sonnet-4 | 200,000 tokens | Latest with extended thinking capabilities |
| claude-3-7-sonnet | 200,000 tokens | Advanced reasoning and agentic tasks |
| claude-3-5-sonnet-20241022 | 200,000 tokens | Latest Sonnet with best performance |
| claude-3-5-haiku | 200,000 tokens | Fast, compact model for quick responses |

View File

@@ -515,8 +515,7 @@ crew = Crew(
         "provider": "huggingface",
         "config": {
             "api_key": "your-hf-token", # Optional for public models
-            "model": "sentence-transformers/all-MiniLM-L6-v2",
-            "api_url": "https://api-inference.huggingface.co" # or your custom endpoint
+            "model": "sentence-transformers/all-MiniLM-L6-v2"
         }
     }
 )

View File

@@ -0,0 +1,154 @@
---
title: Production Architecture
description: Best practices for building production-ready AI applications with CrewAI
icon: server
mode: "wide"
---
# The Flow-First Mindset
When building production AI applications with CrewAI, **we recommend starting with a Flow**.
While it's possible to run individual Crews or Agents, wrapping them in a Flow provides the necessary structure for a robust, scalable application.
## Why Flows?
1. **State Management**: Flows provide a built-in way to manage state across different steps of your application. This is crucial for passing data between Crews, maintaining context, and handling user inputs.
2. **Control**: Flows allow you to define precise execution paths, including loops, conditionals, and branching logic. This is essential for handling edge cases and ensuring your application behaves predictably.
3. **Observability**: Flows provide a clear structure that makes it easier to trace execution, debug issues, and monitor performance. We recommend using [CrewAI Tracing](/en/observability/tracing) for detailed insights. Simply run `crewai login` to enable free observability features.
## The Architecture
A typical production CrewAI application looks like this:
```mermaid
graph TD
Start((Start)) --> Flow[Flow Orchestrator]
Flow --> State{State Management}
State --> Step1[Step 1: Data Gathering]
Step1 --> Crew1[Research Crew]
Crew1 --> State
State --> Step2{Condition Check}
Step2 -- "Valid" --> Step3[Step 3: Execution]
Step3 --> Crew2[Action Crew]
Step2 -- "Invalid" --> End((End))
Crew2 --> End
```
### 1. The Flow Class
Your `Flow` class is the entry point. It defines the state schema and the methods that execute your logic.
```python
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel

class AppState(BaseModel):
    user_input: str = ""
    research_results: str = ""
    final_report: str = ""

class ProductionFlow(Flow[AppState]):
    @start()
    def gather_input(self):
        # ... logic to get input ...
        pass

    @listen(gather_input)
    def run_research_crew(self):
        # ... trigger a Crew ...
        pass
```
### 2. State Management
Use Pydantic models to define your state. This ensures type safety and makes it clear what data is available at each step.
- **Keep it minimal**: Store only what you need to persist between steps.
- **Use structured data**: Avoid unstructured dictionaries when possible.
### 3. Crews as Units of Work
Delegate complex tasks to Crews. A Crew should be focused on a specific goal (e.g., "Research a topic", "Write a blog post").
- **Don't over-engineer Crews**: Keep them focused.
- **Pass state explicitly**: Pass the necessary data from the Flow state to the Crew inputs.
```python
@listen(gather_input)
def run_research_crew(self):
    crew = ResearchCrew()
    result = crew.kickoff(inputs={"topic": self.state.user_input})
    self.state.research_results = result.raw
```
## Control Primitives
Leverage CrewAI's control primitives to add robustness and control to your Crews.
### 1. Task Guardrails
Use [Task Guardrails](/en/concepts/tasks#task-guardrails) to validate task outputs before they are accepted. This ensures that your agents produce high-quality results.
```python
from typing import Any, Tuple

from crewai import Task
from crewai.tasks.task_output import TaskOutput

def validate_content(result: TaskOutput) -> Tuple[bool, Any]:
    if len(result.raw) < 100:
        return (False, "Content is too short. Please expand.")
    return (True, result.raw)

task = Task(
    ...,
    guardrail=validate_content
)
```
### 2. Structured Outputs
Always use structured outputs (`output_pydantic` or `output_json`) when passing data between tasks or to your application. This prevents parsing errors and ensures type safety.
```python
from typing import List

from pydantic import BaseModel

class ResearchResult(BaseModel):
    summary: str
    sources: List[str]

task = Task(
    ...,
    output_pydantic=ResearchResult
)
```
### 3. LLM Hooks
Use [LLM Hooks](/en/learn/llm-hooks) to inspect or modify messages before they are sent to the LLM, or to sanitize responses.
```python
@before_llm_call
def log_request(context):
    print(f"Agent {context.agent.role} is calling the LLM...")
```
## Deployment Patterns
When deploying your Flow, consider the following:
### CrewAI Enterprise
The easiest way to deploy your Flow is using CrewAI Enterprise. It handles the infrastructure, authentication, and monitoring for you.
Check out the [Deployment Guide](/en/enterprise/guides/deploy-crew) to get started.
```bash
crewai deploy create
```
### Async Execution
For long-running tasks, use `kickoff_async` to avoid blocking your API.
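A minimal sketch, assuming a FastAPI service hosting the `ProductionFlow` defined earlier (the endpoint shape is illustrative):

```python
from fastapi import FastAPI

app = FastAPI()

@app.post("/kickoff")
async def kickoff(payload: dict):
    flow = ProductionFlow()
    # kickoff_async keeps the event loop free while the flow runs
    result = await flow.kickoff_async(inputs=payload)
    return {"result": str(result)}
```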
### Persistence
Use the `@persist` decorator to save the state of your Flow to a database. This allows you to resume execution if the process crashes or if you need to wait for human input.
```python
@persist
class ProductionFlow(Flow[AppState]):
    ...
```
## Summary
- **Start with a Flow.**
- **Define a clear State.**
- **Use Crews for complex tasks.**
- **Deploy with an API and persistence.**

View File

@@ -187,6 +187,97 @@ You can also deploy your crews directly through the CrewAI AOP web interface by
</Steps>
## Option 3: Redeploy Using API (CI/CD Integration)
For automated deployments in CI/CD pipelines, you can use the CrewAI API to trigger redeployments of existing crews. This is particularly useful for GitHub Actions, Jenkins, or other automation workflows.
<Steps>
<Step title="Get Your Personal Access Token">
Navigate to your CrewAI AOP account settings to generate an API token:
1. Go to [app.crewai.com](https://app.crewai.com)
2. Click on **Settings** → **Account** → **Personal Access Token**
3. Generate a new token and copy it securely
4. Store this token as a secret in your CI/CD system
</Step>
<Step title="Find Your Automation UUID">
Locate the unique identifier for your deployed crew:
1. Go to **Automations** in your CrewAI AOP dashboard
2. Select your existing automation/crew
3. Click on **Additional Details**
4. Copy the **UUID** - this identifies your specific crew deployment
</Step>
<Step title="Trigger Redeployment via API">
Use the Deploy API endpoint to trigger a redeployment:
```bash
curl -i -X POST \
-H "Authorization: Bearer YOUR_PERSONAL_ACCESS_TOKEN" \
https://app.crewai.com/crewai_plus/api/v1/crews/YOUR-AUTOMATION-UUID/deploy
# HTTP/2 200
# content-type: application/json
#
# {
# "uuid": "your-automation-uuid",
# "status": "Deploy Enqueued",
# "public_url": "https://your-crew-deployment.crewai.com",
# "token": "your-bearer-token"
# }
```
<Info>
If your automation was first created connected to Git, the API will automatically pull the latest changes from your repository before redeploying.
</Info>
</Step>
<Step title="GitHub Actions Integration Example">
Here's a GitHub Actions workflow with more complex deployment triggers:
```yaml
name: Deploy CrewAI Automation

on:
  push:
    branches: [ main ]
  pull_request:
    types: [ labeled ]
  release:
    types: [ published ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    if: |
      (github.event_name == 'push' && github.ref == 'refs/heads/main') ||
      (github.event_name == 'pull_request' && contains(github.event.pull_request.labels.*.name, 'deploy')) ||
      (github.event_name == 'release')
    steps:
      - name: Trigger CrewAI Redeployment
        run: |
          curl -X POST \
            -H "Authorization: Bearer ${{ secrets.CREWAI_PAT }}" \
            https://app.crewai.com/crewai_plus/api/v1/crews/${{ secrets.CREWAI_AUTOMATION_UUID }}/deploy
```
<Tip>
Add `CREWAI_PAT` and `CREWAI_AUTOMATION_UUID` as repository secrets. For PR deployments, add a "deploy" label to trigger the workflow.
</Tip>
</Step>
</Steps>
## ⚠️ Environment Variable Security Requirements
<Warning>

View File

@@ -62,13 +62,13 @@ Test your Gmail trigger integration locally using the CrewAI CLI:
 crewai triggers list
 # Simulate a Gmail trigger with realistic payload
-crewai triggers run gmail/new_email
+crewai triggers run gmail/new_email_received
 ```
 The `crewai triggers run` command will execute your crew with a complete Gmail payload, allowing you to test your parsing logic before deployment.
 <Warning>
-Use `crewai triggers run gmail/new_email` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
+Use `crewai triggers run gmail/new_email_received` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
 </Warning>
 ## Monitoring Executions
@@ -83,6 +83,6 @@ Track history and performance of triggered runs:
 - Ensure Gmail is connected in Tools & Integrations
 - Verify the Gmail Trigger is enabled on the Triggers tab
-- Test locally with `crewai triggers run gmail/new_email` to see the exact payload structure
+- Test locally with `crewai triggers run gmail/new_email_received` to see the exact payload structure
 - Check the execution logs and confirm the payload is passed as `crewai_trigger_payload`
 - Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -7,110 +7,89 @@ mode: "wide"
# What is CrewAI?
**CrewAI is a lean, lightning-fast Python framework built entirely from scratch—completely independent of LangChain or other agent frameworks.**
**CrewAI is the leading open-source framework for orchestrating autonomous AI agents and building complex workflows.**
CrewAI empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario:
It empowers developers to build production-ready multi-agent systems by combining the collaborative intelligence of **Crews** with the precise control of **Flows**.
- **[CrewAI Crews](/en/guides/crews/first-crew)**: Optimize for autonomy and collaborative intelligence, enabling you to create AI teams where each agent has specific roles, tools, and goals.
- **[CrewAI Flows](/en/guides/flows/first-flow)**: Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively.
- **[CrewAI Flows](/en/guides/flows/first-flow)**: The backbone of your AI application. Flows allow you to create structured, event-driven workflows that manage state and control execution. They provide the scaffolding for your AI agents to work within.
- **[CrewAI Crews](/en/guides/crews/first-crew)**: The units of work within your Flow. Crews are teams of autonomous agents that collaborate to solve specific tasks delegated to them by the Flow.
With over 100,000 developers certified through our community courses, CrewAI is rapidly becoming the standard for enterprise-ready AI automation.
With over 100,000 developers certified through our community courses, CrewAI is the standard for enterprise-ready AI automation.
## The CrewAI Architecture
## How Crews Work
CrewAI's architecture is designed to balance autonomy with control.
### 1. Flows: The Backbone
<Note>
Just like a company has departments (Sales, Engineering, Marketing) working together under leadership to achieve business goals, CrewAI helps you create an organization of AI agents with specialized roles collaborating to accomplish complex tasks.
</Note>
<Frame caption="CrewAI Framework Overview">
<img src="/images/crews.png" alt="CrewAI Framework Overview" />
</Frame>
| Component | Description | Key Features |
|:----------|:-----------:|:------------|
| **Crew** | The top-level organization | • Manages AI agent teams<br/>• Oversees workflows<br/>• Ensures collaboration<br/>• Delivers outcomes |
| **AI Agents** | Specialized team members | • Have specific roles (researcher, writer)<br/>• Use designated tools<br/>• Can delegate tasks<br/>• Make autonomous decisions |
| **Process** | Workflow management system | • Defines collaboration patterns<br/>• Controls task assignments<br/>• Manages interactions<br/>• Ensures efficient execution |
| **Tasks** | Individual assignments | • Have clear objectives<br/>• Use specific tools<br/>• Feed into larger process<br/>• Produce actionable results |
### How It All Works Together
1. The **Crew** organizes the overall operation
2. **AI Agents** work on their specialized tasks
3. The **Process** ensures smooth collaboration
4. **Tasks** get completed to achieve the goal
## Key Features
<CardGroup cols={2}>
<Card title="Role-Based Agents" icon="users">
Create specialized agents with defined roles, expertise, and goals - from researchers to analysts to writers
</Card>
<Card title="Flexible Tools" icon="screwdriver-wrench">
Equip agents with custom tools and APIs to interact with external services and data sources
</Card>
<Card title="Intelligent Collaboration" icon="people-arrows">
Agents work together, sharing insights and coordinating tasks to achieve complex objectives
</Card>
<Card title="Task Management" icon="list-check">
Define sequential or parallel workflows, with agents automatically handling task dependencies
</Card>
</CardGroup>
## How Flows Work
<Note>
While Crews excel at autonomous collaboration, Flows provide structured automations, offering granular control over workflow execution. Flows ensure tasks are executed reliably, securely, and efficiently, handling conditional logic, loops, and dynamic state management with precision. Flows integrate seamlessly with Crews, enabling you to balance high autonomy with exacting control.
Think of a Flow as the "manager" or the "process definition" of your application. It defines the steps, the logic, and how data moves through your system.
</Note>
<Frame caption="CrewAI Framework Overview">
<img src="/images/flows.png" alt="CrewAI Framework Overview" />
</Frame>
| Component | Description | Key Features |
|:----------|:-----------:|:------------|
| **Flow** | Structured workflow orchestration | • Manages execution paths<br/>• Handles state transitions<br/>• Controls task sequencing<br/>• Ensures reliable execution |
| **Events** | Triggers for workflow actions | • Initiate specific processes<br/>• Enable dynamic responses<br/>• Support conditional branching<br/>• Allow for real-time adaptation |
| **States** | Workflow execution contexts | • Maintain execution data<br/>• Enable persistence<br/>• Support resumability<br/>• Ensure execution integrity |
| **Crew Support** | Enhances workflow automation | • Injects pockets of agency when needed<br/>• Complements structured workflows<br/>• Balances automation with intelligence<br/>• Enables adaptive decision-making |
Flows provide:
- **State Management**: Persist data across steps and executions.
- **Event-Driven Execution**: Trigger actions based on events or external inputs.
- **Control Flow**: Use conditional logic, loops, and branching.
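As a minimal sketch of these three capabilities (the counter logic is illustrative, using the `Flow`, `@start`, and `@listen` primitives covered later in these docs):
```python Code
from crewai.flow.flow import Flow, start, listen
from pydantic import BaseModel

class CounterState(BaseModel):
    count: int = 0

class CounterFlow(Flow[CounterState]):
    @start()
    def begin(self):
        self.state.count += 1  # State Management: persists across steps
        return "started"

    @listen(begin)  # Event-Driven Execution: runs when begin completes
    def report(self, previous):
        # Control Flow: ordinary Python conditionals and loops
        if self.state.count > 0:
            return f"{previous} -> counted {self.state.count}"
        return "nothing counted"

result = CounterFlow().kickoff()
print(result)
```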
### Crews: The Intelligence
<Note>
Crews are the "teams" that do the heavy lifting. Within a Flow, you can trigger a Crew to tackle a complex problem requiring creativity and collaboration.
</Note>
<Frame caption="CrewAI Framework Overview">
<img src="/images/crews.png" alt="CrewAI Framework Overview" />
</Frame>
Crews provide:
- **Role-Playing Agents**: Specialized agents with specific goals and tools.
- **Autonomous Collaboration**: Agents work together to solve tasks.
- **Task Delegation**: Tasks are assigned and executed based on agent capabilities.
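For illustration, here is a minimal crew using the same `Agent`/`Task`/`Crew` pattern as the examples later in these docs (the researcher role and task are illustrative):
```python Code
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Researcher",
    goal="Research and summarize topics",
    backstory="You are an expert researcher.",
)

research_task = Task(
    description="Research the topic: {topic}",
    agent=researcher,
    expected_output="A comprehensive summary of the topic.",
)

research_crew = Crew(agents=[researcher], tasks=[research_task])
result = research_crew.kickoff(inputs={"topic": "AI trends"})
```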
## How It All Works Together
1. **The Flow** triggers an event or starts a process.
2. **The Flow** manages the state and decides what to do next.
3. **The Flow** delegates a complex task to a **Crew**.
4. **The Crew**'s agents collaborate to complete the task.
5. **The Crew** returns the result to the **Flow**.
6. **The Flow** continues execution based on the result, as sketched below.
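A minimal sketch of that loop, reusing the `research_crew` defined above (the routing and wrap-up logic are illustrative):
```python Code
from crewai.flow.flow import Flow, start, listen

class ResearchPipeline(Flow):
    @start()
    def pick_topic(self):
        # Steps 1-2: the flow starts and decides what to do next
        return "AI trends"

    @listen(pick_topic)
    def run_research(self, topic):
        # Steps 3-5: delegate to the crew and receive its result
        output = research_crew.kickoff(inputs={"topic": topic})
        return output.raw

    @listen(run_research)
    def wrap_up(self, summary):
        # Step 6: the flow continues based on the result
        return f"Summary ready ({len(summary)} characters)"

final = ResearchPipeline().kickoff()
```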
## Key Features
<CardGroup cols={2}>
<Card title="Event-Driven Orchestration" icon="bolt">
Define precise execution paths responding dynamically to events
<Card title="Production-Grade Flows" icon="arrow-progress">
Build reliable, stateful workflows that can handle long-running processes and complex logic.
</Card>
<Card title="Fine-Grained Control" icon="sliders">
Manage workflow states and conditional execution securely and efficiently
<Card title="Autonomous Crews" icon="users">
Deploy teams of agents that can plan, execute, and collaborate to achieve high-level goals.
</Card>
<Card title="Native Crew Integration" icon="puzzle-piece">
Effortlessly combine with Crews for enhanced autonomy and intelligence
<Card title="Flexible Tools" icon="screwdriver-wrench">
Connect your agents to any API, database, or local tool.
</Card>
<Card title="Deterministic Execution" icon="route">
Ensure predictable outcomes with explicit control flow and error handling
<Card title="Enterprise Security" icon="lock">
Designed with security and compliance in mind for enterprise deployments.
</Card>
</CardGroup>
## When to Use Crews vs. Flows
<Note>
Understanding when to use [Crews](/en/guides/crews/first-crew) versus [Flows](/en/guides/flows/first-flow) is key to maximizing the potential of CrewAI in your applications.
</Note>
**The short answer: Use both.**
| Use Case | Recommended Approach | Why? |
|:---------|:---------------------|:-----|
| **Open-ended research** | [Crews](/en/guides/crews/first-crew) | When tasks require creative thinking, exploration, and adaptation |
| **Content generation** | [Crews](/en/guides/crews/first-crew) | For collaborative creation of articles, reports, or marketing materials |
| **Decision workflows** | [Flows](/en/guides/flows/first-flow) | When you need predictable, auditable decision paths with precise control |
| **API orchestration** | [Flows](/en/guides/flows/first-flow) | For reliable integration with multiple external services in a specific sequence |
| **Hybrid applications** | Combined approach | Use [Flows](/en/guides/flows/first-flow) to orchestrate overall process with [Crews](/en/guides/crews/first-crew) handling complex subtasks |
For any production-ready application, **start with a Flow**.
### Decision Framework
- **Use a Flow** to define the overall structure, state, and logic of your application.
- **Use a Crew** within a Flow step when you need a team of agents to perform a specific, complex task that requires autonomy.
- **Choose [Crews](/en/guides/crews/first-crew) when:** You need autonomous problem-solving, creative collaboration, or exploratory tasks
- **Choose [Flows](/en/guides/flows/first-flow) when:** You require deterministic outcomes, auditability, or precise control over execution
- **Combine both when:** Your application needs both structured processes and pockets of autonomous intelligence
| Use Case | Architecture |
| :--- | :--- |
| **Simple Automation** | Single Flow with Python tasks |
| **Complex Research** | Flow managing state -> Crew performing research |
| **Application Backend** | Flow handling API requests -> Crew generating content -> Flow saving to DB |
## Why Choose CrewAI?
@@ -124,13 +103,6 @@ With over 100,000 developers certified through our community courses, CrewAI is
## Ready to Start Building?
<CardGroup cols={2}>
<Card
title="Build Your First Flow"
icon="diagram-project"
href="/en/guides/flows/first-flow"
>
Learn how to create structured, event-driven workflows with precise control over execution.
</Card>
<Card
title="Build Your First Crew"
icon="users-gear"
href="/en/guides/crews/first-crew"
>
Step-by-step tutorial to create a collaborative AI team that works together to solve complex problems.
</Card>
</CardGroup>
View File
@@ -0,0 +1,581 @@
---
title: Human Feedback in Flows
description: Learn how to integrate human feedback directly into your CrewAI Flows using the @human_feedback decorator
icon: user-check
mode: "wide"
---
## Overview
The `@human_feedback` decorator enables human-in-the-loop (HITL) workflows directly within CrewAI Flows. It allows you to pause flow execution, present output to a human for review, collect their feedback, and optionally route to different listeners based on the feedback outcome.
This is particularly valuable for:
- **Quality assurance**: Review AI-generated content before it's used downstream
- **Decision gates**: Let humans make critical decisions in automated workflows
- **Approval workflows**: Implement approve/reject/revise patterns
- **Interactive refinement**: Collect feedback to improve outputs iteratively
```mermaid
flowchart LR
A[Flow Method] --> B[Output Generated]
B --> C[Human Reviews]
C --> D{Feedback}
D -->|emit specified| E[LLM Collapses to Outcome]
D -->|no emit| F[HumanFeedbackResult]
E --> G["@listen('approved')"]
E --> H["@listen('rejected')"]
F --> I[Next Listener]
```
## Quick Start
Here's the simplest way to add human feedback to a flow:
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback

class SimpleReviewFlow(Flow):
    @start()
    @human_feedback(message="Please review this content:")
    def generate_content(self):
        return "This is AI-generated content that needs review."

    @listen(generate_content)
    def process_feedback(self, result):
        print(f"Content: {result.output}")
        print(f"Human said: {result.feedback}")

flow = SimpleReviewFlow()
flow.kickoff()
```
When this flow runs, it will:
1. Execute `generate_content` and return the string
2. Display the output to the user with the request message
3. Wait for the user to type feedback (or press Enter to skip)
4. Pass a `HumanFeedbackResult` object to `process_feedback`
## The @human_feedback Decorator
### Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `message` | `str` | Yes | The message shown to the human alongside the method output |
| `emit` | `Sequence[str]` | No | List of possible outcomes. Feedback is collapsed to one of these, which triggers `@listen` decorators |
| `llm` | `str \| BaseLLM` | When `emit` specified | LLM used to interpret feedback and map to an outcome |
| `default_outcome` | `str` | No | Outcome to use if no feedback provided. Must be in `emit` |
| `metadata` | `dict` | No | Additional data for enterprise integrations |
| `provider` | `HumanFeedbackProvider` | No | Custom provider for async/non-blocking feedback. See [Async Human Feedback](#async-human-feedback-non-blocking) |
### Basic Usage (No Routing)
When you don't specify `emit`, the decorator simply collects feedback and passes a `HumanFeedbackResult` to the next listener:
```python Code
@start()
@human_feedback(message="What do you think of this analysis?")
def analyze_data(self):
    return "Analysis results: Revenue up 15%, costs down 8%"

@listen(analyze_data)
def handle_feedback(self, result):
    # result is a HumanFeedbackResult
    print(f"Analysis: {result.output}")
    print(f"Feedback: {result.feedback}")
```
### Routing with emit
When you specify `emit`, the decorator becomes a router. The human's free-form feedback is interpreted by an LLM and collapsed into one of the specified outcomes:
```python Code
@start()
@human_feedback(
    message="Do you approve this content for publication?",
    emit=["approved", "rejected", "needs_revision"],
    llm="gpt-4o-mini",
    default_outcome="needs_revision",
)
def review_content(self):
    return "Draft blog post content here..."

@listen("approved")
def publish(self, result):
    print(f"Publishing! User said: {result.feedback}")

@listen("rejected")
def discard(self, result):
    print(f"Discarding. Reason: {result.feedback}")

@listen("needs_revision")
def revise(self, result):
    print(f"Revising based on: {result.feedback}")
```
<Tip>
The LLM uses structured outputs (function calling) when available to guarantee the response is one of your specified outcomes. This makes routing reliable and predictable.
</Tip>
## HumanFeedbackResult
The `HumanFeedbackResult` dataclass contains all information about a human feedback interaction:
```python Code
from crewai.flow.human_feedback import HumanFeedbackResult

# Shape of the dataclass, shown for reference:
@dataclass
class HumanFeedbackResult:
    output: Any          # The original method output shown to the human
    feedback: str        # The raw feedback text from the human
    outcome: str | None  # The collapsed outcome (if emit was specified)
    timestamp: datetime  # When the feedback was received
    method_name: str     # Name of the decorated method
    metadata: dict       # Any metadata passed to the decorator
```
### Accessing in Listeners
When a listener is triggered by a `@human_feedback` method with `emit`, it receives the `HumanFeedbackResult`:
```python Code
@listen("approved")
def on_approval(self, result: HumanFeedbackResult):
    print(f"Original output: {result.output}")
    print(f"User feedback: {result.feedback}")
    print(f"Outcome: {result.outcome}")  # "approved"
    print(f"Received at: {result.timestamp}")
```
## Accessing Feedback History
The `Flow` class provides two attributes for accessing human feedback:
### last_human_feedback
Returns the most recent `HumanFeedbackResult`:
```python Code
@listen(some_method)
def check_feedback(self):
    if self.last_human_feedback:
        print(f"Last feedback: {self.last_human_feedback.feedback}")
```
### human_feedback_history
A list of all `HumanFeedbackResult` objects collected during the flow:
```python Code
@listen(final_step)
def summarize(self):
    print(f"Total feedback collected: {len(self.human_feedback_history)}")
    for i, fb in enumerate(self.human_feedback_history):
        print(f"{i+1}. {fb.method_name}: {fb.outcome or 'no routing'}")
```
<Warning>
Each `HumanFeedbackResult` is appended to `human_feedback_history`, so multiple feedback steps won't overwrite each other. Use this list to access all feedback collected during the flow.
</Warning>
## Complete Example: Content Approval Workflow
Here's a full example implementing a content review and approval workflow:
<CodeGroup>
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult
from pydantic import BaseModel

class ContentState(BaseModel):
    topic: str = ""
    draft: str = ""
    final_content: str = ""
    revision_count: int = 0

class ContentApprovalFlow(Flow[ContentState]):
    """A flow that generates content and gets human approval."""

    @start()
    def get_topic(self):
        self.state.topic = input("What topic should I write about? ")
        return self.state.topic

    @listen(get_topic)
    def generate_draft(self, topic):
        # In real use, this would call an LLM
        self.state.draft = f"# {topic}\n\nThis is a draft about {topic}..."
        return self.state.draft

    @listen(generate_draft)
    @human_feedback(
        message="Please review this draft. Reply 'approved', 'rejected', or provide revision feedback:",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def review_draft(self, draft):
        return draft

    @listen("approved")
    def publish_content(self, result: HumanFeedbackResult):
        self.state.final_content = result.output
        print("\n✅ Content approved and published!")
        print(f"Reviewer comment: {result.feedback}")
        return "published"

    @listen("rejected")
    def handle_rejection(self, result: HumanFeedbackResult):
        print("\n❌ Content rejected")
        print(f"Reason: {result.feedback}")
        return "rejected"

    @listen("needs_revision")
    def revise_content(self, result: HumanFeedbackResult):
        self.state.revision_count += 1
        print(f"\n📝 Revision #{self.state.revision_count} requested")
        print(f"Feedback: {result.feedback}")
        # In a real flow, you might loop back to generate_draft
        # For this example, we just acknowledge
        return "revision_requested"

# Run the flow
flow = ContentApprovalFlow()
result = flow.kickoff()
print(f"\nFlow completed. Revisions requested: {flow.state.revision_count}")
```
```text Output
What topic should I write about? AI Safety
==================================================
OUTPUT FOR REVIEW:
==================================================
# AI Safety
This is a draft about AI Safety...
==================================================
Please review this draft. Reply 'approved', 'rejected', or provide revision feedback:
(Press Enter to skip, or type your feedback)
Your feedback: Looks good, approved!
✅ Content approved and published!
Reviewer comment: Looks good, approved!
Flow completed. Revisions requested: 0
```
</CodeGroup>
## Combining with Other Decorators
The `@human_feedback` decorator works with other flow decorators. Place it as the innermost decorator (closest to the function):
```python Code
# Correct: @human_feedback is innermost (closest to the function)
@start()
@human_feedback(message="Review this:")
def my_start_method(self):
    return "content"

@listen(other_method)
@human_feedback(message="Review this too:")
def my_listener(self, data):
    return f"processed: {data}"
```
<Tip>
Place `@human_feedback` as the innermost decorator (last/closest to the function) so it wraps the method directly and can capture the return value before passing to the flow system.
</Tip>
## Best Practices
### 1. Write Clear Request Messages
The `message` parameter is what the human sees. Make it actionable:
```python Code
# ✅ Good - clear and actionable
@human_feedback(message="Does this summary accurately capture the key points? Reply 'yes' or explain what's missing:")
# ❌ Bad - vague
@human_feedback(message="Review this:")
```
### 2. Choose Meaningful Outcomes
When using `emit`, pick outcomes that map naturally to human responses:
```python Code
# ✅ Good - natural language outcomes
emit=["approved", "rejected", "needs_more_detail"]
# ❌ Bad - technical or unclear
emit=["state_1", "state_2", "state_3"]
```
### 3. Always Provide a Default Outcome
Use `default_outcome` to handle cases where users press Enter without typing:
```python Code
@human_feedback(
    message="Approve? (press Enter to request revision)",
    emit=["approved", "needs_revision"],
    llm="gpt-4o-mini",
    default_outcome="needs_revision",  # Safe default
)
```
### 4. Use Feedback History for Audit Trails
Access `human_feedback_history` to create audit logs:
```python Code
@listen(final_step)
def create_audit_log(self):
    log = []
    for fb in self.human_feedback_history:
        log.append({
            "step": fb.method_name,
            "outcome": fb.outcome,
            "feedback": fb.feedback,
            "timestamp": fb.timestamp.isoformat(),
        })
    return log
```
### 5. Handle Both Routed and Non-Routed Feedback
When designing flows, consider whether you need routing:
| Scenario | Use |
|----------|-----|
| Simple review, just need the feedback text | No `emit` |
| Need to branch to different paths based on response | Use `emit` |
| Approval gates with approve/reject/revise | Use `emit` |
| Collecting comments for logging only | No `emit` |
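Both patterns can live in the same flow. A minimal sketch (the outline content and outcomes are illustrative):
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback

class MixedFeedbackFlow(Flow):
    @start()
    @human_feedback(message="Any comments on this outline?")  # no emit: just collect text
    def draft_outline(self):
        return "1. Intro  2. Findings  3. Conclusion"

    @listen(draft_outline)
    @human_feedback(
        message="Approve the outline?",
        emit=["approved", "needs_revision"],  # emit: route on the answer
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def review_outline(self, result):
        return result.output  # pass the outline along for the approval gate

    @listen("approved")
    def proceed(self, result):
        print(f"Approved with comment: {result.feedback}")

    @listen("needs_revision")
    def rework(self, result):
        print(f"Revision requested: {result.feedback}")
```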
## Async Human Feedback (Non-Blocking)
By default, `@human_feedback` blocks execution waiting for console input. For production applications, you may need **async/non-blocking** feedback that integrates with external systems like Slack, email, webhooks, or APIs.
### The Provider Abstraction
Use the `provider` parameter to specify a custom feedback collection strategy:
```python Code
from crewai.flow import Flow, start, listen, human_feedback, HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext

class WebhookProvider(HumanFeedbackProvider):
    """Provider that pauses flow and waits for webhook callback."""

    def __init__(self, webhook_url: str):
        self.webhook_url = webhook_url

    def request_feedback(self, context: PendingFeedbackContext, flow: Flow) -> str:
        # Notify external system (e.g., send Slack message, create ticket)
        self.send_notification(context)
        # Pause execution - framework handles persistence automatically
        raise HumanFeedbackPending(
            context=context,
            callback_info={"webhook_url": f"{self.webhook_url}/{context.flow_id}"}
        )

class ReviewFlow(Flow):
    @start()
    @human_feedback(
        message="Review this content:",
        emit=["approved", "rejected"],
        llm="gpt-4o-mini",
        provider=WebhookProvider("https://myapp.com/api"),
    )
    def generate_content(self):
        return "AI-generated content..."

    @listen("approved")
    def publish(self, result):
        return "Published!"
```
<Tip>
The flow framework **automatically persists state** when `HumanFeedbackPending` is raised. Your provider only needs to notify the external system and raise the exception—no manual persistence calls required.
</Tip>
### Handling Paused Flows
When using an async provider, `kickoff()` returns a `HumanFeedbackPending` object instead of raising an exception:
```python Code
flow = ReviewFlow()
result = flow.kickoff()

if isinstance(result, HumanFeedbackPending):
    # Flow is paused, state is automatically persisted
    print(f"Waiting for feedback at: {result.callback_info['webhook_url']}")
    print(f"Flow ID: {result.context.flow_id}")
else:
    # Normal completion
    print(f"Flow completed: {result}")
```
### Resuming a Paused Flow
When feedback arrives (e.g., via webhook), resume the flow:
```python Code
# Sync handler:
def handle_feedback_webhook(flow_id: str, feedback: str):
    flow = ReviewFlow.from_pending(flow_id)
    result = flow.resume(feedback)
    return result

# Async handler (FastAPI, aiohttp, etc.):
async def handle_feedback_webhook(flow_id: str, feedback: str):
    flow = ReviewFlow.from_pending(flow_id)
    result = await flow.resume_async(feedback)
    return result
```
### Key Types
| Type | Description |
|------|-------------|
| `HumanFeedbackProvider` | Protocol for custom feedback providers |
| `PendingFeedbackContext` | Contains all info needed to resume a paused flow |
| `HumanFeedbackPending` | Returned by `kickoff()` when flow is paused for feedback |
| `ConsoleProvider` | Default blocking console input provider |
### PendingFeedbackContext
The context contains everything needed to resume:
```python Code
@dataclass
class PendingFeedbackContext:
    flow_id: str            # Unique identifier for this flow execution
    flow_class: str         # Fully qualified class name
    method_name: str        # Method that triggered feedback
    method_output: Any      # Output shown to the human
    message: str            # The request message
    emit: list[str] | None  # Possible outcomes for routing
    default_outcome: str | None
    metadata: dict          # Custom metadata
    llm: str | None         # LLM for outcome collapsing
    requested_at: datetime
```
### Complete Async Flow Example
```python Code
from crewai.flow import (
    Flow, start, listen, human_feedback,
    HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext
)

class SlackNotificationProvider(HumanFeedbackProvider):
    """Provider that sends Slack notifications and pauses for async feedback."""

    def __init__(self, channel: str):
        self.channel = channel

    def request_feedback(self, context: PendingFeedbackContext, flow: Flow) -> str:
        # Send Slack notification (implement your own)
        slack_thread_id = self.post_to_slack(
            channel=self.channel,
            message=f"Review needed:\n\n{context.method_output}\n\n{context.message}",
        )
        # Pause execution - framework handles persistence automatically
        raise HumanFeedbackPending(
            context=context,
            callback_info={
                "slack_channel": self.channel,
                "thread_id": slack_thread_id,
            }
        )

class ContentPipeline(Flow):
    @start()
    @human_feedback(
        message="Approve this content for publication?",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
        provider=SlackNotificationProvider("#content-reviews"),
    )
    def generate_content(self):
        return "AI-generated blog post content..."

    @listen("approved")
    def publish(self, result):
        print(f"Publishing! Reviewer said: {result.feedback}")
        return {"status": "published"}

    @listen("rejected")
    def archive(self, result):
        print(f"Archived. Reason: {result.feedback}")
        return {"status": "archived"}

    @listen("needs_revision")
    def queue_revision(self, result):
        print(f"Queued for revision: {result.feedback}")
        return {"status": "revision_needed"}

# Starting the flow (will pause and wait for Slack response)
def start_content_pipeline():
    flow = ContentPipeline()
    result = flow.kickoff()
    if isinstance(result, HumanFeedbackPending):
        return {"status": "pending", "flow_id": result.context.flow_id}
    return result

# Resuming when Slack webhook fires (sync handler)
def on_slack_feedback(flow_id: str, slack_message: str):
    flow = ContentPipeline.from_pending(flow_id)
    result = flow.resume(slack_message)
    return result

# If your handler is async (FastAPI, aiohttp, Slack Bolt async, etc.)
async def on_slack_feedback_async(flow_id: str, slack_message: str):
    flow = ContentPipeline.from_pending(flow_id)
    result = await flow.resume_async(slack_message)
    return result
```
<Warning>
If you're using an async web framework (FastAPI, aiohttp, Slack Bolt async mode), use `await flow.resume_async()` instead of `flow.resume()`. Calling `resume()` from within a running event loop will raise a `RuntimeError`.
</Warning>
### Best Practices for Async Feedback
1. **Check the return type**: `kickoff()` returns `HumanFeedbackPending` when paused—no try/except needed
2. **Use the right resume method**: Use `resume()` in sync code, `await resume_async()` in async code
3. **Store callback info**: Use `callback_info` to store webhook URLs, ticket IDs, etc.
4. **Implement idempotency**: Your resume handler should be idempotent for safety (see the sketch after this list)
5. **Automatic persistence**: State is automatically saved when `HumanFeedbackPending` is raised and uses `SQLiteFlowPersistence` by default
6. **Custom persistence**: Pass a custom persistence instance to `from_pending()` if needed
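As a sketch of point 4, an idempotent resume handler can track which flow IDs it has already processed (the in-memory set is illustrative; a production handler would use a durable store):
```python Code
# A minimal sketch of an idempotent resume handler.
# The in-memory set is illustrative; use a durable store in production.
processed_flow_ids: set[str] = set()

def handle_feedback_webhook(flow_id: str, feedback: str):
    if flow_id in processed_flow_ids:
        # Duplicate delivery: skip the resume instead of running it twice
        return {"status": "already_processed", "flow_id": flow_id}
    flow = ReviewFlow.from_pending(flow_id)
    result = flow.resume(feedback)
    processed_flow_ids.add(flow_id)
    return result
```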
## Related Documentation
- [Flows Overview](/en/concepts/flows) - Learn about CrewAI Flows
- [Flow State Management](/en/guides/flows/mastering-flow-state) - Managing state in flows
- [Flow Persistence](/en/concepts/flows#persistence) - Persisting flow state
- [Routing with @router](/en/concepts/flows#router) - More about conditional routing
- [Human Input on Execution](/en/learn/human-input-on-execution) - Task-level human input
View File
@@ -5,9 +5,22 @@ icon: "user-check"
mode: "wide"
---
Human-in-the-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. CrewAI provides multiple ways to implement HITL depending on your needs.
## Choosing Your HITL Approach
CrewAI offers two main approaches for implementing human-in-the-loop workflows:
| Approach | Best For | Integration |
|----------|----------|-------------|
| **Flow-based** (`@human_feedback` decorator) | Local development, console-based review, synchronous workflows | [Human Feedback in Flows](/en/learn/human-feedback-in-flows) |
| **Webhook-based** (Enterprise) | Production deployments, async workflows, external integrations (Slack, Teams, etc.) | This guide |
<Tip>
If you're building flows and want to add human review steps with routing based on feedback, check out the [Human Feedback in Flows](/en/learn/human-feedback-in-flows) guide for the `@human_feedback` decorator.
</Tip>
## Setting Up Webhook-Based HITL Workflows
<Steps>
<Step title="Configure Your Task">
View File
@@ -7,17 +7,28 @@ mode: "wide"
## Introduction
CrewAI provides the ability to kickoff a crew asynchronously, allowing you to start the crew execution in a non-blocking manner.
This feature is particularly useful when you want to run multiple crews concurrently or when you need to perform other tasks while the crew is executing.
## Asynchronous Crew Execution
CrewAI offers two approaches for async execution:
| Method | Type | Description |
|--------|------|-------------|
| `akickoff()` | Native async | True async/await throughout the entire execution chain |
| `kickoff_async()` | Thread-based | Wraps synchronous execution in `asyncio.to_thread` |
<Note>
For high-concurrency workloads, `akickoff()` is recommended as it uses native async for task execution, memory operations, and knowledge retrieval.
</Note>
## Native Async Execution with `akickoff()`
The `akickoff()` method provides true native async execution, using async/await throughout the entire execution chain including task execution, memory operations, and knowledge queries.
### Method Signature
```python Code
async def akickoff(self, inputs: dict) -> CrewOutput:
```
### Parameters
- `inputs` (dict): A dictionary containing the input data required for the tasks.
### Returns
- `CrewOutput`: An object representing the result of the crew execution.
## Potential Use Cases
- **Parallel Content Generation**: Kickoff multiple independent crews asynchronously, each responsible for generating content on different topics. For example, one crew might research and draft an article on AI trends, while another crew generates social media posts about a new product launch. Each crew operates independently, allowing content production to scale efficiently.
- **Concurrent Market Research Tasks**: Launch multiple crews asynchronously to conduct market research in parallel. One crew might analyze industry trends, while another examines competitor strategies, and yet another evaluates consumer sentiment. Each crew independently completes its task, enabling faster and more comprehensive insights.
- **Independent Travel Planning Modules**: Execute separate crews to independently plan different aspects of a trip. One crew might handle flight options, another handles accommodation, and a third plans activities. Each crew works asynchronously, allowing various components of the trip to be planned simultaneously and independently for faster results.
## Example: Single Asynchronous Crew Execution
### Example: Native Async Crew Execution
Here's an example of how to kickoff a crew asynchronously with `akickoff()`, using asyncio and awaiting the result:
```python Code
import asyncio
from crewai import Crew, Agent, Task

# Create an agent
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

# Create a task
data_analysis_task = Task(
    description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

# Create a crew
analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task]
)

# Native async execution
async def main():
    result = await analysis_crew.akickoff(inputs={"ages": [25, 30, 35, 40, 45]})
    print("Crew Result:", result)

asyncio.run(main())
```
### Example: Multiple Native Async Crews
Run multiple crews concurrently using `asyncio.gather()` with native async:
```python Code
import asyncio
from crewai import Crew, Agent, Task

coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

task_1 = Task(
    description="Analyze the first dataset and calculate the average age. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

task_2 = Task(
    description="Analyze the second dataset and calculate the average age. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

crew_1 = Crew(agents=[coding_agent], tasks=[task_1])
crew_2 = Crew(agents=[coding_agent], tasks=[task_2])

async def main():
    results = await asyncio.gather(
        crew_1.akickoff(inputs={"ages": [25, 30, 35, 40, 45]}),
        crew_2.akickoff(inputs={"ages": [20, 22, 24, 28, 30]})
    )
    for i, result in enumerate(results, 1):
        print(f"Crew {i} Result:", result)

asyncio.run(main())
```
### Example: Native Async for Multiple Inputs
Use `akickoff_for_each()` to execute your crew against multiple inputs concurrently with native async:
```python Code
import asyncio
from crewai import Crew, Agent, Task

coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

data_analysis_task = Task(
    description="Analyze the dataset and calculate the average age. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task]
)

async def main():
    datasets = [
        {"ages": [25, 30, 35, 40, 45]},
        {"ages": [20, 22, 24, 28, 30]},
        {"ages": [30, 35, 40, 45, 50]}
    ]
    results = await analysis_crew.akickoff_for_each(datasets)
    for i, result in enumerate(results, 1):
        print(f"Dataset {i} Result:", result)

asyncio.run(main())
```
## Thread-Based Async with `kickoff_async()`
The `kickoff_async()` method provides async execution by wrapping the synchronous `kickoff()` in a thread. This is useful for simpler async integration or backward compatibility.
### Method Signature
```python Code
async def kickoff_async(self, inputs: dict) -> CrewOutput:
```
### Parameters
- `inputs` (dict): A dictionary containing the input data required for the tasks.
### Returns
- `CrewOutput`: An object representing the result of the crew execution.
### Example: Thread-Based Async Execution
```python Code
import asyncio
from crewai import Crew, Agent, Task

coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

data_analysis_task = Task(
    description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task]
)

async def async_crew_execution():
    result = await analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
    print("Crew Result:", result)

asyncio.run(async_crew_execution())
```
### Example: Multiple Thread-Based Async Crews
```python Code
import asyncio
from crewai import Crew, Agent, Task

# Create an agent with code execution enabled
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

# Create tasks that require code execution
task_1 = Task(
    description="Analyze the first dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

task_2 = Task(
    description="Analyze the second dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

# Create two crews and add tasks
crew_1 = Crew(agents=[coding_agent], tasks=[task_1])
crew_2 = Crew(agents=[coding_agent], tasks=[task_2])

# Async function to kickoff multiple crews asynchronously and wait for all to finish
async def async_multiple_crews():
    # Create coroutines for concurrent execution
    result_1 = crew_1.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
    result_2 = crew_2.kickoff_async(inputs={"ages": [20, 22, 24, 28, 30]})
    # Wait for both crews to finish
    results = await asyncio.gather(result_1, result_2)
    for i, result in enumerate(results, 1):
        print(f"Crew {i} Result:", result)

# Run the async function
asyncio.run(async_multiple_crews())
```
## Async Streaming
Both async methods support streaming when `stream=True` is set on the crew:
```python Code
import asyncio
from crewai import Crew, Agent, Task

agent = Agent(
    role="Researcher",
    goal="Research and summarize topics",
    backstory="You are an expert researcher."
)

task = Task(
    description="Research the topic: {topic}",
    agent=agent,
    expected_output="A comprehensive summary of the topic."
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    stream=True  # Enable streaming
)

async def main():
    streaming_output = await crew.akickoff(inputs={"topic": "AI trends in 2024"})

    # Async iteration over streaming chunks
    async for chunk in streaming_output:
        print(f"Chunk: {chunk.content}")

    # Access final result after streaming completes
    result = streaming_output.result
    print(f"Final result: {result.raw}")

asyncio.run(main())
```
## Potential Use Cases
- **Parallel Content Generation**: Kickoff multiple independent crews asynchronously, each responsible for generating content on different topics. For example, one crew might research and draft an article on AI trends, while another crew generates social media posts about a new product launch.
- **Concurrent Market Research Tasks**: Launch multiple crews asynchronously to conduct market research in parallel. One crew might analyze industry trends, while another examines competitor strategies, and yet another evaluates consumer sentiment.
- **Independent Travel Planning Modules**: Execute separate crews to independently plan different aspects of a trip. One crew might handle flight options, another handles accommodation, and a third plans activities.
## Choosing Between `akickoff()` and `kickoff_async()`
| Feature | `akickoff()` | `kickoff_async()` |
|---------|--------------|-------------------|
| Execution model | Native async/await | Thread-based wrapper |
| Task execution | Async with `aexecute_sync()` | Sync in thread pool |
| Memory operations | Async | Sync in thread pool |
| Knowledge retrieval | Async | Sync in thread pool |
| Best for | High-concurrency, I/O-bound workloads | Simple async integration |
| Streaming support | Yes | Yes |
View File
@@ -95,7 +95,11 @@ print(f"Final result: {streaming.result.raw}")
## Asynchronous Streaming
For async applications, you can use either `akickoff()` (native async) or `kickoff_async()` (thread-based) with async iteration:
### Native Async with `akickoff()`
The `akickoff()` method provides true native async execution throughout the entire chain:
```python Code
import asyncio

async def stream_crew():
    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True
    )

    # Start native async streaming
    streaming = await crew.akickoff(inputs={"topic": "AI"})

    # Async iteration over chunks
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)

    # Access final result
    result = streaming.result
    print(f"\n\nFinal output: {result.raw}")

asyncio.run(stream_crew())
```
### Thread-Based Async with `kickoff_async()`
For simpler async integration or backward compatibility:
```python Code
import asyncio

async def stream_crew():
    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True
    )

    # Start thread-based async streaming
    streaming = await crew.kickoff_async(inputs={"topic": "AI"})

    # Async iteration over chunks
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)

    # Access final result
    result = streaming.result
    print(f"\n\nFinal output: {result.raw}")

asyncio.run(stream_crew())
```
<Note>
For high-concurrency workloads, `akickoff()` is recommended as it uses native async for task execution, memory operations, and knowledge retrieval. See the [Kickoff Crew Asynchronously](/en/learn/kickoff-async) guide for more details.
</Note>
## Streaming with kickoff_for_each
When executing a crew for multiple inputs with `kickoff_for_each()`, streaming works differently depending on whether you use sync or async:
View File
@@ -0,0 +1,367 @@
---
title: Merge Agent Handler Tool
description: Enables CrewAI agents to securely access third-party integrations like Linear, GitHub, Slack, and more through Merge's Agent Handler platform
icon: diagram-project
mode: "wide"
---
# `MergeAgentHandlerTool`
The `MergeAgentHandlerTool` enables CrewAI agents to securely access third-party integrations through [Merge's Agent Handler](https://www.merge.dev/products/merge-agent-handler) platform. Agent Handler provides pre-built, secure connectors to popular tools like Linear, GitHub, Slack, Notion, and hundreds more—all with built-in authentication, permissions, and monitoring.
## Installation
```bash
uv pip install 'crewai[tools]'
```
## Requirements
- Merge Agent Handler account with a configured Tool Pack
- Agent Handler API key
- At least one registered user linked to your Tool Pack
- Third-party integrations configured in your Tool Pack
## Getting Started with Agent Handler
1. **Sign up** for a Merge Agent Handler account at [ah.merge.dev/signup](https://ah.merge.dev/signup)
2. **Create a Tool Pack** and configure the integrations you need
3. **Register users** who will authenticate with the third-party services
4. **Get your API key** from the Agent Handler dashboard
5. **Set environment variable**: `export AGENT_HANDLER_API_KEY='your-key-here'`
6. **Start building** with the MergeAgentHandlerTool in CrewAI
## Notes
- Tool Pack IDs and Registered User IDs can be found in your Agent Handler dashboard or created via API
- The tool uses the Model Context Protocol (MCP) for communication with Agent Handler
- Session IDs are automatically generated but can be customized for context persistence
- All tool calls are logged and auditable through the Agent Handler platform
- Tool parameters are dynamically discovered from the Agent Handler API and validated automatically
## Usage
### Single Tool Usage
Here's how to use a specific tool from your Tool Pack:
```python {2, 4-9}
from crewai import Agent, Task, Crew
from crewai_tools import MergeAgentHandlerTool
# Create a tool for Linear issue creation
linear_create_tool = MergeAgentHandlerTool.from_tool_name(
    tool_name="linear__create_issue",
    tool_pack_id="134e0111-0f67-44f6-98f0-597000290bb3",
    registered_user_id="91b2b905-e866-40c8-8be2-efe53827a0aa"
)
# Create a CrewAI agent that uses the tool
project_manager = Agent(
    role='Project Manager',
    goal='Manage project tasks and issues efficiently',
    backstory='I am an expert at tracking project work and creating actionable tasks.',
    tools=[linear_create_tool],
    verbose=True
)
# Create a task for the agent
create_issue_task = Task(
    description="Create a new high-priority issue in Linear titled 'Implement user authentication' with a detailed description of the requirements.",
    agent=project_manager,
    expected_output="Confirmation that the issue was created with its ID"
)
# Create a crew with the agent
crew = Crew(
    agents=[project_manager],
    tasks=[create_issue_task],
    verbose=True
)
# Run the crew
result = crew.kickoff()
print(result)
```
### Loading Multiple Tools from a Tool Pack
You can load all available tools from your Tool Pack at once:
```python {2, 4-8}
from crewai import Agent, Task, Crew
from crewai_tools import MergeAgentHandlerTool
# Load all tools from the Tool Pack
tools = MergeAgentHandlerTool.from_tool_pack(
    tool_pack_id="134e0111-0f67-44f6-98f0-597000290bb3",
    registered_user_id="91b2b905-e866-40c8-8be2-efe53827a0aa"
)
# Create an agent with access to all tools
automation_expert = Agent(
    role='Automation Expert',
    goal='Automate workflows across multiple platforms',
    backstory='I can work with any tool in the toolbox to get things done.',
    tools=tools,
    verbose=True
)
automation_task = Task(
    description="Check for any high-priority issues in Linear and post a summary to Slack.",
    agent=automation_expert
)
crew = Crew(
    agents=[automation_expert],
    tasks=[automation_task],
    verbose=True
)
result = crew.kickoff()
```
### Loading Specific Tools Only
Load only the tools you need:
```python {2, 4-10}
from crewai import Agent, Task, Crew
from crewai_tools import MergeAgentHandlerTool
# Load specific tools from the Tool Pack
selected_tools = MergeAgentHandlerTool.from_tool_pack(
    tool_pack_id="134e0111-0f67-44f6-98f0-597000290bb3",
    registered_user_id="91b2b905-e866-40c8-8be2-efe53827a0aa",
    tool_names=["linear__create_issue", "linear__get_issues", "slack__post_message"]
)
developer_assistant = Agent(
    role='Developer Assistant',
    goal='Help developers track and communicate about their work',
    backstory='I help developers stay organized and keep the team informed.',
    tools=selected_tools,
    verbose=True
)
daily_update_task = Task(
    description="Get all issues assigned to the current user in Linear and post a summary to the #dev-updates Slack channel.",
    agent=developer_assistant
)
crew = Crew(
    agents=[developer_assistant],
    tasks=[daily_update_task],
    verbose=True
)
result = crew.kickoff()
```
## Tool Arguments
### `from_tool_name()` Method
| Argument | Type | Required | Default | Description |
|:---------|:-----|:---------|:--------|:------------|
| **tool_name** | `str` | Yes | None | Name of the specific tool to use (e.g., "linear__create_issue") |
| **tool_pack_id** | `str` | Yes | None | UUID of your Agent Handler Tool Pack |
| **registered_user_id** | `str` | Yes | None | UUID or origin_id of the registered user |
| **base_url** | `str` | No | "https://ah-api.merge.dev" | Base URL for Agent Handler API |
| **session_id** | `str` | No | Auto-generated | MCP session ID for maintaining context |
### `from_tool_pack()` Method
| Argument | Type | Required | Default | Description |
|:---------|:-----|:---------|:--------|:------------|
| **tool_pack_id** | `str` | Yes | None | UUID of your Agent Handler Tool Pack |
| **registered_user_id** | `str` | Yes | None | UUID or origin_id of the registered user |
| **tool_names** | `list[str]` | No | None | Specific tool names to load. If None, loads all available tools |
| **base_url** | `str` | No | "https://ah-api.merge.dev" | Base URL for Agent Handler API |
## Environment Variables
```bash
AGENT_HANDLER_API_KEY=your_api_key_here # Required for authentication
```
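Because the tool reads this key from the environment at runtime, it can help to fail fast before constructing any tools; a minimal sketch:
```python
# A minimal sketch: fail fast if the required key is missing before
# constructing any MergeAgentHandlerTool instances.
import os

if not os.environ.get("AGENT_HANDLER_API_KEY"):
    raise RuntimeError("Set AGENT_HANDLER_API_KEY before using MergeAgentHandlerTool")
```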
## Advanced Usage
### Multi-Agent Workflow with Different Tool Access
```python {2, 4-20}
from crewai import Agent, Task, Crew, Process
from crewai_tools import MergeAgentHandlerTool
# Create specialized tools for different agents
github_tools = MergeAgentHandlerTool.from_tool_pack(
    tool_pack_id="134e0111-0f67-44f6-98f0-597000290bb3",
    registered_user_id="91b2b905-e866-40c8-8be2-efe53827a0aa",
    tool_names=["github__create_pull_request", "github__get_pull_requests"]
)
linear_tools = MergeAgentHandlerTool.from_tool_pack(
    tool_pack_id="134e0111-0f67-44f6-98f0-597000290bb3",
    registered_user_id="91b2b905-e866-40c8-8be2-efe53827a0aa",
    tool_names=["linear__create_issue", "linear__update_issue"]
)
slack_tool = MergeAgentHandlerTool.from_tool_name(
    tool_name="slack__post_message",
    tool_pack_id="134e0111-0f67-44f6-98f0-597000290bb3",
    registered_user_id="91b2b905-e866-40c8-8be2-efe53827a0aa"
)
# Create specialized agents
code_reviewer = Agent(
    role='Code Reviewer',
    goal='Review pull requests and ensure code quality',
    backstory='I am an expert at reviewing code changes and providing constructive feedback.',
    tools=github_tools
)
task_manager = Agent(
    role='Task Manager',
    goal='Track and update project tasks based on code changes',
    backstory='I keep the project board up to date with the latest development progress.',
    tools=linear_tools
)
communicator = Agent(
    role='Team Communicator',
    goal='Keep the team informed about important updates',
    backstory='I make sure everyone knows what is happening in the project.',
    tools=[slack_tool]
)
# Create sequential tasks
review_task = Task(
    description="Review all open pull requests in the 'api-service' repository and identify any that need attention.",
    agent=code_reviewer,
    expected_output="List of pull requests that need review or have issues"
)
update_task = Task(
    description="Update Linear issues based on the pull request review findings. Mark completed PRs as done.",
    agent=task_manager,
    expected_output="Summary of updated Linear issues"
)
notify_task = Task(
    description="Post a summary of today's code review and task updates to the #engineering Slack channel.",
    agent=communicator,
    expected_output="Confirmation that the message was posted"
)
# Create a crew with sequential processing
crew = Crew(
    agents=[code_reviewer, task_manager, communicator],
    tasks=[review_task, update_task, notify_task],
    process=Process.sequential,
    verbose=True
)
result = crew.kickoff()
```
### Custom Session Management
Maintain context across multiple tool calls using session IDs:
```python {2, 4-17}
from crewai import Agent, Task, Crew
from crewai_tools import MergeAgentHandlerTool
# Create tools with the same session ID to maintain context
session_id = "project-sprint-planning-2024"
create_tool = MergeAgentHandlerTool(
    name="linear_create_issue",
    description="Creates a new issue in Linear",
    tool_name="linear__create_issue",
    tool_pack_id="134e0111-0f67-44f6-98f0-597000290bb3",
    registered_user_id="91b2b905-e866-40c8-8be2-efe53827a0aa",
    session_id=session_id
)
update_tool = MergeAgentHandlerTool(
    name="linear_update_issue",
    description="Updates an existing issue in Linear",
    tool_name="linear__update_issue",
    tool_pack_id="134e0111-0f67-44f6-98f0-597000290bb3",
    registered_user_id="91b2b905-e866-40c8-8be2-efe53827a0aa",
    session_id=session_id
)
sprint_planner = Agent(
    role='Sprint Planner',
    goal='Plan and organize sprint tasks',
    backstory='I help teams plan effective sprints with well-defined tasks.',
    tools=[create_tool, update_tool],
    verbose=True
)
planning_task = Task(
    description="Create 5 sprint tasks for the authentication feature and set their priorities based on dependencies.",
    agent=sprint_planner
)
crew = Crew(
    agents=[sprint_planner],
    tasks=[planning_task],
    verbose=True
)
result = crew.kickoff()
```
## Use Cases
### Unified Integration Access
- Access hundreds of third-party tools through a single unified API without managing multiple SDKs
- Enable agents to work with Linear, GitHub, Slack, Notion, Jira, Asana, and more from one integration point
- Reduce integration complexity by letting Agent Handler manage authentication and API versioning
### Secure Enterprise Workflows
- Leverage built-in authentication and permission management for all third-party integrations
- Maintain enterprise security standards with centralized access control and audit logging
- Enable agents to access company tools without exposing API keys or credentials in code
### Cross-Platform Automation
- Build workflows that span multiple platforms (e.g., create GitHub issues from Linear tasks, sync Notion pages to Slack)
- Enable seamless data flow between different tools in your tech stack
- Create intelligent automation that understands context across different platforms
### Dynamic Tool Discovery
- Load all available tools at runtime without hardcoding integration logic
- Enable agents to discover and use new tools as they're added to your Tool Pack
- Build flexible agents that can adapt to changing tool availability
### User-Specific Tool Access
- Different users can have different tool permissions and access levels
- Enable multi-tenant workflows where agents act on behalf of specific users
- Maintain proper attribution and permissions for all tool actions
## Available Integrations
Merge Agent Handler supports hundreds of integrations across multiple categories:
- **Project Management**: Linear, Jira, Asana, Monday.com, ClickUp
- **Code Management**: GitHub, GitLab, Bitbucket
- **Communication**: Slack, Microsoft Teams, Discord
- **Documentation**: Notion, Confluence, Google Docs
- **CRM**: Salesforce, HubSpot, Pipedrive
- **And many more...**
Visit the [Merge Agent Handler documentation](https://docs.ah.merge.dev/) for a complete list of available integrations.
## Error Handling
The tool provides comprehensive error handling:
- **Authentication Errors**: Invalid or missing API keys
- **Permission Errors**: User lacks permission for the requested action
- **API Errors**: Issues communicating with Agent Handler or third-party services
- **Validation Errors**: Invalid parameters passed to tool methods
All errors are wrapped in `MergeAgentHandlerToolError` for consistent error handling.
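As a sketch, you can catch this error around a direct tool invocation; note that the import path for `MergeAgentHandlerToolError` and the `run()` keyword arguments below are assumptions to adapt to your installed version and tool schema:
```python
# A minimal sketch; the error's import path and the run() arguments are
# assumptions -- adjust them to your installed version and tool schema.
from crewai_tools import MergeAgentHandlerTool, MergeAgentHandlerToolError

tool = MergeAgentHandlerTool.from_tool_name(
    tool_name="linear__create_issue",
    tool_pack_id="134e0111-0f67-44f6-98f0-597000290bb3",
    registered_user_id="91b2b905-e866-40c8-8be2-efe53827a0aa",
)

try:
    result = tool.run(title="Implement user authentication", priority="high")
except MergeAgentHandlerToolError as exc:
    # Authentication, permission, API, and validation failures all land here
    print(f"Agent Handler call failed: {exc}")
```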
View File
@@ -10,6 +10,10 @@ Integration tools let your agents hand off work to other automation platforms an
## **Available Tools**
<CardGroup cols={2}>
<Card title="Merge Agent Handler Tool" icon="diagram-project" href="/en/tools/integration/mergeagenthandlertool">
Securely access hundreds of third-party tools like Linear, GitHub, Slack, and more through Merge's unified API.
</Card>
<Card title="CrewAI Run Automation Tool" icon="robot" href="/en/tools/integration/crewaiautomationtool">
Invoke live CrewAI Platform automations, pass custom inputs, and poll for results directly from your agent.
</Card>
View File
@@ -35,7 +35,7 @@ info:
1. **Discover inputs** using `GET /inputs`
2. **Start execution** using `POST /kickoff`
3. **Monitor progress** using `GET /status/{kickoff_id}`
3. **Monitor progress** using `GET /{kickoff_id}/status`
version: 1.0.0
contact:
name: CrewAI Support
@@ -63,7 +63,7 @@ paths:
Use this endpoint to discover what inputs you need to provide when starting a crew execution.
operationId: getRequiredInputs
responses:
'200':
"200":
description: Successfully retrieved required inputs
content:
application/json:
@@ -84,13 +84,21 @@ paths:
outreach_crew:
summary: Outreach crew inputs
value:
inputs: ["name", "title", "company", "industry", "our_product", "linkedin_url"]
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
$ref: '#/components/responses/NotFoundError'
'500':
$ref: '#/components/responses/ServerError'
inputs:
[
"name",
"title",
"company",
"industry",
"our_product",
"linkedin_url",
]
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
$ref: "#/components/responses/NotFoundError"
"500":
$ref: "#/components/responses/ServerError"
/kickoff:
post:
@@ -170,7 +178,7 @@ paths:
taskWebhookUrl: "https://api.example.com/webhooks/task"
crewWebhookUrl: "https://api.example.com/webhooks/crew"
responses:
'200':
"200":
description: Crew execution started successfully
content:
application/json:
@@ -182,24 +190,24 @@ paths:
format: uuid
description: Unique identifier for tracking this execution
example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
'400':
"400":
description: Invalid request body or missing required inputs
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'401':
$ref: '#/components/responses/UnauthorizedError'
'422':
$ref: "#/components/schemas/Error"
"401":
$ref: "#/components/responses/UnauthorizedError"
"422":
description: Validation error - ensure all required inputs are provided
content:
application/json:
schema:
$ref: '#/components/schemas/ValidationError'
'500':
$ref: '#/components/responses/ServerError'
$ref: "#/components/schemas/ValidationError"
"500":
$ref: "#/components/responses/ServerError"
/status/{kickoff_id}:
/{kickoff_id}/status:
get:
summary: Get Execution Status
description: |
@@ -222,15 +230,15 @@ paths:
format: uuid
example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
responses:
'200':
"200":
description: Successfully retrieved execution status
content:
application/json:
schema:
oneOf:
- $ref: '#/components/schemas/ExecutionRunning'
- $ref: '#/components/schemas/ExecutionCompleted'
- $ref: '#/components/schemas/ExecutionError'
- $ref: "#/components/schemas/ExecutionRunning"
- $ref: "#/components/schemas/ExecutionCompleted"
- $ref: "#/components/schemas/ExecutionError"
examples:
running:
summary: Execution in progress
@@ -262,19 +270,19 @@ paths:
status: "error"
error: "Task execution failed: Invalid API key for external service"
execution_time: 23.1
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
description: Kickoff ID not found
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Execution not found"
message: "No execution found with ID: abcd1234-5678-90ef-ghij-klmnopqrstuv"
'500':
$ref: '#/components/responses/ServerError'
"500":
$ref: "#/components/responses/ServerError"
/resume:
post:
@@ -354,7 +362,7 @@ paths:
taskWebhookUrl: "https://api.example.com/webhooks/task"
crewWebhookUrl: "https://api.example.com/webhooks/crew"
responses:
'200':
"200":
description: Execution resumed successfully
content:
application/json:
@@ -381,28 +389,28 @@ paths:
value:
status: "retrying"
message: "Task will be retried with your feedback"
'400':
"400":
description: Invalid request body or execution not in pending state
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Invalid Request"
message: "Execution is not in pending human input state"
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
description: Execution ID or Task ID not found
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Not Found"
message: "Execution ID not found"
'500':
$ref: '#/components/responses/ServerError'
"500":
$ref: "#/components/responses/ServerError"
components:
securitySchemes:
@@ -458,7 +466,7 @@ components:
tasks:
type: array
items:
$ref: '#/components/schemas/TaskResult'
$ref: "#/components/schemas/TaskResult"
execution_time:
type: number
description: Total execution time in seconds
@@ -536,7 +544,7 @@ components:
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Unauthorized"
message: "Invalid or missing bearer token"
@@ -546,7 +554,7 @@ components:
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Not Found"
message: "The requested resource was not found"
@@ -556,7 +564,7 @@ components:
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Internal Server Error"
message: "An unexpected error occurred"
View File
@@ -35,7 +35,7 @@ info:
1. **Discover inputs** using `GET /inputs`
2. **Start execution** using `POST /kickoff`
3. **Monitor progress** using `GET /status/{kickoff_id}`
3. **Monitor progress** using `GET /{kickoff_id}/status`
version: 1.0.0
contact:
name: CrewAI Support
@@ -63,7 +63,7 @@ paths:
Use this endpoint to discover what inputs you need to provide when starting a crew execution.
operationId: getRequiredInputs
responses:
'200':
"200":
description: Successfully retrieved required inputs
content:
application/json:
@@ -84,13 +84,21 @@ paths:
outreach_crew:
summary: Outreach crew inputs
value:
inputs: ["name", "title", "company", "industry", "our_product", "linkedin_url"]
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
$ref: '#/components/responses/NotFoundError'
'500':
$ref: '#/components/responses/ServerError'
inputs:
[
"name",
"title",
"company",
"industry",
"our_product",
"linkedin_url",
]
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
$ref: "#/components/responses/NotFoundError"
"500":
$ref: "#/components/responses/ServerError"
/kickoff:
post:
@@ -170,7 +178,7 @@ paths:
taskWebhookUrl: "https://api.example.com/webhooks/task"
crewWebhookUrl: "https://api.example.com/webhooks/crew"
responses:
'200':
"200":
description: Crew execution started successfully
content:
application/json:
@@ -182,24 +190,24 @@ paths:
format: uuid
description: Unique identifier for tracking this execution
example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
'400':
"400":
description: Invalid request body or missing required inputs
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'401':
$ref: '#/components/responses/UnauthorizedError'
'422':
$ref: "#/components/schemas/Error"
"401":
$ref: "#/components/responses/UnauthorizedError"
"422":
description: Validation error - ensure all required inputs are provided
content:
application/json:
schema:
$ref: '#/components/schemas/ValidationError'
'500':
$ref: '#/components/responses/ServerError'
$ref: "#/components/schemas/ValidationError"
"500":
$ref: "#/components/responses/ServerError"
/status/{kickoff_id}:
/{kickoff_id}/status:
get:
summary: Get Execution Status
description: |
@@ -222,15 +230,15 @@ paths:
format: uuid
example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
responses:
'200':
"200":
description: Successfully retrieved execution status
content:
application/json:
schema:
oneOf:
- $ref: '#/components/schemas/ExecutionRunning'
- $ref: '#/components/schemas/ExecutionCompleted'
- $ref: '#/components/schemas/ExecutionError'
- $ref: "#/components/schemas/ExecutionRunning"
- $ref: "#/components/schemas/ExecutionCompleted"
- $ref: "#/components/schemas/ExecutionError"
examples:
running:
summary: Execution in progress
@@ -262,19 +270,19 @@ paths:
status: "error"
error: "Task execution failed: Invalid API key for external service"
execution_time: 23.1
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
description: Kickoff ID not found
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Execution not found"
message: "No execution found with ID: abcd1234-5678-90ef-ghij-klmnopqrstuv"
'500':
$ref: '#/components/responses/ServerError'
"500":
$ref: "#/components/responses/ServerError"
/resume:
post:
@@ -354,7 +362,7 @@ paths:
taskWebhookUrl: "https://api.example.com/webhooks/task"
crewWebhookUrl: "https://api.example.com/webhooks/crew"
responses:
'200':
"200":
description: Execution resumed successfully
content:
application/json:
@@ -381,28 +389,28 @@ paths:
value:
status: "retrying"
message: "Task will be retried with your feedback"
'400':
"400":
description: Invalid request body or execution not in pending state
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Invalid Request"
message: "Execution is not in pending human input state"
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
description: Execution ID or Task ID not found
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Not Found"
message: "Execution ID not found"
'500':
$ref: '#/components/responses/ServerError'
"500":
$ref: "#/components/responses/ServerError"
components:
securitySchemes:
@@ -458,7 +466,7 @@ components:
tasks:
type: array
items:
$ref: '#/components/schemas/TaskResult'
$ref: "#/components/schemas/TaskResult"
execution_time:
type: number
description: Total execution time in seconds
@@ -536,7 +544,7 @@ components:
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Unauthorized"
message: "Invalid or missing bearer token"
@@ -546,7 +554,7 @@ components:
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Not Found"
message: "The requested resource was not found"
@@ -556,7 +564,7 @@ components:
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Internal Server Error"
message: "An unexpected error occurred"

View File

@@ -84,7 +84,7 @@ paths:
'500':
$ref: '#/components/responses/ServerError'
/status/{kickoff_id}:
/{kickoff_id}/status:
get:
summary: Get execution status
description: |

View File

@@ -35,7 +35,7 @@ info:
1. **Discover the inputs** using `GET /inputs`
2. **Start the execution** using `POST /kickoff`
3. **Monitor progress** using `GET /status/{kickoff_id}`
3. **Monitor progress** using `GET /{kickoff_id}/status`
version: 1.0.0
contact:
name: CrewAI Support
@@ -56,7 +56,7 @@ paths:
Returns the list of input parameters that your crew expects.
operationId: getRequiredInputs
responses:
'200':
"200":
description: Required inputs retrieved successfully
content:
application/json:
@@ -69,12 +69,12 @@ paths:
type: string
description: Names of the input parameters
example: ["budget", "interests", "duration", "age"]
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
$ref: '#/components/responses/NotFoundError'
'500':
$ref: '#/components/responses/ServerError'
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
$ref: "#/components/responses/NotFoundError"
"500":
$ref: "#/components/responses/ServerError"
/kickoff:
post:
@@ -104,7 +104,7 @@ paths:
age: "35"
responses:
'200':
"200":
description: Execution started successfully
content:
application/json:
@@ -115,12 +115,12 @@ paths:
type: string
format: uuid
example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
'401':
$ref: '#/components/responses/UnauthorizedError'
'500':
$ref: '#/components/responses/ServerError'
"401":
$ref: "#/components/responses/UnauthorizedError"
"500":
$ref: "#/components/responses/ServerError"
/status/{kickoff_id}:
/{kickoff_id}/status:
get:
summary: Get Execution Status
description: |
@@ -136,25 +136,25 @@ paths:
type: string
format: uuid
responses:
'200':
"200":
description: Status retrieved successfully
content:
application/json:
schema:
oneOf:
- $ref: '#/components/schemas/ExecutionRunning'
- $ref: '#/components/schemas/ExecutionCompleted'
- $ref: '#/components/schemas/ExecutionError'
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
- $ref: "#/components/schemas/ExecutionRunning"
- $ref: "#/components/schemas/ExecutionCompleted"
- $ref: "#/components/schemas/ExecutionError"
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
description: Kickoff ID not found
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
'500':
$ref: '#/components/responses/ServerError'
$ref: "#/components/schemas/Error"
"500":
$ref: "#/components/responses/ServerError"
/resume:
post:
@@ -234,7 +234,7 @@ paths:
taskWebhookUrl: "https://api.example.com/webhooks/task"
crewWebhookUrl: "https://api.example.com/webhooks/crew"
responses:
'200':
"200":
description: Execution resumed successfully
content:
application/json:
@@ -261,28 +261,28 @@ paths:
value:
status: "retrying"
message: "Task will be retried with your feedback"
'400':
"400":
description: Invalid request body or execution not in pending state
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Invalid Request"
message: "Execution is not in pending human input state"
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
description: Execution ID or Task ID not found
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
example:
error: "Not Found"
message: "Execution ID not found"
'500':
$ref: '#/components/responses/ServerError'
"500":
$ref: "#/components/responses/ServerError"
components:
securitySchemes:
@@ -324,7 +324,7 @@ components:
tasks:
type: array
items:
$ref: '#/components/schemas/TaskResult'
$ref: "#/components/schemas/TaskResult"
execution_time:
type: number
@@ -380,16 +380,16 @@ components:
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
NotFoundError:
description: Resource not found
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"
ServerError:
description: Internal server error
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
$ref: "#/components/schemas/Error"

View File

@@ -16,16 +16,17 @@ Welcome to the CrewAI Enterprise API reference.
In the CrewAI AOP dashboard, go to your crew's detail page and copy the Bearer Token from the Status tab.
</Step>
<Step title="Check required inputs">
Use the `GET /inputs` endpoint to see which parameters your crew expects.
</Step>
<Step title="Check required inputs">
Use the `GET /inputs` endpoint to see which parameters your crew expects.
</Step>
<Step title="Start the crew execution">
Call `POST /kickoff` with your inputs to start the crew execution and receive a `kickoff_id`.
</Step>
<Step title="Start the crew execution">
Call `POST /kickoff` with your inputs to start the crew execution and receive a
`kickoff_id`.
</Step>
<Step title="Monitor progress">
Use `GET /status/{kickoff_id}` to check the execution status and retrieve the results.
Use `GET /{kickoff_id}/status` to check the execution status and retrieve the results.
</Step>
</Steps>
@@ -40,13 +41,14 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
### Token types
| Token type | Scope | Use case |
|:-----------|:--------|:----------|
| **Bearer Token** | Organization-level access | Full crew operations, ideal for server-to-server integrations |
| **User Bearer Token** | User-scoped access | Limited permissions, suited to user-specific operations |
| Token type | Scope | Use case |
| :-------------------- | :--------------- | :------------------------------------ |
| **Bearer Token** | Organization-level access | Full crew operations, ideal for server-to-server integrations |
| **User Bearer Token** | User-scoped access | Limited permissions, suited to user-specific operations |
<Tip>
Both token types are available on the Status tab of your crew's detail page in the CrewAI AOP dashboard.
Both token types are available on the Status tab of your crew's detail page in
the CrewAI AOP dashboard.
</Tip>
## Base URL
@@ -63,29 +65,33 @@ https://your-crew-name.crewai.com
1. **Discover**: call `GET /inputs` to learn what your crew needs.
2. **Execute**: submit your inputs via `POST /kickoff` to start processing.
3. **Monitor**: poll `GET /status/{kickoff_id}` periodically until completion.
3. **Monitor**: poll `GET /{kickoff_id}/status` periodically until completion.
4. **Results**: extract the final output from the completed response, as sketched below.
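The whole loop fits in a short script. A minimal sketch (the URL, token, and placeholder input values are assumptions; `requests` must be installed):
```python
import time

import requests

BASE_URL = "https://your-crew-name.crewai.com"  # placeholder crew URL
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}  # placeholder token

# 1. Discover which inputs the crew expects
required = requests.get(f"{BASE_URL}/inputs", headers=HEADERS).json()["inputs"]

# 2. Start the execution with placeholder values for each input
kickoff = requests.post(
    f"{BASE_URL}/kickoff",
    headers=HEADERS,
    json={"inputs": {name: "..." for name in required}},
)
kickoff_id = kickoff.json()["kickoff_id"]

# 3. Poll the status until the crew finishes
while True:
    status = requests.get(f"{BASE_URL}/{kickoff_id}/status", headers=HEADERS).json()
    if status["status"] in ("completed", "error"):
        break
    time.sleep(5)

# 4. The completed response carries the final output
print(status)
```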
## Error handling
The API uses standard HTTP status codes:
| Code | Meaning |
|------|:--------|
| `200` | Success |
| `400` | Bad request - invalid input format |
| `401` | Authentication failed - invalid bearer token |
| Code | Meaning |
| ----- | :------------------------------------ |
| `200` | Success |
| `400` | Bad request - invalid input format |
| `401` | Authentication failed - invalid bearer token |
| `404` | Not found - the resource does not exist |
| `422` | Validation error - required inputs missing |
| `500` | Server error - contact support |
| `422` | Validation error - required inputs missing |
| `500` | Server error - contact support |
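In client code, these map naturally onto a few branches. A minimal sketch, reusing the placeholder `BASE_URL` and `HEADERS` (and the `requests` import) from the example above:
```python
response = requests.post(f"{BASE_URL}/kickoff", headers=HEADERS, json={"inputs": {}})

if response.status_code == 422:
    # Validation error: one or more required inputs are missing
    print("Missing inputs:", response.json())
elif response.status_code == 401:
    # Authentication failed: check your bearer token
    print("Invalid or missing bearer token")
else:
    response.raise_for_status()  # surfaces 400/404/500 as exceptions
```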
## Interactive testing
<Info>
**Why is there no "Send" button?** Each CrewAI AOP user has their own crew URL, so to avoid confusion we use **reference mode** instead of an interactive playground. This shows exactly what each request looks like, without a non-working send button.
**Why is there no "Send" button?** Each CrewAI AOP user has their own crew
URL, so to avoid confusion we use **reference mode** instead of an interactive
playground. This shows exactly what each request looks like, without a
non-working send button.
</Info>
Each endpoint page shows you:
- ✅ The **exact request format** with all parameters
- ✅ **Response examples** for success and error cases
- ✅ **Code samples** in multiple languages (cURL, Python, JavaScript, and more)
@@ -103,6 +109,7 @@ The API uses standard HTTP status codes:
</CardGroup>
**Example workflow:**
1. **Copy the cURL example** (from the endpoint page)
2. Replace **`your-actual-crew-name.crewai.com`** with your actual crew URL
3. Replace the **Bearer token** with the real token copied from your dashboard
@@ -111,10 +118,18 @@ The API uses standard HTTP status codes:
## Need help?
<CardGroup cols={2}>
<Card title="Enterprise Support" icon="headset" href="mailto:support@crewai.com">
<Card
title="Enterprise Support"
icon="headset"
href="mailto:support@crewai.com"
>
Get support with API integration and troubleshooting
</Card>
<Card title="Enterprise Dashboard" icon="chart-line" href="https://app.crewai.com">
<Card
title="Enterprise Dashboard"
icon="chart-line"
href="https://app.crewai.com"
>
Manage your crews and view execution logs
</Card>
</CardGroup>

View File

@@ -1,8 +1,6 @@
---
title: "GET /status/{kickoff_id}"
title: "GET /{kickoff_id}/status"
description: "실행 상태 조회"
openapi: "/enterprise-api.ko.yaml GET /status/{kickoff_id}"
openapi: "/enterprise-api.ko.yaml GET /{kickoff_id}/status"
mode: "wide"
---

View File

@@ -33,6 +33,7 @@ In crewAI, a crew is a collaborative group of agents working together to accomplish a set of tasks
| **Planning** *(optional)* | `planning` | Adds planning capability to the crew. When enabled, all crew data is sent to an AgentPlanner before each crew iteration to create a task plan, which is appended to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner during the planning process. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | Knowledge sources available at the crew level, accessible to all agents. |
| **Stream** _(optional)_ | `stream` | Enables streaming output so you receive real-time updates during crew execution. Returns a `CrewStreamingOutput` object whose chunks can be iterated. Defaults to `False`. |
<Tip>
**Crew Max RPM**: the `max_rpm` attribute sets the maximum number of requests per minute the crew can perform, and when set at the crew level it overrides individual agents' `max_rpm` settings, as in the sketch below.
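A minimal sketch of that override (hypothetical values; agent and task defined as usual):
```python Code
from crewai import Agent, Crew, Task

researcher = Agent(
    role="Researcher",
    goal="Research topics thoroughly",
    backstory="An experienced researcher.",
    max_rpm=60,  # agent-level limit
)

task = Task(
    description="Research the topic: {topic}",
    expected_output="A short summary.",
    agent=researcher,
)

# The crew-level limit (10 RPM) overrides the agent's 60 RPM while
# the agent runs inside this crew.
crew = Crew(agents=[researcher], tasks=[task], max_rpm=10)
```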
@@ -306,12 +307,27 @@ print(result)
### Different ways to kick off a crew
Once your crew is assembled, start the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.
Once your crew is assembled, start the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process.
#### Synchronous methods
- `kickoff()`: starts the execution process following the defined process flow.
- `kickoff_for_each()`: executes tasks sequentially for each item in a collection of inputs.
- `kickoff_async()`: starts the workflow asynchronously.
- `kickoff_for_each_async()`: executes tasks concurrently for each input item, leveraging asynchronous processing.
#### Asynchronous methods
CrewAI provides two approaches to asynchronous execution:
| Method | Type | Description |
|--------|------|-------------|
| `akickoff()` | Native async | True async/await across the entire execution chain |
| `akickoff_for_each()` | Native async | Native async execution for each input in a list |
| `kickoff_async()` | Thread-based | Wraps synchronous execution in `asyncio.to_thread` |
| `kickoff_for_each_async()` | Thread-based | Thread-based async for each input in a list |
<Note>
For high-concurrency workloads, `akickoff()` and `akickoff_for_each()` are recommended. They use native async for task execution, memory operations, and knowledge retrieval.
</Note>
```python Code
# Start the crew's task execution
@@ -324,19 +340,53 @@ results = my_crew.kickoff_for_each(inputs=inputs_array)
for result in results:
print(result)
# Example of using kickoff_async
# Example of using native async with akickoff
inputs = {'topic': 'AI in healthcare'}
async_result = await my_crew.akickoff(inputs=inputs)
print(async_result)
# Example of using native async with akickoff_for_each
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = await my_crew.akickoff_for_each(inputs=inputs_array)
for async_result in async_results:
print(async_result)
# Example of using thread-based kickoff_async
inputs = {'topic': 'AI in healthcare'}
async_result = await my_crew.kickoff_async(inputs=inputs)
print(async_result)
# Example of using kickoff_for_each_async
# Example of using thread-based kickoff_for_each_async
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = await my_crew.kickoff_for_each_async(inputs=inputs_array)
for async_result in async_results:
print(async_result)
```
These methods give you flexibility in managing and executing tasks within your crew, supporting both synchronous and asynchronous workflows tailored to your needs.
These methods give you flexibility in managing and executing tasks within your crew, supporting both synchronous and asynchronous workflows tailored to your needs. For detailed async examples, see the [Kickoff Crew Asynchronously](/ko/learn/kickoff-async) guide.
### Streaming crew execution
To watch crew execution in real time, enable streaming and receive output as it is generated:
```python Code
# Enable streaming
crew = Crew(
    agents=[researcher],
    tasks=[task],
    stream=True
)

# Iterate over the streaming output
streaming = crew.kickoff(inputs={"topic": "AI"})
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# Access the final result
result = streaming.result
```
For more details on streaming, see the [Streaming Crew Execution](/ko/learn/streaming-crew-execution) guide.
### Replaying from a specific task

View File

@@ -565,6 +565,55 @@ Fourth method running
When you run this Flow, the output varies depending on the random boolean generated in `start_method`.
### Human in the Loop (Human Feedback)
The `@human_feedback` decorator enables human-in-the-loop workflows that pause flow execution to collect human feedback. This is useful for approval gates, quality reviews, and decision points that require human judgment.
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult

class ReviewFlow(Flow):
    @start()
    @human_feedback(
        message="Do you approve this content?",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def generate_content(self):
        return "Content to review..."

    @listen("approved")
    def on_approval(self, result: HumanFeedbackResult):
        print(f"Approved! Feedback: {result.feedback}")

    @listen("rejected")
    def on_rejection(self, result: HumanFeedbackResult):
        print(f"Rejected. Reason: {result.feedback}")
```
When `emit` is specified, the human's free-form feedback is interpreted by an LLM, mapped to one of the given outcomes, and the matching `@listen` decorator is triggered.
You can also simply collect feedback without any routing:
```python Code
@start()
@human_feedback(message="Any comments on this output?")
def my_method(self):
    return "Output to review"

@listen(my_method)
def next_step(self, result: HumanFeedbackResult):
    # Access the feedback via result.feedback
    # Access the original output via result.output
    pass
```
All feedback collected during a flow run is available through `self.last_human_feedback` (the most recent) or `self.human_feedback_history` (all feedback as a list).
For a complete guide to human feedback in flows, including async/non-blocking feedback and custom providers (Slack, webhooks, and more), see [Human Feedback in Flows](/ko/learn/human-feedback-in-flows).
## Adding agents to flows
Agents can be integrated seamlessly into flows as a lightweight alternative to a full crew when you need simple, focused task execution. Below is an example of using an agent inside a flow to perform market research:

View File

@@ -515,8 +515,7 @@ crew = Crew(
"provider": "huggingface",
"config": {
"api_key": "your-hf-token", # Optional for public models
"model": "sentence-transformers/all-MiniLM-L6-v2",
"api_url": "https://api-inference.huggingface.co" # or your custom endpoint
"model": "sentence-transformers/all-MiniLM-L6-v2"
}
}
)

View File

@@ -0,0 +1,154 @@
---
title: Production Architecture
description: Best practices for building production-grade AI applications with CrewAI
icon: server
mode: "wide"
---
# The Flow-First Mindset
When building production AI applications with CrewAI, **we recommend starting with a Flow**.
Running individual Crews or Agents is possible, but wrapping them in a Flow provides the structure needed for robust, scalable applications.
## Why Flows?
1. **State Management**: Flows provide a built-in way to manage state across the steps of your application. This is crucial for passing data between Crews, maintaining context, and handling user input.
2. **Control**: Flows let you define the exact execution path, including loops, conditionals, and branching logic. This is essential for handling edge cases and ensuring your application behaves predictably.
3. **Observability**: Flows provide a clear structure that makes it easy to trace execution, debug issues, and monitor performance. For detailed insights, we recommend [CrewAI Tracing](/ko/observability/tracing); run `crewai login` to enable the free observability features.
## Architecture
A typical production CrewAI application looks like this:
```mermaid
graph TD
    Start((Start)) --> Flow[Flow Orchestrator]
    Flow --> State{State Management}
    State --> Step1[Step 1: Data Collection]
    Step1 --> Crew1[Research Crew]
    Crew1 --> State
    State --> Step2{Check Condition}
    Step2 -- "Valid" --> Step3[Step 3: Execution]
    Step3 --> Crew2[Action Crew]
    Step2 -- "Invalid" --> End((End))
    Crew2 --> End
```
### 1. The Flow Class
The `Flow` class is your entry point. It defines the state schema and the methods that execute your logic.
```python
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel

class AppState(BaseModel):
    user_input: str = ""
    research_results: str = ""
    final_report: str = ""

class ProductionFlow(Flow[AppState]):
    @start()
    def gather_input(self):
        # ... logic that gathers input ...
        pass

    @listen(gather_input)
    def run_research_crew(self):
        # ... trigger a Crew ...
        pass
```
### 2. State Management
Use Pydantic models to define your state. This guarantees type safety and makes it clear what data is available at each step.
- **Keep it minimal**: store only what needs to persist between steps.
- **Use structured data**: avoid unstructured dictionaries where possible.
### 3. Crews as Units of Work
Delegate complex work to Crews. A Crew should focus on a specific goal (for example, "research a topic" or "write a blog post").
- **Don't over-engineer your Crews**: keep them focused.
- **Pass state explicitly**: feed the data a Crew needs from the Flow state into its inputs.
```python
@listen(gather_input)
def run_research_crew(self):
    crew = ResearchCrew()
    result = crew.kickoff(inputs={"topic": self.state.user_input})
    self.state.research_results = result.raw
```
## Control Primitives
Use CrewAI's control primitives to add robustness and control to your Crews.
### 1. Task Guardrails
Use [Task Guardrails](/ko/concepts/tasks#task-guardrails) to validate a task's output before it is accepted. This helps ensure that agents produce high-quality results.
```python
from typing import Any, Tuple

from crewai.tasks.task_output import TaskOutput

def validate_content(result: TaskOutput) -> Tuple[bool, Any]:
    if len(result.raw) < 100:
        return (False, "Content is too short. Please expand.")
    return (True, result.raw)

task = Task(
    ...,
    guardrail=validate_content
)
```
### 2. Structured Outputs
Always use structured outputs (`output_pydantic` or `output_json`) when passing data between tasks or back to your application. This prevents parsing errors and guarantees type safety.
```python
from typing import List

from pydantic import BaseModel

class ResearchResult(BaseModel):
    summary: str
    sources: List[str]

task = Task(
    ...,
    output_pydantic=ResearchResult
)
```
### 3. LLM Hooks
Use [LLM Hooks](/ko/learn/llm-hooks) to inspect or modify messages before they are sent to the LLM, and to sanitize responses.
```python
@before_llm_call
def log_request(context):
print(f"Agent {context.agent.role} is calling the LLM...")
```
## Deployment Patterns
Consider the following when deploying a Flow:
### CrewAI Enterprise
The easiest way to deploy a Flow is with CrewAI Enterprise, which handles infrastructure, authentication, and monitoring for you.
Check the [deployment guide](/ko/enterprise/guides/deploy-crew) to get started.
```bash
crewai deploy create
```
### Async Execution
For long-running work, use `kickoff_async` to avoid blocking your API.
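A minimal sketch of what that can look like behind a web endpoint (FastAPI and the route are illustrative assumptions; `ProductionFlow` is the Flow defined above):
```python
from fastapi import FastAPI

app = FastAPI()

@app.post("/run")
async def run_flow(user_input: str):
    flow = ProductionFlow()
    # kickoff_async starts the flow without blocking the event loop
    result = await flow.kickoff_async(inputs={"user_input": user_input})
    return {"result": result}
```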
### Persistence
Use the `@persist` decorator to save your Flow's state to a database. This lets you resume execution when a process is interrupted or has to wait for human input.
```python
@persist
class ProductionFlow(Flow[AppState]):
# ...
```
## Summary
- **Start with a Flow.**
- **Define a clear State.**
- **Use Crews for complex work.**
- **Deploy with an API and persistence.**

View File

@@ -62,13 +62,13 @@ Test the Gmail trigger integration locally using the CrewAI CLI
crewai triggers list
# Simulate the Gmail trigger with a real payload
crewai triggers run gmail/new_email
crewai triggers run gmail/new_email_received
```
The `crewai triggers run` command runs your crew with a complete Gmail payload, so you can test your parsing logic before deploying.
<Warning>
During development, use `crewai triggers run gmail/new_email` (not `crewai run`). After deployment, your crew receives trigger payloads automatically.
During development, use `crewai triggers run gmail/new_email_received` (not `crewai run`). After deployment, your crew receives trigger payloads automatically.
</Warning>
## Monitoring Executions
@@ -83,6 +83,6 @@ Track history and performance of triggered runs:
- Ensure Gmail is connected in Tools & Integrations
- Verify the Gmail Trigger is enabled on the Triggers tab
- Test locally with `crewai triggers run gmail/new_email` to confirm the exact payload structure
- Test locally with `crewai triggers run gmail/new_email_received` to confirm the exact payload structure
- Check the execution logs and confirm the payload is passed as `crewai_trigger_payload`
- Note: use `crewai triggers run` to simulate a trigger execution (not `crewai run`)

View File

@@ -7,109 +7,89 @@ mode: "wide"
# What is CrewAI?
**CrewAI is a lean, lightning-fast Python framework built entirely from scratch, completely independent of LangChain or other agent frameworks.**
**CrewAI is the leading open-source framework for orchestrating autonomous AI agents and building complex workflows.**
CrewAI offers both high-level simplicity and precise low-level control, making it ideal for creating autonomous AI agents tailored to any scenario:
It combines the collaborative intelligence of **Crews** with the precise control of **Flows**, empowering developers to build production-grade multi-agent systems.
- **[CrewAI Crews](/ko/guides/crews/first-crew)**: optimize for autonomy and collaborative intelligence, letting you create AI teams where each agent has a specific role, tools, and goals.
- **[CrewAI Flows](/ko/guides/flows/first-flow)**: enable granular, event-driven control and precise task orchestration with single LLM calls, integrating natively with Crews.
- **[CrewAI Flows](/ko/guides/flows/first-flow)**: the backbone of your AI application. Flows let you create structured, event-driven workflows that manage state and control execution, providing the foundation your AI agents work on.
- **[CrewAI Crews](/ko/guides/crews/first-crew)**: the units of work within a Flow. Crews are teams of autonomous agents that collaborate to solve the specific tasks a Flow delegates to them.
With over 100,000 developers certified through our community courses, CrewAI is rapidly becoming the standard for enterprise-ready AI automation.
With over 100,000 developers certified through our community courses, CrewAI is the standard for enterprise AI automation.
## How Crews Work
## The CrewAI Architecture
CrewAI's architecture is designed to balance autonomy and control.
### 1. Flows: The Backbone
<Note>
Just as a company's departments (sales, engineering, marketing) work together under leadership to achieve business goals, CrewAI helps you create an organization of AI agents with specialized roles that collaborate to accomplish complex tasks.
</Note>
<Frame caption="CrewAI Framework Overview">
<img src="/images/crews.png" alt="CrewAI Framework Overview" />
</Frame>
| Component | Description | Key Features |
|:----------|:----:|:----------|
| **Crew** | The top-level organization | • Manages AI agent teams<br/>• Oversees workflows<br/>• Ensures collaboration<br/>• Delivers outcomes |
| **AI Agents** | Specialized team members | • Have specific roles (Researcher, Writer, etc.)<br/>• Use designated tools<br/>• Can delegate tasks<br/>• Make autonomous decisions |
| **Process** | The workflow management system | • Defines collaboration patterns<br/>• Controls task assignment<br/>• Manages interactions<br/>• Ensures efficient execution |
| **Task** | Individual assignments | • Have clear objectives<br/>• Use specific tools<br/>• Feed into the larger process<br/>• Produce actionable results |
### How It All Works Together
1. The **Crew** organizes the overall operation
2. **AI agents** perform their specialized tasks
3. The **Process** ensures smooth collaboration
4. **Tasks** get completed to achieve the goal
## Key Features
<CardGroup cols={2}>
<Card title="Role-Based Agents" icon="users">
Create agents with distinct roles, expertise, and goals, such as Researcher, Analyst, or Writer
</Card>
<Card title="Flexible Tools" icon="screwdriver-wrench">
Equip agents with custom tools and APIs to interact with external services and data sources
</Card>
<Card title="Intelligent Collaboration" icon="people-arrows">
Agents work together, sharing insights and coordinating tasks to achieve complex objectives
</Card>
<Card title="Task Management" icon="list-check">
Define sequential or parallel workflows, with agents automatically handling task dependencies
</Card>
</CardGroup>
## How Flows Work
<Note>
While Crews excel at autonomous collaboration, Flows provide structured automation with fine-grained control over workflow execution. Flows handle conditional logic, loops, and dynamic state management precisely, ensuring tasks run reliably, securely, and efficiently. They integrate seamlessly with Crews, letting you balance high autonomy with strict control.
Think of a Flow as your application's "manager" or "process definition". It defines the steps, the logic, and how data moves through your system.
</Note>
<Frame caption="CrewAI Framework Overview">
<img src="/images/flows.png" alt="CrewAI Framework Overview" />
</Frame>
| Component | Description | Key Capabilities |
|:----------|:-----------:|:------------|
| **Flow** | Structured workflow orchestration | • Manages execution paths<br/>• Handles state transitions<br/>• Controls task sequencing<br/>• Ensures reliable execution |
| **Events** | Triggers for workflow actions | • Initiate specific processes<br/>• Enable dynamic responses<br/>• Support conditional branching<br/>• Allow real-time adaptation |
| **States** | Workflow execution contexts | • Maintain execution data<br/>• Enable data persistence<br/>• Support resumability<br/>• Ensure execution integrity |
| **Crew Support** | Enhances workflow automation | • Injects agency when needed<br/>• Complements structured workflows<br/>• Balances automation with intelligence<br/>• Enables adaptive decision-making |
What Flows provide:
- **State management**: persist data across steps and executions.
- **Event-driven execution**: trigger work based on events or external input.
- **Control flow**: use conditional logic, loops, and branching.
### Key Capabilities
### 2. Crews: The Intelligence
<Note>
Crews are the "teams" that do the heavy lifting. Within a Flow, you can trigger a Crew to solve complex problems that require creativity and collaboration.
</Note>
<Frame caption="CrewAI Framework Overview">
<img src="/images/crews.png" alt="CrewAI Framework Overview" />
</Frame>
What Crews provide:
- **Role-playing agents**: specialized agents with specific goals and tools.
- **Autonomous collaboration**: agents work together to solve tasks.
- **Task delegation**: work is assigned and executed according to each agent's capabilities.
## How It All Works Together
1. A **Flow** is triggered by an event or starts a process.
2. The **Flow** manages state and decides what to do next.
3. The **Flow** delegates complex work to a **Crew**.
4. The **Crew**'s agents collaborate to complete the work.
5. The **Crew** returns its result to the **Flow**.
6. The **Flow** continues execution based on the result, as in the sketch below.
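A minimal sketch of that loop in code (`ResearchCrew` is an assumed crew class defined elsewhere):
```python
from crewai.flow.flow import Flow, listen, start

class AppFlow(Flow):
    @start()
    def prepare(self):
        # Steps 1-2: the Flow starts and decides what to do next
        return "AI in healthcare"

    @listen(prepare)
    def delegate(self, topic):
        # Steps 3-5: the Flow delegates to a Crew and receives its result
        result = ResearchCrew().kickoff(inputs={"topic": topic})
        # Step 6: the Flow continues with the result
        return result.raw
```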
## Key Features
<CardGroup cols={2}>
<Card title="Event-driven orchestration" icon="bolt">
Define precise execution paths that respond dynamically to events
<Card title="Production-grade Flows" icon="arrow-progress">
Build reliable, stateful workflows that can handle long-running processes and complex logic.
</Card>
<Card title="Fine-grained control" icon="sliders">
Manage workflow state and conditional execution securely and efficiently
<Card title="Autonomous Crews" icon="users">
Deploy teams of agents that can plan, execute, and collaborate to achieve high-level goals.
</Card>
<Card title="Native Crew integration" icon="puzzle-piece">
Combine with Crews effortlessly for enhanced autonomy and intelligence
<Card title="Flexible Tools" icon="screwdriver-wrench">
Connect agents to any API, database, or local tool.
</Card>
<Card title="Deterministic execution" icon="route">
Ensure predictable outcomes with explicit control flow and error handling
<Card title="Enterprise Security" icon="lock">
Designed with security and compliance in mind for enterprise deployments.
</Card>
</CardGroup>
## When to Use Crews and Flows
## When to Use Crews vs. Flows
<Note>
Understanding when to use [Crews](/ko/guides/crews/first-crew) versus [Flows](/ko/guides/flows/first-flow) is key to maximizing CrewAI's potential in your applications.
</Note>
**Short answer: use both.**
| Use Case | Recommended Approach | Why |
|:---------|:---------------------|:-----|
| **Open-ended research** | [Crews](/ko/guides/crews/first-crew) | Best for tasks requiring creative thinking, exploration, and adaptation |
| **Content generation** | [Crews](/ko/guides/crews/first-crew) | Suited to collaborative creation of articles, reports, and marketing materials |
| **Decision workflows** | [Flows](/ko/guides/flows/first-flow) | When you need predictable, auditable decision paths and precise control |
| **API orchestration** | [Flows](/ko/guides/flows/first-flow) | For reliable integration with multiple external services in a specific order |
| **Hybrid applications** | Combined approach | Orchestrate the overall process with [Flows](/ko/guides/flows/first-flow) and use [Crews](/ko/guides/crews/first-crew) for complex subtasks |
For any production application, **start with a Flow**.
### Decision Framework
- **Use a Flow** to define your application's overall structure, state, and logic.
- **Use a Crew** within a Flow step when you need a team of agents for a specific, complex task that requires autonomy.
- **Choose [Crews](/ko/guides/crews/first-crew) when:** you need autonomous problem-solving, creative collaboration, or exploratory work
- **Choose [Flows](/ko/guides/flows/first-flow) when:** you need deterministic outcomes, auditability, or precise control over execution
- **Combine both when:** your application needs both structured processes and autonomous intelligence
| Use Case | Architecture |
| :--- | :--- |
| **Simple automation** | A single Flow with Python tasks |
| **Complex research** | A Flow that manages state -> a Crew that performs the research |
| **Application backend** | A Flow that handles API requests -> a Crew that generates content -> the Flow saves to the DB |
## Why Choose CrewAI?
@@ -123,13 +103,6 @@ CrewAI offers both high-level simplicity and precise low-level control
## Start Building Today!
<CardGroup cols={2}>
<Card
title="첫 번째 Crew 만들기"
icon="users-gear"
href="/ko/guides/crews/first-crew"
>
A step-by-step tutorial for creating a collaborative AI team that works together to solve complex problems.
</Card>
<Card
title="첫 번째 Flow 만들기"
icon="diagram-project"
@@ -137,6 +110,13 @@ CrewAI offers both high-level simplicity and precise low-level control
>
Learn how to create structured, event-driven workflows with precise control over execution.
</Card>
<Card
title="첫 번째 Crew 만들기"
icon="users-gear"
href="/ko/guides/crews/first-crew"
>
A step-by-step tutorial for creating a collaborative AI team that works together to solve complex problems.
</Card>
</CardGroup>
<CardGroup cols={3}>
@@ -161,4 +141,4 @@ CrewAI offers both high-level simplicity and precise low-level control
>
Connect with other developers, get help, and share your CrewAI experience.
</Card>
</CardGroup>
</CardGroup>

View File

@@ -0,0 +1,581 @@
---
title: Human Feedback in Flows
description: "Learn how to integrate human feedback directly into CrewAI Flows with the @human_feedback decorator"
icon: user-check
mode: "wide"
---
## Overview
The `@human_feedback` decorator enables human-in-the-loop (HITL) workflows directly inside a CrewAI Flow. You can pause flow execution, present output to a human for review, collect their feedback, and optionally route to different listeners based on the feedback's outcome.
This is especially useful for:
- **Quality assurance**: review AI-generated content before it is used downstream
- **Decision gates**: let a human make key decisions in automated workflows
- **Approval workflows**: implement approve/reject/revise patterns
- **Interactive refinement**: gather feedback to iteratively improve output
```mermaid
flowchart LR
    A[Flow method] --> B[Output generated]
    B --> C[Human reviews]
    C --> D{Feedback}
    D -->|emit specified| E[LLM maps to outcome]
    D -->|no emit| F[HumanFeedbackResult]
    E --> G["@listen('approved')"]
    E --> H["@listen('rejected')"]
    F --> I[Next listener]
```
## Quick Start
The simplest way to add human feedback to a flow:
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback

class SimpleReviewFlow(Flow):
    @start()
    @human_feedback(message="Please review this content:")
    def generate_content(self):
        return "AI-generated content that needs review."

    @listen(generate_content)
    def process_feedback(self, result):
        print(f"Content: {result.output}")
        print(f"Human said: {result.feedback}")

flow = SimpleReviewFlow()
flow.kickoff()
```
When you run this flow, it:
1. Executes `generate_content` and takes the returned string
2. Displays the output to the user along with the prompt message
3. Waits for the user to type feedback (or press Enter to skip)
4. Passes a `HumanFeedbackResult` object on to `process_feedback`
## The @human_feedback Decorator
### Parameters
| Parameter | Type | Required | Description |
|----------|------|------|------|
| `message` | `str` | Yes | The message shown to the human along with the method output |
| `emit` | `Sequence[str]` | No | List of possible outcomes. Feedback is mapped to one of these, triggering the matching `@listen` decorator |
| `llm` | `str \| BaseLLM` | When `emit` is set | The LLM used to interpret the feedback and map it to an outcome |
| `default_outcome` | `str` | No | The outcome used when no feedback is provided. Must be one of `emit` |
| `metadata` | `dict` | No | Extra data for enterprise integrations |
| `provider` | `HumanFeedbackProvider` | No | Custom provider for async/non-blocking feedback. See [Async Human Feedback](#async-human-feedback-non-blocking) |
### Basic Usage (No Routing)
If you don't specify `emit`, the decorator simply collects feedback and passes a `HumanFeedbackResult` to the next listener:
```python Code
@start()
@human_feedback(message="What do you think of this analysis?")
def analyze_data(self):
    return "Analysis result: revenue up 15%, costs down 8%"

@listen(analyze_data)
def handle_feedback(self, result):
    # result is a HumanFeedbackResult
    print(f"Analysis: {result.output}")
    print(f"Feedback: {result.feedback}")
```
### Routing with emit
When you specify `emit`, the decorator becomes a router. The human's free-form feedback is interpreted by an LLM and mapped to one of the given outcomes:
```python Code
@start()
@human_feedback(
    message="Do you approve this content for publication?",
    emit=["approved", "rejected", "needs_revision"],
    llm="gpt-4o-mini",
    default_outcome="needs_revision",
)
def review_content(self):
    return "Draft blog post content..."

@listen("approved")
def publish(self, result):
    print(f"Publishing! User said: {result.feedback}")

@listen("rejected")
def discard(self, result):
    print(f"Discarded. Reason: {result.feedback}")

@listen("needs_revision")
def revise(self, result):
    print(f"Revising based on: {result.feedback}")
```
<Tip>
The LLM uses structured output (function calling) where available to guarantee the response is one of the given outcomes. This makes the routing reliable and predictable.
</Tip>
## HumanFeedbackResult
The `HumanFeedbackResult` dataclass contains all the information about a human feedback interaction:
```python Code
from crewai.flow.human_feedback import HumanFeedbackResult

@dataclass
class HumanFeedbackResult:
    output: Any            # The original method output shown to the human
    feedback: str          # The human's raw feedback text
    outcome: str | None    # The mapped outcome (if emit was specified)
    timestamp: datetime    # When the feedback was received
    method_name: str       # Name of the decorated method
    metadata: dict         # Any metadata passed to the decorator
```
### Accessing It in Listeners
When a listener is triggered by a `@human_feedback` method with `emit`, it receives the `HumanFeedbackResult`:
```python Code
@listen("approved")
def on_approval(self, result: HumanFeedbackResult):
    print(f"Original output: {result.output}")
    print(f"User feedback: {result.feedback}")
    print(f"Outcome: {result.outcome}")  # "approved"
    print(f"Received at: {result.timestamp}")
```
## Accessing Feedback History
The `Flow` class provides two properties for accessing human feedback:
### last_human_feedback
Returns the most recent `HumanFeedbackResult`:
```python Code
@listen(some_method)
def check_feedback(self):
    if self.last_human_feedback:
        print(f"Last feedback: {self.last_human_feedback.feedback}")
```
### human_feedback_history
A list of all `HumanFeedbackResult` objects collected during the flow:
```python Code
@listen(final_step)
def summarize(self):
    print(f"Total feedback collected: {len(self.human_feedback_history)}")
    for i, fb in enumerate(self.human_feedback_history):
        print(f"{i+1}. {fb.method_name}: {fb.outcome or 'no routing'}")
```
<Warning>
Each `HumanFeedbackResult` is appended to `human_feedback_history`, so multiple feedback steps never overwrite each other. Use this list to access all feedback collected during the flow.
</Warning>
## Complete Example: Content Approval Workflow
A full example implementing a content review and approval workflow:
<CodeGroup>
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult
from pydantic import BaseModel

class ContentState(BaseModel):
    topic: str = ""
    draft: str = ""
    final_content: str = ""
    revision_count: int = 0

class ContentApprovalFlow(Flow[ContentState]):
    """A flow that generates content and gets human approval."""

    @start()
    def get_topic(self):
        self.state.topic = input("What should we write about? ")
        return self.state.topic

    @listen(get_topic)
    def generate_draft(self, topic):
        # In real usage, call an LLM here
        self.state.draft = f"# {topic}\n\nThis is a draft about {topic}..."
        return self.state.draft

    @listen(generate_draft)
    @human_feedback(
        message="Please review this draft. Reply 'approved', 'rejected', or give revision feedback:",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def review_draft(self, draft):
        return draft

    @listen("approved")
    def publish_content(self, result: HumanFeedbackResult):
        self.state.final_content = result.output
        print("\n✅ Content approved and published!")
        print(f"Reviewer comment: {result.feedback}")
        return "published"

    @listen("rejected")
    def handle_rejection(self, result: HumanFeedbackResult):
        print("\n❌ Content rejected")
        print(f"Reason: {result.feedback}")
        return "rejected"

    @listen("needs_revision")
    def revise_content(self, result: HumanFeedbackResult):
        self.state.revision_count += 1
        print(f"\n📝 Revision #{self.state.revision_count} requested")
        print(f"Feedback: {result.feedback}")
        # A real flow could loop back to generate_draft;
        # this example simply acknowledges the request
        return "revision_requested"

# Run the flow
flow = ContentApprovalFlow()
result = flow.kickoff()
print(f"\nFlow finished. Revisions requested: {flow.state.revision_count}")
```
```text Output
What should we write about? AI safety
==================================================
OUTPUT FOR REVIEW:
==================================================
# AI safety

This is a draft about AI safety...
==================================================
Please review this draft. Reply 'approved', 'rejected', or give revision feedback:
(Press Enter to skip, or type your feedback)
Your feedback: Looks good, approved!

✅ Content approved and published!
Reviewer comment: Looks good, approved!

Flow finished. Revisions requested: 0
```
</CodeGroup>
## Combining with Other Decorators
The `@human_feedback` decorator works alongside other flow decorators. Place it innermost (closest to the function):
```python Code
# Correct: @human_feedback is innermost (closest to the function)
@start()
@human_feedback(message="Please review this:")
def my_start_method(self):
    return "content"

@listen(other_method)
@human_feedback(message="Review this too:")
def my_listener(self, data):
    return f"processed: {data}"
```
<Tip>
Place `@human_feedback` as the innermost decorator (last / closest to the function) so it wraps the method directly and captures the return value before handing it to the flow system.
</Tip>
## Best Practices
### 1. Write Clear Prompt Messages
The `message` parameter is what the human sees. Make it actionable:
```python Code
# ✅ Good - clear and actionable
@human_feedback(message="Does this summary accurately capture the key points? Reply 'yes' or explain what's missing:")

# ❌ Bad - vague
@human_feedback(message="Please review this:")
```
### 2. Choose Meaningful Outcomes
When using `emit`, pick outcomes that map naturally to human responses:
```python Code
# ✅ Good - natural-language outcomes
emit=["approved", "rejected", "needs_more_detail"]

# ❌ Bad - technical or unclear
emit=["state_1", "state_2", "state_3"]
```
### 3. Always Provide a Default Outcome
Use `default_outcome` to handle the case where the user presses Enter without typing anything:
```python Code
@human_feedback(
    message="Approve? (press Enter to request revision)",
    emit=["approved", "needs_revision"],
    llm="gpt-4o-mini",
    default_outcome="needs_revision",  # a safe default
)
```
### 4. Use Feedback History for Audit Trails
Access `human_feedback_history` to build an audit log:
```python Code
@listen(final_step)
def create_audit_log(self):
    log = []
    for fb in self.human_feedback_history:
        log.append({
            "step": fb.method_name,
            "outcome": fb.outcome,
            "feedback": fb.feedback,
            "timestamp": fb.timestamp.isoformat(),
        })
    return log
```
### 5. Handle Both Routed and Unrouted Feedback
When designing your flow, consider whether you need routing:
| Scenario | Use |
|----------|------|
| Simple review, only the feedback text matters | No `emit` |
| Need to branch to different paths based on the response | Use `emit` |
| Approval gate with approve/reject/revise | Use `emit` |
| Collecting comments purely for logging | No `emit` |
## Async Human Feedback (Non-Blocking)
By default, `@human_feedback` blocks execution while waiting for console input. Production applications often need **async/non-blocking** feedback that integrates with external systems such as Slack, email, webhooks, or APIs.
### The Provider Abstraction
Use the `provider` parameter to specify a custom feedback collection strategy:
```python Code
from crewai.flow import Flow, start, human_feedback, HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext

class WebhookProvider(HumanFeedbackProvider):
    """Provider that pauses the flow while waiting for a webhook callback."""

    def __init__(self, webhook_url: str):
        self.webhook_url = webhook_url

    def request_feedback(self, context: PendingFeedbackContext, flow: Flow) -> str:
        # Notify an external system (e.g., send a Slack message, create a ticket)
        self.send_notification(context)
        # Pause execution - the framework handles persistence automatically
        raise HumanFeedbackPending(
            context=context,
            callback_info={"webhook_url": f"{self.webhook_url}/{context.flow_id}"}
        )

class ReviewFlow(Flow):
    @start()
    @human_feedback(
        message="Please review this content:",
        emit=["approved", "rejected"],
        llm="gpt-4o-mini",
        provider=WebhookProvider("https://myapp.com/api"),
    )
    def generate_content(self):
        return "AI-generated content..."

    @listen("approved")
    def publish(self, result):
        return "Published!"
```
<Tip>
The flow framework **persists state automatically** when `HumanFeedbackPending` is raised. Your provider only needs to notify the external system and raise the exception; no manual persistence calls are required.
</Tip>
### Handling a Paused Flow
With an async provider, `kickoff()` returns a `HumanFeedbackPending` object instead of raising an exception:
```python Code
flow = ReviewFlow()
result = flow.kickoff()

if isinstance(result, HumanFeedbackPending):
    # The flow is paused; state has been persisted automatically
    print(f"Awaiting feedback at: {result.callback_info['webhook_url']}")
    print(f"Flow ID: {result.context.flow_id}")
else:
    # Normal completion
    print(f"Flow finished: {result}")
```
### Resuming a Paused Flow
When the feedback arrives (e.g., via webhook), resume the flow:
```python Code
# Synchronous handler:
def handle_feedback_webhook(flow_id: str, feedback: str):
    flow = ReviewFlow.from_pending(flow_id)
    result = flow.resume(feedback)
    return result

# Async handler (FastAPI, aiohttp, etc.):
async def handle_feedback_webhook(flow_id: str, feedback: str):
    flow = ReviewFlow.from_pending(flow_id)
    result = await flow.resume_async(feedback)
    return result
```
### Key Types
| Type | Description |
|------|------|
| `HumanFeedbackProvider` | Protocol for custom feedback providers |
| `PendingFeedbackContext` | Contains everything needed to resume a paused flow |
| `HumanFeedbackPending` | Returned from `kickoff()` when a flow pauses for feedback |
| `ConsoleProvider` | The default blocking console-input provider |
### PendingFeedbackContext
The context contains everything needed to resume:
```python Code
@dataclass
class PendingFeedbackContext:
    flow_id: str              # Unique identifier for this flow execution
    flow_class: str           # Fully qualified class name
    method_name: str          # The method that triggered the feedback
    method_output: Any        # The output shown to the human
    message: str              # The prompt message
    emit: list[str] | None    # Possible outcomes for routing
    default_outcome: str | None
    metadata: dict            # Custom metadata
    llm: str | None           # LLM for outcome mapping
    requested_at: datetime
```
### Complete Async Flow Example
```python Code
from crewai.flow import (
    Flow, start, listen, human_feedback,
    HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext
)

class SlackNotificationProvider(HumanFeedbackProvider):
    """Provider that sends a Slack notification and pauses for async feedback."""

    def __init__(self, channel: str):
        self.channel = channel

    def request_feedback(self, context: PendingFeedbackContext, flow: Flow) -> str:
        # Send the Slack notification (implement this yourself)
        slack_thread_id = self.post_to_slack(
            channel=self.channel,
            message=f"Review needed:\n\n{context.method_output}\n\n{context.message}",
        )
        # Pause execution - the framework handles persistence automatically
        raise HumanFeedbackPending(
            context=context,
            callback_info={
                "slack_channel": self.channel,
                "thread_id": slack_thread_id,
            }
        )

class ContentPipeline(Flow):
    @start()
    @human_feedback(
        message="Do you approve this content for publication?",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
        provider=SlackNotificationProvider("#content-reviews"),
    )
    def generate_content(self):
        return "AI-generated blog post content..."

    @listen("approved")
    def publish(self, result):
        print(f"Publishing! Reviewer said: {result.feedback}")
        return {"status": "published"}

    @listen("rejected")
    def archive(self, result):
        print(f"Archived. Reason: {result.feedback}")
        return {"status": "archived"}

    @listen("needs_revision")
    def queue_revision(self, result):
        print(f"Queued for revision: {result.feedback}")
        return {"status": "revision_needed"}

# Start the flow (it pauses while waiting for the Slack response)
def start_content_pipeline():
    flow = ContentPipeline()
    result = flow.kickoff()
    if isinstance(result, HumanFeedbackPending):
        return {"status": "pending", "flow_id": result.context.flow_id}
    return result

# Resume when the Slack webhook fires (synchronous handler)
def on_slack_feedback(flow_id: str, slack_message: str):
    flow = ContentPipeline.from_pending(flow_id)
    result = flow.resume(slack_message)
    return result

# If your handler is async (FastAPI, aiohttp, async Slack Bolt, etc.)
async def on_slack_feedback_async(flow_id: str, slack_message: str):
    flow = ContentPipeline.from_pending(flow_id)
    result = await flow.resume_async(slack_message)
    return result
```
<Warning>
If you use an async web framework (FastAPI, aiohttp, Slack Bolt in async mode), use `await flow.resume_async()` instead of `flow.resume()`. Calling `resume()` inside a running event loop raises a `RuntimeError`.
</Warning>
### Async Feedback Best Practices
1. **Check the return type**: `kickoff()` returns `HumanFeedbackPending` when paused; no try/except needed
2. **Use the right resume method**: `resume()` from sync code, `await resume_async()` from async code
3. **Store the callback info**: use `callback_info` to keep webhook URLs, ticket IDs, and similar references
4. **Make handlers idempotent**: resume handlers should be idempotent for safety, as sketched below
5. **Automatic persistence**: state is saved automatically when `HumanFeedbackPending` is raised, using `SQLiteFlowPersistence` by default
6. **Custom persistence**: pass a custom persistence instance to `from_pending()` if needed
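A minimal sketch of an idempotency guard around resume, reusing `ReviewFlow` from above (the `processed_ids` store is an illustrative assumption; use a database in production):
```python Code
processed_ids: set[str] = set()  # illustrative in-memory store

def handle_feedback(flow_id: str, feedback: str):
    # Idempotent: a duplicate webhook delivery for the same flow is a no-op
    if flow_id in processed_ids:
        return {"status": "already_processed"}
    flow = ReviewFlow.from_pending(flow_id)
    result = flow.resume(feedback)
    processed_ids.add(flow_id)
    return result
```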
## Related Documentation
- [Flows Overview](/ko/concepts/flows) - Learn about CrewAI Flows
- [Mastering Flow State](/ko/guides/flows/mastering-flow-state) - Managing state in flows
- [Flow Persistence](/ko/concepts/flows#persistence) - Persisting flow state
- [Routing with @router](/ko/concepts/flows#router) - More on conditional routing
- [Human Input on Execution](/ko/learn/human-input-on-execution) - Task-level human input

View File

@@ -7,17 +7,28 @@ mode: "wide"
## Introduction
CrewAI provides the ability to kick off a crew asynchronously, letting you start crew execution without blocking.
CrewAI provides the ability to kick off a crew asynchronously, letting you start crew execution without blocking.
This feature is especially useful when you want to run multiple crews concurrently or perform other work while a crew executes.
## Asynchronous Crew Execution
CrewAI provides two approaches to asynchronous execution:
To kick off a crew asynchronously, use the `kickoff_async()` method. It starts the crew execution in a separate thread, letting the main thread continue with other work.
| Method | Type | Description |
|--------|------|-------------|
| `akickoff()` | Native async | True async/await across the entire execution chain |
| `kickoff_async()` | Thread-based | Wraps synchronous execution in `asyncio.to_thread` |
<Note>
For high-concurrency workloads, `akickoff()` is recommended. It uses native async for task execution, memory operations, and knowledge retrieval.
</Note>
## Native Async Execution with `akickoff()`
The `akickoff()` method provides true native async execution, using async/await across the entire execution chain, including task execution, memory operations, and knowledge queries.
### Method Signature
```python Code
def kickoff_async(self, inputs: dict) -> CrewOutput:
async def akickoff(self, inputs: dict) -> CrewOutput:
```
### Parameters
@@ -28,23 +39,13 @@ def kickoff_async(self, inputs: dict) -> CrewOutput:
- `CrewOutput`: an object representing the result of the crew execution.
## Potential Use Cases
- **Parallel content generation**: kick off multiple independent crews asynchronously, each generating content on a different topic. For example, one crew researches and drafts an article on AI trends while another creates social media posts about a product launch. Each crew operates independently, letting you scale content production efficiently.
- **Concurrent market research tasks**: launch multiple crews to perform market research in parallel. One crew analyzes industry trends, another examines competitor strategies, and a third evaluates consumer sentiment. Each crew completes its work independently, enabling faster, more comprehensive insights.
- **Independent travel planning modules**: run separate crews to plan different aspects of a trip independently. One crew handles flight options, another accommodation, and a third activity planning. Because each crew works asynchronously, the different parts of the trip are planned simultaneously and independently, faster.
## Example: Single Async Crew Execution
Here's an example of how to kick off a crew asynchronously using asyncio and await the result:
### Example: Native Async Crew Execution
```python Code
import asyncio
from crewai import Crew, Agent, Task
# Create an agent with code execution enabled
# Create the agent
coding_agent = Agent(
role="Python Data Analyst",
goal="Analyze data and provide insights using Python",
@@ -52,37 +53,165 @@ coding_agent = Agent(
allow_code_execution=True
)
# Create a task that requires code execution
# Create the task
data_analysis_task = Task(
description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
agent=coding_agent,
expected_output="The average age of the participants."
)
# Create a crew and add the task
# Create the crew
analysis_crew = Crew(
agents=[coding_agent],
tasks=[data_analysis_task]
)
# Async function to kickoff the crew asynchronously
async def async_crew_execution():
result = await analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
# Native async execution
async def main():
result = await analysis_crew.akickoff(inputs={"ages": [25, 30, 35, 40, 45]})
print("Crew Result:", result)
# Run the async function
asyncio.run(async_crew_execution())
asyncio.run(main())
```
## Example: Multiple Async Crew Executions
### Example: Multiple Native Async Crews
This example shows how to kick off multiple crews asynchronously and wait for all of them to finish using `asyncio.gather()`:
Run multiple crews concurrently with native async using `asyncio.gather()`:
```python Code
import asyncio
from crewai import Crew, Agent, Task
coding_agent = Agent(
role="Python Data Analyst",
goal="Analyze data and provide insights using Python",
backstory="You are an experienced data analyst with strong Python skills.",
allow_code_execution=True
)
task_1 = Task(
description="Analyze the first dataset and calculate the average age. Ages: {ages}",
agent=coding_agent,
expected_output="The average age of the participants."
)
task_2 = Task(
description="Analyze the second dataset and calculate the average age. Ages: {ages}",
agent=coding_agent,
expected_output="The average age of the participants."
)
crew_1 = Crew(agents=[coding_agent], tasks=[task_1])
crew_2 = Crew(agents=[coding_agent], tasks=[task_2])
async def main():
results = await asyncio.gather(
crew_1.akickoff(inputs={"ages": [25, 30, 35, 40, 45]}),
crew_2.akickoff(inputs={"ages": [20, 22, 24, 28, 30]})
)
for i, result in enumerate(results, 1):
print(f"Crew {i} Result:", result)
asyncio.run(main())
```
### Example: Native Async over Multiple Inputs
Run a crew concurrently over multiple inputs with native async using `akickoff_for_each()`:
```python Code
import asyncio
from crewai import Crew, Agent, Task
coding_agent = Agent(
role="Python Data Analyst",
goal="Analyze data and provide insights using Python",
backstory="You are an experienced data analyst with strong Python skills.",
allow_code_execution=True
)
data_analysis_task = Task(
description="Analyze the dataset and calculate the average age. Ages: {ages}",
agent=coding_agent,
expected_output="The average age of the participants."
)
analysis_crew = Crew(
agents=[coding_agent],
tasks=[data_analysis_task]
)
async def main():
datasets = [
{"ages": [25, 30, 35, 40, 45]},
{"ages": [20, 22, 24, 28, 30]},
{"ages": [30, 35, 40, 45, 50]}
]
results = await analysis_crew.akickoff_for_each(datasets)
for i, result in enumerate(results, 1):
print(f"Dataset {i} Result:", result)
asyncio.run(main())
```
## Thread-Based Async with `kickoff_async()`
The `kickoff_async()` method provides async execution by wrapping the synchronous `kickoff()` in a thread. This is useful for simpler async integration or backward compatibility.
### Method Signature
```python Code
async def kickoff_async(self, inputs: dict) -> CrewOutput:
```
### Parameters
- `inputs` (dict): a dictionary containing the input data required for the tasks.
### Returns
- `CrewOutput`: an object representing the result of the crew execution.
### Example: Thread-Based Async Execution
```python Code
import asyncio
from crewai import Crew, Agent, Task
coding_agent = Agent(
role="Python Data Analyst",
goal="Analyze data and provide insights using Python",
backstory="You are an experienced data analyst with strong Python skills.",
allow_code_execution=True
)
data_analysis_task = Task(
description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
agent=coding_agent,
expected_output="The average age of the participants."
)
analysis_crew = Crew(
agents=[coding_agent],
tasks=[data_analysis_task]
)
async def async_crew_execution():
result = await analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
print("Crew Result:", result)
asyncio.run(async_crew_execution())
```
### Example: Multiple Thread-Based Async Crews
```python Code
import asyncio
from crewai import Crew, Agent, Task
# Create an agent with code execution enabled
coding_agent = Agent(
role="Python Data Analyst",
goal="Analyze data and provide insights using Python",
@@ -90,7 +219,6 @@ coding_agent = Agent(
allow_code_execution=True
)
# Create tasks that require code execution
task_1 = Task(
description="Analyze the first dataset and calculate the average age of participants. Ages: {ages}",
agent=coding_agent,
@@ -103,22 +231,76 @@ task_2 = Task(
expected_output="The average age of the participants."
)
# Create two crews and add tasks
crew_1 = Crew(agents=[coding_agent], tasks=[task_1])
crew_2 = Crew(agents=[coding_agent], tasks=[task_2])
# Async function to kickoff multiple crews asynchronously and wait for all to finish
async def async_multiple_crews():
# Create coroutines for concurrent execution
result_1 = crew_1.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
result_2 = crew_2.kickoff_async(inputs={"ages": [20, 22, 24, 28, 30]})
# Wait for both crews to finish
results = await asyncio.gather(result_1, result_2)
for i, result in enumerate(results, 1):
print(f"Crew {i} Result:", result)
# Run the async function
asyncio.run(async_multiple_crews())
```
```
## Async Streaming
Both async methods support streaming when the crew has `stream=True` set:
```python Code
import asyncio
from crewai import Crew, Agent, Task
agent = Agent(
role="Researcher",
goal="Research and summarize topics",
backstory="You are an expert researcher."
)
task = Task(
description="Research the topic: {topic}",
agent=agent,
expected_output="A comprehensive summary of the topic."
)
crew = Crew(
agents=[agent],
tasks=[task],
    stream=True  # Enable streaming
)
async def main():
streaming_output = await crew.akickoff(inputs={"topic": "AI trends in 2024"})
    # Iterate asynchronously over the streaming chunks
async for chunk in streaming_output:
print(f"Chunk: {chunk.content}")
    # Access the final result after streaming completes
result = streaming_output.result
print(f"Final result: {result.raw}")
asyncio.run(main())
```
## Potential Use Cases
- **Parallel content generation**: kick off multiple independent crews asynchronously, each generating content on a different topic. For example, one crew researches and drafts an article on AI trends while another creates social media posts about a product launch.
- **Concurrent market research tasks**: launch multiple crews to perform market research in parallel. One crew analyzes industry trends, another examines competitor strategies, and a third evaluates consumer sentiment.
- **Independent travel planning modules**: run separate crews to plan different aspects of a trip independently. One crew handles flight options, another accommodation, and a third activity planning.
## Choosing Between `akickoff()` and `kickoff_async()`
| Feature | `akickoff()` | `kickoff_async()` |
|---------|--------------|-------------------|
| Execution model | Native async/await | Thread-based wrapper |
| Task execution | Async via `aexecute_sync()` | Sync in a thread pool |
| Memory operations | Async | Sync in a thread pool |
| Knowledge retrieval | Async | Sync in a thread pool |
| Best for | High-concurrency, I/O-bound workloads | Simple async integration |
| Streaming support | Yes | Yes |

View File

@@ -0,0 +1,356 @@
---
title: Streaming Crew Execution
description: Stream real-time output from CrewAI crew executions
icon: wave-pulse
mode: "wide"
---
## Introduction
CrewAI can stream real-time output during crew execution, letting you display results as they are generated instead of waiting for the whole process to finish. This is especially useful for building interactive applications, providing user feedback, or monitoring long-running processes.
## How Streaming Works
When streaming is enabled, CrewAI captures LLM responses and tool calls in real time and packages them into structured chunks that include context about which task and agent are executing. You can iterate over these chunks live and access the final result once execution completes.
## Enabling Streaming
To enable streaming, set the `stream` parameter to `True` when creating your crew:
```python Code
from crewai import Agent, Crew, Task
# Create the agent and task
researcher = Agent(
role="Research Analyst",
goal="Gather comprehensive information on topics",
backstory="You are an experienced researcher with excellent analytical skills.",
)
task = Task(
description="Research the latest developments in AI",
expected_output="A detailed report on recent AI advancements",
agent=researcher,
)
# Enable streaming
crew = Crew(
agents=[researcher],
tasks=[task],
    stream=True  # Enable streaming output
)
```
## Synchronous Streaming
Calling `kickoff()` on a streaming-enabled crew returns a `CrewStreamingOutput` object you can iterate over as chunks arrive:
```python Code
# Start the streaming execution
streaming = crew.kickoff(inputs={"topic": "artificial intelligence"})

# Iterate over chunks as they arrive
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# Access the final result after streaming completes
result = streaming.result
print(f"\n\nFinal output: {result.raw}")
```
### Stream Chunk Information
Each chunk carries rich context about the execution:
```python Code
streaming = crew.kickoff(inputs={"topic": "AI"})

for chunk in streaming:
    print(f"Task: {chunk.task_name} (index {chunk.task_index})")
    print(f"Agent: {chunk.agent_role}")
    print(f"Content: {chunk.content}")
    print(f"Type: {chunk.chunk_type}")  # TEXT or TOOL_CALL

    if chunk.tool_call:
        print(f"Tool: {chunk.tool_call.tool_name}")
        print(f"Arguments: {chunk.tool_call.arguments}")
```
### Accessing Streaming Results
The `CrewStreamingOutput` object exposes several useful properties:
```python Code
streaming = crew.kickoff(inputs={"topic": "AI"})

# Iterate and collect the chunks
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# After iteration completes
print(f"\nCompleted: {streaming.is_completed}")
print(f"Full text: {streaming.get_full_text()}")
print(f"Total chunks: {len(streaming.chunks)}")
print(f"Final result: {streaming.result.raw}")
```
## Asynchronous Streaming
For async applications, you can use `akickoff()` (native async) or `kickoff_async()` (thread-based) with async iteration:
### Native Async with `akickoff()`
The `akickoff()` method provides true native async execution across the entire chain:
```python Code
import asyncio

async def stream_crew():
    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True
    )

    # Start native async streaming
    streaming = await crew.akickoff(inputs={"topic": "AI"})

    # Async iteration over the chunks
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)

    # Access the final result
    result = streaming.result
    print(f"\n\nFinal output: {result.raw}")

asyncio.run(stream_crew())
```
### Thread-Based Async with `kickoff_async()`
For simpler async integration or backward compatibility:
```python Code
import asyncio

async def stream_crew():
    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True
    )

    # Start thread-based async streaming
    streaming = await crew.kickoff_async(inputs={"topic": "AI"})

    # Async iteration over the chunks
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)

    # Access the final result
    result = streaming.result
    print(f"\n\nFinal output: {result.raw}")

asyncio.run(stream_crew())
```
<Note>
For high-concurrency workloads, `akickoff()` is recommended; it uses native async for task execution, memory operations, and knowledge retrieval. See the [Kickoff Crew Asynchronously](/ko/learn/kickoff-async) guide for details.
</Note>
## Streaming with kickoff_for_each
When running a crew over multiple inputs with `kickoff_for_each()`, streaming behaves differently depending on whether you use the sync or async variant:
### Synchronous kickoff_for_each
The synchronous `kickoff_for_each()` returns a list of `CrewStreamingOutput` objects, one per input:
```python Code
crew = Crew(
    agents=[researcher],
    tasks=[task],
    stream=True
)

inputs_list = [
    {"topic": "AI in healthcare"},
    {"topic": "AI in finance"}
]

# Returns a list of streaming outputs
streaming_outputs = crew.kickoff_for_each(inputs=inputs_list)

# Iterate over each streaming output
for i, streaming in enumerate(streaming_outputs):
    print(f"\n=== Input {i + 1} ===")
    for chunk in streaming:
        print(chunk.content, end="", flush=True)

    result = streaming.result
    print(f"\n\nResult {i + 1}: {result.raw}")
```
### Asynchronous kickoff_for_each_async
The async `kickoff_for_each_async()` returns a single `CrewStreamingOutput` that yields chunks from all crews as they arrive concurrently:
```python Code
import asyncio

async def stream_multiple_crews():
    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True
    )

    inputs_list = [
        {"topic": "AI in healthcare"},
        {"topic": "AI in finance"}
    ]

    # Returns a single streaming output covering all crews
    streaming = await crew.kickoff_for_each_async(inputs=inputs_list)

    # Chunks from all crews arrive as they are generated
    async for chunk in streaming:
        print(f"[{chunk.task_name}] {chunk.content}", end="", flush=True)

    # Access all results
    results = streaming.results  # list of CrewOutput objects
    for i, result in enumerate(results):
        print(f"\n\nResult {i + 1}: {result.raw}")

asyncio.run(stream_multiple_crews())
```
## 스트림 청크 타입
청크는 `chunk_type` 필드로 표시되는 다양한 타입을 가질 수 있습니다:
### TEXT 청크
LLM 응답의 표준 텍스트 콘텐츠:
```python Code
for chunk in streaming:
    if chunk.chunk_type == StreamChunkType.TEXT:
        print(chunk.content, end="", flush=True)
```
### TOOL_CALL 청크
수행 중인 도구 호출에 대한 정보:
```python Code
for chunk in streaming:
    if chunk.chunk_type == StreamChunkType.TOOL_CALL:
        print(f"\n도구 호출: {chunk.tool_call.tool_name}")
        print(f"인자: {chunk.tool_call.arguments}")
```
## 실용적인 예시: 스트리밍을 사용한 UI 구축
다음은 스트리밍을 사용한 대화형 애플리케이션을 구축하는 방법을 보여주는 완전한 예시입니다:
```python Code
import asyncio
from crewai import Agent, Crew, Task
from crewai.types.streaming import StreamChunkType

async def interactive_research():
    # 스트리밍이 활성화된 crew 생성
    researcher = Agent(
        role="Research Analyst",
        goal="Provide detailed analysis on any topic",
        backstory="You are an expert researcher with broad knowledge.",
    )
    task = Task(
        description="Research and analyze: {topic}",
        expected_output="A comprehensive analysis with key insights",
        agent=researcher,
    )
    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True,
        verbose=False
    )
    # 사용자 입력 받기
    topic = input("연구할 주제를 입력하세요: ")
    print(f"\n{'='*60}")
    print(f"연구 중: {topic}")
    print(f"{'='*60}\n")
    # 스트리밍 실행 시작
    streaming = await crew.kickoff_async(inputs={"topic": topic})
    current_task = ""
    async for chunk in streaming:
        # 태스크 전환 표시
        if chunk.task_name != current_task:
            current_task = chunk.task_name
            print(f"\n[{chunk.agent_role}] 작업 중: {chunk.task_name}")
            print("-" * 60)
        # 텍스트 청크 표시
        if chunk.chunk_type == StreamChunkType.TEXT:
            print(chunk.content, end="", flush=True)
        # 도구 호출 표시
        elif chunk.chunk_type == StreamChunkType.TOOL_CALL and chunk.tool_call:
            print(f"\n🔧 도구 사용: {chunk.tool_call.tool_name}")
    # 최종 결과 표시
    result = streaming.result
    print(f"\n\n{'='*60}")
    print("분석 완료!")
    print(f"{'='*60}")
    print(f"\n토큰 사용량: {result.token_usage}")

asyncio.run(interactive_research())
```
## 사용 사례
스트리밍은 다음과 같은 경우에 특히 유용합니다:
- **대화형 애플리케이션**: 에이전트가 작업하는 동안 사용자에게 실시간 피드백 제공
- **장시간 실행 태스크**: 연구, 분석 또는 콘텐츠 생성의 진행 상황 표시
- **디버깅 및 모니터링**: 에이전트 동작과 의사 결정을 실시간으로 관찰
- **사용자 경험**: 점진적인 결과를 표시하여 체감 지연 시간 감소
- **라이브 대시보드**: crew 실행 상태를 표시하는 모니터링 인터페이스 구축
## 중요 사항
- 스트리밍은 crew의 모든 에이전트에 대해 자동으로 LLM 스트리밍을 활성화합니다
- `.result` 속성에 접근하기 전에 모든 청크를 반복해야 합니다(아래 스케치 참조)
- 스트리밍을 사용하는 `kickoff_for_each_async()`의 경우, 모든 출력을 가져오려면 `.results`(복수형)를 사용하세요
- 스트리밍은 최소한의 오버헤드를 추가하며 실제로 체감 성능을 향상시킬 수 있습니다
- 각 청크는 풍부한 UI를 위한 전체 컨텍스트(태스크, 에이전트, 청크 타입)를 포함합니다
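예를 들어, 아래는 `.result` 접근 시점을 보여주는 최소한의 스케치입니다(반복을 마치기 전 `.result`의 동작은 버전에 따라 다를 수 있다는 가정):
```python Code
streaming = crew.kickoff(inputs={"topic": "AI"})

# 먼저 모든 청크를 소비합니다
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# 반복이 끝난 뒤에만 .result에 접근합니다
result = streaming.result
```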
## 오류 처리
스트리밍 실행 중 오류 처리:
```python Code
streaming = crew.kickoff(inputs={"topic": "AI"})
try:
    for chunk in streaming:
        print(chunk.content, end="", flush=True)
    result = streaming.result
    print(f"\n성공: {result.raw}")
except Exception as e:
    print(f"\n스트리밍 중 오류 발생: {e}")
    if streaming.is_completed:
        print("스트리밍은 완료되었지만 오류가 발생했습니다")
```
스트리밍을 활용하면 CrewAI로 더 반응성이 좋고 대화형인 애플리케이션을 구축하여 사용자에게 에이전트 실행과 결과에 대한 실시간 가시성을 제공할 수 있습니다.

View File

@@ -0,0 +1,213 @@
---
title: CrewAI Tracing
description: CrewAI AOP 플랫폼을 사용한 CrewAI Crews 및 Flows의 내장 추적
icon: magnifying-glass-chart
mode: "wide"
---
# CrewAI 내장 추적 (Built-in Tracing)
CrewAI는 Crews와 Flows를 실시간으로 모니터링하고 디버깅할 수 있는 내장 추적 기능을 제공합니다. 이 가이드는 CrewAI의 통합 관측 가능성 플랫폼을 사용하여 **Crews**와 **Flows** 모두에 대한 추적을 활성화하는 방법을 보여줍니다.
> **CrewAI Tracing이란?** CrewAI의 내장 추적은 agent 결정, 작업 실행 타임라인, 도구 사용, LLM 호출을 포함한 AI agent에 대한 포괄적인 관측 가능성을 제공하며, 모두 [CrewAI AOP 플랫폼](https://app.crewai.com)을 통해 액세스할 수 있습니다.
![CrewAI Tracing Interface](/images/crewai-tracing.png)
## 사전 요구 사항
CrewAI 추적을 사용하기 전에 다음이 필요합니다:
1. **CrewAI AOP 계정**: [app.crewai.com](https://app.crewai.com)에서 무료 계정에 가입하세요
2. **CLI 인증**: CrewAI CLI를 사용하여 로컬 환경을 인증하세요
```bash
crewai login
```
## 설정 지침
### 1단계: CrewAI AOP 계정 생성
[app.crewai.com](https://app.crewai.com)을 방문하여 무료 계정을 만드세요. 이를 통해 추적, 메트릭을 보고 crews를 관리할 수 있는 CrewAI AOP 플랫폼에 액세스할 수 있습니다.
### 2단계: CrewAI CLI 설치 및 인증
아직 설치하지 않았다면 CLI 도구와 함께 CrewAI를 설치하세요:
```bash
uv add crewai[tools]
```
그런 다음 CrewAI AOP 계정으로 CLI를 인증하세요:
```bash
crewai login
```
이 명령은 다음을 수행합니다:
1. 브라우저에서 인증 페이지를 엽니다
2. 장치 코드를 입력하라는 메시지를 표시합니다
3. CrewAI AOP 계정으로 로컬 환경을 인증합니다
4. 로컬 개발을 위한 추적 기능을 활성화합니다
### 3단계: Crew에서 추적 활성화
`tracing` 매개변수를 `True`로 설정하여 Crew에 대한 추적을 활성화할 수 있습니다:
```python
from crewai import Agent, Crew, Process, Task
from crewai_tools import SerperDevTool

# Define your agents
researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments in AI and data science",
    backstory="""You work at a leading tech think tank.
    Your expertise lies in identifying emerging trends.
    You have a knack for dissecting complex data and presenting actionable insights.""",
    verbose=True,
    tools=[SerperDevTool()],
)
writer = Agent(
    role="Tech Content Strategist",
    goal="Craft compelling content on tech advancements",
    backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
    You transform complex concepts into compelling narratives.""",
    verbose=True,
)

# Create tasks for your agents
research_task = Task(
    description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
    Identify key trends, breakthrough technologies, and potential industry impacts.""",
    expected_output="Full analysis report in bullet points",
    agent=researcher,
)
writing_task = Task(
    description="""Using the insights provided, develop an engaging blog
    post that highlights the most significant AI advancements.
    Your post should be informative yet accessible, catering to a tech-savvy audience.""",
    expected_output="Full blog post of at least 4 paragraphs",
    agent=writer,
)

# Enable tracing in your crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,
    tracing=True,  # Enable built-in tracing
    verbose=True
)

# Execute your crew
result = crew.kickoff()
```
### 4단계: Flow에서 추적 활성화
마찬가지로 CrewAI Flows에 대한 추적을 활성화할 수 있습니다:
```python
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel

class ExampleState(BaseModel):
    counter: int = 0
    message: str = ""

class ExampleFlow(Flow[ExampleState]):
    def __init__(self, **kwargs):
        # Enable tracing for the flow (kwargs keep ExampleFlow(tracing=True) below valid)
        kwargs.setdefault("tracing", True)
        super().__init__(**kwargs)

    @start()
    def first_method(self):
        print("Starting the flow")
        self.state.counter = 1
        self.state.message = "Flow started"
        return "continue"

    @listen("continue")
    def second_method(self):
        print("Continuing the flow")
        self.state.counter += 1
        self.state.message = "Flow continued"
        return "finish"

    @listen("finish")
    def final_method(self):
        print("Finishing the flow")
        self.state.counter += 1
        self.state.message = "Flow completed"

# Create and run the flow with tracing enabled
flow = ExampleFlow(tracing=True)
result = flow.kickoff()
```
### 5단계: CrewAI AOP 대시보드에서 추적 보기
crew 또는 flow를 실행한 후 CrewAI AOP 대시보드에서 CrewAI 애플리케이션이 생성한 추적을 볼 수 있습니다. agent 상호 작용, 도구 사용 및 LLM 호출의 세부 단계를 볼 수 있습니다.
추적을 보려면 [여기](https://app.crewai.com/crewai_plus/trace_batches)를 클릭하거나 대시보드의 Traces 탭으로 이동하세요.
![CrewAI Tracing Interface](/images/view-traces.png)
### 대안: 환경 변수 구성
환경 변수를 설정하여 전역적으로 추적을 활성화할 수도 있습니다:
```bash
export CREWAI_TRACING_ENABLED=true
```
또는 `.env` 파일에 추가하세요:
```env
CREWAI_TRACING_ENABLED=true
```
이 환경 변수가 설정되면 `tracing=True`를 명시적으로 설정하지 않아도 모든 Crews와 Flows에 자동으로 추적이 활성화됩니다.
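셸에서 export하는 대신 코드에서 같은 변수를 설정할 수도 있습니다. 아래는 crew나 flow를 생성하기 전에 변수를 설정해야 한다는 가정하의 최소한의 스케치입니다:
```python
import os

# 셸의 export와 동일한 효과를 가정한 설정 (crew/flow 생성 전에 실행)
os.environ["CREWAI_TRACING_ENABLED"] = "true"
```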
## 추적 보기
### CrewAI AOP 대시보드 액세스
1. [app.crewai.com](https://app.crewai.com)을 방문하여 계정에 로그인하세요
2. 프로젝트 대시보드로 이동하세요
3. **Traces** 탭을 클릭하여 실행 세부 정보를 확인하세요
### 추적에서 볼 수 있는 내용
CrewAI 추적은 다음에 대한 포괄적인 가시성을 제공합니다:
- **Agent 결정**: agent가 작업을 통해 어떻게 추론하고 결정을 내리는지 확인하세요
- **작업 실행 타임라인**: 작업 시퀀스 및 종속성의 시각적 표현
- **도구 사용**: 어떤 도구가 호출되고 그 결과를 모니터링하세요
- **LLM 호출**: 프롬프트 및 응답을 포함한 모든 언어 모델 상호 작용을 추적하세요
- **성능 메트릭**: 실행 시간, 토큰 사용량 및 비용
- **오류 추적**: 세부 오류 정보 및 스택 추적
### 추적 기능
- **실행 타임라인**: 실행의 다양한 단계를 클릭하여 확인하세요
- **세부 로그**: 디버깅을 위한 포괄적인 로그에 액세스하세요
- **성능 분석**: 실행 패턴을 분석하고 성능을 최적화하세요
- **내보내기 기능**: 추가 분석을 위해 추적을 다운로드하세요
### 인증 문제
인증 문제가 발생하는 경우:
1. 로그인되어 있는지 확인하세요: `crewai login`
2. 인터넷 연결을 확인하세요
3. [app.crewai.com](https://app.crewai.com)에서 계정을 확인하세요
### 추적이 나타나지 않음
대시보드에 추적이 표시되지 않는 경우:
1. Crew/Flow에서 `tracing=True`가 설정되어 있는지 확인하세요
2. 환경 변수를 사용하는 경우 `CREWAI_TRACING_ENABLED=true`인지 확인하세요
3. `crewai login`으로 인증되었는지 확인하세요
4. crew/flow가 실제로 실행되고 있는지 확인하세요

View File

@@ -16,16 +16,17 @@ Bem-vindo à referência da API do CrewAI AOP. Esta API permite que você intera
Navegue até a página de detalhes do seu crew no painel do CrewAI AOP e copie seu Bearer Token na aba Status.
</Step>
<Step title="Descubra os Inputs Necessários">
Use o endpoint `GET /inputs` para ver quais parâmetros seu crew espera.
</Step>
<Step title="Descubra os Inputs Necessários">
Use o endpoint `GET /inputs` para ver quais parâmetros seu crew espera.
</Step>
<Step title="Inicie uma Execução de Crew">
Chame `POST /kickoff` com seus inputs para iniciar a execução do crew e receber um `kickoff_id`.
</Step>
<Step title="Inicie uma Execução de Crew">
Chame `POST /kickoff` com seus inputs para iniciar a execução do crew e
receber um `kickoff_id`.
</Step>
<Step title="Monitore o Progresso">
Use `GET /status/{kickoff_id}` para checar o status da execução e recuperar os resultados.
Use `GET /{kickoff_id}/status` para checar o status da execução e recuperar os resultados.
</Step>
</Steps>
@@ -40,13 +41,14 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
### Tipos de Token
| Tipo de Token | Escopo | Caso de Uso |
|:--------------------|:------------------------|:---------------------------------------------------------|
| **Bearer Token** | Acesso em nível de organização | Operações completas de crew, ideal para integração server-to-server |
| **User Bearer Token** | Acesso com escopo de usuário | Permissões limitadas, adequado para operações específicas de usuário |
| Tipo de Token | Escopo | Caso de Uso |
| :-------------------- | :----------------------------- | :------------------------------------------------------------------- |
| **Bearer Token** | Acesso em nível de organização | Operações completas de crew, ideal para integração server-to-server |
| **User Bearer Token** | Acesso com escopo de usuário | Permissões limitadas, adequado para operações específicas de usuário |
<Tip>
Você pode encontrar ambos os tipos de token na aba Status da página de detalhes do seu crew no painel do CrewAI AOP.
Você pode encontrar ambos os tipos de token na aba Status da página de
detalhes do seu crew no painel do CrewAI AOP.
</Tip>
## URL Base
@@ -63,29 +65,33 @@ Substitua `your-crew-name` pela URL real do seu crew no painel.
1. **Descoberta**: Chame `GET /inputs` para entender o que seu crew precisa
2. **Execução**: Envie os inputs via `POST /kickoff` para iniciar o processamento
3. **Monitoramento**: Faça polling em `GET /status/{kickoff_id}` até a conclusão
3. **Monitoramento**: Faça polling em `GET /{kickoff_id}/status` até a conclusão
4. **Resultados**: Extraia o output final da resposta concluída (veja o esboço abaixo)
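A título de ilustração, um esboço mínimo desse ciclo em Python com `requests` (a URL, o token, o corpo do `POST /kickoff` e os nomes de campo/estado da resposta são suposições ilustrativas — consulte as páginas de cada endpoint para o formato exato):
```python
import time

import requests

BASE_URL = "https://your-crew-name.crewai.com"  # URL real do seu crew no painel
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}

# 1. Descoberta: quais inputs o crew espera
inputs_schema = requests.get(f"{BASE_URL}/inputs", headers=HEADERS).json()

# 2. Execução: inicia o crew (formato do corpo é uma suposição ilustrativa)
kickoff = requests.post(f"{BASE_URL}/kickoff", headers=HEADERS, json={"inputs": {"topic": "AI"}})
kickoff_id = kickoff.json()["kickoff_id"]

# 3. Monitoramento: polling até a conclusão (nomes de estado hipotéticos)
while True:
    status = requests.get(f"{BASE_URL}/{kickoff_id}/status", headers=HEADERS).json()
    if status.get("state") in ("SUCCESS", "FAILED"):
        break
    time.sleep(5)

# 4. Resultados: extraia o output final da resposta concluída
print(status)
```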
## Tratamento de Erros
A API utiliza códigos de status HTTP padrão:
| Código | Significado |
|--------|:--------------------------------------|
| `200` | Sucesso |
| `400` | Requisição Inválida - Formato de input inválido |
| `401` | Não Autorizado - Bearer token inválido |
| `404` | Não Encontrado - Recurso não existe |
| Código | Significado |
| ------ | :----------------------------------------------- |
| `200` | Sucesso |
| `400` | Requisição Inválida - Formato de input inválido |
| `401` | Não Autorizado - Bearer token inválido |
| `404` | Não Encontrado - Recurso não existe |
| `422` | Erro de Validação - Inputs obrigatórios ausentes |
| `500` | Erro no Servidor - Contate o suporte |
| `500` | Erro no Servidor - Contate o suporte |
## Testes Interativos
<Info>
**Por que não há botão "Enviar"?** Como cada usuário do CrewAI AOP possui sua própria URL de crew, utilizamos o **modo referência** em vez de um playground interativo para evitar confusão. Isso mostra exatamente como as requisições devem ser feitas, sem botões de envio não funcionais.
**Por que não há botão "Enviar"?** Como cada usuário do CrewAI AOP possui sua
própria URL de crew, utilizamos o **modo referência** em vez de um playground
interativo para evitar confusão. Isso mostra exatamente como as requisições
devem ser feitas, sem botões de envio não funcionais.
</Info>
Cada página de endpoint mostra para você:
- ✅ **Formato exato da requisição** com todos os parâmetros
- ✅ **Exemplos de resposta** para casos de sucesso e erro
- ✅ **Exemplos de código** em várias linguagens (cURL, Python, JavaScript, etc.)
@@ -103,6 +109,7 @@ Cada página de endpoint mostra para você:
</CardGroup>
**Exemplo de fluxo:**
1. **Copie este exemplo cURL** de qualquer página de endpoint
2. **Substitua `your-actual-crew-name.crewai.com`** pela URL real do seu crew
3. **Substitua o Bearer token** pelo seu token real do painel
@@ -111,10 +118,18 @@ Cada página de endpoint mostra para você:
## Precisa de Ajuda?
<CardGroup cols={2}>
<Card title="Suporte Enterprise" icon="headset" href="mailto:support@crewai.com">
<Card
title="Suporte Enterprise"
icon="headset"
href="mailto:support@crewai.com"
>
Obtenha ajuda com integração da API e resolução de problemas
</Card>
<Card title="Painel Enterprise" icon="chart-line" href="https://app.crewai.com">
<Card
title="Painel Enterprise"
icon="chart-line"
href="https://app.crewai.com"
>
Gerencie seus crews e visualize logs de execução
</Card>
</CardGroup>

View File

@@ -1,8 +1,6 @@
---
title: "GET /status/{kickoff_id}"
title: "GET /{kickoff_id}/status"
description: "Obter o status da execução"
openapi: "/enterprise-api.pt-BR.yaml GET /status/{kickoff_id}"
openapi: "/enterprise-api.pt-BR.yaml GET /{kickoff_id}/status"
mode: "wide"
---

View File

@@ -32,6 +32,8 @@ Uma crew no crewAI representa um grupo colaborativo de agentes trabalhando em co
| **Prompt File** _(opcional)_ | `prompt_file` | Caminho para o arquivo JSON de prompt a ser utilizado pela crew. |
| **Planning** *(opcional)* | `planning` | Adiciona habilidade de planejamento à Crew. Quando ativado, antes de cada iteração, todos os dados da Crew são enviados a um AgentPlanner que planejará as tasks e este plano será adicionado à descrição de cada task. |
| **Planning LLM** *(opcional)* | `planning_llm` | O modelo de linguagem usado pelo AgentPlanner em um processo de planejamento. |
| **Knowledge Sources** _(opcional)_ | `knowledge_sources` | Fontes de conhecimento disponíveis no nível da crew, acessíveis a todos os agentes. |
| **Stream** _(opcional)_ | `stream` | Habilita saída em streaming para receber atualizações em tempo real durante a execução da crew. Retorna um objeto `CrewStreamingOutput` que pode ser iterado para chunks. O padrão é `False`. |
<Tip>
**Crew Max RPM**: O atributo `max_rpm` define o número máximo de requisições por minuto que a crew pode executar para evitar limites de taxa e irá sobrescrever as configurações de `max_rpm` dos agentes individuais se você o definir.
@@ -303,12 +305,27 @@ print(result)
### Diferentes Formas de Iniciar uma Crew
Assim que sua crew estiver definida, inicie o fluxo de trabalho com o método kickoff apropriado. O CrewAI oferece vários métodos para melhor controle do processo: `kickoff()`, `kickoff_for_each()`, `kickoff_async()` e `kickoff_for_each_async()`.
Assim que sua crew estiver definida, inicie o fluxo de trabalho com o método kickoff apropriado. O CrewAI oferece vários métodos para melhor controle do processo.
#### Métodos Síncronos
- `kickoff()`: Inicia o processo de execução seguindo o fluxo definido.
- `kickoff_for_each()`: Executa tasks sequencialmente para cada evento de entrada ou item da coleção fornecida.
- `kickoff_async()`: Inicia o workflow de forma assíncrona.
- `kickoff_for_each_async()`: Executa as tasks concorrentemente para cada entrada, aproveitando o processamento assíncrono.
#### Métodos Assíncronos
O CrewAI oferece duas abordagens para execução assíncrona:
| Método | Tipo | Descrição |
|--------|------|-------------|
| `akickoff()` | Async nativo | Async/await verdadeiro em toda a cadeia de execução |
| `akickoff_for_each()` | Async nativo | Execução async nativa para cada entrada em uma lista |
| `kickoff_async()` | Baseado em thread | Envolve execução síncrona em `asyncio.to_thread` |
| `kickoff_for_each_async()` | Baseado em thread | Async baseado em thread para cada entrada em uma lista |
<Note>
Para cargas de trabalho de alta concorrência, `akickoff()` e `akickoff_for_each()` são recomendados pois usam async nativo para execução de tasks, operações de memória e recuperação de conhecimento.
</Note>
```python Code
# Iniciar execução das tasks da crew
@@ -321,19 +338,53 @@ results = my_crew.kickoff_for_each(inputs=inputs_array)
for result in results:
    print(result)
# Exemplo com kickoff_async
# Exemplo usando async nativo com akickoff
inputs = {'topic': 'AI in healthcare'}
async_result = await my_crew.akickoff(inputs=inputs)
print(async_result)
# Exemplo usando async nativo com akickoff_for_each
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = await my_crew.akickoff_for_each(inputs=inputs_array)
for async_result in async_results:
    print(async_result)
# Exemplo usando kickoff_async baseado em thread
inputs = {'topic': 'AI in healthcare'}
async_result = await my_crew.kickoff_async(inputs=inputs)
print(async_result)
# Exemplo com kickoff_for_each_async
# Exemplo usando kickoff_for_each_async baseado em thread
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = await my_crew.kickoff_for_each_async(inputs=inputs_array)
for async_result in async_results:
    print(async_result)
```
Esses métodos fornecem flexibilidade para gerenciar e executar tasks dentro de sua crew, permitindo fluxos de trabalho síncronos e assíncronos de acordo com sua necessidade.
Esses métodos fornecem flexibilidade para gerenciar e executar tasks dentro de sua crew, permitindo fluxos de trabalho síncronos e assíncronos de acordo com sua necessidade. Para exemplos detalhados de async, consulte o guia [Inicie uma Crew de Forma Assíncrona](/pt-BR/learn/kickoff-async).
### Streaming na Execução da Crew
Para visibilidade em tempo real da execução da crew, você pode habilitar streaming para receber saída conforme é gerada:
```python Code
# Habilitar streaming
crew = Crew(
    agents=[researcher],
    tasks=[task],
    stream=True
)
# Iterar sobre saída em streaming
streaming = crew.kickoff(inputs={"topic": "AI"})
for chunk in streaming:
    print(chunk.content, end="", flush=True)
# Acessar resultado final
result = streaming.result
```
Saiba mais sobre streaming no guia [Streaming na Execução da Crew](/pt-BR/learn/streaming-crew-execution).
### Repetindo Execução a partir de uma Task Específica

View File

@@ -307,6 +307,55 @@ Os métodos `third_method` e `fourth_method` escutam a saída do `second_method`
Ao executar esse Flow, a saída será diferente dependendo do valor booleano aleatório gerado pelo `start_method`.
### Human in the Loop (feedback humano)
O decorador `@human_feedback` permite fluxos de trabalho human-in-the-loop, pausando a execução do flow para coletar feedback de um humano. Isso é útil para portões de aprovação, revisão de qualidade e pontos de decisão que requerem julgamento humano.
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult

class ReviewFlow(Flow):
    @start()
    @human_feedback(
        message="Você aprova este conteúdo?",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def generate_content(self):
        return "Conteúdo para revisão..."

    @listen("approved")
    def on_approval(self, result: HumanFeedbackResult):
        print(f"Aprovado! Feedback: {result.feedback}")

    @listen("rejected")
    def on_rejection(self, result: HumanFeedbackResult):
        print(f"Rejeitado. Motivo: {result.feedback}")
```
Quando `emit` é especificado, o feedback livre do humano é interpretado por um LLM e mapeado para um dos outcomes especificados, que então dispara o decorador `@listen` correspondente.
Você também pode usar `@human_feedback` sem roteamento para simplesmente coletar feedback:
```python Code
@start()
@human_feedback(message="Algum comentário sobre esta saída?")
def my_method(self):
    return "Saída para revisão"

@listen(my_method)
def next_step(self, result: HumanFeedbackResult):
    # Acesse o feedback via result.feedback
    # Acesse a saída original via result.output
    pass
```
Acesse todo o feedback coletado durante um flow via `self.last_human_feedback` (mais recente) ou `self.human_feedback_history` (todo o feedback em uma lista).
Para um guia completo sobre feedback humano em flows, incluindo feedback assíncrono/não-bloqueante com providers customizados (Slack, webhooks, etc.), veja [Feedback Humano em Flows](/pt-BR/learn/human-feedback-in-flows).
## Adicionando Agentes aos Flows
Os agentes podem ser integrados facilmente aos seus flows, oferecendo uma alternativa leve às crews completas quando você precisar executar tarefas simples e focadas. Veja um exemplo de como utilizar um agente em um flow para realizar uma pesquisa de mercado:

View File

@@ -515,8 +515,7 @@ crew = Crew(
"provider": "huggingface",
"config": {
"api_key": "your-hf-token", # Opcional para modelos públicos
"model": "sentence-transformers/all-MiniLM-L6-v2",
"api_url": "https://api-inference.huggingface.co" # ou seu endpoint customizado
"model": "sentence-transformers/all-MiniLM-L6-v2"
}
}
)

View File

@@ -0,0 +1,154 @@
---
title: Arquitetura de Produção
description: Melhores práticas para construir aplicações de IA prontas para produção com CrewAI
icon: server
mode: "wide"
---
# A Mentalidade Flow-First
Ao construir aplicações de IA de produção com CrewAI, **recomendamos começar com um Flow**.
Embora seja possível executar Crews ou Agentes individuais, envolvê-los em um Flow fornece a estrutura necessária para uma aplicação robusta e escalável.
## Por que Flows?
1. **Gerenciamento de Estado**: Flows fornecem uma maneira integrada de gerenciar o estado em diferentes etapas da sua aplicação. Isso é crucial para passar dados entre Crews, manter o contexto e lidar com entradas do usuário.
2. **Controle**: Flows permitem definir caminhos de execução precisos, incluindo loops, condicionais e lógica de ramificação. Isso é essencial para lidar com casos extremos e garantir que sua aplicação se comporte de maneira previsível.
3. **Observabilidade**: Flows fornecem uma estrutura clara que facilita o rastreamento da execução, a depuração de problemas e o monitoramento do desempenho. Recomendamos o uso do [CrewAI Tracing](/pt-BR/observability/tracing) para insights detalhados. Basta executar `crewai login` para habilitar recursos de observabilidade gratuitos.
## A Arquitetura
Uma aplicação CrewAI de produção típica se parece com isso:
```mermaid
graph TD
Start((Início)) --> Flow[Orquestrador de Flow]
Flow --> State{Gerenciamento de Estado}
State --> Step1[Etapa 1: Coleta de Dados]
Step1 --> Crew1[Crew de Pesquisa]
Crew1 --> State
State --> Step2{Verificação de Condição}
Step2 -- "Válido" --> Step3[Etapa 3: Execução]
Step3 --> Crew2[Crew de Ação]
Step2 -- "Inválido" --> End((Fim))
Crew2 --> End
```
### 1. A Classe Flow
Sua classe `Flow` é o ponto de entrada. Ela define o esquema de estado e os métodos que executam sua lógica.
```python
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel

class AppState(BaseModel):
    user_input: str = ""
    research_results: str = ""
    final_report: str = ""

class ProductionFlow(Flow[AppState]):
    @start()
    def gather_input(self):
        # ... lógica para obter entrada ...
        pass

    @listen(gather_input)
    def run_research_crew(self):
        # ... acionar um Crew ...
        pass
```
### 2. Gerenciamento de Estado
Use modelos Pydantic para definir seu estado. Isso garante a segurança de tipos e deixa claro quais dados estão disponíveis em cada etapa.
- **Mantenha o mínimo**: Armazene apenas o que você precisa persistir entre as etapas.
- **Use dados estruturados**: Evite dicionários não estruturados quando possível, como no esboço abaixo.
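Um esboço mínimo de estado estruturado (nomes de campos ilustrativos):
```python
from typing import List

from pydantic import BaseModel

class ResearchItem(BaseModel):
    title: str
    url: str

class AppState(BaseModel):
    # Armazene apenas o que precisa persistir entre as etapas
    user_input: str = ""
    research_items: List[ResearchItem] = []  # estruturado, em vez de dicts soltos
```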
### 3. Crews como Unidades de Trabalho
Delegue tarefas complexas para Crews. Um Crew deve ser focado em um objetivo específico (por exemplo, "Pesquisar um tópico", "Escrever uma postagem no blog").
- **Não superengendre Crews**: Mantenha-os focados.
- **Passe o estado explicitamente**: Passe os dados necessários do estado do Flow para as entradas do Crew.
```python
@listen(gather_input)
def run_research_crew(self):
    crew = ResearchCrew()
    result = crew.kickoff(inputs={"topic": self.state.user_input})
    self.state.research_results = result.raw
```
## Primitivas de Controle
Aproveite as primitivas de controle do CrewAI para adicionar robustez e controle aos seus Crews.
### 1. Task Guardrails
Use [Task Guardrails](/pt-BR/concepts/tasks#task-guardrails) para validar as saídas das tarefas antes que sejam aceitas. Isso garante que seus agentes produzam resultados de alta qualidade.
```python
from typing import Any, Tuple

from crewai import Task
from crewai.tasks.task_output import TaskOutput

def validate_content(result: TaskOutput) -> Tuple[bool, Any]:
    if len(result.raw) < 100:
        return (False, "Content is too short. Please expand.")
    return (True, result.raw)

task = Task(
    ...,
    guardrail=validate_content
)
```
### 2. Saídas Estruturadas
Sempre use saídas estruturadas (`output_pydantic` ou `output_json`) ao passar dados entre tarefas ou para sua aplicação. Isso evita erros de análise e garante a segurança de tipos.
```python
from typing import List

from pydantic import BaseModel

class ResearchResult(BaseModel):
    summary: str
    sources: List[str]

task = Task(
    ...,
    output_pydantic=ResearchResult
)
```
### 3. LLM Hooks
Use [LLM Hooks](/pt-BR/learn/llm-hooks) para inspecionar ou modificar mensagens antes que elas sejam enviadas para o LLM, ou para higienizar respostas.
```python
@before_llm_call
def log_request(context):
    print(f"Agent {context.agent.role} is calling the LLM...")
```
## Padrões de Implantação
Ao implantar seu Flow, considere o seguinte:
### CrewAI Enterprise
A maneira mais fácil de implantar seu Flow é usando o CrewAI Enterprise. Ele lida com a infraestrutura, autenticação e monitoramento para você.
Confira o [Guia de Implantação](/pt-BR/enterprise/guides/deploy-crew) para começar.
```bash
crewai deploy create
```
### Execução Assíncrona
Para tarefas de longa duração, use `kickoff_async` para evitar bloquear sua API.
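Um esboço mínimo (o handler `handle_request` é ilustrativo):
```python
# Esboço: dispare o Flow sem bloquear o event loop da sua API
async def handle_request(payload: dict):
    flow = ProductionFlow()
    return await flow.kickoff_async(inputs=payload)
```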
### Persistência
Use o decorador `@persist` para salvar o estado do seu Flow em um banco de dados. Isso permite retomar a execução se o processo falhar ou se você precisar esperar pela entrada humana.
```python
from crewai.flow.persistence import persist

@persist
class ProductionFlow(Flow[AppState]):
    # ...
```
## Resumo
- **Comece com um Flow.**
- **Defina um Estado claro.**
- **Use Crews para tarefas complexas.**
- **Implante com uma API e persistência.**

View File

@@ -62,13 +62,13 @@ Teste sua integração de trigger do Gmail localmente usando a CLI da CrewAI:
crewai triggers list
# Simule um trigger do Gmail com payload realista
crewai triggers run gmail/new_email
crewai triggers run gmail/new_email_received
```
O comando `crewai triggers run` executará sua crew com um payload completo do Gmail, permitindo que você teste sua lógica de parsing antes do deployment.
<Warning>
Use `crewai triggers run gmail/new_email` (não `crewai run`) para simular execução de trigger durante o desenvolvimento. Após o deployment, sua crew receberá automaticamente o payload do trigger.
Use `crewai triggers run gmail/new_email_received` (não `crewai run`) para simular execução de trigger durante o desenvolvimento. Após o deployment, sua crew receberá automaticamente o payload do trigger.
</Warning>
## Monitoring Executions
@@ -83,6 +83,6 @@ Track history and performance of triggered runs:
- Ensure Gmail is connected in Tools & Integrations
- Verify the Gmail Trigger is enabled on the Triggers tab
- Teste localmente com `crewai triggers run gmail/new_email` para ver a estrutura exata do payload
- Teste localmente com `crewai triggers run gmail/new_email_received` para ver a estrutura exata do payload
- Check the execution logs and confirm the payload is passed as `crewai_trigger_payload`
- Lembre-se: use `crewai triggers run` (não `crewai run`) para simular execução de trigger

View File

@@ -7,110 +7,89 @@ mode: "wide"
# O que é CrewAI?
**CrewAI é um framework Python enxuto e ultrarrápido, construído totalmente do zero—completamente independente do LangChain ou de outros frameworks de agentes.**
**CrewAI é o principal framework open-source para orquestrar agentes de IA autônomos e construir fluxos de trabalho complexos.**
O CrewAI capacita desenvolvedores tanto com simplicidade de alto nível quanto com controle detalhado de baixo nível, ideal para criar agentes de IA autônomos sob medida para qualquer cenário:
Ele capacita desenvolvedores a construir sistemas multi-agente prontos para produção, combinando a inteligência colaborativa dos **Crews** com o controle preciso dos **Flows**.
- **[Crews do CrewAI](/pt-BR/guides/crews/first-crew)**: Otimizados para autonomia e inteligência colaborativa, permitindo criar equipes de IA onde cada agente possui funções, ferramentas e objetivos específicos.
- **[Flows do CrewAI](/pt-BR/guides/flows/first-flow)**: Proporcionam controle granular, orientado por eventos, com chamadas LLM individuais para uma orquestração precisa das tarefas, além de suportar Crews nativamente.
- **[Flows do CrewAI](/pt-BR/guides/flows/first-flow)**: A espinha dorsal da sua aplicação de IA. Flows permitem criar fluxos de trabalho estruturados e orientados a eventos que gerenciam estado e controlam a execução. Eles fornecem a estrutura para seus agentes de IA trabalharem.
- **[Crews do CrewAI](/pt-BR/guides/crews/first-crew)**: As unidades de trabalho dentro do seu Flow. Crews são equipes de agentes autônomos que colaboram para resolver tarefas específicas delegadas a eles pelo Flow.
Com mais de 100.000 desenvolvedores certificados em nossos cursos comunitários, o CrewAI está se tornando rapidamente o padrão para automação de IA pronta para empresas.
Com mais de 100.000 desenvolvedores certificados em nossos cursos comunitários, o CrewAI é o padrão para automação de IA pronta para empresas.
## A Arquitetura do CrewAI
## Como funcionam os Crews
A arquitetura do CrewAI foi projetada para equilibrar autonomia com controle.
### 1. Flows: A Espinha Dorsal
<Note>
Assim como uma empresa possui departamentos (Vendas, Engenharia, Marketing) trabalhando juntos sob uma liderança para atingir objetivos de negócio, o CrewAI ajuda você a criar uma “organização” de agentes de IA com funções especializadas colaborando para realizar tarefas complexas.
</Note>
<Frame caption="Visão Geral do Framework CrewAI">
<img src="/images/crews.png" alt="Visão Geral do Framework CrewAI" />
</Frame>
| Componente | Descrição | Principais Funcionalidades |
|:-----------|:-----------:|:-------------------------|
| **Crew** | Organização de mais alto nível | • Gerencia equipes de agentes de IA<br/>• Supervisiona fluxos de trabalho<br/>• Garante colaboração<br/>• Entrega resultados |
| **Agentes de IA** | Membros especializados da equipe | • Possuem funções específicas (pesquisador, escritor)<br/>• Utilizam ferramentas designadas<br/>• Podem delegar tarefas<br/>• Tomam decisões autônomas |
| **Process** | Sistema de gestão do fluxo de trabalho | • Define padrões de colaboração<br/>• Controla designação de tarefas<br/>• Gerencia interações<br/>• Garante execução eficiente |
| **Tasks** | Atribuições individuais | • Objetivos claros<br/>• Utilizam ferramentas específicas<br/>• Alimentam processos maiores<br/>• Geram resultados acionáveis |
### Como tudo trabalha junto
1. O **Crew** organiza toda a operação
2. **Agentes de IA** realizam tarefas especializadas
3. O **Process** garante colaboração fluida
4. **Tasks** são concluídas para alcançar o objetivo
## Principais Funcionalidades
<CardGroup cols={2}>
<Card title="Agentes Baseados em Funções" icon="users">
Crie agentes especializados com funções, conhecimentos e objetivos definidos de pesquisadores e analistas a escritores
</Card>
<Card title="Ferramentas Flexíveis" icon="screwdriver-wrench">
Equipe os agentes com ferramentas e APIs personalizadas para interagir com serviços e fontes de dados externas
</Card>
<Card title="Colaboração Inteligente" icon="people-arrows">
Agentes trabalham juntos, compartilhando insights e coordenando tarefas para conquistar objetivos complexos
</Card>
<Card title="Gerenciamento de Tarefas" icon="list-check">
Defina fluxos de trabalho sequenciais ou paralelos, com agentes lidando automaticamente com dependências entre tarefas
</Card>
</CardGroup>
## Como funcionam os Flows
<Note>
Enquanto Crews se destacam na colaboração autônoma, Flows proporcionam automações estruturadas, oferecendo controle granular sobre a execução dos fluxos de trabalho. Flows garantem execução confiável, segura e eficiente, lidando com lógica condicional, loops e gerenciamento dinâmico de estados com precisão. Flows se integram perfeitamente com Crews, permitindo equilibrar alta autonomia com controle rigoroso.
Pense em um Flow como o "gerente" ou a "definição do processo" da sua aplicação. Ele define as etapas, a lógica e como os dados se movem através do seu sistema.
</Note>
<Frame caption="Visão Geral do Framework CrewAI">
<img src="/images/flows.png" alt="Visão Geral do Framework CrewAI" />
</Frame>
| Componente | Descrição | Principais Funcionalidades |
|:-----------|:-----------:|:-------------------------|
| **Flow** | Orquestração de fluxo de trabalho estruturada | • Gerencia caminhos de execução<br/>• Lida com transições de estado<br/>• Controla a sequência de tarefas<br/>• Garante execução confiável |
| **Events** | Gatilhos para ações nos fluxos | • Iniciam processos específicos<br/>• Permitem respostas dinâmicas<br/>• Suportam ramificações condicionais<br/>• Adaptam-se em tempo real |
| **States** | Contextos de execução dos fluxos | • Mantêm dados de execução<br/>• Permitem persistência<br/>• Suportam retomada<br/>• Garantem integridade na execução |
| **Crew Support** | Aprimora automação de fluxos | • Injeta autonomia quando necessário<br/>• Complementa fluxos estruturados<br/>• Equilibra automação e inteligência<br/>• Permite tomada de decisão adaptativa |
Flows fornecem:
- **Gerenciamento de Estado**: Persistem dados através de etapas e execuções.
- **Execução Orientada a Eventos**: Acionam ações com base em eventos ou entradas externas.
- **Controle de Fluxo**: Usam lógica condicional, loops e ramificações.
### Capacidades-Chave
### 2. Crews: A Inteligência
<Note>
Crews são as "equipes" que fazem o trabalho pesado. Dentro de um Flow, você pode acionar um Crew para lidar com um problema complexo que requer criatividade e colaboração.
</Note>
<Frame caption="Visão Geral do Framework CrewAI">
<img src="/images/crews.png" alt="Visão Geral do Framework CrewAI" />
</Frame>
Crews fornecem:
- **Agentes com Funções**: Agentes especializados com objetivos e ferramentas específicas.
- **Colaboração Autônoma**: Agentes trabalham juntos para resolver tarefas.
- **Delegação de Tarefas**: Tarefas são atribuídas e executadas com base nas capacidades dos agentes.
## Como Tudo Funciona Junto
1. **O Flow** aciona um evento ou inicia um processo.
2. **O Flow** gerencia o estado e decide o que fazer a seguir.
3. **O Flow** delega uma tarefa complexa para um **Crew**.
4. Os agentes do **Crew** colaboram para completar a tarefa.
5. **O Crew** retorna o resultado para o **Flow**.
6. **O Flow** continua a execução com base no resultado, como no esboço abaixo.
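Esse ciclo, em um esboço ilustrativo (`ResearchCrew` é um Crew hipotético definido em outro lugar):
```python
from crewai.flow.flow import Flow, listen, start

class PesquisaFlow(Flow):
    @start()
    def preparar(self):
        # O Flow gerencia o estado e decide o que fazer a seguir
        return {"topic": "IA na saúde"}

    @listen(preparar)
    def delegar_ao_crew(self, inputs):
        # Delega a tarefa complexa a um Crew e continua com o resultado
        result = ResearchCrew().kickoff(inputs=inputs)
        return result.raw
```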
## Principais Funcionalidades
<CardGroup cols={2}>
<Card title="Orquestração Orientada por Eventos" icon="bolt">
Defina caminhos de execução precisos respondendo dinamicamente a eventos
<Card title="Flows de Nível de Produção" icon="arrow-progress">
Construa fluxos de trabalho confiáveis e com estado que podem lidar com processos de longa duração e lógica complexa.
</Card>
<Card title="Controle Detalhado" icon="sliders">
Gerencie estados de fluxo de trabalho e execução condicional de forma segura e eficiente
<Card title="Crews Autônomos" icon="users">
Implante equipes de agentes que podem planejar, executar e colaborar para alcançar objetivos de alto nível.
</Card>
<Card title="Integração Nativa com Crew" icon="puzzle-piece">
Combine de forma simples com Crews para maior autonomia e inteligência
<Card title="Ferramentas Flexíveis" icon="screwdriver-wrench">
Conecte seus agentes a qualquer API, banco de dados ou ferramenta local.
</Card>
<Card title="Execução Determinística" icon="route">
Garanta resultados previsíveis com controle explícito de fluxo e tratamento de erros
<Card title="Segurança Empresarial" icon="lock">
Projetado com segurança e conformidade em mente para implantações empresariais.
</Card>
</CardGroup>
## Quando usar Crews versus Flows
## Quando usar Crews vs. Flows
<Note>
Entender quando utilizar [Crews](/pt-BR/guides/crews/first-crew) ou [Flows](/pt-BR/guides/flows/first-flow) é fundamental para maximizar o potencial do CrewAI em suas aplicações.
</Note>
**A resposta curta: Use ambos.**
| Caso de uso | Abordagem recomendada | Por quê? |
|:------------|:---------------------|:---------|
| **Pesquisa aberta** | [Crews](/pt-BR/guides/crews/first-crew) | Quando as tarefas exigem criatividade, exploração e adaptação |
| **Geração de conteúdo** | [Crews](/pt-BR/guides/crews/first-crew) | Para criação colaborativa de artigos, relatórios ou materiais de marketing |
| **Fluxos de decisão** | [Flows](/pt-BR/guides/flows/first-flow) | Quando é necessário caminhos de decisão previsíveis, auditáveis e com controle preciso |
| **Orquestração de APIs** | [Flows](/pt-BR/guides/flows/first-flow) | Para integração confiável com múltiplos serviços externos em sequência específica |
| **Aplicações híbridas** | Abordagem combinada | Use [Flows](/pt-BR/guides/flows/first-flow) para orquestrar o processo geral com [Crews](/pt-BR/guides/crews/first-crew) lidando com subtarefas complexas |
Para qualquer aplicação pronta para produção, **comece com um Flow**.
### Framework de Decisão
- **Use um Flow** para definir a estrutura geral, estado e lógica da sua aplicação.
- **Use um Crew** dentro de uma etapa do Flow quando precisar de uma equipe de agentes para realizar uma tarefa específica e complexa que requer autonomia.
- **Escolha [Crews](/pt-BR/guides/crews/first-crew) quando:** Precisa de resolução autônoma de problemas, colaboração criativa ou tarefas exploratórias
- **Escolha [Flows](/pt-BR/guides/flows/first-flow) quando:** Requer resultados determinísticos, auditabilidade ou controle preciso sobre a execução
- **Combine ambos quando:** Sua aplicação precisa de processos estruturados e também de bolsões de inteligência autônoma
| Caso de Uso | Arquitetura |
| :--- | :--- |
| **Automação Simples** | Flow único com tarefas Python |
| **Pesquisa Complexa** | Flow gerenciando estado -> Crew realizando pesquisa |
| **Backend de Aplicação** | Flow lidando com requisições API -> Crew gerando conteúdo -> Flow salvando no BD |
## Por que escolher o CrewAI?
@@ -124,13 +103,6 @@ Com mais de 100.000 desenvolvedores certificados em nossos cursos comunitários,
## Pronto para começar a construir?
<CardGroup cols={2}>
<Card
title="Crie Seu Primeiro Crew"
icon="users-gear"
href="/pt-BR/guides/crews/first-crew"
>
Tutorial passo a passo para criar uma equipe de IA colaborativa que trabalha junto para resolver problemas complexos.
</Card>
<Card
title="Crie Seu Primeiro Flow"
icon="diagram-project"
@@ -138,6 +110,13 @@ Com mais de 100.000 desenvolvedores certificados em nossos cursos comunitários,
>
Aprenda a criar fluxos de trabalho estruturados e orientados por eventos com controle preciso de execução.
</Card>
<Card
title="Crie Seu Primeiro Crew"
icon="users-gear"
href="/pt-BR/guides/crews/first-crew"
>
Tutorial passo a passo para criar uma equipe de IA colaborativa que trabalha junto para resolver problemas complexos.
</Card>
</CardGroup>
<CardGroup cols={3}>

View File

@@ -0,0 +1,581 @@
---
title: Feedback Humano em Flows
description: Aprenda como integrar feedback humano diretamente nos seus CrewAI Flows usando o decorador @human_feedback
icon: user-check
mode: "wide"
---
## Visão Geral
O decorador `@human_feedback` permite fluxos de trabalho human-in-the-loop (HITL) diretamente nos CrewAI Flows. Ele permite pausar a execução do flow, apresentar a saída para um humano revisar, coletar seu feedback e, opcionalmente, rotear para diferentes listeners com base no resultado do feedback.
Isso é particularmente valioso para:
- **Garantia de qualidade**: Revisar conteúdo gerado por IA antes de ser usado downstream
- **Portões de decisão**: Deixar humanos tomarem decisões críticas em fluxos automatizados
- **Fluxos de aprovação**: Implementar padrões de aprovar/rejeitar/revisar
- **Refinamento interativo**: Coletar feedback para melhorar saídas iterativamente
```mermaid
flowchart LR
A[Método do Flow] --> B[Saída Gerada]
B --> C[Humano Revisa]
C --> D{Feedback}
D -->|emit especificado| E[LLM Mapeia para Outcome]
D -->|sem emit| F[HumanFeedbackResult]
E --> G["@listen('approved')"]
E --> H["@listen('rejected')"]
F --> I[Próximo Listener]
```
## Início Rápido
Aqui está a maneira mais simples de adicionar feedback humano a um flow:
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback

class SimpleReviewFlow(Flow):
    @start()
    @human_feedback(message="Por favor, revise este conteúdo:")
    def generate_content(self):
        return "Este é um conteúdo gerado por IA que precisa de revisão."

    @listen(generate_content)
    def process_feedback(self, result):
        print(f"Conteúdo: {result.output}")
        print(f"Humano disse: {result.feedback}")

flow = SimpleReviewFlow()
flow.kickoff()
```
Quando este flow é executado, ele irá:
1. Executar `generate_content` e retornar a string
2. Exibir a saída para o usuário com a mensagem de solicitação
3. Aguardar o usuário digitar o feedback (ou pressionar Enter para pular)
4. Passar um objeto `HumanFeedbackResult` para `process_feedback`
## O Decorador @human_feedback
### Parâmetros
| Parâmetro | Tipo | Obrigatório | Descrição |
|-----------|------|-------------|-----------|
| `message` | `str` | Sim | A mensagem mostrada ao humano junto com a saída do método |
| `emit` | `Sequence[str]` | Não | Lista de possíveis outcomes. O feedback é mapeado para um destes, que dispara decoradores `@listen` |
| `llm` | `str \| BaseLLM` | Quando `emit` especificado | LLM usado para interpretar o feedback e mapear para um outcome |
| `default_outcome` | `str` | Não | Outcome a usar se nenhum feedback for fornecido. Deve estar em `emit` |
| `metadata` | `dict` | Não | Dados adicionais para integrações enterprise |
| `provider` | `HumanFeedbackProvider` | Não | Provider customizado para feedback assíncrono/não-bloqueante. Veja [Feedback Humano Assíncrono](#feedback-humano-assíncrono-não-bloqueante) |
### Uso Básico (Sem Roteamento)
Quando você não especifica `emit`, o decorador simplesmente coleta o feedback e passa um `HumanFeedbackResult` para o próximo listener:
```python Code
@start()
@human_feedback(message="O que você acha desta análise?")
def analyze_data(self):
    return "Resultados da análise: Receita aumentou 15%, custos diminuíram 8%"

@listen(analyze_data)
def handle_feedback(self, result):
    # result é um HumanFeedbackResult
    print(f"Análise: {result.output}")
    print(f"Feedback: {result.feedback}")
```
### Roteamento com emit
Quando você especifica `emit`, o decorador se torna um roteador. O feedback livre do humano é interpretado por um LLM e mapeado para um dos outcomes especificados:
```python Code
@start()
@human_feedback(
    message="Você aprova este conteúdo para publicação?",
    emit=["approved", "rejected", "needs_revision"],
    llm="gpt-4o-mini",
    default_outcome="needs_revision",
)
def review_content(self):
    return "Rascunho do post do blog aqui..."

@listen("approved")
def publish(self, result):
    print(f"Publicando! Usuário disse: {result.feedback}")

@listen("rejected")
def discard(self, result):
    print(f"Descartando. Motivo: {result.feedback}")

@listen("needs_revision")
def revise(self, result):
    print(f"Revisando baseado em: {result.feedback}")
```
<Tip>
O LLM usa saídas estruturadas (function calling) quando disponível para garantir que a resposta seja um dos seus outcomes especificados. Isso torna o roteamento confiável e previsível.
</Tip>
## HumanFeedbackResult
O dataclass `HumanFeedbackResult` contém todas as informações sobre uma interação de feedback humano:
```python Code
from crewai.flow.human_feedback import HumanFeedbackResult
@dataclass
class HumanFeedbackResult:
    output: Any          # A saída original do método mostrada ao humano
    feedback: str        # O texto bruto do feedback do humano
    outcome: str | None  # O outcome mapeado (se emit foi especificado)
    timestamp: datetime  # Quando o feedback foi recebido
    method_name: str     # Nome do método decorado
    metadata: dict       # Qualquer metadata passado ao decorador
```
### Acessando em Listeners
Quando um listener é disparado por um método `@human_feedback` com `emit`, ele recebe o `HumanFeedbackResult`:
```python Code
@listen("approved")
def on_approval(self, result: HumanFeedbackResult):
    print(f"Saída original: {result.output}")
    print(f"Feedback do usuário: {result.feedback}")
    print(f"Outcome: {result.outcome}")  # "approved"
    print(f"Recebido em: {result.timestamp}")
```
## Acessando o Histórico de Feedback
A classe `Flow` fornece dois atributos para acessar o feedback humano:
### last_human_feedback
Retorna o `HumanFeedbackResult` mais recente:
```python Code
@listen(some_method)
def check_feedback(self):
    if self.last_human_feedback:
        print(f"Último feedback: {self.last_human_feedback.feedback}")
```
### human_feedback_history
Uma lista de todos os objetos `HumanFeedbackResult` coletados durante o flow:
```python Code
@listen(final_step)
def summarize(self):
    print(f"Total de feedbacks coletados: {len(self.human_feedback_history)}")
    for i, fb in enumerate(self.human_feedback_history):
        print(f"{i+1}. {fb.method_name}: {fb.outcome or 'sem roteamento'}")
```
<Warning>
Cada `HumanFeedbackResult` é adicionado a `human_feedback_history`, então múltiplos passos de feedback não sobrescrevem uns aos outros. Use esta lista para acessar todo o feedback coletado durante o flow.
</Warning>
## Exemplo Completo: Fluxo de Aprovação de Conteúdo
Aqui está um exemplo completo implementando um fluxo de revisão e aprovação de conteúdo:
<CodeGroup>
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult
from pydantic import BaseModel

class ContentState(BaseModel):
    topic: str = ""
    draft: str = ""
    final_content: str = ""
    revision_count: int = 0

class ContentApprovalFlow(Flow[ContentState]):
    """Um flow que gera conteúdo e obtém aprovação humana."""

    @start()
    def get_topic(self):
        self.state.topic = input("Sobre qual tópico devo escrever? ")
        return self.state.topic

    @listen(get_topic)
    def generate_draft(self, topic):
        # Em uso real, isso chamaria um LLM
        self.state.draft = f"# {topic}\n\nEste é um rascunho sobre {topic}..."
        return self.state.draft

    @listen(generate_draft)
    @human_feedback(
        message="Por favor, revise este rascunho. Responda 'approved', 'rejected', ou forneça feedback de revisão:",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def review_draft(self, draft):
        return draft

    @listen("approved")
    def publish_content(self, result: HumanFeedbackResult):
        self.state.final_content = result.output
        print("\n✅ Conteúdo aprovado e publicado!")
        print(f"Comentário do revisor: {result.feedback}")
        return "published"

    @listen("rejected")
    def handle_rejection(self, result: HumanFeedbackResult):
        print("\n❌ Conteúdo rejeitado")
        print(f"Motivo: {result.feedback}")
        return "rejected"

    @listen("needs_revision")
    def revise_content(self, result: HumanFeedbackResult):
        self.state.revision_count += 1
        print(f"\n📝 Revisão #{self.state.revision_count} solicitada")
        print(f"Feedback: {result.feedback}")
        # Em um flow real, você pode voltar para generate_draft
        # Para este exemplo, apenas reconhecemos
        return "revision_requested"

# Executar o flow
flow = ContentApprovalFlow()
result = flow.kickoff()
print(f"\nFlow concluído. Revisões solicitadas: {flow.state.revision_count}")
```
```text Output
Sobre qual tópico devo escrever? Segurança em IA
==================================================
OUTPUT FOR REVIEW:
==================================================
# Segurança em IA
Este é um rascunho sobre Segurança em IA...
==================================================
Por favor, revise este rascunho. Responda 'approved', 'rejected', ou forneça feedback de revisão:
(Press Enter to skip, or type your feedback)
Your feedback: Parece bom, aprovado!
✅ Conteúdo aprovado e publicado!
Comentário do revisor: Parece bom, aprovado!
Flow concluído. Revisões solicitadas: 0
```
</CodeGroup>
## Combinando com Outros Decoradores
O decorador `@human_feedback` funciona com outros decoradores de flow. Coloque-o como o decorador mais interno (mais próximo da função):
```python Code
# Correto: @human_feedback é o mais interno (mais próximo da função)
@start()
@human_feedback(message="Revise isto:")
def my_start_method(self):
    return "content"

@listen(other_method)
@human_feedback(message="Revise isto também:")
def my_listener(self, data):
    return f"processed: {data}"
```
<Tip>
Coloque `@human_feedback` como o decorador mais interno (último/mais próximo da função) para que ele envolva o método diretamente e possa capturar o valor de retorno antes de passar para o sistema de flow.
</Tip>
## Melhores Práticas
### 1. Escreva Mensagens de Solicitação Claras
O parâmetro `message` é o que o humano vê. Torne-o acionável:
```python Code
# ✅ Bom - claro e acionável
@human_feedback(message="Este resumo captura com precisão os pontos-chave? Responda 'sim' ou explique o que está faltando:")
# ❌ Ruim - vago
@human_feedback(message="Revise isto:")
```
### 2. Escolha Outcomes Significativos
Ao usar `emit`, escolha outcomes que mapeiem naturalmente para respostas humanas:
```python Code
# ✅ Bom - outcomes em linguagem natural
emit=["approved", "rejected", "needs_more_detail"]
# ❌ Ruim - técnico ou pouco claro
emit=["state_1", "state_2", "state_3"]
```
### 3. Sempre Forneça um Outcome Padrão
Use `default_outcome` para lidar com casos onde usuários pressionam Enter sem digitar:
```python Code
@human_feedback(
message="Aprovar? (pressione Enter para solicitar revisão)",
emit=["approved", "needs_revision"],
llm="gpt-4o-mini",
default_outcome="needs_revision", # Padrão seguro
)
```
### 4. Use o Histórico de Feedback para Trilhas de Auditoria
Acesse `human_feedback_history` para criar logs de auditoria:
```python Code
@listen(final_step)
def create_audit_log(self):
    log = []
    for fb in self.human_feedback_history:
        log.append({
            "step": fb.method_name,
            "outcome": fb.outcome,
            "feedback": fb.feedback,
            "timestamp": fb.timestamp.isoformat(),
        })
    return log
```
### 5. Trate Feedback Roteado e Não Roteado
Ao projetar flows, considere se você precisa de roteamento:
| Cenário | Use |
|---------|-----|
| Revisão simples, só precisa do texto do feedback | Sem `emit` |
| Precisa ramificar para caminhos diferentes baseado na resposta | Use `emit` |
| Portões de aprovação com aprovar/rejeitar/revisar | Use `emit` |
| Coletando comentários apenas para logging | Sem `emit` |
## Feedback Humano Assíncrono (Não-Bloqueante)
Por padrão, `@human_feedback` bloqueia a execução aguardando entrada no console. Para aplicações de produção, você pode precisar de feedback **assíncrono/não-bloqueante** que se integre com sistemas externos como Slack, email, webhooks ou APIs.
### A Abstração de Provider
Use o parâmetro `provider` para especificar uma estratégia customizada de coleta de feedback:
```python Code
from crewai.flow import Flow, start, listen, human_feedback, HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext

class WebhookProvider(HumanFeedbackProvider):
    """Provider que pausa o flow e aguarda callback de webhook."""

    def __init__(self, webhook_url: str):
        self.webhook_url = webhook_url

    def request_feedback(self, context: PendingFeedbackContext, flow: Flow) -> str:
        # Notifica sistema externo (ex: envia mensagem Slack, cria ticket)
        self.send_notification(context)
        # Pausa execução - framework cuida da persistência automaticamente
        raise HumanFeedbackPending(
            context=context,
            callback_info={"webhook_url": f"{self.webhook_url}/{context.flow_id}"}
        )

class ReviewFlow(Flow):
    @start()
    @human_feedback(
        message="Revise este conteúdo:",
        emit=["approved", "rejected"],
        llm="gpt-4o-mini",
        provider=WebhookProvider("https://myapp.com/api"),
    )
    def generate_content(self):
        return "Conteúdo gerado por IA..."

    @listen("approved")
    def publish(self, result):
        return "Publicado!"
```
<Tip>
O framework de flow **persiste automaticamente o estado** quando `HumanFeedbackPending` é lançado. Seu provider só precisa notificar o sistema externo e lançar a exceção—não são necessárias chamadas manuais de persistência.
</Tip>
### Tratando Flows Pausados
Ao usar um provider assíncrono, `kickoff()` retorna um objeto `HumanFeedbackPending` em vez de lançar uma exceção:
```python Code
flow = ReviewFlow()
result = flow.kickoff()
if isinstance(result, HumanFeedbackPending):
    # Flow está pausado, estado é automaticamente persistido
    print(f"Aguardando feedback em: {result.callback_info['webhook_url']}")
    print(f"Flow ID: {result.context.flow_id}")
else:
    # Conclusão normal
    print(f"Flow concluído: {result}")
```
### Retomando um Flow Pausado
Quando o feedback chega (ex: via webhook), retome o flow:
```python Code
# Handler síncrono:
def handle_feedback_webhook(flow_id: str, feedback: str):
    flow = ReviewFlow.from_pending(flow_id)
    result = flow.resume(feedback)
    return result

# Handler assíncrono (FastAPI, aiohttp, etc.):
async def handle_feedback_webhook(flow_id: str, feedback: str):
    flow = ReviewFlow.from_pending(flow_id)
    result = await flow.resume_async(feedback)
    return result
```
### Tipos Principais
| Tipo | Descrição |
|------|-----------|
| `HumanFeedbackProvider` | Protocolo para providers de feedback customizados |
| `PendingFeedbackContext` | Contém todas as informações necessárias para retomar um flow pausado |
| `HumanFeedbackPending` | Retornado por `kickoff()` quando o flow está pausado para feedback |
| `ConsoleProvider` | Provider padrão de entrada bloqueante no console |
### PendingFeedbackContext
O contexto contém tudo necessário para retomar:
```python Code
@dataclass
class PendingFeedbackContext:
    flow_id: str            # Identificador único desta execução de flow
    flow_class: str         # Nome qualificado completo da classe
    method_name: str        # Método que disparou o feedback
    method_output: Any      # Saída mostrada ao humano
    message: str            # A mensagem de solicitação
    emit: list[str] | None  # Outcomes possíveis para roteamento
    default_outcome: str | None
    metadata: dict          # Metadata customizado
    llm: str | None         # LLM para mapeamento de outcome
    requested_at: datetime
```
### Exemplo Completo de Flow Assíncrono
```python Code
from crewai.flow import (
    Flow, start, listen, human_feedback,
    HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext
)

class SlackNotificationProvider(HumanFeedbackProvider):
    """Provider que envia notificações Slack e pausa para feedback assíncrono."""

    def __init__(self, channel: str):
        self.channel = channel

    def request_feedback(self, context: PendingFeedbackContext, flow: Flow) -> str:
        # Envia notificação Slack (implemente você mesmo)
        slack_thread_id = self.post_to_slack(
            channel=self.channel,
            message=f"Revisão necessária:\n\n{context.method_output}\n\n{context.message}",
        )
        # Pausa execução - framework cuida da persistência automaticamente
        raise HumanFeedbackPending(
            context=context,
            callback_info={
                "slack_channel": self.channel,
                "thread_id": slack_thread_id,
            }
        )

class ContentPipeline(Flow):
    @start()
    @human_feedback(
        message="Aprova este conteúdo para publicação?",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
        provider=SlackNotificationProvider("#content-reviews"),
    )
    def generate_content(self):
        return "Conteúdo de blog post gerado por IA..."

    @listen("approved")
    def publish(self, result):
        print(f"Publicando! Revisor disse: {result.feedback}")
        return {"status": "published"}

    @listen("rejected")
    def archive(self, result):
        print(f"Arquivado. Motivo: {result.feedback}")
        return {"status": "archived"}

    @listen("needs_revision")
    def queue_revision(self, result):
        print(f"Na fila para revisão: {result.feedback}")
        return {"status": "revision_needed"}

# Iniciando o flow (vai pausar e aguardar resposta do Slack)
def start_content_pipeline():
    flow = ContentPipeline()
    result = flow.kickoff()
    if isinstance(result, HumanFeedbackPending):
        return {"status": "pending", "flow_id": result.context.flow_id}
    return result

# Retomando quando webhook do Slack dispara (handler síncrono)
def on_slack_feedback(flow_id: str, slack_message: str):
    flow = ContentPipeline.from_pending(flow_id)
    result = flow.resume(slack_message)
    return result

# Se seu handler é assíncrono (FastAPI, aiohttp, Slack Bolt async, etc.)
async def on_slack_feedback_async(flow_id: str, slack_message: str):
    flow = ContentPipeline.from_pending(flow_id)
    result = await flow.resume_async(slack_message)
    return result
```
<Warning>
If you are using an async web framework (FastAPI, aiohttp, Slack Bolt async mode), use `await flow.resume_async()` instead of `flow.resume()`. Calling `resume()` from inside a running event loop will raise a `RuntimeError`.
</Warning>
### Best Practices for Async Feedback
1. **Check the return type**: `kickoff()` returns `HumanFeedbackPending` when paused—no try/except needed
2. **Use the right resume method**: Use `resume()` in synchronous code, `await resume_async()` in async code
3. **Store callback information**: Use `callback_info` to store webhook URLs, ticket IDs, etc.
4. **Implement idempotency**: Your resume handler should be idempotent for safety (see the sketch below)
5. **Automatic persistence**: State is saved automatically when `HumanFeedbackPending` is raised, using `SQLiteFlowPersistence` by default
6. **Custom persistence**: Pass a custom persistence instance to `from_pending()` if needed
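For practice 4, a minimal idempotency sketch is shown below; the in-memory `processed` set and the `handle_feedback` name are illustrative assumptions—use a durable store (for example, a database table keyed by `flow_id`) in production.
```python Code
# Minimal idempotency sketch: `processed` stands in for a durable store.
processed: set[str] = set()

def handle_feedback(flow_id: str, feedback: str):
    if flow_id in processed:
        # Duplicate webhook delivery: the flow was already resumed.
        return {"status": "already_resumed"}
    flow = ContentPipeline.from_pending(flow_id)
    result = flow.resume(feedback)  # use `await flow.resume_async(...)` in async code
    processed.add(flow_id)
    return result
```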
## Related Documentation
- [Flows Overview](/pt-BR/concepts/flows) - Learn about CrewAI Flows
- [Mastering Flow State](/pt-BR/guides/flows/mastering-flow-state) - Managing state in flows
- [Flow Persistence](/pt-BR/concepts/flows#persistence) - Persisting flow state
- [Routing with @router](/pt-BR/concepts/flows#router) - More on conditional routing
- [Human Input on Execution](/pt-BR/learn/human-input-on-execution) - Task-level human input

View File

@@ -7,17 +7,28 @@ mode: "wide"
## Introduction
CrewAI lets you kick off a crew asynchronously, allowing crew execution to start in a non-blocking way.
This feature is especially useful when you want to run multiple crews at once or need to perform other work while the crew is running.
## Asynchronous Crew Execution
CrewAI offers two approaches to asynchronous execution:
| Method | Type | Description |
|--------|------|-------------|
| `akickoff()` | Native async | True async/await through the entire execution chain |
| `kickoff_async()` | Thread-based | Wraps synchronous execution in `asyncio.to_thread` |
<Note>
For high-concurrency workloads, `akickoff()` is recommended because it uses native async for task execution, memory operations, and knowledge retrieval.
</Note>
## Native Async Execution with `akickoff()`
The `akickoff()` method provides true native async execution, using async/await throughout the execution chain, including task execution, memory operations, and knowledge queries.
### Method Signature
```python Code
async def akickoff(self, inputs: dict) -> CrewOutput:
```
### Parameters
@@ -28,97 +39,268 @@ def kickoff_async(self, inputs: dict) -> CrewOutput:
- `CrewOutput`: An object representing the result of the crew execution.
### Example: Native Async Crew Execution
```python Code
import asyncio
from crewai import Crew, Agent, Task

# Create an agent
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

# Create a task
data_analysis_task = Task(
    description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

# Create a crew
analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task]
)

# Native async execution
async def main():
    result = await analysis_crew.akickoff(inputs={"ages": [25, 30, 35, 40, 45]})
    print("Crew Result:", result)

asyncio.run(main())
```
### Example: Multiple Native Async Crews
Run multiple crews concurrently using `asyncio.gather()` with native async:
```python Code
import asyncio
from crewai import Crew, Agent, Task

coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

task_1 = Task(
    description="Analyze the first dataset and calculate the average age. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

task_2 = Task(
    description="Analyze the second dataset and calculate the average age. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

crew_1 = Crew(agents=[coding_agent], tasks=[task_1])
crew_2 = Crew(agents=[coding_agent], tasks=[task_2])

async def main():
    results = await asyncio.gather(
        crew_1.akickoff(inputs={"ages": [25, 30, 35, 40, 45]}),
        crew_2.akickoff(inputs={"ages": [20, 22, 24, 28, 30]})
    )
    for i, result in enumerate(results, 1):
        print(f"Crew {i} Result:", result)

asyncio.run(main())
```
### Example: Native Async for Multiple Inputs
Use `akickoff_for_each()` to run your crew against multiple inputs concurrently with native async:
```python Code
import asyncio
from crewai import Crew, Agent, Task
coding_agent = Agent(
role="Python Data Analyst",
goal="Analyze data and provide insights using Python",
backstory="You are an experienced data analyst with strong Python skills.",
allow_code_execution=True
)
data_analysis_task = Task(
description="Analyze the dataset and calculate the average age. Ages: {ages}",
agent=coding_agent,
expected_output="The average age of the participants."
)
analysis_crew = Crew(
agents=[coding_agent],
tasks=[data_analysis_task]
)
async def main():
datasets = [
{"ages": [25, 30, 35, 40, 45]},
{"ages": [20, 22, 24, 28, 30]},
{"ages": [30, 35, 40, 45, 50]}
]
results = await analysis_crew.akickoff_for_each(datasets)
for i, result in enumerate(results, 1):
print(f"Dataset {i} Result:", result)
asyncio.run(main())
```
## Thread-Based Async with `kickoff_async()`
The `kickoff_async()` method provides async execution by wrapping the synchronous `kickoff()` in a thread. This is useful for simpler async integration or for backward compatibility.
### Method Signature
```python Code
async def kickoff_async(self, inputs: dict) -> CrewOutput:
```
### Parameters
- `inputs` (dict): A dictionary containing the input data required for the tasks.
### Returns
- `CrewOutput`: An object representing the result of the crew execution.
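Conceptually, the thread-based wrapping behaves roughly like the sketch below. This is an illustration using `asyncio.to_thread`, not the actual implementation:
```python Code
import asyncio

async def kickoff_async_equivalent(crew, inputs):
    # Roughly what a thread-based wrapper does: run the synchronous
    # kickoff() in a worker thread so the event loop stays unblocked.
    return await asyncio.to_thread(crew.kickoff, inputs)
```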
### Example: Thread-Based Async Execution
```python Code
import asyncio
from crewai import Crew, Agent, Task
coding_agent = Agent(
role="Python Data Analyst",
goal="Analyze data and provide insights using Python",
backstory="You are an experienced data analyst with strong Python skills.",
allow_code_execution=True
)
data_analysis_task = Task(
description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
agent=coding_agent,
expected_output="The average age of the participants."
)
analysis_crew = Crew(
agents=[coding_agent],
tasks=[data_analysis_task]
)
async def async_crew_execution():
result = await analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
print("Crew Result:", result)
asyncio.run(async_crew_execution())
```
### Example: Multiple Thread-Based Async Crews
```python Code
import asyncio
from crewai import Crew, Agent, Task
coding_agent = Agent(
role="Python Data Analyst",
goal="Analyze data and provide insights using Python",
backstory="You are an experienced data analyst with strong Python skills.",
allow_code_execution=True
)
task_1 = Task(
description="Analyze the first dataset and calculate the average age of participants. Ages: {ages}",
agent=coding_agent,
expected_output="The average age of the participants."
)
task_2 = Task(
description="Analyze the second dataset and calculate the average age of participants. Ages: {ages}",
agent=coding_agent,
expected_output="The average age of the participants."
)
# Create two crews and add tasks
crew_1 = Crew(agents=[coding_agent], tasks=[task_1])
crew_2 = Crew(agents=[coding_agent], tasks=[task_2])
# Async function to kickoff multiple crews asynchronously and wait for all to finish
async def async_multiple_crews():
# Create coroutines for concurrent execution
result_1 = crew_1.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
result_2 = crew_2.kickoff_async(inputs={"ages": [20, 22, 24, 28, 30]})
# Wait for both crews to finish
results = await asyncio.gather(result_1, result_2)
for i, result in enumerate(results, 1):
print(f"Crew {i} Result:", result)
# Run the async function
asyncio.run(async_multiple_crews())
```
## Asynchronous Streaming
Both async methods support streaming when `stream=True` is set on the crew:
```python Code
import asyncio
from crewai import Crew, Agent, Task
agent = Agent(
role="Researcher",
goal="Research and summarize topics",
backstory="You are an expert researcher."
)
task = Task(
description="Research the topic: {topic}",
agent=agent,
expected_output="A comprehensive summary of the topic."
)
crew = Crew(
agents=[agent],
tasks=[task],
    stream=True  # Enable streaming
)
async def main():
streaming_output = await crew.akickoff(inputs={"topic": "AI trends in 2024"})
    # Async iteration over streaming chunks
async for chunk in streaming_output:
print(f"Chunk: {chunk.content}")
    # Access the final result after streaming completes
result = streaming_output.result
print(f"Final result: {result.raw}")
asyncio.run(main())
```
## Potential Use Cases
- **Parallel Content Generation**: Kick off multiple independent crews asynchronously, each responsible for generating content on different topics. For example, one crew might research and draft an article on AI trends while another generates social media posts about a new product launch.
- **Concurrent Market Research Tasks**: Launch multiple crews asynchronously to conduct market research in parallel. One crew might analyze industry trends, another examine competitor strategies, and yet another assess consumer sentiment.
- **Independent Travel Planning Modules**: Run separate crews to plan different aspects of a trip independently. One crew might handle flight options, another accommodations, and a third activity planning.
## Choosing Between `akickoff()` and `kickoff_async()`
| Feature | `akickoff()` | `kickoff_async()` |
|---------|--------------|-------------------|
| Execution model | Native async/await | Thread-based wrapper |
| Task execution | Async via `aexecute_sync()` | Synchronous in a thread pool |
| Memory operations | Async | Synchronous in a thread pool |
| Knowledge retrieval | Async | Synchronous in a thread pool |
| Best for | High concurrency, I/O-bound workloads | Simple async integration |
| Streaming support | Yes | Yes |
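When fanning out many inputs with `akickoff()`, you will often want to bound concurrency; the sketch below uses a plain `asyncio.Semaphore` (standard library, not a CrewAI API), and the `run_many` helper name is a hypothetical example.
```python Code
import asyncio

async def run_many(crew, inputs_list, limit: int = 8):
    # Bound concurrency so a large fan-out doesn't overwhelm rate limits.
    sem = asyncio.Semaphore(limit)

    async def run_one(inputs):
        async with sem:
            return await crew.akickoff(inputs=inputs)

    return await asyncio.gather(*(run_one(i) for i in inputs_list))
```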

View File

@@ -0,0 +1,356 @@
---
title: Streaming Crew Execution
description: Stream real-time output from your crew execution in CrewAI
icon: wave-pulse
mode: "wide"
---
## Introduction
CrewAI provides the ability to stream output in real time during crew execution, letting you display results as they are generated instead of waiting for the entire process to finish. This feature is particularly useful for building interactive applications, providing user feedback, and monitoring long-running processes.
## How Streaming Works
When streaming is enabled, CrewAI captures LLM responses and tool calls as they happen, packaging them into structured chunks that include context about which task and agent is executing. You can iterate over these chunks in real time and access the final result once execution completes.
## Enabling Streaming
To enable streaming, set the `stream` parameter to `True` when creating your crew:
```python Code
from crewai import Agent, Crew, Task
# Create your agents and tasks
researcher = Agent(
role="Research Analyst",
goal="Gather comprehensive information on topics",
backstory="You are an experienced researcher with excellent analytical skills.",
)
task = Task(
description="Research the latest developments in AI",
expected_output="A detailed report on recent AI advancements",
agent=researcher,
)
# Enable streaming
crew = Crew(
agents=[researcher],
tasks=[task],
    stream=True  # Enable streaming output
)
```
## Synchronous Streaming
When you call `kickoff()` on a crew with streaming enabled, it returns a `CrewStreamingOutput` object that you can iterate to receive chunks as they arrive:
```python Code
# Start execution with streaming
streaming = crew.kickoff(inputs={"topic": "artificial intelligence"})

# Iterate over chunks as they arrive
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# Access the final result after streaming completes
result = streaming.result
print(f"\n\nFinal output: {result.raw}")
```
### Stream Chunk Information
Each chunk provides rich context about the execution:
```python Code
streaming = crew.kickoff(inputs={"topic": "AI"})

for chunk in streaming:
    print(f"Task: {chunk.task_name} (index {chunk.task_index})")
    print(f"Agent: {chunk.agent_role}")
    print(f"Content: {chunk.content}")
    print(f"Type: {chunk.chunk_type}")  # TEXT or TOOL_CALL
    if chunk.tool_call:
        print(f"Tool: {chunk.tool_call.tool_name}")
        print(f"Arguments: {chunk.tool_call.arguments}")
```
### Accessing Streaming Results
The `CrewStreamingOutput` object exposes several useful properties:
```python Code
streaming = crew.kickoff(inputs={"topic": "AI"})

# Iterate and collect chunks
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# After iteration completes
print(f"\nCompleted: {streaming.is_completed}")
print(f"Full text: {streaming.get_full_text()}")
print(f"All chunks: {len(streaming.chunks)}")
print(f"Final result: {streaming.result.raw}")
```
## Asynchronous Streaming
For asynchronous applications, you can use `akickoff()` (native async) or `kickoff_async()` (thread-based) with async iteration:
### Native Async with `akickoff()`
The `akickoff()` method provides true native async execution across the entire chain:
```python Code
import asyncio

async def stream_crew():
    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True
    )

    # Start native async streaming
    streaming = await crew.akickoff(inputs={"topic": "AI"})

    # Async iteration over chunks
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)

    # Access the final result
    result = streaming.result
    print(f"\n\nFinal output: {result.raw}")

asyncio.run(stream_crew())
```
### Thread-Based Async with `kickoff_async()`
For simpler async integration or backward compatibility:
```python Code
import asyncio

async def stream_crew():
    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True
    )

    # Start thread-based async streaming
    streaming = await crew.kickoff_async(inputs={"topic": "AI"})

    # Async iteration over chunks
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)

    # Access the final result
    result = streaming.result
    print(f"\n\nFinal output: {result.raw}")

asyncio.run(stream_crew())
```
<Note>
For high-concurrency workloads, `akickoff()` is recommended because it uses native async for task execution, memory operations, and knowledge retrieval. See the [Kickoff Crew Asynchronously](/pt-BR/learn/kickoff-async) guide for more details.
</Note>
## Streaming with kickoff_for_each
When running a crew for multiple inputs with `kickoff_for_each()`, streaming works differently depending on whether you use the synchronous or asynchronous variant:
### Synchronous kickoff_for_each
With synchronous `kickoff_for_each()`, you get a list of `CrewStreamingOutput` objects, one per input:
```python Code
crew = Crew(
    agents=[researcher],
    tasks=[task],
    stream=True
)

inputs_list = [
    {"topic": "AI in healthcare"},
    {"topic": "AI in finance"}
]

# Returns a list of streaming outputs
streaming_outputs = crew.kickoff_for_each(inputs=inputs_list)

# Iterate over each streaming output
for i, streaming in enumerate(streaming_outputs):
    print(f"\n=== Input {i + 1} ===")
    for chunk in streaming:
        print(chunk.content, end="", flush=True)
    result = streaming.result
    print(f"\n\nResult {i + 1}: {result.raw}")
```
### Asynchronous kickoff_for_each_async
With asynchronous `kickoff_for_each_async()`, you get a single `CrewStreamingOutput` that yields chunks from all crews as they arrive concurrently:
```python Code
import asyncio

async def stream_multiple_crews():
    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True
    )

    inputs_list = [
        {"topic": "AI in healthcare"},
        {"topic": "AI in finance"}
    ]

    # Returns a single streaming output covering all crews
    streaming = await crew.kickoff_for_each_async(inputs=inputs_list)

    # Chunks from all crews arrive as they are generated
    async for chunk in streaming:
        print(f"[{chunk.task_name}] {chunk.content}", end="", flush=True)

    # Access all results
    results = streaming.results  # list of CrewOutput objects
    for i, result in enumerate(results):
        print(f"\n\nResult {i + 1}: {result.raw}")

asyncio.run(stream_multiple_crews())
```
## Stream Chunk Types
Chunks come in different types, indicated by the `chunk_type` field:
### TEXT Chunks
Standard text content from LLM responses:
```python Code
for chunk in streaming:
if chunk.chunk_type == StreamChunkType.TEXT:
print(chunk.content, end="", flush=True)
```
### TOOL_CALL Chunks
Information about tool calls being made:
```python Code
for chunk in streaming:
    if chunk.chunk_type == StreamChunkType.TOOL_CALL:
        print(f"\nCalling tool: {chunk.tool_call.tool_name}")
        print(f"Arguments: {chunk.tool_call.arguments}")
```
## Practical Example: Building a Streaming UI
Here is a complete example showing how to build an interactive application with streaming:
```python Code
import asyncio
from crewai import Agent, Crew, Task
from crewai.types.streaming import StreamChunkType

async def interactive_research():
    # Create a crew with streaming enabled
    researcher = Agent(
        role="Research Analyst",
        goal="Provide detailed analysis on any topic",
        backstory="You are an expert researcher with broad knowledge.",
    )

    task = Task(
        description="Research and analyze: {topic}",
        expected_output="A comprehensive analysis with key insights",
        agent=researcher,
    )

    crew = Crew(
        agents=[researcher],
        tasks=[task],
        stream=True,
        verbose=False
    )

    # Get user input
    topic = input("Enter a topic to research: ")

    print(f"\n{'='*60}")
    print(f"Researching: {topic}")
    print(f"{'='*60}\n")

    # Start execution with streaming
    streaming = await crew.kickoff_async(inputs={"topic": topic})

    current_task = ""
    async for chunk in streaming:
        # Show task transitions
        if chunk.task_name != current_task:
            current_task = chunk.task_name
            print(f"\n[{chunk.agent_role}] Working on: {chunk.task_name}")
            print("-" * 60)

        # Display text chunks
        if chunk.chunk_type == StreamChunkType.TEXT:
            print(chunk.content, end="", flush=True)
        # Display tool calls
        elif chunk.chunk_type == StreamChunkType.TOOL_CALL and chunk.tool_call:
            print(f"\n🔧 Using tool: {chunk.tool_call.tool_name}")

    # Show the final result
    result = streaming.result
    print(f"\n\n{'='*60}")
    print("Analysis Complete!")
    print(f"{'='*60}")
    print(f"\nToken Usage: {result.token_usage}")

asyncio.run(interactive_research())
```
## Use Cases
Streaming is particularly valuable for:
- **Interactive Applications**: Provide real-time feedback to users while agents work
- **Long-Running Tasks**: Show progress for research, analysis, or content generation
- **Debugging and Monitoring**: Observe agent behavior and decision-making in real time
- **User Experience**: Reduce perceived latency by showing incremental results
- **Live Dashboards**: Build monitoring interfaces that display crew execution status
## Important Notes
- Streaming automatically enables LLM streaming for all agents in the crew
- You must iterate through all chunks before accessing the `.result` property (illustrated below)
- For `kickoff_for_each_async()` with streaming, use `.results` (plural) to get all outputs
- Streaming adds minimal overhead and can actually improve perceived performance
- Each chunk includes full context (task, agent, chunk type) for rich UIs
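As a minimal illustration of the second note above, drain the stream before touching `.result`:
```python Code
streaming = crew.kickoff(inputs={"topic": "AI"})

# Drain all chunks first; reading streaming.result before iteration
# finishes would be premature.
for _ in streaming:
    pass

result = streaming.result  # safe: iteration is complete
```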
## Error Handling
Handle errors during streaming execution:
```python Code
streaming = crew.kickoff(inputs={"topic": "AI"})

try:
    for chunk in streaming:
        print(chunk.content, end="", flush=True)
    result = streaming.result
    print(f"\nSuccess: {result.raw}")
except Exception as e:
    print(f"\nError during streaming: {e}")
    if streaming.is_completed:
        print("Streaming completed but an error occurred")
```
By leveraging streaming, you can build more responsive, interactive applications with CrewAI, giving users real-time visibility into agent execution and results.

View File

@@ -0,0 +1,213 @@
---
title: CrewAI Tracing
description: Built-in tracing for CrewAI Crews and Flows with the CrewAI AOP platform
icon: magnifying-glass-chart
mode: "wide"
---
# CrewAI Built-in Tracing
CrewAI provides built-in tracing features that let you monitor and debug your Crews and Flows in real time. This guide demonstrates how to enable tracing for **Crews** and **Flows** using CrewAI's integrated observability platform.
> **What is CrewAI Tracing?** CrewAI's built-in tracing provides comprehensive observability for your AI agents, including agent decisions, task execution timelines, tool usage, and LLM calls - all accessible through the [CrewAI AOP platform](https://app.crewai.com).
![CrewAI Tracing Interface](/images/crewai-tracing.png)
## Prerequisites
Before using CrewAI tracing, you need:
1. **CrewAI AOP account**: Sign up for a free account at [app.crewai.com](https://app.crewai.com)
2. **CLI authentication**: Use the CrewAI CLI to authenticate your local environment
```bash
crewai login
```
## Setup Instructions
### Step 1: Create Your CrewAI AOP Account
Visit [app.crewai.com](https://app.crewai.com) and create your free account. This gives you access to the CrewAI AOP platform, where you can view traces and metrics and manage your crews.
### Step 2: Install the CrewAI CLI and Authenticate
If you haven't already, install CrewAI with the CLI tools:
```bash
uv add crewai[tools]
```
Then authenticate your CLI with your CrewAI AOP account:
```bash
crewai login
```
This command will:
1. Open your browser to the authentication page
2. Prompt you to enter a device code
3. Authenticate your local environment with your CrewAI AOP account
4. Enable tracing features for your local development
### Step 3: Enable Tracing in Your Crew
You can enable tracing for your Crew by setting the `tracing` parameter to `True`:
```python
from crewai import Agent, Crew, Process, Task
from crewai_tools import SerperDevTool

# Define your agents
researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments in AI and data science",
    backstory="""You work at a leading tech think tank.
    Your expertise lies in identifying emerging trends.
    You have a knack for dissecting complex data and presenting actionable insights.""",
    verbose=True,
    tools=[SerperDevTool()],
)

writer = Agent(
    role="Tech Content Strategist",
    goal="Craft compelling content on tech advancements",
    backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
    You transform complex concepts into compelling narratives.""",
    verbose=True,
)

# Create tasks for your agents
research_task = Task(
    description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
    Identify key trends, breakthrough technologies, and potential industry impacts.""",
    expected_output="Full analysis report in bullet points",
    agent=researcher,
)

writing_task = Task(
    description="""Using the insights provided, develop an engaging blog
    post that highlights the most significant AI advancements.
    Your post should be informative yet accessible, catering to a tech-savvy audience.""",
    expected_output="Full blog post of at least 4 paragraphs",
    agent=writer,
)

# Enable tracing in your crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,
    tracing=True,  # Enable built-in tracing
    verbose=True
)

# Execute your crew
result = crew.kickoff()
```
### Step 4: Enable Tracing in Your Flow
Similarly, you can enable tracing for CrewAI Flows:
```python
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
class ExampleState(BaseModel):
counter: int = 0
message: str = ""
class ExampleFlow(Flow[ExampleState]):
def __init__(self):
super().__init__(tracing=True) # Enable tracing for the flow
@start()
def first_method(self):
print("Starting the flow")
self.state.counter = 1
self.state.message = "Flow started"
return "continue"
@listen("continue")
def second_method(self):
print("Continuing the flow")
self.state.counter += 1
self.state.message = "Flow continued"
return "finish"
@listen("finish")
def final_method(self):
print("Finishing the flow")
self.state.counter += 1
self.state.message = "Flow completed"
# Create and run the flow with tracing enabled
flow = ExampleFlow(tracing=True)
result = flow.kickoff()
```
### Step 5: View Traces in the CrewAI AOP Dashboard
After running the crew or flow, you can view the traces generated by your CrewAI application in the CrewAI AOP dashboard. You will see detailed steps of agent interactions, tool usage, and LLM calls.
Simply click the link below to view your traces, or go to the traces tab in the dashboard [here](https://app.crewai.com/crewai_plus/trace_batches)
![CrewAI Tracing Interface](/images/view-traces.png)
### Alternative: Environment Variable Configuration
You can also enable tracing globally by setting an environment variable:
```bash
export CREWAI_TRACING_ENABLED=true
```
Or add it to your `.env` file:
```env
CREWAI_TRACING_ENABLED=true
```
When this environment variable is set, all Crews and Flows automatically have tracing enabled, even without explicitly setting `tracing=True`.
## Viewing Your Traces
### Access the CrewAI AOP Dashboard
1. Visit [app.crewai.com](https://app.crewai.com) and log in to your account
2. Navigate to your project dashboard
3. Click the **Traces** tab to view execution details
### What You Will See in the Traces
CrewAI tracing provides comprehensive visibility into:
- **Agent Decisions**: See how agents reason through tasks and make decisions
- **Task Execution Timeline**: Visual representation of task sequences and dependencies
- **Tool Usage**: Monitor which tools are called and their results
- **LLM Calls**: Track all language model interactions, including prompts and responses
- **Performance Metrics**: Execution times, token usage, and costs
- **Error Tracking**: Detailed error information and stack traces
### Tracing Features
- **Execution Timeline**: Click through the different stages of execution
- **Detailed Logs**: Access comprehensive logs for debugging
- **Performance Analysis**: Analyze execution patterns and optimize performance
- **Export Capabilities**: Download traces for further analysis
### Authentication Issues
If you run into authentication problems:
1. Make sure you are logged in: `crewai login`
2. Check your internet connection
3. Verify your account at [app.crewai.com](https://app.crewai.com)
### Traces Not Appearing
If traces are not showing up in the dashboard:
1. Confirm that `tracing=True` is set on your Crew/Flow
2. Check that `CREWAI_TRACING_ENABLED=true` if using environment variables
3. Make sure you are authenticated with `crewai login`
4. Verify that your crew/flow is actually executing

View File

@@ -12,7 +12,7 @@ dependencies = [
"pytube~=15.0.0",
"requests~=2.32.5",
"docker~=7.1.0",
"crewai==1.6.1",
"crewai==1.7.2",
"lancedb~=0.5.4",
"tiktoken~=0.8.0",
"beautifulsoup4~=4.13.4",

View File

@@ -291,4 +291,4 @@ __all__ = [
"ZapierActionTools",
]
__version__ = "1.6.1"
__version__ = "1.7.2"

View File

@@ -1,5 +1,5 @@
"""Crewai Enterprise Tools."""
import os
import json
import re
from typing import Any, Optional, Union, cast, get_origin
@@ -432,7 +432,11 @@ class CrewAIPlatformActionTool(BaseTool):
payload = cleaned_kwargs
response = requests.post(
url=api_url, headers=headers, json=payload, timeout=60
url=api_url,
headers=headers,
json=payload,
timeout=60,
verify=os.environ.get("CREWAI_FACTORY", "false").lower() != "true",
)
data = response.json()

View File

@@ -1,5 +1,5 @@
from typing import Any
import os
from crewai.tools import BaseTool
import requests
@@ -37,6 +37,7 @@ class CrewaiPlatformToolBuilder:
headers=headers,
timeout=30,
params={"apps": ",".join(self._apps)},
verify=os.environ.get("CREWAI_FACTORY", "false").lower() != "true",
)
response.raise_for_status()
except Exception:

View File

@@ -1,4 +1,6 @@
from typing import Union, get_args, get_origin
from unittest.mock import patch, Mock
import os
from crewai_tools.tools.crewai_platform_tools.crewai_platform_action_tool import (
CrewAIPlatformActionTool,
@@ -249,3 +251,109 @@ class TestSchemaProcessing:
result_type = tool._process_schema_type(test_schema, "TestFieldAllOfMixed")
assert result_type is str
class TestCrewAIPlatformActionToolVerify:
"""Test suite for SSL verification behavior based on CREWAI_FACTORY environment variable"""
def setup_method(self):
self.action_schema = {
"function": {
"name": "test_action",
"parameters": {
"properties": {
"test_param": {
"type": "string",
"description": "Test parameter"
}
},
"required": []
}
}
}
def create_test_tool(self):
return CrewAIPlatformActionTool(
description="Test action tool",
action_name="test_action",
action_schema=self.action_schema
)
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token"}, clear=True)
@patch("crewai_tools.tools.crewai_platform_tools.crewai_platform_action_tool.requests.post")
def test_run_with_ssl_verification_default(self, mock_post):
"""Test that _run uses SSL verification by default when CREWAI_FACTORY is not set"""
mock_response = Mock()
mock_response.ok = True
mock_response.json.return_value = {"result": "success"}
mock_post.return_value = mock_response
tool = self.create_test_tool()
tool._run(test_param="test_value")
mock_post.assert_called_once()
call_args = mock_post.call_args
assert call_args.kwargs["verify"] is True
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token", "CREWAI_FACTORY": "false"}, clear=True)
@patch("crewai_tools.tools.crewai_platform_tools.crewai_platform_action_tool.requests.post")
def test_run_with_ssl_verification_factory_false(self, mock_post):
"""Test that _run uses SSL verification when CREWAI_FACTORY is 'false'"""
mock_response = Mock()
mock_response.ok = True
mock_response.json.return_value = {"result": "success"}
mock_post.return_value = mock_response
tool = self.create_test_tool()
tool._run(test_param="test_value")
mock_post.assert_called_once()
call_args = mock_post.call_args
assert call_args.kwargs["verify"] is True
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token", "CREWAI_FACTORY": "FALSE"}, clear=True)
@patch("crewai_tools.tools.crewai_platform_tools.crewai_platform_action_tool.requests.post")
def test_run_with_ssl_verification_factory_false_uppercase(self, mock_post):
"""Test that _run uses SSL verification when CREWAI_FACTORY is 'FALSE' (case-insensitive)"""
mock_response = Mock()
mock_response.ok = True
mock_response.json.return_value = {"result": "success"}
mock_post.return_value = mock_response
tool = self.create_test_tool()
tool._run(test_param="test_value")
mock_post.assert_called_once()
call_args = mock_post.call_args
assert call_args.kwargs["verify"] is True
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token", "CREWAI_FACTORY": "true"}, clear=True)
@patch("crewai_tools.tools.crewai_platform_tools.crewai_platform_action_tool.requests.post")
def test_run_without_ssl_verification_factory_true(self, mock_post):
"""Test that _run disables SSL verification when CREWAI_FACTORY is 'true'"""
mock_response = Mock()
mock_response.ok = True
mock_response.json.return_value = {"result": "success"}
mock_post.return_value = mock_response
tool = self.create_test_tool()
tool._run(test_param="test_value")
mock_post.assert_called_once()
call_args = mock_post.call_args
assert call_args.kwargs["verify"] is False
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token", "CREWAI_FACTORY": "TRUE"}, clear=True)
@patch("crewai_tools.tools.crewai_platform_tools.crewai_platform_action_tool.requests.post")
def test_run_without_ssl_verification_factory_true_uppercase(self, mock_post):
"""Test that _run disables SSL verification when CREWAI_FACTORY is 'TRUE' (case-insensitive)"""
mock_response = Mock()
mock_response.ok = True
mock_response.json.return_value = {"result": "success"}
mock_post.return_value = mock_response
tool = self.create_test_tool()
tool._run(test_param="test_value")
mock_post.assert_called_once()
call_args = mock_post.call_args
assert call_args.kwargs["verify"] is False

View File

@@ -258,3 +258,98 @@ class TestCrewaiPlatformToolBuilder(unittest.TestCase):
assert "simple_string" in description_text
assert "nested_object" in description_text
assert "array_prop" in description_text
class TestCrewaiPlatformToolBuilderVerify(unittest.TestCase):
"""Test suite for SSL verification behavior in CrewaiPlatformToolBuilder"""
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token"}, clear=True)
@patch(
"crewai_tools.tools.crewai_platform_tools.crewai_platform_tool_builder.requests.get"
)
def test_fetch_actions_with_ssl_verification_default(self, mock_get):
"""Test that _fetch_actions uses SSL verification by default when CREWAI_FACTORY is not set"""
mock_response = Mock()
mock_response.raise_for_status.return_value = None
mock_response.json.return_value = {"actions": {}}
mock_get.return_value = mock_response
builder = CrewaiPlatformToolBuilder(apps=["github"])
builder._fetch_actions()
mock_get.assert_called_once()
call_args = mock_get.call_args
assert call_args.kwargs["verify"] is True
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token", "CREWAI_FACTORY": "false"}, clear=True)
@patch(
"crewai_tools.tools.crewai_platform_tools.crewai_platform_tool_builder.requests.get"
)
def test_fetch_actions_with_ssl_verification_factory_false(self, mock_get):
"""Test that _fetch_actions uses SSL verification when CREWAI_FACTORY is 'false'"""
mock_response = Mock()
mock_response.raise_for_status.return_value = None
mock_response.json.return_value = {"actions": {}}
mock_get.return_value = mock_response
builder = CrewaiPlatformToolBuilder(apps=["github"])
builder._fetch_actions()
mock_get.assert_called_once()
call_args = mock_get.call_args
assert call_args.kwargs["verify"] is True
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token", "CREWAI_FACTORY": "FALSE"}, clear=True)
@patch(
"crewai_tools.tools.crewai_platform_tools.crewai_platform_tool_builder.requests.get"
)
def test_fetch_actions_with_ssl_verification_factory_false_uppercase(self, mock_get):
"""Test that _fetch_actions uses SSL verification when CREWAI_FACTORY is 'FALSE' (case-insensitive)"""
mock_response = Mock()
mock_response.raise_for_status.return_value = None
mock_response.json.return_value = {"actions": {}}
mock_get.return_value = mock_response
builder = CrewaiPlatformToolBuilder(apps=["github"])
builder._fetch_actions()
mock_get.assert_called_once()
call_args = mock_get.call_args
assert call_args.kwargs["verify"] is True
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token", "CREWAI_FACTORY": "true"}, clear=True)
@patch(
"crewai_tools.tools.crewai_platform_tools.crewai_platform_tool_builder.requests.get"
)
def test_fetch_actions_without_ssl_verification_factory_true(self, mock_get):
"""Test that _fetch_actions disables SSL verification when CREWAI_FACTORY is 'true'"""
mock_response = Mock()
mock_response.raise_for_status.return_value = None
mock_response.json.return_value = {"actions": {}}
mock_get.return_value = mock_response
builder = CrewaiPlatformToolBuilder(apps=["github"])
builder._fetch_actions()
mock_get.assert_called_once()
call_args = mock_get.call_args
assert call_args.kwargs["verify"] is False
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token", "CREWAI_FACTORY": "TRUE"}, clear=True)
@patch(
"crewai_tools.tools.crewai_platform_tools.crewai_platform_tool_builder.requests.get"
)
def test_fetch_actions_without_ssl_verification_factory_true_uppercase(self, mock_get):
"""Test that _fetch_actions disables SSL verification when CREWAI_FACTORY is 'TRUE' (case-insensitive)"""
mock_response = Mock()
mock_response.raise_for_status.return_value = None
mock_response.json.return_value = {"actions": {}}
mock_get.return_value = mock_response
builder = CrewaiPlatformToolBuilder(apps=["github"])
builder._fetch_actions()
mock_get.assert_called_once()
call_args = mock_get.call_args
assert call_args.kwargs["verify"] is False

View File

@@ -38,6 +38,7 @@ dependencies = [
"pydantic-settings~=2.10.1",
"mcp~=1.16.0",
"uv~=0.9.13",
"aiosqlite~=0.21.0",
]
[project.urls]
@@ -48,7 +49,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = [
"crewai-tools==1.6.1",
"crewai-tools==1.7.2",
]
embeddings = [
"tiktoken~=0.8.0"
@@ -83,7 +84,7 @@ bedrock = [
"boto3~=1.40.45",
]
google-genai = [
"google-genai~=1.2.0",
"google-genai~=1.49.0",
]
azure-ai-inference = [
"azure-ai-inference~=1.0.0b9",
@@ -95,6 +96,7 @@ a2a = [
"a2a-sdk~=0.3.10",
"httpx-auth~=0.23.1",
"httpx-sse~=0.4.0",
"aiocache[redis,memcached]~=0.12.3",
]

View File

@@ -40,7 +40,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
_suppress_pydantic_deprecation_warnings()
__version__ = "1.6.1"
__version__ = "1.7.2"
_telemetry_submitted = False

View File

@@ -0,0 +1,4 @@
"""A2A Protocol Extensions for CrewAI.
This module contains extensions to the A2A (Agent-to-Agent) protocol.
"""

View File

@@ -0,0 +1,193 @@
"""Base extension interface for A2A wrapper integrations.
This module defines the protocol for extending A2A wrapper functionality
with custom logic for conversation processing, prompt augmentation, and
agent response handling.
"""
from __future__ import annotations
from collections.abc import Sequence
from typing import TYPE_CHECKING, Any, Protocol
if TYPE_CHECKING:
from a2a.types import Message
from crewai.agent.core import Agent
class ConversationState(Protocol):
"""Protocol for extension-specific conversation state.
Extensions can define their own state classes that implement this protocol
to track conversation-specific data extracted from message history.
"""
def is_ready(self) -> bool:
"""Check if the state indicates readiness for some action.
Returns:
True if the state is ready, False otherwise.
"""
...
class A2AExtension(Protocol):
"""Protocol for A2A wrapper extensions.
Extensions can implement this protocol to inject custom logic into
the A2A conversation flow at various integration points.
"""
def inject_tools(self, agent: Agent) -> None:
"""Inject extension-specific tools into the agent.
Called when an agent is wrapped with A2A capabilities. Extensions
can add tools that enable extension-specific functionality.
Args:
agent: The agent instance to inject tools into.
"""
...
def extract_state_from_history(
self, conversation_history: Sequence[Message]
) -> ConversationState | None:
"""Extract extension-specific state from conversation history.
Called during prompt augmentation to allow extensions to analyze
the conversation history and extract relevant state information.
Args:
conversation_history: The sequence of A2A messages exchanged.
Returns:
Extension-specific conversation state, or None if no relevant state.
"""
...
def augment_prompt(
self,
base_prompt: str,
conversation_state: ConversationState | None,
) -> str:
"""Augment the task prompt with extension-specific instructions.
Called during prompt augmentation to allow extensions to add
custom instructions based on conversation state.
Args:
base_prompt: The base prompt to augment.
conversation_state: Extension-specific state from extract_state_from_history.
Returns:
The augmented prompt with extension-specific instructions.
"""
...
def process_response(
self,
agent_response: Any,
conversation_state: ConversationState | None,
) -> Any:
"""Process and potentially modify the agent response.
Called after parsing the agent's response, allowing extensions to
enhance or modify the response based on conversation state.
Args:
agent_response: The parsed agent response.
conversation_state: Extension-specific state from extract_state_from_history.
Returns:
The processed agent response (may be modified or original).
"""
...
class ExtensionRegistry:
"""Registry for managing A2A extensions.
Maintains a collection of extensions and provides methods to invoke
their hooks at various integration points.
"""
def __init__(self) -> None:
"""Initialize the extension registry."""
self._extensions: list[A2AExtension] = []
def register(self, extension: A2AExtension) -> None:
"""Register an extension.
Args:
extension: The extension to register.
"""
self._extensions.append(extension)
def inject_all_tools(self, agent: Agent) -> None:
"""Inject tools from all registered extensions.
Args:
agent: The agent instance to inject tools into.
"""
for extension in self._extensions:
extension.inject_tools(agent)
def extract_all_states(
self, conversation_history: Sequence[Message]
) -> dict[type[A2AExtension], ConversationState]:
"""Extract conversation states from all registered extensions.
Args:
conversation_history: The sequence of A2A messages exchanged.
Returns:
Mapping of extension types to their conversation states.
"""
states: dict[type[A2AExtension], ConversationState] = {}
for extension in self._extensions:
state = extension.extract_state_from_history(conversation_history)
if state is not None:
states[type(extension)] = state
return states
def augment_prompt_with_all(
self,
base_prompt: str,
extension_states: dict[type[A2AExtension], ConversationState],
) -> str:
"""Augment prompt with instructions from all registered extensions.
Args:
base_prompt: The base prompt to augment.
extension_states: Mapping of extension types to conversation states.
Returns:
The fully augmented prompt.
"""
augmented = base_prompt
for extension in self._extensions:
state = extension_states.get(type(extension))
augmented = extension.augment_prompt(augmented, state)
return augmented
def process_response_with_all(
self,
agent_response: Any,
extension_states: dict[type[A2AExtension], ConversationState],
) -> Any:
"""Process response through all registered extensions.
Args:
agent_response: The parsed agent response.
extension_states: Mapping of extension types to conversation states.
Returns:
The processed agent response.
"""
processed = agent_response
for extension in self._extensions:
state = extension_states.get(type(extension))
processed = extension.process_response(processed, state)
return processed
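# --- Illustrative usage sketch (editor's example, not part of this module) ---
# A minimal extension implementing the A2AExtension protocol above; the
# ApprovalState/ApprovalExtension names are hypothetical.
from dataclasses import dataclass


@dataclass
class ApprovalState:
    """Hypothetical state tracking whether a remote reviewer approved."""

    approved: bool = False

    def is_ready(self) -> bool:
        return self.approved


class ApprovalExtension:
    """Hypothetical A2AExtension that annotates prompts after approval."""

    def inject_tools(self, agent: Agent) -> None:
        # This sketch adds no tools.
        pass

    def extract_state_from_history(
        self, conversation_history: Sequence[Message]
    ) -> ApprovalState | None:
        # Naive scan: ready once any message mentions "approved".
        text = " ".join(str(m) for m in conversation_history).lower()
        return ApprovalState(approved="approved" in text)

    def augment_prompt(
        self, base_prompt: str, conversation_state: ApprovalState | None
    ) -> str:
        if conversation_state is not None and conversation_state.is_ready():
            return base_prompt + "\nNote: the remote reviewer has already approved."
        return base_prompt

    def process_response(
        self, agent_response: Any, conversation_state: ApprovalState | None
    ) -> Any:
        # Pass-through in this sketch.
        return agent_response


# registry = ExtensionRegistry()
# registry.register(ApprovalExtension())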

View File

@@ -0,0 +1,34 @@
"""Extension registry factory for A2A configurations.
This module provides utilities for creating extension registries from A2A configurations.
"""
from __future__ import annotations
from typing import TYPE_CHECKING
from crewai.a2a.extensions.base import ExtensionRegistry
if TYPE_CHECKING:
from crewai.a2a.config import A2AConfig
def create_extension_registry_from_config(
a2a_config: list[A2AConfig] | A2AConfig,
) -> ExtensionRegistry:
"""Create an extension registry from A2A configuration.
Args:
a2a_config: A2A configuration (single or list)
Returns:
Configured extension registry with all applicable extensions
"""
registry = ExtensionRegistry()
configs = a2a_config if isinstance(a2a_config, list) else [a2a_config]
for _ in configs:
pass
return registry

View File

@@ -23,6 +23,8 @@ from a2a.types import (
TextPart,
TransportProtocol,
)
from aiocache import cached # type: ignore[import-untyped]
from aiocache.serializers import PickleSerializer # type: ignore[import-untyped]
import httpx
from pydantic import BaseModel, Field, create_model
@@ -65,7 +67,7 @@ def _fetch_agent_card_cached(
endpoint: A2A agent endpoint URL
auth_hash: Hash of the auth object
timeout: Request timeout
_ttl_hash: Time-based hash for cache invalidation (unused in body)
_ttl_hash: Time-based hash for cache invalidation
Returns:
Cached AgentCard
@@ -106,7 +108,18 @@ def fetch_agent_card(
A2AClientHTTPError: If authentication fails
"""
if use_cache:
auth_hash = hash((type(auth).__name__, id(auth))) if auth else 0
if auth:
auth_data = auth.model_dump_json(
exclude={
"_access_token",
"_token_expires_at",
"_refresh_token",
"_authorization_callback",
}
)
auth_hash = hash((type(auth).__name__, auth_data))
else:
auth_hash = 0
_auth_store[auth_hash] = auth
ttl_hash = int(time.time() // cache_ttl)
return _fetch_agent_card_cached(endpoint, auth_hash, timeout, ttl_hash)
@@ -121,6 +134,26 @@ def fetch_agent_card(
loop.close()
@cached(ttl=300, serializer=PickleSerializer()) # type: ignore[untyped-decorator]
async def _fetch_agent_card_async_cached(
endpoint: str,
auth_hash: int,
timeout: int,
) -> AgentCard:
"""Cached async implementation of AgentCard fetching.
Args:
endpoint: A2A agent endpoint URL
auth_hash: Hash of the auth object
timeout: Request timeout in seconds
Returns:
Cached AgentCard object
"""
auth = _auth_store.get(auth_hash)
return await _fetch_agent_card_async(endpoint=endpoint, auth=auth, timeout=timeout)
async def _fetch_agent_card_async(
endpoint: str,
auth: AuthScheme | None,
@@ -339,7 +372,22 @@ async def _execute_a2a_delegation_async(
Returns:
Dictionary with status, result/error, and new history
"""
agent_card = await _fetch_agent_card_async(endpoint, auth, timeout)
if auth:
auth_data = auth.model_dump_json(
exclude={
"_access_token",
"_token_expires_at",
"_refresh_token",
"_authorization_callback",
}
)
auth_hash = hash((type(auth).__name__, auth_data))
else:
auth_hash = 0
_auth_store[auth_hash] = auth
agent_card = await _fetch_agent_card_async_cached(
endpoint=endpoint, auth_hash=auth_hash, timeout=timeout
)
validate_auth_against_agent_card(agent_card, auth)
@@ -556,6 +604,34 @@ async def _execute_a2a_delegation_async(
}
break
except Exception as e:
if isinstance(e, A2AClientHTTPError):
error_msg = f"HTTP Error {e.status_code}: {e!s}"
error_message = Message(
role=Role.agent,
message_id=str(uuid.uuid4()),
parts=[Part(root=TextPart(text=error_msg))],
context_id=context_id,
task_id=task_id,
)
new_messages.append(error_message)
crewai_event_bus.emit(
None,
A2AResponseReceivedEvent(
response=error_msg,
turn_number=turn_number,
is_multiturn=is_multiturn,
status="failed",
agent_role=agent_role,
),
)
return {
"status": "failed",
"error": error_msg,
"history": new_messages,
}
current_exception: Exception | BaseException | None = e
while current_exception:
if hasattr(current_exception, "response"):
@@ -752,4 +828,5 @@ def get_a2a_agents_and_response_model(
Tuple of A2A agent IDs and response model
"""
a2a_agents, agent_ids = extract_a2a_agent_ids_from_config(a2a_config=a2a_config)
return a2a_agents, create_agent_response_model(agent_ids)

View File

@@ -15,6 +15,7 @@ from a2a.types import Role
from pydantic import BaseModel, ValidationError
from crewai.a2a.config import A2AConfig
from crewai.a2a.extensions.base import ExtensionRegistry
from crewai.a2a.templates import (
AVAILABLE_AGENTS_TEMPLATE,
CONVERSATION_TURN_INFO_TEMPLATE,
@@ -42,7 +43,9 @@ if TYPE_CHECKING:
from crewai.tools.base_tool import BaseTool
def wrap_agent_with_a2a_instance(agent: Agent) -> None:
def wrap_agent_with_a2a_instance(
agent: Agent, extension_registry: ExtensionRegistry | None = None
) -> None:
"""Wrap an agent instance's execute_task method with A2A support.
This function modifies the agent instance by wrapping its execute_task
@@ -51,7 +54,13 @@ def wrap_agent_with_a2a_instance(agent: Agent) -> None:
Args:
agent: The agent instance to wrap
extension_registry: Optional registry of A2A extensions for injecting tools and custom logic
"""
if extension_registry is None:
extension_registry = ExtensionRegistry()
extension_registry.inject_all_tools(agent)
original_execute_task = agent.execute_task.__func__ # type: ignore[attr-defined]
@wraps(original_execute_task)
@@ -85,6 +94,7 @@ def wrap_agent_with_a2a_instance(agent: Agent) -> None:
agent_response_model=agent_response_model,
context=context,
tools=tools,
extension_registry=extension_registry,
)
object.__setattr__(agent, "execute_task", MethodType(execute_task_with_a2a, agent))
@@ -154,6 +164,7 @@ def _execute_task_with_a2a(
agent_response_model: type[BaseModel],
context: str | None,
tools: list[BaseTool] | None,
extension_registry: ExtensionRegistry,
) -> str:
"""Wrap execute_task with A2A delegation logic.
@@ -165,6 +176,7 @@ def _execute_task_with_a2a(
context: Optional context for task execution
tools: Optional tools available to the agent
agent_response_model: Optional agent response model
extension_registry: Registry of A2A extensions
Returns:
Task execution result (either from LLM or A2A agent)
@@ -190,11 +202,12 @@ def _execute_task_with_a2a(
finally:
task.description = original_description
task.description = _augment_prompt_with_a2a(
task.description, _ = _augment_prompt_with_a2a(
a2a_agents=a2a_agents,
task_description=original_description,
agent_cards=agent_cards,
failed_agents=failed_agents,
extension_registry=extension_registry,
)
task.response_model = agent_response_model
@@ -204,6 +217,11 @@ def _execute_task_with_a2a(
raw_result=raw_result, agent_response_model=agent_response_model
)
if extension_registry and isinstance(agent_response, BaseModel):
agent_response = extension_registry.process_response_with_all(
agent_response, {}
)
if isinstance(agent_response, BaseModel) and isinstance(
agent_response, AgentResponseProtocol
):
@@ -217,6 +235,7 @@ def _execute_task_with_a2a(
tools=tools,
agent_cards=agent_cards,
original_task_description=original_description,
extension_registry=extension_registry,
)
return str(agent_response.message)
@@ -235,7 +254,8 @@ def _augment_prompt_with_a2a(
turn_num: int = 0,
max_turns: int | None = None,
failed_agents: dict[str, str] | None = None,
) -> str:
extension_registry: ExtensionRegistry | None = None,
) -> tuple[str, bool]:
"""Add A2A delegation instructions to prompt.
Args:
@@ -246,13 +266,14 @@ def _augment_prompt_with_a2a(
turn_num: Current turn number (0-indexed)
max_turns: Maximum allowed turns (from config)
failed_agents: Dictionary mapping failed agent endpoints to error messages
extension_registry: Optional registry of A2A extensions
Returns:
Augmented task description with A2A instructions
Tuple of (augmented prompt, disable_structured_output flag)
"""
if not agent_cards:
return task_description
return task_description, False
agents_text = ""
@@ -270,6 +291,7 @@ def _augment_prompt_with_a2a(
agents_text = AVAILABLE_AGENTS_TEMPLATE.substitute(available_a2a_agents=agents_text)
history_text = ""
if conversation_history:
for msg in conversation_history:
history_text += f"\n{msg.model_dump_json(indent=2, exclude_none=True, exclude={'message_id'})}\n"
@@ -277,6 +299,15 @@ def _augment_prompt_with_a2a(
history_text = PREVIOUS_A2A_CONVERSATION_TEMPLATE.substitute(
previous_a2a_conversation=history_text
)
extension_states = {}
disable_structured_output = False
if extension_registry and conversation_history:
extension_states = extension_registry.extract_all_states(conversation_history)
for state in extension_states.values():
if state.is_ready():
disable_structured_output = True
break
turn_info = ""
if max_turns is not None and conversation_history:
@@ -296,16 +327,22 @@ def _augment_prompt_with_a2a(
warning=warning,
)
return f"""{task_description}
augmented_prompt = f"""{task_description}
IMPORTANT: You have the ability to delegate this task to remote A2A agents.
{agents_text}
{history_text}{turn_info}
"""
if extension_registry:
augmented_prompt = extension_registry.augment_prompt_with_all(
augmented_prompt, extension_states
)
return augmented_prompt, disable_structured_output
def _parse_agent_response(
raw_result: str | dict[str, Any], agent_response_model: type[BaseModel]
@@ -373,7 +410,7 @@ def _handle_agent_response_and_continue(
if "agent_card" in a2a_result and agent_id not in agent_cards_dict:
agent_cards_dict[agent_id] = a2a_result["agent_card"]
task.description = _augment_prompt_with_a2a(
task.description, disable_structured_output = _augment_prompt_with_a2a(
a2a_agents=a2a_agents,
task_description=original_task_description,
conversation_history=conversation_history,
@@ -382,7 +419,38 @@ def _handle_agent_response_and_continue(
agent_cards=agent_cards_dict,
)
original_response_model = task.response_model
if disable_structured_output:
task.response_model = None
raw_result = original_fn(self, task, context, tools)
if disable_structured_output:
task.response_model = original_response_model
if disable_structured_output:
final_turn_number = turn_num + 1
result_text = str(raw_result)
crewai_event_bus.emit(
None,
A2AMessageSentEvent(
message=result_text,
turn_number=final_turn_number,
is_multiturn=True,
agent_role=self.role,
),
)
crewai_event_bus.emit(
None,
A2AConversationCompletedEvent(
status="completed",
final_result=result_text,
error=None,
total_turns=final_turn_number,
),
)
return result_text, None
llm_response = _parse_agent_response(
raw_result=raw_result, agent_response_model=agent_response_model
)
@@ -425,6 +493,7 @@ def _delegate_to_a2a(
tools: list[BaseTool] | None,
agent_cards: dict[str, AgentCard] | None = None,
original_task_description: str | None = None,
extension_registry: ExtensionRegistry | None = None,
) -> str:
"""Delegate to A2A agent with multi-turn conversation support.
@@ -437,6 +506,7 @@ def _delegate_to_a2a(
tools: Optional tools available to the agent
agent_cards: Pre-fetched agent cards from _execute_task_with_a2a
original_task_description: The original task description before A2A augmentation
extension_registry: Optional registry of A2A extensions
Returns:
Result from A2A agent
@@ -447,9 +517,13 @@ def _delegate_to_a2a(
a2a_agents, agent_response_model = get_a2a_agents_and_response_model(self.a2a)
agent_ids = tuple(config.endpoint for config in a2a_agents)
current_request = str(agent_response.message)
agent_id = agent_response.a2a_ids[0]
if agent_id not in agent_ids:
if hasattr(agent_response, "a2a_ids") and agent_response.a2a_ids:
agent_id = agent_response.a2a_ids[0]
else:
agent_id = agent_ids[0] if agent_ids else ""
if agent_id and agent_id not in agent_ids:
raise ValueError(
f"Unknown A2A agent ID(s): {agent_response.a2a_ids} not in {agent_ids}"
)
@@ -458,10 +532,11 @@ def _delegate_to_a2a(
task_config = task.config or {}
context_id = task_config.get("context_id")
task_id_config = task_config.get("task_id")
reference_task_ids = task_config.get("reference_task_ids")
metadata = task_config.get("metadata")
extensions = task_config.get("extensions")
reference_task_ids = task_config.get("reference_task_ids", [])
if original_task_description is None:
original_task_description = task.description
@@ -497,11 +572,27 @@ def _delegate_to_a2a(
conversation_history = a2a_result.get("history", [])
if conversation_history:
latest_message = conversation_history[-1]
if latest_message.task_id is not None:
task_id_config = latest_message.task_id
if latest_message.context_id is not None:
context_id = latest_message.context_id
if a2a_result["status"] in ["completed", "input_required"]:
if (
a2a_result["status"] == "completed"
and agent_config.trust_remote_completion_status
):
if (
task_id_config is not None
and task_id_config not in reference_task_ids
):
reference_task_ids.append(task_id_config)
if task.config is None:
task.config = {}
task.config["reference_task_ids"] = reference_task_ids
result_text = a2a_result.get("result", "")
final_turn_number = turn_num + 1
crewai_event_bus.emit(
@@ -513,7 +604,7 @@ def _delegate_to_a2a(
total_turns=final_turn_number,
),
)
return result_text # type: ignore[no-any-return]
return cast(str, result_text)
final_result, next_request = _handle_agent_response_and_continue(
self=self,
@@ -541,6 +632,31 @@ def _delegate_to_a2a(
continue
error_msg = a2a_result.get("error", "Unknown error")
final_result, next_request = _handle_agent_response_and_continue(
self=self,
a2a_result=a2a_result,
agent_id=agent_id,
agent_cards=agent_cards,
a2a_agents=a2a_agents,
original_task_description=original_task_description,
conversation_history=conversation_history,
turn_num=turn_num,
max_turns=max_turns,
task=task,
original_fn=original_fn,
context=context,
tools=tools,
agent_response_model=agent_response_model,
)
if final_result is not None:
return final_result
if next_request is not None:
current_request = next_request
continue
crewai_event_bus.emit(
None,
A2AConversationCompletedEvent(
@@ -550,7 +666,7 @@ def _delegate_to_a2a(
total_turns=turn_num + 1,
),
)
raise Exception(f"A2A delegation failed: {error_msg}")
return f"A2A delegation failed: {error_msg}"
if conversation_history:
for msg in reversed(conversation_history):


@@ -1,8 +1,7 @@
from __future__ import annotations
import asyncio
from collections.abc import Sequence
import json
from collections.abc import Callable, Sequence
import shutil
import subprocess
import time
@@ -19,6 +18,19 @@ from pydantic import BaseModel, Field, InstanceOf, PrivateAttr, model_validator
from typing_extensions import Self
from crewai.a2a.config import A2AConfig
from crewai.agent.utils import (
ahandle_knowledge_retrieval,
apply_training_data,
build_task_prompt_with_schema,
format_task_with_context,
get_knowledge_config,
handle_knowledge_retrieval,
handle_reasoning,
prepare_tools,
process_tool_results,
save_last_messages,
validate_max_execution_time,
)
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.crew_agent_executor import CrewAgentExecutor
@@ -27,17 +39,14 @@ from crewai.events.types.knowledge_events import (
KnowledgeQueryCompletedEvent,
KnowledgeQueryFailedEvent,
KnowledgeQueryStartedEvent,
KnowledgeRetrievalCompletedEvent,
KnowledgeRetrievalStartedEvent,
KnowledgeSearchQueryFailedEvent,
)
from crewai.events.types.memory_events import (
MemoryRetrievalCompletedEvent,
MemoryRetrievalStartedEvent,
)
from crewai.experimental.crew_agent_executor_flow import CrewAgentExecutorFlow
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.knowledge.utils.knowledge_utils import extract_knowledge_context
from crewai.lite_agent import LiteAgent
from crewai.llms.base_llm import BaseLLM
from crewai.mcp import (
@@ -61,7 +70,7 @@ from crewai.utilities.agent_utils import (
render_text_description_and_args,
)
from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE
from crewai.utilities.converter import Converter, generate_model_description
from crewai.utilities.converter import Converter
from crewai.utilities.guardrail_types import GuardrailType
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.prompts import Prompts
@@ -97,7 +106,7 @@ class Agent(BaseAgent):
The agent can also have memory, can operate in verbose mode, and can delegate tasks to other agents.
Attributes:
agent_executor: An instance of the CrewAgentExecutor class.
agent_executor: An instance of the CrewAgentExecutor or CrewAgentExecutorFlow class.
role: The role of the agent.
goal: The objective of the agent.
backstory: The backstory of the agent.
@@ -213,6 +222,10 @@ class Agent(BaseAgent):
default=None,
description="A2A (Agent-to-Agent) configuration for delegating tasks to remote agents. Can be a single A2AConfig or a dict mapping agent IDs to configs.",
)
executor_class: type[CrewAgentExecutor] | type[CrewAgentExecutorFlow] = Field(
default=CrewAgentExecutor,
description="Class to use for the agent executor. Defaults to CrewAgentExecutor, can optionally use CrewAgentExecutorFlow.",
)
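A minimal sketch of opting into the new executor, assuming the usual public Agent constructor; the role/goal/backstory values are placeholders:
from crewai import Agent  # public import path assumed
from crewai.experimental.crew_agent_executor_flow import CrewAgentExecutorFlow

agent = Agent(
    role="Researcher",
    goal="Summarize new papers",
    backstory="A meticulous analyst",
    executor_class=CrewAgentExecutorFlow,  # default stays CrewAgentExecutor
)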
@model_validator(mode="before")
def validate_from_repository(cls, v: Any) -> dict[str, Any] | None | Any: # noqa: N805
@@ -295,53 +308,15 @@ class Agent(BaseAgent):
ValueError: If the max execution time is not a positive integer.
RuntimeError: If the agent execution fails for other reasons.
"""
if self.reasoning:
try:
from crewai.utilities.reasoning_handler import (
AgentReasoning,
AgentReasoningOutput,
)
reasoning_handler = AgentReasoning(task=task, agent=self)
reasoning_output: AgentReasoningOutput = (
reasoning_handler.handle_agent_reasoning()
)
# Add the reasoning plan to the task description
task.description += f"\n\nReasoning Plan:\n{reasoning_output.plan.plan}"
except Exception as e:
self._logger.log("error", f"Error during reasoning process: {e!s}")
handle_reasoning(self, task)
self._inject_date_to_task(task)
if self.tools_handler:
self.tools_handler.last_used_tool = None
task_prompt = task.prompt()
# If the task requires output in JSON or Pydantic format,
# append specific instructions to the task prompt to ensure
# that the final answer does not include any code block markers
# Skip this if task.response_model is set, as native structured outputs handle schema automatically
if (task.output_json or task.output_pydantic) and not task.response_model:
# Generate the schema based on the output format
if task.output_json:
schema_dict = generate_model_description(task.output_json)
schema = json.dumps(schema_dict["json_schema"]["schema"], indent=2)
task_prompt += "\n" + self.i18n.slice(
"formatted_task_instructions"
).format(output_format=schema)
elif task.output_pydantic:
schema_dict = generate_model_description(task.output_pydantic)
schema = json.dumps(schema_dict["json_schema"]["schema"], indent=2)
task_prompt += "\n" + self.i18n.slice(
"formatted_task_instructions"
).format(output_format=schema)
if context:
task_prompt = self.i18n.slice("task_with_context").format(
task=task_prompt, context=context
)
task_prompt = build_task_prompt_with_schema(task, task_prompt, self.i18n)
task_prompt = format_task_with_context(task_prompt, context, self.i18n)
if self._is_any_available_memory():
crewai_event_bus.emit(
@@ -379,84 +354,20 @@ class Agent(BaseAgent):
from_task=task,
),
)
knowledge_config = (
self.knowledge_config.model_dump() if self.knowledge_config else {}
knowledge_config = get_knowledge_config(self)
task_prompt = handle_knowledge_retrieval(
self,
task,
task_prompt,
knowledge_config,
self.knowledge.query if self.knowledge else lambda *a, **k: None,
self.crew.query_knowledge if self.crew else lambda *a, **k: None,
)
if self.knowledge or (self.crew and self.crew.knowledge):
crewai_event_bus.emit(
self,
event=KnowledgeRetrievalStartedEvent(
from_task=task,
from_agent=self,
),
)
try:
self.knowledge_search_query = self._get_knowledge_search_query(
task_prompt, task
)
if self.knowledge_search_query:
# Querying agent-specific knowledge
if self.knowledge:
agent_knowledge_snippets = self.knowledge.query(
[self.knowledge_search_query], **knowledge_config
)
if agent_knowledge_snippets:
self.agent_knowledge_context = extract_knowledge_context(
agent_knowledge_snippets
)
if self.agent_knowledge_context:
task_prompt += self.agent_knowledge_context
prepare_tools(self, tools, task)
task_prompt = apply_training_data(self, task_prompt)
# Querying crew-specific knowledge
knowledge_snippets = self.crew.query_knowledge(
[self.knowledge_search_query], **knowledge_config
)
if knowledge_snippets:
self.crew_knowledge_context = extract_knowledge_context(
knowledge_snippets
)
if self.crew_knowledge_context:
task_prompt += self.crew_knowledge_context
crewai_event_bus.emit(
self,
event=KnowledgeRetrievalCompletedEvent(
query=self.knowledge_search_query,
from_task=task,
from_agent=self,
retrieved_knowledge=(
(self.agent_knowledge_context or "")
+ (
"\n"
if self.agent_knowledge_context
and self.crew_knowledge_context
else ""
)
+ (self.crew_knowledge_context or "")
),
),
)
except Exception as e:
crewai_event_bus.emit(
self,
event=KnowledgeSearchQueryFailedEvent(
query=self.knowledge_search_query or "",
error=str(e),
from_task=task,
from_agent=self,
),
)
tools = tools or self.tools or []
self.create_agent_executor(tools=tools, task=task)
if self.crew and self.crew._train:
task_prompt = self._training_handler(task_prompt=task_prompt)
else:
task_prompt = self._use_trained_data(task_prompt=task_prompt)
# Import agent events locally to avoid circular imports
from crewai.events.types.agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionErrorEvent,
@@ -474,15 +385,8 @@ class Agent(BaseAgent):
),
)
# Determine execution method based on timeout setting
validate_max_execution_time(self.max_execution_time)
if self.max_execution_time is not None:
if (
not isinstance(self.max_execution_time, int)
or self.max_execution_time <= 0
):
raise ValueError(
"Max Execution time must be a positive integer greater than zero"
)
result = self._execute_with_timeout(
task_prompt, task, self.max_execution_time
)
@@ -490,7 +394,6 @@ class Agent(BaseAgent):
result = self._execute_without_timeout(task_prompt, task)
except TimeoutError as e:
# Propagate TimeoutError without retry
crewai_event_bus.emit(
self,
event=AgentExecutionErrorEvent(
@@ -502,7 +405,6 @@ class Agent(BaseAgent):
raise e
except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
crewai_event_bus.emit(
self,
event=AgentExecutionErrorEvent(
@@ -528,23 +430,13 @@ class Agent(BaseAgent):
if self.max_rpm and self._rpm_controller:
self._rpm_controller.stop_rpm_counter()
# If there was any tool in self.tools_results that had result_as_answer
# set to True, return the results of the last tool that had
# result_as_answer set to True
for tool_result in self.tools_results:
if tool_result.get("result_as_answer", False):
result = tool_result["result"]
result = process_tool_results(self, result)
crewai_event_bus.emit(
self,
event=AgentExecutionCompletedEvent(agent=self, task=task, output=result),
)
self._last_messages = (
self.agent_executor.messages.copy()
if self.agent_executor and hasattr(self.agent_executor, "messages")
else []
)
save_last_messages(self)
self._cleanup_mcp_clients()
return result
@@ -604,6 +496,208 @@ class Agent(BaseAgent):
}
)["output"]
async def aexecute_task(
self,
task: Task,
context: str | None = None,
tools: list[BaseTool] | None = None,
) -> Any:
"""Execute a task with the agent asynchronously.
Args:
task: Task to execute.
context: Context to execute the task in.
tools: Tools to use for the task.
Returns:
Output of the agent.
Raises:
TimeoutError: If execution exceeds the maximum execution time.
ValueError: If the max execution time is not a positive integer.
RuntimeError: If the agent execution fails for other reasons.
"""
handle_reasoning(self, task)
self._inject_date_to_task(task)
if self.tools_handler:
self.tools_handler.last_used_tool = None
task_prompt = task.prompt()
task_prompt = build_task_prompt_with_schema(task, task_prompt, self.i18n)
task_prompt = format_task_with_context(task_prompt, context, self.i18n)
if self._is_any_available_memory():
crewai_event_bus.emit(
self,
event=MemoryRetrievalStartedEvent(
task_id=str(task.id) if task else None,
source_type="agent",
from_agent=self,
from_task=task,
),
)
start_time = time.time()
contextual_memory = ContextualMemory(
self.crew._short_term_memory,
self.crew._long_term_memory,
self.crew._entity_memory,
self.crew._external_memory,
agent=self,
task=task,
)
memory = await contextual_memory.abuild_context_for_task(
task, context or ""
)
if memory.strip() != "":
task_prompt += self.i18n.slice("memory").format(memory=memory)
crewai_event_bus.emit(
self,
event=MemoryRetrievalCompletedEvent(
task_id=str(task.id) if task else None,
memory_content=memory,
retrieval_time_ms=(time.time() - start_time) * 1000,
source_type="agent",
from_agent=self,
from_task=task,
),
)
knowledge_config = get_knowledge_config(self)
task_prompt = await ahandle_knowledge_retrieval(
self, task, task_prompt, knowledge_config
)
prepare_tools(self, tools, task)
task_prompt = apply_training_data(self, task_prompt)
from crewai.events.types.agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionErrorEvent,
AgentExecutionStartedEvent,
)
try:
crewai_event_bus.emit(
self,
event=AgentExecutionStartedEvent(
agent=self,
tools=self.tools,
task_prompt=task_prompt,
task=task,
),
)
validate_max_execution_time(self.max_execution_time)
if self.max_execution_time is not None:
result = await self._aexecute_with_timeout(
task_prompt, task, self.max_execution_time
)
else:
result = await self._aexecute_without_timeout(task_prompt, task)
except TimeoutError as e:
crewai_event_bus.emit(
self,
event=AgentExecutionErrorEvent(
agent=self,
task=task,
error=str(e),
),
)
raise e
except Exception as e:
if e.__class__.__module__.startswith("litellm"):
crewai_event_bus.emit(
self,
event=AgentExecutionErrorEvent(
agent=self,
task=task,
error=str(e),
),
)
raise e
self._times_executed += 1
if self._times_executed > self.max_retry_limit:
crewai_event_bus.emit(
self,
event=AgentExecutionErrorEvent(
agent=self,
task=task,
error=str(e),
),
)
raise e
result = await self.aexecute_task(task, context, tools)
if self.max_rpm and self._rpm_controller:
self._rpm_controller.stop_rpm_counter()
result = process_tool_results(self, result)
crewai_event_bus.emit(
self,
event=AgentExecutionCompletedEvent(agent=self, task=task, output=result),
)
save_last_messages(self)
self._cleanup_mcp_clients()
return result
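A hedged usage sketch for the new async entry point; agent and task are assumed to be already-constructed instances:
import asyncio

async def main() -> None:
    # Same contract as execute_task, but awaitable, so it can be driven from
    # an existing event loop or gathered alongside other agents.
    result = await agent.aexecute_task(task, context=None, tools=None)
    print(result)

asyncio.run(main())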
async def _aexecute_with_timeout(
self, task_prompt: str, task: Task, timeout: int
) -> Any:
"""Execute a task with a timeout asynchronously.
Args:
task_prompt: The prompt to send to the agent.
task: The task being executed.
timeout: Maximum execution time in seconds.
Returns:
The output of the agent.
Raises:
TimeoutError: If execution exceeds the timeout.
RuntimeError: If execution fails for other reasons.
"""
try:
return await asyncio.wait_for(
self._aexecute_without_timeout(task_prompt, task),
timeout=timeout,
)
except asyncio.TimeoutError as e:
raise TimeoutError(
f"Task '{task.description}' execution timed out after {timeout} seconds. "
"Consider increasing max_execution_time or optimizing the task."
) from e
async def _aexecute_without_timeout(self, task_prompt: str, task: Task) -> Any:
"""Execute a task without a timeout asynchronously.
Args:
task_prompt: The prompt to send to the agent.
task: The task being executed.
Returns:
The output of the agent.
"""
if not self.agent_executor:
raise RuntimeError("Agent executor is not initialized.")
result = await self.agent_executor.ainvoke(
{
"input": task_prompt,
"tool_names": self.agent_executor.tools_names,
"tools": self.agent_executor.tools_description,
"ask_for_human_input": task.human_input,
}
)
return result["output"]
def create_agent_executor(
self, tools: list[BaseTool] | None = None, task: Task | None = None
) -> None:
@@ -632,29 +726,83 @@ class Agent(BaseAgent):
self.response_template.split("{{ .Response }}")[1].strip()
)
self.agent_executor = CrewAgentExecutor(
llm=self.llm,
task=task, # type: ignore[arg-type]
agent=self,
crew=self.crew,
tools=parsed_tools,
prompt=prompt,
original_tools=raw_tools,
stop_words=stop_words,
max_iter=self.max_iter,
tools_handler=self.tools_handler,
tools_names=get_tool_names(parsed_tools),
tools_description=render_text_description_and_args(parsed_tools),
step_callback=self.step_callback,
function_calling_llm=self.function_calling_llm,
respect_context_window=self.respect_context_window,
request_within_rpm_limit=(
self._rpm_controller.check_or_wait if self._rpm_controller else None
),
callbacks=[TokenCalcHandler(self._token_process)],
response_model=task.response_model if task else None,
rpm_limit_fn = (
self._rpm_controller.check_or_wait if self._rpm_controller else None
)
if self.agent_executor is not None:
self._update_executor_parameters(
task=task,
tools=parsed_tools,
raw_tools=raw_tools,
prompt=prompt,
stop_words=stop_words,
rpm_limit_fn=rpm_limit_fn,
)
else:
self.agent_executor = self.executor_class(
llm=cast(BaseLLM, self.llm),
task=task,
i18n=self.i18n,
agent=self,
crew=self.crew,
tools=parsed_tools,
prompt=prompt,
original_tools=raw_tools,
stop_words=stop_words,
max_iter=self.max_iter,
tools_handler=self.tools_handler,
tools_names=get_tool_names(parsed_tools),
tools_description=render_text_description_and_args(parsed_tools),
step_callback=self.step_callback,
function_calling_llm=self.function_calling_llm,
respect_context_window=self.respect_context_window,
request_within_rpm_limit=rpm_limit_fn,
callbacks=[TokenCalcHandler(self._token_process)],
response_model=task.response_model if task else None,
)
def _update_executor_parameters(
self,
task: Task | None,
tools: list,
raw_tools: list[BaseTool],
prompt: dict,
stop_words: list[str],
rpm_limit_fn: Callable | None,
) -> None:
"""Update executor parameters without recreating instance.
Args:
task: Task to execute.
tools: Parsed tools.
raw_tools: Original tools.
prompt: Generated prompt.
stop_words: Stop words list.
rpm_limit_fn: RPM limit callback function.
"""
self.agent_executor.task = task
self.agent_executor.tools = tools
self.agent_executor.original_tools = raw_tools
self.agent_executor.prompt = prompt
self.agent_executor.stop = stop_words
self.agent_executor.tools_names = get_tool_names(tools)
self.agent_executor.tools_description = render_text_description_and_args(tools)
self.agent_executor.response_model = task.response_model if task else None
self.agent_executor.tools_handler = self.tools_handler
self.agent_executor.request_within_rpm_limit = rpm_limit_fn
if self.agent_executor.llm:
existing_stop = getattr(self.agent_executor.llm, "stop", [])
self.agent_executor.llm.stop = list(
set(
existing_stop + stop_words
if isinstance(existing_stop, list)
else stop_words
)
)
def get_delegation_tools(self, agents: list[BaseAgent]) -> list[BaseTool]:
agent_tools = AgentTools(agents=agents)
return agent_tools.tools()
@@ -810,6 +958,7 @@ class Agent(BaseAgent):
from crewai.tools.base_tool import BaseTool
from crewai.tools.mcp_native_tool import MCPNativeTool
transport: StdioTransport | HTTPTransport | SSETransport
if isinstance(mcp_config, MCPServerStdio):
transport = StdioTransport(
command=mcp_config.command,
@@ -903,10 +1052,10 @@ class Agent(BaseAgent):
server_name=server_name,
run_context=None,
)
if mcp_config.tool_filter(context, tool):
if mcp_config.tool_filter(context, tool): # type: ignore[call-arg, arg-type]
filtered_tools.append(tool)
except (TypeError, AttributeError):
if mcp_config.tool_filter(tool):
if mcp_config.tool_filter(tool): # type: ignore[call-arg, arg-type]
filtered_tools.append(tool)
else:
# Not callable - include tool
@@ -981,7 +1130,9 @@ class Agent(BaseAgent):
path = parsed.path.replace("/", "_").strip("_")
return f"{domain}_{path}" if path else domain
def _get_mcp_tool_schemas(self, server_params: dict) -> dict[str, dict]:
def _get_mcp_tool_schemas(
self, server_params: dict[str, Any]
) -> dict[str, dict[str, Any]]:
"""Get tool schemas from MCP server for wrapper creation with caching."""
server_url = server_params["url"]
@@ -995,7 +1146,7 @@ class Agent(BaseAgent):
self._logger.log(
"debug", f"Using cached MCP tool schemas for {server_url}"
)
return cached_data
return cached_data # type: ignore[no-any-return]
try:
schemas = asyncio.run(self._get_mcp_tool_schemas_async(server_params))
@@ -1013,7 +1164,7 @@ class Agent(BaseAgent):
async def _get_mcp_tool_schemas_async(
self, server_params: dict[str, Any]
) -> dict[str, dict]:
) -> dict[str, dict[str, Any]]:
"""Async implementation of MCP tool schema retrieval with timeouts and retries."""
server_url = server_params["url"]
return await self._retry_mcp_discovery(
@@ -1021,7 +1172,7 @@ class Agent(BaseAgent):
)
async def _retry_mcp_discovery(
self, operation_func, server_url: str
self, operation_func: Any, server_url: str
) -> dict[str, dict[str, Any]]:
"""Retry MCP discovery operation with exponential backoff, avoiding try-except in loop."""
last_error = None
@@ -1052,7 +1203,7 @@ class Agent(BaseAgent):
@staticmethod
async def _attempt_mcp_discovery(
operation_func, server_url: str
operation_func: Any, server_url: str
) -> tuple[dict[str, dict[str, Any]] | None, str, bool]:
"""Attempt single MCP discovery operation and return (result, error_message, should_retry)."""
try:
@@ -1142,7 +1293,7 @@ class Agent(BaseAgent):
properties = json_schema.get("properties", {})
required_fields = json_schema.get("required", [])
field_definitions = {}
field_definitions: dict[str, Any] = {}
for field_name, field_schema in properties.items():
field_type = self._json_type_to_python(field_schema)
@@ -1162,7 +1313,7 @@ class Agent(BaseAgent):
)
model_name = f"{tool_name.replace('-', '_').replace(' ', '_')}Schema"
return create_model(model_name, **field_definitions)
return create_model(model_name, **field_definitions) # type: ignore[no-any-return]
def _json_type_to_python(self, field_schema: dict[str, Any]) -> type:
"""Convert JSON Schema type to Python type.
@@ -1177,7 +1328,7 @@ class Agent(BaseAgent):
json_type = field_schema.get("type")
if "anyOf" in field_schema:
types = []
types: list[type] = []
for option in field_schema["anyOf"]:
if "const" in option:
types.append(str)
@@ -1185,13 +1336,13 @@ class Agent(BaseAgent):
types.append(self._json_type_to_python(option))
unique_types = list(set(types))
if len(unique_types) > 1:
result = unique_types[0]
result: Any = unique_types[0]
for t in unique_types[1:]:
result = result | t
return result
return result # type: ignore[no-any-return]
return unique_types[0]
type_mapping = {
type_mapping: dict[str | None, type] = {
"string": str,
"number": float,
"integer": int,
@@ -1203,7 +1354,7 @@ class Agent(BaseAgent):
return type_mapping.get(json_type, Any)
@staticmethod
def _fetch_amp_mcp_servers(mcp_name: str) -> list[dict]:
def _fetch_amp_mcp_servers(mcp_name: str) -> list[dict[str, Any]]:
"""Fetch MCP server configurations from CrewAI AOP API."""
# TODO: Implement AMP API call to "integrations/mcps" endpoint
# Should return list of server configs with URLs
@@ -1438,11 +1589,11 @@ class Agent(BaseAgent):
"""
if self.apps:
platform_tools = self.get_platform_tools(self.apps)
if platform_tools:
if platform_tools and self.tools is not None:
self.tools.extend(platform_tools)
if self.mcps:
mcps = self.get_mcp_tools(self.mcps)
if mcps:
if mcps and self.tools is not None:
self.tools.extend(mcps)
lite_agent = LiteAgent(


@@ -4,9 +4,8 @@ This metaclass enables extension capabilities for agents by detecting
extension fields in class annotations and applying appropriate wrappers.
"""
import warnings
from functools import wraps
from typing import Any
import warnings
from pydantic import model_validator
from pydantic._internal._model_construction import ModelMetaclass
@@ -59,9 +58,15 @@ class AgentMeta(ModelMetaclass):
a2a_value = getattr(self, "a2a", None)
if a2a_value is not None:
from crewai.a2a.extensions.registry import (
create_extension_registry_from_config,
)
from crewai.a2a.wrapper import wrap_agent_with_a2a_instance
wrap_agent_with_a2a_instance(self)
extension_registry = create_extension_registry_from_config(
a2a_value
)
wrap_agent_with_a2a_instance(self, extension_registry)
return result


@@ -0,0 +1,355 @@
"""Utility functions for agent task execution.
This module contains shared logic extracted from the Agent's execute_task
and aexecute_task methods to reduce code duplication.
"""
from __future__ import annotations
import json
from typing import TYPE_CHECKING, Any
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.knowledge_events import (
KnowledgeRetrievalCompletedEvent,
KnowledgeRetrievalStartedEvent,
KnowledgeSearchQueryFailedEvent,
)
from crewai.knowledge.utils.knowledge_utils import extract_knowledge_context
from crewai.utilities.pydantic_schema_utils import generate_model_description
if TYPE_CHECKING:
from crewai.agent.core import Agent
from crewai.task import Task
from crewai.tools.base_tool import BaseTool
from crewai.utilities.i18n import I18N
def handle_reasoning(agent: Agent, task: Task) -> None:
"""Handle the reasoning process for an agent before task execution.
Args:
agent: The agent performing the task.
task: The task to execute.
"""
if not agent.reasoning:
return
try:
from crewai.utilities.reasoning_handler import (
AgentReasoning,
AgentReasoningOutput,
)
reasoning_handler = AgentReasoning(task=task, agent=agent)
reasoning_output: AgentReasoningOutput = (
reasoning_handler.handle_agent_reasoning()
)
task.description += f"\n\nReasoning Plan:\n{reasoning_output.plan.plan}"
except Exception as e:
agent._logger.log("error", f"Error during reasoning process: {e!s}")
def build_task_prompt_with_schema(task: Task, task_prompt: str, i18n: I18N) -> str:
"""Build task prompt with JSON/Pydantic schema instructions if applicable.
Args:
task: The task being executed.
task_prompt: The initial task prompt.
i18n: Internationalization instance.
Returns:
The task prompt potentially augmented with schema instructions.
"""
if (task.output_json or task.output_pydantic) and not task.response_model:
if task.output_json:
schema_dict = generate_model_description(task.output_json)
schema = json.dumps(schema_dict["json_schema"]["schema"], indent=2)
task_prompt += "\n" + i18n.slice("formatted_task_instructions").format(
output_format=schema
)
elif task.output_pydantic:
schema_dict = generate_model_description(task.output_pydantic)
schema = json.dumps(schema_dict["json_schema"]["schema"], indent=2)
task_prompt += "\n" + i18n.slice("formatted_task_instructions").format(
output_format=schema
)
return task_prompt
def format_task_with_context(task_prompt: str, context: str | None, i18n: I18N) -> str:
"""Format task prompt with context if provided.
Args:
task_prompt: The task prompt.
context: Optional context string.
i18n: Internationalization instance.
Returns:
The task prompt formatted with context if provided.
"""
if context:
return i18n.slice("task_with_context").format(task=task_prompt, context=context)
return task_prompt
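Taken together, these two helpers reproduce the prompt-assembly steps that execute_task used to inline; a short sketch, assuming task, context, and i18n are in scope:
task_prompt = task.prompt()
task_prompt = build_task_prompt_with_schema(task, task_prompt, i18n)  # schema instructions, if any
task_prompt = format_task_with_context(task_prompt, context, i18n)    # wrap with context, if provided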
def get_knowledge_config(agent: Agent) -> dict[str, Any]:
"""Get knowledge configuration from agent.
Args:
agent: The agent instance.
Returns:
Dictionary of knowledge configuration.
"""
return agent.knowledge_config.model_dump() if agent.knowledge_config else {}
def handle_knowledge_retrieval(
agent: Agent,
task: Task,
task_prompt: str,
knowledge_config: dict[str, Any],
query_func: Any,
crew_query_func: Any,
) -> str:
"""Handle knowledge retrieval for task execution.
This function handles both agent-specific and crew-specific knowledge queries.
Args:
agent: The agent performing the task.
task: The task being executed.
task_prompt: The current task prompt.
knowledge_config: Knowledge configuration dictionary.
query_func: Function to query agent knowledge (sync or async).
crew_query_func: Function to query crew knowledge (sync or async).
Returns:
The task prompt potentially augmented with knowledge context.
"""
if not (agent.knowledge or (agent.crew and agent.crew.knowledge)):
return task_prompt
crewai_event_bus.emit(
agent,
event=KnowledgeRetrievalStartedEvent(
from_task=task,
from_agent=agent,
),
)
try:
agent.knowledge_search_query = agent._get_knowledge_search_query(
task_prompt, task
)
if agent.knowledge_search_query:
if agent.knowledge:
agent_knowledge_snippets = query_func(
[agent.knowledge_search_query], **knowledge_config
)
if agent_knowledge_snippets:
agent.agent_knowledge_context = extract_knowledge_context(
agent_knowledge_snippets
)
if agent.agent_knowledge_context:
task_prompt += agent.agent_knowledge_context
knowledge_snippets = crew_query_func(
[agent.knowledge_search_query], **knowledge_config
)
if knowledge_snippets:
agent.crew_knowledge_context = extract_knowledge_context(
knowledge_snippets
)
if agent.crew_knowledge_context:
task_prompt += agent.crew_knowledge_context
crewai_event_bus.emit(
agent,
event=KnowledgeRetrievalCompletedEvent(
query=agent.knowledge_search_query,
from_task=task,
from_agent=agent,
retrieved_knowledge=_combine_knowledge_context(agent),
),
)
except Exception as e:
crewai_event_bus.emit(
agent,
event=KnowledgeSearchQueryFailedEvent(
query=agent.knowledge_search_query or "",
error=str(e),
from_task=task,
from_agent=agent,
),
)
return task_prompt
def _combine_knowledge_context(agent: Agent) -> str:
"""Combine agent and crew knowledge contexts into a single string.
Args:
agent: The agent with knowledge contexts.
Returns:
Combined knowledge context string.
"""
agent_ctx = agent.agent_knowledge_context or ""
crew_ctx = agent.crew_knowledge_context or ""
separator = "\n" if agent_ctx and crew_ctx else ""
return agent_ctx + separator + crew_ctx
def apply_training_data(agent: Agent, task_prompt: str) -> str:
"""Apply training data to the task prompt.
Args:
agent: The agent performing the task.
task_prompt: The task prompt.
Returns:
The task prompt with training data applied.
"""
if agent.crew and agent.crew._train:
return agent._training_handler(task_prompt=task_prompt)
return agent._use_trained_data(task_prompt=task_prompt)
def process_tool_results(agent: Agent, result: Any) -> Any:
"""Process tool results, returning result_as_answer if applicable.
Args:
agent: The agent with tool results.
result: The current result.
Returns:
The final result, potentially overridden by tool result_as_answer.
"""
for tool_result in agent.tools_results:
if tool_result.get("result_as_answer", False):
result = tool_result["result"]
return result
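A small illustration with a fabricated tools_results entry: any tool marked result_as_answer overrides the LLM's draft, and the last such tool wins:
agent.tools_results = [
    {"result": "draft from tool A", "result_as_answer": False},
    {"result": "42", "result_as_answer": True},
]
assert process_tool_results(agent, "llm draft") == "42"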
def save_last_messages(agent: Agent) -> None:
"""Save the last messages from agent executor.
Args:
agent: The agent instance.
"""
agent._last_messages = (
agent.agent_executor.messages.copy()
if agent.agent_executor and hasattr(agent.agent_executor, "messages")
else []
)
def prepare_tools(
agent: Agent, tools: list[BaseTool] | None, task: Task
) -> list[BaseTool]:
"""Prepare tools for task execution and create agent executor.
Args:
agent: The agent instance.
tools: Optional list of tools.
task: The task being executed.
Returns:
The list of tools to use.
"""
final_tools = tools or agent.tools or []
agent.create_agent_executor(tools=final_tools, task=task)
return final_tools
def validate_max_execution_time(max_execution_time: int | None) -> None:
"""Validate max_execution_time parameter.
Args:
max_execution_time: The maximum execution time to validate.
Raises:
ValueError: If max_execution_time is not a positive integer.
"""
if max_execution_time is not None:
if not isinstance(max_execution_time, int) or max_execution_time <= 0:
raise ValueError(
"Max Execution time must be a positive integer greater than zero"
)
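Behavior at a glance; None means no timeout is configured and passes through:
validate_max_execution_time(30)    # passes
validate_max_execution_time(None)  # passes: timeout disabled
try:
    validate_max_execution_time(0)
except ValueError as e:
    print(e)  # "Max Execution time must be a positive integer greater than zero"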
async def ahandle_knowledge_retrieval(
agent: Agent,
task: Task,
task_prompt: str,
knowledge_config: dict[str, Any],
) -> str:
"""Handle async knowledge retrieval for task execution.
Args:
agent: The agent performing the task.
task: The task being executed.
task_prompt: The current task prompt.
knowledge_config: Knowledge configuration dictionary.
Returns:
The task prompt potentially augmented with knowledge context.
"""
if not (agent.knowledge or (agent.crew and agent.crew.knowledge)):
return task_prompt
crewai_event_bus.emit(
agent,
event=KnowledgeRetrievalStartedEvent(
from_task=task,
from_agent=agent,
),
)
try:
agent.knowledge_search_query = agent._get_knowledge_search_query(
task_prompt, task
)
if agent.knowledge_search_query:
if agent.knowledge:
agent_knowledge_snippets = await agent.knowledge.aquery(
[agent.knowledge_search_query], **knowledge_config
)
if agent_knowledge_snippets:
agent.agent_knowledge_context = extract_knowledge_context(
agent_knowledge_snippets
)
if agent.agent_knowledge_context:
task_prompt += agent.agent_knowledge_context
knowledge_snippets = await agent.crew.aquery_knowledge(
[agent.knowledge_search_query], **knowledge_config
)
if knowledge_snippets:
agent.crew_knowledge_context = extract_knowledge_context(
knowledge_snippets
)
if agent.crew_knowledge_context:
task_prompt += agent.crew_knowledge_context
crewai_event_bus.emit(
agent,
event=KnowledgeRetrievalCompletedEvent(
query=agent.knowledge_search_query,
from_task=task,
from_agent=agent,
retrieved_knowledge=_combine_knowledge_context(agent),
),
)
except Exception as e:
crewai_event_bus.emit(
agent,
event=KnowledgeSearchQueryFailedEvent(
query=agent.knowledge_search_query or "",
error=str(e),
from_task=task,
from_agent=agent,
),
)
return task_prompt
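And the async variant in use, mirroring the call added to Agent.aexecute_task earlier in this diff (inside a coroutine):
task_prompt = await ahandle_knowledge_retrieval(agent, task, task_prompt, knowledge_config)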


@@ -5,10 +5,9 @@ from __future__ import annotations
from abc import ABC, abstractmethod
import json
import re
from typing import TYPE_CHECKING, Final, Literal
from crewai.utilities.converter import generate_model_description
from typing import TYPE_CHECKING, Any, Final, Literal
from crewai.utilities.pydantic_schema_utils import generate_model_description
if TYPE_CHECKING:
@@ -42,7 +41,7 @@ class BaseConverterAdapter(ABC):
"""
self.agent_adapter = agent_adapter
self._output_format: Literal["json", "pydantic"] | None = None
self._schema: str | None = None
self._schema: dict[str, Any] | None = None
@abstractmethod
def configure_structured_output(self, task: Task) -> None:
@@ -129,7 +128,7 @@ class BaseConverterAdapter(ABC):
@staticmethod
def _configure_format_from_task(
task: Task,
) -> tuple[Literal["json", "pydantic"] | None, str | None]:
) -> tuple[Literal["json", "pydantic"] | None, dict[str, Any] | None]:
"""Determine output format and schema from task requirements.
This is a helper method that examines the task's output requirements


@@ -4,6 +4,7 @@ This module contains the OpenAIConverterAdapter class that handles structured
output conversion for OpenAI agents, supporting JSON and Pydantic model formats.
"""
import json
from typing import Any
from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter
@@ -61,7 +62,7 @@ class OpenAIConverterAdapter(BaseConverterAdapter):
output_schema: str = (
get_i18n()
.slice("formatted_task_instructions")
.format(output_format=self._schema)
.format(output_format=json.dumps(self._schema, indent=2))
)
return f"{base_prompt}\n\n{output_schema}"


@@ -265,7 +265,7 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
if not mcps:
return mcps
validated_mcps = []
validated_mcps: list[str | MCPServerConfig] = []
for mcp in mcps:
if isinstance(mcp, str):
if mcp.startswith(("https://", "crewai-amp:")):
@@ -347,6 +347,15 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
) -> str:
pass
@abstractmethod
async def aexecute_task(
self,
task: Any,
context: str | None = None,
tools: list[BaseTool] | None = None,
) -> str:
"""Execute a task asynchronously."""
@abstractmethod
def create_agent_executor(self, tools: list[BaseTool] | None = None) -> None:
pass
@@ -448,7 +457,6 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
if self.cache:
self.cache_handler = cache_handler
self.tools_handler.cache = cache_handler
self.create_agent_executor()
def set_rpm_controller(self, rpm_controller: RPMController) -> None:
"""Set the rpm controller for the agent.
@@ -458,7 +466,6 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
"""
if not self._rpm_controller:
self._rpm_controller = rpm_controller
self.create_agent_executor()
def set_knowledge(self, crew_embedder: EmbedderConfig | None = None) -> None:
pass


@@ -3,6 +3,7 @@ from __future__ import annotations
import time
from typing import TYPE_CHECKING
from crewai.agents.parser import AgentFinish
from crewai.events.event_listener import event_listener
from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
@@ -29,7 +30,7 @@ class CrewAgentExecutorMixin:
_i18n: I18N
_printer: Printer = Printer()
def _create_short_term_memory(self, output) -> None:
def _create_short_term_memory(self, output: AgentFinish) -> None:
"""Create and save a short-term memory item if conditions are met."""
if (
self.crew
@@ -53,7 +54,7 @@ class CrewAgentExecutorMixin:
"error", f"Failed to add to short term memory: {e}"
)
def _create_external_memory(self, output) -> None:
def _create_external_memory(self, output: AgentFinish) -> None:
"""Create and save a external-term memory item if conditions are met."""
if (
self.crew
@@ -75,7 +76,7 @@ class CrewAgentExecutorMixin:
"error", f"Failed to add to external memory: {e}"
)
def _create_long_term_memory(self, output) -> None:
def _create_long_term_memory(self, output: AgentFinish) -> None:
"""Create and save long-term and entity memory items based on evaluation."""
if (
self.crew
@@ -136,40 +137,50 @@ class CrewAgentExecutorMixin:
)
def _ask_human_input(self, final_answer: str) -> str:
"""Prompt human input with mode-appropriate messaging."""
event_listener.formatter.pause_live_updates()
try:
self._printer.print(
content=f"\033[1m\033[95m ## Final Result:\033[00m \033[92m{final_answer}\033[00m"
)
"""Prompt human input with mode-appropriate messaging.
Note: The final answer is already displayed via the AgentLogsExecutionEvent
panel, so we only show the feedback prompt here.
"""
from rich.panel import Panel
from rich.text import Text
formatter = event_listener.formatter
formatter.pause_live_updates()
try:
# Training mode prompt (single iteration)
if self.crew and getattr(self.crew, "_train", False):
prompt = (
"\n\n=====\n"
"## TRAINING MODE: Provide feedback to improve the agent's performance.\n"
prompt_text = (
"TRAINING MODE: Provide feedback to improve the agent's performance.\n\n"
"This will be used to train better versions of the agent.\n"
"Please provide detailed feedback about the result quality and reasoning process.\n"
"=====\n"
"Please provide detailed feedback about the result quality and reasoning process."
)
title = "🎓 Training Feedback Required"
# Regular human-in-the-loop prompt (multiple iterations)
else:
prompt = (
"\n\n=====\n"
"## HUMAN FEEDBACK: Provide feedback on the Final Result and Agent's actions.\n"
"Please follow these guidelines:\n"
" - If you are happy with the result, simply hit Enter without typing anything.\n"
" - Otherwise, provide specific improvement requests.\n"
" - You can provide multiple rounds of feedback until satisfied.\n"
"=====\n"
prompt_text = (
"Provide feedback on the Final Result above.\n\n"
"• If you are happy with the result, simply hit Enter without typing anything.\n"
"• Otherwise, provide specific improvement requests.\n"
"• You can provide multiple rounds of feedback until satisfied."
)
title = "💬 Human Feedback Required"
content = Text()
content.append(prompt_text, style="yellow")
prompt_panel = Panel(
content,
title=title,
border_style="yellow",
padding=(1, 2),
)
formatter.console.print(prompt_panel)
self._printer.print(content=prompt, color="bold_yellow")
response = input()
if response.strip() != "":
self._printer.print(
content="\nProcessing your feedback...", color="cyan"
)
formatter.console.print("\n[cyan]Processing your feedback...[/cyan]")
return response
finally:
event_listener.formatter.resume_live_updates()
formatter.resume_live_updates()


@@ -7,6 +7,7 @@ and memory management.
from __future__ import annotations
from collections.abc import Callable
import logging
from typing import TYPE_CHECKING, Any, Literal, cast
from pydantic import BaseModel, GetCoreSchemaHandler
@@ -28,6 +29,7 @@ from crewai.hooks.llm_hooks import (
get_before_llm_call_hooks,
)
from crewai.utilities.agent_utils import (
aget_llm_response,
enforce_rpm_limit,
format_message_for_llm,
get_llm_response,
@@ -43,10 +45,15 @@ from crewai.utilities.agent_utils import (
from crewai.utilities.constants import TRAINING_DATA_FILE
from crewai.utilities.i18n import I18N, get_i18n
from crewai.utilities.printer import Printer
from crewai.utilities.tool_utils import execute_tool_and_check_finality
from crewai.utilities.tool_utils import (
aexecute_tool_and_check_finality,
execute_tool_and_check_finality,
)
from crewai.utilities.training_handler import CrewTrainingHandler
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from crewai.agent import Agent
from crewai.agents.tools_handler import ToolsHandler
@@ -87,6 +94,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
request_within_rpm_limit: Callable[[], bool] | None = None,
callbacks: list[Any] | None = None,
response_model: type[BaseModel] | None = None,
i18n: I18N | None = None,
) -> None:
"""Initialize executor.
@@ -110,7 +118,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
callbacks: Optional callbacks list.
response_model: Optional Pydantic model for structured outputs.
"""
self._i18n: I18N = get_i18n()
self._i18n: I18N = i18n or get_i18n()
self.llm = llm
self.task = task
self.agent = agent
@@ -134,8 +142,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self.messages: list[LLMMessage] = []
self.iterations = 0
self.log_error_after = 3
self.before_llm_call_hooks: list[Callable] = []
self.after_llm_call_hooks: list[Callable] = []
self.before_llm_call_hooks: list[Callable[..., Any]] = []
self.after_llm_call_hooks: list[Callable[..., Any]] = []
self.before_llm_call_hooks.extend(get_before_llm_call_hooks())
self.after_llm_call_hooks.extend(get_after_llm_call_hooks())
if self.llm:
@@ -312,6 +320,154 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self._show_logs(formatted_answer)
return formatted_answer
async def ainvoke(self, inputs: dict[str, Any]) -> dict[str, Any]:
"""Execute the agent asynchronously with given inputs.
Args:
inputs: Input dictionary containing prompt variables.
Returns:
Dictionary with agent output.
"""
if "system" in self.prompt:
system_prompt = self._format_prompt(
cast(str, self.prompt.get("system", "")), inputs
)
user_prompt = self._format_prompt(
cast(str, self.prompt.get("user", "")), inputs
)
self.messages.append(format_message_for_llm(system_prompt, role="system"))
self.messages.append(format_message_for_llm(user_prompt))
else:
user_prompt = self._format_prompt(self.prompt.get("prompt", ""), inputs)
self.messages.append(format_message_for_llm(user_prompt))
self._show_start_logs()
self.ask_for_human_input = bool(inputs.get("ask_for_human_input", False))
try:
formatted_answer = await self._ainvoke_loop()
except AssertionError:
self._printer.print(
content="Agent failed to reach a final answer. This is likely a bug - please report it.",
color="red",
)
raise
except Exception as e:
handle_unknown_error(self._printer, e)
raise
if self.ask_for_human_input:
formatted_answer = self._handle_human_feedback(formatted_answer)
self._create_short_term_memory(formatted_answer)
self._create_long_term_memory(formatted_answer)
self._create_external_memory(formatted_answer)
return {"output": formatted_answer.output}
async def _ainvoke_loop(self) -> AgentFinish:
"""Execute agent loop asynchronously until completion.
Returns:
Final answer from the agent.
"""
formatted_answer = None
while not isinstance(formatted_answer, AgentFinish):
try:
if has_reached_max_iterations(self.iterations, self.max_iter):
formatted_answer = handle_max_iterations_exceeded(
formatted_answer,
printer=self._printer,
i18n=self._i18n,
messages=self.messages,
llm=self.llm,
callbacks=self.callbacks,
)
break
enforce_rpm_limit(self.request_within_rpm_limit)
answer = await aget_llm_response(
llm=self.llm,
messages=self.messages,
callbacks=self.callbacks,
printer=self._printer,
from_task=self.task,
from_agent=self.agent,
response_model=self.response_model,
executor_context=self,
)
formatted_answer = process_llm_response(answer, self.use_stop_words) # type: ignore[assignment]
if isinstance(formatted_answer, AgentAction):
fingerprint_context = {}
if (
self.agent
and hasattr(self.agent, "security_config")
and hasattr(self.agent.security_config, "fingerprint")
):
fingerprint_context = {
"agent_fingerprint": str(
self.agent.security_config.fingerprint
)
}
tool_result = await aexecute_tool_and_check_finality(
agent_action=formatted_answer,
fingerprint_context=fingerprint_context,
tools=self.tools,
i18n=self._i18n,
agent_key=self.agent.key if self.agent else None,
agent_role=self.agent.role if self.agent else None,
tools_handler=self.tools_handler,
task=self.task,
agent=self.agent,
function_calling_llm=self.function_calling_llm,
crew=self.crew,
)
formatted_answer = self._handle_agent_action(
formatted_answer, tool_result
)
self._invoke_step_callback(formatted_answer) # type: ignore[arg-type]
self._append_message(formatted_answer.text) # type: ignore[union-attr,attr-defined]
except OutputParserError as e:
formatted_answer = handle_output_parser_exception( # type: ignore[assignment]
e=e,
messages=self.messages,
iterations=self.iterations,
log_error_after=self.log_error_after,
printer=self._printer,
)
except Exception as e:
if e.__class__.__module__.startswith("litellm"):
raise e
if is_context_length_exceeded(e):
handle_context_length(
respect_context_window=self.respect_context_window,
printer=self._printer,
messages=self.messages,
llm=self.llm,
callbacks=self.callbacks,
i18n=self._i18n,
)
continue
handle_unknown_error(self._printer, e)
raise e
finally:
self.iterations += 1
if not isinstance(formatted_answer, AgentFinish):
raise RuntimeError(
"Agent execution ended without reaching a final answer. "
f"Got {type(formatted_answer).__name__} instead of AgentFinish."
)
self._show_logs(formatted_answer)
return formatted_answer
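A usage sketch for the new async executor path; executor and task_prompt are assumed to exist, and the input keys mirror _aexecute_without_timeout shown earlier in this diff:
async def run_executor() -> str:
    output = await executor.ainvoke(
        {
            "input": task_prompt,
            "tool_names": executor.tools_names,
            "tools": executor.tools_description,
            "ask_for_human_input": False,
        }
    )
    return output["output"]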
def _handle_agent_action(
self, formatted_answer: AgentAction, tool_result: ToolResult
) -> AgentAction | AgentFinish:
@@ -388,7 +544,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
if self.agent is None:
raise ValueError("Agent cannot be None")
crewai_event_bus.emit(
future = crewai_event_bus.emit(
self.agent,
AgentLogsExecutionEvent(
agent_role=self.agent.role,
@@ -398,6 +554,12 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
),
)
if future is not None:
try:
future.result(timeout=5.0)
except Exception as e:
logger.error(f"Failed to show logs for agent execution event: {e}")
def _handle_crew_training_output(
self, result: AgentFinish, human_feedback: str | None = None
) -> None:


@@ -149,7 +149,9 @@ class AuthenticationCommand:
return
if token_data["error"] not in ("authorization_pending", "slow_down"):
raise requests.HTTPError(token_data["error_description"])
raise requests.HTTPError(
token_data.get("error_description") or token_data.get("error")
)
time.sleep(device_code_data["interval"])
attempts += 1


@@ -14,7 +14,8 @@ import tomli
from crewai.cli.utils import read_toml
from crewai.cli.version import get_crewai_version
from crewai.crew import Crew
from crewai.llm import LLM, BaseLLM
from crewai.llm import LLM
from crewai.llms.base_llm import BaseLLM
from crewai.types.crew_chat import ChatInputField, ChatInputs
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.printer import Printer
@@ -27,7 +28,7 @@ MIN_REQUIRED_VERSION: Final[Literal["0.98.0"]] = "0.98.0"
def check_conversational_crews_version(
crewai_version: str, pyproject_data: dict
crewai_version: str, pyproject_data: dict[str, Any]
) -> bool:
"""
Check if the installed crewAI version supports conversational crews.
@@ -53,7 +54,7 @@ def check_conversational_crews_version(
return True
def run_chat():
def run_chat() -> None:
"""
Runs an interactive chat loop using the Crew's chat LLM with function calling.
Incorporates crew_name, crew_description, and input fields to build a tool schema.
@@ -101,7 +102,7 @@ def run_chat():
click.secho(f"Assistant: {introductory_message}\n", fg="green")
messages = [
messages: list[LLMMessage] = [
{"role": "system", "content": system_message},
{"role": "assistant", "content": introductory_message},
]
@@ -113,7 +114,7 @@ def run_chat():
chat_loop(chat_llm, messages, crew_tool_schema, available_functions)
def show_loading(event: threading.Event):
def show_loading(event: threading.Event) -> None:
"""Display animated loading dots while processing."""
while not event.is_set():
_printer.print(".", end="")
@@ -162,23 +163,23 @@ def build_system_message(crew_chat_inputs: ChatInputs) -> str:
)
def create_tool_function(crew: Crew, messages: list[dict[str, str]]) -> Any:
def create_tool_function(crew: Crew, messages: list[LLMMessage]) -> Any:
"""Creates a wrapper function for running the crew tool with messages."""
def run_crew_tool_with_messages(**kwargs):
def run_crew_tool_with_messages(**kwargs: Any) -> str:
return run_crew_tool(crew, messages, **kwargs)
return run_crew_tool_with_messages
def flush_input():
def flush_input() -> None:
"""Flush any pending input from the user."""
if platform.system() == "Windows":
# Windows platform
import msvcrt
while msvcrt.kbhit():
msvcrt.getch()
while msvcrt.kbhit(): # type: ignore[attr-defined]
msvcrt.getch() # type: ignore[attr-defined]
else:
# Unix-like platforms (Linux, macOS)
import termios
@@ -186,7 +187,12 @@ def flush_input():
termios.tcflush(sys.stdin, termios.TCIFLUSH)
def chat_loop(chat_llm, messages, crew_tool_schema, available_functions):
def chat_loop(
chat_llm: LLM | BaseLLM,
messages: list[LLMMessage],
crew_tool_schema: dict[str, Any],
available_functions: dict[str, Any],
) -> None:
"""Main chat loop for interacting with the user."""
while True:
try:
@@ -225,7 +231,7 @@ def get_user_input() -> str:
def handle_user_input(
user_input: str,
chat_llm: LLM,
chat_llm: LLM | BaseLLM,
messages: list[LLMMessage],
crew_tool_schema: dict[str, Any],
available_functions: dict[str, Any],
@@ -255,7 +261,7 @@ def handle_user_input(
click.secho(f"\nAssistant: {final_response}\n", fg="green")
def generate_crew_tool_schema(crew_inputs: ChatInputs) -> dict:
def generate_crew_tool_schema(crew_inputs: ChatInputs) -> dict[str, Any]:
"""
Dynamically build a LiteLLM 'function' schema for the given crew.
@@ -286,7 +292,7 @@ def generate_crew_tool_schema(crew_inputs: ChatInputs) -> dict:
}
def run_crew_tool(crew: Crew, messages: list[dict[str, str]], **kwargs):
def run_crew_tool(crew: Crew, messages: list[LLMMessage], **kwargs: Any) -> str:
"""
Runs the crew using crew.kickoff(inputs=kwargs) and returns the output.
@@ -372,7 +378,9 @@ def load_crew_and_name() -> tuple[Crew, str]:
return crew_instance, crew_class_name
def generate_crew_chat_inputs(crew: Crew, crew_name: str, chat_llm) -> ChatInputs:
def generate_crew_chat_inputs(
crew: Crew, crew_name: str, chat_llm: LLM | BaseLLM
) -> ChatInputs:
"""
Generates the ChatInputs required for the crew by analyzing the tasks and agents.
@@ -410,23 +418,12 @@ def fetch_required_inputs(crew: Crew) -> set[str]:
Returns:
Set[str]: A set of placeholder names.
"""
placeholder_pattern = re.compile(r"\{(.+?)}")
required_inputs: set[str] = set()
# Scan tasks
for task in crew.tasks:
text = f"{task.description or ''} {task.expected_output or ''}"
required_inputs.update(placeholder_pattern.findall(text))
# Scan agents
for agent in crew.agents:
text = f"{agent.role or ''} {agent.goal or ''} {agent.backstory or ''}"
required_inputs.update(placeholder_pattern.findall(text))
return required_inputs
return crew.fetch_inputs()
def generate_input_description_with_ai(input_name: str, crew: Crew, chat_llm) -> str:
def generate_input_description_with_ai(
input_name: str, crew: Crew, chat_llm: LLM | BaseLLM
) -> str:
"""
Generates an input description using AI based on the context of the crew.
@@ -484,10 +481,10 @@ def generate_input_description_with_ai(input_name: str, crew: Crew, chat_llm) ->
f"{context}"
)
response = chat_llm.call(messages=[{"role": "user", "content": prompt}])
return response.strip()
return str(response).strip()
def generate_crew_description_with_ai(crew: Crew, chat_llm) -> str:
def generate_crew_description_with_ai(crew: Crew, chat_llm: LLM | BaseLLM) -> str:
"""
Generates a brief description of the crew using AI.
@@ -534,4 +531,4 @@ def generate_crew_description_with_ai(crew: Crew, chat_llm) -> str:
f"{context}"
)
response = chat_llm.call(messages=[{"role": "user", "content": prompt}])
return response.strip()
return str(response).strip()


@@ -1,6 +1,6 @@
from typing import Any
from urllib.parse import urljoin
import os
import requests
from crewai.cli.config import Settings
@@ -33,9 +33,7 @@ class PlusAPI:
if settings.org_uuid:
self.headers["X-Crewai-Organization-Id"] = settings.org_uuid
self.base_url = (
str(settings.enterprise_base_url) or DEFAULT_CREWAI_ENTERPRISE_URL
)
self.base_url = (
    os.getenv("CREWAI_PLUS_URL")
    or str(settings.enterprise_base_url)
    or DEFAULT_CREWAI_ENTERPRISE_URL
)
def _make_request(
self, method: str, endpoint: str, **kwargs: Any


@@ -3,103 +3,56 @@ import json
import os
from pathlib import Path
import sys
from typing import BinaryIO, cast
import tempfile
from typing import Final, Literal, cast
from cryptography.fernet import Fernet
if sys.platform == "win32":
import msvcrt
else:
import fcntl
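# A Fernet key is 32 random bytes, base64url-encoded, which is always 44 ASCII characters.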
_FERNET_KEY_LENGTH: Final[Literal[44]] = 44
class TokenManager:
def __init__(self, file_path: str = "tokens.enc") -> None:
"""
Initialize the TokenManager class.
"""Manages encrypted token storage."""
:param file_path: The file path to store the encrypted tokens. Default is "tokens.enc".
def __init__(self, file_path: str = "tokens.enc") -> None:
"""Initialize the TokenManager.
Args:
file_path: The file path to store encrypted tokens.
"""
self.file_path = file_path
self.key = self._get_or_create_key()
self.fernet = Fernet(self.key)
@staticmethod
def _acquire_lock(file_handle: BinaryIO) -> None:
"""
Acquire an exclusive lock on a file handle.
Args:
file_handle: Open file handle to lock.
"""
if sys.platform == "win32":
msvcrt.locking(file_handle.fileno(), msvcrt.LK_LOCK, 1)
else:
fcntl.flock(file_handle.fileno(), fcntl.LOCK_EX)
@staticmethod
def _release_lock(file_handle: BinaryIO) -> None:
"""
Release the lock on a file handle.
Args:
file_handle: Open file handle to unlock.
"""
if sys.platform == "win32":
msvcrt.locking(file_handle.fileno(), msvcrt.LK_UNLCK, 1)
else:
fcntl.flock(file_handle.fileno(), fcntl.LOCK_UN)
def _get_or_create_key(self) -> bytes:
"""
Get or create the encryption key with file locking to prevent race conditions.
"""Get or create the encryption key.
Returns:
The encryption key.
The encryption key as bytes.
"""
key_filename = "secret.key"
storage_path = self.get_secure_storage_path()
key_filename: str = "secret.key"
key = self.read_secure_file(key_filename)
if key is not None and len(key) == 44:
key = self._read_secure_file(key_filename)
if key is not None and len(key) == _FERNET_KEY_LENGTH:
return key
lock_file_path = storage_path / f"{key_filename}.lock"
try:
lock_file_path.touch()
with open(lock_file_path, "r+b") as lock_file:
self._acquire_lock(lock_file)
try:
key = self.read_secure_file(key_filename)
if key is not None and len(key) == 44:
return key
new_key = Fernet.generate_key()
self.save_secure_file(key_filename, new_key)
return new_key
finally:
try:
self._release_lock(lock_file)
except OSError:
pass
except OSError:
key = self.read_secure_file(key_filename)
if key is not None and len(key) == 44:
return key
new_key = Fernet.generate_key()
self.save_secure_file(key_filename, new_key)
new_key = Fernet.generate_key()
if self._atomic_create_secure_file(key_filename, new_key):
return new_key
def save_tokens(self, access_token: str, expires_at: int) -> None:
"""
Save the access token and its expiration time.
key = self._read_secure_file(key_filename)
if key is not None and len(key) == _FERNET_KEY_LENGTH:
return key
:param access_token: The access token to save.
:param expires_at: The UNIX timestamp of the expiration time.
raise RuntimeError("Failed to create or read encryption key")
def save_tokens(self, access_token: str, expires_at: int) -> None:
"""Save the access token and its expiration time.
Args:
access_token: The access token to save.
expires_at: The UNIX timestamp of the expiration time.
"""
expiration_time = datetime.fromtimestamp(expires_at)
data = {
@@ -107,15 +60,15 @@ class TokenManager:
"expiration": expiration_time.isoformat(),
}
encrypted_data = self.fernet.encrypt(json.dumps(data).encode())
self.save_secure_file(self.file_path, encrypted_data)
self._atomic_write_secure_file(self.file_path, encrypted_data)
def get_token(self) -> str | None:
"""
Get the access token if it is valid and not expired.
"""Get the access token if it is valid and not expired.
:return: The access token if valid and not expired, otherwise None.
Returns:
The access token if valid and not expired, otherwise None.
"""
encrypted_data = self.read_secure_file(self.file_path)
encrypted_data = self._read_secure_file(self.file_path)
if encrypted_data is None:
return None
@@ -126,20 +79,18 @@ class TokenManager:
if expiration <= datetime.now():
return None
return cast(str | None, data["access_token"])
return cast(str | None, data.get("access_token"))
def clear_tokens(self) -> None:
"""
Clear the tokens.
"""
self.delete_secure_file(self.file_path)
"""Clear the stored tokens."""
self._delete_secure_file(self.file_path)
@staticmethod
def get_secure_storage_path() -> Path:
"""
Get the secure storage path based on the operating system.
def _get_secure_storage_path() -> Path:
"""Get the secure storage path based on the operating system.
:return: The secure storage path.
Returns:
The secure storage path.
"""
if sys.platform == "win32":
base_path = os.environ.get("LOCALAPPDATA")
@@ -155,44 +106,81 @@ class TokenManager:
return storage_path
def save_secure_file(self, filename: str, content: bytes) -> None:
"""
Save the content to a secure file.
def _atomic_create_secure_file(self, filename: str, content: bytes) -> bool:
"""Create a file only if it doesn't exist.
:param filename: The name of the file.
:param content: The content to save.
Args:
filename: The name of the file.
content: The content to write.
Returns:
True if the file was created, False if it already exists.
"""
storage_path = self.get_secure_storage_path()
storage_path = self._get_secure_storage_path()
file_path = storage_path / filename
with open(file_path, "wb") as f:
f.write(content)
try:
fd = os.open(file_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
try:
os.write(fd, content)
finally:
os.close(fd)
return True
except FileExistsError:
return False
os.chmod(file_path, 0o600)
def _atomic_write_secure_file(self, filename: str, content: bytes) -> None:
"""Write content to a secure file.
def read_secure_file(self, filename: str) -> bytes | None:
Args:
filename: The name of the file.
content: The content to write.
"""
Read the content of a secure file.
:param filename: The name of the file.
:return: The content of the file if it exists, otherwise None.
"""
storage_path = self.get_secure_storage_path()
storage_path = self._get_secure_storage_path()
file_path = storage_path / filename
if not file_path.exists():
fd, temp_path = tempfile.mkstemp(dir=storage_path, prefix=f".{filename}.")
fd_closed = False
try:
os.write(fd, content)
os.close(fd)
fd_closed = True
os.chmod(temp_path, 0o600)
os.replace(temp_path, file_path)
except Exception:
if not fd_closed:
os.close(fd)
if os.path.exists(temp_path):
os.unlink(temp_path)
raise
def _read_secure_file(self, filename: str) -> bytes | None:
"""Read the content of a secure file.
Args:
filename: The name of the file.
Returns:
The content of the file if it exists, otherwise None.
"""
storage_path = self._get_secure_storage_path()
file_path = storage_path / filename
try:
with open(file_path, "rb") as f:
return f.read()
except FileNotFoundError:
return None
with open(file_path, "rb") as f:
return f.read()
def _delete_secure_file(self, filename: str) -> None:
"""Delete a secure file.
def delete_secure_file(self, filename: str) -> None:
Args:
filename: The name of the file.
"""
Delete the secure file.
:param filename: The name of the file.
"""
storage_path = self.get_secure_storage_path()
storage_path = self._get_secure_storage_path()
file_path = storage_path / filename
if file_path.exists():
file_path.unlink(missing_ok=True)
try:
file_path.unlink()
except FileNotFoundError:
pass
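The removed locking machinery is replaced by two lock-free patterns: create-if-absent and write-then-rename. Both reduced to a self-contained sketch (POSIX-style; os.replace is also atomic on Windows):

import os
import tempfile
from pathlib import Path

def atomic_create(path: Path, content: bytes) -> bool:
    """Create path with content only if it does not exist yet."""
    try:
        # O_CREAT | O_EXCL fails atomically if the file already exists,
        # so two racing processes cannot both believe they created it.
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
    except FileExistsError:
        return False
    try:
        os.write(fd, content)
    finally:
        os.close(fd)
    return True

def atomic_write(path: Path, content: bytes) -> None:
    """Replace path's content without readers ever seeing a partial file."""
    fd, tmp = tempfile.mkstemp(dir=path.parent, prefix=f".{path.name}.")
    try:
        with os.fdopen(fd, "wb") as f:  # takes ownership of fd and closes it
            f.write(content)
        os.chmod(tmp, 0o600)
        os.replace(tmp, path)  # atomic rename over the destination
    except Exception:
        if os.path.exists(tmp):
            os.unlink(tmp)
        raise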

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.6.1"
"crewai[tools]==1.7.2"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.6.1"
"crewai[tools]==1.7.2"
]
[project.scripts]

View File

@@ -1,4 +1,5 @@
import base64
from json import JSONDecodeError
import os
from pathlib import Path
import subprocess
@@ -11,6 +12,7 @@ from rich.console import Console
from crewai.cli import git
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli.config import Settings
from crewai.cli.constants import DEFAULT_CREWAI_ENTERPRISE_URL
from crewai.cli.utils import (
build_env_with_tool_repository_credentials,
extract_available_exports,
@@ -130,10 +132,13 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
self._validate_response(publish_response)
published_handle = publish_response.json()["handle"]
settings = Settings()
base_url = settings.enterprise_base_url or DEFAULT_CREWAI_ENTERPRISE_URL
console.print(
f"Successfully published `{published_handle}` ({project_version}).\n\n"
+ "⚠️ Security checks are running in the background. Your tool will be available once these are complete.\n"
+ f"You can monitor the status or access your tool here:\nhttps://app.crewai.com/crewai_plus/tools/{published_handle}",
+ f"You can monitor the status or access your tool here:\n{base_url}/crewai_plus/tools/{published_handle}",
style="bold green",
)
@@ -162,9 +167,19 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
if login_response.status_code != 200:
console.print(
"Authentication failed. Verify if the currently active organization access to the tool repository, and run 'crewai login' again. ",
"Authentication failed. Verify if the currently active organization can access the tool repository, and run 'crewai login' again.",
style="bold red",
)
try:
console.print(
f"[{login_response.status_code} error - {login_response.json().get('message', 'Unknown error')}]",
style="bold red italic",
)
except JSONDecodeError:
console.print(
f"[{login_response.status_code} error - Unknown error - Invalid JSON response]",
style="bold red italic",
)
raise SystemExit
login_response_json = login_response.json()
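The try/except added around login_response.json() generalizes to any endpoint that may return a non-JSON error body; a minimal sketch assuming a requests-style response object:

from json import JSONDecodeError

def error_detail(response) -> str:
    # Prefer the server-supplied message; fall back when the body is not JSON.
    try:
        return response.json().get("message", "Unknown error")
    except JSONDecodeError:
        return "Unknown error - Invalid JSON response"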

View File

@@ -35,6 +35,14 @@ from crewai.agent import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.crews.crew_output import CrewOutput
from crewai.crews.utils import (
StreamingContext,
check_conditional_skip,
enable_agent_streaming,
prepare_kickoff,
prepare_task_execution,
run_for_each_async,
)
from crewai.events.event_bus import crewai_event_bus
from crewai.events.event_listener import EventListener
from crewai.events.listeners.tracing.trace_listener import (
@@ -47,7 +55,6 @@ from crewai.events.listeners.tracing.utils import (
from crewai.events.types.crew_events import (
CrewKickoffCompletedEvent,
CrewKickoffFailedEvent,
CrewKickoffStartedEvent,
CrewTestCompletedEvent,
CrewTestFailedEvent,
CrewTestStartedEvent,
@@ -74,7 +81,7 @@ from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.tools.base_tool import BaseTool
from crewai.types.streaming import CrewStreamingOutput, FlowStreamingOutput
from crewai.types.streaming import CrewStreamingOutput
from crewai.types.usage_metrics import UsageMetrics
from crewai.utilities.constants import NOT_SPECIFIED, TRAINING_DATA_FILE
from crewai.utilities.crew.models import CrewContext
@@ -92,10 +99,8 @@ from crewai.utilities.planning_handler import CrewPlanner
from crewai.utilities.printer import PrinterColor
from crewai.utilities.rpm_controller import RPMController
from crewai.utilities.streaming import (
TaskInfo,
create_async_chunk_generator,
create_chunk_generator,
create_streaming_state,
signal_end,
signal_error,
)
@@ -268,7 +273,7 @@ class Crew(FlowTrackable, BaseModel):
description="list of file paths for task execution JSON files.",
)
execution_logs: list[dict[str, Any]] = Field(
default=[],
default_factory=list,
description="list of execution logs for tasks",
)
knowledge_sources: list[BaseKnowledgeSource] | None = Field(
@@ -327,7 +332,7 @@ class Crew(FlowTrackable, BaseModel):
def set_private_attrs(self) -> Crew:
"""set private attributes."""
self._cache_handler = CacheHandler()
event_listener = EventListener() # type: ignore[no-untyped-call]
event_listener = EventListener()
# Determine and set tracing state once for this execution
tracing_enabled = should_enable_tracing(override=self.tracing)
@@ -348,12 +353,12 @@ class Crew(FlowTrackable, BaseModel):
return self
def _initialize_default_memories(self) -> None:
self._long_term_memory = self._long_term_memory or LongTermMemory() # type: ignore[no-untyped-call]
self._short_term_memory = self._short_term_memory or ShortTermMemory( # type: ignore[no-untyped-call]
self._long_term_memory = self._long_term_memory or LongTermMemory()
self._short_term_memory = self._short_term_memory or ShortTermMemory(
crew=self,
embedder_config=self.embedder,
)
self._entity_memory = self.entity_memory or EntityMemory( # type: ignore[no-untyped-call]
self._entity_memory = self.entity_memory or EntityMemory(
crew=self, embedder_config=self.embedder
)
@@ -404,8 +409,7 @@ class Crew(FlowTrackable, BaseModel):
raise PydanticCustomError(
"missing_manager_llm_or_manager_agent",
(
"Attribute `manager_llm` or `manager_agent` is required "
"when using hierarchical process."
"Attribute `manager_llm` or `manager_agent` is required when using hierarchical process."
),
{},
)
@@ -511,10 +515,9 @@ class Crew(FlowTrackable, BaseModel):
raise PydanticCustomError(
"invalid_async_conditional_task",
(
f"Conditional Task: {task.description}, "
f"cannot be executed asynchronously."
"Conditional Task: {description}, cannot be executed asynchronously."
),
{},
{"description": task.description},
)
return self
@@ -675,21 +678,8 @@ class Crew(FlowTrackable, BaseModel):
inputs: dict[str, Any] | None = None,
) -> CrewOutput | CrewStreamingOutput:
if self.stream:
for agent in self.agents:
if agent.llm is not None:
agent.llm.stream = True
result_holder: list[CrewOutput] = []
current_task_info: TaskInfo = {
"index": 0,
"name": "",
"id": "",
"agent_role": "",
"agent_id": "",
}
state = create_streaming_state(current_task_info, result_holder)
output_holder: list[CrewStreamingOutput | FlowStreamingOutput] = []
enable_agent_streaming(self.agents)
ctx = StreamingContext()
def run_crew() -> None:
"""Execute the crew and capture the result."""
@@ -697,59 +687,28 @@ class Crew(FlowTrackable, BaseModel):
self.stream = False
crew_result = self.kickoff(inputs=inputs)
if isinstance(crew_result, CrewOutput):
result_holder.append(crew_result)
ctx.result_holder.append(crew_result)
except Exception as exc:
signal_error(state, exc)
signal_error(ctx.state, exc)
finally:
self.stream = True
signal_end(state)
signal_end(ctx.state)
streaming_output = CrewStreamingOutput(
sync_iterator=create_chunk_generator(state, run_crew, output_holder)
sync_iterator=create_chunk_generator(
ctx.state, run_crew, ctx.output_holder
)
)
output_holder.append(streaming_output)
ctx.output_holder.append(streaming_output)
return streaming_output
ctx = baggage.set_baggage(
baggage_ctx = baggage.set_baggage(
"crew_context", CrewContext(id=str(self.id), key=self.key)
)
token = attach(ctx)
token = attach(baggage_ctx)
try:
for before_callback in self.before_kickoff_callbacks:
if inputs is None:
inputs = {}
inputs = before_callback(inputs)
crewai_event_bus.emit(
self,
CrewKickoffStartedEvent(crew_name=self.name, inputs=inputs),
)
# Starts the crew to work on its assigned tasks.
self._task_output_handler.reset()
self._logging_color = "bold_purple"
if inputs is not None:
self._inputs = inputs
self._interpolate_inputs(inputs)
self._set_tasks_callbacks()
self._set_allow_crewai_trigger_context_for_first_task()
for agent in self.agents:
agent.crew = self
agent.set_knowledge(crew_embedder=self.embedder)
# TODO: Create an AgentFunctionCalling protocol for future refactoring
if not agent.function_calling_llm: # type: ignore # "BaseAgent" has no attribute "function_calling_llm"
agent.function_calling_llm = self.function_calling_llm # type: ignore # "BaseAgent" has no attribute "function_calling_llm"
if not agent.step_callback: # type: ignore # "BaseAgent" has no attribute "step_callback"
agent.step_callback = self.step_callback # type: ignore # "BaseAgent" has no attribute "step_callback"
agent.create_agent_executor()
if self.planning:
self._handle_crew_planning()
inputs = prepare_kickoff(self, inputs)
if self.process == Process.sequential:
result = self._run_sequential_process()
@@ -814,42 +773,27 @@ class Crew(FlowTrackable, BaseModel):
inputs = inputs or {}
if self.stream:
for agent in self.agents:
if agent.llm is not None:
agent.llm.stream = True
result_holder: list[CrewOutput] = []
current_task_info: TaskInfo = {
"index": 0,
"name": "",
"id": "",
"agent_role": "",
"agent_id": "",
}
state = create_streaming_state(
current_task_info, result_holder, use_async=True
)
output_holder: list[CrewStreamingOutput | FlowStreamingOutput] = []
enable_agent_streaming(self.agents)
ctx = StreamingContext(use_async=True)
async def run_crew() -> None:
try:
self.stream = False
result = await asyncio.to_thread(self.kickoff, inputs)
if isinstance(result, CrewOutput):
result_holder.append(result)
ctx.result_holder.append(result)
except Exception as e:
signal_error(state, e, is_async=True)
signal_error(ctx.state, e, is_async=True)
finally:
self.stream = True
signal_end(state, is_async=True)
signal_end(ctx.state, is_async=True)
streaming_output = CrewStreamingOutput(
async_iterator=create_async_chunk_generator(
state, run_crew, output_holder
ctx.state, run_crew, ctx.output_holder
)
)
output_holder.append(streaming_output)
ctx.output_holder.append(streaming_output)
return streaming_output
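Consuming a streaming kickoff, given the behavior above, looks roughly like this (a usage sketch; it assumes .result is populated once iteration completes, as the consume_stream helpers elsewhere in this diff rely on):

def consume_stream_sync(crew, inputs: dict) -> "CrewOutput":
    # Requires crew.stream = True so kickoff returns a CrewStreamingOutput.
    streaming = crew.kickoff(inputs=inputs)
    for chunk in streaming:      # use `async for` with kickoff_async instead
        print(chunk, end="")
    return streaming.result      # set once the background run finishes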
@@ -864,89 +808,207 @@ class Crew(FlowTrackable, BaseModel):
from all crews as they arrive. After iteration, access results via .results
(list of CrewOutput).
"""
crew_copies = [self.copy() for _ in inputs]
async def kickoff_fn(
crew: Crew, input_data: dict[str, Any]
) -> CrewOutput | CrewStreamingOutput:
return await crew.kickoff_async(inputs=input_data)
return await run_for_each_async(self, inputs, kickoff_fn)
async def akickoff(
self, inputs: dict[str, Any] | None = None
) -> CrewOutput | CrewStreamingOutput:
"""Native async kickoff method using async task execution throughout.
Unlike kickoff_async, which wraps the sync kickoff in a thread, this method
uses native async/await for all operations, including task execution,
memory operations, and knowledge queries.
"""
if self.stream:
result_holder: list[list[CrewOutput]] = [[]]
current_task_info: TaskInfo = {
"index": 0,
"name": "",
"id": "",
"agent_role": "",
"agent_id": "",
}
enable_agent_streaming(self.agents)
ctx = StreamingContext(use_async=True)
state = create_streaming_state(
current_task_info, result_holder, use_async=True
)
output_holder: list[CrewStreamingOutput | FlowStreamingOutput] = []
async def run_all_crews() -> None:
"""Run all crew copies and aggregate their streaming outputs."""
async def run_crew() -> None:
try:
streaming_outputs: list[CrewStreamingOutput] = []
for i, crew in enumerate(crew_copies):
streaming = await crew.kickoff_async(inputs=inputs[i])
if isinstance(streaming, CrewStreamingOutput):
streaming_outputs.append(streaming)
async def consume_stream(
stream_output: CrewStreamingOutput,
) -> CrewOutput:
"""Consume stream chunks and forward to parent queue.
Args:
stream_output: The streaming output to consume.
Returns:
The final CrewOutput result.
"""
async for chunk in stream_output:
if state.async_queue is not None and state.loop is not None:
state.loop.call_soon_threadsafe(
state.async_queue.put_nowait, chunk
)
return stream_output.result
crew_results = await asyncio.gather(
*[consume_stream(s) for s in streaming_outputs]
)
result_holder[0] = list(crew_results)
except Exception as e:
signal_error(state, e, is_async=True)
self.stream = False
inner_result = await self.akickoff(inputs)
if isinstance(inner_result, CrewOutput):
ctx.result_holder.append(inner_result)
except Exception as exc:
signal_error(ctx.state, exc, is_async=True)
finally:
signal_end(state, is_async=True)
self.stream = True
signal_end(ctx.state, is_async=True)
streaming_output = CrewStreamingOutput(
async_iterator=create_async_chunk_generator(
state, run_all_crews, output_holder
ctx.state, run_crew, ctx.output_holder
)
)
def set_results_wrapper(result: Any) -> None:
"""Wrap _set_results to match _set_result signature."""
streaming_output._set_results(result)
streaming_output._set_result = set_results_wrapper # type: ignore[method-assign]
output_holder.append(streaming_output)
ctx.output_holder.append(streaming_output)
return streaming_output
tasks = [
asyncio.create_task(crew_copy.kickoff_async(inputs=input_data))
for crew_copy, input_data in zip(crew_copies, inputs, strict=True)
]
baggage_ctx = baggage.set_baggage(
"crew_context", CrewContext(id=str(self.id), key=self.key)
)
token = attach(baggage_ctx)
results = await asyncio.gather(*tasks)
try:
inputs = prepare_kickoff(self, inputs)
total_usage_metrics = UsageMetrics()
for crew_copy in crew_copies:
if crew_copy.usage_metrics:
total_usage_metrics.add_usage_metrics(crew_copy.usage_metrics)
self.usage_metrics = total_usage_metrics
if self.process == Process.sequential:
result = await self._arun_sequential_process()
elif self.process == Process.hierarchical:
result = await self._arun_hierarchical_process()
else:
raise NotImplementedError(
f"The process '{self.process}' is not implemented yet."
)
self._task_output_handler.reset()
return list(results)
for after_callback in self.after_kickoff_callbacks:
result = after_callback(result)
self.usage_metrics = self.calculate_usage_metrics()
return result
except Exception as e:
crewai_event_bus.emit(
self,
CrewKickoffFailedEvent(error=str(e), crew_name=self.name),
)
raise
finally:
detach(token)
async def akickoff_for_each(
self, inputs: list[dict[str, Any]]
) -> list[CrewOutput | CrewStreamingOutput] | CrewStreamingOutput:
"""Native async execution of the Crew's workflow for each input.
Uses native async throughout rather than thread-based async.
If stream=True, returns a single CrewStreamingOutput that yields chunks
from all crews as they arrive.
"""
async def kickoff_fn(
crew: Crew, input_data: dict[str, Any]
) -> CrewOutput | CrewStreamingOutput:
return await crew.akickoff(inputs=input_data)
return await run_for_each_async(self, inputs, kickoff_fn)
async def _arun_sequential_process(self) -> CrewOutput:
"""Executes tasks sequentially using native async and returns the final output."""
return await self._aexecute_tasks(self.tasks)
async def _arun_hierarchical_process(self) -> CrewOutput:
"""Creates and assigns a manager agent to complete the tasks using native async."""
self._create_manager_agent()
return await self._aexecute_tasks(self.tasks)
async def _aexecute_tasks(
self,
tasks: list[Task],
start_index: int | None = 0,
was_replayed: bool = False,
) -> CrewOutput:
"""Executes tasks using native async and returns the final output.
Args:
tasks: List of tasks to execute
start_index: Index to start execution from (for replay)
was_replayed: Whether this is a replayed execution
Returns:
CrewOutput: Final output of the crew
"""
task_outputs: list[TaskOutput] = []
pending_tasks: list[tuple[Task, asyncio.Task[TaskOutput], int]] = []
last_sync_output: TaskOutput | None = None
for task_index, task in enumerate(tasks):
exec_data, task_outputs, last_sync_output = prepare_task_execution(
self, task, task_index, start_index, task_outputs, last_sync_output
)
if exec_data.should_skip:
continue
if isinstance(task, ConditionalTask):
skipped_task_output = await self._ahandle_conditional_task(
task, task_outputs, pending_tasks, task_index, was_replayed
)
if skipped_task_output:
task_outputs.append(skipped_task_output)
continue
if task.async_execution:
context = self._get_context(
task, [last_sync_output] if last_sync_output else []
)
async_task = asyncio.create_task(
task.aexecute_sync(
agent=exec_data.agent,
context=context,
tools=exec_data.tools,
)
)
pending_tasks.append((task, async_task, task_index))
else:
if pending_tasks:
task_outputs = await self._aprocess_async_tasks(
pending_tasks, was_replayed
)
pending_tasks.clear()
context = self._get_context(task, task_outputs)
task_output = await task.aexecute_sync(
agent=exec_data.agent,
context=context,
tools=exec_data.tools,
)
task_outputs.append(task_output)
self._process_task_result(task, task_output)
self._store_execution_log(task, task_output, task_index, was_replayed)
if pending_tasks:
task_outputs = await self._aprocess_async_tasks(pending_tasks, was_replayed)
return self._create_crew_output(task_outputs)
async def _ahandle_conditional_task(
self,
task: ConditionalTask,
task_outputs: list[TaskOutput],
pending_tasks: list[tuple[Task, asyncio.Task[TaskOutput], int]],
task_index: int,
was_replayed: bool,
) -> TaskOutput | None:
"""Handle conditional task evaluation using native async."""
if pending_tasks:
task_outputs = await self._aprocess_async_tasks(pending_tasks, was_replayed)
pending_tasks.clear()
return check_conditional_skip(
self, task, task_outputs, task_index, was_replayed
)
async def _aprocess_async_tasks(
self,
pending_tasks: list[tuple[Task, asyncio.Task[TaskOutput], int]],
was_replayed: bool = False,
) -> list[TaskOutput]:
"""Process pending async tasks and return their outputs."""
task_outputs: list[TaskOutput] = []
for future_task, async_task, task_index in pending_tasks:
task_output = await async_task
task_outputs.append(task_output)
self._process_task_result(future_task, task_output)
self._store_execution_log(
future_task, task_output, task_index, was_replayed
)
return task_outputs
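_aexecute_tasks keeps the same ordering contract as the sync path: async_execution tasks run concurrently, but all of them are awaited before the next synchronous task starts. The shape of that loop, reduced to a runnable sketch:

import asyncio
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    background: bool  # stands in for task.async_execution

async def run(job: Job) -> str:
    await asyncio.sleep(0)  # placeholder for real work
    return job.name

async def execute(jobs: list[Job]) -> list[str]:
    results: list[str] = []
    pending: list[asyncio.Task[str]] = []
    for job in jobs:
        if job.background:
            # Background jobs start immediately and run concurrently.
            pending.append(asyncio.create_task(run(job)))
            continue
        if pending:
            # Drain every backgrounded job before a synchronous step,
            # mirroring the ordering _aexecute_tasks enforces.
            results += await asyncio.gather(*pending)
            pending.clear()
        results.append(await run(job))
    if pending:
        results += await asyncio.gather(*pending)
    return results

# asyncio.run(execute([Job("a", True), Job("b", True), Job("c", False)]))
# -> ["a", "b", "c"]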
def _handle_crew_planning(self) -> None:
"""Handles the Crew planning."""
@@ -955,10 +1017,26 @@ class Crew(FlowTrackable, BaseModel):
tasks=self.tasks, planning_agent_llm=self.planning_llm
)._handle_crew_planning()
for task, step_plan in zip(
self.tasks, result.list_of_plans_per_task, strict=False
):
task.description += step_plan.plan
plan_map: dict[int, str] = {}
for step_plan in result.list_of_plans_per_task:
if step_plan.task_number in plan_map:
self._logger.log(
"warning",
f"Duplicate plan for Task Number {step_plan.task_number}, "
"using the first plan",
)
else:
plan_map[step_plan.task_number] = step_plan.plan
for idx, task in enumerate(self.tasks):
task_number = idx + 1
if task_number in plan_map:
task.description += plan_map[task_number]
else:
self._logger.log(
"warning",
f"No plan found for Task Number {task_number}",
)
def _store_execution_log(
self,
@@ -1048,33 +1126,11 @@ class Crew(FlowTrackable, BaseModel):
last_sync_output: TaskOutput | None = None
for task_index, task in enumerate(tasks):
if start_index is not None and task_index < start_index:
if task.output:
if task.async_execution:
task_outputs.append(task.output)
else:
task_outputs = [task.output]
last_sync_output = task.output
continue
agent_to_use = self._get_agent_to_use(task)
if agent_to_use is None:
raise ValueError(
f"No agent available for task: {task.description}. "
f"Ensure that either the task has an assigned agent "
f"or a manager agent is provided."
)
# Determine which tools to use - task tools take precedence over agent tools
tools_for_task = task.tools or agent_to_use.tools or []
# Prepare tools and ensure they're compatible with task execution
tools_for_task = self._prepare_tools(
agent_to_use,
task,
tools_for_task,
exec_data, task_outputs, last_sync_output = prepare_task_execution(
self, task, task_index, start_index, task_outputs, last_sync_output
)
self._log_task_start(task, agent_to_use.role)
if exec_data.should_skip:
continue
if isinstance(task, ConditionalTask):
skipped_task_output = self._handle_conditional_task(
@@ -1089,9 +1145,9 @@ class Crew(FlowTrackable, BaseModel):
task, [last_sync_output] if last_sync_output else []
)
future = task.execute_async(
agent=agent_to_use,
agent=exec_data.agent,
context=context,
tools=tools_for_task,
tools=exec_data.tools,
)
futures.append((task, future, task_index))
else:
@@ -1101,9 +1157,9 @@ class Crew(FlowTrackable, BaseModel):
context = self._get_context(task, task_outputs)
task_output = task.execute_sync(
agent=agent_to_use,
agent=exec_data.agent,
context=context,
tools=tools_for_task,
tools=exec_data.tools,
)
task_outputs.append(task_output)
self._process_task_result(task, task_output)
@@ -1126,19 +1182,9 @@ class Crew(FlowTrackable, BaseModel):
task_outputs = self._process_async_tasks(futures, was_replayed)
futures.clear()
previous_output = task_outputs[-1] if task_outputs else None
if previous_output is not None and not task.should_execute(previous_output):
self._logger.log(
"debug",
f"Skipping conditional task: {task.description}",
color="yellow",
)
skipped_task_output = task.get_skipped_task_output()
if not was_replayed:
self._store_execution_log(task, skipped_task_output, task_index)
return skipped_task_output
return None
return check_conditional_skip(
self, task, task_outputs, task_index, was_replayed
)
def _prepare_tools(
self, agent: BaseAgent, task: Task, tools: list[BaseTool]
@@ -1302,7 +1348,8 @@ class Crew(FlowTrackable, BaseModel):
)
return tools
def _get_context(self, task: Task, task_outputs: list[TaskOutput]) -> str:
@staticmethod
def _get_context(task: Task, task_outputs: list[TaskOutput]) -> str:
if not task.context:
return ""
@@ -1371,7 +1418,8 @@ class Crew(FlowTrackable, BaseModel):
)
return task_outputs
def _find_task_index(self, task_id: str, stored_outputs: list[Any]) -> int | None:
@staticmethod
def _find_task_index(task_id: str, stored_outputs: list[Any]) -> int | None:
return next(
(
index
@@ -1431,6 +1479,16 @@ class Crew(FlowTrackable, BaseModel):
)
return None
async def aquery_knowledge(
self, query: list[str], results_limit: int = 3, score_threshold: float = 0.35
) -> list[SearchResult] | None:
"""Query the crew's knowledge base for relevant information asynchronously."""
if self.knowledge:
return await self.knowledge.aquery(
query, results_limit=results_limit, score_threshold=score_threshold
)
return None
def fetch_inputs(self) -> set[str]:
"""
Gathers placeholders (e.g., {something}) referenced in tasks or agents.
@@ -1439,7 +1497,7 @@ class Crew(FlowTrackable, BaseModel):
Returns a set of all discovered placeholder names.
"""
placeholder_pattern = re.compile(r"\{(.+?)\}")
placeholder_pattern = re.compile(r"\{(.+?)}")
required_inputs: set[str] = set()
# Scan tasks for inputs
@@ -1687,6 +1745,32 @@ class Crew(FlowTrackable, BaseModel):
self._logger.log("error", error_msg)
raise RuntimeError(error_msg) from e
def _reset_memory_system(
self, system: Any, name: str, reset_fn: Callable[[Any], Any]
) -> None:
"""Reset a single memory system.
Args:
system: The memory system instance to reset.
name: Display name of the memory system for logging.
reset_fn: Function to call to reset the system.
Raises:
RuntimeError: If the reset operation fails.
"""
try:
reset_fn(system)
self._logger.log(
"info",
f"[Crew ({self.name if self.name else self.id})] "
f"{name} memory has been reset",
)
except Exception as e:
raise RuntimeError(
f"[Crew ({self.name if self.name else self.id})] "
f"Failed to reset {name} memory: {e!s}"
) from e
def _reset_all_memories(self) -> None:
"""Reset all available memory systems."""
memory_systems = self._get_memory_systems()
@@ -1694,21 +1778,10 @@ class Crew(FlowTrackable, BaseModel):
for config in memory_systems.values():
if (system := config.get("system")) is not None:
name = config.get("name")
try:
reset_fn: Callable[[Any], Any] = cast(
Callable[[Any], Any], config.get("reset")
)
reset_fn(system)
self._logger.log(
"info",
f"[Crew ({self.name if self.name else self.id})] "
f"{name} memory has been reset",
)
except Exception as e:
raise RuntimeError(
f"[Crew ({self.name if self.name else self.id})] "
f"Failed to reset {name} memory: {e!s}"
) from e
reset_fn: Callable[[Any], Any] = cast(
Callable[[Any], Any], config.get("reset")
)
self._reset_memory_system(system, name, reset_fn)
def _reset_specific_memory(self, memory_type: str) -> None:
"""Reset a specific memory system.
@@ -1727,21 +1800,8 @@ class Crew(FlowTrackable, BaseModel):
if system is None:
raise RuntimeError(f"{name} memory system is not initialized")
try:
reset_fn: Callable[[Any], Any] = cast(
Callable[[Any], Any], config.get("reset")
)
reset_fn(system)
self._logger.log(
"info",
f"[Crew ({self.name if self.name else self.id})] "
f"{name} memory has been reset",
)
except Exception as e:
raise RuntimeError(
f"[Crew ({self.name if self.name else self.id})] "
f"Failed to reset {name} memory: {e!s}"
) from e
reset_fn: Callable[[Any], Any] = cast(Callable[[Any], Any], config.get("reset"))
self._reset_memory_system(system, name, reset_fn)
def _get_memory_systems(self) -> dict[str, Any]:
"""Get all available memory systems with their configuration.
@@ -1829,7 +1889,8 @@ class Crew(FlowTrackable, BaseModel):
):
self.tasks[0].allow_crewai_trigger_context = True
def _show_tracing_disabled_message(self) -> None:
@staticmethod
def _show_tracing_disabled_message() -> None:
"""Show a message when tracing is disabled."""
from crewai.events.listeners.tracing.utils import has_user_declined_tracing

View File

@@ -0,0 +1,363 @@
"""Utility functions for crew operations."""
from __future__ import annotations
import asyncio
from collections.abc import Callable, Coroutine, Iterable
from typing import TYPE_CHECKING, Any
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.crews.crew_output import CrewOutput
from crewai.rag.embeddings.types import EmbedderConfig
from crewai.types.streaming import CrewStreamingOutput, FlowStreamingOutput
from crewai.utilities.streaming import (
StreamingState,
TaskInfo,
create_streaming_state,
)
if TYPE_CHECKING:
from crewai.crew import Crew
def enable_agent_streaming(agents: Iterable[BaseAgent]) -> None:
"""Enable streaming on all agents that have an LLM configured.
Args:
agents: Iterable of agents to enable streaming on.
"""
for agent in agents:
if agent.llm is not None:
agent.llm.stream = True
def setup_agents(
crew: Crew,
agents: Iterable[BaseAgent],
embedder: EmbedderConfig | None,
function_calling_llm: Any,
step_callback: Callable[..., Any] | None,
) -> None:
"""Set up agents for crew execution.
Args:
crew: The crew instance agents belong to.
agents: Iterable of agents to set up.
embedder: Embedder configuration for knowledge.
function_calling_llm: Default function calling LLM for agents.
step_callback: Default step callback for agents.
"""
for agent in agents:
agent.crew = crew
agent.set_knowledge(crew_embedder=embedder)
if not agent.function_calling_llm: # type: ignore[attr-defined]
agent.function_calling_llm = function_calling_llm # type: ignore[attr-defined]
if not agent.step_callback: # type: ignore[attr-defined]
agent.step_callback = step_callback # type: ignore[attr-defined]
agent.create_agent_executor()
class TaskExecutionData:
"""Data container for prepared task execution information."""
def __init__(
self,
agent: BaseAgent | None,
tools: list[Any],
should_skip: bool = False,
) -> None:
"""Initialize task execution data.
Args:
agent: The agent to use for task execution (None if skipped).
tools: Prepared tools for the task.
should_skip: Whether the task should be skipped (set during replay).
"""
self.agent = agent
self.tools = tools
self.should_skip = should_skip
def prepare_task_execution(
crew: Crew,
task: Any,
task_index: int,
start_index: int | None,
task_outputs: list[Any],
last_sync_output: Any | None,
) -> tuple[TaskExecutionData, list[Any], Any | None]:
"""Prepare a task for execution, handling replay skip logic and agent/tool setup.
Args:
crew: The crew instance.
task: The task to prepare.
task_index: Index of the current task.
start_index: Index to start execution from (for replay).
task_outputs: Current list of task outputs.
last_sync_output: Last synchronous task output.
Returns:
A tuple of (TaskExecutionData, updated task_outputs, updated last_sync_output).
If the task should be skipped, TaskExecutionData will have should_skip=True.
Raises:
ValueError: If no agent is available for the task.
"""
# Handle replay skip
if start_index is not None and task_index < start_index:
if task.output:
if task.async_execution:
task_outputs.append(task.output)
else:
task_outputs = [task.output]
last_sync_output = task.output
return (
TaskExecutionData(agent=None, tools=[], should_skip=True),
task_outputs,
last_sync_output,
)
agent_to_use = crew._get_agent_to_use(task)
if agent_to_use is None:
raise ValueError(
f"No agent available for task: {task.description}. "
f"Ensure that either the task has an assigned agent "
f"or a manager agent is provided."
)
tools_for_task = task.tools or agent_to_use.tools or []
tools_for_task = crew._prepare_tools(
agent_to_use,
task,
tools_for_task,
)
crew._log_task_start(task, agent_to_use.role)
return (
TaskExecutionData(agent=agent_to_use, tools=tools_for_task),
task_outputs,
last_sync_output,
)
def check_conditional_skip(
crew: Crew,
task: Any,
task_outputs: list[Any],
task_index: int,
was_replayed: bool,
) -> Any | None:
"""Check if a conditional task should be skipped.
Args:
crew: The crew instance.
task: The conditional task to check.
task_outputs: List of previous task outputs.
task_index: Index of the current task.
was_replayed: Whether this is a replayed execution.
Returns:
The skipped task output if the task should be skipped, None otherwise.
"""
previous_output = task_outputs[-1] if task_outputs else None
if previous_output is not None and not task.should_execute(previous_output):
crew._logger.log(
"debug",
f"Skipping conditional task: {task.description}",
color="yellow",
)
skipped_task_output = task.get_skipped_task_output()
if not was_replayed:
crew._store_execution_log(task, skipped_task_output, task_index)
return skipped_task_output
return None
def prepare_kickoff(crew: Crew, inputs: dict[str, Any] | None) -> dict[str, Any] | None:
"""Prepare crew for kickoff execution.
Handles before callbacks, event emission, task handler reset, input
interpolation, task callbacks, agent setup, and planning.
Args:
crew: The crew instance to prepare.
inputs: Optional input dictionary to pass to the crew.
Returns:
The inputs dictionary, potentially modified by the before-kickoff callbacks.
"""
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.crew_events import CrewKickoffStartedEvent
for before_callback in crew.before_kickoff_callbacks:
if inputs is None:
inputs = {}
inputs = before_callback(inputs)
future = crewai_event_bus.emit(
crew,
CrewKickoffStartedEvent(crew_name=crew.name, inputs=inputs),
)
if future is not None:
try:
future.result()
except Exception: # noqa: S110
pass
crew._task_output_handler.reset()
crew._logging_color = "bold_purple"
if inputs is not None:
crew._inputs = inputs
crew._interpolate_inputs(inputs)
crew._set_tasks_callbacks()
crew._set_allow_crewai_trigger_context_for_first_task()
setup_agents(
crew,
crew.agents,
crew.embedder,
crew.function_calling_llm,
crew.step_callback,
)
if crew.planning:
crew._handle_crew_planning()
return inputs
class StreamingContext:
"""Container for streaming state and holders used during crew execution."""
def __init__(self, use_async: bool = False) -> None:
"""Initialize streaming context.
Args:
use_async: Whether to use async streaming mode.
"""
self.result_holder: list[CrewOutput] = []
self.current_task_info: TaskInfo = {
"index": 0,
"name": "",
"id": "",
"agent_role": "",
"agent_id": "",
}
self.state: StreamingState = create_streaming_state(
self.current_task_info, self.result_holder, use_async=use_async
)
self.output_holder: list[CrewStreamingOutput | FlowStreamingOutput] = []
class ForEachStreamingContext:
"""Container for streaming state used in for_each crew execution methods."""
def __init__(self) -> None:
"""Initialize for_each streaming context."""
self.result_holder: list[list[CrewOutput]] = [[]]
self.current_task_info: TaskInfo = {
"index": 0,
"name": "",
"id": "",
"agent_role": "",
"agent_id": "",
}
self.state: StreamingState = create_streaming_state(
self.current_task_info, self.result_holder, use_async=True
)
self.output_holder: list[CrewStreamingOutput | FlowStreamingOutput] = []
async def run_for_each_async(
crew: Crew,
inputs: list[dict[str, Any]],
kickoff_fn: Callable[
[Crew, dict[str, Any]], Coroutine[Any, Any, CrewOutput | CrewStreamingOutput]
],
) -> list[CrewOutput | CrewStreamingOutput] | CrewStreamingOutput:
"""Execute crew workflow for each input asynchronously.
Args:
crew: The crew instance to execute.
inputs: List of input dictionaries for each execution.
kickoff_fn: Async function to call for each crew copy (kickoff_async or akickoff).
Returns:
If streaming, a single CrewStreamingOutput that yields chunks from all crews.
Otherwise, a list of CrewOutput results.
"""
from crewai.types.usage_metrics import UsageMetrics
from crewai.utilities.streaming import (
create_async_chunk_generator,
signal_end,
signal_error,
)
crew_copies = [crew.copy() for _ in inputs]
if crew.stream:
ctx = ForEachStreamingContext()
async def run_all_crews() -> None:
try:
streaming_outputs: list[CrewStreamingOutput] = []
for i, crew_copy in enumerate(crew_copies):
streaming = await kickoff_fn(crew_copy, inputs[i])
if isinstance(streaming, CrewStreamingOutput):
streaming_outputs.append(streaming)
async def consume_stream(
stream_output: CrewStreamingOutput,
) -> CrewOutput:
async for chunk in stream_output:
if (
ctx.state.async_queue is not None
and ctx.state.loop is not None
):
ctx.state.loop.call_soon_threadsafe(
ctx.state.async_queue.put_nowait, chunk
)
return stream_output.result
crew_results = await asyncio.gather(
*[consume_stream(s) for s in streaming_outputs]
)
ctx.result_holder[0] = list(crew_results)
except Exception as e:
signal_error(ctx.state, e, is_async=True)
finally:
signal_end(ctx.state, is_async=True)
streaming_output = CrewStreamingOutput(
async_iterator=create_async_chunk_generator(
ctx.state, run_all_crews, ctx.output_holder
)
)
def set_results_wrapper(result: Any) -> None:
streaming_output._set_results(result)
streaming_output._set_result = set_results_wrapper # type: ignore[method-assign]
ctx.output_holder.append(streaming_output)
return streaming_output
async_tasks: list[asyncio.Task[CrewOutput | CrewStreamingOutput]] = [
asyncio.create_task(kickoff_fn(crew_copy, input_data))
for crew_copy, input_data in zip(crew_copies, inputs, strict=True)
]
results = await asyncio.gather(*async_tasks)
total_usage_metrics = UsageMetrics()
for crew_copy in crew_copies:
if crew_copy.usage_metrics:
total_usage_metrics.add_usage_metrics(crew_copy.usage_metrics)
crew.usage_metrics = total_usage_metrics
crew._task_output_handler.reset()
return list(results)
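The streaming branch of run_for_each_async is a fan-in: several per-crew streams feed one parent queue. The bare pattern, with illustrative types rather than the library's:

import asyncio
from collections.abc import AsyncIterator

async def fan_in(streams: list[AsyncIterator[str]], queue: asyncio.Queue) -> None:
    async def consume(stream: AsyncIterator[str]) -> None:
        # Forward each chunk to the shared queue as it arrives.
        async for chunk in stream:
            await queue.put(chunk)
    # gather() drives all consumers concurrently, so chunks interleave
    # in arrival order rather than stream-by-stream.
    await asyncio.gather(*(consume(s) for s in streams))
    await queue.put(None)  # sentinel: every stream has finished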

View File

@@ -1,7 +1,6 @@
from __future__ import annotations
from io import StringIO
import threading
from typing import TYPE_CHECKING, Any
from pydantic import Field, PrivateAttr
@@ -17,8 +16,6 @@ from crewai.events.types.a2a_events import (
A2AResponseReceivedEvent,
)
from crewai.events.types.agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionStartedEvent,
LiteAgentExecutionCompletedEvent,
LiteAgentExecutionErrorEvent,
LiteAgentExecutionStartedEvent,
@@ -38,15 +35,16 @@ from crewai.events.types.crew_events import (
from crewai.events.types.flow_events import (
FlowCreatedEvent,
FlowFinishedEvent,
FlowPausedEvent,
FlowStartedEvent,
MethodExecutionFailedEvent,
MethodExecutionFinishedEvent,
MethodExecutionPausedEvent,
MethodExecutionStartedEvent,
)
from crewai.events.types.knowledge_events import (
KnowledgeQueryCompletedEvent,
KnowledgeQueryFailedEvent,
KnowledgeQueryStartedEvent,
KnowledgeRetrievalCompletedEvent,
KnowledgeRetrievalStartedEvent,
KnowledgeSearchQueryFailedEvent,
@@ -110,7 +108,6 @@ class EventListener(BaseEventListener):
text_stream: StringIO = StringIO()
knowledge_retrieval_in_progress: bool = False
knowledge_query_in_progress: bool = False
method_branches: dict[str, Any] = Field(default_factory=dict)
def __new__(cls) -> EventListener:
if cls._instance is None:
@@ -124,10 +121,8 @@ class EventListener(BaseEventListener):
self._telemetry = Telemetry()
self._telemetry.set_tracer()
self.execution_spans = {}
self.method_branches = {}
self._initialized = True
self.formatter = ConsoleFormatter(verbose=True)
self._crew_tree_lock = threading.Condition()
# Initialize trace listener with formatter for memory event handling
trace_listener = TraceCollectionListener()
@@ -138,10 +133,10 @@ class EventListener(BaseEventListener):
def setup_listeners(self, crewai_event_bus: CrewAIEventsBus) -> None:
@crewai_event_bus.on(CrewKickoffStartedEvent)
def on_crew_started(source: Any, event: CrewKickoffStartedEvent) -> None:
with self._crew_tree_lock:
self.formatter.create_crew_tree(event.crew_name or "Crew", source.id)
self._telemetry.crew_execution_span(source, event.inputs)
self._crew_tree_lock.notify_all()
self.formatter.handle_crew_started(event.crew_name or "Crew", source.id)
source._execution_span = self._telemetry.crew_execution_span(
source, event.inputs
)
@crewai_event_bus.on(CrewKickoffCompletedEvent)
def on_crew_completed(source: Any, event: CrewKickoffCompletedEvent) -> None:
@@ -149,8 +144,7 @@ class EventListener(BaseEventListener):
final_string_output = event.output.raw
self._telemetry.end_crew(source, final_string_output)
self.formatter.update_crew_tree(
self.formatter.current_crew_tree,
self.formatter.handle_crew_status(
event.crew_name or "Crew",
source.id,
"completed",
@@ -159,8 +153,7 @@ class EventListener(BaseEventListener):
@crewai_event_bus.on(CrewKickoffFailedEvent)
def on_crew_failed(source: Any, event: CrewKickoffFailedEvent) -> None:
self.formatter.update_crew_tree(
self.formatter.current_crew_tree,
self.formatter.handle_crew_status(
event.crew_name or "Crew",
source.id,
"failed",
@@ -193,23 +186,22 @@ class EventListener(BaseEventListener):
# ----------- TASK EVENTS -----------
def get_task_name(source: Any) -> str | None:
return (
source.name
if hasattr(source, "name") and source.name
else source.description
if hasattr(source, "description") and source.description
else None
)
@crewai_event_bus.on(TaskStartedEvent)
def on_task_started(source: Any, event: TaskStartedEvent) -> None:
span = self._telemetry.task_started(crew=source.agent.crew, task=source)
self.execution_spans[source] = span
with self._crew_tree_lock:
self._crew_tree_lock.wait_for(
lambda: self.formatter.current_crew_tree is not None, timeout=5.0
)
if self.formatter.current_crew_tree is not None:
task_name = (
source.name if hasattr(source, "name") and source.name else None
)
self.formatter.create_task_branch(
self.formatter.current_crew_tree, source.id, task_name
)
task_name = get_task_name(source)
self.formatter.handle_task_started(source.id, task_name)
@crewai_event_bus.on(TaskCompletedEvent)
def on_task_completed(source: Any, event: TaskCompletedEvent) -> None:
@@ -220,13 +212,9 @@ class EventListener(BaseEventListener):
self.execution_spans[source] = None
# Pass task name if it exists
task_name = source.name if hasattr(source, "name") and source.name else None
self.formatter.update_task_status(
self.formatter.current_crew_tree,
source.id,
source.agent.role,
"completed",
task_name,
task_name = get_task_name(source)
self.formatter.handle_task_status(
source.id, source.agent.role, "completed", task_name
)
@crewai_event_bus.on(TaskFailedEvent)
@@ -238,37 +226,12 @@ class EventListener(BaseEventListener):
self.execution_spans[source] = None
# Pass task name if it exists
task_name = source.name if hasattr(source, "name") and source.name else None
self.formatter.update_task_status(
self.formatter.current_crew_tree,
source.id,
source.agent.role,
"failed",
task_name,
task_name = get_task_name(source)
self.formatter.handle_task_status(
source.id, source.agent.role, "failed", task_name
)
# ----------- AGENT EVENTS -----------
@crewai_event_bus.on(AgentExecutionStartedEvent)
def on_agent_execution_started(
_: Any, event: AgentExecutionStartedEvent
) -> None:
self.formatter.create_agent_branch(
self.formatter.current_task_branch,
event.agent.role,
self.formatter.current_crew_tree,
)
@crewai_event_bus.on(AgentExecutionCompletedEvent)
def on_agent_execution_completed(
_: Any, event: AgentExecutionCompletedEvent
) -> None:
self.formatter.update_agent_status(
self.formatter.current_agent_branch,
event.agent.role,
self.formatter.current_crew_tree,
)
# ----------- LITE AGENT EVENTS -----------
@crewai_event_bus.on(LiteAgentExecutionStartedEvent)
@@ -312,57 +275,61 @@ class EventListener(BaseEventListener):
self._telemetry.flow_execution_span(
event.flow_name, list(source._methods.keys())
)
tree = self.formatter.create_flow_tree(event.flow_name, str(source.flow_id))
self.formatter.current_flow_tree = tree
self.formatter.start_flow(event.flow_name, str(source.flow_id))
self.formatter.handle_flow_created(event.flow_name, str(source.flow_id))
self.formatter.handle_flow_started(event.flow_name, str(source.flow_id))
@crewai_event_bus.on(FlowFinishedEvent)
def on_flow_finished(source: Any, event: FlowFinishedEvent) -> None:
self.formatter.update_flow_status(
self.formatter.current_flow_tree, event.flow_name, source.flow_id
self.formatter.handle_flow_status(
event.flow_name,
source.flow_id,
)
@crewai_event_bus.on(MethodExecutionStartedEvent)
def on_method_execution_started(
_: Any, event: MethodExecutionStartedEvent
) -> None:
method_branch = self.method_branches.get(event.method_name)
updated_branch = self.formatter.update_method_status(
method_branch,
self.formatter.current_flow_tree,
self.formatter.handle_method_status(
event.method_name,
"running",
)
self.method_branches[event.method_name] = updated_branch
@crewai_event_bus.on(MethodExecutionFinishedEvent)
def on_method_execution_finished(
_: Any, event: MethodExecutionFinishedEvent
) -> None:
method_branch = self.method_branches.get(event.method_name)
updated_branch = self.formatter.update_method_status(
method_branch,
self.formatter.current_flow_tree,
self.formatter.handle_method_status(
event.method_name,
"completed",
)
self.method_branches[event.method_name] = updated_branch
@crewai_event_bus.on(MethodExecutionFailedEvent)
def on_method_execution_failed(
_: Any, event: MethodExecutionFailedEvent
) -> None:
method_branch = self.method_branches.get(event.method_name)
updated_branch = self.formatter.update_method_status(
method_branch,
self.formatter.current_flow_tree,
self.formatter.handle_method_status(
event.method_name,
"failed",
)
self.method_branches[event.method_name] = updated_branch
@crewai_event_bus.on(MethodExecutionPausedEvent)
def on_method_execution_paused(
_: Any, event: MethodExecutionPausedEvent
) -> None:
self.formatter.handle_method_status(
event.method_name,
"paused",
)
@crewai_event_bus.on(FlowPausedEvent)
def on_flow_paused(_: Any, event: FlowPausedEvent) -> None:
self.formatter.handle_flow_status(
event.flow_name,
event.flow_id,
"paused",
)
# ----------- TOOL USAGE EVENTS -----------
@crewai_event_bus.on(ToolUsageStartedEvent)
def on_tool_usage_started(source: Any, event: ToolUsageStartedEvent) -> None:
if isinstance(source, LLM):
@@ -372,9 +339,9 @@ class EventListener(BaseEventListener):
)
else:
self.formatter.handle_tool_usage_started(
self.formatter.current_agent_branch,
event.tool_name,
self.formatter.current_crew_tree,
event.tool_args,
event.run_attempts,
)
@crewai_event_bus.on(ToolUsageFinishedEvent)
@@ -383,12 +350,6 @@ class EventListener(BaseEventListener):
self.formatter.handle_llm_tool_usage_finished(
event.tool_name,
)
else:
self.formatter.handle_tool_usage_finished(
self.formatter.current_tool_branch,
event.tool_name,
self.formatter.current_crew_tree,
)
@crewai_event_bus.on(ToolUsageErrorEvent)
def on_tool_usage_error(source: Any, event: ToolUsageErrorEvent) -> None:
@@ -399,10 +360,9 @@ class EventListener(BaseEventListener):
)
else:
self.formatter.handle_tool_usage_error(
self.formatter.current_tool_branch,
event.tool_name,
event.error,
self.formatter.current_crew_tree,
event.run_attempts,
)
# ----------- LLM EVENTS -----------
@@ -411,32 +371,15 @@ class EventListener(BaseEventListener):
def on_llm_call_started(_: Any, event: LLMCallStartedEvent) -> None:
self.text_stream = StringIO()
self.next_chunk = 0
# Capture the returned tool branch and update the current_tool_branch reference
thinking_branch = self.formatter.handle_llm_call_started(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
)
# Update the formatter's current_tool_branch to ensure proper cleanup
if thinking_branch is not None:
self.formatter.current_tool_branch = thinking_branch
@crewai_event_bus.on(LLMCallCompletedEvent)
def on_llm_call_completed(_: Any, event: LLMCallCompletedEvent) -> None:
self.formatter.handle_llm_stream_completed()
self.formatter.handle_llm_call_completed(
self.formatter.current_tool_branch,
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
)
@crewai_event_bus.on(LLMCallFailedEvent)
def on_llm_call_failed(_: Any, event: LLMCallFailedEvent) -> None:
self.formatter.handle_llm_stream_completed()
self.formatter.handle_llm_call_failed(
self.formatter.current_tool_branch,
event.error,
self.formatter.current_crew_tree,
)
self.formatter.handle_llm_call_failed(event.error)
@crewai_event_bus.on(LLMStreamChunkEvent)
def on_llm_stream_chunk(_: Any, event: LLMStreamChunkEvent) -> None:
@@ -447,9 +390,7 @@ class EventListener(BaseEventListener):
accumulated_text = self.text_stream.getvalue()
self.formatter.handle_llm_stream_chunk(
event.chunk,
accumulated_text,
self.formatter.current_crew_tree,
event.call_type,
)
@@ -489,7 +430,6 @@ class EventListener(BaseEventListener):
@crewai_event_bus.on(CrewTestCompletedEvent)
def on_crew_test_completed(_: Any, event: CrewTestCompletedEvent) -> None:
self.formatter.handle_crew_test_completed(
self.formatter.current_flow_tree,
event.crew_name or "Crew",
)
@@ -506,10 +446,7 @@ class EventListener(BaseEventListener):
self.knowledge_retrieval_in_progress = True
self.formatter.handle_knowledge_retrieval_started(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
)
self.formatter.handle_knowledge_retrieval_started()
@crewai_event_bus.on(KnowledgeRetrievalCompletedEvent)
def on_knowledge_retrieval_completed(
@@ -520,24 +457,13 @@ class EventListener(BaseEventListener):
self.knowledge_retrieval_in_progress = False
self.formatter.handle_knowledge_retrieval_completed(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
event.retrieved_knowledge,
event.query,
)
@crewai_event_bus.on(KnowledgeQueryStartedEvent)
def on_knowledge_query_started(
_: Any, event: KnowledgeQueryStartedEvent
) -> None:
pass
@crewai_event_bus.on(KnowledgeQueryFailedEvent)
def on_knowledge_query_failed(_: Any, event: KnowledgeQueryFailedEvent) -> None:
self.formatter.handle_knowledge_query_failed(
self.formatter.current_agent_branch,
event.error,
self.formatter.current_crew_tree,
)
self.formatter.handle_knowledge_query_failed(event.error)
@crewai_event_bus.on(KnowledgeQueryCompletedEvent)
def on_knowledge_query_completed(
@@ -549,11 +475,7 @@ class EventListener(BaseEventListener):
def on_knowledge_search_query_failed(
_: Any, event: KnowledgeSearchQueryFailedEvent
) -> None:
self.formatter.handle_knowledge_search_query_failed(
self.formatter.current_agent_branch,
event.error,
self.formatter.current_crew_tree,
)
self.formatter.handle_knowledge_search_query_failed(event.error)
# ----------- REASONING EVENTS -----------
@@ -561,11 +483,7 @@ class EventListener(BaseEventListener):
def on_agent_reasoning_started(
_: Any, event: AgentReasoningStartedEvent
) -> None:
self.formatter.handle_reasoning_started(
self.formatter.current_agent_branch,
event.attempt,
self.formatter.current_crew_tree,
)
self.formatter.handle_reasoning_started(event.attempt)
@crewai_event_bus.on(AgentReasoningCompletedEvent)
def on_agent_reasoning_completed(
@@ -574,14 +492,12 @@ class EventListener(BaseEventListener):
self.formatter.handle_reasoning_completed(
event.plan,
event.ready,
self.formatter.current_crew_tree,
)
@crewai_event_bus.on(AgentReasoningFailedEvent)
def on_agent_reasoning_failed(_: Any, event: AgentReasoningFailedEvent) -> None:
self.formatter.handle_reasoning_failed(
event.error,
self.formatter.current_crew_tree,
)
# ----------- AGENT LOGGING EVENTS -----------
@@ -708,18 +624,6 @@ class EventListener(BaseEventListener):
event.tool_args,
)
@crewai_event_bus.on(MCPToolExecutionCompletedEvent)
def on_mcp_tool_execution_completed(
_: Any, event: MCPToolExecutionCompletedEvent
) -> None:
self.formatter.handle_mcp_tool_execution_completed(
event.server_name,
event.tool_name,
event.tool_args,
event.result,
event.execution_duration_ms,
)
@crewai_event_bus.on(MCPToolExecutionFailedEvent)
def on_mcp_tool_execution_failed(
_: Any, event: MCPToolExecutionFailedEvent

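Every handler above follows the same decorator-registration shape. A toy bus showing just that mechanism (not the crewai implementation):

from collections import defaultdict
from typing import Any, Callable

class ToyEventBus:
    def __init__(self) -> None:
        self._handlers: dict[type, list[Callable[[Any, Any], None]]] = defaultdict(list)

    def on(self, event_type: type) -> Callable:
        # Returns a decorator that registers fn for event_type, like
        # @crewai_event_bus.on(TaskCompletedEvent) in the listener above.
        def register(fn: Callable[[Any, Any], None]) -> Callable[[Any, Any], None]:
            self._handlers[event_type].append(fn)
            return fn
        return register

    def emit(self, source: Any, event: Any) -> None:
        for fn in self._handlers[type(event)]:
            fn(source, event)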
View File

@@ -9,6 +9,8 @@ from rich.console import Console
from rich.panel import Panel
from crewai.cli.authentication.token import AuthError, get_auth_token
from crewai.cli.config import Settings
from crewai.cli.constants import DEFAULT_CREWAI_ENTERPRISE_URL
from crewai.cli.plus_api import PlusAPI
from crewai.cli.version import get_crewai_version
from crewai.events.listeners.tracing.types import TraceEvent
@@ -16,7 +18,6 @@ from crewai.events.listeners.tracing.utils import (
is_tracing_enabled_in_context,
should_auto_collect_first_time_traces,
)
from crewai.utilities.constants import CREWAI_BASE_URL
logger = getLogger(__name__)
@@ -326,10 +327,12 @@ class TraceBatchManager:
if response.status_code == 200:
access_code = response.json().get("access_code", None)
console = Console()
settings = Settings()
base_url = settings.enterprise_base_url or DEFAULT_CREWAI_ENTERPRISE_URL
return_link = (
f"{CREWAI_BASE_URL}/crewai_plus/trace_batches/{self.trace_batch_id}"
f"{base_url}/crewai_plus/trace_batches/{self.trace_batch_id}"
if not self.is_current_batch_ephemeral and access_code is None
else f"{CREWAI_BASE_URL}/crewai_plus/ephemeral_trace_batches/{self.trace_batch_id}?access_code={access_code}"
else f"{base_url}/crewai_plus/ephemeral_trace_batches/{self.trace_batch_id}?access_code={access_code}"
)
if self.is_current_batch_ephemeral:

View File

@@ -1,7 +1,7 @@
"""Trace collection listener for orchestrating trace collection."""
import os
from typing import Any, ClassVar
from typing import Any, ClassVar, cast
import uuid
from typing_extensions import Self
@@ -105,7 +105,7 @@ class TraceCollectionListener(BaseEventListener):
"""Create or return singleton instance."""
if cls._instance is None:
cls._instance = super().__new__(cls)
return cls._instance
return cast(Self, cls._instance)
def __init__(
self,
@@ -319,21 +319,12 @@ class TraceCollectionListener(BaseEventListener):
source: Any, event: MemoryQueryCompletedEvent
) -> None:
self._handle_action_event("memory_query_completed", source, event)
if self.formatter and self.memory_retrieval_in_progress:
self.formatter.handle_memory_query_completed(
self.formatter.current_agent_branch,
event.source_type or "memory",
event.query_time_ms,
self.formatter.current_crew_tree,
)
@event_bus.on(MemoryQueryFailedEvent)
def on_memory_query_failed(source: Any, event: MemoryQueryFailedEvent) -> None:
self._handle_action_event("memory_query_failed", source, event)
if self.formatter and self.memory_retrieval_in_progress:
self.formatter.handle_memory_query_failed(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
event.error,
event.source_type or "memory",
)
@@ -347,10 +338,7 @@ class TraceCollectionListener(BaseEventListener):
self.memory_save_in_progress = True
self.formatter.handle_memory_save_started(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
)
self.formatter.handle_memory_save_started()
@event_bus.on(MemorySaveCompletedEvent)
def on_memory_save_completed(
@@ -364,8 +352,6 @@ class TraceCollectionListener(BaseEventListener):
self.memory_save_in_progress = False
self.formatter.handle_memory_save_completed(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
event.save_time_ms,
event.source_type or "memory",
)
@@ -375,10 +361,8 @@ class TraceCollectionListener(BaseEventListener):
self._handle_action_event("memory_save_failed", source, event)
if self.formatter and self.memory_save_in_progress:
self.formatter.handle_memory_save_failed(
self.formatter.current_agent_branch,
event.error,
event.source_type or "memory",
self.formatter.current_crew_tree,
)
@event_bus.on(MemoryRetrievalStartedEvent)
@@ -391,10 +375,7 @@ class TraceCollectionListener(BaseEventListener):
self.memory_retrieval_in_progress = True
self.formatter.handle_memory_retrieval_started(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
)
self.formatter.handle_memory_retrieval_started()
@event_bus.on(MemoryRetrievalCompletedEvent)
def on_memory_retrieval_completed(
@@ -406,8 +387,6 @@ class TraceCollectionListener(BaseEventListener):
self.memory_retrieval_in_progress = False
self.formatter.handle_memory_retrieval_completed(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
event.memory_content,
event.retrieval_time_ms,
)


@@ -58,6 +58,29 @@ class MethodExecutionFailedEvent(FlowEvent):
model_config = ConfigDict(arbitrary_types_allowed=True)
class MethodExecutionPausedEvent(FlowEvent):
"""Event emitted when a flow method is paused waiting for human feedback.
This event is emitted when a @human_feedback decorated method with an
async provider raises HumanFeedbackPending to pause execution.
Attributes:
flow_name: Name of the flow that is paused.
method_name: Name of the method waiting for feedback.
state: Current flow state when paused.
flow_id: Unique identifier for this flow execution.
message: The message shown when requesting feedback.
emit: Optional list of possible outcomes for routing.
"""
method_name: str
state: dict[str, Any] | BaseModel
flow_id: str
message: str
emit: list[str] | None = None
type: str = "method_execution_paused"
class FlowFinishedEvent(FlowEvent):
"""Event emitted when a flow completes execution"""
@@ -67,8 +90,71 @@ class FlowFinishedEvent(FlowEvent):
state: dict[str, Any] | BaseModel
class FlowPausedEvent(FlowEvent):
"""Event emitted when a flow is paused waiting for human feedback.
This event is emitted when a flow is paused due to a @human_feedback
decorated method with an async provider raising HumanFeedbackPending.
Attributes:
flow_name: Name of the flow that is paused.
flow_id: Unique identifier for this flow execution.
method_name: Name of the method waiting for feedback.
state: Current flow state when paused.
message: The message shown when requesting feedback.
emit: Optional list of possible outcomes for routing.
"""
flow_id: str
method_name: str
state: dict[str, Any] | BaseModel
message: str
emit: list[str] | None = None
type: str = "flow_paused"
class FlowPlotEvent(FlowEvent):
"""Event emitted when a flow plot is created"""
flow_name: str
type: str = "flow_plot"
class HumanFeedbackRequestedEvent(FlowEvent):
"""Event emitted when human feedback is requested.
This event is emitted when a @human_feedback decorated method
requires input from a human reviewer.
Attributes:
flow_name: Name of the flow requesting feedback.
method_name: Name of the method decorated with @human_feedback.
output: The method output shown to the human for review.
message: The message displayed when requesting feedback.
emit: Optional list of possible outcomes for routing.
"""
method_name: str
output: Any
message: str
emit: list[str] | None = None
type: str = "human_feedback_requested"
class HumanFeedbackReceivedEvent(FlowEvent):
"""Event emitted when human feedback is received.
This event is emitted after a human provides feedback in response
to a @human_feedback decorated method.
Attributes:
flow_name: Name of the flow that received feedback.
method_name: Name of the method that received feedback.
feedback: The raw text feedback provided by the human.
outcome: The collapsed outcome string (if emit was specified).
"""
method_name: str
feedback: str
outcome: str | None = None
type: str = "human_feedback_received"


@@ -19,9 +19,9 @@ class SignalType(IntEnum):
SIGTERM = signal.SIGTERM
SIGINT = signal.SIGINT
SIGHUP = signal.SIGHUP
SIGTSTP = signal.SIGTSTP
SIGCONT = signal.SIGCONT
SIGHUP = getattr(signal, "SIGHUP", 1)
SIGTSTP = getattr(signal, "SIGTSTP", 20)
SIGCONT = getattr(signal, "SIGCONT", 18)
class SigTermEvent(BaseEvent):
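A hedged aside on why the `getattr` form matters: enum class bodies evaluate at import time, and Windows' `signal` module lacks these POSIX-only members, so the previous direct attribute access would raise `AttributeError` before anything runs. A standalone sketch (the fallback numbers are the conventional POSIX values):

```python
import signal

# Direct access would fail at import time on Windows:
#   signal.SIGHUP  ->  AttributeError on platforms without the member.
# The getattr form degrades to the conventional POSIX numbers instead:
SIGHUP = getattr(signal, "SIGHUP", 1)      # hangup
SIGTSTP = getattr(signal, "SIGTSTP", 20)   # terminal stop (Ctrl+Z)
SIGCONT = getattr(signal, "SIGCONT", 18)   # continue after stop
```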

File diff suppressed because it is too large


@@ -1,3 +1,4 @@
from crewai.experimental.crew_agent_executor_flow import CrewAgentExecutorFlow
from crewai.experimental.evaluation import (
AgentEvaluationResult,
AgentEvaluator,
@@ -23,6 +24,7 @@ __all__ = [
"AgentEvaluationResult",
"AgentEvaluator",
"BaseEvaluator",
"CrewAgentExecutorFlow",
"EvaluationScore",
"EvaluationTraceCallback",
"ExperimentResult",


@@ -0,0 +1,808 @@
from __future__ import annotations
from collections.abc import Callable
import threading
from typing import TYPE_CHECKING, Any, Literal, cast
from uuid import uuid4
from pydantic import BaseModel, Field, GetCoreSchemaHandler
from pydantic_core import CoreSchema, core_schema
from rich.console import Console
from rich.text import Text
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
from crewai.agents.parser import (
AgentAction,
AgentFinish,
OutputParserError,
)
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.logging_events import (
AgentLogsExecutionEvent,
AgentLogsStartedEvent,
)
from crewai.flow.flow import Flow, listen, or_, router, start
from crewai.hooks.llm_hooks import (
get_after_llm_call_hooks,
get_before_llm_call_hooks,
)
from crewai.utilities.agent_utils import (
enforce_rpm_limit,
format_message_for_llm,
get_llm_response,
handle_agent_action_core,
handle_context_length,
handle_max_iterations_exceeded,
handle_output_parser_exception,
handle_unknown_error,
has_reached_max_iterations,
is_context_length_exceeded,
process_llm_response,
)
from crewai.utilities.constants import TRAINING_DATA_FILE
from crewai.utilities.i18n import I18N, get_i18n
from crewai.utilities.printer import Printer
from crewai.utilities.tool_utils import execute_tool_and_check_finality
from crewai.utilities.training_handler import CrewTrainingHandler
from crewai.utilities.types import LLMMessage
if TYPE_CHECKING:
from crewai.agent import Agent
from crewai.agents.tools_handler import ToolsHandler
from crewai.crew import Crew
from crewai.llms.base_llm import BaseLLM
from crewai.task import Task
from crewai.tools.base_tool import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
from crewai.tools.tool_types import ToolResult
from crewai.utilities.prompts import StandardPromptResult, SystemPromptResult
class AgentReActState(BaseModel):
"""Structured state for agent ReAct flow execution.
Replaces scattered executor instance variables with validated, structured state.
Maps to self.messages, self.iterations, and formatted_answer in the current executor.
"""
messages: list[LLMMessage] = Field(default_factory=list)
iterations: int = Field(default=0)
current_answer: AgentAction | AgentFinish | None = Field(default=None)
is_finished: bool = Field(default=False)
ask_for_human_input: bool = Field(default=False)
class CrewAgentExecutorFlow(Flow[AgentReActState], CrewAgentExecutorMixin):
"""Flow-based executor matching CrewAgentExecutor interface.
Inherits from:
- Flow[AgentReActState]: Provides flow orchestration capabilities
- CrewAgentExecutorMixin: Provides memory methods (short/long/external term)
Note: Multiple instances may be created during agent initialization
(cache setup, RPM controller setup, etc.) but only the final instance
should execute tasks via invoke().
"""
def __init__(
self,
llm: BaseLLM,
task: Task,
crew: Crew,
agent: Agent,
prompt: SystemPromptResult | StandardPromptResult,
max_iter: int,
tools: list[CrewStructuredTool],
tools_names: str,
stop_words: list[str],
tools_description: str,
tools_handler: ToolsHandler,
step_callback: Any = None,
original_tools: list[BaseTool] | None = None,
function_calling_llm: BaseLLM | Any | None = None,
respect_context_window: bool = False,
request_within_rpm_limit: Callable[[], bool] | None = None,
callbacks: list[Any] | None = None,
response_model: type[BaseModel] | None = None,
i18n: I18N | None = None,
) -> None:
"""Initialize the flow-based agent executor.
Args:
llm: Language model instance.
task: Task to execute.
crew: Crew instance.
agent: Agent to execute.
prompt: Prompt templates.
max_iter: Maximum iterations.
tools: Available tools.
tools_names: Tool names string.
stop_words: Stop word list.
tools_description: Tool descriptions.
tools_handler: Tool handler instance.
step_callback: Optional step callback.
original_tools: Original tool list.
function_calling_llm: Optional function calling LLM.
respect_context_window: Respect context limits.
request_within_rpm_limit: RPM limit check function.
callbacks: Optional callbacks list.
response_model: Optional Pydantic model for structured outputs.
"""
self._i18n: I18N = i18n or get_i18n()
self.llm = llm
self.task = task
self.agent = agent
self.crew = crew
self.prompt = prompt
self.tools = tools
self.tools_names = tools_names
self.stop = stop_words
self.max_iter = max_iter
self.callbacks = callbacks or []
self._printer: Printer = Printer()
self.tools_handler = tools_handler
self.original_tools = original_tools or []
self.step_callback = step_callback
self.tools_description = tools_description
self.function_calling_llm = function_calling_llm
self.respect_context_window = respect_context_window
self.request_within_rpm_limit = request_within_rpm_limit
self.response_model = response_model
self.log_error_after = 3
self._console: Console = Console()
# Error context storage for recovery
self._last_parser_error: OutputParserError | None = None
self._last_context_error: Exception | None = None
# Execution guard to prevent concurrent/duplicate executions
self._execution_lock = threading.Lock()
self._is_executing: bool = False
self._has_been_invoked: bool = False
self._flow_initialized: bool = False
self._instance_id = str(uuid4())[:8]
self.before_llm_call_hooks: list[Callable] = []
self.after_llm_call_hooks: list[Callable] = []
self.before_llm_call_hooks.extend(get_before_llm_call_hooks())
self.after_llm_call_hooks.extend(get_after_llm_call_hooks())
if self.llm:
existing_stop = getattr(self.llm, "stop", [])
self.llm.stop = list(
set(
existing_stop + self.stop
if isinstance(existing_stop, list)
else self.stop
)
)
self._state = AgentReActState()
def _ensure_flow_initialized(self) -> None:
"""Ensure Flow.__init__() has been called.
This is deferred from __init__ to prevent FlowCreatedEvent emission
during agent setup when multiple executor instances are created.
Only the instance that actually executes via invoke() will emit events.
"""
if not self._flow_initialized:
# Now call Flow's __init__ which will replace self._state
# with Flow's managed state. Suppress flow events since this is
# an agent executor, not a user-facing flow.
super().__init__(
suppress_flow_events=True,
)
self._flow_initialized = True
@property
def use_stop_words(self) -> bool:
"""Check to determine if stop words are being used.
Returns:
bool: True if stop words should be used.
"""
return self.llm.supports_stop_words() if self.llm else False
@property
def state(self) -> AgentReActState:
"""Get state - returns temporary state if Flow not yet initialized.
Flow initialization is deferred to prevent event emission during agent setup.
Returns the temporary state until invoke() is called.
"""
return self._state
@property
def messages(self) -> list[LLMMessage]:
"""Compatibility property for mixin - returns state messages."""
return self._state.messages
@property
def iterations(self) -> int:
"""Compatibility property for mixin - returns state iterations."""
return self._state.iterations
@start()
def initialize_reasoning(self) -> Literal["initialized"]:
"""Initialize the reasoning flow and emit agent start logs."""
self._show_start_logs()
return "initialized"
@listen("force_final_answer")
def force_final_answer(self) -> Literal["agent_finished"]:
"""Force agent to provide final answer when max iterations exceeded."""
formatted_answer = handle_max_iterations_exceeded(
formatted_answer=None,
printer=self._printer,
i18n=self._i18n,
messages=list(self.state.messages),
llm=self.llm,
callbacks=self.callbacks,
)
self.state.current_answer = formatted_answer
self.state.is_finished = True
return "agent_finished"
@listen("continue_reasoning")
def call_llm_and_parse(self) -> Literal["parsed", "parser_error", "context_error"]:
"""Execute LLM call with hooks and parse the response.
Returns a routing decision based on the parsing result.
"""
try:
enforce_rpm_limit(self.request_within_rpm_limit)
answer = get_llm_response(
llm=self.llm,
messages=list(self.state.messages),
callbacks=self.callbacks,
printer=self._printer,
from_task=self.task,
from_agent=self.agent,
response_model=self.response_model,
executor_context=self,
)
# Parse the LLM response
formatted_answer = process_llm_response(answer, self.use_stop_words)
self.state.current_answer = formatted_answer
if "Final Answer:" in answer and isinstance(formatted_answer, AgentAction):
warning_text = Text()
warning_text.append("⚠️ ", style="yellow bold")
warning_text.append(
f"LLM returned 'Final Answer:' but parsed as AgentAction (tool: {formatted_answer.tool})",
style="yellow",
)
self._console.print(warning_text)
preview_text = Text()
preview_text.append("Answer preview: ", style="yellow")
preview_text.append(f"{answer[:200]}...", style="yellow dim")
self._console.print(preview_text)
return "parsed"
except OutputParserError as e:
# Store error context for recovery
self._last_parser_error = e or OutputParserError(
error="Unknown parser error"
)
return "parser_error"
except Exception as e:
if is_context_length_exceeded(e):
self._last_context_error = e
return "context_error"
if e.__class__.__module__.startswith("litellm"):
raise e
handle_unknown_error(self._printer, e)
raise
@router(call_llm_and_parse)
def route_by_answer_type(self) -> Literal["execute_tool", "agent_finished"]:
"""Route based on whether answer is AgentAction or AgentFinish."""
if isinstance(self.state.current_answer, AgentAction):
return "execute_tool"
return "agent_finished"
@listen("execute_tool")
def execute_tool_action(self) -> Literal["tool_completed", "tool_result_is_final"]:
"""Execute the tool action and handle the result."""
try:
action = cast(AgentAction, self.state.current_answer)
# Extract fingerprint context for tool execution
fingerprint_context = {}
if (
self.agent
and hasattr(self.agent, "security_config")
and hasattr(self.agent.security_config, "fingerprint")
):
fingerprint_context = {
"agent_fingerprint": str(self.agent.security_config.fingerprint)
}
# Execute the tool
tool_result = execute_tool_and_check_finality(
agent_action=action,
fingerprint_context=fingerprint_context,
tools=self.tools,
i18n=self._i18n,
agent_key=self.agent.key if self.agent else None,
agent_role=self.agent.role if self.agent else None,
tools_handler=self.tools_handler,
task=self.task,
agent=self.agent,
function_calling_llm=self.function_calling_llm,
crew=self.crew,
)
# Handle agent action and append observation to messages
result = self._handle_agent_action(action, tool_result)
self.state.current_answer = result
# Invoke step callback if configured
self._invoke_step_callback(result)
# Append result message to conversation state
if hasattr(result, "text"):
self._append_message_to_state(result.text)
# Check if tool result became a final answer (result_as_answer flag)
if isinstance(result, AgentFinish):
self.state.is_finished = True
return "tool_result_is_final"
return "tool_completed"
except Exception as e:
error_text = Text()
error_text.append("❌ Error in tool execution: ", style="red bold")
error_text.append(str(e), style="red")
self._console.print(error_text)
raise
@listen("initialized")
def continue_iteration(self) -> Literal["check_iteration"]:
"""Bridge listener that connects iteration loop back to iteration check."""
return "check_iteration"
@router(or_(initialize_reasoning, continue_iteration))
def check_max_iterations(
self,
) -> Literal["force_final_answer", "continue_reasoning"]:
"""Check if max iterations reached before proceeding with reasoning."""
if has_reached_max_iterations(self.state.iterations, self.max_iter):
return "force_final_answer"
return "continue_reasoning"
@router(execute_tool_action)
def increment_and_continue(self) -> Literal["initialized"]:
"""Increment iteration counter and loop back for next iteration."""
self.state.iterations += 1
return "initialized"
@listen(or_("agent_finished", "tool_result_is_final"))
def finalize(self) -> Literal["completed", "skipped"]:
"""Finalize execution and emit completion logs."""
if self.state.current_answer is None:
skip_text = Text()
skip_text.append("⚠️ ", style="yellow bold")
skip_text.append(
"Finalize called but no answer in state - skipping", style="yellow"
)
self._console.print(skip_text)
return "skipped"
if not isinstance(self.state.current_answer, AgentFinish):
skip_text = Text()
skip_text.append("⚠️ ", style="yellow bold")
skip_text.append(
f"Finalize called with {type(self.state.current_answer).__name__} instead of AgentFinish - skipping",
style="yellow",
)
self._console.print(skip_text)
return "skipped"
self.state.is_finished = True
self._show_logs(self.state.current_answer)
return "completed"
@listen("parser_error")
def recover_from_parser_error(self) -> Literal["initialized"]:
"""Recover from output parser errors and retry."""
formatted_answer = handle_output_parser_exception(
e=self._last_parser_error,
messages=list(self.state.messages),
iterations=self.state.iterations,
log_error_after=self.log_error_after,
printer=self._printer,
)
if formatted_answer:
self.state.current_answer = formatted_answer
self.state.iterations += 1
return "initialized"
@listen("context_error")
def recover_from_context_length(self) -> Literal["initialized"]:
"""Recover from context length errors and retry."""
handle_context_length(
respect_context_window=self.respect_context_window,
printer=self._printer,
messages=self.state.messages,
llm=self.llm,
callbacks=self.callbacks,
i18n=self._i18n,
)
self.state.iterations += 1
return "initialized"
def invoke(self, inputs: dict[str, Any]) -> dict[str, Any]:
"""Execute agent with given inputs.
Args:
inputs: Input dictionary containing prompt variables.
Returns:
Dictionary with agent output.
"""
self._ensure_flow_initialized()
with self._execution_lock:
if self._is_executing:
raise RuntimeError(
"Executor is already running. "
"Cannot invoke the same executor instance concurrently."
)
self._is_executing = True
self._has_been_invoked = True
try:
# Reset state for fresh execution
self.state.messages.clear()
self.state.iterations = 0
self.state.current_answer = None
self.state.is_finished = False
if "system" in self.prompt:
prompt = cast("SystemPromptResult", self.prompt)
system_prompt = self._format_prompt(prompt["system"], inputs)
user_prompt = self._format_prompt(prompt["user"], inputs)
self.state.messages.append(
format_message_for_llm(system_prompt, role="system")
)
self.state.messages.append(format_message_for_llm(user_prompt))
else:
user_prompt = self._format_prompt(self.prompt["prompt"], inputs)
self.state.messages.append(format_message_for_llm(user_prompt))
self.state.ask_for_human_input = bool(
inputs.get("ask_for_human_input", False)
)
self.kickoff()
formatted_answer = self.state.current_answer
if not isinstance(formatted_answer, AgentFinish):
raise RuntimeError(
"Agent execution ended without reaching a final answer."
)
if self.state.ask_for_human_input:
formatted_answer = self._handle_human_feedback(formatted_answer)
self._create_short_term_memory(formatted_answer)
self._create_long_term_memory(formatted_answer)
self._create_external_memory(formatted_answer)
return {"output": formatted_answer.output}
except AssertionError:
fail_text = Text()
fail_text.append("", style="red bold")
fail_text.append(
"Agent failed to reach a final answer. This is likely a bug - please report it.",
style="red",
)
self._console.print(fail_text)
raise
except Exception as e:
handle_unknown_error(self._printer, e)
raise
finally:
self._is_executing = False
def _handle_agent_action(
self, formatted_answer: AgentAction, tool_result: ToolResult
) -> AgentAction | AgentFinish:
"""Process agent action and tool execution result.
Args:
formatted_answer: Agent's action to execute.
tool_result: Result from tool execution.
Returns:
Updated action or final answer.
"""
add_image_tool = self._i18n.tools("add_image")
if (
isinstance(add_image_tool, dict)
and formatted_answer.tool.casefold().strip()
== add_image_tool.get("name", "").casefold().strip()
):
self.state.messages.append(
{"role": "assistant", "content": tool_result.result}
)
return formatted_answer
return handle_agent_action_core(
formatted_answer=formatted_answer,
tool_result=tool_result,
messages=self.state.messages,
step_callback=self.step_callback,
show_logs=self._show_logs,
)
def _invoke_step_callback(
self, formatted_answer: AgentAction | AgentFinish
) -> None:
"""Invoke step callback if configured.
Args:
formatted_answer: Current agent response.
"""
if self.step_callback:
self.step_callback(formatted_answer)
def _append_message_to_state(
self, text: str, role: Literal["user", "assistant", "system"] = "assistant"
) -> None:
"""Add message to state conversation history.
Args:
text: Message content.
role: Message role (default: assistant).
"""
self.state.messages.append(format_message_for_llm(text, role=role))
def _show_start_logs(self) -> None:
"""Emit agent start event."""
if self.agent is None:
raise ValueError("Agent cannot be None")
crewai_event_bus.emit(
self.agent,
AgentLogsStartedEvent(
agent_role=self.agent.role,
task_description=(self.task.description if self.task else "Not Found"),
verbose=self.agent.verbose
or (hasattr(self, "crew") and getattr(self.crew, "verbose", False)),
),
)
def _show_logs(self, formatted_answer: AgentAction | AgentFinish) -> None:
"""Emit agent execution event.
Args:
formatted_answer: Agent's response to log.
"""
if self.agent is None:
raise ValueError("Agent cannot be None")
crewai_event_bus.emit(
self.agent,
AgentLogsExecutionEvent(
agent_role=self.agent.role,
formatted_answer=formatted_answer,
verbose=self.agent.verbose
or (hasattr(self, "crew") and getattr(self.crew, "verbose", False)),
),
)
def _handle_crew_training_output(
self, result: AgentFinish, human_feedback: str | None = None
) -> None:
"""Save training data for crew training mode.
Args:
result: Agent's final output.
human_feedback: Optional feedback from human.
"""
agent_id = str(self.agent.id)
train_iteration = (
getattr(self.crew, "_train_iteration", None) if self.crew else None
)
if train_iteration is None or not isinstance(train_iteration, int):
train_error = Text()
train_error.append("", style="red bold")
train_error.append(
"Invalid or missing train iteration. Cannot save training data.",
style="red",
)
self._console.print(train_error)
return
training_handler = CrewTrainingHandler(TRAINING_DATA_FILE)
training_data = training_handler.load() or {}
# Initialize or retrieve agent's training data
agent_training_data = training_data.get(agent_id, {})
if human_feedback is not None:
# Save initial output and human feedback
agent_training_data[train_iteration] = {
"initial_output": result.output,
"human_feedback": human_feedback,
}
else:
# Save improved output
if train_iteration in agent_training_data:
agent_training_data[train_iteration]["improved_output"] = result.output
else:
train_error = Text()
train_error.append("", style="red bold")
train_error.append(
f"No existing training data for agent {agent_id} and iteration "
f"{train_iteration}. Cannot save improved output.",
style="red",
)
self._console.print(train_error)
return
# Update the training data and save
training_data[agent_id] = agent_training_data
training_handler.save(training_data)
@staticmethod
def _format_prompt(prompt: str, inputs: dict[str, str]) -> str:
"""Format prompt template with input values.
Args:
prompt: Template string.
inputs: Values to substitute.
Returns:
Formatted prompt.
"""
prompt = prompt.replace("{input}", inputs["input"])
prompt = prompt.replace("{tool_names}", inputs["tool_names"])
return prompt.replace("{tools}", inputs["tools"])
def _handle_human_feedback(self, formatted_answer: AgentFinish) -> AgentFinish:
"""Process human feedback and refine answer.
Args:
formatted_answer: Initial agent result.
Returns:
Final answer after feedback.
"""
human_feedback = self._ask_human_input(formatted_answer.output)
if self._is_training_mode():
return self._handle_training_feedback(formatted_answer, human_feedback)
return self._handle_regular_feedback(formatted_answer, human_feedback)
def _is_training_mode(self) -> bool:
"""Check if training mode is active.
Returns:
True if in training mode.
"""
return bool(self.crew and self.crew._train)
def _handle_training_feedback(
self, initial_answer: AgentFinish, feedback: str
) -> AgentFinish:
"""Process training feedback and generate improved answer.
Args:
initial_answer: Initial agent output.
feedback: Training feedback.
Returns:
Improved answer.
"""
self._handle_crew_training_output(initial_answer, feedback)
self.state.messages.append(
format_message_for_llm(
self._i18n.slice("feedback_instructions").format(feedback=feedback)
)
)
# Re-run flow for improved answer
self.state.iterations = 0
self.state.is_finished = False
self.state.current_answer = None
self.kickoff()
# Get improved answer from state
improved_answer = self.state.current_answer
if not isinstance(improved_answer, AgentFinish):
raise RuntimeError(
"Training feedback iteration did not produce final answer"
)
self._handle_crew_training_output(improved_answer)
self.state.ask_for_human_input = False
return improved_answer
def _handle_regular_feedback(
self, current_answer: AgentFinish, initial_feedback: str
) -> AgentFinish:
"""Process regular feedback iteratively until user is satisfied.
Args:
current_answer: Current agent output.
initial_feedback: Initial user feedback.
Returns:
Final answer after iterations.
"""
feedback = initial_feedback
answer = current_answer
while self.state.ask_for_human_input:
if feedback.strip() == "":
self.state.ask_for_human_input = False
else:
answer = self._process_feedback_iteration(feedback)
feedback = self._ask_human_input(answer.output)
return answer
def _process_feedback_iteration(self, feedback: str) -> AgentFinish:
"""Process a single feedback iteration and generate updated response.
Args:
feedback: User feedback.
Returns:
Updated agent response.
"""
self.state.messages.append(
format_message_for_llm(
self._i18n.slice("feedback_instructions").format(feedback=feedback)
)
)
# Re-run flow
self.state.iterations = 0
self.state.is_finished = False
self.state.current_answer = None
self.kickoff()
# Get answer from state
answer = self.state.current_answer
if not isinstance(answer, AgentFinish):
raise RuntimeError("Feedback iteration did not produce final answer")
return answer
@classmethod
def __get_pydantic_core_schema__(
cls, _source_type: Any, _handler: GetCoreSchemaHandler
) -> CoreSchema:
"""Generate Pydantic core schema for Protocol compatibility.
Allows the executor to be used in Pydantic models without
requiring arbitrary_types_allowed=True.
"""
return core_schema.any_schema()
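The executor expresses its whole ReAct control loop through flow primitives: `@start` kicks off, `@router` methods pick the next label from state, and `@listen` methods fire on those labels. A stripped-down sketch of the same loop shape, assuming the `Flow`/`listen`/`or_`/`router`/`start` semantics used above (this toy flow is not the real executor):

```python
from typing import Literal

from pydantic import BaseModel, Field

from crewai.flow.flow import Flow, listen, or_, router, start


class LoopState(BaseModel):
    iterations: int = Field(default=0)
    done: bool = Field(default=False)


class TinyLoopFlow(Flow[LoopState]):
    """Mirrors the executor's start -> check -> work -> loop-back shape."""

    @start()
    def init(self) -> Literal["initialized"]:
        return "initialized"

    @listen("initialized")
    def continue_iteration(self) -> Literal["check"]:
        # Bridge listener, like continue_iteration in the executor.
        return "check"

    @router(or_(init, continue_iteration))
    def check_max(self) -> Literal["finish", "work"]:
        # Router picks the next label from state, like check_max_iterations.
        return "finish" if self.state.iterations >= 3 else "work"

    @listen("work")
    def do_work(self) -> Literal["worked"]:
        self.state.iterations += 1
        return "worked"

    @router(do_work)
    def loop_back(self) -> Literal["initialized"]:
        # Loops back to the bridge listener, like increment_and_continue.
        return "initialized"

    @listen("finish")
    def finish(self) -> None:
        self.state.done = True


# flow = TinyLoopFlow(); flow.kickoff()  # expect state.iterations == 3
```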


@@ -1,8 +1,9 @@
from __future__ import annotations
from collections.abc import Sequence
import threading
from typing import Any
from typing import TYPE_CHECKING, Any
from crewai.agent.core import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.agent_events import (
@@ -28,6 +29,10 @@ from crewai.experimental.evaluation.evaluation_listener import (
from crewai.task import Task
if TYPE_CHECKING:
from crewai.agent import Agent
class ExecutionState:
current_agent_id: str | None = None
current_task_id: str | None = None
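This and the following evaluation modules all apply the same refactor: the concrete `Agent` import moves under `TYPE_CHECKING`, and `from __future__ import annotations` makes annotations lazy, which breaks the runtime circular import. A generic sketch of the pattern (module contents illustrative):

```python
from __future__ import annotations  # annotations become strings at runtime

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by type checkers; never executed, so no import cycle.
    from crewai.agent import Agent


def describe(agent: Agent) -> str:
    # The Agent annotation is resolved lazily, if ever.
    return agent.role
```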


@@ -1,17 +1,22 @@
from __future__ import annotations
import abc
import enum
from enum import Enum
from typing import Any
from typing import TYPE_CHECKING, Any
from pydantic import BaseModel, Field
from crewai.agent import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.llm import BaseLLM
from crewai.task import Task
from crewai.utilities.llm_utils import create_llm
if TYPE_CHECKING:
from crewai.agent import Agent
class MetricCategory(enum.Enum):
GOAL_ALIGNMENT = "goal_alignment"
SEMANTIC_QUALITY = "semantic_quality"


@@ -1,8 +1,9 @@
from __future__ import annotations
from collections import defaultdict
from hashlib import md5
from typing import Any
from typing import TYPE_CHECKING, Any
from crewai import Agent, Crew
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.experimental.evaluation import AgentEvaluator, create_default_evaluator
from crewai.experimental.evaluation.evaluation_display import (
@@ -17,6 +18,11 @@ from crewai.experimental.evaluation.experiment.result_display import (
)
if TYPE_CHECKING:
from crewai.agent import Agent
from crewai.crew import Crew
class ExperimentRunner:
def __init__(self, dataset: list[dict[str, Any]]):
self.dataset = dataset or []


@@ -1,6 +1,7 @@
from typing import Any
from __future__ import annotations
from typing import TYPE_CHECKING, Any
from crewai.agent import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.experimental.evaluation.base_evaluator import (
BaseEvaluator,
@@ -12,6 +13,10 @@ from crewai.task import Task
from crewai.utilities.types import LLMMessage
if TYPE_CHECKING:
from crewai.agent import Agent
class GoalAlignmentEvaluator(BaseEvaluator):
@property
def metric_category(self) -> MetricCategory:


@@ -6,15 +6,16 @@ This module provides evaluator implementations for:
- Thinking-to-action ratio
"""
from __future__ import annotations
from collections.abc import Sequence
from enum import Enum
import logging
import re
from typing import Any
from typing import TYPE_CHECKING, Any
import numpy as np
from crewai.agent import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.experimental.evaluation.base_evaluator import (
BaseEvaluator,
@@ -27,6 +28,10 @@ from crewai.tasks.task_output import TaskOutput
from crewai.utilities.types import LLMMessage
if TYPE_CHECKING:
from crewai.agent import Agent
class ReasoningPatternType(Enum):
EFFICIENT = "efficient" # Good reasoning flow
LOOP = "loop" # Agent is stuck in a loop


@@ -1,6 +1,7 @@
from typing import Any
from __future__ import annotations
from typing import TYPE_CHECKING, Any
from crewai.agent import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.experimental.evaluation.base_evaluator import (
BaseEvaluator,
@@ -12,6 +13,10 @@ from crewai.task import Task
from crewai.utilities.types import LLMMessage
if TYPE_CHECKING:
from crewai.agent import Agent
class SemanticQualityEvaluator(BaseEvaluator):
@property
def metric_category(self) -> MetricCategory:


@@ -1,7 +1,8 @@
import json
from typing import Any
from __future__ import annotations
import json
from typing import TYPE_CHECKING, Any
from crewai.agent import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.experimental.evaluation.base_evaluator import (
BaseEvaluator,
@@ -13,6 +14,10 @@ from crewai.task import Task
from crewai.utilities.types import LLMMessage
if TYPE_CHECKING:
from crewai.agent import Agent
class ToolSelectionEvaluator(BaseEvaluator):
@property
def metric_category(self) -> MetricCategory:


@@ -1,4 +1,11 @@
from crewai.flow.async_feedback import (
ConsoleProvider,
HumanFeedbackPending,
HumanFeedbackProvider,
PendingFeedbackContext,
)
from crewai.flow.flow import Flow, and_, listen, or_, router, start
from crewai.flow.human_feedback import HumanFeedbackResult, human_feedback
from crewai.flow.persistence import persist
from crewai.flow.visualization import (
FlowStructure,
@@ -8,10 +15,16 @@ from crewai.flow.visualization import (
__all__ = [
"ConsoleProvider",
"Flow",
"FlowStructure",
"HumanFeedbackPending",
"HumanFeedbackProvider",
"HumanFeedbackResult",
"PendingFeedbackContext",
"and_",
"build_flow_structure",
"human_feedback",
"listen",
"or_",
"persist",


@@ -0,0 +1,41 @@
"""Async human feedback support for CrewAI Flows.
This module provides abstractions for non-blocking human-in-the-loop workflows,
allowing integration with external systems like Slack, Teams, webhooks, or APIs.
Example:
```python
from crewai.flow import Flow, start, human_feedback
from crewai.flow.async_feedback import HumanFeedbackProvider, HumanFeedbackPending
class SlackProvider(HumanFeedbackProvider):
def request_feedback(self, context, flow):
self.send_slack_notification(context)
raise HumanFeedbackPending(context=context)
class MyFlow(Flow):
@start()
@human_feedback(
message="Review this:",
emit=["approved", "rejected"],
llm="gpt-4o-mini",
provider=SlackProvider(),
)
def review(self):
return "Content to review"
```
"""
from crewai.flow.async_feedback.types import (
HumanFeedbackPending,
HumanFeedbackProvider,
PendingFeedbackContext,
)
from crewai.flow.async_feedback.providers import ConsoleProvider
__all__ = [
"ConsoleProvider",
"HumanFeedbackPending",
"HumanFeedbackProvider",
"PendingFeedbackContext",
]


@@ -0,0 +1,124 @@
"""Default provider implementations for human feedback.
This module provides the ConsoleProvider, which is the default synchronous
provider that collects feedback via console input.
"""
from __future__ import annotations
from typing import TYPE_CHECKING
from crewai.flow.async_feedback.types import PendingFeedbackContext
if TYPE_CHECKING:
from crewai.flow.flow import Flow
class ConsoleProvider:
"""Default synchronous console-based feedback provider.
This provider blocks execution and waits for console input from the user.
It displays the method output with formatting and prompts for feedback.
This is the default provider used when no custom provider is specified
in the @human_feedback decorator.
Example:
```python
from crewai.flow.async_feedback import ConsoleProvider
# Explicitly use console provider
@human_feedback(
message="Review this:",
provider=ConsoleProvider(),
)
def my_method(self):
return "Content to review"
```
"""
def __init__(self, verbose: bool = True):
"""Initialize the console provider.
Args:
verbose: Whether to display formatted output. If False, only
shows the prompt message.
"""
self.verbose = verbose
def request_feedback(
self,
context: PendingFeedbackContext,
flow: Flow,
) -> str:
"""Request feedback via console input (blocking).
Displays the method output with formatting and waits for the user
to type their feedback. Press Enter to skip (returns empty string).
Args:
context: The pending feedback context with output and message.
flow: The Flow instance (used for event emission).
Returns:
The user's feedback as a string, or empty string if skipped.
"""
from crewai.events.event_bus import crewai_event_bus
from crewai.events.event_listener import event_listener
from crewai.events.types.flow_events import (
HumanFeedbackReceivedEvent,
HumanFeedbackRequestedEvent,
)
# Emit feedback requested event
crewai_event_bus.emit(
flow,
HumanFeedbackRequestedEvent(
type="human_feedback_requested",
flow_name=flow.name or flow.__class__.__name__,
method_name=context.method_name,
output=context.method_output,
message=context.message,
emit=context.emit,
),
)
# Pause live updates during human input
formatter = event_listener.formatter
formatter.pause_live_updates()
try:
console = formatter.console
if self.verbose:
# Display output with formatting using Rich console
console.print("\n" + "" * 50, style="bold cyan")
console.print(" OUTPUT FOR REVIEW", style="bold cyan")
console.print("" * 50 + "\n", style="bold cyan")
console.print(context.method_output)
console.print("\n" + "" * 50 + "\n", style="bold cyan")
# Show message and prompt for feedback
console.print(context.message, style="yellow")
console.print(
"(Press Enter to skip, or type your feedback)\n", style="cyan"
)
feedback = input("Your feedback: ").strip()
# Emit feedback received event
crewai_event_bus.emit(
flow,
HumanFeedbackReceivedEvent(
type="human_feedback_received",
flow_name=flow.name or flow.__class__.__name__,
method_name=context.method_name,
feedback=feedback,
outcome=None, # Will be determined after collapsing
),
)
return feedback
finally:
# Resume live updates
formatter.resume_live_updates()


@@ -0,0 +1,264 @@
"""Core types for async human feedback in Flows.
This module defines the protocol, exception, and context types used for
non-blocking human-in-the-loop workflows.
"""
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime
from typing import TYPE_CHECKING, Any, Protocol, runtime_checkable
if TYPE_CHECKING:
from crewai.flow.flow import Flow
@dataclass
class PendingFeedbackContext:
"""Context capturing everything needed to resume a paused flow.
When a flow is paused waiting for async human feedback, this dataclass
stores all the information needed to:
1. Identify which flow execution is waiting
2. What method triggered the feedback request
3. What was shown to the human
4. How to route the response when it arrives
Attributes:
flow_id: Unique identifier for the flow instance (from state.id)
flow_class: Fully qualified class name (e.g., "myapp.flows.ReviewFlow")
method_name: Name of the method that triggered feedback request
method_output: The output that was shown to the human for review
message: The message displayed when requesting feedback
emit: Optional list of outcome strings for routing
default_outcome: Outcome to use when no feedback is provided
metadata: Optional metadata for external system integration
llm: LLM model string for outcome collapsing
requested_at: When the feedback was requested
Example:
```python
context = PendingFeedbackContext(
flow_id="abc-123",
flow_class="myapp.ReviewFlow",
method_name="review_content",
method_output={"title": "Draft", "body": "..."},
message="Please review and approve or reject:",
emit=["approved", "rejected"],
llm="gpt-4o-mini",
)
```
"""
flow_id: str
flow_class: str
method_name: str
method_output: Any
message: str
emit: list[str] | None = None
default_outcome: str | None = None
metadata: dict[str, Any] = field(default_factory=dict)
llm: str | None = None
requested_at: datetime = field(default_factory=datetime.now)
def to_dict(self) -> dict[str, Any]:
"""Serialize context to a dictionary for persistence.
Returns:
Dictionary representation suitable for JSON serialization.
"""
return {
"flow_id": self.flow_id,
"flow_class": self.flow_class,
"method_name": self.method_name,
"method_output": self.method_output,
"message": self.message,
"emit": self.emit,
"default_outcome": self.default_outcome,
"metadata": self.metadata,
"llm": self.llm,
"requested_at": self.requested_at.isoformat(),
}
@classmethod
def from_dict(cls, data: dict[str, Any]) -> PendingFeedbackContext:
"""Deserialize context from a dictionary.
Args:
data: Dictionary representation of the context.
Returns:
Reconstructed PendingFeedbackContext instance.
"""
requested_at = data.get("requested_at")
if isinstance(requested_at, str):
requested_at = datetime.fromisoformat(requested_at)
elif requested_at is None:
requested_at = datetime.now()
return cls(
flow_id=data["flow_id"],
flow_class=data["flow_class"],
method_name=data["method_name"],
method_output=data.get("method_output"),
message=data.get("message", ""),
emit=data.get("emit"),
default_outcome=data.get("default_outcome"),
metadata=data.get("metadata", {}),
llm=data.get("llm"),
requested_at=requested_at,
)
class HumanFeedbackPending(Exception): # noqa: N818 - Not an error, a control flow signal
"""Signal that flow execution should pause for async human feedback.
When raised by a provider, the flow framework will:
1. Stop execution at the current method
2. Automatically persist state and context (if persistence is configured)
3. Return this object to the caller (not re-raise it)
The caller receives this as a return value from `flow.kickoff()`, enabling
graceful handling of the paused state without try/except blocks:
```python
result = flow.kickoff()
if isinstance(result, HumanFeedbackPending):
# Flow is paused, handle async feedback
print(f"Waiting for feedback: {result.context.flow_id}")
else:
# Normal completion
print(f"Flow completed: {result}")
```
Note:
The flow framework automatically saves pending feedback when this
exception is raised. Providers do NOT need to call `save_pending_feedback`
manually - just raise this exception and the framework handles persistence.
Attributes:
context: The PendingFeedbackContext with all details needed to resume
callback_info: Optional dict with information for external systems
(e.g., webhook URL, ticket ID, Slack thread ID)
Example:
```python
class SlackProvider(HumanFeedbackProvider):
def request_feedback(self, context, flow):
# Send notification to external system
ticket_id = self.create_slack_thread(context)
# Raise to pause - framework handles persistence automatically
raise HumanFeedbackPending(
context=context,
callback_info={
"slack_channel": "#reviews",
"thread_id": ticket_id,
}
)
```
"""
def __init__(
self,
context: PendingFeedbackContext,
callback_info: dict[str, Any] | None = None,
message: str | None = None,
):
"""Initialize the pending feedback exception.
Args:
context: The pending feedback context with flow details
callback_info: Optional information for external system callbacks
message: Optional custom message (defaults to descriptive message)
"""
self.context = context
self.callback_info = callback_info or {}
if message is None:
message = (
f"Human feedback pending for flow '{context.flow_id}' "
f"at method '{context.method_name}'"
)
super().__init__(message)
@runtime_checkable
class HumanFeedbackProvider(Protocol):
"""Protocol for human feedback collection strategies.
Implement this protocol to create custom feedback providers that integrate
with external systems like Slack, Teams, email, or custom APIs.
Providers can be either:
- **Synchronous (blocking)**: Return feedback string directly
- **Asynchronous (non-blocking)**: Raise HumanFeedbackPending to pause
The default ConsoleProvider is synchronous and blocks waiting for input.
For async workflows, implement a provider that raises HumanFeedbackPending.
Note:
The flow framework automatically handles state persistence when
HumanFeedbackPending is raised. Providers only need to:
1. Notify the external system (Slack, email, webhook, etc.)
2. Raise HumanFeedbackPending with the context and callback info
Example synchronous provider:
```python
class ConsoleProvider(HumanFeedbackProvider):
def request_feedback(self, context, flow):
print(context.method_output)
return input("Your feedback: ")
```
Example async provider:
```python
class SlackProvider(HumanFeedbackProvider):
def __init__(self, channel: str):
self.channel = channel
def request_feedback(self, context, flow):
# Send notification to Slack
thread_id = self.post_to_slack(
channel=self.channel,
message=context.message,
content=context.method_output,
)
# Raise to pause - framework handles persistence automatically
raise HumanFeedbackPending(
context=context,
callback_info={
"channel": self.channel,
"thread_id": thread_id,
}
)
```
"""
def request_feedback(
self,
context: PendingFeedbackContext,
flow: Flow,
) -> str:
"""Request feedback from a human.
For synchronous providers, block and return the feedback string.
For async providers, notify the external system and raise
HumanFeedbackPending to pause the flow.
Args:
context: The pending feedback context containing all details
about what feedback is needed and how to route the response.
flow: The Flow instance, providing access to state and name.
Returns:
The human's feedback as a string (synchronous providers only).
Raises:
HumanFeedbackPending: To signal that the flow should pause and
wait for external feedback. The framework will automatically
persist state when this is raised.
"""
...
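A quick round-trip of the context through its dict form, as a persistence layer might do (field values are illustrative):

```python
from crewai.flow.async_feedback import PendingFeedbackContext

ctx = PendingFeedbackContext(
    flow_id="abc-123",                 # illustrative values throughout
    flow_class="myapp.ReviewFlow",
    method_name="review_content",
    method_output="Draft body",
    message="Please review and approve or reject:",
    emit=["approved", "rejected"],
)
payload = ctx.to_dict()                          # JSON-serializable dict
restored = PendingFeedbackContext.from_dict(payload)
assert restored.flow_id == ctx.flow_id
assert restored.requested_at == ctx.requested_at  # isoformat round-trips
```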

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff