Compare commits

..

63 Commits

Author SHA1 Message Date
Greyson LaLonde
96031fa358 Merge branch 'main' into feat/cli-predeploy-validation 2026-04-11 05:56:26 +08:00
Greyson LaLonde
3b280e41fb chore: bump pypdf to 6.10.0 for GHSA-3crg-w4f6-42mx
Resolves CVE-2026-40260 where manipulated XMP metadata entity
declarations can exhaust RAM in pypdf <6.10.0.
2026-04-11 05:56:11 +08:00
Greyson LaLonde
bbc7392a32 Merge branch 'main' into feat/cli-predeploy-validation 2026-04-11 05:52:01 +08:00
Greyson LaLonde
9f61deb072 feat: add crewai deploy validate pre-deploy validation
Adds a new `crewai deploy validate` command that checks a project
locally against the most common categories of deploy-time failures,
so users don't burn attempts on fixable project-structure problems.
`crewai deploy create` and `crewai deploy push` now run the same
checks automatically and abort on errors; `--skip-validate` opts out.

Checks (errors block, warnings print only):
  1. pyproject.toml present with `[project].name`
  2. lockfile (uv.lock or poetry.lock) present and not stale
  3. src/<package>/ resolves, rejecting empty names and .egg-info dirs
  4. crew.py, config/agents.yaml, config/tasks.yaml for standard crews
  5. main.py for flow projects
  6. hatchling wheel target resolves
  7. crew/flow module imports cleanly in a `uv run` subprocess, with
     classification of common failures (missing provider extras,
     missing API keys at import, stale crewai pins, pydantic errors)
  8. env vars referenced in source vs .env (warning only)
  9. crewai lockfile pin vs a known-bad cutoff (warning only)

Each finding has a stable code and a structured title/detail/hint so
downstream tooling and tests can pin behavior. 33 tests cover the
checks 1:1 against the failure patterns observed in practice.
2026-04-11 05:50:15 +08:00
Greyson LaLonde
851df79a82 fix: defer native LLM client construction when credentials are missing
All native LLM providers built their SDK clients inside
`@model_validator(mode="after")`, which required the API key at
`LLM(...)` construction time. Instantiating an LLM at module scope
(e.g. `chat_llm=LLM(model="openai/gpt-4o-mini")` on a `@crew` method)
crashed during downstream crew-metadata extraction with a confusing
`ImportError: Error importing native provider: 1 validation error...`
before the process env vars were ever consulted.

Wrap eager client construction in a try/except in each provider and
add `_get_sync_client` / `_get_async_client` methods that build on
first use. OpenAI call sites are routed through the lazy getters so
calls made without eager construction still work. The descriptive
"X_API_KEY is required" errors are re-raised from the lazy path at
first real call.

Update two Azure tests that asserted the old eager-error contract to
assert the new lazy-error contract.
2026-04-11 05:48:42 +08:00
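The deferral pattern this commit describes, wrapping eager client construction in try/except and re-raising the descriptive error from a lazy getter at first real use, can be sketched like this. The class and the dict standing in for the SDK client are hypothetical:

```python
import os


class ApiKeyMissingError(RuntimeError):
    pass


class OpenAICompletion:
    """Sketch: construction no longer requires the API key;
    the client is built on first real call instead."""

    def __init__(self, model: str):
        self.model = model
        self._sync_client = None
        # Eager construction is attempted but wrapped, so a module-scope
        # LLM(...) without env vars set cannot crash validation.
        try:
            self._sync_client = self._build_client()
        except ApiKeyMissingError:
            pass  # deferred; re-raised from the lazy path at first call

    def _build_client(self):
        key = os.environ.get("OPENAI_API_KEY")
        if not key:
            raise ApiKeyMissingError("OPENAI_API_KEY is required")
        return {"api_key": key, "model": self.model}  # stand-in for the SDK client

    def _get_sync_client(self):
        # Call sites route through this getter, so calls made without
        # eager construction still work once the key is available.
        if self._sync_client is None:
            self._sync_client = self._build_client()
        return self._sync_client
```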
Greyson LaLonde
8de4421705 fix: sanitize tool schemas for strict mode
Pydantic schemas intermittently fail strict tool-use on openai, anthropic,
and bedrock. All three reject nested objects missing additionalProperties:
false, and anthropic also rejects keywords like minLength and top-level
anyOf. Adds per-provider sanitizers that inline refs, close objects, mark
every property required, preserve nullable unions, and strip keywords each
grammar compiler rejects. Verified against real bedrock, anthropic, and
openai.
2026-04-11 05:26:48 +08:00
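The core of the sanitizers described above, closing every nested object with `additionalProperties: false`, marking every property required, and stripping keywords a grammar compiler rejects, can be illustrated with a minimal recursive pass (a sketch, not the per-provider implementations this commit adds):

```python
def sanitize_for_strict(schema: dict) -> dict:
    """Recursively close objects and mark every property required,
    as strict tool-use grammars demand."""
    out = dict(schema)
    if out.get("type") == "object":
        props = {k: sanitize_for_strict(v)
                 for k, v in out.get("properties", {}).items()}
        out["properties"] = props
        out["required"] = list(props)        # every property required
        out["additionalProperties"] = False  # close the object
    elif out.get("type") == "array" and "items" in out:
        out["items"] = sanitize_for_strict(out["items"])
    # Example of a keyword some compilers (e.g. Anthropic's) reject.
    out.pop("minLength", None)
    return out
```

The real sanitizers also inline `$ref`s and preserve nullable unions, which this sketch omits.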
Greyson LaLonde
62484934c1 chore: bump uv to 0.11.6 for GHSA-pjjw-68hj-v9mw
Low-severity advisory: malformed RECORD entries in wheels could delete
files outside the venv on uninstall. Fixed in uv 0.11.6.
2026-04-11 05:09:24 +08:00
Greyson LaLonde
298fc7b9c0 chore: drop tiktoken from anthropic async max_tokens test 2026-04-11 03:20:20 +08:00
Greyson LaLonde
9537ba0413 ci: add pip-audit pre-commit hook 2026-04-11 03:06:31 +08:00
Greyson LaLonde
ace9617722 test: re-record hierarchical verbose manager cassette 2026-04-11 02:35:00 +08:00
Greyson LaLonde
7e1672447b fix: deflake MemoryRecord embedding serialization test
Substring checks like `'0.1' not in json_str` collided with timestamps
such as `2026-04-10T13:00:50.140557` on CI. Round-trip through
`model_validate_json` to verify structurally that the embedding field
is absent from the serialized output.
2026-04-11 02:01:23 +08:00
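The structural check this commit switches to can be illustrated with plain `json` (the real test round-trips through pydantic's `model_validate_json`; the helper name here is made up):

```python
import json


def embedding_absent(json_str: str) -> bool:
    """Parse the JSON and inspect the key directly. A substring test
    like `'0.1' not in json_str` can collide with timestamp digits
    such as 13:00:50.140557, which contain '0.1'."""
    return "embedding" not in json.loads(json_str)
```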
Greyson LaLonde
ea58f8d34d docs: update changelog and version for v1.14.2a2 2026-04-10 21:58:55 +08:00
Greyson LaLonde
fe93333066 feat: bump versions to 1.14.2a2 2026-04-10 21:51:51 +08:00
Greyson LaLonde
1293dee241 feat: checkpoint TUI with tree view, fork support, editable inputs/outputs
- Rewrite TUI with Tree widget showing branch/fork lineage
- Add Resume and Fork buttons in detail panel with Collapsible entities
- Show branch and parent_id in detail panel and CLI info output
- Auto-detect .checkpoints.db when default dir missing
- Append .db to location for SqliteProvider when no extension set
- Fix RuntimeState.from_checkpoint not setting provider/location
- Fork now writes initial checkpoint on new branch
- Add from_checkpoint, fork, and CLI docs to checkpointing.mdx
2026-04-10 21:24:49 +08:00
Greyson LaLonde
6efa142e22 fix: forward strict mode to Anthropic and Bedrock providers
The OpenAI-format tool schema sets strict: true but this was dropped
during conversion to Anthropic/Bedrock formats, so neither provider
used constrained decoding. Without it, the model can return string
"None" instead of JSON null for nullable fields, causing Pydantic
validation failures.
2026-04-10 15:32:54 +08:00
Lucas Gomide
fc6792d067 feat: enrich LLM token tracking with reasoning tokens, cache creation tokens (#5389)
2026-04-10 00:22:27 -04:00
Greyson LaLonde
84b1b0a0b0 feat: add from_checkpoint parameter to kickoff methods
Accept CheckpointConfig on Crew and Flow kickoff/kickoff_async/akickoff.
When restore_from is set, the entity resumes from that checkpoint.
When only config fields are set, checkpointing is enabled for the run.
Adds restore_from field (Path | str | None) to CheckpointConfig.
2026-04-10 03:47:23 +08:00
Greyson LaLonde
56cf8a4384 feat: embed crewai_version in checkpoints with migration framework
Write the crewAI package version into every checkpoint blob. On restore,
run version-based migrations so older checkpoints can be transformed
forward to the current format. Adds crewai.utilities.version module.
2026-04-10 01:13:30 +08:00
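A version-keyed migration framework of the kind described above might look roughly like this; the registry, decorator, and `restore` function are illustrative assumptions, not the crewAI API:

```python
from typing import Callable


def _ver(v: str) -> tuple[int, ...]:
    # Parse "1.14.2" into (1, 14, 2); pre-release suffixes like "a2"
    # are stripped here for simplicity.
    return tuple(int("".join(ch for ch in part if ch.isdigit()) or 0)
                 for part in v.split("."))


MIGRATIONS: list[tuple[str, Callable[[dict], dict]]] = []


def migration(introduced_in: str):
    """Register a transform for checkpoints written before introduced_in."""
    def register(fn):
        MIGRATIONS.append((introduced_in, fn))
        return fn
    return register


def restore(blob: dict, current_version: str) -> dict:
    """Apply migrations in version order, then stamp the current version."""
    data = dict(blob)
    written = _ver(data.get("crewai_version", "0.0.0"))
    for introduced_in, fn in sorted(MIGRATIONS, key=lambda m: _ver(m[0])):
        if written < _ver(introduced_in):
            data = fn(data)
    data["crewai_version"] = current_version
    return data
```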
Greyson LaLonde
68c754883d feat: add checkpoint forking with lineage tracking 2026-04-10 00:03:28 +08:00
alex-clawd
ce56472fc3 fix: harden NL2SQLTool — read-only default, query validation, parameterized queries (#5311)
* fix: harden NL2SQLTool — read-only by default, parameterized queries, query validation

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: address CI lint failures and remove unused import

- Remove unused `sessionmaker` import from test_nl2sql_security.py
- Use `Self` return type on `_apply_env_override` (fixes UP037/F821)
- Fix ruff errors auto-fixed in lib/crewai (UP007, etc.)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: expand _WRITE_COMMANDS and block multi-statement semicolon injection

- Add missing write commands: UPSERT, LOAD, COPY, VACUUM, ANALYZE,
  ANALYSE, REINDEX, CLUSTER, REFRESH, COMMENT, SET, RESET
- _validate_query() now splits on ';' and validates each statement
  independently; multi-statement queries are rejected outright in
  read-only mode to prevent 'SELECT 1; DROP TABLE users' bypass
- Extract single-statement logic into _validate_statement() helper
- Add TestSemicolonInjection and TestExtendedWriteCommands test classes

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* ci: retrigger

* fix: use typing_extensions.Self for Python 3.10 compat

* chore: update tool specifications

* docs: document NL2SQLTool read-only default and DML configuration

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: close three NL2SQLTool security gaps (writable CTEs, EXPLAIN ANALYZE, multi-stmt commit)

- Remove WITH from _READ_ONLY_COMMANDS; scan CTE body for write keywords so
  writable CTEs like `WITH d AS (DELETE …) SELECT …` are blocked in read-only mode.
- EXPLAIN ANALYZE/ANALYSE now resolves the underlying command; EXPLAIN ANALYZE DELETE
  is treated as a write and blocked in read-only mode.
- execute_sql commit decision now checks ALL semicolon-separated statements so
  a SELECT-first batch like `SELECT 1; DROP TABLE t` still triggers a commit
  when allow_dml=True.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: handle parenthesized EXPLAIN options syntax; remove unused _seed_db

_validate_statement now strips parenthesized options from EXPLAIN (e.g.
EXPLAIN (ANALYZE) DELETE, EXPLAIN (ANALYZE, VERBOSE) DELETE) before
checking whether ANALYZE/ANALYSE is present — closing the bypass where
the options-list form was silently allowed in read-only mode.

Adds three new tests:
  - EXPLAIN (ANALYZE) DELETE  → blocked
  - EXPLAIN (ANALYZE, VERBOSE) DELETE  → blocked
  - EXPLAIN (VERBOSE) SELECT  → allowed

Also removes the unused _seed_db helper from test_nl2sql_security.py.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* chore: update tool specifications

* fix: smarter CTE write detection, fix commit logic for writable CTEs

- Replace naive token-set matching with positional AS() body inspection
  to avoid false positives on column names like 'comment', 'set', 'reset'
- Fix execute_sql commit logic to detect writable CTEs (WITH + DELETE/INSERT)
  not just top-level write commands
- Add tests for false positive cases and writable CTE commit behavior
- Format nl2sql_tool.py to pass ruff format check

* fix: catch write commands in CTE main query + handle whitespace in AS()

- WITH cte AS (SELECT 1) DELETE FROM users now correctly blocked
- AS followed by newline/tab/multi-space before ( now detected
- execute_sql commit logic updated for both cases
- 4 new tests

* fix: EXPLAIN ANALYZE VERBOSE handling, string literal paren bypass, commit logic for EXPLAIN ANALYZE

- EXPLAIN handler now consumes all known options (ANALYZE, ANALYSE, VERBOSE) before
  extracting the real command, fixing 'EXPLAIN ANALYZE VERBOSE SELECT' being blocked
- Paren walker in _extract_main_query_after_cte now skips string literals, preventing
  `WITH cte AS (SELECT '(' FROM t) DELETE FROM users` from bypassing detection
- _is_write_stmt in execute_sql now resolves EXPLAIN ANALYZE to underlying command
  via _resolve_explain_command, ensuring session.commit() fires for write operations
- 10 new tests covering all three fixes

* fix: deduplicate EXPLAIN parsing, fix AS( regex in strings, block unknown CTE commands, bump langchain-core

- Refactor _validate_statement to use _resolve_explain_command (single source of truth)
- _iter_as_paren_matches skips string literals so 'AS (' in data doesn't confuse CTE detection
- Unknown commands after CTE definitions now blocked in read-only mode
- Bump langchain-core override to >=1.2.28 (GHSA-926x-3r5x-gfhw)

* fix: add return type annotation to _iter_as_paren_matches

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-04-09 03:21:38 -03:00
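The semicolon-splitting validation this PR introduces can be sketched in a few lines. The command list and function names are illustrative, and a naive `split(";")` does not handle semicolons inside string literals, which is part of why the real implementation went through several hardening rounds:

```python
_READ_ONLY = {"SELECT", "SHOW", "DESCRIBE", "EXPLAIN", "PRAGMA"}  # illustrative


def validate_query(sql: str, read_only: bool = True) -> None:
    """Split on ';' and validate each statement independently; in
    read-only mode any multi-statement query is rejected outright,
    closing the 'SELECT 1; DROP TABLE users' bypass."""
    statements = [s.strip() for s in sql.split(";") if s.strip()]
    if read_only and len(statements) > 1:
        raise ValueError("multi-statement queries are blocked in read-only mode")
    for stmt in statements:
        command = stmt.split(None, 1)[0].upper()
        if read_only and command not in _READ_ONLY:
            raise ValueError(f"{command} is blocked in read-only mode")
```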
Greyson LaLonde
06fe163611 docs: update changelog and version for v1.14.2a1
2026-04-09 07:26:22 +08:00
Greyson LaLonde
3b52b1a800 feat: bump versions to 1.14.2a1 2026-04-09 07:21:39 +08:00
Greyson LaLonde
9ab67552a7 fix: emit flow_finished event after HITL resume
resume_async() was missing trace infrastructure that kickoff_async()
sets up, causing flow_finished to never reach the platform after HITL
feedback. Add FlowStartedEvent emission to initialize the trace batch,
await event futures, finalize the trace batch, and guard with
suppress_flow_events.
2026-04-09 05:31:31 +08:00
Greyson LaLonde
8cdde16ac8 fix: bump cryptography to 46.0.7 for CVE-2026-39892 2026-04-09 05:17:31 +08:00
Greyson LaLonde
0e590ff669 refactor: use shared I18N_DEFAULT singleton 2026-04-09 04:29:53 +08:00
Greyson LaLonde
15f5bff043 docs: update changelog and version for v1.14.1
2026-04-09 01:56:51 +08:00
Greyson LaLonde
a0578bb6c3 feat: bump versions to 1.14.1 2026-04-09 01:45:40 +08:00
Greyson LaLonde
00400a9f31 ci: skip python tests, lint, and type checks on docs-only PRs 2026-04-09 01:34:47 +08:00
Lorenze Jay
5c08e566b5 dedicate skills page (#5331) 2026-04-08 10:10:18 -07:00
Greyson LaLonde
fe028ef400 docs: update changelog and version for v1.14.1rc1 2026-04-09 00:29:04 +08:00
Greyson LaLonde
52c227ab17 feat: bump versions to 1.14.1rc1 2026-04-09 00:22:24 +08:00
Greyson LaLonde
8bae740899 fix: use regex for template pyproject.toml version bumps
tomlkit.parse() fails on Jinja placeholders like {{folder_name}}
in CLI template files. Switch to regex replacement for templates.
2026-04-09 00:13:07 +08:00
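The regex replacement described above might look something like this (a sketch; the actual helper name and pattern in the CLI may differ):

```python
import re


def bump_template_version(text: str, new_version: str) -> str:
    """Replace the version = "..." line in a template pyproject.toml.
    tomlkit.parse() fails on Jinja placeholders like {{folder_name}},
    so a plain regex substitution is used instead."""
    return re.sub(
        r'^(version\s*=\s*")[^"]*(")',
        rf"\g<1>{new_version}\g<2>",
        text,
        count=1,
        flags=re.MULTILINE,
    )
```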
Greyson LaLonde
1c784695c1 feat: add async checkpoint TUI browser
Launch a Textual TUI via `crewai checkpoint` to browse and resume
from checkpoints. Uses run_async/akickoff for fully async execution.
Adds provider auto-detection from file magic bytes.
2026-04-08 23:59:09 +08:00
iris-clawd
1ae237a287 refactor: replace hardcoded denylist with dynamic BaseTool field exclusion in spec gen (#5347)
The spec generator previously used a hardcoded list of field names to
exclude from init_params_schema. Any new field or computed_field added
to BaseTool (like tool_type from 86ce54f) would silently leak into
tool.specs.json unless someone remembered to update that list.

Now _extract_init_params() dynamically computes BaseTool's fields at
import time via model_fields + model_computed_fields, so any future
additions to BaseTool are automatically excluded.

Fields from intermediate base classes (RagTool, BraveSearchToolBase,
SerpApiBaseTool) are correctly preserved since they're not on BaseTool.

TDD:
- RED: 3 new tests confirming BaseTool field leak, intermediate base
  preservation, and future-proofing — all failed before the fix
- GREEN: Dynamic allowlist applied — all 10 tests pass
- Regenerated tool.specs.json (tool_type removed from all tools)
2026-04-08 11:49:16 -04:00
Greyson LaLonde
0e8ed75947 feat: add aclose()/close() and async context manager to streaming outputs 2026-04-08 23:32:37 +08:00
Greyson LaLonde
98e0d1054f fix: sanitize tool names in hook decorator filters 2026-04-08 21:02:25 +08:00
Greyson LaLonde
fc9280ccf6 refactor: replace regex with tomlkit in devtools CLI
2026-04-08 19:52:51 +08:00
Greyson LaLonde
f4c0667d34 fix: bump transformers to 5.5.0 to resolve CVE-2026-1839
Bumps docling pin from ~=2.75.0 to ~=2.84.0 (allows huggingface-hub>=1)
and adds a transformers>=5.4.0 override to force resolution past 4.57.6.
2026-04-08 18:59:51 +08:00
Greyson LaLonde
0450d06a65 refactor: use shared PRINTER singleton
2026-04-08 07:17:22 +08:00
Greyson LaLonde
b23b2696fe fix: remove FilteredStream stdout/stderr wrapper
Wrapping sys.stdout and sys.stderr at import time with a
threading.Lock is not fork-safe and adds overhead to every
print call. litellm.suppress_debug_info already silences the
noisy output this was designed to filter.
2026-04-08 04:58:05 +08:00
Greyson LaLonde
8700e3db33 chore: remove unused flow/config.py 2026-04-08 04:37:31 +08:00
Greyson LaLonde
75f162fd3c refactor: make BaseProvider a BaseModel with provider_type discriminator
Replace the Protocol with a BaseModel + ABC so providers serialize and
deserialize natively via pydantic. Each provider gets a Literal
provider_type field. CheckpointConfig.provider uses a discriminated
union so the correct provider class is reconstructed from checkpoint JSON.
2026-04-08 03:14:54 +08:00
Greyson LaLonde
c0f3151e13 fix: register checkpoint handlers when CheckpointConfig is created 2026-04-08 02:11:34 +08:00
João Moura
25eb4adc49 docs: update changelog and version for v1.14.0 (#5322) 2026-04-07 14:47:34 -03:00
João Moura
1534ba202d feat: bump versions to 1.14.0 (#5321) 2026-04-07 14:45:39 -03:00
Greyson LaLonde
868416bfe0 fix: add SSRF and path traversal protections (#5315)
* fix: add SSRF and path traversal protections

CVE-2026-2286: validate_url blocks non-http/https schemes, private
IPs, loopback, link-local, reserved addresses. Applied to 11 web tools.

CVE-2026-2285: validate_path confines file access to the working
directory. Applied to 7 file and directory tools.

* fix: drop unused assignment from validate_url call

* fix: DNS rebinding protection and allow_private flag

Rewrite validated URLs to use the resolved IP, preventing DNS rebinding
between validation and request time. SDK-based tools use pin_ip=False
since they manage their own HTTP clients. Add allow_private flag for
deployments that need internal network access.

* fix: unify security utilities and restore RAG chokepoint validation

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* refactor: move validation to security/ package + address review comments

- Move safe_path.py to crewai_tools/security/; add safe_url.py re-export
- Keep utilities/safe_path.py as a backwards-compat shim
- Update all 21 import sites to use crewai_tools.security.safe_path
- files_compressor_tool: validate output_path (user-controlled)
- serper_scrape_website_tool: call validate_url() before building payload
- brightdata_unlocker: validate_url() already called without assignment (no-op fix)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* refactor: move validation to security/ package, keep utilities/ as compat shim

- security/safe_path.py is the canonical location for all validation
- utilities/safe_path.py re-exports for backward compatibility
- All tool imports already point to security.safe_path
- All review comments already addressed in prior commits

* fix: move validation outside try/except blocks, use correct directory validator

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: use resolved paths from validation to prevent symlink TOCTOU, remove unused safe_url.py

---------

Co-authored-by: Alex <alex@crewai.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-07 14:44:50 -03:00
Greyson LaLonde
a5df7c798c feat: checkpoint list/info CLI commands 2026-04-08 01:28:25 +08:00
Greyson LaLonde
5958a16ade refactor: checkpoint API cleanup 2026-04-08 01:13:23 +08:00
alex-clawd
9325e2f6a4 fix: add path and URL validation to RAG tools (#5310)
* fix: add path and URL validation to RAG tools

Add validation utilities to prevent unauthorized file reads and SSRF
when RAG tools accept LLM-controlled paths/URLs at runtime.

Changes:
- New crewai_tools.utilities.safe_path module with validate_file_path(),
  validate_directory_path(), and validate_url()
- File paths validated against base directory (defaults to cwd).
  Resolves symlinks and ../ traversal. Rejects escape attempts.
- URLs validated: file:// blocked entirely. HTTP/HTTPS resolves DNS
  and blocks private/reserved IPs (10.x, 172.16-31.x, 192.168.x,
  127.x, 169.254.x, 0.0.0.0, ::1, fc00::/7).
- Validation applied in RagTool.add() — catches all RAG search tools
  (JSON, CSV, PDF, TXT, DOCX, MDX, Directory, etc.)
- Removed file:// scheme support from DataTypes.from_content()
- CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true env var for backward compat
- 27 tests covering traversal, symlinks, private IPs, cloud metadata,
  IPv6, escape hatch, and valid paths/URLs

* fix: validate path/URL keyword args in RagTool.add()

The original patch validated positional *args but left all keyword
arguments (path=, file_path=, directory_path=, url=, website=,
github_url=, youtube_url=) unvalidated, providing a trivial bypass
for both path-traversal and SSRF checks.

Applies validate_file_path() to path/file_path/directory_path kwargs
and validate_url() to url/website/github_url/youtube_url kwargs before
they reach the adapter. Adds a regression-test file covering all eight
kwarg vectors plus the two existing positional-arg checks.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: address CodeQL and review comments on RAG path/URL validation

- Replace insecure tempfile.mktemp() with inline symlink target in test
- Remove unused 'target' variable and unused tempfile import
- Narrow broad except Exception: pass to only catch urlparse errors;
  validate_url ValueError now propagates instead of being silently swallowed
- Fix ruff B904 (raise-without-from-inside-except) in safe_path.py
- Fix ruff B007 (unused loop variable 'family') in safe_path.py
- Use validate_directory_path in DirectorySearchTool.add() so the
  public utility is exercised in production code

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* style: fix ruff format + remaining lint issues

* fix: resolve mypy type errors in RAG path/URL validation

- Cast sockaddr[0] to str() to satisfy mypy (socket.getaddrinfo returns
  sockaddr where [0] is str but typed as str | int)
- Remove now-unnecessary `type: ignore[assignment]` and
  `type: ignore[literal-required]` comments in rag_tool.py

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: unroll dynamic TypedDict key loops to satisfy mypy literal-required

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* test: allow tmp paths in RAG data-type tests via CREWAI_TOOLS_ALLOW_UNSAFE_PATHS

TemporaryDirectory creates files under /tmp/ which is outside CWD and is
correctly blocked by the new path validation.  These tests exercise
data-type handling, not security, so add an autouse fixture that sets
CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true for the whole file.  Path/URL
security is covered by test_rag_tool_path_validation.py.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* test: allow tmp paths in search-tool and rag_tool tests via CREWAI_TOOLS_ALLOW_UNSAFE_PATHS

test_search_tools.py has tests for TXTSearchTool, CSVSearchTool,
MDXSearchTool, JSONSearchTool, and DirectorySearchTool that create
files under /tmp/ via tempfile, which is outside CWD and correctly
blocked by the new path validation.  rag_tool_test.py has one test
that calls tool.add() with a TemporaryDirectory path.

Add the same autouse allow_tmp_paths fixture used in
test_rag_tool_add_data_type.py.  Security is covered separately by
test_rag_tool_path_validation.py.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* chore: update tool specifications

* docs: document CodeInterpreterTool removal and RAG path/URL validation

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: address three review comments on path/URL validation

- safe_path._is_private_or_reserved: after unwrapping IPv4-mapped IPv6
  to IPv4, only check against IPv4 networks to avoid TypeError when
  comparing an IPv4Address against IPv6Network objects.
- safe_path.validate_file_path: handle filesystem-root base_dir ('/')
  by not appending os.sep when the base already ends with a separator,
  preventing the '//'-prefix bug.
- rag_tool.add: path-detection heuristic now checks for both '/' and
  os.sep so forward-slash paths are caught on Windows as well as Unix.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: remove unused _BLOCKED_NETWORKS variable after IPv4/IPv6 split

* chore: update tool specifications

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-04-07 13:29:45 -03:00
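The core of the `validate_url()` check this PR describes, blocking non-http(s) schemes and resolving DNS to reject private/reserved addresses, can be sketched with stdlib modules only (a simplified illustration; the shipped version also handles IPv4-mapped IPv6 and the `CREWAI_TOOLS_ALLOW_UNSAFE_PATHS` escape hatch):

```python
import ipaddress
import socket
from urllib.parse import urlparse


def validate_url(url: str) -> str:
    """Block file:// and other non-http(s) schemes; resolve the host
    and reject private, loopback, link-local, or reserved addresses
    (e.g. 10.x, 127.x, 169.254.x cloud-metadata endpoints)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"scheme {parsed.scheme!r} is not allowed")
    if parsed.hostname is None:
        raise ValueError("URL has no host")
    for *_, sockaddr in socket.getaddrinfo(parsed.hostname, None):
        ip = ipaddress.ip_address(str(sockaddr[0]))
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            raise ValueError(f"{url} resolves to blocked address {ip}")
    return url
```

Note this sketch validates but does not pin the resolved IP, so on its own it is still open to the DNS-rebinding window the follow-up commit in #5315 closes by rewriting the URL to the resolved address.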
Greyson LaLonde
25e7ca03c4 docs: update changelog and version for v1.14.0a4 2026-04-07 23:29:21 +08:00
Greyson LaLonde
5b4a0e8734 feat: bump versions to 1.14.0a4 2026-04-07 23:22:58 +08:00
alex-clawd
e64b37c5fc refactor: remove CodeInterpreterTool and deprecate code execution params (#5309)
* refactor: remove CodeInterpreterTool and deprecate code execution params

CodeInterpreterTool has been removed. The allow_code_execution and
code_execution_mode parameters on Agent are deprecated and will be
removed in v2.0. Use dedicated sandbox services (E2B, Modal, etc.)
for code execution needs.

Changes:
- Remove CodeInterpreterTool from crewai-tools (tool, Dockerfile, tests, imports)
- Remove docker dependency from crewai-tools
- Deprecate allow_code_execution and code_execution_mode on Agent
- get_code_execution_tools() returns empty list with deprecation warning
- _validate_docker_installation() is a no-op with deprecation warning
- Bedrock CodeInterpreter (AWS hosted) and OpenAI code_interpreter are NOT affected

* fix: remove empty code_interpreter imports and unused stdlib imports

- Remove empty `from code_interpreter_tool import ()` blocks in both
  crewai_tools/__init__.py and tools/__init__.py that caused SyntaxError
  after CodeInterpreterTool was removed
- Remove unused `shutil` and `subprocess` imports from agent/core.py
  left over from the code execution params deprecation

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: remove redundant _validate_docker_installation call and fix list type annotation

- Drop the _validate_docker_installation() call inside the allow_code_execution
  block — it fired a second DeprecationWarning identical to the one emitted
  just above it, making the warning fire twice.
- Annotate get_code_execution_tools() return type as list[Any] to satisfy mypy
  (bare `list` fails the type-arg check introduced by this branch).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* ci: retrigger

* fix: update test_crew.py to remove CodeInterpreterTool references

CodeInterpreterTool was removed from crewai_tools. Update tests to
reflect that get_code_execution_tools() now returns an empty list.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* chore: update tool specifications

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-04-07 03:59:40 -03:00
Greyson LaLonde
c132d57a36 perf: use JSONB for checkpoint data column
2026-04-07 09:35:26 +08:00
Lucas Gomide
ad24c3d56e feat: add guardrail_type and name to distinguish traces (#5303)
* feat: add guardrail_type to distinguish between hallucination, function, and LLM

* feat: introduce guardrail_name into guardrail events

* feat: propagate guardrail type and name on guardrail completed event

* feat: remove unused LLMGuardrailFailedEvent

* fix: handle running event loop in LLMGuardrail._validate_output

When agent.kickoff() returns a coroutine inside an already-running event loop, asyncio.run() fails
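The running-loop pitfall described above is a common asyncio gotcha: `asyncio.run()` raises `RuntimeError` when called while an event loop is already running in the current thread. A minimal sketch of one way to handle it (this is not the actual LLMGuardrail code; the helper name is hypothetical):

```python
import asyncio
import concurrent.futures


def run_coro_blocking(coro):
    """Run a coroutine to completion whether or not a loop is already running."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop running in this thread: asyncio.run() is safe.
        return asyncio.run(coro)
    # A loop is already running: asyncio.run() would raise, so execute the
    # coroutine on a fresh loop in a worker thread and block for the result.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coro).result()
```

The worker-thread fallback keeps the caller synchronous at the cost of one extra thread per call; a production implementation might cache the executor.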
2026-04-06 18:52:53 -04:00
Lorenze Jay
0c307f1621 docs: update quickstart and installation guides for improved clarity (#5301)
* docs: update quickstart and installation guides for improved clarity

- Revised the quickstart guide to emphasize creating a Flow and running a single-agent crew that generates a report.
- Updated the installation documentation to reflect changes in the quickstart process and enhance user understanding.

* translations
2026-04-06 15:04:54 -07:00
Greyson LaLonde
f98dde6c62 docs: add storage providers section, export JsonProvider 2026-04-07 06:04:29 +08:00
Greyson LaLonde
6b6e191532 feat: add SqliteProvider for checkpoint storage 2026-04-07 05:54:05 +08:00
Greyson LaLonde
c4e2d7ea3b feat: add CheckpointConfig for automatic checkpointing 2026-04-07 05:34:25 +08:00
Greyson LaLonde
86ce54fc82 feat: runtime state checkpointing, event system, and executor refactor
- Pass RuntimeState through the event bus and enable entity auto-registration
- Introduce checkpointing API:
  - .checkpoint(), .from_checkpoint(), and async checkpoint support
  - Provider-based storage with BaseProvider and JsonProvider
  - Mid-task resume and kickoff() integration
- Add EventRecord tracking and full event serialization with subtype preservation
- Enable checkpoint fidelity via llm_type and executor_type discriminators

- Refactor executor architecture:
  - Convert executors, tools, prompts, and TokenProcess to BaseModel
  - Introduce proper base classes with typed fields (CrewAgentExecutorMixin, BaseAgentExecutor)
  - Add generic from_checkpoint with full LLM serialization
  - Support executor back-references and resume-safe initialization

- Refactor runtime state system:
  - Move RuntimeState into state/ module with async checkpoint support
  - Add entity serialization improvements and JSON-safe round-tripping
  - Implement event scope tracking and replay for accurate resume behavior

- Improve tool and schema handling:
  - Make BaseTool fully serializable with JSON round-trip support
  - Serialize args_schema via JSON schema and dynamically reconstruct models
  - Add automatic subclass restoration via tool_type discriminator

- Enhance Flow checkpointing:
  - Support restoring execution state and subclass-aware deserialization

- Performance improvements:
  - Cache handler signature inspection
  - Optimize event emission and metadata preparation

- General cleanup:
  - Remove dead checkpoint payload structures
  - Simplify entity registration and serialization logic
2026-04-07 03:22:30 +08:00
alex-clawd
bf2f4dbce6 fix: exclude embedding vectors from memory serialization (saves tokens) (#5298)
Some checks failed
Build uv cache / build-cache (3.11) (push) Has been cancelled
Build uv cache / build-cache (3.12) (push) Has been cancelled
Build uv cache / build-cache (3.13) (push) Has been cancelled
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Vulnerability Scan / pip-audit (push) Has been cancelled
Build uv cache / build-cache (3.10) (push) Has been cancelled
* fix: exclude embedding vector from MemoryRecord serialization

MemoryRecord.embedding (1536 floats for OpenAI embeddings) was included
in model_dump()/JSON serialization and repr. When recall results flow
to agents or get logged, these vectors burn tokens for zero value —
agents never need the raw embedding.

Added exclude=True and repr=False to the embedding field. The storage
layer accesses record.embedding directly (not via model_dump), so
persistence is unaffected.

* test: validate embedding excluded from serialization

Two tests:
1. MemoryRecord — model_dump, model_dump_json, and repr all exclude
   embedding. Direct attribute access still works for storage layer.
2. MemoryMatch — nested record serialization also excludes embedding.
2026-04-06 14:48:58 -03:00
Lorenze Jay
fdb9b6f090 fix: bump litellm to >=1.83.0 to address CVE-2026-35030
* fix: bump litellm to >=1.83.0 to address CVE-2026-35030

Bump litellm from <=1.82.6 to >=1.83.0 to fix JWT auth bypass via
OIDC cache key collision (CVE-2026-35030). Also widen devtools openai
pin from ~=1.83.0 to >=1.83.0,<3 to resolve the version conflict
(litellm 1.83.0 requires openai>=2.8.0).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve mypy errors from litellm bump

- Remove unused type: ignore[import-untyped] on instructor import
- Remove all unused type: ignore[union-attr] comments (litellm types fixed)
- Add hasattr guard for tool_call.function — new litellm adds
  ChatCompletionMessageCustomToolCall to the union which lacks .function

* fix: tighten litellm pin to ~=1.83.0 (patch-only bumps)

>=1.83.0,<2 is too wide — litellm has had breaking changes between
minors. ~=1.83.0 means >=1.83.0,<1.84.0 — gets CVE patches but won't
pull in breaking minor releases.
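The `~=` reasoning above is PEP 440's compatible-release operator. A small sketch, assuming the third-party `packaging` library, showing exactly which versions the pin admits:

```python
from packaging.specifiers import SpecifierSet

# PEP 440 compatible-release operator: "~=1.83.0" is equivalent to
# ">=1.83.0, ==1.83.*", i.e. patch bumps only.
pin = SpecifierSet("~=1.83.0")

assert "1.83.5" in pin      # CVE patch release: accepted
assert "1.84.0" not in pin  # potentially breaking minor release: rejected
assert "1.82.9" not in pin  # older version: rejected
```

Note the equivalence depends on the number of components in the pin: `~=1.83` would instead mean `>=1.83, ==1.*` and allow minor bumps.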

* ci: bump uv from 0.8.4 to 0.11.3

* fix: resolve mypy errors in openai completion from 2.x type changes

Use isinstance checks with concrete openai response types instead of
string comparisons for proper type narrowing. Update code interpreter
handling for outputs/OutputImage API changes in openai 2.x.

* fix: pre-cache tiktoken encoding before VCR intercepts requests

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Alex <alex@crewai.com>
Co-authored-by: Greyson LaLonde <greyson@crewai.com>
2026-04-07 00:41:20 +08:00
João Moura
71b4667a0e docs: update changelog and version for v1.14.0a3 (#5296)
Some checks failed
Check Documentation Broken Links / Check broken links (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
Build uv cache / build-cache (3.10) (push) Has been cancelled
Build uv cache / build-cache (3.11) (push) Has been cancelled
Build uv cache / build-cache (3.12) (push) Has been cancelled
Build uv cache / build-cache (3.13) (push) Has been cancelled
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Vulnerability Scan / pip-audit (push) Has been cancelled
2026-04-06 05:17:58 -03:00
João Moura
c393bd2ee6 feat: bump versions to 1.14.0a3 (#5295) 2026-04-06 05:17:10 -03:00
271 changed files with 18598 additions and 4852 deletions

View File

@@ -28,7 +28,7 @@ jobs:
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
version: "0.11.3"
python-version: ${{ matrix.python-version }}
enable-cache: false

View File

@@ -35,7 +35,7 @@ jobs:
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
version: "0.11.3"
python-version: "3.12"
enable-cache: true

View File

@@ -6,7 +6,24 @@ permissions:
contents: read
jobs:
lint:
changes:
name: Detect changes
runs-on: ubuntu-latest
outputs:
code: ${{ steps.filter.outputs.code }}
steps:
- uses: actions/checkout@v4
- uses: dorny/paths-filter@v3
id: filter
with:
filters: |
code:
- '!docs/**'
- '!**/*.md'
lint-run:
needs: changes
if: needs.changes.outputs.code == 'true'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
@@ -26,7 +43,7 @@ jobs:
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
version: "0.11.3"
python-version: "3.11"
enable-cache: false
@@ -48,3 +65,23 @@ jobs:
~/.local/share/uv
.venv
key: uv-main-py3.11-${{ hashFiles('uv.lock') }}
# Summary job to provide single status for branch protection
lint:
name: lint
runs-on: ubuntu-latest
needs: [changes, lint-run]
if: always()
steps:
- name: Check results
run: |
if [ "${{ needs.changes.outputs.code }}" != "true" ]; then
echo "Docs-only change, skipping lint"
exit 0
fi
if [ "${{ needs.lint-run.result }}" == "success" ]; then
echo "Lint passed"
else
echo "Lint failed"
exit 1
fi

View File

@@ -95,7 +95,7 @@ jobs:
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
version: "0.11.3"
python-version: "3.12"
enable-cache: false

View File

@@ -65,7 +65,7 @@ jobs:
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
version: "0.11.3"
python-version: "3.12"
enable-cache: false

View File

@@ -6,8 +6,25 @@ permissions:
contents: read
jobs:
tests:
changes:
name: Detect changes
runs-on: ubuntu-latest
outputs:
code: ${{ steps.filter.outputs.code }}
steps:
- uses: actions/checkout@v4
- uses: dorny/paths-filter@v3
id: filter
with:
filters: |
code:
- '!docs/**'
- '!**/*.md'
tests-matrix:
name: tests (${{ matrix.python-version }})
needs: changes
if: needs.changes.outputs.code == 'true'
runs-on: ubuntu-latest
timeout-minutes: 15
strategy:
@@ -36,7 +53,7 @@ jobs:
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
version: "0.11.3"
python-version: ${{ matrix.python-version }}
enable-cache: false
@@ -98,3 +115,23 @@ jobs:
~/.local/share/uv
.venv
key: uv-main-py${{ matrix.python-version }}-${{ hashFiles('uv.lock') }}
# Summary job to provide single status for branch protection
tests:
name: tests
runs-on: ubuntu-latest
needs: [changes, tests-matrix]
if: always()
steps:
- name: Check results
run: |
if [ "${{ needs.changes.outputs.code }}" != "true" ]; then
echo "Docs-only change, skipping tests"
exit 0
fi
if [ "${{ needs.tests-matrix.result }}" == "success" ]; then
echo "All tests passed"
else
echo "Tests failed"
exit 1
fi

View File

@@ -6,8 +6,25 @@ permissions:
contents: read
jobs:
changes:
name: Detect changes
runs-on: ubuntu-latest
outputs:
code: ${{ steps.filter.outputs.code }}
steps:
- uses: actions/checkout@v4
- uses: dorny/paths-filter@v3
id: filter
with:
filters: |
code:
- '!docs/**'
- '!**/*.md'
type-checker-matrix:
name: type-checker (${{ matrix.python-version }})
needs: changes
if: needs.changes.outputs.code == 'true'
runs-on: ubuntu-latest
strategy:
fail-fast: false
@@ -33,7 +50,7 @@ jobs:
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
version: "0.11.3"
python-version: ${{ matrix.python-version }}
enable-cache: false
@@ -57,14 +74,18 @@ jobs:
type-checker:
name: type-checker
runs-on: ubuntu-latest
needs: type-checker-matrix
needs: [changes, type-checker-matrix]
if: always()
steps:
- name: Check matrix results
- name: Check results
run: |
if [ "${{ needs.type-checker-matrix.result }}" == "success" ] || [ "${{ needs.type-checker-matrix.result }}" == "skipped" ]; then
echo "✅ All type checks passed"
if [ "${{ needs.changes.outputs.code }}" != "true" ]; then
echo "Docs-only change, skipping type checks"
exit 0
fi
if [ "${{ needs.type-checker-matrix.result }}" == "success" ]; then
echo "All type checks passed"
else
echo "Type checks failed"
echo "Type checks failed"
exit 1
fi

View File

@@ -40,7 +40,7 @@ jobs:
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
version: "0.11.3"
python-version: ${{ matrix.python-version }}
enable-cache: false

View File

@@ -33,7 +33,7 @@ jobs:
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "0.8.4"
version: "0.11.3"
python-version: "3.11"
enable-cache: false

View File

@@ -21,9 +21,17 @@ repos:
types: [python]
exclude: ^(lib/crewai/src/crewai/cli/templates/|lib/crewai/tests/|lib/crewai-tools/tests/|lib/crewai-files/tests/)
- repo: https://github.com/astral-sh/uv-pre-commit
rev: 0.9.3
rev: 0.11.3
hooks:
- id: uv-lock
- repo: local
hooks:
- id: pip-audit
name: pip-audit
entry: bash -c 'source .venv/bin/activate && uv run pip-audit --skip-editable --ignore-vuln CVE-2025-69872 --ignore-vuln CVE-2026-25645 --ignore-vuln CVE-2026-27448 --ignore-vuln CVE-2026-27459 --ignore-vuln PYSEC-2023-235' --
language: system
pass_filenames: false
stages: [pre-push, manual]
- repo: https://github.com/commitizen-tools/commitizen
rev: v4.10.1
hooks:

View File

@@ -4,6 +4,210 @@ description: "Product updates, improvements, and fixes
icon: "clock"
mode: "wide"
---
<Update label="April 10, 2026">
## v1.14.2a2
[View the release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a2)
## What Changed
### Features
- Add a checkpoint TUI with tree view, branching support, and editable inputs/outputs
- Enrich LLM token tracking with reasoning tokens and cache-creation tokens
- Add a `from_checkpoint` parameter to kickoff methods
- Include `crewai_version` in checkpoints with a migration framework
- Add checkpoint branching with lineage tracking
### Bug Fixes
- Fix strict-mode routing to the Anthropic and Bedrock providers
- Harden NL2SQLTool with a read-only default mode, query validation, and parameterized queries
### Documentation
- Update changelog and version for v1.14.2a1
## Contributors
@alex-clawd, @github-actions[bot], @greysonlalonde, @lucasgomide
</Update>
<Update label="April 9, 2026">
## v1.14.2a1
[View the release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a1)
## What Changed
### Bug Fixes
- Fix emitting the flow_finished event after HITL resume
- Pin cryptography to 46.0.7 to address CVE-2026-39892
### Refactoring
- Refactor to use the shared I18N_DEFAULT
### Documentation
- Update changelog and version for v1.14.1
## Contributors
@greysonlalonde
</Update>
<Update label="April 9, 2026">
## v1.14.1
[View the release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.1)
## What Changed
### Features
- Add an async checkpoint TUI browser
- Add aclose()/close() and an async context manager for streaming output
### Bug Fixes
- Fix the regex for bumping the pyproject.toml version
- Sanitize tool names in hook decorator filters
- Fix checkpoint handler registration when creating CheckpointConfig
- Bump transformers to 5.5.0 to resolve CVE-2026-1839
- Remove the FilteredStream wrapper for stdout/stderr
### Documentation
- Update changelog and version for v1.14.1rc1
### Refactoring
- Replace the static blocklist with dynamic BaseTool field exclusion in spec generation
- Replace regex with tomlkit in the devtools CLI
- Use the shared PRINTER object
- Make BaseProvider a base model with a provider-type discriminator
## Contributors
@greysonlalonde, @iris-clawd, @joaomdmoura, @lorenzejay
</Update>
<Update label="April 9, 2026">
## v1.14.1rc1
[View the release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.1rc1)
## What Changed
### Features
- Add an async checkpoint TUI browser
- Add aclose()/close() and an async context manager for streaming output
### Bug Fixes
- Fix pyproject.toml version bumping with regular expressions
- Sanitize tool names in hook decorator filters
- Bump transformers to 5.5.0 to resolve CVE-2026-1839
- Register checkpoint handlers when creating CheckpointConfig
### Refactoring
- Replace the static blocklist with dynamic BaseTool field exclusion in spec generation
- Replace regular expressions with tomlkit in the devtools CLI
- Use the shared PRINTER object
- Make BaseProvider a base model with a provider-type discriminator
- Remove the FilteredStream stdout/stderr wrapper
- Remove the unused flow/config.py
### Documentation
- Update changelog and version for v1.14.0
## Contributors
@greysonlalonde, @iris-clawd, @joaomdmoura
</Update>
<Update label="April 7, 2026">
## v1.14.0
[View the release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.0)
## What Changed
### Features
- Add CLI commands for checkpoint list/info
- Add guardrail_type and name to distinguish traces
- Add SqliteProvider for checkpoint storage
- Add CheckpointConfig for automatic checkpointing
- Implement runtime state checkpointing, the event system, and the executor refactor
### Bug Fixes
- Add SSRF and path traversal protection
- Add path and URL validation for RAG tools
- Exclude embedding vectors from memory serialization to save tokens
- Ensure the output directory exists before writing in the flow template
- Bump litellm to >=1.83.0 to address CVE-2026-35030
- Remove the SEO indexing field that caused the Arabic page to render incorrectly
### Documentation
- Update changelog and version for v1.14.0
- Update quickstart and installation guides for improved clarity
- Add a storage providers section, export JsonProvider
- Add the AMP training flag guide
### Refactoring
- Clean up the checkpoint API
- Remove CodeInterpreterTool and deprecate code execution parameters
## Contributors
@alex-clawd, @github-actions[bot], @greysonlalonde, @iris-clawd, @joaomdmoura, @lorenzejay, @lucasgomide
</Update>
<Update label="April 7, 2026">
## v1.14.0a4
[View the release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.0a4)
## What Changed
### Features
- Add guardrail_type and name to distinguish traces
- Add SqliteProvider for checkpoint storage
- Add CheckpointConfig for automatic checkpointing
- Implement runtime state checkpointing, the event system, and the executor refactor
### Bug Fixes
- Exclude embedding vectors from memory serialization to save tokens
- Bump litellm to >=1.83.0 to address CVE-2026-35030
### Documentation
- Update quickstart and installation guides for improved clarity
- Add a storage providers section and export JsonProvider
### Performance
- Use JSONB for the checkpoint data column
### Refactoring
- Remove CodeInterpreterTool and deprecate code execution parameters
## Contributors
@alex-clawd, @github-actions[bot], @greysonlalonde, @joaomdmoura, @lorenzejay, @lucasgomide
</Update>
<Update label="April 6, 2026">
## v1.14.0a3
[View the release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.0a3)
## What Changed
### Documentation
- Update changelog and version for v1.14.0a2
## Contributors
@joaomdmoura
</Update>
<Update label="April 6, 2026">
## v1.14.0a2
View File

@@ -250,16 +250,12 @@ analysis_agent = Agent(
#### Code Execution
- `allow_code_execution`: must be True to run code
- `code_execution_mode`:
- `"safe"`: uses Docker (recommended for production)
- `"unsafe"`: direct execution (use only in trusted environments)
<Warning>
`allow_code_execution` and `code_execution_mode` are deprecated. `CodeInterpreterTool` has been removed from `crewai-tools`. Use a dedicated sandbox service such as [E2B](https://e2b.dev) or [Modal](https://modal.com) to execute code safely.
</Warning>
<Note>
This runs a default Docker image. If you want to configure the Docker image,
see the Code Interpreter tool in the Tools section. Add the Code
Interpreter tool as a tool in the agent's tools parameter.
</Note>
- `allow_code_execution` _(deprecated)_: used to enable built-in code execution via `CodeInterpreterTool`.
- `code_execution_mode` _(deprecated)_: used to control the execution mode (`"safe"` for Docker, `"unsafe"` for direct execution).
#### Advanced Features
@@ -332,9 +328,9 @@ print(result.raw)
### Security and Code Execution
- When using `allow_code_execution`, be careful with user input and always validate it
- Use `code_execution_mode: "safe"` (Docker) in production environments
- Consider setting appropriate `max_execution_time` limits to prevent infinite loops
<Warning>
`allow_code_execution` and `code_execution_mode` are deprecated, and `CodeInterpreterTool` has been removed. Use a dedicated sandbox service such as [E2B](https://e2b.dev) or [Modal](https://modal.com) to execute code safely.
</Warning>
### Performance Optimization

View File

@@ -0,0 +1,229 @@
---
title: Checkpointing
description: Automatically save execution state so crews, flows, and agents can resume after a failure.
icon: floppy-disk
mode: "wide"
---
<Warning>
Checkpointing is an early release. APIs may change in future releases.
</Warning>
## Overview
Checkpointing automatically saves execution state while running. If a crew, flow, or agent fails during execution, you can restore from the last checkpoint and resume without re-executing completed work.
## Quickstart
```python
from crewai import Crew, CheckpointConfig
crew = Crew(
    agents=[...],
    tasks=[...],
    checkpoint=True,  # uses the defaults: ./.checkpoints, on task_completed
)
result = crew.kickoff()
```
Checkpoint files are written to `./.checkpoints/` after each task completes.
## Configuration
Use `CheckpointConfig` for full control:
```python
from crewai import Crew, CheckpointConfig
crew = Crew(
    agents=[...],
    tasks=[...],
    checkpoint=CheckpointConfig(
        location="./my_checkpoints",
        on_events=["task_completed", "crew_kickoff_completed"],
        max_checkpoints=5,
    ),
)
```
### CheckpointConfig fields
| Field | Type | Default | Description |
|:------|:-----|:--------|:------------|
| `location` | `str` | `"./.checkpoints"` | Path for checkpoint files |
| `on_events` | `list[str]` | `["task_completed"]` | Event types that trigger a checkpoint |
| `provider` | `BaseProvider` | `JsonProvider()` | Storage backend |
| `max_checkpoints` | `int \| None` | `None` | Maximum number of files; the oldest are deleted first |
### Inheritance and opting out
The `checkpoint` field on Crew, Flow, and Agent accepts `CheckpointConfig`, `True`, `False`, or `None`:
| Value | Behavior |
|:------|:---------|
| `None` (default) | Inherits from the parent. An agent inherits the crew's settings. |
| `True` | Enabled with default settings. |
| `False` | Explicit opt-out. Stops inheritance from the parent. |
| `CheckpointConfig(...)` | Custom settings. |
```python
crew = Crew(
    agents=[
        Agent(role="Researcher", ...),  # inherits checkpoint from the crew
        Agent(role="Writer", ..., checkpoint=False),  # opted out, no checkpoints
    ],
    tasks=[...],
    checkpoint=True,
)
```
## Resuming from a checkpoint
```python
# Restore and resume
crew = Crew.from_checkpoint("./my_checkpoints/20260407T120000_abc123.json")
result = crew.kickoff()  # resumes from the last completed task
```
A restored crew skips completed tasks and resumes from the first incomplete task.
## Works on Crew, Flow, and Agent
### Crew
```python
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task, review_task],
    checkpoint=CheckpointConfig(location="./crew_cp"),
)
```
Default trigger: `task_completed` (one checkpoint per completed task).
### Flow
```python
from crewai.flow.flow import Flow, start, listen
from crewai import CheckpointConfig
class MyFlow(Flow):
    @start()
    def step_one(self):
        return "data"
    @listen(step_one)
    def step_two(self, data):
        return process(data)
flow = MyFlow(
    checkpoint=CheckpointConfig(
        location="./flow_cp",
        on_events=["method_execution_finished"],
    ),
)
result = flow.kickoff()
# Resume
flow = MyFlow.from_checkpoint("./flow_cp/20260407T120000_abc123.json")
result = flow.kickoff()
```
### Agent
```python
agent = Agent(
    role="Researcher",
    goal="Research topics",
    backstory="Expert researcher",
    checkpoint=CheckpointConfig(
        location="./agent_cp",
        on_events=["lite_agent_execution_completed"],
    ),
)
result = agent.kickoff(messages=[{"role": "user", "content": "Research AI trends"}])
```
## Storage providers
CrewAI ships with storage providers for checkpoints.
### JsonProvider (default)
Writes each checkpoint as a separate JSON file.
```python
from crewai import Crew, CheckpointConfig
from crewai.state import JsonProvider
crew = Crew(
    agents=[...],
    tasks=[...],
    checkpoint=CheckpointConfig(
        location="./my_checkpoints",
        provider=JsonProvider(),
        max_checkpoints=5,
    ),
)
```
### SqliteProvider
Stores all checkpoints in a single SQLite database file.
```python
from crewai import Crew, CheckpointConfig
from crewai.state import SqliteProvider
crew = Crew(
    agents=[...],
    tasks=[...],
    checkpoint=CheckpointConfig(
        location="./.checkpoints.db",
        provider=SqliteProvider(),
    ),
)
```
## Event types
The `on_events` field accepts any combination of event-type strings. Common choices:
| Use case | Events |
|:---------|:-------|
| After every task (Crew) | `["task_completed"]` |
| After every flow method | `["method_execution_finished"]` |
| After agent execution | `["agent_execution_completed"]`, `["lite_agent_execution_completed"]` |
| Only when the crew completes | `["crew_kickoff_completed"]` |
| After every LLM call | `["llm_call_completed"]` |
| On everything | `["*"]` |
<Warning>
Using `["*"]` or high-frequency events such as `llm_call_completed` will write many checkpoint files and can hurt performance. Use `max_checkpoints` to cap disk usage.
</Warning>
## Manual checkpoints
For full control, register your own event handler and call `state.checkpoint()` directly:
```python
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.llm_events import LLMCallCompletedEvent
# Synchronous handler
@crewai_event_bus.on(LLMCallCompletedEvent)
def on_llm_done(source, event, state):
    path = state.checkpoint("./my_checkpoints")
    print(f"Checkpoint saved: {path}")
# Asynchronous handler
@crewai_event_bus.on(LLMCallCompletedEvent)
async def on_llm_done_async(source, event, state):
    path = await state.acheckpoint("./my_checkpoints")
    print(f"Checkpoint saved: {path}")
```
The `state` argument is the `RuntimeState` passed automatically by the event bus when a handler accepts 3 parameters. You can register handlers on any event type listed in the [Event Listeners](/ar/concepts/event-listener) documentation.
Checkpointing is best-effort: if a checkpoint write fails, the error is logged but execution continues uninterrupted.

View File

@@ -106,7 +106,7 @@ mode: "wide"
```
<Tip>
The first deployment typically takes 10-15 minutes to build the container images. Subsequent deployments are much faster.
The first deployment typically takes about one minute.
</Tip>
</Step>
@@ -188,7 +188,7 @@ crewai deploy remove <deployment_id>
1. Click the "Deploy" button to start the deployment process
2. You can monitor progress via the progress bar
3. The first deployment typically takes about 10-15 minutes; subsequent deployments will be faster
3. The first deployment typically takes about one minute
<Frame>
![Deployment progress](/images/enterprise/deploy-progress.png)

View File

@@ -204,8 +204,8 @@ python3 --version
## Next Steps
<CardGroup cols={2}>
<Card title="Build Your First Agent" icon="code" href="/ar/quickstart">
Follow the quickstart guide to create your first CrewAI agent and get hands-on experience.
<Card title="Quickstart: Flow + agent" icon="code" href="/ar/quickstart">
Follow the quickstart to create a Flow, run a single-agent crew, and produce a report.
</Card>
<Card
title="Join the Community"

View File

@@ -138,9 +138,9 @@ mode: "wide"
<Card
title="Quickstart"
icon="bolt"
href="en/quickstart"
href="/ar/quickstart"
>
Follow the quickstart guide to create your first CrewAI agent and get hands-on experience.
Create a Flow, run a single-agent crew, and generate a report end to end.
</Card>
<Card
title="Join the Community"

View File

@@ -325,6 +325,34 @@ asyncio.run(interactive_research())
- **User experience**: reduce perceived latency by showing incremental results
- **Live dashboards**: build monitoring interfaces that display crew execution status
## Cancellation and resource cleanup
`CrewStreamingOutput` supports graceful cancellation, so in-flight work stops promptly when the consumer disconnects.
### Async context manager
```python Code
streaming = await crew.akickoff(inputs={"topic": "AI"})
async with streaming:
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)
```
### Explicit cancellation
```python Code
streaming = await crew.akickoff(inputs={"topic": "AI"})
try:
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)
finally:
    await streaming.aclose()  # asynchronous
    # streaming.close()  # synchronous equivalent
```
After cancellation, both `streaming.is_cancelled` and `streaming.is_completed` are `True`. Both `aclose()` and `close()` are idempotent.
## Important notes
- Streaming automatically enables LLM streaming for all agents in the crew

View File

@@ -420,6 +420,34 @@ except Exception as e:
print("Streaming completed but flow encountered an error")
```
## Cancellation and resource cleanup
`FlowStreamingOutput` supports graceful cancellation, so in-flight work stops promptly when the consumer disconnects.
### Async context manager
```python Code
streaming = await flow.kickoff_async()
async with streaming:
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)
```
### Explicit cancellation
```python Code
streaming = await flow.kickoff_async()
try:
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)
finally:
    await streaming.aclose()  # asynchronous
    # streaming.close()  # synchronous equivalent
```
After cancellation, both `streaming.is_cancelled` and `streaming.is_completed` are `True`. Both `aclose()` and `close()` are idempotent.
## Important notes
- Streaming automatically enables LLM streaming for any crews used inside the flow

View File

@@ -1,6 +1,6 @@
---
title: Quickstart
description: Build your first AI agent with CrewAI in under 5 minutes.
description: Build your first CrewAI Flow in minutes: orchestration, state, and a single-agent crew that produces a real report.
icon: rocket
mode: "wide"
---
@@ -13,376 +13,266 @@ mode: "wide"
<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{width: "100%", height: "400px"}}></iframe>
## Build your first CrewAI agent
In this guide you will create a **Flow** that picks a research topic, runs a **single-agent crew** (a researcher using web search), and ends with a **Markdown** report on disk. A Flow is the recommended way to structure production applications: it owns the **state** and the **execution order**, while **agents** do the work inside the crew step.
Let's create a simple crew to help us `research` and `report` on the `latest AI developments` for a given topic or domain.
If you have not finished installing CrewAI yet, follow the [installation guide](/ar/installation) first.
Before proceeding, make sure you have finished installing CrewAI.
If you have not installed it yet, you can do so by following the [installation guide](/ar/installation).
## Prerequisites
Follow the steps below to get started!
- A Python environment and the CrewAI CLI (see [Installation](/ar/installation))
- An LLM configured with the right keys (see [LLMs](/ar/concepts/llms#setting-up-your-llm))
- An API key from [Serper.dev](https://serper.dev/) (`SERPER_API_KEY`) for the web search in this tutorial
## Build your first Flow
<Steps>
<Step title="Create your crew">
Create a new crew project by running the following command in your terminal.
This creates a new folder named `latest-ai-development` with the basic structure for your crew.
<Step title="Create a Flow project">
From your terminal, create a Flow project (the folder name uses underscores, e.g. `latest_ai_flow`):
<CodeGroup>
```shell Terminal
crewai create crew latest-ai-development
crewai create flow latest-ai-flow
cd latest_ai_flow
```
</CodeGroup>
This creates a Flow app under `src/latest_ai_flow/`, including a starter crew in `crews/content_crew/` that you will replace with a **single-agent** research crew in the following steps.
</Step>
<Step title="Navigate to your new crew project">
<CodeGroup>
```shell Terminal
cd latest_ai_development
```
</CodeGroup>
</Step>
<Step title="Modify your `agents.yaml` file">
<Tip>
You can also modify the agents as needed to fit your use case, or copy and paste them as-is into your project.
Any interpolated variable in your `agents.yaml` and `tasks.yaml` files, like `{topic}`, will be replaced by the value of the variable in the `main.py` file.
</Tip>
<Step title="Configure a single agent in `agents.yaml`">
Replace the contents of `src/latest_ai_flow/crews/content_crew/config/agents.yaml` with a single researcher. Variables like `{topic}` are filled from `crew.kickoff(inputs=...)`.
```yaml agents.yaml
# src/latest_ai_development/config/agents.yaml
# src/latest_ai_flow/crews/content_crew/config/agents.yaml
researcher:
role: >
{topic} Senior Data Researcher
{topic} Senior Data Researcher
goal: >
Uncover cutting-edge developments in {topic}
Uncover the latest developments in {topic}
backstory: >
You're a seasoned researcher with a knack for uncovering the latest
developments in {topic}. Known for your ability to find the most relevant
information and present it in a clear and concise manner.
reporting_analyst:
role: >
{topic} Reporting Analyst
goal: >
Create detailed reports based on {topic} data analysis and research findings
backstory: >
You're a meticulous analyst with a keen eye for detail. You're known for
your ability to turn complex data into clear and concise reports, making
it easy for others to understand and act on the information you provide.
You are a seasoned researcher who uncovers the latest developments in {topic}.
You find the most relevant information and present it clearly.
```
</Step>
<Step title="Modify your `tasks.yaml` file">
<Step title="Configure a single task in `tasks.yaml`">
```yaml tasks.yaml
# src/latest_ai_development/config/tasks.yaml
# src/latest_ai_flow/crews/content_crew/config/tasks.yaml
research_task:
description: >
Conduct a thorough research about {topic}
Make sure you find any interesting and relevant information given
the current year is 2025.
Conduct in-depth research about {topic}. Use web search to find recent,
reliable information. The current year is 2026.
expected_output: >
A list with 10 bullet points of the most relevant information about {topic}
A Markdown report with clear sections: key trends, notable tools or companies,
and implications. Roughly 800 to 1200 words. Do not wrap the entire document in code blocks.
agent: researcher
reporting_task:
description: >
Review the context you got and expand each topic into a full section for a report.
Make sure the report is detailed and contains any and all relevant information.
expected_output: >
A fully fledge reports with the mains topics, each with a full section of information.
Formatted as markdown without '```'
agent: reporting_analyst
output_file: report.md
output_file: output/report.md
```
</Step>
<Step title="اربط صف الطاقم (`content_crew.py`)">
اجعل الطاقم المُولَّد يشير إلى YAML وأرفق `SerperDevTool` بالباحث.
```python content_crew.py
# src/latest_ai_flow/crews/content_crew/content_crew.py
from typing import List

from crewai import Agent, Crew, Process, Task
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool


@CrewBase
class ResearchCrew:
    """طاقم بحث بوكيل واحد داخل Flow."""

    agents: List[BaseAgent]
    tasks: List[Task]
    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config["researcher"],  # type: ignore[index]
            verbose=True,
            tools=[SerperDevTool()],
        )

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config["research_task"],  # type: ignore[index]
        )

    @crew
    def crew(self) -> Crew:
        """ينشئ طاقم البحث."""
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
        )
```
</Step>
<Step title="عرّف Flow في `main.py`">
اربط الطاقم بـ Flow: خطوة `@start()` تضبط الموضوع في **الحالة**، وخطوة `@listen` تشغّل الطاقم. يظل `output_file` للمهمة يكتب `output/report.md`.
```python main.py
#!/usr/bin/env python
# src/latest_ai_flow/main.py
from pydantic import BaseModel

from crewai.flow import Flow, listen, start
from latest_ai_flow.crews.content_crew.content_crew import ResearchCrew


class ResearchFlowState(BaseModel):
    topic: str = ""
    report: str = ""


class LatestAiFlow(Flow[ResearchFlowState]):
    @start()
    def prepare_topic(self, crewai_trigger_payload: dict | None = None):
        if crewai_trigger_payload:
            self.state.topic = crewai_trigger_payload.get("topic", "AI Agents")
        else:
            self.state.topic = "AI Agents"
        print(f"الموضوع: {self.state.topic}")

    @listen(prepare_topic)
    def run_research(self):
        result = ResearchCrew().crew().kickoff(inputs={"topic": self.state.topic})
        self.state.report = result.raw
        print("اكتمل طاقم البحث.")

    @listen(run_research)
    def summarize(self):
        print("مسار التقرير: output/report.md")


def kickoff():
    LatestAiFlow().kickoff()


def plot():
    LatestAiFlow().plot()


if __name__ == "__main__":
    kickoff()
```
<Tip>
إذا كان اسم الحزمة ليس `latest_ai_flow`، عدّل استيراد `ResearchCrew` ليطابق مسار الوحدة في مشروعك.
</Tip>
</Step>
<Step title="متغيرات البيئة">
في جذر المشروع، اضبط `.env`:
- `SERPER_API_KEY` — من [Serper.dev](https://serper.dev/)
- مفاتيح مزوّد النموذج حسب الحاجة — راجع [إعداد LLM](/ar/concepts/llms#setting-up-your-llm)
</Step>
<Step title="التثبيت والتشغيل">
<CodeGroup>
```shell Terminal
crewai install
crewai run
```
</CodeGroup>
يُنفّذ `crewai run` نقطة دخول Flow المعرّفة في المشروع (نفس أمر الطواقم؛ نوع المشروع `"flow"` في `pyproject.toml`).
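على سبيل المثال، مشاريع Flow المُولَّدة بالقالب تتضمن عادةً مقطعًا كهذا في `pyproject.toml` (المقطع توضيحي وقد يختلف حسب إصدار القالب):

```toml
[tool.crewai]
type = "flow"
```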
</Step>
<Step title="تحقق من المخرجات">
يجب أن ترى سجلات من Flow والطاقم. افتح **`output/report.md`** للتقرير المُولَّد (مقتطف):
<CodeGroup>
```markdown output/report.md
# وكلاء الذكاء الاصطناعي في 2026: المشهد والاتجاهات

## ملخص تنفيذي

## أبرز الاتجاهات
- **استخدام الأدوات والتنسيق** — …
- **التبني المؤسسي** — …

## الآثار
```
</CodeGroup>
سيكون الملف الفعلي أطول ويعكس نتائج بحث مباشرة.
</Step>
</Steps>
## كيف يترابط هذا
1. **Flow** — يشغّل `LatestAiFlow` أولًا `prepare_topic` ثم `run_research` ثم `summarize`. الحالة (`topic`، `report`) على Flow.
2. **الطاقم** — يشغّل `ResearchCrew` مهمة واحدة بوكيل واحد: الباحث يستخدم **Serper** للبحث على الويب ثم يكتب التقرير.
3. **المُخرَج** — يكتب `output_file` للمهمة التقرير في `output/report.md`.
للتعمق في أنماط Flow (التوجيه، الاستمرارية، الإنسان في الحلقة)، راجع [ابنِ أول Flow](/ar/guides/flows/first-flow) و[Flows](/ar/concepts/flows). للطواقم دون Flow، راجع [Crews](/ar/concepts/crews). لوكيل `Agent` واحد و`kickoff()` بلا مهام، راجع [Agents](/ar/concepts/agents#direct-agent-interaction-with-kickoff).
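يمكن تمثيل هذا التسلسل (خطوة `@start()` ثم خطوات `@listen` المتتابعة) برسم مبسّط بلغة Python؛ هذا توضيح للفكرة فقط ولا يمثل تنفيذ CrewAI الداخلي:

```python
# رسم توضيحي مبسّط: كل خطوة تقرأ الحالة وتكتب فيها ثم تمرّرها للتالية
def prepare_topic(state: dict) -> dict:
    state["topic"] = "AI Agents"
    return state

def run_research(state: dict) -> dict:
    # في المشروع الفعلي: هنا يُستدعى ResearchCrew().crew().kickoff(...)
    state["report"] = f"تقرير عن {state['topic']}"
    return state

def summarize(state: dict) -> str:
    return "مسار التقرير: output/report.md"

state: dict = {}
for step in (prepare_topic, run_research):
    state = step(state)
print(summarize(state))
```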
<Check>
تهانينا!
أصبح لديك Flow كامل مع طاقم وكيل وتقرير محفوظ — قاعدة قوية لإضافة خطوات أو طواقم أو أدوات.
</Check>
### اتساق التسمية
يجب أن تطابق مفاتيح YAML (`researcher`، `research_task`) أسماء الدوال في صف `@CrewBase`. راجع [Crews](/ar/concepts/crews) لنمط الديكورات الكامل.
## النشر
ادفع Flow إلى **[CrewAI AMP](https://app.crewai.com)** بعد أن يعمل محليًا ويكون المشروع في مستودع **GitHub**. من جذر المشروع:
<CodeGroup>
```bash المصادقة
crewai login
```
```bash إنشاء نشر
crewai deploy create
```
```bash الحالة والسجلات
crewai deploy status
crewai deploy logs
```
```bash إرسال التحديثات بعد تغيير الكود
crewai deploy push
```
```bash عرض النشرات أو حذفها
crewai deploy list
crewai deploy remove <deployment_id>
```
</CodeGroup>
<iframe
className="w-full aspect-video rounded-xl"
src="https://www.youtube.com/embed/3EqSV-CYDZA"
title="CrewAI Deployment Guide"
frameBorder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowFullScreen
></iframe>
<Tip>
غالبًا ما يستغرق **النشر الأول حوالي دقيقة**. المتطلبات الكاملة ومسار الواجهة الويب في [النشر على AMP](/ar/enterprise/guides/deploy-to-amp).
</Tip>
<CardGroup cols={2}>
<Card title="دليل النشر" icon="book" href="/ar/enterprise/guides/deploy-to-amp">
النشر على AMP خطوة بخطوة (CLI ولوحة التحكم).
</Card>
<Card
title="المجتمع"
icon="comments"
href="https://community.crewai.com"
>
ناقش الأفكار وشارك مشاريعك وتواصل مع مطوري CrewAI.
</Card>
</CardGroup>
docs/ar/skills.mdx
@@ -0,0 +1,50 @@
---
title: Skills
description: ثبّت crewaiinc/skills من السجل الرسمي على skills.sh—Flows وCrews ووكلاء مرتبطون بالوثائق لـ Claude Code وCursor وCodex وغيرها.
icon: wand-magic-sparkles
mode: "wide"
---
# Skills
**امنح وكيل البرمجة سياق CrewAI في أمر واحد.**
تُنشر **Skills** الخاصة بـ CrewAI على **[skills.sh/crewaiinc/skills](https://skills.sh/crewaiinc/skills)**—السجل الرسمي لـ `crewaiinc/skills`، بما في ذلك كل مهارة (مثل **design-agent** و**getting-started** و**design-task** و**ask-docs**) وإحصاءات التثبيت والتدقيقات. تعلّم وكلاء البرمجة—مثل Claude Code وCursor وCodex—هيكلة Flows وضبط Crews واستخدام الأدوات واتباع أنماط CrewAI. نفّذ الأمر أدناه (أو الصقه في الوكيل).
```shell Terminal
npx skills add crewaiinc/skills
```
يضيف ذلك حزمة المهارات إلى سير عمل الوكيل لتطبيق اتفاقيات CrewAI دون إعادة شرح الإطار في كل جلسة. المصدر والقضايا على [GitHub](https://github.com/crewAIInc/skills).
## ما يحصل عليه الوكيل
- **Flows** — تطبيقات ذات حالة وخطوات وkickoffs للـ crew على نمط CrewAI
- **Crews والوكلاء** — أنماط YAML أولاً، أدوار، مهام، وتفويض
- **الأدوات والتكاملات** — ربط الوكلاء بالبحث وواجهات API وأدوات CrewAI الشائعة
- **هيكل المشروع** — مواءمة مع قوالب CLI واتفاقيات المستودع
- **أنماط محدثة** — تتبع المهارات وثائق CrewAI والممارسات الموصى بها
## تعرّف أكثر على هذا الموقع
<CardGroup cols={2}>
<Card title="أدوات البرمجة و AGENTS.md" icon="terminal" href="/ar/guides/coding-tools/agents-md">
استخدام `AGENTS.md` وسير عمل وكلاء البرمجة مع CrewAI.
</Card>
<Card title="البداية السريعة" icon="rocket" href="/ar/quickstart">
ابنِ أول Flow وcrew من البداية للنهاية.
</Card>
<Card title="التثبيت" icon="download" href="/ar/installation">
ثبّت CrewAI CLI وحزمة Python.
</Card>
<Card title="سجل Skills (skills.sh)" icon="globe" href="https://skills.sh/crewaiinc/skills">
القائمة الرسمية لـ `crewaiinc/skills`—المهارات والتثبيتات والتدقيقات.
</Card>
<Card title="المصدر على GitHub" icon="code-branch" href="https://github.com/crewAIInc/skills">
مصدر الحزمة والتحديثات والقضايا.
</Card>
</CardGroup>
### فيديو: CrewAI مع مهارات وكلاء البرمجة
<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameBorder="0" allowFullScreen style={{ width: "100%", height: "400px" }} />
@@ -7,6 +7,10 @@ mode: "wide"
# `CodeInterpreterTool`
<Warning>
**مهجور:** تمت إزالة `CodeInterpreterTool` من `crewai-tools`. كما أن معاملَي `allow_code_execution` و`code_execution_mode` على `Agent` أصبحا مهجورَين. استخدم خدمة بيئة معزولة مخصصة — [E2B](https://e2b.dev) أو [Modal](https://modal.com) — لتنفيذ الكود بشكل آمن ومعزول.
</Warning>
## الوصف
تمكّن `CodeInterpreterTool` وكلاء CrewAI من تنفيذ كود Python 3 الذي يولّدونه بشكل مستقل. هذه الوظيفة ذات قيمة خاصة لأنها تتيح للوكلاء إنشاء الكود وتنفيذه والحصول على النتائج واستخدام تلك المعلومات لاتخاذ القرارات والإجراءات اللاحقة.
@@ -11,7 +11,7 @@ mode: "wide"
يتيح ذلك سير عمل متعددة مثل أن يقوم وكيل بالوصول إلى قاعدة البيانات واسترجاع المعلومات بناءً على الهدف ثم استخدام تلك المعلومات لتوليد استجابة أو تقرير أو أي مخرجات أخرى. بالإضافة إلى ذلك، يوفر القدرة للوكيل على تحديث قاعدة البيانات بناءً على هدفه.
**تنبيه**: الأداة للقراءة فقط بشكل افتراضي (SELECT/SHOW/DESCRIBE/EXPLAIN فقط). تتطلب عمليات الكتابة تمرير `allow_dml=True` أو ضبط متغير البيئة `CREWAI_NL2SQL_ALLOW_DML=true`. عند تفعيل الكتابة، تأكد من أن الوكيل يستخدم مستخدم قاعدة بيانات محدود الصلاحيات أو نسخة قراءة كلما أمكن.
## نموذج الأمان
@@ -36,6 +36,74 @@ mode: "wide"
- أضف خطافات `before_tool_call` لفرض أنماط الاستعلام المسموح بها
- فعّل تسجيل الاستعلامات والتنبيهات للعبارات التدميرية
## وضع القراءة فقط وتهيئة DML
تعمل `NL2SQLTool` في **وضع القراءة فقط بشكل افتراضي**. لا يُسمح إلا بأنواع العبارات التالية دون تهيئة إضافية:
- `SELECT`
- `SHOW`
- `DESCRIBE`
- `EXPLAIN`
أي محاولة لتنفيذ عملية كتابة (`INSERT`، `UPDATE`، `DELETE`، `DROP`، `CREATE`، `ALTER`، `TRUNCATE`، إلخ) ستُسبب خطأً ما لم يتم تفعيل DML صراحةً.
كما تُحظر الاستعلامات متعددة العبارات التي تحتوي على فاصلة منقوطة (مثل `SELECT 1; DROP TABLE users`) في وضع القراءة فقط لمنع هجمات الحقن.
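لتوضيح الفكرة فقط، هذا رسم مبسّط لمنطق التحقق من نوع العبارة ورفض العبارات المتعددة، وليس التنفيذ الفعلي داخل الأداة:

```python
# رسم توضيحي: السماح بعبارات القراءة فقط ورفض الاستعلامات متعددة العبارات
READ_ONLY_PREFIXES = ("SELECT", "SHOW", "DESCRIBE", "EXPLAIN")

def is_read_only(query: str) -> bool:
    stripped = query.strip().rstrip(";")
    if ";" in stripped:  # رفض الاستعلامات متعددة العبارات
        return False
    return stripped.upper().startswith(READ_ONLY_PREFIXES)

print(is_read_only("SELECT * FROM users"))         # True
print(is_read_only("SELECT 1; DROP TABLE users"))  # False
```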
### تفعيل عمليات الكتابة
يمكنك تفعيل DML (لغة معالجة البيانات) بطريقتين:
**الخيار الأول — معامل المُنشئ:**
```python
from crewai_tools import NL2SQLTool
nl2sql = NL2SQLTool(
    db_uri="postgresql://example@localhost:5432/test_db",
    allow_dml=True,
)
```
**الخيار الثاني — متغير البيئة:**
```bash
CREWAI_NL2SQL_ALLOW_DML=true
```
```python
from crewai_tools import NL2SQLTool
# DML مفعّل عبر متغير البيئة
nl2sql = NL2SQLTool(db_uri="postgresql://example@localhost:5432/test_db")
```
### أمثلة الاستخدام
**القراءة فقط (الافتراضي) — آمن للتحليلات والتقارير:**
```python
from crewai_tools import NL2SQLTool
# يُسمح فقط بـ SELECT/SHOW/DESCRIBE/EXPLAIN
nl2sql = NL2SQLTool(db_uri="postgresql://example@localhost:5432/test_db")
```
**مع تفعيل DML — مطلوب لأعباء عمل الكتابة:**
```python
from crewai_tools import NL2SQLTool
# يُسمح بـ INSERT وUPDATE وDELETE وDROP وغيرها
nl2sql = NL2SQLTool(
    db_uri="postgresql://example@localhost:5432/test_db",
    allow_dml=True,
)
```
<Warning>
يمنح تفعيل DML للوكيل القدرة على تعديل البيانات أو حذفها. لا تفعّله إلا عندما يتطلب حالة الاستخدام صراحةً وصولاً للكتابة، وتأكد من أن بيانات اعتماد قاعدة البيانات محدودة بالحد الأدنى من الصلاحيات المطلوبة.
</Warning>
## المتطلبات
- SQLAlchemy
@@ -74,3 +74,19 @@ tool = CSVSearchTool(
}
)
```
## الأمان
### التحقق من صحة المسارات
يتم التحقق من مسارات الملفات المقدمة لهذه الأداة مقابل مجلد العمل الحالي. يتم رفض المسارات التي تحل خارج مجلد العمل وإطلاق `ValueError`.
للسماح بالمسارات خارج مجلد العمل (مثلاً في الاختبارات أو خطوط الأنابيب الموثوقة)، عيّن متغير البيئة التالي:
```shell
CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true
```
### التحقق من صحة الروابط
يتم التحقق من مدخلات الروابط: يتم حظر مخطط `file://` والطلبات التي تستهدف نطاقات IP الخاصة أو المحجوزة لمنع هجمات تزوير الطلبات من جانب الخادم (SSRF).
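وعلى سبيل التوضيح، قد يبدو منطق فحص كهذا بالشكل المبسّط التالي (رسم توضيحي وليس التنفيذ الفعلي):

```python
import ipaddress
from urllib.parse import urlparse

def is_url_allowed(url: str) -> bool:
    """يحظر file:// وعناوين IP الخاصة أو المحجوزة (رسم توضيحي)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False  # يحظر file:// وأي مخطط آخر
    host = parsed.hostname or ""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return True  # اسم نطاق؛ الفحص الكامل يتطلب التحقق بعد حل DNS أيضًا
    return not (ip.is_private or ip.is_loopback or ip.is_reserved)
```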
@@ -68,3 +68,15 @@ tool = DirectorySearchTool(
}
)
```
## الأمان
### التحقق من صحة المسارات
يتم التحقق من مسارات المجلدات المقدمة لهذه الأداة مقابل مجلد العمل الحالي. يتم رفض المسارات التي تحل خارج مجلد العمل وإطلاق `ValueError`.
للسماح بالمسارات خارج مجلد العمل (مثلاً في الاختبارات أو خطوط الأنابيب الموثوقة)، عيّن متغير البيئة التالي:
```shell
CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true
```
@@ -73,3 +73,19 @@ tool = JSONSearchTool(
}
)
```
## الأمان
### التحقق من صحة المسارات
يتم التحقق من مسارات الملفات المقدمة لهذه الأداة مقابل مجلد العمل الحالي. يتم رفض المسارات التي تحل خارج مجلد العمل وإطلاق `ValueError`.
للسماح بالمسارات خارج مجلد العمل (مثلاً في الاختبارات أو خطوط الأنابيب الموثوقة)، عيّن متغير البيئة التالي:
```shell
CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true
```
### التحقق من صحة الروابط
يتم التحقق من مدخلات الروابط: يتم حظر مخطط `file://` والطلبات التي تستهدف نطاقات IP الخاصة أو المحجوزة لمنع هجمات تزوير الطلبات من جانب الخادم (SSRF).
@@ -105,3 +105,19 @@ tool = PDFSearchTool(
}
)
```
## الأمان
### التحقق من صحة المسارات
يتم التحقق من مسارات الملفات المقدمة لهذه الأداة مقابل مجلد العمل الحالي. يتم رفض المسارات التي تحل خارج مجلد العمل وإطلاق `ValueError`.
للسماح بالمسارات خارج مجلد العمل (مثلاً في الاختبارات أو خطوط الأنابيب الموثوقة)، عيّن متغير البيئة التالي:
```shell
CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true
```
### التحقق من صحة الروابط
يتم التحقق من مدخلات الروابط: يتم حظر مخطط `file://` والطلبات التي تستهدف نطاقات IP الخاصة أو المحجوزة لمنع هجمات تزوير الطلبات من جانب الخادم (SSRF).
File diff suppressed because it is too large
@@ -4,6 +4,210 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Apr 10, 2026">
## v1.14.2a2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a2)
## What's Changed
### Features
- Add checkpoint TUI with tree view, fork support, and editable inputs/outputs
- Enrich LLM token tracking with reasoning tokens and cache creation tokens
- Add `from_checkpoint` parameter to kickoff methods
- Embed `crewai_version` in checkpoints with migration framework
- Add checkpoint forking with lineage tracking
### Bug Fixes
- Fix strict mode forwarding to Anthropic and Bedrock providers
- Harden NL2SQLTool with read-only default, query validation, and parameterized queries
### Documentation
- Update changelog and version for v1.14.2a1
## Contributors
@alex-clawd, @github-actions[bot], @greysonlalonde, @lucasgomide
</Update>
<Update label="Apr 09, 2026">
## v1.14.2a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a1)
## What's Changed
### Bug Fixes
- Fix emission of flow_finished event after HITL resume
- Fix cryptography version to 46.0.7 to address CVE-2026-39892
### Refactoring
- Refactor to use shared I18N_DEFAULT singleton
### Documentation
- Update changelog and version for v1.14.1
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 09, 2026">
## v1.14.1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.1)
## What's Changed
### Features
- Add async checkpoint TUI browser
- Add aclose()/close() and async context manager to streaming outputs
### Bug Fixes
- Fix regex for template pyproject.toml version bumps
- Sanitize tool names in hook decorator filters
- Fix checkpoint handlers registration when CheckpointConfig is created
- Bump transformers to 5.5.0 to resolve CVE-2026-1839
- Remove FilteredStream stdout/stderr wrapper
### Documentation
- Update changelog and version for v1.14.1rc1
### Refactoring
- Replace hardcoded denylist with dynamic BaseTool field exclusion in spec gen
- Replace regex with tomlkit in devtools CLI
- Use shared PRINTER singleton
- Make BaseProvider a BaseModel with provider_type discriminator
## Contributors
@greysonlalonde, @iris-clawd, @joaomdmoura, @lorenzejay
</Update>
<Update label="Apr 09, 2026">
## v1.14.1rc1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.1rc1)
## What's Changed
### Features
- Add async checkpoint TUI browser
- Add aclose()/close() and async context manager to streaming outputs
### Bug Fixes
- Fix template pyproject.toml version bumps using regex
- Sanitize tool names in hook decorator filters
- Bump transformers to 5.5.0 to resolve CVE-2026-1839
- Register checkpoint handlers when CheckpointConfig is created
### Refactoring
- Replace hardcoded denylist with dynamic BaseTool field exclusion in spec gen
- Replace regex with tomlkit in devtools CLI
- Use shared PRINTER singleton
- Make BaseProvider a BaseModel with provider_type discriminator
- Remove FilteredStream stdout/stderr wrapper
- Remove unused flow/config.py
### Documentation
- Update changelog and version for v1.14.0
## Contributors
@greysonlalonde, @iris-clawd, @joaomdmoura
</Update>
<Update label="Apr 07, 2026">
## v1.14.0
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.0)
## What's Changed
### Features
- Add checkpoint list/info CLI commands
- Add guardrail_type and name to distinguish traces
- Add SqliteProvider for checkpoint storage
- Add CheckpointConfig for automatic checkpointing
- Implement runtime state checkpointing, event system, and executor refactor
### Bug Fixes
- Add SSRF and path traversal protections
- Add path and URL validation to RAG tools
- Exclude embedding vectors from memory serialization to save tokens
- Ensure output directory exists before writing in flow template
- Bump litellm to >=1.83.0 to address CVE-2026-35030
- Remove SEO indexing field causing Arabic page rendering
### Documentation
- Update changelog and version for v1.14.0
- Update quickstart and installation guides for improved clarity
- Add storage providers section, export JsonProvider
- Add AMP Training Tab guide
### Refactoring
- Clean up checkpoint API
- Remove CodeInterpreterTool and deprecate code execution parameters
## Contributors
@alex-clawd, @github-actions[bot], @greysonlalonde, @iris-clawd, @joaomdmoura, @lorenzejay, @lucasgomide
</Update>
<Update label="Apr 07, 2026">
## v1.14.0a4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.0a4)
## What's Changed
### Features
- Add guardrail_type and name to distinguish traces
- Add SqliteProvider for checkpoint storage
- Add CheckpointConfig for automatic checkpointing
- Implement runtime state checkpointing, event system, and executor refactor
### Bug Fixes
- Exclude embedding vectors from memory serialization to save tokens
- Bump litellm to >=1.83.0 to address CVE-2026-35030
### Documentation
- Update quickstart and installation guides for improved clarity
- Add storage providers section and export JsonProvider
### Performance
- Use JSONB for checkpoint data column
### Refactoring
- Remove CodeInterpreterTool and deprecate code execution params
## Contributors
@alex-clawd, @github-actions[bot], @greysonlalonde, @joaomdmoura, @lorenzejay, @lucasgomide
</Update>
<Update label="Apr 06, 2026">
## v1.14.0a3
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.0a3)
## What's Changed
### Documentation
- Update changelog and version for v1.14.0a2
## Contributors
@joaomdmoura
</Update>
<Update label="Apr 06, 2026">
## v1.14.0a2
@@ -308,16 +308,12 @@ multimodal_agent = Agent(
#### Code Execution
<Warning>
`allow_code_execution` and `code_execution_mode` are deprecated. `CodeInterpreterTool` has been removed from `crewai-tools`. Use a dedicated sandbox service such as [E2B](https://e2b.dev) or [Modal](https://modal.com) for secure code execution.
</Warning>
- `allow_code_execution` _(deprecated)_: Previously enabled built-in code execution via `CodeInterpreterTool`.
- `code_execution_mode` _(deprecated)_: Previously controlled execution mode (`"safe"` for Docker, `"unsafe"` for direct execution).
#### Advanced Features
@@ -667,9 +663,9 @@ asyncio.run(main())
### Security and Code Execution
<Warning>
`allow_code_execution` and `code_execution_mode` are deprecated and `CodeInterpreterTool` has been removed. Use a dedicated sandbox service such as [E2B](https://e2b.dev) or [Modal](https://modal.com) for secure code execution.
</Warning>
### Performance Optimization
@@ -0,0 +1,305 @@
---
title: Checkpointing
description: Automatically save execution state so crews, flows, and agents can resume after failures.
icon: floppy-disk
mode: "wide"
---
<Warning>
Checkpointing is in early release. APIs may change in future versions.
</Warning>
## Overview
Checkpointing automatically saves execution state during a run. If a crew, flow, or agent fails mid-execution, you can restore from the last checkpoint and resume without re-running completed work.
## Quick Start
```python
from crewai import Crew, CheckpointConfig
crew = Crew(
agents=[...],
tasks=[...],
checkpoint=True, # uses defaults: ./.checkpoints, on task_completed
)
result = crew.kickoff()
```
Checkpoint files are written to `./.checkpoints/` after each completed task.
## Configuration
Use `CheckpointConfig` for full control:
```python
from crewai import Crew, CheckpointConfig
crew = Crew(
agents=[...],
tasks=[...],
checkpoint=CheckpointConfig(
location="./my_checkpoints",
on_events=["task_completed", "crew_kickoff_completed"],
max_checkpoints=5,
),
)
```
### CheckpointConfig Fields
| Field | Type | Default | Description |
|:------|:-----|:--------|:------------|
| `location` | `str` | `"./.checkpoints"` | Storage destination — a directory for `JsonProvider`, a database file path for `SqliteProvider` |
| `on_events` | `list[str]` | `["task_completed"]` | Event types that trigger a checkpoint |
| `provider` | `BaseProvider` | `JsonProvider()` | Storage backend |
| `max_checkpoints` | `int \| None` | `None` | Max checkpoints to keep. Oldest are pruned after each write. Pruning is handled by the provider. |
| `restore_from` | `Path \| str \| None` | `None` | Path to a checkpoint to restore from. Used when passing config via a kickoff method's `from_checkpoint` parameter. |
### Inheritance and Opt-Out
The `checkpoint` field on Crew, Flow, and Agent accepts `CheckpointConfig`, `True`, `False`, or `None`:
| Value | Behavior |
|:------|:---------|
| `None` (default) | Inherit from parent. An agent inherits its crew's config. |
| `True` | Enable with defaults. |
| `False` | Explicit opt-out. Stops inheritance from parent. |
| `CheckpointConfig(...)` | Custom configuration. |
```python
crew = Crew(
agents=[
Agent(role="Researcher", ...), # inherits crew's checkpoint
Agent(role="Writer", ..., checkpoint=False), # opted out, no checkpoints
],
tasks=[...],
checkpoint=True,
)
```
## Resuming from a Checkpoint
Pass a `CheckpointConfig` with `restore_from` to any kickoff method. The crew restores from that checkpoint, skips completed tasks, and resumes.
```python
from crewai import Crew, CheckpointConfig
crew = Crew(agents=[...], tasks=[...])
result = crew.kickoff(
from_checkpoint=CheckpointConfig(
restore_from="./my_checkpoints/20260407T120000_abc123.json",
),
)
```
Remaining `CheckpointConfig` fields apply to the new run, so checkpointing continues after the restore.
You can also use the classmethod directly:
```python
config = CheckpointConfig(restore_from="./my_checkpoints/20260407T120000_abc123.json")
crew = Crew.from_checkpoint(config)
result = crew.kickoff()
```
## Forking from a Checkpoint
`fork()` restores a checkpoint and starts a new execution branch. Useful for exploring alternative paths from the same point.
```python
from crewai import Crew, CheckpointConfig
config = CheckpointConfig(restore_from="./my_checkpoints/20260407T120000_abc123.json")
crew = Crew.fork(config, branch="experiment-a")
result = crew.kickoff(inputs={"strategy": "aggressive"})
```
Each fork gets a unique lineage ID so checkpoints from different branches don't collide. The `branch` label is optional and auto-generated if omitted.
## Works on Crew, Flow, and Agent
### Crew
```python
crew = Crew(
agents=[researcher, writer],
tasks=[research_task, write_task, review_task],
checkpoint=CheckpointConfig(location="./crew_cp"),
)
```
Default trigger: `task_completed` (one checkpoint per finished task).
### Flow
```python
from crewai.flow.flow import Flow, start, listen
from crewai import CheckpointConfig
class MyFlow(Flow):
@start()
def step_one(self):
return "data"
@listen(step_one)
def step_two(self, data):
return process(data)
flow = MyFlow(
checkpoint=CheckpointConfig(
location="./flow_cp",
on_events=["method_execution_finished"],
),
)
result = flow.kickoff()
# Resume
config = CheckpointConfig(restore_from="./flow_cp/20260407T120000_abc123.json")
flow = MyFlow.from_checkpoint(config)
result = flow.kickoff()
```
### Agent
```python
agent = Agent(
role="Researcher",
goal="Research topics",
backstory="Expert researcher",
checkpoint=CheckpointConfig(
location="./agent_cp",
on_events=["lite_agent_execution_completed"],
),
)
result = agent.kickoff(messages=[{"role": "user", "content": "Research AI trends"}])
```
## Storage Providers
CrewAI ships with two checkpoint storage providers.
### JsonProvider (default)
Writes each checkpoint as a separate JSON file. Simple, human-readable, easy to inspect.
```python
from crewai import Crew, CheckpointConfig
from crewai.state import JsonProvider
crew = Crew(
agents=[...],
tasks=[...],
checkpoint=CheckpointConfig(
location="./my_checkpoints",
provider=JsonProvider(), # this is the default
max_checkpoints=5, # prunes oldest files
),
)
```
Files are named `<timestamp>_<uuid>.json` inside the location directory.
### SqliteProvider
Stores all checkpoints in a single SQLite database file. Better for high-frequency checkpointing and avoids many small files.
```python
from crewai import Crew, CheckpointConfig
from crewai.state import SqliteProvider
crew = Crew(
agents=[...],
tasks=[...],
checkpoint=CheckpointConfig(
location="./.checkpoints.db",
provider=SqliteProvider(),
max_checkpoints=50,
),
)
```
WAL journal mode is enabled for concurrent read access.
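Enabling WAL is a single PRAGMA on the connection. A minimal sketch of what the provider likely does (assumed, not CrewAI's actual code; the temp path is only for illustration):

```python
import sqlite3
import tempfile
from pathlib import Path

# Stand-in for the CheckpointConfig `location` (e.g. ./.checkpoints.db).
db = Path(tempfile.mkdtemp()) / "checkpoints.db"
conn = sqlite3.connect(db)
# WAL lets readers proceed concurrently while a writer appends checkpoints.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
assert mode == "wal"
conn.close()
```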
## Event Types
The `on_events` field accepts any combination of event type strings. Common choices:
| Use Case | Events |
|:---------|:-------|
| After each task (Crew) | `["task_completed"]` |
| After each flow method | `["method_execution_finished"]` |
| After agent execution | `["agent_execution_completed"]`, `["lite_agent_execution_completed"]` |
| On crew completion only | `["crew_kickoff_completed"]` |
| After every LLM call | `["llm_call_completed"]` |
| On everything | `["*"]` |
<Warning>
Using `["*"]` or high-frequency events like `llm_call_completed` will write many checkpoint files and may impact performance. Use `max_checkpoints` to limit disk usage.
</Warning>
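The pruning behavior of `max_checkpoints` can be sketched like this. `prune` is a hypothetical helper, not CrewAI's provider code; it relies on the sortable timestamp prefix of checkpoint file names:

```python
# Illustrative retention logic (assumed, not CrewAI internals): keep only the
# newest `max_checkpoints` JSON files, deleting the oldest first.
from pathlib import Path


def prune(directory: Path, max_checkpoints: int) -> list[Path]:
    """Delete checkpoint files beyond the retention limit; return what was removed."""
    files = sorted(directory.glob("*.json"))  # timestamp prefix => oldest first
    excess = files[:-max_checkpoints] if max_checkpoints else []
    for f in excess:
        f.unlink()
    return excess
```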
## Manual Checkpointing
For full control, register your own event handler and call `state.checkpoint()` directly:
```python
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.llm_events import LLMCallCompletedEvent
# Sync handler
@crewai_event_bus.on(LLMCallCompletedEvent)
def on_llm_done(source, event, state):
path = state.checkpoint("./my_checkpoints")
print(f"Saved checkpoint: {path}")
# Async handler
@crewai_event_bus.on(LLMCallCompletedEvent)
async def on_llm_done_async(source, event, state):
path = await state.acheckpoint("./my_checkpoints")
print(f"Saved checkpoint: {path}")
```
The `state` argument is the `RuntimeState` passed automatically by the event bus when your handler accepts 3 parameters. You can register handlers on any event type listed in the [Event Listeners](/en/concepts/event-listener) documentation.
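The arity-based dispatch can be illustrated with a tiny standalone dispatcher. This is a sketch of the convention only; `crewai_event_bus`'s real internals are assumed, and `dispatch` is a hypothetical function:

```python
# Illustrative only: call a handler with or without `state` based on how many
# parameters it declares, mirroring the event bus convention described above.
import inspect


def dispatch(handler, source, event, state):
    """Pass (source, event, state) to 3-parameter handlers, (source, event) otherwise."""
    if len(inspect.signature(handler).parameters) >= 3:
        return handler(source, event, state)
    return handler(source, event)


seen = []

def two_arg(source, event):
    seen.append((source, event))

def three_arg(source, event, state):
    seen.append((source, event, state))

dispatch(two_arg, "crew", "llm_done", "STATE")
dispatch(three_arg, "crew", "llm_done", "STATE")
assert seen == [("crew", "llm_done"), ("crew", "llm_done", "STATE")]
```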
Checkpointing is best-effort: if a checkpoint write fails, the error is logged but execution continues uninterrupted.
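The best-effort contract amounts to a log-and-continue wrapper around the write. A sketch under that assumption (`checkpoint_best_effort` and `FailingState` are hypothetical, not CrewAI code):

```python
# Illustrative best-effort write: failures are logged, never raised, so the
# run is not interrupted by a bad checkpoint location or a full disk.
import logging

logger = logging.getLogger("checkpointing")


def checkpoint_best_effort(state, location: str):
    """Try to write a checkpoint; on failure, log and return None."""
    try:
        return state.checkpoint(location)
    except Exception:
        logger.exception("Checkpoint write failed; continuing execution")
        return None


class FailingState:
    def checkpoint(self, location):
        raise OSError("disk full")


assert checkpoint_best_effort(FailingState(), "./my_checkpoints") is None
```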
## CLI
The `crewai checkpoint` command gives you a TUI for browsing, inspecting, resuming, and forking checkpoints. It auto-detects whether your checkpoints are JSON files or a SQLite database.
```bash
# Launch the TUI — auto-detects .checkpoints/ or .checkpoints.db
crewai checkpoint
# Point at a specific location
crewai checkpoint --location ./my_checkpoints
crewai checkpoint --location ./.checkpoints.db
```
<Frame>
<img src="/images/checkpointing.png" alt="Checkpoint TUI" />
</Frame>
The left panel is a tree view. Checkpoints are grouped by branch, and forks nest under the checkpoint they diverged from. Select a checkpoint to see its metadata, entity state, and task progress in the detail panel. Hit **Resume** to pick up where it left off, or **Fork** to start a new branch from that point.
### Editing inputs and task outputs
When a checkpoint is selected, the detail panel shows:
- **Inputs** — if the original kickoff had inputs (e.g. `{topic}`), they appear as editable fields pre-filled with the original values. Change them before resuming or forking.
- **Task outputs** — completed tasks show their output in editable text areas. Edit a task's output to change the context that downstream tasks receive. When you modify a task output and hit Fork, all subsequent tasks are invalidated and re-run with the new context.
This is useful for "what if" exploration — fork from a checkpoint, tweak a task's result, and see how it changes downstream behavior.
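The invalidation rule can be stated as a one-liner over the ordered task list. `tasks_to_rerun` is a hypothetical helper illustrating the rule, not the TUI's actual code:

```python
# Illustrative rule (assumed): editing the output of task i keeps tasks 0..i
# and schedules everything after i for re-run with the edited context.
def tasks_to_rerun(task_names: list[str], edited: str) -> list[str]:
    """Return the tasks downstream of the edited one, in execution order."""
    idx = task_names.index(edited)
    return task_names[idx + 1:]


assert tasks_to_rerun(["research", "write", "review"], "research") == ["write", "review"]
```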
### Subcommands
```bash
# List all checkpoints
crewai checkpoint list ./my_checkpoints
# Inspect a specific checkpoint
crewai checkpoint info ./my_checkpoints/20260407T120000_abc123.json
# Inspect latest in a SQLite database
crewai checkpoint info ./.checkpoints.db
```

View File

@@ -106,7 +106,7 @@ The CLI automatically detects your project type from `pyproject.toml` and builds
```
<Tip>
The first deployment typically takes 10-15 minutes as it builds the container images. Subsequent deployments are much faster.
The first deployment typically takes around 1 minute.
</Tip>
</Step>
@@ -188,7 +188,7 @@ You need to push your crew to a GitHub repository. If you haven't created a crew
1. Click the "Deploy" button to start the deployment process
2. You can monitor the progress through the progress bar
3. The first deployment typically takes around 10-15 minutes; subsequent deployments will be faster
3. The first deployment typically takes around 1 minute
<Frame>
![Deploy Progress](/images/enterprise/deploy-progress.png)

View File

@@ -207,9 +207,8 @@ For teams and organizations, CrewAI offers enterprise deployment options that el
## Next Steps
<CardGroup cols={2}>
<Card title="Build Your First Agent" icon="code" href="/en/quickstart">
Follow our quickstart guide to create your first CrewAI agent and get
hands-on experience.
<Card title="Quickstart: Flow + agent" icon="code" href="/en/quickstart">
Follow the quickstart to scaffold a Flow, run a one-agent crew, and produce a report.
</Card>
<Card
title="Join the Community"

View File

@@ -140,7 +140,7 @@ For any production-ready application, **start with a Flow**.
icon="bolt"
href="en/quickstart"
>
Follow our quickstart guide to create your first CrewAI agent and get hands-on experience.
Scaffold a Flow, run a crew with one agent, and generate a report end to end.
</Card>
<Card
title="Join the Community"

View File

@@ -325,6 +325,34 @@ Streaming is particularly valuable for:
- **User Experience**: Reduce perceived latency by showing incremental results
- **Live Dashboards**: Build monitoring interfaces that display crew execution status
## Cancellation and Resource Cleanup
`CrewStreamingOutput` supports graceful cancellation so that in-flight work stops promptly when the consumer disconnects.
### Async Context Manager
```python Code
streaming = await crew.akickoff(inputs={"topic": "AI"})
async with streaming:
async for chunk in streaming:
print(chunk.content, end="", flush=True)
```
### Explicit Cancellation
```python Code
streaming = await crew.akickoff(inputs={"topic": "AI"})
try:
async for chunk in streaming:
print(chunk.content, end="", flush=True)
finally:
await streaming.aclose() # async
# streaming.close() # sync equivalent
```
After cancellation, `streaming.is_cancelled` and `streaming.is_completed` are both `True`. Both `aclose()` and `close()` are idempotent.
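The idempotent-close semantics described above can be modeled with a small handle class. This is a behavioral sketch of the documented flags, not the real `CrewStreamingOutput` implementation:

```python
# Minimal model of cancel-aware, idempotent close (assumed semantics matching
# the flags documented above: after close, both flags read True).
import asyncio


class StreamingHandle:
    def __init__(self):
        self.is_cancelled = False
        self.is_completed = False

    def close(self):
        if self.is_cancelled:  # second and later calls are no-ops
            return
        self.is_cancelled = True
        self.is_completed = True

    async def aclose(self):
        self.close()


async def main():
    s = StreamingHandle()
    await s.aclose()
    await s.aclose()  # idempotent: safe to call twice
    assert s.is_cancelled and s.is_completed

asyncio.run(main())
```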
## Important Notes
- Streaming automatically enables LLM streaming for all agents in the crew

View File

@@ -420,6 +420,34 @@ except Exception as e:
print("Streaming completed but flow encountered an error")
```
## Cancellation and Resource Cleanup
`FlowStreamingOutput` supports graceful cancellation so that in-flight work stops promptly when the consumer disconnects.
### Async Context Manager
```python Code
streaming = await flow.kickoff_async()
async with streaming:
async for chunk in streaming:
print(chunk.content, end="", flush=True)
```
### Explicit Cancellation
```python Code
streaming = await flow.kickoff_async()
try:
async for chunk in streaming:
print(chunk.content, end="", flush=True)
finally:
await streaming.aclose() # async
# streaming.close() # sync equivalent
```
After cancellation, `streaming.is_cancelled` and `streaming.is_completed` are both `True`. Both `aclose()` and `close()` are idempotent.
## Important Notes
- Streaming automatically enables LLM streaming for any crews used within the flow

View File

@@ -1,6 +1,6 @@
---
title: Quickstart
description: Build your first AI agent with CrewAI in under 5 minutes.
description: Build your first CrewAI Flow in minutes — orchestration, state, and an agent crew that produces a real report.
icon: rocket
mode: "wide"
---
@@ -13,39 +13,37 @@ You can install it with `npx skills add crewaiinc/skills`
<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{width: "100%", height: "400px"}}></iframe>
## Build your first CrewAI Agent
In this guide you will **create a Flow** that sets a research topic, runs a **crew with one agent** (a researcher using web search), and ends with a **markdown report** on disk. Flows are the recommended way to structure production apps: they own **state** and **execution order**, while **agents** do the work inside a crew step.
Let's create a simple crew that will help us `research` and `report` on the `latest AI developments` for a given topic or subject.
If you have not installed CrewAI yet, follow the [installation guide](/en/installation) first.
Before we proceed, make sure you have finished installing CrewAI.
If you haven't installed them yet, you can do so by following the [installation guide](/en/installation).
## Prerequisites
Follow the steps below to get Crewing! 🚣‍♂️
- Python environment and the CrewAI CLI (see [installation](/en/installation))
- An LLM configured with the right API keys — see [LLMs](/en/concepts/llms#setting-up-your-llm)
- A [Serper.dev](https://serper.dev/) API key (`SERPER_API_KEY`) for web search in this tutorial
## Build your first Flow
<Steps>
<Step title="Create your crew">
Create a new crew project by running the following command in your terminal.
This will create a new directory called `latest-ai-development` with the basic structure for your crew.
<Step title="Create a Flow project">
From your terminal, scaffold a Flow project (the folder name uses underscores, e.g. `latest_ai_flow`):
<CodeGroup>
```shell Terminal
crewai create crew latest-ai-development
crewai create flow latest-ai-flow
cd latest_ai_flow
```
</CodeGroup>
This creates a Flow app under `src/latest_ai_flow/`, including a starter crew under `crews/content_crew/` that you will replace with a minimal **single-agent** research crew in the next steps.
</Step>
<Step title="Navigate to your new crew project">
<CodeGroup>
```shell Terminal
cd latest_ai_development
```
</CodeGroup>
</Step>
<Step title="Modify your `agents.yaml` file">
<Tip>
You can also modify the agents as needed to fit your use case or copy and paste as is to your project.
Any variable interpolated in your `agents.yaml` and `tasks.yaml` files like `{topic}` will be replaced by the value of the variable in the `main.py` file.
</Tip>
<Step title="Configure one agent in `agents.yaml`">
Replace the contents of `src/latest_ai_flow/crews/content_crew/config/agents.yaml` with a single researcher. Variables like `{topic}` are filled from `crew.kickoff(inputs=...)`.
```yaml agents.yaml
# src/latest_ai_development/config/agents.yaml
# src/latest_ai_flow/crews/content_crew/config/agents.yaml
researcher:
role: >
{topic} Senior Data Researcher
@@ -53,336 +51,232 @@ Follow the steps below to get Crewing! 🚣‍♂️
Uncover cutting-edge developments in {topic}
backstory: >
You're a seasoned researcher with a knack for uncovering the latest
developments in {topic}. Known for your ability to find the most relevant
information and present it in a clear and concise manner.
reporting_analyst:
role: >
{topic} Reporting Analyst
goal: >
Create detailed reports based on {topic} data analysis and research findings
backstory: >
You're a meticulous analyst with a keen eye for detail. You're known for
your ability to turn complex data into clear and concise reports, making
it easy for others to understand and act on the information you provide.
developments in {topic}. You find the most relevant information and
present it clearly.
```
</Step>
<Step title="Modify your `tasks.yaml` file">
<Step title="Configure one task in `tasks.yaml`">
```yaml tasks.yaml
# src/latest_ai_development/config/tasks.yaml
# src/latest_ai_flow/crews/content_crew/config/tasks.yaml
research_task:
description: >
Conduct a thorough research about {topic}
Make sure you find any interesting and relevant information given
the current year is 2025.
Conduct thorough research about {topic}. Use web search to find current,
credible information. The current year is 2026.
expected_output: >
A list with 10 bullet points of the most relevant information about {topic}
A markdown report with clear sections: key trends, notable tools or companies,
and implications. Aim for 800-1200 words. No fenced code blocks around the whole document.
agent: researcher
reporting_task:
description: >
Review the context you got and expand each topic into a full section for a report.
Make sure the report is detailed and contains any and all relevant information.
expected_output: >
A fully fledge reports with the mains topics, each with a full section of information.
Formatted as markdown without '```'
agent: reporting_analyst
output_file: report.md
output_file: output/report.md
```
</Step>
<Step title="Modify your `crew.py` file">
```python crew.py
# src/latest_ai_development/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool
from crewai.agents.agent_builder.base_agent import BaseAgent
<Step title="Wire the crew class (`content_crew.py`)">
Point the generated crew at your YAML and attach `SerperDevTool` to the researcher.
```python content_crew.py
# src/latest_ai_flow/crews/content_crew/content_crew.py
from typing import List
from crewai import Agent, Crew, Process, Task
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool
@CrewBase
class LatestAiDevelopmentCrew():
"""LatestAiDevelopment crew"""
class ResearchCrew:
"""Single-agent research crew used inside the Flow."""
agents: List[BaseAgent]
tasks: List[Task]
agents_config = "config/agents.yaml"
tasks_config = "config/tasks.yaml"
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'], # type: ignore[index]
config=self.agents_config["researcher"], # type: ignore[index]
verbose=True,
tools=[SerperDevTool()]
)
@agent
def reporting_analyst(self) -> Agent:
return Agent(
config=self.agents_config['reporting_analyst'], # type: ignore[index]
verbose=True
tools=[SerperDevTool()],
)
@task
def research_task(self) -> Task:
return Task(
config=self.tasks_config['research_task'], # type: ignore[index]
)
@task
def reporting_task(self) -> Task:
return Task(
config=self.tasks_config['reporting_task'], # type: ignore[index]
output_file='output/report.md' # This is the file that will be contain the final report.
config=self.tasks_config["research_task"], # type: ignore[index]
)
@crew
def crew(self) -> Crew:
"""Creates the LatestAiDevelopment crew"""
return Crew(
agents=self.agents, # Automatically created by the @agent decorator
tasks=self.tasks, # Automatically created by the @task decorator
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
verbose=True,
)
```
</Step>
<Step title="[Optional] Add before and after crew functions">
```python crew.py
# src/latest_ai_development/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task, before_kickoff, after_kickoff
from crewai_tools import SerperDevTool
@CrewBase
class LatestAiDevelopmentCrew():
"""LatestAiDevelopment crew"""
<Step title="Define the Flow in `main.py`">
Connect the crew to a Flow: a `@start()` step sets the topic in **state**, and a `@listen` step runs the crew. The task's `output_file` still writes `output/report.md`.
@before_kickoff
def before_kickoff_function(self, inputs):
print(f"Before kickoff function with inputs: {inputs}")
return inputs # You can return the inputs or modify them as needed
@after_kickoff
def after_kickoff_function(self, result):
print(f"After kickoff function with result: {result}")
return result # You can return the result or modify it as needed
# ... remaining code
```
</Step>
<Step title="Feel free to pass custom inputs to your crew">
For example, you can pass the `topic` input to your crew to customize the research and reporting.
```python main.py
#!/usr/bin/env python
# src/latest_ai_development/main.py
import sys
from latest_ai_development.crew import LatestAiDevelopmentCrew
# src/latest_ai_flow/main.py
from pydantic import BaseModel
def run():
"""
Run the crew.
"""
inputs = {
'topic': 'AI Agents'
}
LatestAiDevelopmentCrew().crew().kickoff(inputs=inputs)
from crewai.flow import Flow, listen, start
from latest_ai_flow.crews.content_crew.content_crew import ResearchCrew
class ResearchFlowState(BaseModel):
topic: str = ""
report: str = ""
class LatestAiFlow(Flow[ResearchFlowState]):
@start()
def prepare_topic(self, crewai_trigger_payload: dict | None = None):
if crewai_trigger_payload:
self.state.topic = crewai_trigger_payload.get("topic", "AI Agents")
else:
self.state.topic = "AI Agents"
print(f"Topic: {self.state.topic}")
@listen(prepare_topic)
def run_research(self):
result = ResearchCrew().crew().kickoff(inputs={"topic": self.state.topic})
self.state.report = result.raw
print("Research crew finished.")
@listen(run_research)
def summarize(self):
print("Report path: output/report.md")
def kickoff():
LatestAiFlow().kickoff()
def plot():
LatestAiFlow().plot()
if __name__ == "__main__":
kickoff()
```
</Step>
<Step title="Set your environment variables">
Before running your crew, make sure you have the following keys set as environment variables in your `.env` file:
- A [Serper.dev](https://serper.dev/) API key: `SERPER_API_KEY=YOUR_KEY_HERE`
- The configuration for your choice of model, such as an API key. See the
[LLM setup guide](/en/concepts/llms#setting-up-your-llm) to learn how to configure models from any provider.
</Step>
<Step title="Lock and install the dependencies">
- Lock the dependencies and install them by using the CLI command:
<CodeGroup>
```shell Terminal
crewai install
```
</CodeGroup>
- If you have additional packages that you want to install, you can do so by running:
<CodeGroup>
```shell Terminal
uv add <package-name>
```
</CodeGroup>
</Step>
<Step title="Run your crew">
- To run your crew, execute the following command in the root of your project:
<CodeGroup>
```bash Terminal
crewai run
```
</CodeGroup>
<Tip>
If your package name differs from `latest_ai_flow`, change the import of `ResearchCrew` to match your project's module path.
</Tip>
</Step>
<Step title="Enterprise Alternative: Create in Crew Studio">
For CrewAI AMP users, you can create the same crew without writing code:
<Step title="Set environment variables">
In `.env` at the project root, set:
1. Log in to your CrewAI AMP account (create a free account at [app.crewai.com](https://app.crewai.com))
2. Open Crew Studio
3. Type what is the automation you're trying to build
4. Create your tasks visually and connect them in sequence
5. Configure your inputs and click "Download Code" or "Deploy"
![Crew Studio Quickstart](/images/enterprise/crew-studio-interface.png)
<Card title="Try CrewAI AMP" icon="rocket" href="https://app.crewai.com">
Start your free account at CrewAI AMP
</Card>
- `SERPER_API_KEY` — from [Serper.dev](https://serper.dev/)
- Your model provider keys as required — see [LLM setup](/en/concepts/llms#setting-up-your-llm)
</Step>
<Step title="View your final report">
You should see the output in the console and the `report.md` file should be created in the root of your project with the final report.
Here's an example of what the report should look like:
<Step title="Install and run">
<CodeGroup>
```shell Terminal
crewai install
crewai run
```
</CodeGroup>
`crewai run` executes the Flow entrypoint defined in your project (same command as for crews; project type is `"flow"` in `pyproject.toml`).
</Step>
<Step title="Check the output">
You should see logs from the Flow and the crew. Open **`output/report.md`** for the generated report (excerpt):
<CodeGroup>
```markdown output/report.md
# Comprehensive Report on the Rise and Impact of AI Agents in 2025
# AI Agents in 2026: Landscape and Trends
## 1. Introduction to AI Agents
In 2025, Artificial Intelligence (AI) agents are at the forefront of innovation across various industries. As intelligent systems that can perform tasks typically requiring human cognition, AI agents are paving the way for significant advancements in operational efficiency, decision-making, and overall productivity within sectors like Human Resources (HR) and Finance. This report aims to detail the rise of AI agents, their frameworks, applications, and potential implications on the workforce.
## Executive summary
## 2. Benefits of AI Agents
AI agents bring numerous advantages that are transforming traditional work environments. Key benefits include:
## Key trends
- **Tool use and orchestration** — …
- **Enterprise adoption** — …
- **Task Automation**: AI agents can carry out repetitive tasks such as data entry, scheduling, and payroll processing without human intervention, greatly reducing the time and resources spent on these activities.
- **Improved Efficiency**: By quickly processing large datasets and performing analyses that would take humans significantly longer, AI agents enhance operational efficiency. This allows teams to focus on strategic tasks that require higher-level thinking.
- **Enhanced Decision-Making**: AI agents can analyze trends and patterns in data, provide insights, and even suggest actions, helping stakeholders make informed decisions based on factual data rather than intuition alone.
## 3. Popular AI Agent Frameworks
Several frameworks have emerged to facilitate the development of AI agents, each with its own unique features and capabilities. Some of the most popular frameworks include:
- **Autogen**: A framework designed to streamline the development of AI agents through automation of code generation.
- **Semantic Kernel**: Focuses on natural language processing and understanding, enabling agents to comprehend user intentions better.
- **Promptflow**: Provides tools for developers to create conversational agents that can navigate complex interactions seamlessly.
- **Langchain**: Specializes in leveraging various APIs to ensure agents can access and utilize external data effectively.
- **CrewAI**: Aimed at collaborative environments, CrewAI strengthens teamwork by facilitating communication through AI-driven insights.
- **MemGPT**: Combines memory-optimized architectures with generative capabilities, allowing for more personalized interactions with users.
These frameworks empower developers to build versatile and intelligent agents that can engage users, perform advanced analytics, and execute various tasks aligned with organizational goals.
## 4. AI Agents in Human Resources
AI agents are revolutionizing HR practices by automating and optimizing key functions:
- **Recruiting**: AI agents can screen resumes, schedule interviews, and even conduct initial assessments, thus accelerating the hiring process while minimizing biases.
- **Succession Planning**: AI systems analyze employee performance data and potential, helping organizations identify future leaders and plan appropriate training.
- **Employee Engagement**: Chatbots powered by AI can facilitate feedback loops between employees and management, promoting an open culture and addressing concerns promptly.
As AI continues to evolve, HR departments leveraging these agents can realize substantial improvements in both efficiency and employee satisfaction.
## 5. AI Agents in Finance
The finance sector is seeing extensive integration of AI agents that enhance financial practices:
- **Expense Tracking**: Automated systems manage and monitor expenses, flagging anomalies and offering recommendations based on spending patterns.
- **Risk Assessment**: AI models assess credit risk and uncover potential fraud by analyzing transaction data and behavioral patterns.
- **Investment Decisions**: AI agents provide stock predictions and analytics based on historical data and current market conditions, empowering investors with informative insights.
The incorporation of AI agents into finance is fostering a more responsive and risk-aware financial landscape.
## 6. Market Trends and Investments
The growth of AI agents has attracted significant investment, especially amidst the rising popularity of chatbots and generative AI technologies. Companies and entrepreneurs are eager to explore the potential of these systems, recognizing their ability to streamline operations and improve customer engagement.
Conversely, corporations like Microsoft are taking strides to integrate AI agents into their product offerings, with enhancements to their Copilot 365 applications. This strategic move emphasizes the importance of AI literacy in the modern workplace and indicates the stabilizing of AI agents as essential business tools.
## 7. Future Predictions and Implications
Experts predict that AI agents will transform essential aspects of work life. As we look toward the future, several anticipated changes include:
- Enhanced integration of AI agents across all business functions, creating interconnected systems that leverage data from various departmental silos for comprehensive decision-making.
- Continued advancement of AI technologies, resulting in smarter, more adaptable agents capable of learning and evolving from user interactions.
- Increased regulatory scrutiny to ensure ethical use, especially concerning data privacy and employee surveillance as AI agents become more prevalent.
To stay competitive and harness the full potential of AI agents, organizations must remain vigilant about latest developments in AI technology and consider continuous learning and adaptation in their strategic planning.
## 8. Conclusion
The emergence of AI agents is undeniably reshaping the workplace landscape in 5. With their ability to automate tasks, enhance efficiency, and improve decision-making, AI agents are critical in driving operational success. Organizations must embrace and adapt to AI developments to thrive in an increasingly digital business environment.
## Implications
```
</CodeGroup>
Your actual file will be longer and reflect live search results.
</Step>
</Steps>
## How this run fits together
1. **Flow** — `LatestAiFlow` runs `prepare_topic` first, then `run_research`, then `summarize`. State (`topic`, `report`) lives on the Flow.
2. **Crew** — `ResearchCrew` runs one task with one agent: the researcher uses **Serper** to search the web, then writes the structured report.
3. **Artifact** — The task's `output_file` writes the report under `output/report.md`.
To go deeper on Flow patterns (routing, persistence, human-in-the-loop), see [Build your first Flow](/en/guides/flows/first-flow) and [Flows](/en/concepts/flows). For crews without a Flow, see [Crews](/en/concepts/crews). For a single `Agent` and `kickoff()` without tasks, see [Agents](/en/concepts/agents#direct-agent-interaction-with-kickoff).
<Check>
Congratulations!
You now have an end-to-end Flow with an agent crew and a saved report — a solid base to add more steps, crews, or tools.
</Check>
### Naming consistency
The names you use in your YAML files (`agents.yaml` and `tasks.yaml`) must match the method names on your `@CrewBase` class. YAML keys (`researcher`, `research_task`) are how CrewAI links your configuration to your code: `tasks.yaml` references agents by these names, and a mismatched name means the task cannot resolve its agent reference. See [Crews](/en/concepts/crews) for the full decorator pattern.
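To catch mismatches before a run, you can compare the parsed YAML keys against the class's method names. This is a hypothetical pre-flight check, not a CrewAI API; `ResearchCrew`, `agents_config`, and `tasks_config` below are stand-ins:

```python
# Hypothetical pre-flight check (not part of CrewAI): every key in
# agents.yaml / tasks.yaml should match a method name on the @CrewBase class.
class ResearchCrew:  # stand-in for your @CrewBase class
    def researcher(self): ...
    def research_task(self): ...
    def crew(self): ...

agents_config = {"researcher": {"role": "..."}}            # parsed agents.yaml
tasks_config = {"research_task": {"agent": "researcher"}}  # parsed tasks.yaml

methods = {name for name in dir(ResearchCrew) if not name.startswith("_")}
missing = (set(agents_config) | set(tasks_config)) - methods
assert not missing, f"YAML keys without matching methods: {missing}"

# Task -> agent references should resolve too:
for task in tasks_config.values():
    assert task["agent"] in agents_config, task
```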
## Deploying
Push your Flow to **[CrewAI AMP](https://app.crewai.com)** once it runs locally and your project is in a **GitHub** repository. From the project root:
<CodeGroup>
```bash Authenticate
crewai login
```
```bash Create deployment
crewai deploy create
```
```bash Check status & logs
crewai deploy status
crewai deploy logs
```
```bash Ship updates after you change code
crewai deploy push
```
```bash List or remove deployments
crewai deploy list
crewai deploy remove <deployment_id>
```
</CodeGroup>
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI AMP](https://app.crewai.com) using the CLI.
<iframe
className="w-full aspect-video rounded-xl"
src="https://www.youtube.com/embed/3EqSV-CYDZA"
title="CrewAI Deployment Guide"
frameBorder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowFullScreen
></iframe>
<Tip>
The first deploy usually takes **around 1 minute**. Full prerequisites and the web UI flow are in [Deploy to AMP](/en/enterprise/guides/deploy-to-amp).
</Tip>
<CardGroup cols={2}>
<Card title="Deploy guide" icon="book" href="/en/enterprise/guides/deploy-to-amp">
Step-by-step AMP deployment (CLI and dashboard).
</Card>
<Card
title="Join the Community"
icon="comments"
href="https://community.crewai.com"
>
Discuss ideas, share projects, and connect with other CrewAI developers.
</Card>
</CardGroup>

docs/en/skills.mdx (new file)

@@ -0,0 +1,50 @@
---
title: Skills
description: Install crewaiinc/skills from the official registry at skills.sh—Flows, Crews, and docs-aware agents for Claude Code, Cursor, Codex, and more.
icon: wand-magic-sparkles
mode: "wide"
---
# Skills
**Give your AI coding agent CrewAI context in one command.**
CrewAI **Skills** are published on **[skills.sh/crewaiinc/skills](https://skills.sh/crewaiinc/skills)**—the official registry for `crewaiinc/skills`, including individual skills (for example **design-agent**, **getting-started**, **design-task**, and **ask-docs**), install stats, and audits. They teach coding agents—like Claude Code, Cursor, and Codex—how to scaffold Flows, configure Crews, use tools, and follow CrewAI patterns. Run the install below (or paste it into your agent).
```shell Terminal
npx skills add crewaiinc/skills
```
That pulls the official skill pack into your agent workflow so it can apply CrewAI conventions without you re-explaining the framework each session. Source code and issues live on [GitHub](https://github.com/crewAIInc/skills).
## What your agent gets
- **Flows** — structure stateful apps, steps, and crew kickoffs the CrewAI way
- **Crews & agents** — YAML-first patterns, roles, tasks, and delegation
- **Tools & integrations** — hook agents to search, APIs, and common CrewAI tools
- **Project layout** — align with CLI scaffolds and repo conventions
- **Up-to-date patterns** — skills track current CrewAI docs and recommended practices
## Learn more on this site
<CardGroup cols={2}>
<Card title="Coding tools & AGENTS.md" icon="terminal" href="/en/guides/coding-tools/agents-md">
How to use `AGENTS.md` and coding-agent workflows with CrewAI.
</Card>
<Card title="Quickstart" icon="rocket" href="/en/quickstart">
Build your first Flow and crew end-to-end.
</Card>
<Card title="Installation" icon="download" href="/en/installation">
Install the CrewAI CLI and Python package.
</Card>
<Card title="Skills registry (skills.sh)" icon="globe" href="https://skills.sh/crewaiinc/skills">
Official listing for `crewaiinc/skills`—skills, installs, and audits.
</Card>
<Card title="GitHub source" icon="code-branch" href="https://github.com/crewAIInc/skills">
Source, updates, and issues for the skill pack.
</Card>
</CardGroup>
### Video: CrewAI with coding agent skills
<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{ width: "100%", height: "400px" }} />


@@ -7,6 +7,10 @@ mode: "wide"
# `CodeInterpreterTool`
<Warning>
**Deprecated:** `CodeInterpreterTool` has been removed from `crewai-tools`. The `allow_code_execution` and `code_execution_mode` parameters on `Agent` are also deprecated. Use a dedicated sandbox service — [E2B](https://e2b.dev) or [Modal](https://modal.com) — for secure, isolated code execution.
</Warning>
## Description
The `CodeInterpreterTool` enables CrewAI agents to execute Python 3 code that they generate autonomously. This functionality is particularly valuable as it allows agents to create code, execute it, obtain the results, and utilize that information to inform subsequent decisions and actions.


@@ -13,7 +13,7 @@ This tool is used to convert natural language to SQL queries. When passed to the
This enables workflows such as having an agent access the database, fetch information based on its goal, and then use that information to generate a response, report, or other output. It also allows the agent to update the database in line with its goal.
**Attention**: By default the tool is read-only (SELECT/SHOW/DESCRIBE/EXPLAIN only). Write operations require `allow_dml=True` or the `CREWAI_NL2SQL_ALLOW_DML=true` environment variable. When write access is enabled, make sure the Agent uses a scoped database user or a read replica where possible.
## Security Model
@@ -38,6 +38,74 @@ Use all of the following in production:
- Add `before_tool_call` hooks to enforce allowed query patterns
- Enable query logging and alerting for destructive statements
## Read-Only Mode & DML Configuration
`NL2SQLTool` operates in **read-only mode by default**. Only the following statement types are permitted without additional configuration:
- `SELECT`
- `SHOW`
- `DESCRIBE`
- `EXPLAIN`
Any attempt to execute a write operation (`INSERT`, `UPDATE`, `DELETE`, `DROP`, `CREATE`, `ALTER`, `TRUNCATE`, etc.) will raise an error unless DML is explicitly enabled.
Multi-statement queries containing semicolons (e.g. `SELECT 1; DROP TABLE users`) are also blocked in read-only mode to prevent injection attacks.
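A simplified illustration of this kind of guard (not the tool's actual implementation) might look like:

```python
# Simplified read-only guard: allow only SELECT/SHOW/DESCRIBE/EXPLAIN
# and reject multi-statement queries. Illustrative only; the real
# NL2SQLTool validation may differ.
ALLOWED = {"SELECT", "SHOW", "DESCRIBE", "EXPLAIN"}

def check_read_only(sql: str) -> None:
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:
        raise ValueError("multi-statement queries are blocked")
    first = stripped.split(None, 1)[0].upper() if stripped else ""
    if first not in ALLOWED:
        raise ValueError(f"statement type {first!r} not permitted in read-only mode")

check_read_only("SELECT * FROM users")  # permitted
# check_read_only("SELECT 1; DROP TABLE users")  # raises ValueError
```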
### Enabling Write Operations
You can enable DML (Data Manipulation Language) in two ways:
**Option 1 — constructor parameter:**
```python
from crewai_tools import NL2SQLTool
nl2sql = NL2SQLTool(
db_uri="postgresql://example@localhost:5432/test_db",
allow_dml=True,
)
```
**Option 2 — environment variable:**
```bash
CREWAI_NL2SQL_ALLOW_DML=true
```
```python
from crewai_tools import NL2SQLTool
# DML enabled via environment variable
nl2sql = NL2SQLTool(db_uri="postgresql://example@localhost:5432/test_db")
```
### Usage Examples
**Read-only (default) — safe for analytics and reporting:**
```python
from crewai_tools import NL2SQLTool
# Only SELECT/SHOW/DESCRIBE/EXPLAIN are permitted
nl2sql = NL2SQLTool(db_uri="postgresql://example@localhost:5432/test_db")
```
**DML enabled — required for write workloads:**
```python
from crewai_tools import NL2SQLTool
# INSERT, UPDATE, DELETE, DROP, etc. are permitted
nl2sql = NL2SQLTool(
db_uri="postgresql://example@localhost:5432/test_db",
allow_dml=True,
)
```
<Warning>
Enabling DML gives the agent the ability to modify or destroy data. Only enable this when your use case explicitly requires write access, and ensure the database credentials are scoped to the minimum required privileges.
</Warning>
## Requirements
- SQLAlchemy


@@ -75,4 +75,20 @@ tool = CSVSearchTool(
},
}
)
## Security
### Path Validation
File paths provided to this tool are validated against the current working directory. Paths that resolve outside the working directory are rejected with a `ValueError`.
To allow paths outside the working directory (for example, in tests or trusted pipelines), set the environment variable:
```shell
CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true
```
### URL Validation
URL inputs are validated: `file://` URIs and requests targeting private or reserved IP ranges are blocked to prevent server-side request forgery (SSRF) attacks.
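Roughly, the path check behaves like the following sketch; the actual crewai-tools implementation may differ in detail:

```python
# Rough sketch of working-directory path validation. Illustrative only;
# the crewai-tools implementation may differ.
import os
from pathlib import Path

def validate_path(user_path: str) -> Path:
    if os.environ.get("CREWAI_TOOLS_ALLOW_UNSAFE_PATHS", "").lower() == "true":
        return Path(user_path).resolve()
    base = Path.cwd().resolve()
    resolved = Path(user_path).resolve()
    if not resolved.is_relative_to(base):
        raise ValueError(f"{user_path!r} resolves outside the working directory")
    return resolved

validate_path("data/report.csv")     # ok: stays under the working directory
# validate_path("../../etc/passwd") # raises ValueError
```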


@@ -67,4 +67,16 @@ tool = DirectorySearchTool(
},
}
)
## Security
### Path Validation
Directory paths provided to this tool are validated against the current working directory. Paths that resolve outside the working directory are rejected with a `ValueError`.
To allow paths outside the working directory (for example, in tests or trusted pipelines), set the environment variable:
```shell
CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true
```


@@ -74,3 +74,19 @@ tool = JSONSearchTool(
}
)
```
## Security
### Path Validation
File paths provided to this tool are validated against the current working directory. Paths that resolve outside the working directory are rejected with a `ValueError`.
To allow paths outside the working directory (for example, in tests or trusted pipelines), set the environment variable:
```shell
CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true
```
### URL Validation
URL inputs are validated: `file://` URIs and requests targeting private or reserved IP ranges are blocked to prevent server-side request forgery (SSRF) attacks.


@@ -105,4 +105,20 @@ tool = PDFSearchTool(
},
}
)
## Security
### Path Validation
File paths provided to this tool are validated against the current working directory. Paths that resolve outside the working directory are rejected with a `ValueError`.
To allow paths outside the working directory (for example, in tests or trusted pipelines), set the environment variable:
```shell
CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true
```
### URL Validation
URL inputs are validated: `file://` URIs and requests targeting private or reserved IP ranges are blocked to prevent server-side request forgery (SSRF) attacks.

Binary file not shown (image, 315 KiB).


@@ -4,6 +4,210 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="April 10, 2026">
## v1.14.2a2
[View the release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a2)
## What's Changed
### Features
- Add a checkpoint TUI with tree view, fork support, and editable inputs/outputs
- Enhance LLM token tracking with reasoning tokens and cache-creation tokens
- Add a `from_checkpoint` parameter to kickoff methods
- Include `crewai_version` in checkpoints, with a migration framework
- Add checkpoint forking with lineage tracking
### Bug Fixes
- Fix strict-mode forwarding to the Anthropic and Bedrock providers
- Harden NL2SQLTool with read-only defaults, query validation, and parameterized queries
### Documentation
- Changelog and version updates for v1.14.2a1
## Contributors
@alex-clawd, @github-actions[bot], @greysonlalonde, @lucasgomide
</Update>
<Update label="April 9, 2026">
## v1.14.2a1
[View the release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a1)
## What's Changed
### Bug Fixes
- Fix flow_finished event emission after HITL resume
- Pin cryptography to 46.0.7 to address CVE-2026-39892
### Refactoring
- Refactor to use the shared I18N_DEFAULT singleton
### Documentation
- Changelog and version updates for v1.14.1
## Contributors
@greysonlalonde
</Update>
<Update label="April 9, 2026">
## v1.14.1
[View the release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.1)
## What's Changed
### Features
- Add an async checkpoint TUI browser
- Add aclose()/close() and an async context manager to streaming output
### Bug Fixes
- Fix the regex for bumping template pyproject.toml versions
- Sanitize tool names in hook decorator filters
- Fix checkpoint handler registration when a CheckpointConfig is created
- Update transformers to 5.5.0 to address CVE-2026-1839
- Remove the FilteredStream stdout/stderr wrappers
### Documentation
- Changelog and version updates for v1.14.1rc1
### Refactoring
- Replace the hardcoded denylist with dynamic BaseTool field exclusion
- Replace regex with tomlkit in the devtools CLI
- Use the shared PRINTER singleton
- Change BaseProvider to a BaseModel with a provider_type identifier
## Contributors
@greysonlalonde, @iris-clawd, @joaomdmoura, @lorenzejay
</Update>
<Update label="April 9, 2026">
## v1.14.1rc1
[View the release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.1rc1)
## What's Changed
### Features
- Add an async checkpoint TUI browser
- Add aclose()/close() and an async context manager to streaming output
### Bug Fixes
- Fix template pyproject.toml version bumps using regex
- Sanitize tool names in hook decorator filters
- Update transformers to 5.5.0 to address CVE-2026-1839
- Register checkpoint handlers when a CheckpointConfig is created
### Refactoring
- Replace the hardcoded denylist with dynamic BaseTool field exclusion
- Replace regex with tomlkit in the devtools CLI
- Use the shared PRINTER singleton
- Change BaseProvider to a BaseModel with a provider_type discriminator
- Remove the FilteredStream stdout/stderr wrappers
- Remove the unused flow/config.py
### Documentation
- Changelog and version updates for v1.14.0
## Contributors
@greysonlalonde, @iris-clawd, @joaomdmoura
</Update>
<Update label="April 7, 2026">
## v1.14.0
[View the release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.0)
## What's Changed
### Features
- Add checkpoint list/info CLI commands
- Add guardrail_type and name to distinguish traces
- Add SqliteProvider for checkpoint storage
- Add CheckpointConfig for automatic checkpoint creation
- Implement runtime state checkpointing, plus event system and executor refactors
### Bug Fixes
- Add SSRF and path traversal protections
- Add path and URL validation to RAG tools
- Exclude embedding vectors from memory serialization to save tokens
- Ensure the output directory exists before writing in flow templates
- Update litellm to >=1.83.0 to address CVE-2026-35030
- Remove SEO indexing fields that caused pages to render in Arabic
### Documentation
- Changelog and version updates for v1.14.0
- Update the quickstart and installation guides for clarity
- Add a storage providers section and export JsonProvider
- Add an AMP training tabs guide
### Refactoring
- Clean up the checkpoint API
- Remove CodeInterpreterTool and deprecate the code execution parameters
## Contributors
@alex-clawd, @github-actions[bot], @greysonlalonde, @iris-clawd, @joaomdmoura, @lorenzejay, @lucasgomide
</Update>
<Update label="April 7, 2026">
## v1.14.0a4
[View the release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.0a4)
## What's Changed
### Features
- Add guardrail_type and name to distinguish traces
- Add SqliteProvider for checkpoint storage
- Add CheckpointConfig for automatic checkpoint creation
- Implement runtime state checkpointing, plus event system and executor refactors
### Bug Fixes
- Exclude embedding vectors from memory serialization to save tokens
- Update litellm to >=1.83.0 to address CVE-2026-35030
### Documentation
- Update the quickstart and installation guides for clarity
- Add a storage providers section and export JsonProvider
### Performance
- Use JSONB for the checkpoint data column
### Refactoring
- Remove CodeInterpreterTool and deprecate the code execution parameters
## Contributors
@alex-clawd, @github-actions[bot], @greysonlalonde, @joaomdmoura, @lorenzejay, @lucasgomide
</Update>
<Update label="April 6, 2026">
## v1.14.0a3
[View the release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.0a3)
## What's Changed
### Documentation
- Changelog and version updates for v1.14.0a2
## Contributors
@joaomdmoura
</Update>
<Update label="April 6, 2026">
## v1.14.0a2


@@ -291,15 +291,13 @@ multimodal_agent = Agent(
- `max_retry_limit`: Number of times to retry on error
#### Code Execution
<Warning>
`allow_code_execution` and `code_execution_mode` are deprecated. `CodeInterpreterTool` has been removed from `crewai-tools`. Use a dedicated sandbox service such as [E2B](https://e2b.dev) or [Modal](https://modal.com) for secure code execution.
</Warning>
- `allow_code_execution` _(deprecated)_: Previously enabled built-in code execution via `CodeInterpreterTool`.
- `code_execution_mode` _(deprecated)_: Previously controlled the execution mode (`"safe"` for Docker, `"unsafe"` for direct execution).
#### Advanced Features
- `multimodal`: Enable multimodal capabilities for processing text and visual content
@@ -627,9 +625,10 @@ asyncio.run(main())
## Important Considerations and Best Practices
### Security and Code Execution
<Warning>
`allow_code_execution` and `code_execution_mode` are deprecated, and `CodeInterpreterTool` has been removed. Use a dedicated sandbox service such as [E2B](https://e2b.dev) or [Modal](https://modal.com) for secure code execution.
</Warning>
### Performance Optimization
- Use `respect_context_window: true` to avoid token-limit issues.


@@ -0,0 +1,229 @@
---
title: Checkpointing
description: Automatically save execution state so crews, flows, and agents can resume after failure.
icon: floppy-disk
mode: "wide"
---
<Warning>
Checkpointing is an early release. The API may change in future versions.
</Warning>
## Overview
Checkpointing automatically saves execution state during a run. If a crew, flow, or agent fails partway through, you can restore from the last checkpoint and resume without re-running work that already finished.
## Quickstart
```python
from crewai import Crew, CheckpointConfig

crew = Crew(
    agents=[...],
    tasks=[...],
    checkpoint=True,  # use defaults: ./.checkpoints, task_completed events
)
result = crew.kickoff()
```
A checkpoint file is written to `./.checkpoints/` after each task completes.
## Configuration
Control the details with `CheckpointConfig`:
```python
from crewai import Crew, CheckpointConfig

crew = Crew(
    agents=[...],
    tasks=[...],
    checkpoint=CheckpointConfig(
        location="./my_checkpoints",
        on_events=["task_completed", "crew_kickoff_completed"],
        max_checkpoints=5,
    ),
)
```
### CheckpointConfig fields
| Field | Type | Default | Description |
|:------|:-----|:--------|:------------|
| `location` | `str` | `"./.checkpoints"` | Path for checkpoint files |
| `on_events` | `list[str]` | `["task_completed"]` | Event types that trigger a checkpoint |
| `provider` | `BaseProvider` | `JsonProvider()` | Storage backend |
| `max_checkpoints` | `int \| None` | `None` | Maximum number of files to keep; oldest are deleted first |
### Inheritance and opting out
The `checkpoint` field on Crew, Flow, and Agent accepts `CheckpointConfig`, `True`, `False`, or `None`:
| Value | Behavior |
|:------|:---------|
| `None` (default) | Inherit from the parent. Agents inherit the crew's setting. |
| `True` | Enable with defaults. |
| `False` | Explicit opt-out. Stops parent inheritance. |
| `CheckpointConfig(...)` | Custom configuration. |
```python
crew = Crew(
    agents=[
        Agent(role="Researcher", ...),  # inherits the crew's checkpoint
        Agent(role="Writer", ..., checkpoint=False),  # opts out, no checkpoints
    ],
    tasks=[...],
    checkpoint=True,
)
```
## Resuming from a checkpoint
```python
# Restore and resume
crew = Crew.from_checkpoint("./my_checkpoints/20260407T120000_abc123.json")
result = crew.kickoff()  # resumes from the last completed task
```
A restored crew skips tasks that already completed and resumes from the first unfinished task.
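Schematically, resuming works like this sketch (not CrewAI internals): tasks recorded as completed in the checkpoint are skipped.

```python
# Schematic resume logic: skip tasks already recorded in the checkpoint.
# Not the actual crewai implementation.
def tasks_to_resume(all_tasks: list[str], checkpoint: dict) -> list[str]:
    done = set(checkpoint.get("completed_tasks", []))
    return [t for t in all_tasks if t not in done]

checkpoint = {"completed_tasks": ["research_task"]}
remaining = tasks_to_resume(
    ["research_task", "write_task", "review_task"], checkpoint
)
print(remaining)  # ['write_task', 'review_task']
```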
## Available on Crew, Flow, and Agent
### Crew
```python
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task, review_task],
    checkpoint=CheckpointConfig(location="./crew_cp"),
)
```
Default trigger: `task_completed` (one checkpoint per completed task).
### Flow
```python
from crewai.flow.flow import Flow, start, listen
from crewai import CheckpointConfig


class MyFlow(Flow):
    @start()
    def step_one(self):
        return "data"

    @listen(step_one)
    def step_two(self, data):
        return process(data)


flow = MyFlow(
    checkpoint=CheckpointConfig(
        location="./flow_cp",
        on_events=["method_execution_finished"],
    ),
)
result = flow.kickoff()

# Resume
flow = MyFlow.from_checkpoint("./flow_cp/20260407T120000_abc123.json")
result = flow.kickoff()
```
### Agent
```python
agent = Agent(
    role="Researcher",
    goal="Research topics",
    backstory="Expert researcher",
    checkpoint=CheckpointConfig(
        location="./agent_cp",
        on_events=["lite_agent_execution_completed"],
    ),
)
result = agent.kickoff(messages=[{"role": "user", "content": "Research AI trends"}])
```
## Storage providers
CrewAI ships with two checkpoint storage providers.
### JsonProvider (default)
Stores each checkpoint as a separate JSON file.
```python
from crewai import Crew, CheckpointConfig
from crewai.state import JsonProvider

crew = Crew(
    agents=[...],
    tasks=[...],
    checkpoint=CheckpointConfig(
        location="./my_checkpoints",
        provider=JsonProvider(),
        max_checkpoints=5,
    ),
)
```
### SqliteProvider
Stores all checkpoints in a single SQLite database file.
```python
from crewai import Crew, CheckpointConfig
from crewai.state import SqliteProvider

crew = Crew(
    agents=[...],
    tasks=[...],
    checkpoint=CheckpointConfig(
        location="./.checkpoints.db",
        provider=SqliteProvider(),
    ),
)
```
## Event types
The `on_events` field accepts any combination of event type strings. Common choices:
| Use case | Events |
|:---------|:-------|
| After each completed task (Crew) | `["task_completed"]` |
| After each flow method finishes | `["method_execution_finished"]` |
| After agent execution completes | `["agent_execution_completed"]`, `["lite_agent_execution_completed"]` |
| Only when the crew finishes | `["crew_kickoff_completed"]` |
| After every LLM call | `["llm_call_completed"]` |
| All events | `["*"]` |
<Warning>
High-frequency events such as `["*"]` or `llm_call_completed` can generate many checkpoint files and hurt performance. Use `max_checkpoints` to bound disk usage.
</Warning>
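The pruning that `max_checkpoints` implies can be illustrated with a small sketch (illustrative only, not the actual provider code):

```python
# Keep only the N newest checkpoint files; delete the rest.
# Illustrative sketch, not the actual provider code.
import tempfile
from pathlib import Path

def prune(directory: Path, keep: int) -> None:
    files = sorted(directory.glob("*.json"), key=lambda p: p.stat().st_mtime_ns)
    excess = files[:-keep] if keep else files
    for old in excess:
        old.unlink()

with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    for i in range(5):
        (root / f"cp{i}.json").write_text("{}")
    prune(root, keep=2)
    print(len(list(root.glob("*.json"))))  # 2 files remain
```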
## Manual checkpointing
For full control, register your own event handlers and call `state.checkpoint()` directly:
```python
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.llm_events import LLMCallCompletedEvent

# Synchronous handler
@crewai_event_bus.on(LLMCallCompletedEvent)
def on_llm_done(source, event, state):
    path = state.checkpoint("./my_checkpoints")
    print(f"Checkpoint saved: {path}")

# Asynchronous handler
@crewai_event_bus.on(LLMCallCompletedEvent)
async def on_llm_done_async(source, event, state):
    path = await state.acheckpoint("./my_checkpoints")
    print(f"Checkpoint saved: {path}")
```
The `state` argument is the `RuntimeState` that the event bus passes automatically when a handler accepts three parameters. You can register handlers for any event type listed in the [Event Listeners](/ko/concepts/event-listener) docs.
Checkpointing is best-effort: if writing a checkpoint fails, the error is logged but execution continues uninterrupted.


@@ -105,7 +105,7 @@ The CLI automatically detects the project type from `pyproject.toml` and
```
<Tip>
The first deployment usually takes about 1 minute.
</Tip>
</Step>
@@ -187,7 +187,7 @@ Your crew must be pushed to a GitHub repository. If you haven't created a crew yet
1. Click the "Deploy" button to start the deployment process.
2. Monitor progress via the progress bar.
3. The first deployment typically takes about 1 minute.
<Frame>
![Deploy Progress](/images/enterprise/deploy-progress.png)


@@ -197,9 +197,8 @@ CrewAI uses `uv` for dependency management and package handling
## Next Steps
<CardGroup cols={2}>
<Card title="Quickstart: Flow + agents" icon="code" href="/ko/quickstart">
Follow along to build a Flow, run a single-agent crew, and produce a report.
</Card>
<Card
title="Join the Community"


@@ -138,9 +138,9 @@ What Crews can do:
<Card
title="Quickstart"
icon="bolt"
href="/ko/quickstart"
>
Build a Flow, run a single-agent crew, and generate a report end to end.
</Card>
<Card
title="Join the Community"


@@ -325,6 +325,34 @@ asyncio.run(interactive_research())
- **User experience**: Show incremental results to reduce perceived latency
- **Live dashboards**: Build monitoring interfaces that display crew execution status
## Cancellation and resource cleanup
`CrewStreamingOutput` supports graceful cancellation that promptly stops in-flight work when a consumer disconnects.
### Async context manager
```python Code
streaming = await crew.akickoff(inputs={"topic": "AI"})
async with streaming:
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)
```
### Explicit cancellation
```python Code
streaming = await crew.akickoff(inputs={"topic": "AI"})
try:
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)
finally:
    await streaming.aclose()  # async
    # streaming.close()  # sync version
```
After cancellation, both `streaming.is_cancelled` and `streaming.is_completed` are `True`. Both `aclose()` and `close()` are idempotent.
## Important notes
- Streaming automatically enables LLM streaming for all agents in the crew


@@ -1,6 +1,6 @@
---
title: Quickstart
description: Build your first CrewAI Flow in minutes, covering orchestration, state, and an agent crew that produces a real report.
icon: rocket
mode: "wide"
---
@@ -13,375 +13,266 @@ mode: "wide"
<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{width: "100%", height: "400px"}}></iframe>
This guide builds a **Flow** that picks a research topic, runs a **single-agent crew** (a researcher with web search), and saves a **Markdown report** to disk. Flows are the recommended way to structure production apps: they own **state** and **execution order**, while **agents** do the actual work inside crew steps.
If you haven't installed CrewAI yet, follow the [installation guide](/ko/installation) first.
## Prerequisites
- A Python environment and the CrewAI CLI (see [Installation](/ko/installation))
- An LLM configured with a valid API key (see [LLMs](/ko/concepts/llms#setting-up-your-llm))
- A [Serper.dev](https://serper.dev/) API key (`SERPER_API_KEY`) for web search in this tutorial
## Build your first Flow
<Steps>
<Step title="Create a Flow project">
Create the Flow project from your terminal (the folder name uses underscores, e.g. `latest_ai_flow`).
<CodeGroup>
```shell Terminal
crewai create flow latest-ai-flow
cd latest_ai_flow
```
</CodeGroup>
This scaffolds a Flow app under `src/latest_ai_flow/` and includes a starter crew in `crews/content_crew/` that you'll turn into a **single-agent** research crew in the next steps.
</Step>
<Step title="Set up one agent in `agents.yaml`">
Replace the contents of `src/latest_ai_flow/crews/content_crew/config/agents.yaml` so only a single researcher remains. Variables like `{topic}` are filled in through `crew.kickoff(inputs=...)`.
```yaml agents.yaml
# src/latest_ai_flow/crews/content_crew/config/agents.yaml
researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover the latest developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}, known for finding the most relevant
    information and presenting it clearly.
```
</Step>
<Step title="Set up one task in `tasks.yaml`">
```yaml tasks.yaml
# src/latest_ai_flow/crews/content_crew/config/tasks.yaml
research_task:
  description: >
    Conduct thorough research about {topic}. Use web search to find
    current, reliable information. The current year is 2026.
  expected_output: >
    A markdown report with sections for key trends, notable tools or
    companies, and implications. Aim for roughly 800-1200 words. Do not
    wrap the whole document in a code fence.
  agent: researcher
  output_file: output/report.md
```
</Step>
<Step title="Wire up the crew class (`content_crew.py`)">
Make the generated crew read the YAML and attach `SerperDevTool` to the researcher.
```python content_crew.py
# src/latest_ai_flow/crews/content_crew/content_crew.py
from typing import List

from crewai import Agent, Crew, Process, Task
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool


@CrewBase
class ResearchCrew:
    """Single-agent research crew used inside the Flow."""

    agents: List[BaseAgent]
    tasks: List[Task]

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config["researcher"],  # type: ignore[index]
            verbose=True,
            tools=[SerperDevTool()],
        )

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config["research_task"],  # type: ignore[index]
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
        )
```
</Step>
<Step title="Define the Flow in `main.py`">
Connect the crew to a Flow: the `@start()` step puts the topic into **state**, and the `@listen` step runs the crew. The task's `output_file` still writes to `output/report.md`.
```python main.py
#!/usr/bin/env python
# src/latest_ai_flow/main.py
from pydantic import BaseModel

from crewai.flow import Flow, listen, start

from latest_ai_flow.crews.content_crew.content_crew import ResearchCrew


class ResearchFlowState(BaseModel):
    topic: str = ""
    report: str = ""


class LatestAiFlow(Flow[ResearchFlowState]):
    @start()
    def prepare_topic(self, crewai_trigger_payload: dict | None = None):
        if crewai_trigger_payload:
            self.state.topic = crewai_trigger_payload.get("topic", "AI Agents")
        else:
            self.state.topic = "AI Agents"
        print(f"Topic: {self.state.topic}")

    @listen(prepare_topic)
    def run_research(self):
        result = ResearchCrew().crew().kickoff(inputs={"topic": self.state.topic})
        self.state.report = result.raw
        print("Research crew finished.")

    @listen(run_research)
    def summarize(self):
        print("Report path: output/report.md")


def kickoff():
    LatestAiFlow().kickoff()


def plot():
    LatestAiFlow().plot()


if __name__ == "__main__":
    kickoff()
```
</Step>
<Step title="환경 변수 설정">
crew를 실행하기 전에 `.env` 파일에 아래 키가 환경 변수로 설정되어 있는지 확인하세요:
- [Serper.dev](https://serper.dev/) API 키: `SERPER_API_KEY=YOUR_KEY_HERE`
- 사용하려는 모델의 설정, 예: API 키. 다양한 공급자의 모델 설정은
[LLM 설정 가이드](/ko/concepts/llms#setting-up-your-llm)를 참고하세요.
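실행 전에 `.env`에 필수 키가 정의되어 있는지 간단히 점검할 수 있습니다. 아래는 CrewAI API가 아닌 설명용 스케치이며, `missing_env_keys` 함수 이름과 키 목록은 예시로 가정한 것입니다:

```python
# 설명용 예시 스크립트입니다(CrewAI API 아님).
# 필요한 키가 `.env`에 정의되어 있는지 실행 전에 확인하는 간단한 스케치입니다.
from pathlib import Path

REQUIRED_KEYS = ["SERPER_API_KEY"]  # 필요하면 모델 제공자 키를 추가하세요


def missing_env_keys(env_path: str = ".env") -> list[str]:
    """`.env`에 정의되지 않은 필수 키 목록을 반환합니다."""
    path = Path(env_path)
    if not path.exists():
        return list(REQUIRED_KEYS)
    defined = set()
    for line in path.read_text().splitlines():
        line = line.strip()
        # 주석과 빈 줄은 건너뛰고 KEY=VALUE 형태만 수집
        if line and not line.startswith("#") and "=" in line:
            defined.add(line.split("=", 1)[0].strip())
    return [key for key in REQUIRED_KEYS if key not in defined]
```

누락된 키가 있으면 crew 실행 전에 목록으로 확인할 수 있습니다.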
</Step>
<Step title="의존성 잠그고 설치하기">
- CLI 명령어로 의존성을 잠그고 설치하세요:
<CodeGroup>
```shell Terminal
crewai install
```
</CodeGroup>
- 추가 설치가 필요한 패키지가 있다면, 아래와 같이 실행하면 됩니다:
<CodeGroup>
```shell Terminal
uv add <package-name>
```
</CodeGroup>
</Step>
<Step title="crew 실행하기">
- 프로젝트 루트에서 다음 명령어로 crew를 실행하세요:
<CodeGroup>
```bash Terminal
crewai run
```
</CodeGroup>
<Tip>
패키지 이름이 `latest_ai_flow`가 아니면 `ResearchCrew` import 경로를 프로젝트 모듈 경로에 맞게 바꾸세요.
</Tip>
</Step>
<Step title="환경 변수">
프로젝트 루트의 `.env`에 다음을 설정합니다.
- `SERPER_API_KEY` — [Serper.dev](https://serper.dev/)에서 발급
- 모델 제공자 키 — [LLM 설정](/ko/concepts/llms#setting-up-your-llm) 참고
</Step>
<Step title="엔터프라이즈 대안: Crew Studio에서 생성">
CrewAI AMP 사용자는 코드를 작성하지 않고도 동일한 crew를 생성할 수 있습니다:
1. CrewAI AMP 계정에 로그인하세요([app.crewai.com](https://app.crewai.com)에서 무료 계정 만들기)
2. Crew Studio 열기
3. 구현하려는 자동화 내용을 입력하세요
4. 미션을 시각적으로 생성하고 순차적으로 연결하세요
5. 입력값을 구성하고 "Download Code" 또는 "Deploy"를 클릭하세요
![Crew Studio Quickstart](/images/enterprise/crew-studio-interface.png)
<Card title="CrewAI AMP 체험하기" icon="rocket" href="https://app.crewai.com">
CrewAI AMP에서 무료 계정을 시작하세요
</Card>
</Step>
<Step title="설치 및 실행">
<CodeGroup>
```shell Terminal
crewai install
crewai run
```
</CodeGroup>
`crewai run`은 프로젝트에 정의된 Flow 진입점을 실행합니다(crew와 동일한 명령이며, `pyproject.toml`의 프로젝트 유형은 `"flow"`입니다).
</Step>
<Step title="결과 확인">
Flow와 crew 로그가 출력되어야 합니다. 생성된 보고서는 **`output/report.md`**에서 확인하세요(발췌):
<CodeGroup>
```markdown output/report.md
# 2026년 AI 에이전트: 동향과 전망

## 요약
…

## 주요 트렌드
- **도구 사용과 오케스트레이션** — …
- **엔터프라이즈 도입** — …

## 시사점
…
```
</CodeGroup>
실제 파일은 더 길고 실시간 검색 결과를 반영합니다.
</Step>
</Steps>
## 한 번에 이해하기
1. **Flow** — `LatestAiFlow`는 `prepare_topic` → `run_research` → `summarize` 순으로 실행됩니다. 상태(`topic`, `report`)는 Flow에 있습니다.
2. **Crew** — `ResearchCrew`는 에이전트 한 명·작업 하나로 실행됩니다. 연구원이 **Serper**로 웹을 검색하고 구조화된 보고서를 씁니다.
3. **결과물** — 작업의 `output_file`이 `output/report.md`에 보고서를 씁니다.
Flow 패턴(라우팅, 지속성, human-in-the-loop)을 더 보려면 [첫 Flow 만들기](/ko/guides/flows/first-flow)와 [Flows](/ko/concepts/flows)를 참고하세요. Flow 없이 crew만 쓰려면 [Crews](/ko/concepts/crews)를, 작업 없이 단일 `Agent`의 `kickoff()`만 쓰려면 [Agents](/ko/concepts/agents#direct-agent-interaction-with-kickoff)를 참고하세요.
<Check>
축하합니다!
에이전트 crew와 저장된 보고서까지 이어진 Flow를 완성했습니다. 이제 단계·crew·도구를 더해 자신만의 agentic workflow를 확장해 나갈 수 있습니다!
</Check>
### 이름 일치
YAML 파일(`agents.yaml` 및 `tasks.yaml`)의 키는 Python 코드의 메서드 이름과 일치해야 합니다. 예를 들어, 특정 task의 agent를 `tasks.yaml` 파일에서 참조할 수 있습니다. 이 명명 일관성을 지켜야 CrewAI가 설정과 코드를 자동으로 연결할 수 있으며, 그렇지 않으면 task가 참조를 제대로 인식하지 못할 수 있습니다.
YAML 키(`researcher`, `research_task`)는 `@CrewBase` 클래스의 메서드 이름과 같아야 합니다. 전체 데코레이터 패턴은 [Crews](/ko/concepts/crews)를 참고하세요.
#### 예시 참조
<Tip>
`agents.yaml` (`email_summarizer`) 파일에서 에이전트 이름과 `crew.py`
(`email_summarizer`) 파일에서 메서드 이름이 동일하게 사용되는 점에 주목하세요.
</Tip>
```yaml agents.yaml
email_summarizer:
  role: >
    Email Summarizer
  goal: >
    Summarize emails into a concise and clear summary
  backstory: >
    You will create a 5 bullet point summary of the report
  llm: provider/model-id # Add your choice of model here
```
<Tip>
`tasks.yaml` (`email_summarizer_task`) 파일에서 태스크 이름과 `crew.py`
(`email_summarizer_task`) 파일에서 메서드 이름이 동일하게 사용되는 점에
주목하세요.
</Tip>
```yaml tasks.yaml
email_summarizer_task:
  description: >
    Summarize the email into a 5 bullet point summary
  expected_output: >
    A 5 bullet point summary of the email
  agent: email_summarizer
  context:
    - reporting_task
    - research_task
```
## 배포
로컬에서 정상 실행되고 프로젝트가 **GitHub** 저장소에 있으면 Flow를 **[CrewAI AMP](https://app.crewai.com)**에 올릴 수 있습니다. 프로젝트 루트에서:
<CodeGroup>
```bash 인증
crewai login
```
```bash 배포 생성
crewai deploy create
```
```bash 상태 및 로그
crewai deploy status
crewai deploy logs
```
```bash 코드 변경 후 반영
crewai deploy push
```
```bash 배포 목록 또는 삭제
crewai deploy list
crewai deploy remove <deployment_id>
```
</CodeGroup>
CLI를 사용하여 [CrewAI AMP](http://app.crewai.com)에 crew를 배포하는 단계별 시연은 이 영상 튜토리얼을 참고하세요.
<iframe
className="w-full aspect-video rounded-xl"
src="https://www.youtube.com/embed/3EqSV-CYDZA"
title="CrewAI Deployment Guide"
frameBorder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowFullScreen
></iframe>
<Tip>
첫 배포는 보통 **약 1분** 정도 걸립니다. 전체 사전 요건과 웹 UI 절차는 [AMP에 배포](/ko/enterprise/guides/deploy-to-amp)를 참고하세요.
</Tip>
<CardGroup cols={2}>
<Card title="Enterprise에 배포" icon="rocket" href="http://app.crewai.com">
CrewAI AMP로 시작하여 몇 번의 클릭만으로 production 환경에 crew를 배포하세요.
</Card>
<Card title="배포 가이드" icon="book" href="/ko/enterprise/guides/deploy-to-amp">
AMP 배포 단계별 안내(CLI 및 대시보드).
</Card>
<Card
  title="커뮤니티"
  icon="comments"
  href="https://community.crewai.com"
>
아이디어를 나누고 프로젝트를 공유하며 다른 CrewAI 개발자와 소통하세요.
</Card>
</CardGroup>

docs/ko/skills.mdx Normal file

@@ -0,0 +1,50 @@
---
title: Skills
description: skills.sh의 공식 레지스트리에서 crewaiinc/skills를 설치하세요. Claude Code, Cursor, Codex 등을 위한 Flow, Crew, 문서 연동 스킬.
icon: wand-magic-sparkles
mode: "wide"
---
# Skills
**한 번의 명령으로 코딩 에이전트에 CrewAI 컨텍스트를 제공하세요.**
CrewAI **Skills**는 **[skills.sh/crewaiinc/skills](https://skills.sh/crewaiinc/skills)**에 게시됩니다. `crewaiinc/skills`의 공식 레지스트리로, 개별 스킬(예: **design-agent**, **getting-started**, **design-task**, **ask-docs**), 설치 수, 감사 정보를 확인할 수 있습니다. Claude Code, Cursor, Codex 같은 코딩 에이전트에게 Flow 구성, Crew 설정, 도구 사용, CrewAI 패턴을 가르칩니다. 아래를 실행하거나 에이전트에 붙여 넣으세요.
```shell Terminal
npx skills add crewaiinc/skills
```
에이전트 워크플로에 스킬 팩이 추가되어 세션마다 프레임워크를 다시 설명하지 않아도 CrewAI 관례를 적용할 수 있습니다. 소스와 이슈는 [GitHub](https://github.com/crewAIInc/skills)에서 관리합니다.
## 에이전트가 얻는 것
- **Flows** — CrewAI 방식의 상태 기반(stateful) 앱, 단계, crew kickoff
- **Crew & 에이전트** — YAML 우선 패턴, 역할, 작업, 위임
- **도구 & 통합** — 검색, API, 일반적인 CrewAI 도구 연결
- **프로젝트 구조** — CLI 스캐폴드 및 저장소 관례와 정렬
- **최신 패턴** — 스킬이 현재 CrewAI 문서 및 권장 사항을 반영
## 이 사이트에서 더 알아보기
<CardGroup cols={2}>
<Card title="코딩 도구 & AGENTS.md" icon="terminal" href="/ko/guides/coding-tools/agents-md">
CrewAI와 `AGENTS.md`, 코딩 에이전트 워크플로 사용법.
</Card>
<Card title="빠른 시작" icon="rocket" href="/ko/quickstart">
첫 Flow와 crew를 처음부터 끝까지 구축합니다.
</Card>
<Card title="설치" icon="download" href="/ko/installation">
CrewAI CLI와 Python 패키지를 설치합니다.
</Card>
<Card title="Skills 레지스트리 (skills.sh)" icon="globe" href="https://skills.sh/crewaiinc/skills">
`crewaiinc/skills` 공식 목록—스킬, 설치 수, 감사.
</Card>
<Card title="GitHub 소스" icon="code-branch" href="https://github.com/crewAIInc/skills">
스킬 팩 소스, 업데이트, 이슈.
</Card>
</CardGroup>
### 영상: 코딩 에이전트 스킬과 CrewAI
<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameBorder="0" allowFullScreen style={{ width: "100%", height: "400px" }} />


@@ -7,6 +7,10 @@ mode: "wide"
# `CodeInterpreterTool`
<Warning>
**지원 중단:** `CodeInterpreterTool`이 `crewai-tools`에서 제거되었습니다. `Agent`의 `allow_code_execution` 및 `code_execution_mode` 파라미터도 더 이상 사용되지 않습니다. 안전하고 격리된 코드 실행을 위해 전용 샌드박스 서비스 — [E2B](https://e2b.dev) 또는 [Modal](https://modal.com) — 을 사용하세요.
</Warning>
## 설명
`CodeInterpreterTool`은 CrewAI 에이전트가 자율적으로 생성한 Python 3 코드를 실행할 수 있도록 합니다. 이 기능은 에이전트가 코드를 생성하고, 실행하며, 결과를 얻고, 그 정보를 활용하여 이후의 결정과 행동에 반영할 수 있다는 점에서 특히 유용합니다.


@@ -11,7 +11,75 @@ mode: "wide"
이를 통해 에이전트가 데이터베이스에 접근하여 목표에 따라 정보를 가져오고, 해당 정보를 사용해 응답, 보고서 또는 기타 출력물을 생성하는 다양한 워크플로우가 가능해집니다. 또한 에이전트가 자신의 목표에 맞춰 데이터베이스를 업데이트할 수 있는 기능도 제공합니다.
**주의**: 도구는 기본적으로 읽기 전용(SELECT/SHOW/DESCRIBE/EXPLAIN만 허용)으로 동작합니다. 쓰기 작업을 수행하려면 `allow_dml=True` 매개변수 또는 `CREWAI_NL2SQL_ALLOW_DML=true` 환경 변수가 필요합니다. 쓰기 접근이 활성화된 경우, 가능하면 권한이 제한된 데이터베이스 사용자나 읽기 복제본(Read-Replica)을 사용하고, 에이전트가 insert/update 쿼리를 실행해도 괜찮은지 반드시 확인하십시오.
## 읽기 전용 모드 및 DML 구성
`NL2SQLTool`은 기본적으로 **읽기 전용 모드**로 동작합니다. 추가 구성 없이 허용되는 구문 유형은 다음과 같습니다:
- `SELECT`
- `SHOW`
- `DESCRIBE`
- `EXPLAIN`
DML을 명시적으로 활성화하지 않으면 쓰기 작업(`INSERT`, `UPDATE`, `DELETE`, `DROP`, `CREATE`, `ALTER`, `TRUNCATE` 등)을 실행하려고 할 때 오류가 발생합니다.
읽기 전용 모드에서는 세미콜론이 포함된 다중 구문 쿼리(예: `SELECT 1; DROP TABLE users`)도 인젝션 공격을 방지하기 위해 차단됩니다.
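아래는 이러한 읽기 전용 검증이 어떻게 동작할 수 있는지 보여 주는 단순화된 스케치입니다. `NL2SQLTool`의 실제 내부 구현이 아니며, `validate_read_only` 함수 이름은 설명을 위해 가정한 것입니다:

```python
# 설명용 스케치입니다(NL2SQLTool의 실제 구현 아님).
# 첫 키워드 화이트리스트 검사와 다중 구문(세미콜론) 차단을 보여 줍니다.
READ_ONLY_PREFIXES = ("SELECT", "SHOW", "DESCRIBE", "EXPLAIN")


def validate_read_only(query: str) -> None:
    """읽기 전용이 아닌 쿼리는 ValueError를 발생시킵니다."""
    stripped = query.strip().rstrip(";")
    if ";" in stripped:
        # 'SELECT 1; DROP TABLE users' 같은 다중 구문 인젝션 차단
        raise ValueError("multi-statement queries are blocked")
    if not stripped.upper().startswith(READ_ONLY_PREFIXES):
        raise ValueError("write statements require allow_dml=True")
```

실제 도구는 이 외에도 쿼리 매개변수화 등 추가 방어를 적용합니다.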
### 쓰기 작업 활성화
DML(데이터 조작 언어)을 활성화하는 방법은 두 가지입니다:
**옵션 1 — 생성자 매개변수:**
```python
from crewai_tools import NL2SQLTool
nl2sql = NL2SQLTool(
    db_uri="postgresql://example@localhost:5432/test_db",
    allow_dml=True,
)
```
**옵션 2 — 환경 변수:**
```bash
CREWAI_NL2SQL_ALLOW_DML=true
```
```python
from crewai_tools import NL2SQLTool
# 환경 변수를 통해 DML 활성화
nl2sql = NL2SQLTool(db_uri="postgresql://example@localhost:5432/test_db")
```
### 사용 예시
**읽기 전용(기본값) — 분석 및 보고 워크로드에 안전:**
```python
from crewai_tools import NL2SQLTool
# SELECT/SHOW/DESCRIBE/EXPLAIN만 허용
nl2sql = NL2SQLTool(db_uri="postgresql://example@localhost:5432/test_db")
```
**DML 활성화 — 쓰기 워크로드에 필요:**
```python
from crewai_tools import NL2SQLTool
# INSERT, UPDATE, DELETE, DROP 등이 허용됨
nl2sql = NL2SQLTool(
    db_uri="postgresql://example@localhost:5432/test_db",
    allow_dml=True,
)
```
<Warning>
DML을 활성화하면 에이전트가 데이터를 수정하거나 삭제할 수 있습니다. 사용 사례에서 명시적으로 쓰기 접근이 필요한 경우에만 활성화하고, 데이터베이스 자격 증명이 최소 필요 권한으로 제한되어 있는지 확인하십시오.
</Warning>
## 요구 사항


@@ -76,3 +76,19 @@ tool = CSVSearchTool(
}
)
```
## 보안
### 경로 유효성 검사
이 도구에 제공되는 파일 경로는 현재 작업 디렉터리에 대해 검증됩니다. 작업 디렉터리 외부로 확인되는 경로는 `ValueError`로 거부됩니다.
작업 디렉터리 외부의 경로를 허용하려면 (예: 테스트 또는 신뢰할 수 있는 파이프라인), 다음 환경 변수를 설정하세요:
```shell
CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true
```
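아래는 이 경로 검증이 어떻게 동작할 수 있는지 보여 주는 단순화된 스케치입니다. `crewai_tools`의 실제 구현이 아니며, `validate_path` 함수 이름은 설명을 위해 가정한 것입니다:

```python
# 설명용 스케치입니다(crewai_tools의 실제 구현 아님).
# 경로를 해석한 뒤 기준 디렉터리 밖이면 ValueError를 발생시킵니다.
import os
from pathlib import Path


def validate_path(file_path: str, base_dir: str) -> Path:
    """base_dir 밖으로 벗어나는 경로를 거부합니다."""
    resolved = Path(file_path).resolve()
    if os.environ.get("CREWAI_TOOLS_ALLOW_UNSAFE_PATHS") == "true":
        return resolved  # 옵트아웃: 검증 생략
    base = Path(base_dir).resolve()
    if not resolved.is_relative_to(base):
        raise ValueError(f"path outside working directory: {resolved}")
    return resolved
```

심볼릭 링크를 따라가는 `resolve()` 호출 덕분에 `../` 우회도 함께 차단됩니다.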
### URL 유효성 검사
URL 입력도 검증됩니다: `file://` URI와 사설 또는 예약된 IP 범위를 대상으로 하는 요청은 서버 측 요청 위조(SSRF) 공격을 방지하기 위해 차단됩니다.
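아래는 이러한 URL 검증이 어떻게 동작할 수 있는지 보여 주는 단순화된 스케치입니다. 실제 구현이 아니며, `validate_url` 함수 이름은 설명을 위해 가정한 것입니다. 실제 SSRF 방어는 DNS 해석 이후의 IP도 검사해야 합니다:

```python
# 설명용 스케치입니다(실제 구현 아님).
# file:// URI와 사설/예약/루프백 IP를 향한 요청을 차단합니다.
import ipaddress
from urllib.parse import urlparse


def validate_url(url: str) -> None:
    parsed = urlparse(url)
    if parsed.scheme == "file":
        raise ValueError("file:// URIs are blocked")
    host = parsed.hostname or ""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # 이 스케치에서는 호스트네임을 통과시키지만,
        # 실제로는 DNS 해석 결과 IP도 검사해야 합니다.
        return
    if ip.is_private or ip.is_reserved or ip.is_loopback or ip.is_link_local:
        raise ValueError(f"private/reserved IP blocked: {ip}")
```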


@@ -68,3 +68,15 @@ tool = DirectorySearchTool(
}
)
```
## 보안
### 경로 유효성 검사
이 도구에 제공되는 디렉터리 경로는 현재 작업 디렉터리에 대해 검증됩니다. 작업 디렉터리 외부로 확인되는 경로는 `ValueError`로 거부됩니다.
작업 디렉터리 외부의 경로를 허용하려면 (예: 테스트 또는 신뢰할 수 있는 파이프라인), 다음 환경 변수를 설정하세요:
```shell
CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true
```


@@ -71,3 +71,19 @@ tool = JSONSearchTool(
}
)
```
## 보안
### 경로 유효성 검사
이 도구에 제공되는 파일 경로는 현재 작업 디렉터리에 대해 검증됩니다. 작업 디렉터리 외부로 확인되는 경로는 `ValueError`로 거부됩니다.
작업 디렉터리 외부의 경로를 허용하려면 (예: 테스트 또는 신뢰할 수 있는 파이프라인), 다음 환경 변수를 설정하세요:
```shell
CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true
```
### URL 유효성 검사
URL 입력도 검증됩니다: `file://` URI와 사설 또는 예약된 IP 범위를 대상으로 하는 요청은 서버 측 요청 위조(SSRF) 공격을 방지하기 위해 차단됩니다.


@@ -102,3 +102,19 @@ tool = PDFSearchTool(
}
)
```
## 보안
### 경로 유효성 검사
이 도구에 제공되는 파일 경로는 현재 작업 디렉터리에 대해 검증됩니다. 작업 디렉터리 외부로 확인되는 경로는 `ValueError`로 거부됩니다.
작업 디렉터리 외부의 경로를 허용하려면 (예: 테스트 또는 신뢰할 수 있는 파이프라인), 다음 환경 변수를 설정하세요:
```shell
CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true
```
### URL 유효성 검사
URL 입력도 검증됩니다: `file://` URI와 사설 또는 예약된 IP 범위를 대상으로 하는 요청은 서버 측 요청 위조(SSRF) 공격을 방지하기 위해 차단됩니다.


@@ -4,6 +4,210 @@ description: "Atualizações de produto, melhorias e correções do CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="10 abr 2026">
## v1.14.2a2
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a2)
## O que Mudou
### Funcionalidades
- Adicionar TUI de ponto de verificação com visualização em árvore, suporte a bifurcações e entradas/saídas editáveis
- Enriquecer o rastreamento de tokens LLM com tokens de raciocínio e tokens de criação de cache
- Adicionar parâmetro `from_checkpoint` aos métodos de inicialização
- Incorporar `crewai_version` em pontos de verificação com o framework de migração
- Adicionar bifurcação de ponto de verificação com rastreamento de linhagem
### Correções de Bugs
- Corrigir o encaminhamento em modo estrito para os provedores Anthropic e Bedrock
- Fortalecer NL2SQLTool com padrão somente leitura, validação de consultas e consultas parametrizadas
### Documentação
- Atualizar changelog e versão para v1.14.2a1
## Contribuidores
@alex-clawd, @github-actions[bot], @greysonlalonde, @lucasgomide
</Update>
<Update label="09 abr 2026">
## v1.14.2a1
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a1)
## O que Mudou
### Correções de Bugs
- Corrigir a emissão do evento flow_finished após a retomada do HITL
- Corrigir a versão da criptografia para 46.0.7 para resolver o CVE-2026-39892
### Refatoração
- Refatorar para usar o singleton I18N_DEFAULT compartilhado
### Documentação
- Atualizar o changelog e a versão para v1.14.1
## Contribuidores
@greysonlalonde
</Update>
<Update label="09 abr 2026">
## v1.14.1
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.1)
## O que Mudou
### Funcionalidades
- Adicionar navegador TUI de ponto de verificação assíncrono
- Adicionar aclose()/close() e gerenciador de contexto assíncrono para saídas de streaming
### Correções de Bugs
- Corrigir regex para aumentos de versão do template pyproject.toml
- Sanitizar nomes de ferramentas nos filtros do decorador de hook
- Corrigir registro de manipuladores de ponto de verificação quando CheckpointConfig é criado
- Atualizar transformers para 5.5.0 para resolver CVE-2026-1839
- Remover wrapper stdout/stderr de FilteredStream
### Documentação
- Atualizar changelog e versão para v1.14.1rc1
### Refatoração
- Substituir lista de negação codificada por exclusão dinâmica de campo BaseTool na geração de especificações
- Substituir regex por tomlkit na CLI do devtools
- Usar singleton PRINTER compartilhado
- Fazer BaseProvider um BaseModel com discriminador provider_type
## Contribuidores
@greysonlalonde, @iris-clawd, @joaomdmoura, @lorenzejay
</Update>
<Update label="09 abr 2026">
## v1.14.1rc1
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.1rc1)
## O que Mudou
### Recursos
- Adicionar navegador TUI de ponto de verificação assíncrono
- Adicionar aclose()/close() e gerenciador de contexto assíncrono para saídas de streaming
### Correções de Bugs
- Corrigir aumentos de versão do template pyproject.toml usando regex
- Sanitizar nomes de ferramentas nos filtros do decorador de hook
- Atualizar transformers para 5.5.0 para resolver CVE-2026-1839
- Registrar manipuladores de ponto de verificação quando CheckpointConfig é criado
### Refatoração
- Substituir lista de negação codificada por exclusão dinâmica de campo BaseTool na geração de especificações
- Substituir regex por tomlkit na CLI do devtools
- Usar singleton PRINTER compartilhado
- Tornar BaseProvider um BaseModel com discriminador de tipo de provedor
- Remover wrapper stdout/stderr de FilteredStream
- Remover flow/config.py não utilizado
### Documentação
- Atualizar changelog e versão para v1.14.0
## Contribuidores
@greysonlalonde, @iris-clawd, @joaomdmoura
</Update>
<Update label="07 abr 2026">
## v1.14.0
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.0)
## O que Mudou
### Recursos
- Adicionar comandos CLI de lista/informações de checkpoint
- Adicionar guardrail_type e nome para distinguir rastros
- Adicionar SqliteProvider para armazenamento de checkpoints
- Adicionar CheckpointConfig para checkpointing automático
- Implementar checkpointing de estado em tempo de execução, sistema de eventos e refatoração do executor
### Correções de Bugs
- Adicionar proteções contra SSRF e travessia de caminho
- Adicionar validação de caminho e URL às ferramentas RAG
- Excluir vetores de incorporação da serialização de memória para economizar tokens
- Garantir que o diretório de saída exista antes de escrever no modelo de fluxo
- Atualizar litellm para >=1.83.0 para resolver CVE-2026-35030
- Remover campo de indexação SEO que causava renderização de página em árabe
### Documentação
- Atualizar changelog e versão para v1.14.0
- Atualizar guias de início rápido e instalação para maior clareza
- Adicionar seção de provedores de armazenamento, exportar JsonProvider
- Adicionar guia da aba de Treinamento AMP
### Refatoração
- Limpar API de checkpoint
- Remover CodeInterpreterTool e descontinuar parâmetros de execução de código
## Contribuidores
@alex-clawd, @github-actions[bot], @greysonlalonde, @iris-clawd, @joaomdmoura, @lorenzejay, @lucasgomide
</Update>
<Update label="07 abr 2026">
## v1.14.0a4
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.0a4)
## O que Mudou
### Recursos
- Adicionar guardrail_type e nome para distinguir rastros
- Adicionar SqliteProvider para armazenamento de checkpoints
- Adicionar CheckpointConfig para checkpointing automático
- Implementar checkpointing de estado em tempo de execução, sistema de eventos e refatoração do executor
### Correções de Bugs
- Excluir vetores de incorporação da serialização de memória para economizar tokens
- Atualizar litellm para >=1.83.0 para resolver CVE-2026-35030
### Documentação
- Atualizar guias de início rápido e instalação para melhor clareza
- Adicionar seção de provedores de armazenamento e exportar JsonProvider
### Desempenho
- Usar JSONB para a coluna de dados de checkpoint
### Refatoração
- Remover CodeInterpreterTool e descontinuar parâmetros de execução de código
## Contribuidores
@alex-clawd, @github-actions[bot], @greysonlalonde, @joaomdmoura, @lorenzejay, @lucasgomide
</Update>
<Update label="06 abr 2026">
## v1.14.0a3
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.0a3)
## O que Mudou
### Documentação
- Atualizar changelog e versão para v1.14.0a2
## Contribuidores
@joaomdmoura
</Update>
<Update label="06 abr 2026">
## v1.14.0a2


@@ -304,17 +304,12 @@ multimodal_agent = Agent(
#### Execução de Código
<Warning>
`allow_code_execution` e `code_execution_mode` estão depreciados. O `CodeInterpreterTool` foi removido do `crewai-tools`. Use um serviço de sandbox dedicado como [E2B](https://e2b.dev) ou [Modal](https://modal.com) para execução segura de código.
</Warning>
- `allow_code_execution` _(depreciado)_: Anteriormente habilitava a execução de código embutida via `CodeInterpreterTool`.
- `code_execution_mode` _(depreciado)_: Anteriormente controlava o modo de execução (`"safe"` para Docker, `"unsafe"` para execução direta).
#### Funcionalidades Avançadas
@@ -565,9 +560,9 @@ agent = Agent(
### Segurança e Execução de Código
- Considere definir limites adequados de `max_execution_time` para evitar loops infinitos
<Warning>
`allow_code_execution` e `code_execution_mode` estão depreciados e o `CodeInterpreterTool` foi removido. Use um serviço de sandbox dedicado como [E2B](https://e2b.dev) ou [Modal](https://modal.com) para execução segura de código.
</Warning>
### Otimização de Performance


@@ -0,0 +1,229 @@
---
title: Checkpointing
description: Salve automaticamente o estado de execução para que crews, flows e agentes possam retomar após falhas.
icon: floppy-disk
mode: "wide"
---
<Warning>
O checkpointing está em versão inicial. As APIs podem mudar em versões futuras.
</Warning>
## Visão Geral
O checkpointing salva automaticamente o estado de execução durante uma execução. Se uma crew, flow ou agente falhar no meio da execução, você pode restaurar a partir do último checkpoint e retomar sem reexecutar o trabalho já concluído.
## Início Rápido
```python
from crewai import Crew, CheckpointConfig

crew = Crew(
    agents=[...],
    tasks=[...],
    checkpoint=True,  # usa padrões: ./.checkpoints, em task_completed
)

result = crew.kickoff()
```
Os arquivos de checkpoint são gravados em `./.checkpoints/` após cada tarefa concluída.
## Configuração
Use `CheckpointConfig` para controle total:
```python
from crewai import Crew, CheckpointConfig

crew = Crew(
    agents=[...],
    tasks=[...],
    checkpoint=CheckpointConfig(
        location="./my_checkpoints",
        on_events=["task_completed", "crew_kickoff_completed"],
        max_checkpoints=5,
    ),
)
```
### Campos do CheckpointConfig
| Campo | Tipo | Padrão | Descrição |
|:------|:-----|:-------|:----------|
| `location` | `str` | `"./.checkpoints"` | Caminho para os arquivos de checkpoint |
| `on_events` | `list[str]` | `["task_completed"]` | Tipos de evento que acionam um checkpoint |
| `provider` | `BaseProvider` | `JsonProvider()` | Backend de armazenamento |
| `max_checkpoints` | `int \| None` | `None` | Máximo de arquivos a manter; os mais antigos são removidos primeiro |
### Herança e Desativação
O campo `checkpoint` em Crew, Flow e Agent aceita `CheckpointConfig`, `True`, `False` ou `None`:
| Valor | Comportamento |
|:------|:--------------|
| `None` (padrão) | Herda do pai. Um agente herda a configuração da crew. |
| `True` | Ativa com padrões. |
| `False` | Desativação explícita. Interrompe a herança do pai. |
| `CheckpointConfig(...)` | Configuração personalizada. |
```python
crew = Crew(
    agents=[
        Agent(role="Researcher", ...),  # herda checkpoint da crew
        Agent(role="Writer", ..., checkpoint=False),  # desativado, sem checkpoints
    ],
    tasks=[...],
    checkpoint=True,
)
```
## Retomando a partir de um Checkpoint
```python
# Restaurar e retomar
crew = Crew.from_checkpoint("./my_checkpoints/20260407T120000_abc123.json")
result = crew.kickoff()  # retoma a partir da última tarefa concluída
```
A crew restaurada pula tarefas já concluídas e retoma a partir da primeira incompleta.
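Um esboço simplificado de como essa retomada pode funcionar (não é a implementação real do CrewAI; o nome `resume_order` é uma suposição ilustrativa):

```python
# Esboço ilustrativo (não a implementação real): a retomada executa
# apenas as tarefas que não constam como concluídas no checkpoint.
def resume_order(tasks: list[str], completed: set[str]) -> list[str]:
    """Retorna, em ordem, as tarefas que ainda precisam ser executadas."""
    return [task for task in tasks if task not in completed]
```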
## Funciona em Crew, Flow e Agent
### Crew
```python
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task, review_task],
    checkpoint=CheckpointConfig(location="./crew_cp"),
)
```
Gatilho padrão: `task_completed` (um checkpoint por tarefa finalizada).
### Flow
```python
from crewai.flow.flow import Flow, start, listen
from crewai import CheckpointConfig


class MyFlow(Flow):
    @start()
    def step_one(self):
        return "data"

    @listen(step_one)
    def step_two(self, data):
        return process(data)


flow = MyFlow(
    checkpoint=CheckpointConfig(
        location="./flow_cp",
        on_events=["method_execution_finished"],
    ),
)
result = flow.kickoff()

# Retomar
flow = MyFlow.from_checkpoint("./flow_cp/20260407T120000_abc123.json")
result = flow.kickoff()
```
### Agent
```python
agent = Agent(
    role="Researcher",
    goal="Research topics",
    backstory="Expert researcher",
    checkpoint=CheckpointConfig(
        location="./agent_cp",
        on_events=["lite_agent_execution_completed"],
    ),
)
result = agent.kickoff(messages=[{"role": "user", "content": "Research AI trends"}])
```
## Provedores de Armazenamento
O CrewAI inclui dois provedores de armazenamento para checkpoints.
### JsonProvider (padrão)
Grava cada checkpoint como um arquivo JSON separado.
```python
from crewai import Crew, CheckpointConfig
from crewai.state import JsonProvider

crew = Crew(
    agents=[...],
    tasks=[...],
    checkpoint=CheckpointConfig(
        location="./my_checkpoints",
        provider=JsonProvider(),
        max_checkpoints=5,
    ),
)
```
### SqliteProvider
Armazena todos os checkpoints em um único arquivo SQLite.
```python
from crewai import Crew, CheckpointConfig
from crewai.state import SqliteProvider

crew = Crew(
    agents=[...],
    tasks=[...],
    checkpoint=CheckpointConfig(
        location="./.checkpoints.db",
        provider=SqliteProvider(),
    ),
)
```
## Tipos de Evento
O campo `on_events` aceita qualquer combinação de strings de tipo de evento. Escolhas comuns:
| Caso de Uso | Eventos |
|:------------|:--------|
| Após cada tarefa (Crew) | `["task_completed"]` |
| Após cada método do flow | `["method_execution_finished"]` |
| Após execução do agente | `["agent_execution_completed"]`, `["lite_agent_execution_completed"]` |
| Apenas na conclusão da crew | `["crew_kickoff_completed"]` |
| Após cada chamada LLM | `["llm_call_completed"]` |
| Em tudo | `["*"]` |
<Warning>
Usar `["*"]` ou eventos de alta frequência como `llm_call_completed` gravará muitos arquivos de checkpoint e pode impactar o desempenho. Use `max_checkpoints` para limitar o uso de disco.
</Warning>
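Um esboço simplificado de como `max_checkpoints` pode podar os arquivos mais antigos (não é a implementação real; o nome `prune_checkpoints` é uma suposição ilustrativa):

```python
# Esboço ilustrativo, não a implementação real do CrewAI:
# mantém apenas os N arquivos de checkpoint mais recentes.
from pathlib import Path


def prune_checkpoints(location: str, max_checkpoints: int) -> list[str]:
    """Remove os checkpoints mais antigos, retornando os nomes removidos."""
    files = sorted(Path(location).glob("*.json"), key=lambda p: p.stat().st_mtime)
    removed = []
    while len(files) > max_checkpoints:
        oldest = files.pop(0)
        oldest.unlink()
        removed.append(oldest.name)
    return removed
```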
## Checkpointing Manual
Para controle total, registre seu próprio handler de evento e chame `state.checkpoint()` diretamente:
```python
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.llm_events import LLMCallCompletedEvent


# Handler síncrono
@crewai_event_bus.on(LLMCallCompletedEvent)
def on_llm_done(source, event, state):
    path = state.checkpoint("./my_checkpoints")
    print(f"Checkpoint salvo: {path}")


# Handler assíncrono
@crewai_event_bus.on(LLMCallCompletedEvent)
async def on_llm_done_async(source, event, state):
    path = await state.acheckpoint("./my_checkpoints")
    print(f"Checkpoint salvo: {path}")
```
O argumento `state` é o `RuntimeState` passado automaticamente pelo barramento de eventos quando seu handler aceita 3 parâmetros. Você pode registrar handlers em qualquer tipo de evento listado na documentação de [Event Listeners](/pt-BR/concepts/event-listener).
O checkpointing é best-effort: se uma gravação de checkpoint falhar, o erro é registrado no log, mas a execução continua sem interrupção.
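Um esboço simplificado desse comportamento best-effort (não é a implementação real; `checkpoint_best_effort` é um nome ilustrativo):

```python
# Esboço ilustrativo do comportamento best-effort (não a implementação
# real): falhas de gravação são registradas, mas não interrompem a execução.
import logging

logger = logging.getLogger("checkpoint")


def checkpoint_best_effort(save_fn) -> bool:
    """Tenta salvar um checkpoint; registra e retorna False em caso de falha."""
    try:
        save_fn()
        return True
    except Exception as exc:
        logger.warning("checkpoint falhou, execução continua: %s", exc)
        return False
```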


@@ -105,7 +105,7 @@ A CLI detecta automaticamente o tipo do seu projeto a partir do `pyproject.toml`
```
<Tip>
A primeira implantação normalmente leva cerca de 1 minuto.
</Tip>
</Step>
@@ -187,7 +187,7 @@ Você precisa enviar seu crew para um repositório do GitHub. Caso ainda não te
1. Clique no botão "Deploy" para iniciar o processo de implantação
2. Você pode monitorar o progresso pela barra de progresso
3. A primeira implantação geralmente demora de 10 a 15 minutos; as próximas serão mais rápidas
3. A primeira implantação geralmente demora cerca de 1 minuto
<Frame>
![Progresso da Implantação](/images/enterprise/deploy-progress.png)

View File

@@ -200,12 +200,11 @@ Para equipes e organizações, o CrewAI oferece opções de implantação corpor
<CardGroup cols={2}>
<Card
title="Construa Seu Primeiro Agente"
title="Início rápido: Flow + agente"
icon="code"
href="/pt-BR/quickstart"
>
Siga nosso guia de início rápido para criar seu primeiro agente CrewAI e
obter experiência prática.
Siga o guia rápido para gerar um Flow, executar um crew com um agente e produzir um relatório.
</Card>
<Card
title="Junte-se à Comunidade"

View File

@@ -140,7 +140,7 @@ Para qualquer aplicação pronta para produção, **comece com um Flow**.
icon="bolt"
href="/pt-BR/quickstart"
>
Siga nosso guia rápido para criar seu primeiro agente CrewAI e colocar a mão na massa.
Gere um Flow, execute um crew com um agente e produza um relatório ponta a ponta.
</Card>
<Card
title="Junte-se à Comunidade"

View File

@@ -325,6 +325,34 @@ O streaming é particularmente valioso para:
- **Experiência do Usuário**: Reduzir latência percebida mostrando resultados incrementais
- **Dashboards ao Vivo**: Construir interfaces de monitoramento que exibem status de execução da crew
## Cancelamento e Limpeza de Recursos
`CrewStreamingOutput` suporta cancelamento gracioso para que o trabalho em andamento pare imediatamente quando o consumidor desconecta.
### Gerenciador de Contexto Assíncrono
```python Code
streaming = await crew.akickoff(inputs={"topic": "AI"})
async with streaming:
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)
```
### Cancelamento Explícito
```python Code
streaming = await crew.akickoff(inputs={"topic": "AI"})
try:
    async for chunk in streaming:
        print(chunk.content, end="", flush=True)
finally:
    await streaming.aclose()  # assíncrono
    # streaming.close()  # equivalente síncrono
```
Após o cancelamento, `streaming.is_cancelled` e `streaming.is_completed` são ambos `True`. Tanto `aclose()` quanto `close()` são idempotentes.
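Um padrão comum é impor um tempo limite ao consumo do stream e garantir a limpeza no `finally`. O esboço abaixo usa um gerador assíncrono fictício (`fake_stream`) no lugar do `CrewStreamingOutput` real, apenas para ilustrar o padrão com `asyncio`:

```python
import asyncio

async def fake_stream():
    # Substituto ilustrativo do CrewStreamingOutput real
    for chunk in ["Olá, ", "mundo"]:
        await asyncio.sleep(0)
        yield chunk

async def consume(stream) -> str:
    pieces = []
    async for chunk in stream:
        pieces.append(chunk)
    return "".join(pieces)

async def main() -> str:
    stream = fake_stream()
    try:
        # Cancela o consumo se demorar mais que 5 segundos
        return await asyncio.wait_for(consume(stream), timeout=5.0)
    finally:
        # Geradores assíncronos também expõem aclose(); como no
        # CrewStreamingOutput, chamá-lo mais de uma vez é inofensivo
        await stream.aclose()

print(asyncio.run(main()))  # Olá, mundo
```

Com o objeto real, basta trocar `fake_stream()` por `await crew.akickoff(...)` e manter o `aclose()` no `finally`.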
## Notas Importantes
- O streaming ativa automaticamente o streaming do LLM para todos os agentes na crew

View File

@@ -1,6 +1,6 @@
---
title: Guia Rápido
description: Construa seu primeiro agente de IA com a CrewAI em menos de 5 minutos.
description: Crie seu primeiro Flow CrewAI em minutos — orquestração, estado e um crew com um agente que gera um relatório real.
icon: rocket
mode: "wide"
---
@@ -13,370 +13,266 @@ Você pode instalar com `npx skills add crewaiinc/skills`
<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{width: "100%", height: "400px"}}></iframe>
## Construa seu primeiro Agente CrewAI
Neste guia você vai **criar um Flow** que define um tópico de pesquisa, executa um **crew com um agente** (um pesquisador com busca na web) e termina com um **relatório em Markdown** no disco. Flows são a forma recomendada de estruturar apps em produção: eles controlam **estado** e **ordem de execução**, enquanto os **agentes** fazem o trabalho dentro da etapa do crew.
Vamos criar uma tripulação simples que nos ajudará a `pesquisar` e `relatar` sobre os `últimos avanços em IA` para um determinado tópico ou assunto.
Se ainda não instalou o CrewAI, siga primeiro o [guia de instalação](/pt-BR/installation).
Antes de prosseguir, certifique-se de ter concluído a instalação da CrewAI.
Se ainda não instalou, faça isso seguindo o [guia de instalação](/pt-BR/installation).
## Pré-requisitos
Siga os passos abaixo para começar a tripular! 🚣‍♂️
- Ambiente Python e a CLI do CrewAI (veja [instalação](/pt-BR/installation))
- Um LLM configurado com as chaves corretas — veja [LLMs](/pt-BR/concepts/llms#setting-up-your-llm)
- Uma chave de API do [Serper.dev](https://serper.dev/) (`SERPER_API_KEY`) para busca na web neste tutorial
## Construa seu primeiro Flow
<Steps>
<Step title="Crie sua tripulação">
Crie um novo projeto de tripulação executando o comando abaixo em seu terminal.
Isso criará um novo diretório chamado `latest-ai-development` com a estrutura básica para sua tripulação.
<Step title="Crie um projeto Flow">
No terminal, gere um projeto Flow (o nome da pasta usa sublinhados, ex.: `latest_ai_flow`):
<CodeGroup>
```shell Terminal
crewai create crew latest-ai-development
crewai create flow latest-ai-flow
cd latest_ai_flow
```
</CodeGroup>
Isso cria um app Flow em `src/latest_ai_flow/`, incluindo um crew inicial em `crews/content_crew/` que você substituirá por um crew de pesquisa **com um único agente** nos próximos passos.
</Step>
<Step title="Navegue até o novo projeto da sua tripulação">
<CodeGroup>
```shell Terminal
cd latest_ai_development
```
</CodeGroup>
</Step>
<Step title="Modifique seu arquivo `agents.yaml`">
<Tip>
Você também pode modificar os agentes conforme necessário para atender ao seu caso de uso ou copiar e colar como está para seu projeto.
Qualquer variável interpolada nos seus arquivos `agents.yaml` e `tasks.yaml`, como `{topic}`, será substituída pelo valor da variável no arquivo `main.py`.
</Tip>
<Step title="Configure um agente em `agents.yaml`">
Substitua o conteúdo de `src/latest_ai_flow/crews/content_crew/config/agents.yaml` por um único pesquisador. Variáveis como `{topic}` são preenchidas a partir de `crew.kickoff(inputs=...)`.
```yaml agents.yaml
# src/latest_ai_development/config/agents.yaml
# src/latest_ai_flow/crews/content_crew/config/agents.yaml
researcher:
role: >
Pesquisador Sênior de Dados em {topic}
Pesquisador(a) Sênior de Dados em {topic}
goal: >
Descobrir os avanços mais recentes em {topic}
Descobrir os desenvolvimentos mais recentes em {topic}
backstory: >
Você é um pesquisador experiente com talento para descobrir os últimos avanços em {topic}. Conhecido por sua habilidade em encontrar as informações mais relevantes e apresentá-las de forma clara e concisa.
reporting_analyst:
role: >
Analista de Relatórios em {topic}
goal: >
Criar relatórios detalhados com base na análise de dados e descobertas de pesquisa em {topic}
backstory: >
Você é um analista meticuloso com um olhar atento aos detalhes. É conhecido por sua capacidade de transformar dados complexos em relatórios claros e concisos, facilitando o entendimento e a tomada de decisão por parte dos outros.
Você é um pesquisador experiente que descobre os últimos avanços em {topic}.
Encontra as informações mais relevantes e apresenta tudo com clareza.
```
</Step>
<Step title="Modifique seu arquivo `tasks.yaml`">
<Step title="Configure uma tarefa em `tasks.yaml`">
```yaml tasks.yaml
# src/latest_ai_development/config/tasks.yaml
# src/latest_ai_flow/crews/content_crew/config/tasks.yaml
research_task:
description: >
Realize uma pesquisa aprofundada sobre {topic}.
Certifique-se de encontrar informações interessantes e relevantes considerando que o ano atual é 2025.
Faça uma pesquisa aprofundada sobre {topic}. Use busca na web para obter
informações atuais e confiáveis. O ano atual é 2026.
expected_output: >
Uma lista com 10 tópicos dos dados mais relevantes sobre {topic}
Um relatório em markdown com seções claras: tendências principais, ferramentas
ou empresas relevantes e implicações. Entre 800 e 1200 palavras. Sem cercas de código em volta do documento inteiro.
agent: researcher
reporting_task:
description: >
Revise o contexto obtido e expanda cada tópico em uma seção completa para um relatório.
Certifique-se de que o relatório seja detalhado e contenha todas as informações relevantes.
expected_output: >
Um relatório completo com os principais tópicos, cada um com uma seção detalhada de informações.
Formate como markdown sem usar '```'
agent: reporting_analyst
output_file: report.md
output_file: output/report.md
```
</Step>
<Step title="Modifique seu arquivo `crew.py`">
```python crew.py
# src/latest_ai_development/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool
from crewai.agents.agent_builder.base_agent import BaseAgent
<Step title="Conecte a classe do crew (`content_crew.py`)">
Aponte o crew gerado para o YAML e anexe `SerperDevTool` ao pesquisador.
```python content_crew.py
# src/latest_ai_flow/crews/content_crew/content_crew.py
from typing import List
from crewai import Agent, Crew, Process, Task
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool
@CrewBase
class LatestAiDevelopmentCrew():
"""LatestAiDevelopment crew"""
class ResearchCrew:
"""Crew de pesquisa com um agente, usado dentro do Flow."""
agents: List[BaseAgent]
tasks: List[Task]
agents_config = "config/agents.yaml"
tasks_config = "config/tasks.yaml"
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'], # type: ignore[index]
config=self.agents_config["researcher"], # type: ignore[index]
verbose=True,
tools=[SerperDevTool()]
)
@agent
def reporting_analyst(self) -> Agent:
return Agent(
config=self.agents_config['reporting_analyst'], # type: ignore[index]
verbose=True
tools=[SerperDevTool()],
)
@task
def research_task(self) -> Task:
return Task(
config=self.tasks_config['research_task'], # type: ignore[index]
)
@task
def reporting_task(self) -> Task:
return Task(
config=self.tasks_config['reporting_task'], # type: ignore[index]
output_file='output/report.md' # Este é o arquivo que conterá o relatório final.
config=self.tasks_config["research_task"], # type: ignore[index]
)
@crew
def crew(self) -> Crew:
"""Creates the LatestAiDevelopment crew"""
return Crew(
agents=self.agents, # Criado automaticamente pelo decorador @agent
tasks=self.tasks, # Criado automaticamente pelo decorador @task
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
verbose=True,
)
```
</Step>
<Step title="[Opcional] Adicione funções de pré e pós execução da tripulação">
```python crew.py
# src/latest_ai_development/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task, before_kickoff, after_kickoff
from crewai_tools import SerperDevTool
@CrewBase
class LatestAiDevelopmentCrew():
"""LatestAiDevelopment crew"""
<Step title="Defina o Flow em `main.py`">
Conecte o crew a um Flow: um passo `@start()` define o tópico no **estado** e um `@listen` executa o crew. O `output_file` da tarefa continua gravando `output/report.md`.
    @before_kickoff
    def before_kickoff_function(self, inputs):
        print(f"Before kickoff function with inputs: {inputs}")
        return inputs  # You can return the inputs or modify them as needed

    @after_kickoff
    def after_kickoff_function(self, result):
        print(f"After kickoff function with result: {result}")
        return result  # You can return the result or modify it as needed

    # ... remaining code
```
</Step>
<Step title="Fique à vontade para passar entradas personalizadas para sua tripulação">
Por exemplo, você pode passar o input `topic` para sua tripulação para personalizar a pesquisa e o relatório.
```python main.py
#!/usr/bin/env python
# src/latest_ai_development/main.py
import sys
from latest_ai_development.crew import LatestAiDevelopmentCrew
# src/latest_ai_flow/main.py
from pydantic import BaseModel
def run():
    """
    Run the crew.
    """
    inputs = {
        'topic': 'AI Agents'
    }
    LatestAiDevelopmentCrew().crew().kickoff(inputs=inputs)
from crewai.flow import Flow, listen, start
from latest_ai_flow.crews.content_crew.content_crew import ResearchCrew
class ResearchFlowState(BaseModel):
    topic: str = ""
    report: str = ""

class LatestAiFlow(Flow[ResearchFlowState]):
    @start()
    def prepare_topic(self, crewai_trigger_payload: dict | None = None):
        if crewai_trigger_payload:
            self.state.topic = crewai_trigger_payload.get("topic", "AI Agents")
        else:
            self.state.topic = "AI Agents"
        print(f"Tópico: {self.state.topic}")

    @listen(prepare_topic)
    def run_research(self):
        result = ResearchCrew().crew().kickoff(inputs={"topic": self.state.topic})
        self.state.report = result.raw
        print("Crew de pesquisa concluída.")

    @listen(run_research)
    def summarize(self):
        print("Relatório em: output/report.md")

def kickoff():
    LatestAiFlow().kickoff()

def plot():
    LatestAiFlow().plot()

if __name__ == "__main__":
    kickoff()
```
</Step>
<Step title="Defina suas variáveis de ambiente">
Antes de executar sua tripulação, certifique-se de ter as seguintes chaves configuradas como variáveis de ambiente no seu arquivo `.env`:
- Uma chave da API do [Serper.dev](https://serper.dev/): `SERPER_API_KEY=YOUR_KEY_HERE`
- A configuração do modelo de sua escolha, como uma chave de API. Veja o
[guia de configuração do LLM](/pt-BR/concepts/llms#setting-up-your-llm) para aprender como configurar modelos de qualquer provedor.
</Step>
<Step title="Trave e instale as dependências">
- Trave e instale as dependências utilizando o comando da CLI:
<CodeGroup>
```shell Terminal
crewai install
```
</CodeGroup>
- Se quiser instalar pacotes adicionais, faça isso executando:
<CodeGroup>
```shell Terminal
uv add <package-name>
```
</CodeGroup>
</Step>
<Step title="Execute sua tripulação">
- Para executar sua tripulação, rode o seguinte comando na raiz do projeto:
<CodeGroup>
```bash Terminal
crewai run
```
</CodeGroup>
<Tip>
Se o nome do pacote não for `latest_ai_flow`, ajuste o import de `ResearchCrew` para o caminho de módulo do seu projeto.
</Tip>
</Step>
<Step title="Alternativa para Empresas: Crie no Crew Studio">
Para usuários do CrewAI AMP, você pode criar a mesma tripulação sem escrever código:
<Step title="Variáveis de ambiente">
Na raiz do projeto, no arquivo `.env`, defina:
1. Faça login na sua conta CrewAI AMP (crie uma conta gratuita em [app.crewai.com](https://app.crewai.com))
2. Abra o Crew Studio
3. Digite qual automação deseja construir
4. Crie suas tarefas visualmente e conecte-as em sequência
5. Configure seus inputs e clique em "Download Code" ou "Deploy"
![Crew Studio Quickstart](/images/enterprise/crew-studio-interface.png)
<Card title="Experimente o CrewAI AMP" icon="rocket" href="https://app.crewai.com">
Comece sua conta gratuita no CrewAI AMP
</Card>
- `SERPER_API_KEY` — obtida em [Serper.dev](https://serper.dev/)
- As chaves do provedor de modelo conforme necessário — veja [configuração de LLM](/pt-BR/concepts/llms#setting-up-your-llm)
</Step>
<Step title="Veja seu relatório final">
Você verá a saída no console e o arquivo `report.md` deve ser criado na raiz do seu projeto com o relatório final.
Veja um exemplo de como o relatório deve ser:
<Step title="Instalar e executar">
<CodeGroup>
```shell Terminal
crewai install
crewai run
```
</CodeGroup>
O `crewai run` executa o ponto de entrada do Flow definido no projeto (o mesmo comando dos crews; o tipo do projeto é `"flow"` no `pyproject.toml`).
</Step>
<Step title="Confira o resultado">
Você deve ver logs do Flow e do crew. Abra **`output/report.md`** para o relatório gerado (trecho):
<CodeGroup>
```markdown output/report.md
# Relatório Abrangente sobre a Ascensão e o Impacto dos Agentes de IA em 2025
# Agentes de IA em 2026: panorama e tendências
## 1. Introduction to AI Agents
In 2025, Artificial Intelligence (AI) agents are at the forefront of innovation across various industries. As intelligent systems that can perform tasks typically requiring human cognition, AI agents are paving the way for significant advancements in operational efficiency, decision-making, and overall productivity within sectors like Human Resources (HR) and Finance. This report aims to detail the rise of AI agents, their frameworks, applications, and potential implications on the workforce.
## Resumo executivo
## 2. Benefits of AI Agents
AI agents bring numerous advantages that are transforming traditional work environments. Key benefits include:
## Principais tendências
- **Uso de ferramentas e orquestração** — …
- **Adoção empresarial** — …
- **Task Automation**: AI agents can carry out repetitive tasks such as data entry, scheduling, and payroll processing without human intervention, greatly reducing the time and resources spent on these activities.
- **Improved Efficiency**: By quickly processing large datasets and performing analyses that would take humans significantly longer, AI agents enhance operational efficiency. This allows teams to focus on strategic tasks that require higher-level thinking.
- **Enhanced Decision-Making**: AI agents can analyze trends and patterns in data, provide insights, and even suggest actions, helping stakeholders make informed decisions based on factual data rather than intuition alone.
## 3. Popular AI Agent Frameworks
Several frameworks have emerged to facilitate the development of AI agents, each with its own unique features and capabilities. Some of the most popular frameworks include:
- **Autogen**: A framework designed to streamline the development of AI agents through automation of code generation.
- **Semantic Kernel**: Focuses on natural language processing and understanding, enabling agents to comprehend user intentions better.
- **Promptflow**: Provides tools for developers to create conversational agents that can navigate complex interactions seamlessly.
- **Langchain**: Specializes in leveraging various APIs to ensure agents can access and utilize external data effectively.
- **CrewAI**: Aimed at collaborative environments, CrewAI strengthens teamwork by facilitating communication through AI-driven insights.
- **MemGPT**: Combines memory-optimized architectures with generative capabilities, allowing for more personalized interactions with users.
These frameworks empower developers to build versatile and intelligent agents that can engage users, perform advanced analytics, and execute various tasks aligned with organizational goals.
## 4. AI Agents in Human Resources
AI agents are revolutionizing HR practices by automating and optimizing key functions:
- **Recruiting**: AI agents can screen resumes, schedule interviews, and even conduct initial assessments, thus accelerating the hiring process while minimizing biases.
- **Succession Planning**: AI systems analyze employee performance data and potential, helping organizations identify future leaders and plan appropriate training.
- **Employee Engagement**: Chatbots powered by AI can facilitate feedback loops between employees and management, promoting an open culture and addressing concerns promptly.
As AI continues to evolve, HR departments leveraging these agents can realize substantial improvements in both efficiency and employee satisfaction.
## 5. AI Agents in Finance
The finance sector is seeing extensive integration of AI agents that enhance financial practices:
- **Expense Tracking**: Automated systems manage and monitor expenses, flagging anomalies and offering recommendations based on spending patterns.
- **Risk Assessment**: AI models assess credit risk and uncover potential fraud by analyzing transaction data and behavioral patterns.
- **Investment Decisions**: AI agents provide stock predictions and analytics based on historical data and current market conditions, empowering investors with informative insights.
The incorporation of AI agents into finance is fostering a more responsive and risk-aware financial landscape.
## 6. Market Trends and Investments
The growth of AI agents has attracted significant investment, especially amidst the rising popularity of chatbots and generative AI technologies. Companies and entrepreneurs are eager to explore the potential of these systems, recognizing their ability to streamline operations and improve customer engagement.
Conversely, corporations like Microsoft are taking strides to integrate AI agents into their product offerings, with enhancements to their Copilot 365 applications. This strategic move emphasizes the importance of AI literacy in the modern workplace and indicates the stabilizing of AI agents as essential business tools.
## 7. Future Predictions and Implications
Experts predict that AI agents will transform essential aspects of work life. As we look toward the future, several anticipated changes include:
- Enhanced integration of AI agents across all business functions, creating interconnected systems that leverage data from various departmental silos for comprehensive decision-making.
- Continued advancement of AI technologies, resulting in smarter, more adaptable agents capable of learning and evolving from user interactions.
- Increased regulatory scrutiny to ensure ethical use, especially concerning data privacy and employee surveillance as AI agents become more prevalent.
To stay competitive and harness the full potential of AI agents, organizations must remain vigilant about latest developments in AI technology and consider continuous learning and adaptation in their strategic planning.
## 8. Conclusion
The emergence of AI agents is undeniably reshaping the workplace landscape in 2025. With their ability to automate tasks, enhance efficiency, and improve decision-making, AI agents are critical in driving operational success. Organizations must embrace and adapt to AI developments to thrive in an increasingly digital business environment.
## Implicações
```
</CodeGroup>
O arquivo real será mais longo e refletirá resultados de busca ao vivo.
</Step>
</Steps>
## Como isso se encaixa
1. **Flow** — `LatestAiFlow` executa `prepare_topic`, depois `run_research`, depois `summarize`. O estado (`topic`, `report`) fica no Flow.
2. **Crew** — `ResearchCrew` executa uma tarefa com um agente: o pesquisador usa **Serper** na web e escreve o relatório.
3. **Artefato** — O `output_file` da tarefa grava o relatório em `output/report.md`.
Para ir além em Flows (roteamento, persistência, human-in-the-loop), veja [Construa seu primeiro Flow](/pt-BR/guides/flows/first-flow) e [Flows](/pt-BR/concepts/flows). Para crews sem Flow, veja [Crews](/pt-BR/concepts/crews). Para um único `Agent` com `kickoff()` sem tarefas, veja [Agents](/pt-BR/concepts/agents#direct-agent-interaction-with-kickoff).
<Check>
Parabéns!
Você configurou seu projeto de tripulação com sucesso e está pronto para começar a construir seus próprios fluxos de trabalho baseados em agentes!
Você tem um Flow ponta a ponta com um crew de agente e um relatório salvo — uma base sólida para novas etapas, crews ou ferramentas.
</Check>
### Observação sobre Consistência nos Nomes
### Consistência de nomes
Os nomes utilizados nos seus arquivos YAML (`agents.yaml` e `tasks.yaml`) devem corresponder aos nomes dos métodos no seu código Python.
Por exemplo, você pode referenciar o agente para tarefas específicas a partir do arquivo `tasks.yaml`.
Essa consistência de nomes permite que a CrewAI conecte automaticamente suas configurações ao seu código; caso contrário, sua tarefa não reconhecerá a referência corretamente.
As chaves do YAML (`researcher`, `research_task`) devem coincidir com os nomes dos métodos na classe `@CrewBase`. Veja [Crews](/pt-BR/concepts/crews) para o padrão completo com decoradores.
#### Exemplos de Referências
## Implantação
<Tip>
Observe como usamos o mesmo nome para o agente no arquivo `agents.yaml`
(`email_summarizer`) e no método do arquivo `crew.py` (`email_summarizer`).
</Tip>
Envie seu Flow para o **[CrewAI AMP](https://app.crewai.com)** quando rodar localmente e o projeto estiver em um repositório **GitHub**. Na raiz do projeto:
```yaml agents.yaml
email_summarizer:
role: >
Email Summarizer
goal: >
Summarize emails into a concise and clear summary
backstory: >
You will create a 5 bullet point summary of the report
llm: provider/model-id # Add your choice of model here
<CodeGroup>
```bash Autenticar
crewai login
```
<Tip>
Observe como usamos o mesmo nome para a tarefa no arquivo `tasks.yaml`
(`email_summarizer_task`) e no método no arquivo `crew.py`
(`email_summarizer_task`).
</Tip>
```yaml tasks.yaml
email_summarizer_task:
description: >
Summarize the email into a 5 bullet point summary
expected_output: >
A 5 bullet point summary of the email
agent: email_summarizer
context:
- reporting_task
- research_task
```bash Criar implantação
crewai deploy create
```
## Fazendo o Deploy da Sua Tripulação
```bash Status e logs
crewai deploy status
crewai deploy logs
```
A forma mais fácil de fazer deploy da sua tripulação em produção é através da [CrewAI AMP](http://app.crewai.com).
```bash Enviar atualizações após mudanças no código
crewai deploy push
```
Assista a este vídeo tutorial para uma demonstração detalhada de como fazer deploy da sua tripulação na [CrewAI AMP](http://app.crewai.com) usando a CLI.
```bash Listar ou remover implantações
crewai deploy list
crewai deploy remove <deployment_id>
```
</CodeGroup>
<iframe
className="w-full aspect-video rounded-xl"
src="https://www.youtube.com/embed/3EqSV-CYDZA"
title="CrewAI Deployment Guide"
frameBorder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowFullScreen
></iframe>
<Tip>
A primeira implantação costuma levar **cerca de 1 minuto**. Pré-requisitos completos e fluxo na interface web estão em [Implantar no AMP](/pt-BR/enterprise/guides/deploy-to-amp).
</Tip>
<CardGroup cols={2}>
<Card title="Deploy no Enterprise" icon="rocket" href="http://app.crewai.com">
Comece com o CrewAI AMP e faça o deploy da sua tripulação em ambiente de
produção com apenas alguns cliques.
<Card title="Guia de implantação" icon="book" href="/pt-BR/enterprise/guides/deploy-to-amp">
AMP passo a passo (CLI e painel).
</Card>
<Card
title="Junte-se à Comunidade"
title="Comunidade"
icon="comments"
href="https://community.crewai.com"
>
Participe da nossa comunidade open source para discutir ideias, compartilhar
seus projetos e conectar-se com outros desenvolvedores CrewAI.
Troque ideias, compartilhe projetos e conecte-se com outros desenvolvedores CrewAI.
</Card>
</CardGroup>

docs/pt-BR/skills.mdx Normal file
View File

@@ -0,0 +1,50 @@
---
title: Skills
description: Instale crewaiinc/skills pelo registro oficial em skills.sh—Flows, Crews e agentes alinhados à documentação para Claude Code, Cursor, Codex e outros.
icon: wand-magic-sparkles
mode: "wide"
---
# Skills
**Dê ao seu agente de código o contexto do CrewAI em um comando.**
As **Skills** do CrewAI são publicadas em **[skills.sh/crewaiinc/skills](https://skills.sh/crewaiinc/skills)**—o registro oficial de `crewaiinc/skills`, com cada skill (por exemplo **design-agent**, **getting-started**, **design-task** e **ask-docs**), estatísticas de instalação e auditorias. Ensinam agentes de código—como Claude Code, Cursor e Codex—a estruturar Flows, configurar Crews, usar ferramentas e seguir os padrões do CrewAI. Execute o comando abaixo (ou cole no seu agente).
```shell Terminal
npx skills add crewaiinc/skills
```
Isso adiciona o pacote de skills ao fluxo do seu agente para aplicar convenções do CrewAI sem precisar reexplicar o framework a cada sessão. Código-fonte e issues ficam no [GitHub](https://github.com/crewAIInc/skills).
## O que seu agente ganha
- **Flows** — apps com estado, passos e kickoffs de crew no estilo CrewAI
- **Crews e agentes** — padrões YAML-first, papéis, tarefas e delegação
- **Ferramentas e integrações** — conectar agentes a busca, APIs e ferramentas comuns
- **Layout de projeto** — alinhar com scaffolds da CLI e convenções do repositório
- **Padrões atualizados** — skills acompanham a documentação e as práticas recomendadas
## Saiba mais neste site
<CardGroup cols={2}>
<Card title="Ferramentas de codificação e AGENTS.md" icon="terminal" href="/pt-BR/guides/coding-tools/agents-md">
Como usar `AGENTS.md` e fluxos de agente de código com o CrewAI.
</Card>
<Card title="Início rápido" icon="rocket" href="/pt-BR/quickstart">
Construa seu primeiro Flow e crew ponta a ponta.
</Card>
<Card title="Instalação" icon="download" href="/pt-BR/installation">
Instale a CLI e o pacote Python do CrewAI.
</Card>
<Card title="Registro de skills (skills.sh)" icon="globe" href="https://skills.sh/crewaiinc/skills">
Listagem oficial de `crewaiinc/skills`—skills, instalações e auditorias.
</Card>
<Card title="Código no GitHub" icon="code-branch" href="https://github.com/crewAIInc/skills">
Fonte, atualizações e issues do pacote de skills.
</Card>
</CardGroup>
### Vídeo: CrewAI com coding agent skills
<iframe src="https://www.loom.com/embed/befb9f68b81f42ad8112bfdd95a780af" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style={{ width: "100%", height: "400px" }} />

View File

@@ -7,6 +7,10 @@ mode: "wide"
# `CodeInterpreterTool`
<Warning>
**Depreciado:** O `CodeInterpreterTool` foi removido do `crewai-tools`. Os parâmetros `allow_code_execution` e `code_execution_mode` do `Agent` também estão depreciados. Use um serviço de sandbox dedicado — [E2B](https://e2b.dev) ou [Modal](https://modal.com) — para execução de código segura e isolada.
</Warning>
## Descrição
O `CodeInterpreterTool` permite que agentes CrewAI executem códigos Python 3 gerados autonomamente. Essa funcionalidade é particularmente valiosa, pois permite que os agentes criem códigos, os executem, obtenham os resultados e usem essas informações para orientar decisões e ações subsequentes.

View File

@@ -11,7 +11,75 @@ Esta ferramenta é utilizada para converter linguagem natural em consultas SQL.
Isso possibilita múltiplos fluxos de trabalho, como por exemplo ter um Agente acessando o banco de dados para buscar informações com base em um objetivo e, então, usar essas informações para gerar uma resposta, relatório ou qualquer outro tipo de saída. Além disso, permite que o Agente atualize o banco de dados de acordo com seu objetivo.
**Atenção**: Certifique-se de que o Agente tenha acesso a um Read-Replica ou que seja permitido que o Agente execute consultas de inserção/atualização no banco de dados.
**Atenção**: Por padrão, a ferramenta opera em modo somente leitura (apenas SELECT/SHOW/DESCRIBE/EXPLAIN). Operações de escrita exigem `allow_dml=True` ou a variável de ambiente `CREWAI_NL2SQL_ALLOW_DML=true`. Quando o acesso de escrita estiver habilitado, certifique-se de que o Agente use um usuário de banco de dados com privilégios mínimos ou um Read-Replica sempre que possível.
## Modo Somente Leitura e Configuração de DML
O `NL2SQLTool` opera em **modo somente leitura por padrão**. Apenas os seguintes tipos de instrução são permitidos sem configuração adicional:
- `SELECT`
- `SHOW`
- `DESCRIBE`
- `EXPLAIN`
Qualquer tentativa de executar uma operação de escrita (`INSERT`, `UPDATE`, `DELETE`, `DROP`, `CREATE`, `ALTER`, `TRUNCATE`, etc.) resultará em erro, a menos que o DML seja habilitado explicitamente.
Consultas com múltiplas instruções contendo ponto e vírgula (ex.: `SELECT 1; DROP TABLE users`) também são bloqueadas no modo somente leitura para prevenir ataques de injeção.
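A título de ilustração, a lógica de um guarda somente leitura pode ser esboçada assim (simplificação didática, não a implementação real do `NL2SQLTool`; a função `is_read_only` é hipotética):

```python
READ_ONLY_PREFIXES = ("SELECT", "SHOW", "DESCRIBE", "EXPLAIN")

def is_read_only(query: str) -> bool:
    """Aceita apenas instruções de leitura e rejeita múltiplas instruções.

    Esboço simplificado: não trata ponto e vírgula dentro de literais de string.
    """
    stripped = query.strip().rstrip(";")
    if ";" in stripped:  # ex.: "SELECT 1; DROP TABLE users"
        return False
    return stripped.upper().startswith(READ_ONLY_PREFIXES)

print(is_read_only("SELECT * FROM users"))         # True
print(is_read_only("SELECT 1; DROP TABLE users"))  # False
print(is_read_only("DELETE FROM users"))           # False
```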
### Habilitando Operações de Escrita
Você pode habilitar DML (Linguagem de Manipulação de Dados) de duas formas:
**Opção 1 — parâmetro do construtor:**
```python
from crewai_tools import NL2SQLTool
nl2sql = NL2SQLTool(
db_uri="postgresql://example@localhost:5432/test_db",
allow_dml=True,
)
```
**Opção 2 — variável de ambiente:**
```bash
CREWAI_NL2SQL_ALLOW_DML=true
```
```python
from crewai_tools import NL2SQLTool
# DML habilitado via variável de ambiente
nl2sql = NL2SQLTool(db_uri="postgresql://example@localhost:5432/test_db")
```
### Exemplos de Uso
**Somente leitura (padrão) — seguro para análise e relatórios:**
```python
from crewai_tools import NL2SQLTool
# Apenas SELECT/SHOW/DESCRIBE/EXPLAIN são permitidos
nl2sql = NL2SQLTool(db_uri="postgresql://example@localhost:5432/test_db")
```
**Com DML habilitado — necessário para workloads de escrita:**
```python
from crewai_tools import NL2SQLTool
# INSERT, UPDATE, DELETE, DROP, etc. são permitidos
nl2sql = NL2SQLTool(
db_uri="postgresql://example@localhost:5432/test_db",
allow_dml=True,
)
```
<Warning>
Habilitar DML concede ao agente a capacidade de modificar ou destruir dados. Ative apenas quando o seu caso de uso exigir explicitamente acesso de escrita e certifique-se de que as credenciais do banco de dados estejam limitadas aos privilégios mínimos necessários.
</Warning>
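The read-only gate described above can be sketched in isolation. The snippet below is a hypothetical re-implementation for illustration only, not the actual `NL2SQLTool` source; it shows the two checks the docs describe: an allowlist of statement prefixes and rejection of multi-statement queries.

```python
# Hypothetical sketch of the documented read-only gate -- not the
# actual NL2SQLTool implementation.
ALLOWED_PREFIXES = ("SELECT", "SHOW", "DESCRIBE", "EXPLAIN")

def is_query_allowed(query: str, allow_dml: bool = False) -> bool:
    """Return True if the query may run under the current DML policy."""
    if allow_dml:
        return True
    stripped = query.strip().rstrip(";")
    # Block multi-statement queries such as "SELECT 1; DROP TABLE users".
    if ";" in stripped:
        return False
    return stripped.upper().startswith(ALLOWED_PREFIXES)

print(is_query_allowed("SELECT * FROM users"))               # True
print(is_query_allowed("DROP TABLE users"))                  # False
print(is_query_allowed("SELECT 1; DROP TABLE users"))        # False
print(is_query_allowed("DROP TABLE users", allow_dml=True))  # True
```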
## Requirements

View File

@@ -75,4 +75,20 @@ tool = CSVSearchTool(
),
)
)
```
## Security
### Path Validation
File paths passed to this tool are validated against the current working directory. Paths that resolve outside the working directory are rejected with a `ValueError`.
To allow paths outside the working directory (for example, in tests or trusted pipelines), set the environment variable:
```shell
CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true
```
### URL Validation
URL inputs are also validated: `file://` URIs and requests targeting private or reserved IP ranges are blocked to prevent server-side request forgery (SSRF) attacks.

View File

@@ -67,4 +67,16 @@ tool = DirectorySearchTool(
},
}
)
```
## Security
### Path Validation
Directory paths passed to this tool are validated against the current working directory. Paths that resolve outside the working directory are rejected with a `ValueError`.
To allow paths outside the working directory (for example, in tests or trusted pipelines), set the environment variable:
```shell
CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true
```

View File

@@ -73,4 +73,20 @@ tool = JSONSearchTool(
},
}
)
```
## Security
### Path Validation
File paths passed to this tool are validated against the current working directory. Paths that resolve outside the working directory are rejected with a `ValueError`.
To allow paths outside the working directory (for example, in tests or trusted pipelines), set the environment variable:
```shell
CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true
```
### URL Validation
URL inputs are also validated: `file://` URIs and requests targeting private or reserved IP ranges are blocked to prevent server-side request forgery (SSRF) attacks.

View File

@@ -101,4 +101,20 @@ tool = PDFSearchTool(
},
}
)
```
## Security
### Path Validation
File paths passed to this tool are validated against the current working directory. Paths that resolve outside the working directory are rejected with a `ValueError`.
To allow paths outside the working directory (for example, in tests or trusted pipelines), set the environment variable:
```shell
CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true
```
### URL Validation
URL inputs are also validated: `file://` URIs and requests targeting private or reserved IP ranges are blocked to prevent server-side request forgery (SSRF) attacks.

View File

@@ -9,7 +9,7 @@ authors = [
requires-python = ">=3.10, <3.14"
dependencies = [
"Pillow~=12.1.1",
"pypdf~=6.9.1",
"pypdf~=6.10.0",
"python-magic>=0.4.27",
"aiocache~=0.12.3",
"aiofiles~=24.1.0",

View File

@@ -152,4 +152,4 @@ __all__ = [
"wrap_file_source",
]
__version__ = "1.14.0a2"
__version__ = "1.14.2a2"

View File

@@ -10,8 +10,7 @@ requires-python = ">=3.10, <3.14"
dependencies = [
"pytube~=15.0.0",
"requests~=2.32.5",
"docker~=7.1.0",
"crewai==1.14.0a2",
"crewai==1.14.2a2",
"tiktoken~=0.8.0",
"beautifulsoup4~=4.13.4",
"python-docx~=1.2.0",

View File

@@ -35,9 +35,6 @@ from crewai_tools.tools.browserbase_load_tool.browserbase_load_tool import (
from crewai_tools.tools.code_docs_search_tool.code_docs_search_tool import (
CodeDocsSearchTool,
)
from crewai_tools.tools.code_interpreter_tool.code_interpreter_tool import (
CodeInterpreterTool,
)
from crewai_tools.tools.composio_tool.composio_tool import ComposioTool
from crewai_tools.tools.contextualai_create_agent_tool.contextual_create_agent_tool import (
ContextualAICreateAgentTool,
@@ -225,7 +222,6 @@ __all__ = [
"BrowserbaseLoadTool",
"CSVSearchTool",
"CodeDocsSearchTool",
"CodeInterpreterTool",
"ComposioTool",
"ContextualAICreateAgentTool",
"ContextualAIParseTool",
@@ -309,4 +305,4 @@ __all__ = [
"ZapierActionTools",
]
__version__ = "1.14.0a2"
__version__ = "1.14.2a2"

View File

@@ -154,21 +154,19 @@ class ToolSpecExtractor:
return default_value
# Dynamically computed from BaseTool so that any future fields or
# computed_fields added to BaseTool are automatically excluded from
# the generated spec — no hardcoded denylist to maintain.
# ``package_dependencies`` is not a BaseTool field but is extracted
# into its own top-level key, so it's also excluded from init_params.
_BASE_TOOL_FIELDS: set[str] = (
set(BaseTool.model_fields)
| set(BaseTool.model_computed_fields)
| {"package_dependencies"}
)
@staticmethod
def _extract_init_params(tool_class: type[BaseTool]) -> dict[str, Any]:
ignored_init_params = [
"name",
"description",
"env_vars",
"args_schema",
"description_updated",
"cache_function",
"result_as_answer",
"max_usage_count",
"current_usage_count",
"package_dependencies",
]
json_schema = tool_class.model_json_schema(
schema_generator=SchemaGenerator, mode="serialization"
)
@@ -176,8 +174,14 @@ class ToolSpecExtractor:
json_schema["properties"] = {
key: value
for key, value in json_schema["properties"].items()
if key not in ignored_init_params
if key not in ToolSpecExtractor._BASE_TOOL_FIELDS
}
if "required" in json_schema:
json_schema["required"] = [
key
for key in json_schema["required"]
if key not in ToolSpecExtractor._BASE_TOOL_FIELDS
]
return json_schema
def save_to_json(self, output_path: str) -> None:

View File

@@ -109,7 +109,7 @@ class DataTypes:
if isinstance(content, str):
try:
url = urlparse(content)
is_url = bool(url.scheme and url.netloc) or url.scheme == "file"
is_url = bool(url.scheme in ("http", "https") and url.netloc)
except Exception: # noqa: S110
pass

View File

@@ -0,0 +1,205 @@
"""Path and URL validation utilities for crewai-tools.
Provides validation for file paths and URLs to prevent unauthorized
file access and server-side request forgery (SSRF) when tools accept
user-controlled or LLM-controlled inputs at runtime.
Set CREWAI_TOOLS_ALLOW_UNSAFE_PATHS=true to bypass validation (not
recommended for production).
"""
from __future__ import annotations
import ipaddress
import logging
import os
import socket
from urllib.parse import urlparse
logger = logging.getLogger(__name__)
_UNSAFE_PATHS_ENV = "CREWAI_TOOLS_ALLOW_UNSAFE_PATHS"
def _is_escape_hatch_enabled() -> bool:
"""Check if the unsafe paths escape hatch is enabled."""
return os.environ.get(_UNSAFE_PATHS_ENV, "").lower() in ("true", "1", "yes")
# ---------------------------------------------------------------------------
# File path validation
# ---------------------------------------------------------------------------
def validate_file_path(path: str, base_dir: str | None = None) -> str:
"""Validate that a file path is safe to read.
Resolves symlinks and ``..`` components, then checks that the resolved
path falls within *base_dir* (defaults to the current working directory).
Args:
path: The file path to validate.
base_dir: Allowed root directory. Defaults to ``os.getcwd()``.
Returns:
The resolved, validated absolute path.
Raises:
ValueError: If the path escapes the allowed directory.
"""
if _is_escape_hatch_enabled():
logger.warning(
"%s is enabled — skipping file path validation for: %s",
_UNSAFE_PATHS_ENV,
path,
)
return os.path.realpath(path)
if base_dir is None:
base_dir = os.getcwd()
resolved_base = os.path.realpath(base_dir)
resolved_path = os.path.realpath(
os.path.join(resolved_base, path) if not os.path.isabs(path) else path
)
# Ensure the resolved path is within the base directory.
# When resolved_base already ends with a separator (e.g. the filesystem
# root "/"), appending os.sep would double it ("//"), so use the base
# as-is in that case.
prefix = resolved_base if resolved_base.endswith(os.sep) else resolved_base + os.sep
if not resolved_path.startswith(prefix) and resolved_path != resolved_base:
raise ValueError(
f"Path '{path}' resolves to '{resolved_path}' which is outside "
f"the allowed directory '{resolved_base}'. "
f"Set {_UNSAFE_PATHS_ENV}=true to bypass this check."
)
return resolved_path
def validate_directory_path(path: str, base_dir: str | None = None) -> str:
"""Validate that a directory path is safe to read.
Same as :func:`validate_file_path` but also checks that the path
is an existing directory.
Args:
path: The directory path to validate.
base_dir: Allowed root directory. Defaults to ``os.getcwd()``.
Returns:
The resolved, validated absolute path.
Raises:
ValueError: If the path escapes the allowed directory or is not a directory.
"""
validated = validate_file_path(path, base_dir)
if not os.path.isdir(validated):
raise ValueError(f"Path '{validated}' is not a directory.")
return validated
# ---------------------------------------------------------------------------
# URL validation
# ---------------------------------------------------------------------------
# Private and reserved IP ranges that should not be accessed
_BLOCKED_IPV4_NETWORKS = [
ipaddress.ip_network("10.0.0.0/8"),
ipaddress.ip_network("172.16.0.0/12"),
ipaddress.ip_network("192.168.0.0/16"),
ipaddress.ip_network("127.0.0.0/8"),
ipaddress.ip_network("169.254.0.0/16"), # Link-local / cloud metadata
ipaddress.ip_network("0.0.0.0/32"),
]
_BLOCKED_IPV6_NETWORKS = [
ipaddress.ip_network("::1/128"),
ipaddress.ip_network("::/128"),
ipaddress.ip_network("fc00::/7"), # Unique local addresses
ipaddress.ip_network("fe80::/10"), # Link-local IPv6
]
def _is_private_or_reserved(ip_str: str) -> bool:
"""Check if an IP address is private, reserved, or otherwise unsafe."""
try:
addr = ipaddress.ip_address(ip_str)
# Unwrap IPv4-mapped IPv6 addresses (e.g., ::ffff:127.0.0.1) to IPv4
# so they are only checked against IPv4 networks (avoids TypeError when
# an IPv4Address is compared against an IPv6Network).
if isinstance(addr, ipaddress.IPv6Address) and addr.ipv4_mapped:
addr = addr.ipv4_mapped
networks = (
_BLOCKED_IPV4_NETWORKS
if isinstance(addr, ipaddress.IPv4Address)
else _BLOCKED_IPV6_NETWORKS
)
return any(addr in network for network in networks)
except ValueError:
return True # If we can't parse, block it
def validate_url(url: str) -> str:
"""Validate that a URL is safe to fetch.
Blocks ``file://`` scheme entirely. For ``http``/``https``, resolves
DNS and checks that the target IP is not private or reserved (prevents
SSRF to internal services and cloud metadata endpoints).
Args:
url: The URL to validate.
Returns:
The validated URL string.
Raises:
ValueError: If the URL uses a blocked scheme or resolves to a
private/reserved IP address.
"""
if _is_escape_hatch_enabled():
logger.warning(
"%s is enabled — skipping URL validation for: %s",
_UNSAFE_PATHS_ENV,
url,
)
return url
parsed = urlparse(url)
# Block file:// scheme
if parsed.scheme == "file":
raise ValueError(
f"file:// URLs are not allowed: '{url}'. "
f"Use a file path instead, or set {_UNSAFE_PATHS_ENV}=true to bypass."
)
# Only allow http and https
if parsed.scheme not in ("http", "https"):
raise ValueError(
f"URL scheme '{parsed.scheme}' is not allowed. Only http and https are supported."
)
if not parsed.hostname:
raise ValueError(f"URL has no hostname: '{url}'")
# Resolve DNS and check IPs
try:
addrinfos = socket.getaddrinfo(
parsed.hostname, parsed.port or (443 if parsed.scheme == "https" else 80)
)
except socket.gaierror as exc:
raise ValueError(f"Could not resolve hostname: '{parsed.hostname}'") from exc
for _family, _, _, _, sockaddr in addrinfos:
ip_str = str(sockaddr[0])
if _is_private_or_reserved(ip_str):
raise ValueError(
f"URL '{url}' resolves to private/reserved IP {ip_str}. "
f"Access to internal networks is not allowed. "
f"Set {_UNSAFE_PATHS_ENV}=true to bypass."
)
return url
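The IP-blocking logic in `_is_private_or_reserved` can be exercised standalone. The sketch below mirrors the blocked networks listed above using only the standard library; it is an illustrative equivalent for experimentation, not an import of the module itself:

```python
import ipaddress

# Standalone equivalent of the _is_private_or_reserved check shown above.
_BLOCKED_V4 = [ipaddress.ip_network(n) for n in
               ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",
                "127.0.0.0/8", "169.254.0.0/16", "0.0.0.0/32")]
_BLOCKED_V6 = [ipaddress.ip_network(n) for n in
               ("::1/128", "::/128", "fc00::/7", "fe80::/10")]

def is_blocked(ip_str: str) -> bool:
    try:
        addr = ipaddress.ip_address(ip_str)
    except ValueError:
        return True  # unparseable input: fail closed
    # Unwrap IPv4-mapped IPv6 (e.g. ::ffff:127.0.0.1) before comparing,
    # so it is checked against the IPv4 networks.
    if isinstance(addr, ipaddress.IPv6Address) and addr.ipv4_mapped:
        addr = addr.ipv4_mapped
    nets = _BLOCKED_V4 if isinstance(addr, ipaddress.IPv4Address) else _BLOCKED_V6
    return any(addr in n for n in nets)

print(is_blocked("169.254.169.254"))   # True  (link-local / cloud metadata range)
print(is_blocked("::ffff:127.0.0.1"))  # True  (IPv4-mapped loopback)
print(is_blocked("93.184.216.34"))     # False (public address)
```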

View File

@@ -24,9 +24,6 @@ from crewai_tools.tools.browserbase_load_tool.browserbase_load_tool import (
from crewai_tools.tools.code_docs_search_tool.code_docs_search_tool import (
CodeDocsSearchTool,
)
from crewai_tools.tools.code_interpreter_tool.code_interpreter_tool import (
CodeInterpreterTool,
)
from crewai_tools.tools.composio_tool.composio_tool import ComposioTool
from crewai_tools.tools.contextualai_create_agent_tool.contextual_create_agent_tool import (
ContextualAICreateAgentTool,
@@ -210,7 +207,6 @@ __all__ = [
"BrowserbaseLoadTool",
"CSVSearchTool",
"CodeDocsSearchTool",
"CodeInterpreterTool",
"ComposioTool",
"ContextualAICreateAgentTool",
"ContextualAIParseTool",

View File

@@ -7,6 +7,8 @@ from crewai.tools import BaseTool, EnvVar
from pydantic import BaseModel, Field
import requests
from crewai_tools.security.safe_path import validate_url
class BrightDataConfig(BaseModel):
API_URL: str = "https://api.brightdata.com/request"
@@ -134,6 +136,7 @@ class BrightDataWebUnlockerTool(BaseTool):
"Content-Type": "application/json",
}
validate_url(url)
try:
response = requests.post(
self.base_url, json=payload, headers=headers, timeout=30

View File

@@ -1,6 +0,0 @@
FROM python:3.12-alpine
RUN pip install requests beautifulsoup4
# Set the working directory
WORKDIR /workspace

View File

@@ -1,95 +0,0 @@
# CodeInterpreterTool
## Description
This tool gives the Agent the ability to run Python 3 code that the Agent itself generates. The code is executed in a Docker container for secure isolation.
It is incredibly useful because it allows the Agent to generate code, run it in an isolated environment, get the result, and use it to make decisions.
## ⚠️ Security Requirements
**Docker is REQUIRED** for safe code execution. The tool will refuse to execute code without Docker to prevent security vulnerabilities.
### Why Docker is Required
Previous versions included a "restricted sandbox" fallback when Docker was unavailable. This has been **removed** due to critical security vulnerabilities:
- The Python-based sandbox could be escaped via object introspection
- Attackers could recover the original `__import__` function and access any module
- This allowed arbitrary command execution on the host system
**Docker provides real process isolation** and is the only secure way to execute untrusted code.
## Requirements
- **Docker (REQUIRED)** - Install from [docker.com](https://docs.docker.com/get-docker/)
## Installation
Install the crewai_tools package
```shell
pip install 'crewai[tools]'
```
## Example
Remember that the code must be generated by the Agent itself, and it must be Python 3 code. The first run takes some time because the Docker image needs to be built.
### Basic Usage (Docker Container - Recommended)
```python
from crewai_tools import CodeInterpreterTool
Agent(
...
tools=[CodeInterpreterTool()],
)
```
### Custom Dockerfile
If you need to pass your own Dockerfile:
```python
from crewai_tools import CodeInterpreterTool
Agent(
...
tools=[CodeInterpreterTool(user_dockerfile_path="<Dockerfile_path>")],
)
```
### Manual Docker Host Configuration
If it is difficult to connect to the Docker daemon automatically (especially for macOS users), you can set up the Docker host manually:
```python
from crewai_tools import CodeInterpreterTool
Agent(
...
tools=[CodeInterpreterTool(
user_docker_base_url="<Docker Host Base Url>",
user_dockerfile_path="<Dockerfile_path>"
)],
)
```
### Unsafe Mode (NOT RECOMMENDED)
If you absolutely cannot use Docker and **fully trust the code source**, you can use unsafe mode:
```python
from crewai_tools import CodeInterpreterTool
# WARNING: Only use with fully trusted code!
Agent(
...
tools=[CodeInterpreterTool(unsafe_mode=True)],
)
```
**⚠️ SECURITY WARNING:** `unsafe_mode=True` executes code directly on the host without any isolation. Only use this if:
- You completely trust the code being executed
- You understand the security risks
- You cannot install Docker in your environment
For production use, **always use Docker** (the default mode).

View File

@@ -1,424 +0,0 @@
"""Code Interpreter Tool for executing Python code in isolated environments.
This module provides a tool for executing Python code either in a Docker container for
safe isolation or directly in a restricted sandbox. It includes mechanisms for blocking
potentially unsafe operations and importing restricted modules.
"""
import importlib.util
import os
import subprocess
import sys
from types import ModuleType
from typing import Any, ClassVar, TypedDict
from crewai.tools import BaseTool
from docker import ( # type: ignore[import-untyped]
DockerClient,
from_env as docker_from_env,
)
from docker.errors import ImageNotFound, NotFound # type: ignore[import-untyped]
from pydantic import BaseModel, Field
from typing_extensions import Unpack
from crewai_tools.printer import Printer
class RunKwargs(TypedDict, total=False):
"""Keyword arguments for the _run method."""
code: str
libraries_used: list[str]
class CodeInterpreterSchema(BaseModel):
"""Schema for defining inputs to the CodeInterpreterTool.
This schema defines the required parameters for code execution,
including the code to run and any libraries that need to be installed.
"""
code: str = Field(
...,
description="Python3 code used to be interpreted in the Docker container. ALWAYS PRINT the final result and the output of the code",
)
libraries_used: list[str] = Field(
...,
description="List of libraries used in the code with proper installing names separated by commas. Example: numpy,pandas,beautifulsoup4",
)
class SandboxPython:
"""INSECURE: A restricted Python execution environment with known vulnerabilities.
WARNING: This class does NOT provide real security isolation and is vulnerable to
sandbox escape attacks via Python object introspection. Attackers can recover the
original __import__ function and bypass all restrictions.
DO NOT USE for untrusted code execution. Use Docker containers instead.
This class attempts to restrict access to dangerous modules and built-in functions
but provides no real security boundary against a motivated attacker.
"""
BLOCKED_MODULES: ClassVar[set[str]] = {
"os",
"sys",
"subprocess",
"shutil",
"importlib",
"inspect",
"tempfile",
"sysconfig",
"builtins",
}
UNSAFE_BUILTINS: ClassVar[set[str]] = {
"exec",
"eval",
"open",
"compile",
"input",
"globals",
"locals",
"vars",
"help",
"dir",
}
@staticmethod
def restricted_import(
name: str,
custom_globals: dict[str, Any] | None = None,
custom_locals: dict[str, Any] | None = None,
fromlist: list[str] | None = None,
level: int = 0,
) -> ModuleType:
"""A restricted import function that blocks importing of unsafe modules.
Args:
name: The name of the module to import.
custom_globals: Global namespace to use.
custom_locals: Local namespace to use.
fromlist: List of items to import from the module.
level: The level value passed to __import__.
Returns:
The imported module if allowed.
Raises:
ImportError: If the module is in the blocked modules list.
"""
if name in SandboxPython.BLOCKED_MODULES:
raise ImportError(f"Importing '{name}' is not allowed.")
return __import__(name, custom_globals, custom_locals, fromlist or (), level)
@staticmethod
def safe_builtins() -> dict[str, Any]:
"""Creates a dictionary of built-in functions with unsafe ones removed.
Returns:
A dictionary of safe built-in functions and objects.
"""
import builtins
safe_builtins = {
k: v
for k, v in builtins.__dict__.items()
if k not in SandboxPython.UNSAFE_BUILTINS
}
safe_builtins["__import__"] = SandboxPython.restricted_import
return safe_builtins
@staticmethod
def exec(code: str, locals_: dict[str, Any]) -> None:
"""Executes Python code in a restricted environment.
Args:
code: The Python code to execute as a string.
locals_: A dictionary that will be used for local variable storage.
"""
exec(code, {"__builtins__": SandboxPython.safe_builtins()}, locals_) # noqa: S102
class CodeInterpreterTool(BaseTool):
"""A tool for executing Python code in isolated environments.
This tool provides functionality to run Python code either in a Docker container
for safe isolation or directly in a restricted sandbox. It can handle installing
Python packages and executing arbitrary Python code.
"""
name: str = "Code Interpreter"
description: str = "Interprets Python3 code strings with a final print statement."
args_schema: type[BaseModel] = CodeInterpreterSchema
default_image_tag: str = "code-interpreter:latest"
code: str | None = None
user_dockerfile_path: str | None = None
user_docker_base_url: str | None = None
unsafe_mode: bool = False
@staticmethod
def _get_installed_package_path() -> str:
"""Gets the installation path of the crewai_tools package.
Returns:
The directory path where the package is installed.
Raises:
RuntimeError: If the package cannot be found.
"""
spec = importlib.util.find_spec("crewai_tools")
if spec is None or spec.origin is None:
raise RuntimeError("Cannot find crewai_tools package installation path")
return os.path.dirname(spec.origin)
def _verify_docker_image(self) -> None:
"""Verifies if the Docker image is available or builds it if necessary.
Checks if the required Docker image exists. If not, builds it using either a
user-provided Dockerfile or the default one included with the package.
Raises:
FileNotFoundError: If the Dockerfile cannot be found.
"""
client = (
docker_from_env()
if self.user_docker_base_url is None
else DockerClient(base_url=self.user_docker_base_url)
)
try:
client.images.get(self.default_image_tag)
except ImageNotFound:
if self.user_dockerfile_path and os.path.exists(self.user_dockerfile_path):
dockerfile_path = self.user_dockerfile_path
else:
package_path = self._get_installed_package_path()
dockerfile_path = os.path.join(
package_path, "tools/code_interpreter_tool"
)
if not os.path.exists(dockerfile_path):
raise FileNotFoundError(
f"Dockerfile not found in {dockerfile_path}"
) from None
client.images.build(
path=dockerfile_path,
tag=self.default_image_tag,
rm=True,
)
def _run(self, **kwargs: Unpack[RunKwargs]) -> str:
"""Runs the code interpreter tool with the provided arguments.
Args:
**kwargs: Keyword arguments that should include 'code' and 'libraries_used'.
Returns:
The output of the executed code as a string.
"""
code: str | None = kwargs.get("code", self.code)
libraries_used: list[str] = kwargs.get("libraries_used", [])
if not code:
return "No code provided to execute."
if self.unsafe_mode:
return self.run_code_unsafe(code, libraries_used)
return self.run_code_safety(code, libraries_used)
@staticmethod
def _install_libraries(container: Any, libraries: list[str]) -> None:
"""Installs required Python libraries in the Docker container.
Args:
container: The Docker container where libraries will be installed.
libraries: A list of library names to install using pip.
"""
for library in libraries:
container.exec_run(["pip", "install", library])
def _init_docker_container(self) -> Any:
"""Initializes and returns a Docker container for code execution.
Stops and removes any existing container with the same name before creating
a new one. Maps the current working directory to /workspace in the container.
Returns:
A Docker container object ready for code execution.
"""
container_name = "code-interpreter"
client = docker_from_env()
current_path = os.getcwd()
# Check if the container is already running
try:
existing_container = client.containers.get(container_name)
existing_container.stop()
existing_container.remove()
except NotFound:
pass # Container does not exist, no need to remove
return client.containers.run(
self.default_image_tag,
detach=True,
tty=True,
working_dir="/workspace",
name=container_name,
volumes={current_path: {"bind": "/workspace", "mode": "rw"}},
)
@staticmethod
def _check_docker_available() -> bool:
"""Checks if Docker is available and running on the system.
Attempts to run the 'docker info' command to verify Docker availability.
Prints appropriate messages if Docker is not installed or not running.
Returns:
True if Docker is available and running, False otherwise.
"""
try:
subprocess.run(
["docker", "info"], # noqa: S607
check=True,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
timeout=1,
)
return True
except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
Printer.print(
"Docker is installed but not running or inaccessible.",
color="bold_purple",
)
return False
except FileNotFoundError:
Printer.print("Docker is not installed", color="bold_purple")
return False
def run_code_safety(self, code: str, libraries_used: list[str]) -> str:
"""Runs code in the safest available environment.
Requires Docker to be available for secure code execution. Fails closed
if Docker is not available to prevent sandbox escape vulnerabilities.
Args:
code: The Python code to execute as a string.
libraries_used: A list of Python library names to install before execution.
Returns:
The output of the executed code as a string.
Raises:
RuntimeError: If Docker is not available, as the restricted sandbox
is vulnerable to escape attacks and should not be used
for untrusted code execution.
"""
if self._check_docker_available():
return self.run_code_in_docker(code, libraries_used)
error_msg = (
"Docker is required for safe code execution but is not available. "
"The restricted sandbox fallback has been removed due to security vulnerabilities "
"that allow sandbox escape via Python object introspection. "
"Please install Docker (https://docs.docker.com/get-docker/) or use unsafe_mode=True "
"if you trust the code source and understand the security risks."
)
Printer.print(error_msg, color="bold_red")
raise RuntimeError(error_msg)
def run_code_in_docker(self, code: str, libraries_used: list[str]) -> str:
"""Runs Python code in a Docker container for safe isolation.
Creates a Docker container, installs the required libraries, executes the code,
and then cleans up by stopping and removing the container.
Args:
code: The Python code to execute as a string.
libraries_used: A list of Python library names to install before execution.
Returns:
The output of the executed code as a string, or an error message if execution failed.
"""
Printer.print("Running code in Docker environment", color="bold_blue")
self._verify_docker_image()
container = self._init_docker_container()
self._install_libraries(container, libraries_used)
exec_result: Any = container.exec_run(["python3", "-c", code])
container.stop()
container.remove()
if exec_result.exit_code != 0:
return f"Something went wrong while running the code: \n{exec_result.output.decode('utf-8')}"
return str(exec_result.output.decode("utf-8"))
@staticmethod
def run_code_in_restricted_sandbox(code: str) -> str:
"""DEPRECATED AND INSECURE: Runs Python code in a restricted sandbox environment.
WARNING: This method is vulnerable to sandbox escape attacks via Python object
introspection and should NOT be used for untrusted code execution. It has been
deprecated and is only kept for backward compatibility with trusted code.
The "restricted" environment can be bypassed by attackers who can:
- Use object graph introspection to recover the original __import__ function
- Access any Python module including os, subprocess, sys, etc.
- Execute arbitrary commands on the host system
Use run_code_in_docker() for secure code execution, or run_code_unsafe()
if you explicitly acknowledge the security risks.
Args:
code: The Python code to execute as a string.
Returns:
The value of the 'result' variable from the executed code,
or an error message if execution failed.
"""
Printer.print(
"WARNING: Running code in INSECURE restricted sandbox (vulnerable to escape attacks)",
color="bold_red",
)
exec_locals: dict[str, Any] = {}
try:
SandboxPython.exec(code=code, locals_=exec_locals)
return exec_locals.get("result", "No result variable found.") # type: ignore[no-any-return]
except Exception as e:
return f"An error occurred: {e!s}"
@staticmethod
def run_code_unsafe(code: str, libraries_used: list[str]) -> str:
"""Runs code directly on the host machine without any safety restrictions.
WARNING: This mode is unsafe and should only be used in trusted environments
with code from trusted sources.
Args:
code: The Python code to execute as a string.
libraries_used: A list of Python library names to install before execution.
Returns:
The value of the 'result' variable from the executed code,
or an error message if execution failed.
"""
Printer.print("WARNING: Running code in unsafe mode", color="bold_magenta")
# Install libraries on the host machine
for library in libraries_used:
subprocess.run( # noqa: S603
[sys.executable, "-m", "pip", "install", library], check=False
)
# Execute the code
try:
exec_locals: dict[str, Any] = {}
exec(code, {}, exec_locals) # noqa: S102
return exec_locals.get("result", "No result variable found.") # type: ignore[no-any-return]
except Exception as e:
return f"An error occurred: {e!s}"

View File

@@ -3,6 +3,8 @@ from typing import Any
from crewai.tools import BaseTool
from pydantic import BaseModel, Field
from crewai_tools.security.safe_path import validate_file_path
class ContextualAICreateAgentSchema(BaseModel):
"""Schema for contextual create agent tool."""
@@ -47,6 +49,7 @@ class ContextualAICreateAgentTool(BaseTool):
document_paths: list[str],
) -> str:
"""Create a complete RAG pipeline with documents."""
resolved_paths = [validate_file_path(doc_path) for doc_path in document_paths]
try:
import os
@@ -56,7 +59,7 @@ class ContextualAICreateAgentTool(BaseTool):
# Upload documents
document_ids = []
for doc_path in document_paths:
for doc_path in resolved_paths:
if not os.path.exists(doc_path):
raise FileNotFoundError(f"Document not found: {doc_path}")

View File

@@ -1,6 +1,8 @@
from crewai.tools import BaseTool
from pydantic import BaseModel, Field
from crewai_tools.security.safe_path import validate_file_path
class ContextualAIParseSchema(BaseModel):
"""Schema for contextual parse tool."""
@@ -45,6 +47,7 @@ class ContextualAIParseTool(BaseTool):
"""Parse a document using Contextual AI's parser."""
if output_types is None:
output_types = ["markdown-per-page"]
file_path = validate_file_path(file_path)
try:
import json
import os

View File

@@ -4,6 +4,8 @@ from typing import Any
from crewai.tools import BaseTool
from pydantic import BaseModel, Field
from crewai_tools.security.safe_path import validate_directory_path
class FixedDirectoryReadToolSchema(BaseModel):
"""Input for DirectoryReadTool."""
@@ -39,6 +41,7 @@ class DirectoryReadTool(BaseTool):
if directory is None:
raise ValueError("Directory must be provided.")
directory = validate_directory_path(directory)
if directory[-1] == "/":
directory = directory[:-1]
files_list = [

View File

@@ -3,6 +3,7 @@ from typing import Any
from pydantic import BaseModel, Field
from crewai_tools.rag.data_types import DataType
from crewai_tools.security.safe_path import validate_directory_path
from crewai_tools.tools.rag.rag_tool import RagTool
@@ -37,6 +38,7 @@ class DirectorySearchTool(RagTool):
self._generate_description()
def add(self, directory: str) -> None: # type: ignore[override]
directory = validate_directory_path(directory)
super().add(directory, data_type=DataType.DIRECTORY)
def _run( # type: ignore[override]

View File

@@ -3,6 +3,8 @@ from typing import Any
from crewai.tools import BaseTool
from pydantic import BaseModel, Field
from crewai_tools.security.safe_path import validate_file_path
class FileReadToolSchema(BaseModel):
"""Input for FileReadTool."""
@@ -76,6 +78,7 @@ class FileReadTool(BaseTool):
if file_path is None:
return "Error: No file path provided. Please provide a file path either in the constructor or as an argument."
file_path = validate_file_path(file_path)
try:
with open(file_path, "r") as file:
if start_line == 1 and line_count is None:

View File

@@ -5,6 +5,8 @@ import zipfile
from crewai.tools import BaseTool
from pydantic import BaseModel, Field
from crewai_tools.security.safe_path import validate_file_path
class FileCompressorToolInput(BaseModel):
"""Input schema for FileCompressorTool."""
@@ -40,12 +42,15 @@ class FileCompressorTool(BaseTool):
overwrite: bool = False,
format: str = "zip",
) -> str:
input_path = validate_file_path(input_path)
if not os.path.exists(input_path):
return f"Input path '{input_path}' does not exist."
if not output_path:
output_path = self._generate_output_path(input_path, format)
output_path = validate_file_path(output_path)
format_extension = {
"zip": ".zip",
"tar": ".tar",

View File

@@ -5,6 +5,8 @@ from typing import Any
from crewai.tools import BaseTool, EnvVar
from pydantic import BaseModel, ConfigDict, Field, PrivateAttr
from crewai_tools.security.safe_path import validate_url
try:
from firecrawl import FirecrawlApp # type: ignore[import-untyped]
@@ -106,6 +108,7 @@ class FirecrawlCrawlWebsiteTool(BaseTool):
if not self._firecrawl:
raise RuntimeError("FirecrawlApp not properly initialized")
url = validate_url(url)
return self._firecrawl.crawl(url=url, poll_interval=2, **self.config)

View File

@@ -5,6 +5,8 @@ from typing import Any
from crewai.tools import BaseTool, EnvVar
from pydantic import BaseModel, ConfigDict, Field, PrivateAttr
from crewai_tools.security.safe_path import validate_url
try:
from firecrawl import FirecrawlApp # type: ignore[import-untyped]
@@ -106,6 +108,7 @@ class FirecrawlScrapeWebsiteTool(BaseTool):
if not self._firecrawl:
raise RuntimeError("FirecrawlApp not properly initialized")
url = validate_url(url)
return self._firecrawl.scrape(url=url, **self.config)

View File

@@ -4,6 +4,8 @@ from typing import Any, Literal
from crewai.tools import BaseTool, EnvVar
from pydantic import BaseModel, Field
from crewai_tools.security.safe_path import validate_url
class HyperbrowserLoadToolSchema(BaseModel):
url: str = Field(description="Website URL")
@@ -119,6 +121,7 @@ class HyperbrowserLoadTool(BaseTool):
) from e
params = self._prepare_params(params)
url = validate_url(url)
if operation == "scrape":
scrape_params = StartScrapeJobParams(url=url, **params)

View File

@@ -4,6 +4,8 @@ from crewai.tools import BaseTool
from pydantic import BaseModel, Field
import requests
from crewai_tools.security.safe_path import validate_url
class JinaScrapeWebsiteToolInput(BaseModel):
"""Input schema for JinaScrapeWebsiteTool."""
@@ -45,6 +47,7 @@ class JinaScrapeWebsiteTool(BaseTool):
"Website URL must be provided either during initialization or execution"
)
url = validate_url(url)
response = requests.get(
f"https://r.jina.ai/{url}", headers=self.headers, timeout=15
)

View File

@@ -1,7 +1,17 @@
from collections.abc import Iterator
import logging
import os
import re
from typing import Any
try:
from typing import Self
except ImportError:
from typing_extensions import Self
from crewai.tools import BaseTool
from pydantic import BaseModel, Field
from pydantic import BaseModel, Field, model_validator
try:
@@ -12,6 +22,186 @@ try:
except ImportError:
SQLALCHEMY_AVAILABLE = False
logger = logging.getLogger(__name__)
# Commands allowed in read-only mode
# NOTE: WITH is intentionally excluded — writable CTEs start with WITH, so the
# CTE body must be inspected separately (see _validate_statement).
_READ_ONLY_COMMANDS = {"SELECT", "SHOW", "DESCRIBE", "DESC", "EXPLAIN"}
# Commands that mutate state and are blocked by default
_WRITE_COMMANDS = {
"INSERT",
"UPDATE",
"DELETE",
"DROP",
"ALTER",
"CREATE",
"TRUNCATE",
"GRANT",
"REVOKE",
"EXEC",
"EXECUTE",
"CALL",
"MERGE",
"REPLACE",
"UPSERT",
"LOAD",
"COPY",
"VACUUM",
"ANALYZE",
"ANALYSE",
"REINDEX",
"CLUSTER",
"REFRESH",
"COMMENT",
"SET",
"RESET",
}
# Subset of write commands that can realistically appear *inside* a CTE body.
# Narrower than _WRITE_COMMANDS to avoid false positives on identifiers like
# ``comment``, ``set``, or ``reset`` which are common column/table names.
_CTE_WRITE_INDICATORS = {
"INSERT",
"UPDATE",
"DELETE",
"DROP",
"ALTER",
"CREATE",
"TRUNCATE",
"MERGE",
}
_AS_PAREN_RE = re.compile(r"\bAS\s*\(", re.IGNORECASE)
def _iter_as_paren_matches(stmt: str) -> Iterator[re.Match[str]]:
"""Yield regex matches for ``AS\\s*(`` outside of string literals."""
# Build a set of character positions that are inside string literals.
in_string: set[int] = set()
i = 0
while i < len(stmt):
if stmt[i] == "'":
start = i
end = _skip_string_literal(stmt, i)
in_string.update(range(start, end))
i = end
else:
i += 1
for m in _AS_PAREN_RE.finditer(stmt):
if m.start() not in in_string:
yield m
def _detect_writable_cte(stmt: str) -> str | None:
"""Return the first write command inside a CTE body, or None.
Instead of tokenizing the whole statement (which falsely matches column
names like ``comment``), this walks through parenthesized CTE bodies and
checks only the *first keyword after* an opening ``AS (`` for a write
command. Uses a regex to handle any whitespace (spaces, tabs, newlines)
between ``AS`` and ``(``. Skips matches inside string literals.
"""
for m in _iter_as_paren_matches(stmt):
body = stmt[m.end() :].lstrip()
first_word = body.split()[0].upper().strip("()") if body.split() else ""
if first_word in _CTE_WRITE_INDICATORS:
return first_word
return None
def _skip_string_literal(stmt: str, pos: int) -> int:
"""Skip past a string literal starting at pos (single-quoted).
Handles escaped quotes ('') inside the literal.
Returns the index after the closing quote.
"""
quote_char = stmt[pos]
i = pos + 1
while i < len(stmt):
if stmt[i] == quote_char:
# Check for escaped quote ('')
if i + 1 < len(stmt) and stmt[i + 1] == quote_char:
i += 2
continue
return i + 1
i += 1
return i # Unterminated literal — return end
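The doubled-quote escape handling can be exercised in isolation. A minimal sketch mirroring the logic of `_skip_string_literal` (hypothetical standalone name, not an import of the private helper):

```python
def skip_string_literal(stmt: str, pos: int) -> int:
    """Return the index just past the single-quoted literal starting at pos.

    SQL escapes a quote inside a literal by doubling it, e.g. 'it''s',
    so a doubled quote must not be treated as the closing quote.
    """
    i = pos + 1
    while i < len(stmt):
        if stmt[i] == "'":
            if i + 1 < len(stmt) and stmt[i + 1] == "'":
                i += 2  # doubled quote is an escape, stay inside the literal
                continue
            return i + 1  # index just past the closing quote
        i += 1
    return i  # unterminated literal, return end of string

s = "SELECT 'it''s' FROM t"
end = skip_string_literal(s, s.index("'"))
print(s[s.index("'"):end])  # 'it''s'
```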
def _find_matching_close_paren(stmt: str, start: int) -> int:
"""Find the matching close paren, skipping string literals."""
depth = 1
i = start
while i < len(stmt) and depth > 0:
ch = stmt[i]
if ch == "'":
i = _skip_string_literal(stmt, i)
continue
if ch == "(":
depth += 1
elif ch == ")":
depth -= 1
i += 1
return i
def _extract_main_query_after_cte(stmt: str) -> str | None:
"""Extract the main (outer) query that follows all CTE definitions.
For ``WITH cte AS (SELECT 1) DELETE FROM users``, returns ``DELETE FROM users``.
Returns None if no main query is found after the last CTE body.
Handles parentheses inside string literals (e.g., ``SELECT '(' FROM t``).
"""
last_cte_end = 0
for m in _iter_as_paren_matches(stmt):
last_cte_end = _find_matching_close_paren(stmt, m.end())
if last_cte_end > 0:
remainder = stmt[last_cte_end:].strip().lstrip(",").strip()
if remainder:
return remainder
return None
def _resolve_explain_command(stmt: str) -> str | None:
"""Resolve the underlying command from an EXPLAIN [ANALYZE] [VERBOSE] statement.
Returns the real command (e.g., 'DELETE') if ANALYZE is present, else None.
Handles both space-separated and parenthesized syntax.
"""
rest = stmt.strip()[len("EXPLAIN") :].strip()
if not rest:
return None
analyze_found = False
explain_opts = {"ANALYZE", "ANALYSE", "VERBOSE"}
if rest.startswith("("):
close = rest.find(")")
if close != -1:
options_str = rest[1:close].upper()
analyze_found = any(
opt.strip() in ("ANALYZE", "ANALYSE") for opt in options_str.split(",")
)
rest = rest[close + 1 :].strip()
else:
while rest:
first_opt = rest.split()[0].upper().rstrip(";") if rest.split() else ""
if first_opt in ("ANALYZE", "ANALYSE"):
analyze_found = True
if first_opt not in explain_opts:
break
rest = rest[len(first_opt) :].strip()
if analyze_found and rest:
return rest.split()[0].upper().rstrip(";")
return None
class NL2SQLToolInput(BaseModel):
sql_query: str = Field(
@@ -21,20 +211,70 @@ class NL2SQLToolInput(BaseModel):
class NL2SQLTool(BaseTool):
"""Tool that converts natural language to SQL and executes it against a database.
By default the tool operates in **read-only mode**: only SELECT, SHOW,
DESCRIBE, EXPLAIN, and read-only CTEs (WITH … SELECT) are permitted. Write
operations (INSERT, UPDATE, DELETE, DROP, ALTER, CREATE, TRUNCATE, …) are
blocked unless ``allow_dml=True`` is set explicitly or the environment
variable ``CREWAI_NL2SQL_ALLOW_DML=true`` is present.
Writable CTEs (``WITH d AS (DELETE …) SELECT …``) and
``EXPLAIN ANALYZE <write-stmt>`` are treated as write operations and are
blocked in read-only mode.
The ``_fetch_all_available_columns`` helper uses parameterised queries so
that table names coming from the database catalogue cannot be used as an
injection vector.
"""
name: str = "NL2SQLTool"
description: str = "Converts natural language to SQL queries and executes them."
description: str = (
"Converts natural language to SQL queries and executes them against a "
"database. Read-only by default — only SELECT/SHOW/DESCRIBE/EXPLAIN "
"queries (and read-only CTEs) are allowed unless configured with "
"allow_dml=True."
)
db_uri: str = Field(
title="Database URI",
description="The URI of the database to connect to.",
)
allow_dml: bool = Field(
default=False,
title="Allow DML",
description=(
"When False (default) only read statements are permitted. "
"Set to True to allow INSERT/UPDATE/DELETE/DROP and other "
"write operations."
),
)
tables: list[dict[str, Any]] = Field(default_factory=list)
columns: dict[str, list[dict[str, Any]] | str] = Field(default_factory=dict)
args_schema: type[BaseModel] = NL2SQLToolInput
@model_validator(mode="after")
def _apply_env_override(self) -> Self:
"""Allow CREWAI_NL2SQL_ALLOW_DML=true to override allow_dml at runtime."""
if os.environ.get("CREWAI_NL2SQL_ALLOW_DML", "").strip().lower() == "true":
if not self.allow_dml:
logger.warning(
"NL2SQLTool: CREWAI_NL2SQL_ALLOW_DML env var is set — "
"DML/DDL operations are enabled. Ensure this is intentional."
)
self.allow_dml = True
return self
def model_post_init(self, __context: Any) -> None:
if not SQLALCHEMY_AVAILABLE:
raise ImportError(
"sqlalchemy is not installed. Please install it with `pip install crewai-tools[sqlalchemy]`"
"sqlalchemy is not installed. Please install it with "
"`pip install crewai-tools[sqlalchemy]`"
)
if self.allow_dml:
logger.warning(
"NL2SQLTool: allow_dml=True — write operations (INSERT/UPDATE/"
"DELETE/DROP/…) are permitted. Use with caution."
)
data: dict[str, list[dict[str, Any]] | str] = {}
@@ -50,42 +290,216 @@ class NL2SQLTool(BaseTool):
self.tables = tables
self.columns = data
# ------------------------------------------------------------------
# Query validation
# ------------------------------------------------------------------
def _validate_query(self, sql_query: str) -> None:
"""Raise ValueError if *sql_query* is not permitted under the current config.
Splits the query on semicolons and validates each statement
independently. When ``allow_dml=False`` (the default), multi-statement
queries are rejected outright to prevent ``SELECT 1; DROP TABLE users``
style bypasses. When ``allow_dml=True`` every statement is checked and
a warning is emitted for write operations.
"""
statements = [s.strip() for s in sql_query.split(";") if s.strip()]
if not statements:
raise ValueError("NL2SQLTool received an empty SQL query.")
if not self.allow_dml and len(statements) > 1:
raise ValueError(
"NL2SQLTool blocked a multi-statement query in read-only mode. "
"Semicolons are not permitted when allow_dml=False."
)
for stmt in statements:
self._validate_statement(stmt)
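The multi-statement rejection can be shown on its own. A minimal sketch of just that gate (the real method then validates each surviving statement individually):

```python
def validate_read_only(sql_query: str) -> list[str]:
    """Split on semicolons; reject multi-statement input in read-only mode."""
    statements = [s.strip() for s in sql_query.split(";") if s.strip()]
    if not statements:
        raise ValueError("empty SQL query")
    if len(statements) > 1:
        # Blocks "SELECT 1; DROP TABLE users" style bypasses outright.
        raise ValueError("multi-statement queries are blocked in read-only mode")
    return statements

print(validate_read_only("SELECT 1"))  # ['SELECT 1']
try:
    validate_read_only("SELECT 1; DROP TABLE users")
except ValueError as e:
    print(e)  # multi-statement queries are blocked in read-only mode
```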
def _validate_statement(self, stmt: str) -> None:
"""Validate a single SQL statement (no semicolons)."""
command = self._extract_command(stmt)
# EXPLAIN ANALYZE / EXPLAIN ANALYSE actually *executes* the underlying
# query. Resolve the real command so write operations are caught.
# Handles both space-separated ("EXPLAIN ANALYZE DELETE …") and
# parenthesized ("EXPLAIN (ANALYZE) DELETE …", "EXPLAIN (ANALYZE, VERBOSE) DELETE …").
# EXPLAIN ANALYZE actually executes the underlying query — resolve the
# real command so write operations are caught.
if command == "EXPLAIN":
resolved = _resolve_explain_command(stmt)
if resolved:
command = resolved
# WITH starts a CTE. Read-only CTEs are fine; writable CTEs
# (e.g. WITH d AS (DELETE …) SELECT …) must be blocked in read-only mode.
if command == "WITH":
# Check for write commands inside CTE bodies.
write_found = _detect_writable_cte(stmt)
if write_found:
found = write_found
if not self.allow_dml:
raise ValueError(
f"NL2SQLTool is configured in read-only mode and blocked a "
f"writable CTE containing a '{found}' statement. To allow "
f"write operations set allow_dml=True or "
f"CREWAI_NL2SQL_ALLOW_DML=true."
)
logger.warning(
"NL2SQLTool: executing writable CTE with '%s' because allow_dml=True.",
found,
)
return
# Check the main query after the CTE definitions.
main_query = _extract_main_query_after_cte(stmt)
if main_query:
main_cmd = main_query.split()[0].upper().rstrip(";")
if main_cmd in _WRITE_COMMANDS:
if not self.allow_dml:
raise ValueError(
f"NL2SQLTool is configured in read-only mode and blocked a "
f"'{main_cmd}' statement after a CTE. To allow write "
f"operations set allow_dml=True or "
f"CREWAI_NL2SQL_ALLOW_DML=true."
)
logger.warning(
"NL2SQLTool: executing '%s' after CTE because allow_dml=True.",
main_cmd,
)
elif main_cmd not in _READ_ONLY_COMMANDS:
if not self.allow_dml:
raise ValueError(
f"NL2SQLTool blocked an unrecognised SQL command '{main_cmd}' "
f"after a CTE. Only {sorted(_READ_ONLY_COMMANDS)} are allowed "
f"in read-only mode."
)
return
if command in _WRITE_COMMANDS:
if not self.allow_dml:
raise ValueError(
f"NL2SQLTool is configured in read-only mode and blocked a "
f"'{command}' statement. To allow write operations set "
f"allow_dml=True or CREWAI_NL2SQL_ALLOW_DML=true."
)
logger.warning(
"NL2SQLTool: executing write statement '%s' because allow_dml=True.",
command,
)
elif command not in _READ_ONLY_COMMANDS:
# Unknown command — block by default unless DML is explicitly enabled
if not self.allow_dml:
raise ValueError(
f"NL2SQLTool blocked an unrecognised SQL command '{command}'. "
f"Only {sorted(_READ_ONLY_COMMANDS)} are allowed in read-only "
f"mode."
)
@staticmethod
def _extract_command(sql_query: str) -> str:
"""Return the uppercased first keyword of *sql_query*."""
stripped = sql_query.strip().lstrip("(")
first_token = stripped.split()[0] if stripped.split() else ""
return first_token.upper().rstrip(";")
# ------------------------------------------------------------------
# Schema introspection helpers
# ------------------------------------------------------------------
def _fetch_available_tables(self) -> list[dict[str, Any]] | str:
return self.execute_sql(
"SELECT table_name FROM information_schema.tables WHERE table_schema = 'public';"
"SELECT table_name FROM information_schema.tables "
"WHERE table_schema = 'public';"
)
def _fetch_all_available_columns(
self, table_name: str
) -> list[dict[str, Any]] | str:
"""Fetch columns for *table_name* using a parameterised query.
The table name is bound via SQLAlchemy's ``:param`` syntax to prevent
SQL injection from catalogue values.
"""
return self.execute_sql(
f"SELECT column_name, data_type FROM information_schema.columns WHERE table_name = '{table_name}';" # noqa: S608
"SELECT column_name, data_type FROM information_schema.columns "
"WHERE table_name = :table_name",
params={"table_name": table_name},
)
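The bind-parameter pattern used here can be demonstrated with the stdlib `sqlite3` driver, which accepts the same `:name` placeholder style as SQLAlchemy's `text()` (a hedged sketch; table and values are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'ada')")

# The value is bound by the driver, never interpolated into the SQL string,
# so a hostile value cannot change the query's structure.
rows = conn.execute(
    "SELECT name FROM users WHERE id = :user_id",
    {"user_id": 1},
).fetchall()
print(rows)  # [('ada',)]
```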
# ------------------------------------------------------------------
# Core execution
# ------------------------------------------------------------------
def _run(self, sql_query: str) -> list[dict[str, Any]] | str:
try:
self._validate_query(sql_query)
data = self.execute_sql(sql_query)
except ValueError:
raise
except Exception as exc:
data = (
f"Based on these tables {self.tables} and columns {self.columns}, "
"you can create SQL queries to retrieve data from the database."
f"Get the original request {sql_query} and the error {exc} and create the correct SQL query."
"you can create SQL queries to retrieve data from the database. "
f"Get the original request {sql_query} and the error {exc} and "
"create the correct SQL query."
)
return data
def execute_sql(self, sql_query: str) -> list[dict[str, Any]] | str:
def execute_sql(
self,
sql_query: str,
params: dict[str, Any] | None = None,
) -> list[dict[str, Any]] | str:
"""Execute *sql_query* and return the results as a list of dicts.
Parameters
----------
sql_query:
The SQL statement to run.
params:
Optional mapping of bind parameters (e.g. ``{"table_name": "users"}``).
"""
if not SQLALCHEMY_AVAILABLE:
raise ImportError(
"sqlalchemy is not installed. Please install it with `pip install crewai-tools[sqlalchemy]`"
"sqlalchemy is not installed. Please install it with "
"`pip install crewai-tools[sqlalchemy]`"
)
# Check ALL statements so that e.g. "SELECT 1; DROP TABLE t" triggers a
# commit when allow_dml=True, regardless of statement order.
_stmts = [s.strip() for s in sql_query.split(";") if s.strip()]
def _is_write_stmt(s: str) -> bool:
cmd = self._extract_command(s)
if cmd in _WRITE_COMMANDS:
return True
if cmd == "EXPLAIN":
# Resolve the underlying command for EXPLAIN ANALYZE
resolved = _resolve_explain_command(s)
if resolved and resolved in _WRITE_COMMANDS:
return True
if cmd == "WITH":
if _detect_writable_cte(s):
return True
main_q = _extract_main_query_after_cte(s)
if main_q:
return main_q.split()[0].upper().rstrip(";") in _WRITE_COMMANDS
return False
is_write = any(_is_write_stmt(s) for s in _stmts)
engine = create_engine(self.db_uri)
Session = sessionmaker(bind=engine) # noqa: N806
session = Session()
try:
result = session.execute(text(sql_query))
session.commit()
result = session.execute(text(sql_query), params or {})
# Only commit when the operation actually mutates state
if self.allow_dml and is_write:
session.commit()
if result.returns_rows: # type: ignore[attr-defined]
columns = result.keys()

View File

@@ -11,6 +11,8 @@ from crewai.tools.base_tool import BaseTool
from crewai.utilities.types import LLMMessage
from pydantic import BaseModel, Field
from crewai_tools.security.safe_path import validate_file_path
class OCRToolSchema(BaseModel):
"""Input schema for Optical Character Recognition Tool.
@@ -98,5 +100,6 @@ class OCRTool(BaseTool):
Returns:
str: Base64-encoded image data as a UTF-8 string.
"""
image_path = validate_file_path(image_path)
with open(image_path, "rb") as image_file:
return base64.b64encode(image_file.read()).decode()

View File

@@ -1,4 +1,5 @@
from abc import ABC, abstractmethod
import os
from typing import Any, Literal, cast
from crewai.rag.core.base_embeddings_callable import EmbeddingFunction
@@ -246,7 +247,94 @@ class RagTool(BaseTool):
# Auto-detect type from extension
rag_tool.add("path/to/document.pdf") # auto-detects PDF
"""
self.adapter.add(*args, **kwargs)
# Validate file paths and URLs before adding to prevent
# unauthorized file reads and SSRF.
from urllib.parse import urlparse
from crewai_tools.security.safe_path import validate_file_path, validate_url
def _check_url(value: str, label: str) -> None:
try:
validate_url(value)
except ValueError as e:
raise ValueError(f"Blocked unsafe {label}: {e}") from e
def _check_path(value: str, label: str) -> str:
try:
return validate_file_path(value)
except ValueError as e:
raise ValueError(f"Blocked unsafe {label}: {e}") from e
validated_args: list[ContentItem] = []
for arg in args:
source_ref = (
str(arg.get("source", arg.get("content", "")))
if isinstance(arg, dict)
else str(arg)
)
# Check if it's a URL — only catch urlparse-specific errors here;
# validate_url's ValueError must propagate so it is never silently bypassed.
try:
parsed = urlparse(source_ref)
except (ValueError, AttributeError):
parsed = None
if parsed is not None and parsed.scheme in ("http", "https", "file"):
try:
validate_url(source_ref)
except ValueError as e:
raise ValueError(f"Blocked unsafe URL: {e}") from e
validated_args.append(arg)
continue
# Check if it looks like a file path (not a plain text string).
# Check both os.sep (backslash on Windows) and "/" so that
# forward-slash paths like "sub/file.txt" are caught on all platforms.
if (
os.path.sep in source_ref
or "/" in source_ref
or source_ref.startswith(".")
or os.path.isabs(source_ref)
):
try:
resolved_ref = validate_file_path(source_ref)
except ValueError as e:
raise ValueError(f"Blocked unsafe file path: {e}") from e
# Use the resolved path to prevent symlink TOCTOU
if isinstance(arg, dict):
arg = {**arg}
if "source" in arg:
arg["source"] = resolved_ref
elif "content" in arg:
arg["content"] = resolved_ref
else:
arg = resolved_ref
validated_args.append(arg)
# Validate keyword path/URL arguments — these are equally user-controlled
# and must not bypass the checks applied to positional args.
if "path" in kwargs and kwargs.get("path") is not None:
kwargs["path"] = _check_path(str(kwargs["path"]), "path")
if "file_path" in kwargs and kwargs.get("file_path") is not None:
kwargs["file_path"] = _check_path(str(kwargs["file_path"]), "file_path")
if "directory_path" in kwargs and kwargs.get("directory_path") is not None:
kwargs["directory_path"] = _check_path(
str(kwargs["directory_path"]), "directory_path"
)
if "url" in kwargs and kwargs.get("url") is not None:
_check_url(str(kwargs["url"]), "url")
if "website" in kwargs and kwargs.get("website") is not None:
_check_url(str(kwargs["website"]), "website")
if "github_url" in kwargs and kwargs.get("github_url") is not None:
_check_url(str(kwargs["github_url"]), "github_url")
if "youtube_url" in kwargs and kwargs.get("youtube_url") is not None:
_check_url(str(kwargs["youtube_url"]), "youtube_url")
self.adapter.add(*validated_args, **kwargs)
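The URL-vs-path dispatch above can be sketched in isolation (hypothetical helper name; the real code then calls `validate_url` or `validate_file_path` on the classified value):

```python
import os
from urllib.parse import urlparse

def classify_source(ref: str) -> str:
    """Route a source string to URL checks, path checks, or inline text."""
    try:
        parsed = urlparse(ref)
    except (ValueError, AttributeError):
        parsed = None
    if parsed is not None and parsed.scheme in ("http", "https", "file"):
        return "url"
    # os.path.sep catches backslash paths on Windows; "/" catches
    # forward-slash paths like "sub/file.txt" on every platform.
    if os.path.sep in ref or "/" in ref or ref.startswith(".") or os.path.isabs(ref):
        return "path"
    return "text"

print(classify_source("https://example.com/doc"))  # url
print(classify_source("sub/file.txt"))             # path
print(classify_source("plain inline content"))     # text
```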
def _run(
self,

Some files were not shown because too many files have changed in this diff Show More