Release/v1.0.0 (#3618)

* feat: add `apps` & `actions` attributes to Agent (#3504)

* feat: add app attributes to Agent

* feat: add actions attribute to Agent

* chore: resolve linter issues

* refactor: merge the apps and actions parameters into a single one

* fix: remove unnecessary print

* feat: log error when CrewaiPlatformTools fails

* chore: export CrewaiPlatformTools directly from crewai_tools

* style: resolve linter issues

* test: fix broken tests

* style: resolve linter issues

* fix: fix broken test

* feat: monorepo restructure and test/ci updates

- Add crewai workspace member
- Fix vcr cassette paths and restore test dirs
- Resolve ci failures and update linter/pytest rules

* chore: update python version to 3.13 and package metadata

* feat: add crewai-tools workspace and fix tests/dependencies

* feat: add crewai-tools workspace structure

* Squashed 'temp-crewai-tools/' content from commit 9bae5633

git-subtree-dir: temp-crewai-tools
git-subtree-split: 9bae56339096cb70f03873e600192bd2cd207ac9

* feat: configure crewai-tools workspace package with dependencies

* fix: apply ruff auto-formatting to crewai-tools code

* chore: update lockfile

* fix: don't allow tool tests yet

* fix: comment out extra pytest flags for now

* fix: remove conflicting conftest.py from crewai-tools tests

* fix: resolve dependency conflicts and test issues

- Pin vcrpy to 7.0.0 to fix pytest-recording compatibility
- Comment out types-requests to resolve urllib3 conflict
- Update requests requirement in crewai-tools to >=2.32.0

* chore: update CI workflows and docs for monorepo structure

* chore: update CI workflows and docs for monorepo structure

* fix: actions syntax

* chore: ci publish and pin versions

* fix: add permission to action

* chore: bump version to 1.0.0a1 across all packages

- Updated version to 1.0.0a1 in pyproject.toml for crewai and crewai-tools
- Adjusted version in __init__.py files for consistency

* WIP: v1 docs (#3626)

(cherry picked from commit d46e20fa09bcd2f5916282f5553ddeb7183bd92c)

* docs: parity for all translations

* docs: full name of acronym AMP

* docs: fix lingering unused code

* docs: expand contextual options in docs.json

* docs: add contextual action to request feature on GitHub (#3635)

* chore: apply linting fixes to crewai-tools

* feat: add required env var validation for brightdata

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* fix: properly handle anyOf/oneOf/allOf schema props

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
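
A hedged sketch of the composite-schema handling this fix describes; the real change lives in the platform tools' schema parsing, and this helper is purely illustrative:

```python
def extract_properties(schema: dict) -> dict:
    """Collect properties from a JSON schema, including anyOf/oneOf/allOf branches."""
    props = dict(schema.get("properties", {}))
    for keyword in ("anyOf", "oneOf", "allOf"):
        for branch in schema.get(keyword, []):
            # Recurse into each branch and merge whatever properties it declares
            props.update(extract_properties(branch))
    return props
```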

* feat: bump version to 1.0.0a2

* Lorenze/native inference sdks (#3619)

* ruff linted

* using native sdks with litellm fallback

* drop exa

* drop print on completion

* Refactor LLM and utility functions for type consistency

- Updated `max_tokens` parameter in `LLM` class to accept `float` in addition to `int`.
- Modified `create_llm` function to ensure consistent type hints and return types, now returning `LLM | BaseLLM | None`.
- Adjusted type hints for various parameters in `create_llm` and `_llm_via_environment_or_fallback` functions for improved clarity and type safety.
- Enhanced test cases to reflect changes in type handling and ensure proper instantiation of LLM instances.

* fix agent_tests

* fix litellm tests and usagemetrics fix

* drop print

* Refactor LLM event handling and improve test coverage

- Removed commented-out event emission for LLM call failures in `llm.py`.
- Added `from_agent` parameter to `CrewAgentExecutor` for better context in LLM responses.
- Enhanced test for LLM call failure to simulate OpenAI API failure and updated assertions for clarity.
- Updated agent and task ID assertions in tests to ensure they are consistently treated as strings.

* fix test_converter

* fixed tests/agents/test_agent.py

* Refactor LLM context length exception handling and improve provider integration

- Renamed `LLMContextLengthExceededException` to `LLMContextLengthExceededExceptionError` for clarity and consistency.
- Updated LLM class to pass the provider parameter correctly during initialization.
- Enhanced error handling in various LLM provider implementations to raise the new exception type.
- Adjusted tests to reflect the updated exception name and ensure proper error handling in context length scenarios.

* Enhance LLM context window handling across providers

- Introduced CONTEXT_WINDOW_USAGE_RATIO to adjust context window sizes dynamically for Anthropic, Azure, Gemini, and OpenAI LLMs.
- Added validation for context window sizes in Azure and Gemini providers to ensure they fall within acceptable limits.
- Updated context window size calculations to use the new ratio, improving consistency and adaptability across different models.
- Removed hardcoded context window sizes in favor of ratio-based calculations for better flexibility.
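
A minimal sketch of the ratio-based sizing described above; CONTEXT_WINDOW_USAGE_RATIO is named in the commit, but the 0.75 value and the helper function are assumptions for illustration:

```python
CONTEXT_WINDOW_USAGE_RATIO = 0.75  # assumed value; the real constant may differ

def usable_context_window(model_max_tokens: int) -> int:
    """Reserve headroom by exposing only a fraction of the model's window."""
    return int(model_max_tokens * CONTEXT_WINDOW_USAGE_RATIO)
```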

* fix test agent again

* fix test agent

* feat: add native LLM providers for Anthropic, Azure, and Gemini

- Introduced new completion implementations for Anthropic, Azure, and Gemini, integrating their respective SDKs.
- Added utility functions for tool validation and extraction to support function calling across LLM providers.
- Enhanced context window management and token usage extraction for each provider.
- Created a common utility module for shared functionality among LLM providers.

* chore: update dependencies and improve context management

- Removed direct dependency on `litellm` from the main dependencies and added it under extras for better modularity.
- Updated the `litellm` dependency specification to allow for greater flexibility in versioning.
- Refactored context length exception handling across various LLM providers to use a consistent error class.
- Enhanced platform-specific dependency markers for NVIDIA packages to ensure compatibility across different systems.

* refactor(tests): update LLM instantiation to include is_litellm flag in test cases

- Modified multiple test cases in test_llm.py to set the is_litellm parameter to True when instantiating the LLM class.
- This change ensures that the tests are aligned with the latest LLM configuration requirements and improves consistency across test scenarios.
- Adjusted relevant assertions and comments to reflect the updated LLM behavior.

* linter

* linted

* revert constants

* fix(tests): correct type hint in expected model description

- Updated the expected description in the test_generate_model_description_dict_field function to use 'Dict' instead of 'dict' for consistency with type hinting conventions.
- This change ensures that the test accurately reflects the expected output format for model descriptions.

* refactor(llm): enhance LLM instantiation and error handling

- Updated the LLM class to include validation for the model parameter, ensuring it is a non-empty string.
- Improved error handling by logging warnings when the native SDK fails, allowing for a fallback to LiteLLM.
- Adjusted the instantiation of LLM in test cases to consistently include the is_litellm flag, aligning with recent changes in LLM configuration.
- Modified relevant tests to reflect these updates, ensuring better coverage and accuracy in testing scenarios.

* fixed test

* refactor(llm): enhance token usage tracking and add copy methods

- Updated the LLM class to track token usage and log callbacks in streaming mode, improving monitoring capabilities.
- Introduced shallow and deep copy methods for the LLM instance, allowing for better management of LLM configurations and parameters.
- Adjusted test cases to instantiate LLM with the is_litellm flag, ensuring alignment with recent changes in LLM configuration.

* refactor(tests): reorganize imports and enhance error messages in test cases

- Cleaned up import statements in test_crew.py for better organization and readability.
- Enhanced error messages in test cases to use `re.escape` for improved regex matching, ensuring more robust error handling.
- Adjusted comments for clarity and consistency across test scenarios.
- Ensured that all necessary modules are imported correctly to avoid potential runtime issues.

* feat: add base devtooling

* fix: ensure dep refs are updated for devtools

* fix: allow pre-release

* feat: allow release after tag

* feat: bump versions to 1.0.0a3 

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>

* fix: match tag and release title, ignore devtools build for pypi

* fix: allow failed pypi publish

* feat: introduce trigger listing and execution commands for local development (#3643)

* chore: exclude tests from ruff linting

* chore: exclude tests from GitHub Actions linter

* fix: replace print statements with logger in agent and memory handling

* chore: add noqa for intentional print in printer utility

* fix: resolve linting errors across codebase

* feat: update docs with new approach to consume Platform Actions (#3675)

* fix: remove duplicate line and add explicit env var

* feat: bump versions to 1.0.0a4 (#3686)

* Update triggers docs (#3678)

* docs: introduce triggers list & triggers run command

* docs: add KO triggers docs

* docs: ensure CREWAI_PLATFORM_INTEGRATION_TOKEN is mentioned on docs (#3687)

* Lorenze/bedrock llm (#3693)

* feat: add AWS Bedrock support and update dependencies

- Introduced BedrockCompletion class for AWS Bedrock integration in LLM.
- Added boto3 as a new dependency in both pyproject.toml and uv.lock.
- Updated LLM class to support Bedrock provider.
- Created new files for Bedrock provider implementation.

* using converse api

* converse

* linted

* refactor: update BedrockCompletion class to improve parameter handling

- Changed max_tokens from a fixed integer to an optional integer.
- Simplified model ID assignment by removing the inference profile mapping method.
- Cleaned up comments and unnecessary code related to tool specifications and model-specific parameters.
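
A hedged sketch of a Converse API call of the kind these commits describe; the model ID and parameter values are illustrative, not taken from the codebase:

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative model
    messages=[{"role": "user", "content": [{"text": "Hello"}]}],
    inferenceConfig={"maxTokens": 512, "temperature": 0.7},
)
text = response["output"]["message"]["content"][0]["text"]
```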

* feat: improve event bus thread safety and async support

Add thread-safe, async-compatible event bus with read–write locking and
handler dependency ordering. Remove blinker dependency and implement
direct dispatch. Improve type safety, error handling, and deterministic
event synchronization.

Refactor tests to auto-wait for async handlers, ensure clean teardown,
and add comprehensive concurrency coverage. Replace thread-local state
in AgentEvaluator with instance-based locking for correct cross-thread
access. Enhance tracing reliability and event finalization.
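
A minimal sketch of the read/write locking pattern described above: handler registration takes the write lock while dispatch takes the read lock, so many emits can run concurrently but never during a handler change. All names here are illustrative, not the actual event bus API:

```python
import threading


class RWLock:
    """Many concurrent readers, one exclusive writer."""

    def __init__(self) -> None:
        self._readers = 0
        self._readers_lock = threading.Lock()
        self._writer_lock = threading.Lock()

    def acquire_read(self) -> None:
        with self._readers_lock:
            self._readers += 1
            if self._readers == 1:  # first reader blocks writers
                self._writer_lock.acquire()

    def release_read(self) -> None:
        with self._readers_lock:
            self._readers -= 1
            if self._readers == 0:  # last reader lets writers back in
                self._writer_lock.release()

    def acquire_write(self) -> None:
        self._writer_lock.acquire()

    def release_write(self) -> None:
        self._writer_lock.release()
```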

* feat: enhance OpenAICompletion class with additional client parameters (#3701)

* feat: enhance OpenAICompletion class with additional client parameters

- Added support for default_headers, default_query, and client_params in the OpenAICompletion class.
- Refactored client initialization to use a dedicated method for client parameter retrieval.
- Introduced new test cases to validate the correct usage of OpenAICompletion with various parameters.
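
The openai SDK's client constructor does accept default_headers and default_query; passing them through as described might look like this hedged sketch (the pass-through mechanism is the commit's client_params, the values here are illustrative):

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",  # illustrative placeholder
    default_headers={"X-Request-Source": "crewai"},
    default_query={"api-version": "preview"},
)
```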

* fix: correct test case for unsupported OpenAI model

- Updated the test_openai.py to ensure that the LLM instance is created before calling the method, maintaining proper error handling for unsupported models.
- This change ensures that the test accurately checks for the NotFoundError when an invalid model is specified.

* fix: enhance error handling in OpenAICompletion class

- Added specific exception handling for NotFoundError and APIConnectionError in the OpenAICompletion class to provide clearer error messages and improve logging.
- Updated the test case for unsupported models to ensure it raises a ValueError with the appropriate message when a non-existent model is specified.
- This change improves the robustness of the OpenAI API integration and enhances the clarity of error reporting.

* fix: improve test for unsupported OpenAI model handling

- Refactored the test case in test_openai.py to create the LLM instance after mocking the OpenAI client, ensuring proper error handling for unsupported models.
- This change enhances the clarity of the test by accurately checking for ValueError when a non-existent model is specified, aligning with recent improvements in error handling for the OpenAICompletion class.

* feat: bump versions to 1.0.0b1 (#3706)

* Lorenze/tools drop litellm (#3710)

* completely drop litellm and correctly pass config for qdrant

* feat: add support for additional embedding models in EmbeddingService

- Expanded the list of supported embedding models to include Google Vertex, Hugging Face, Jina, Ollama, OpenAI, Roboflow, Watson X, custom embeddings, Sentence Transformers, Text2Vec, OpenClip, and Instructor.
- This enhancement improves the versatility of the EmbeddingService by allowing integration with a wider range of embedding providers.

* fix: update collection parameter handling in CrewAIRagAdapter

- Changed the condition for setting vectors_config in the CrewAIRagAdapter to check for QdrantConfig instance instead of using hasattr. This improves type safety and ensures proper configuration handling for Qdrant integration.

* moved stagehand as optional dep (#3712)

* feat: bump versions to 1.0.0b2 (#3713)

* feat: enhance AnthropicCompletion class with additional client parameters (#3707)

* feat: enhance AnthropicCompletion class with additional client parameters and tool handling

- Added support for client_params in the AnthropicCompletion class to allow for additional client configuration.
- Refactored client initialization to use a dedicated method for retrieving client parameters.
- Implemented a new method to handle tool use conversation flow, ensuring proper execution and response handling.
- Introduced comprehensive test cases to validate the functionality of the AnthropicCompletion class, including tool use scenarios and parameter handling.
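
A hedged sketch of the tool-use round trip these commits describe, using the anthropic SDK's documented tool schema; the tool name and model are illustrative:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "Weather in Paris?"}],
)
# If message.stop_reason == "tool_use", run the tool and send the result
# back as a "tool_result" content block in a follow-up messages.create call.
```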

* drop print statements

* test: add fixture to mock ANTHROPIC_API_KEY for tests

- Introduced a pytest fixture to automatically mock the ANTHROPIC_API_KEY environment variable for all tests in the test_anthropic.py module.
- This change ensures that tests can run without requiring a real API key, improving test isolation and reliability.

* refactor: streamline streaming message handling in AnthropicCompletion class

- Removed the 'stream' parameter from the API call as it is set internally by the SDK.
- Simplified the handling of tool use events and response construction by extracting token usage from the final message.
- Enhanced the flow for managing tool use conversation, ensuring proper integration with the streaming API response.

* fix streaming here too

* fix: improve error handling in tool conversion for AnthropicCompletion class

- Enhanced exception handling during tool conversion by catching KeyError and ValueError.
- Added logging for conversion errors to aid in debugging and maintain robustness in tool integration.

* feat: enhance GeminiCompletion class with client parameter support (#3717)

* feat: enhance GeminiCompletion class with client parameter support

- Added support for client_params in the GeminiCompletion class to allow for additional client configuration.
- Refactored client initialization into a dedicated method for improved parameter handling.
- Introduced a new method to retrieve client parameters, ensuring compatibility with the base class.
- Enhanced error handling during client initialization to provide clearer messages for missing configuration.
- Updated documentation to reflect the changes in client parameter usage.

* add optional dependencies

* refactor: update test fixture to mock GOOGLE_API_KEY

- Renamed the fixture from `mock_anthropic_api_key` to `mock_google_api_key` to reflect the change in the environment variable being mocked.
- This update ensures that all tests in the module can run with a mocked GOOGLE_API_KEY, improving test isolation and reliability.

* fix tests

* feat: enhance BedrockCompletion class with advanced features

* feat: enhance BedrockCompletion class with advanced features and error handling

- Added support for guardrail configuration, additional model request fields, and custom response field paths in the BedrockCompletion class.
- Improved error handling for AWS exceptions and added token usage tracking with stop reason logging.
- Enhanced streaming response handling with comprehensive event management, including tool use and content block processing.
- Updated documentation to reflect new features and initialization parameters.
- Introduced a new test suite for BedrockCompletion to validate functionality and ensure robust integration with AWS Bedrock APIs.

* chore: add boto typing

* fix: use typing_extensions.Required for Python 3.10 compatibility

---------

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* feat: azure native tests

* feat: add Azure AI Inference support and related tests

- Introduced the `azure-ai-inference` package with version `1.0.0b9` and its dependencies in `uv.lock` and `pyproject.toml`.
- Added new test files for Azure LLM functionality, including tests for Azure completion and tool handling.
- Implemented comprehensive test cases to validate Azure-specific behavior and integration with the CrewAI framework.
- Enhanced the testing framework to mock Azure credentials and ensure proper isolation during tests.

* feat: enhance AzureCompletion class with Azure OpenAI support

- Added support for the Azure OpenAI endpoint in the AzureCompletion class, allowing for flexible endpoint configurations.
- Implemented endpoint validation and correction to ensure proper URL formats for Azure OpenAI deployments.
- Enhanced error handling to provide clearer messages for common HTTP errors, including authentication and rate limit issues.
- Updated tests to validate the new endpoint handling and error messaging, ensuring robust integration with Azure AI Inference.
- Refactored parameter preparation to conditionally include the model parameter based on the endpoint type.
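
A hedged sketch of endpoint correction in the spirit of the commit above; the exact rules in AzureCompletion are not reproduced here, and both the function and parameter names are assumptions:

```python
def normalize_azure_endpoint(endpoint: str, deployment: str) -> str:
    """Ensure an Azure OpenAI endpoint targets a concrete deployment."""
    endpoint = endpoint.rstrip("/")
    if "openai.azure.com" in endpoint and "/deployments/" not in endpoint:
        # Bare resource endpoint: append the deployments path segment
        return f"{endpoint}/openai/deployments/{deployment}"
    return endpoint
```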

* refactor: convert project module to metaclass with full typing

* Lorenze/OpenAI base url backwards support (#3723)

* fix: enhance OpenAICompletion class base URL handling

- Updated the base URL assignment in the OpenAICompletion class to prioritize the new `api_base` attribute and fallback to the environment variable `OPENAI_BASE_URL` if both are not set.
- Added `api_base` to the list of parameters in the OpenAICompletion class to ensure proper configuration and flexibility in API endpoint management.

* feat: enhance OpenAICompletion class with api_base support

- Added the `api_base` parameter to the OpenAICompletion class to allow for flexible API endpoint configuration.
- Updated the `_get_client_params` method to prioritize `base_url` over `api_base`, ensuring correct URL handling.
- Introduced comprehensive tests to validate the behavior of `api_base` and `base_url` in various scenarios, including environment variable fallback.
- Enhanced test coverage for client parameter retrieval, ensuring robust integration with the OpenAI API.
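
The precedence described above reduces to a one-liner; this is a hedged sketch rather than the actual implementation:

```python
import os


def resolve_base_url(base_url: str | None, api_base: str | None) -> str | None:
    # Explicit base_url wins, then api_base, then the environment variable
    return base_url or api_base or os.environ.get("OPENAI_BASE_URL")
```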

* fix: improve OpenAICompletion class configuration handling

- Added a debug print statement to log the client configuration parameters during initialization for better traceability.
- Updated the base URL assignment logic to ensure it defaults to None if no valid base URL is provided, enhancing robustness in API endpoint configuration.
- Refined the retrieval of the `api_base` environment variable to streamline the configuration process.

* drop print

* feat: improvements on import native sdk support (#3725)

* feat: add support for Anthropic provider and enhance logging

- Introduced the `anthropic` package with version `0.69.0` in `pyproject.toml` and `uv.lock`, allowing for integration with the Anthropic API.
- Updated logging in the LLM class to provide clearer error messages when importing native providers, enhancing debugging capabilities.
- Improved error handling in the AnthropicCompletion class to guide users on installation via the updated error message format.
- Refactored import error handling in other provider classes to maintain consistency in error messaging and installation instructions.

* feat: enhance LLM support with Bedrock provider and update dependencies

- Added support for the `bedrock` provider in the LLM class, allowing integration with AWS Bedrock APIs.
- Updated `uv.lock` to replace `boto3` with `bedrock` in the dependencies, reflecting the new provider structure.
- Introduced `SUPPORTED_NATIVE_PROVIDERS` to include `bedrock` and ensure proper error handling when instantiating native providers.
- Enhanced error handling in the LLM class to raise informative errors when native provider instantiation fails.
- Added tests to validate the behavior of the new Bedrock provider and ensure fallback mechanisms work correctly for unsupported providers.

* test: update native provider fallback tests to expect ImportError

* adjust the test to the expected behavior: raising ImportError

* this expects the litellm format; all gemini native tests are in test_google.py

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>

* fix: remove stdout prints, improve test determinism, and update trace handling

Removed `print` statements from the `LLMStreamChunkEvent` handler to prevent
LLM response chunks from being written directly to stdout. The listener now
only tracks chunks internally.

Fixes #3715

Added explicit return statements for trace-related tests.

Updated cassette for `test_failed_evaluation` to reflect new behavior where
an empty trace dict is used instead of returning early.

Ensured deterministic cleanup order in test fixtures by making
`clear_event_bus_handlers` depend on `setup_test_environment`. This guarantees
event bus shutdown and file handle cleanup occur before temporary directory
deletion, resolving intermittent “Directory not empty” errors in CI.

* chore: remove lib/crewai exclusion from pre-commit hooks

* feat: enhance task guardrail functionality and validation

* feat: enhance task guardrail functionality and validation

- Introduced support for multiple guardrails in the Task class, allowing for sequential processing of guardrails.
- Added a new `guardrails` field to the Task model to accept a list of callable guardrails or string descriptions.
- Implemented validation to ensure guardrails are processed correctly, including handling of retries and error messages.
- Enhanced the `_invoke_guardrail_function` method to manage guardrail execution and integrate with existing task output processing.
- Updated tests to cover various scenarios involving multiple guardrails, including success, failure, and retry mechanisms.

This update improves the flexibility and robustness of task execution by allowing for more complex validation scenarios.

* refactor: enhance guardrail type handling in Task model

- Updated the Task class to improve guardrail type definitions, introducing GuardrailType and GuardrailsType for better clarity and type safety.
- Simplified the validation logic for guardrails, ensuring that both single and multiple guardrails are processed correctly.
- Enhanced error messages for guardrail validation to provide clearer feedback when incorrect types are provided.
- This refactor improves the maintainability and robustness of task execution by standardizing guardrail handling.

* feat: implement per-guardrail retry tracking in Task model

- Introduced a new private attribute `_guardrail_retry_counts` to the Task class for tracking retry attempts on a per-guardrail basis.
- Updated the guardrail processing logic to utilize the new retry tracking, allowing for independent retry counts for each guardrail.
- Enhanced error handling to provide clearer feedback when guardrails fail validation after exceeding retry limits.
- Modified existing tests to validate the new retry tracking behavior, ensuring accurate assertions on guardrail retries.

This update improves the robustness and flexibility of task execution by allowing for more granular control over guardrail validation and retry mechanisms.
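
A hedged usage sketch of the multi-guardrail API described above; the `guardrails` field name comes from these commits, while the (passed, value) return convention and the retry-limit parameter name are assumptions:

```python
from crewai import Task


def not_empty(output):
    # Guardrails return (passed, value); value is forwarded on success
    return (bool(output.raw.strip()), output)


def under_500_words(output):
    return (len(output.raw.split()) <= 500, output)


task = Task(
    description="Summarize the quarterly report",
    expected_output="A concise summary",
    guardrails=[not_empty, under_500_words],  # processed sequentially
    max_retries=3,  # assumed name for the per-guardrail retry limit
)
```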

* chore: 1.0.0b3 bump (#3734)

* chore: full ruff and mypy

- Improved linting, pre-commit setup, and internal architecture.
- Configured Ruff to respect .gitignore, added stricter rules, and introduced a lock pre-commit hook with virtualenv activation.
- Fixed type shadowing in EXASearchTool using a type_ alias to avoid PEP 563 conflicts and resolved circular imports in agent executor and guardrail modules.
- Removed agent-ops attributes, deprecated the watson alias, and dropped crewai-enterprise tools with corresponding test updates.
- Refactored cache and memoization for thread safety and cleaned up structured output adapters and related logic.

* New MCP DSL (#3738)

* Adding MCP implementation

* New tests for MCP implementation

* fix tests

* update docs

* Revert "New tests for MCP implementation"

This reverts commit 0bbe6dee90.

* linter

* linter

* fix

* verify mcp package exists

* adjust docs to be clear only remote servers are supported

* reverted

* ensure args schema generated properly

* properly close out

---------

Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>

* feat: a2a experimental

experimental a2a support

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
Co-authored-by: Mike Plachta <mplachta@users.noreply.github.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Commit d1343b96ed (parent 42f2b4d551), authored by Lorenze Jay and committed via GitHub on 2025-10-20 14:10:19 -07:00.
1339 changed files with 111657 additions and 19564 deletions.

lib/crewai/README.md (new file, 777 lines)

@@ -0,0 +1,777 @@
<p align="center">
<a href="https://github.com/crewAIInc/crewAI">
<img src="docs/images/crewai_logo.png" width="600px" alt="Open source Multi-AI Agent orchestration framework">
</a>
</p>
<p align="center" style="display: flex; justify-content: center; gap: 20px; align-items: center;">
<a href="https://trendshift.io/repositories/11239" target="_blank">
<img src="https://trendshift.io/api/badge/repositories/11239" alt="crewAIInc%2FcrewAI | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/>
</a>
</p>
<p align="center">
<a href="https://crewai.com">Homepage</a>
·
<a href="https://docs.crewai.com">Docs</a>
·
<a href="https://app.crewai.com">Start Cloud Trial</a>
·
<a href="https://blog.crewai.com">Blog</a>
·
<a href="https://community.crewai.com">Forum</a>
</p>
<p align="center">
<a href="https://github.com/crewAIInc/crewAI">
<img src="https://img.shields.io/github/stars/crewAIInc/crewAI" alt="GitHub Repo stars">
</a>
<a href="https://github.com/crewAIInc/crewAI/network/members">
<img src="https://img.shields.io/github/forks/crewAIInc/crewAI" alt="GitHub forks">
</a>
<a href="https://github.com/crewAIInc/crewAI/issues">
<img src="https://img.shields.io/github/issues/crewAIInc/crewAI" alt="GitHub issues">
</a>
<a href="https://github.com/crewAIInc/crewAI/pulls">
<img src="https://img.shields.io/github/issues-pr/crewAIInc/crewAI" alt="GitHub pull requests">
</a>
<a href="https://opensource.org/licenses/MIT">
<img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License: MIT">
</a>
</p>
<p align="center">
<a href="https://pypi.org/project/crewai/">
<img src="https://img.shields.io/pypi/v/crewai" alt="PyPI version">
</a>
<a href="https://pypi.org/project/crewai/">
<img src="https://img.shields.io/pypi/dm/crewai" alt="PyPI downloads">
</a>
<a href="https://twitter.com/crewAIInc">
<img src="https://img.shields.io/twitter/follow/crewAIInc?style=social" alt="Twitter Follow">
</a>
</p>
### Fast and Flexible Multi-Agent Automation Framework
> CrewAI is a lean, lightning-fast Python framework built entirely from scratch—completely **independent of LangChain or other agent frameworks**.
> It empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario.
- **CrewAI Crews**: Optimize for autonomy and collaborative intelligence.
- **CrewAI Flows**: Enable granular, event-driven control and single LLM calls for precise task orchestration, with native support for Crews.
With over 100,000 developers certified through our community courses at [learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
standard for enterprise-ready AI automation.
# CrewAI AMP Suite
CrewAI AMP Suite is a comprehensive bundle tailored for organizations that require secure, scalable, and easy-to-manage agent-driven automation.
You can try one part of the suite, the [Crew Control Plane](https://app.crewai.com), for free.
## Crew Control Plane Key Features:
- **Tracing & Observability**: Monitor and track your AI agents and workflows in real-time, including metrics, logs, and traces.
- **Unified Control Plane**: A centralized platform for managing, monitoring, and scaling your AI agents and workflows.
- **Seamless Integrations**: Easily connect with existing enterprise systems, data sources, and cloud infrastructure.
- **Advanced Security**: Built-in robust security and compliance measures ensuring safe deployment and management.
- **Actionable Insights**: Real-time analytics and reporting to optimize performance and decision-making.
- **24/7 Support**: Dedicated enterprise support to ensure uninterrupted operation and quick resolution of issues.
- **On-premise and Cloud Deployment Options**: Deploy CrewAI AMP on-premise or in the cloud, depending on your security and compliance requirements.
CrewAI AMP is designed for enterprises seeking a powerful, reliable solution to transform complex business processes into efficient,
intelligent automations.
## Table of contents
- [Why CrewAI?](#why-crewai)
- [Getting Started](#getting-started)
- [Key Features](#key-features)
- [Understanding Flows and Crews](#understanding-flows-and-crews)
- [CrewAI vs LangGraph](#how-crewai-compares)
- [Examples](#examples)
- [Quick Tutorial](#quick-tutorial)
- [Write Job Descriptions](#write-job-descriptions)
- [Trip Planner](#trip-planner)
- [Stock Analysis](#stock-analysis)
- [Using Crews and Flows Together](#using-crews-and-flows-together)
- [Connecting Your Crew to a Model](#connecting-your-crew-to-a-model)
- [How CrewAI Compares](#how-crewai-compares)
- [Frequently Asked Questions (FAQ)](#frequently-asked-questions-faq)
- [Contribution](#contribution)
- [Telemetry](#telemetry)
- [License](#license)
## Why CrewAI?
<div align="center" style="margin-bottom: 30px;">
<img src="docs/images/asset.png" alt="CrewAI Logo" width="100%">
</div>
CrewAI unlocks the true potential of multi-agent automation, delivering the best-in-class combination of speed, flexibility, and control with either Crews of AI Agents or Flows of Events:
- **Standalone Framework**: Built from scratch, independent of LangChain or any other agent framework.
- **High Performance**: Optimized for speed and minimal resource usage, enabling faster execution.
- **Flexible Low Level Customization**: Complete freedom to customize at both high and low levels - from overall workflows and system architecture to granular agent behaviors, internal prompts, and execution logic.
- **Ideal for Every Use Case**: Proven effective for both simple tasks and highly complex, real-world, enterprise-grade scenarios.
- **Robust Community**: Backed by a rapidly growing community of over **100,000 certified** developers offering comprehensive support and resources.
CrewAI empowers developers and enterprises to confidently build intelligent automations, bridging the gap between simplicity, flexibility, and performance.
## Getting Started
Setup and run your first CrewAI agents by following this tutorial.
[![CrewAI Getting Started Tutorial](https://img.youtube.com/vi/-kSOTtYzgEw/hqdefault.jpg)](https://www.youtube.com/watch?v=-kSOTtYzgEw "CrewAI Getting Started Tutorial")
### Learning Resources
Learn CrewAI through our comprehensive courses:
- [Multi AI Agent Systems with CrewAI](https://www.deeplearning.ai/short-courses/multi-ai-agent-systems-with-crewai/) - Master the fundamentals of multi-agent systems
- [Practical Multi AI Agents and Advanced Use Cases](https://www.deeplearning.ai/short-courses/practical-multi-ai-agents-and-advanced-use-cases-with-crewai/) - Deep dive into advanced implementations
### Understanding Flows and Crews
CrewAI offers two powerful, complementary approaches that work seamlessly together to build sophisticated AI applications:
1. **Crews**: Teams of AI agents with true autonomy and agency, working together to accomplish complex tasks through role-based collaboration. Crews enable:
- Natural, autonomous decision-making between agents
- Dynamic task delegation and collaboration
- Specialized roles with defined goals and expertise
- Flexible problem-solving approaches
2. **Flows**: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide:
- Fine-grained control over execution paths for real-world scenarios
- Secure, consistent state management between tasks
- Clean integration of AI agents with production Python code
- Conditional branching for complex business logic
The true power of CrewAI emerges when combining Crews and Flows. This synergy allows you to:
- Build complex, production-grade applications
- Balance autonomy with precise control
- Handle sophisticated real-world scenarios
- Maintain clean, maintainable code structure
### Getting Started with Installation
To get started with CrewAI, follow these simple steps:
### 1. Installation
Ensure you have Python >=3.10 <3.14 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, install CrewAI:
```shell
pip install crewai
```
To install the crewai package along with optional features that add extra tools for agents, run:
```shell
pip install 'crewai[tools]'
```
The command above installs the basic package and also adds extra components which require more dependencies to function.
### Troubleshooting Dependencies
If you encounter issues during installation or usage, here are some common solutions:
#### Common Issues
1. **ModuleNotFoundError: No module named 'tiktoken'**
   - Install tiktoken explicitly: `pip install 'crewai[embeddings]'`
   - If using embedchain or other tools: `pip install 'crewai[tools]'`
2. **Failed building wheel for tiktoken**
   - Ensure Rust compiler is installed (see installation steps above)
   - For Windows: Verify Visual C++ Build Tools are installed
   - Try upgrading pip: `pip install --upgrade pip`
   - If issues persist, use a pre-built wheel: `pip install tiktoken --prefer-binary`
### 2. Setting Up Your Crew with the YAML Configuration
To create a new CrewAI project, run the following CLI (Command Line Interface) command:
```shell
crewai create crew <project_name>
```
This command creates a new project folder with the following structure:
```
my_project/
├── .gitignore
├── pyproject.toml
├── README.md
├── .env
└── src/
    └── my_project/
        ├── __init__.py
        ├── main.py
        ├── crew.py
        ├── tools/
        │   ├── custom_tool.py
        │   └── __init__.py
        └── config/
            ├── agents.yaml
            └── tasks.yaml
```
You can now start developing your crew by editing the files in the `src/my_project` folder. The `main.py` file is the entry point of the project, the `crew.py` file is where you define your crew, the `agents.yaml` file is where you define your agents, and the `tasks.yaml` file is where you define your tasks.
#### To customize your project, you can:
- Modify `src/my_project/config/agents.yaml` to define your agents.
- Modify `src/my_project/config/tasks.yaml` to define your tasks.
- Modify `src/my_project/crew.py` to add your own logic, tools, and specific arguments.
- Modify `src/my_project/main.py` to add custom inputs for your agents and tasks.
- Add your environment variables into the `.env` file.
#### Example of a simple crew with a sequential process:
Instantiate your crew:
```shell
crewai create crew latest-ai-development
```
Modify the files as needed to fit your use case:
**agents.yaml**
```yaml
# src/my_project/config/agents.yaml
researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} data analysis and research findings
  backstory: >
    You're a meticulous analyst with a keen eye for detail. You're known for
    your ability to turn complex data into clear and concise reports, making
    it easy for others to understand and act on the information you provide.
```
**tasks.yaml**
```yaml
# src/my_project/config/tasks.yaml
research_task:
  description: >
    Conduct thorough research about {topic}
    Make sure you find any interesting and relevant information given
    the current year is 2025.
  expected_output: >
    A list with 10 bullet points of the most relevant information about {topic}
  agent: researcher

reporting_task:
  description: >
    Review the context you got and expand each topic into a full section for a report.
    Make sure the report is detailed and contains any and all relevant information.
  expected_output: >
    A fully fledged report with the main topics, each with a full section of information.
    Formatted as markdown without '```'
  agent: reporting_analyst
  output_file: report.md
```
**crew.py**
```python
# src/my_project/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List


@CrewBase
class LatestAiDevelopmentCrew():
    """LatestAiDevelopment crew"""

    agents: List[BaseAgent]
    tasks: List[Task]

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            tools=[SerperDevTool()]
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            verbose=True
        )

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task'],
        )

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config['reporting_task'],
            output_file='report.md'
        )

    @crew
    def crew(self) -> Crew:
        """Creates the LatestAiDevelopment crew"""
        return Crew(
            agents=self.agents,  # Automatically created by the @agent decorator
            tasks=self.tasks,    # Automatically created by the @task decorator
            process=Process.sequential,
            verbose=True,
        )
```
**main.py**
```python
#!/usr/bin/env python
# src/my_project/main.py
from latest_ai_development.crew import LatestAiDevelopmentCrew


def run():
    """
    Run the crew.
    """
    inputs = {
        'topic': 'AI Agents'
    }
    LatestAiDevelopmentCrew().crew().kickoff(inputs=inputs)


if __name__ == "__main__":
    run()
```
### 3. Running Your Crew
Before running your crew, make sure you have the following keys set as environment variables in your `.env` file:
- An [OpenAI API key](https://platform.openai.com/account/api-keys) (or other LLM API key): `OPENAI_API_KEY=sk-...`
- A [Serper.dev](https://serper.dev/) API key: `SERPER_API_KEY=YOUR_KEY_HERE`
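
For example, a minimal `.env` might look like:

```shell
OPENAI_API_KEY=sk-...
SERPER_API_KEY=YOUR_KEY_HERE
```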
Lock the dependencies and install them using the CLI command below; first, navigate to your project directory:
```shell
cd my_project
crewai install  # optional
```
To run your crew, execute the following command in the root of your project:
```bash
crewai run
```
or
```bash
python src/my_project/main.py
```
If an error occurs due to poetry usage, run the following command to update your crewai package:
```bash
crewai update
```
You should see the output in the console and the `report.md` file should be created in the root of your project with the full final report.
In addition to the sequential process, you can use the hierarchical process, which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results. [See more about the processes here](https://docs.crewai.com/core-concepts/Processes/).
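
A minimal sketch of switching to the hierarchical process, reusing agents and tasks like those defined earlier (assumed here to be plain variables); the manager model choice is illustrative:

```python
from crewai import Crew, Process

crew = Crew(
    agents=[researcher, reporting_analyst],
    tasks=[research_task, reporting_task],
    process=Process.hierarchical,
    manager_llm="gpt-4o",  # a manager LLM is required for this process
)
```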
## Key Features
CrewAI stands apart as a lean, standalone, high-performance multi-AI Agent framework delivering simplicity, flexibility, and precise control—free from the complexity and limitations found in other agent frameworks.
- **Standalone & Lean**: Completely independent from other frameworks like LangChain, offering faster execution and lighter resource demands.
- **Flexible & Precise**: Easily orchestrate autonomous agents through intuitive [Crews](https://docs.crewai.com/concepts/crews) or precise [Flows](https://docs.crewai.com/concepts/flows), achieving perfect balance for your needs.
- **Seamless Integration**: Effortlessly combine Crews (autonomy) and Flows (precision) to create complex, real-world automations.
- **Deep Customization**: Tailor every aspect—from high-level workflows down to low-level internal prompts and agent behaviors.
- **Reliable Performance**: Consistent results across simple tasks and complex, enterprise-level automations.
- **Thriving Community**: Backed by robust documentation and over 100,000 certified developers, providing exceptional support and guidance.
Choose CrewAI to easily build powerful, adaptable, and production-ready AI automations.
## Examples
You can test different real-life examples of AI crews in the [CrewAI-examples repo](https://github.com/crewAIInc/crewAI-examples?tab=readme-ov-file):
- [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/landing_page_generator)
- [Having Human input on the execution](https://docs.crewai.com/how-to/Human-Input-on-Execution)
- [Trip Planner](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/trip_planner)
- [Stock Analysis](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/stock_analysis)
### Quick Tutorial
[![CrewAI Tutorial](https://img.youtube.com/vi/tnejrr-0a94/maxresdefault.jpg)](https://www.youtube.com/watch?v=tnejrr-0a94 "CrewAI Tutorial")
### Write Job Descriptions
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/job-posting) or watch a video below:
[![Jobs postings](https://img.youtube.com/vi/u98wEMz-9to/maxresdefault.jpg)](https://www.youtube.com/watch?v=u98wEMz-9to "Jobs postings")
### Trip Planner
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/trip_planner) or watch a video below:
[![Trip Planner](https://img.youtube.com/vi/xis7rWp-hjs/maxresdefault.jpg)](https://www.youtube.com/watch?v=xis7rWp-hjs "Trip Planner")
### Stock Analysis
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/crews/stock_analysis) or watch a video below:
[![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/maxresdefault.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")
### Using Crews and Flows Together
CrewAI's power truly shines when combining Crews with Flows to create sophisticated automation pipelines.
CrewAI flows support logical operators like `or_` and `and_` to combine multiple conditions. This can be used with `@start`, `@listen`, or `@router` decorators to create complex triggering conditions.
- `or_`: Triggers when any of the specified conditions are met.
- `and_`: Triggers when all of the specified conditions are met.
Here's how you can orchestrate multiple Crews within a Flow:
```python
from crewai.flow.flow import Flow, listen, start, router, or_
from crewai import Crew, Agent, Task, Process
from pydantic import BaseModel


# Define structured state for precise control
class MarketState(BaseModel):
    sentiment: str = "neutral"
    confidence: float = 0.0
    recommendations: list = []


class AdvancedAnalysisFlow(Flow[MarketState]):
    @start()
    def fetch_market_data(self):
        # Demonstrate low-level control with structured state
        self.state.sentiment = "analyzing"
        return {"sector": "tech", "timeframe": "1W"}  # These parameters match the task description template

    @listen(fetch_market_data)
    def analyze_with_crew(self, market_data):
        # Show crew agency through specialized roles
        analyst = Agent(
            role="Senior Market Analyst",
            goal="Conduct deep market analysis with expert insight",
            backstory="You're a veteran analyst known for identifying subtle market patterns"
        )
        researcher = Agent(
            role="Data Researcher",
            goal="Gather and validate supporting market data",
            backstory="You excel at finding and correlating multiple data sources"
        )
        analysis_task = Task(
            description="Analyze {sector} sector data for the past {timeframe}",
            expected_output="Detailed market analysis with confidence score",
            agent=analyst
        )
        research_task = Task(
            description="Find supporting data to validate the analysis",
            expected_output="Corroborating evidence and potential contradictions",
            agent=researcher
        )
        # Demonstrate crew autonomy
        analysis_crew = Crew(
            agents=[analyst, researcher],
            tasks=[analysis_task, research_task],
            process=Process.sequential,
            verbose=True
        )
        return analysis_crew.kickoff(inputs=market_data)  # Pass market_data as named inputs

    @router(analyze_with_crew)
    def determine_next_steps(self):
        # Show flow control with conditional routing
        if self.state.confidence > 0.8:
            return "high_confidence"
        elif self.state.confidence > 0.5:
            return "medium_confidence"
        return "low_confidence"

    @listen("high_confidence")
    def execute_strategy(self):
        # Demonstrate complex decision making
        strategy_crew = Crew(
            agents=[
                Agent(role="Strategy Expert",
                      goal="Develop optimal market strategy")
            ],
            tasks=[
                Task(description="Create detailed strategy based on analysis",
                     expected_output="Step-by-step action plan")
            ]
        )
        return strategy_crew.kickoff()

    @listen(or_("medium_confidence", "low_confidence"))
    def request_additional_analysis(self):
        self.state.recommendations.append("Gather more data")
        return "Additional analysis required"
```
This example demonstrates how to:
1. Use Python code for basic data operations
2. Create and execute Crews as steps in your workflow
3. Use Flow decorators to manage the sequence of operations
4. Implement conditional branching based on Crew results
## Connecting Your Crew to a Model
CrewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models.
## How CrewAI Compares
**CrewAI's Advantage**: CrewAI combines autonomous agent intelligence with precise workflow control through its unique Crews and Flows architecture. The framework excels at both high-level orchestration and low-level customization, enabling complex, production-grade systems with granular control.
- **LangGraph**: While LangGraph provides a foundation for building agent workflows, its approach requires significant boilerplate code and complex state management patterns. The framework's tight coupling with LangChain can limit flexibility when implementing custom agent behaviors or integrating with external systems.
*P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb)).*
- **Autogen**: While Autogen excels at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
## Contribution
CrewAI is open-source and we welcome contributions. If you're looking to contribute, please:
- Fork the repository.
- Create a new branch for your feature.
- Add your feature or improvement.
- Send a pull request.
- We appreciate your input!
### Installing Dependencies
```bash
uv lock
uv sync
```
### Virtual Env
```bash
uv venv
```
### Pre-commit hooks
```bash
pre-commit install
```
### Running Tests
```bash
uv run pytest .
```
### Running static type checks
```bash
uvx mypy src
```
### Packaging
```bash
uv build
```
### Installing Locally
```bash
pip install dist/*.tar.gz
```
## Telemetry
CrewAI uses anonymous telemetry to collect usage data with the main purpose of helping us improve the library by focusing our efforts on the most used features, integrations and tools.
It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, with the exception of the conditions mentioned. When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights while respecting user privacy. Users can disable telemetry by setting the environment variable OTEL_SDK_DISABLED to true.
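
For example, to disable telemetry for the current shell session:

```shell
export OTEL_SDK_DISABLED=true
```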
Data collected includes:
- Version of CrewAI
  - So we can understand how many users are using the latest version
- Version of Python
  - So we can decide on what versions to better support
- General OS (e.g. number of CPUs, macOS/Windows/Linux)
  - So we know what OS we should focus on and if we could build specific OS-related features
- Number of agents and tasks in a crew
  - So we make sure we are testing internally with similar use cases and educate people on the best practices
- Crew Process being used
  - Understand where we should focus our efforts
- If Agents are using memory or allowing delegation
  - Understand if we improved the features or maybe even drop them
- If Tasks are being executed in parallel or sequentially
  - Understand if we should focus more on parallel execution
- Language model being used
  - Improved support on most used languages
- Roles of agents in a crew
  - Understand high level use cases so we can build better tools, integrations and examples about it
- Tools names available
  - Understand out of the publicly available tools, which ones are being used the most so we can improve them
Users can opt-in to Further Telemetry, sharing the complete telemetry data by setting the `share_crew` attribute to `True` on their Crews. Enabling `share_crew` results in the collection of detailed crew and task execution data, including `goal`, `backstory`, `context`, and `output` of tasks. This enables a deeper insight into usage patterns while respecting the user's choice to share.
## License
CrewAI is released under the [MIT License](https://github.com/crewAIInc/crewAI/blob/main/LICENSE).
## Frequently Asked Questions (FAQ)
### General
- [What exactly is CrewAI?](#q-what-exactly-is-crewai)
- [How do I install CrewAI?](#q-how-do-i-install-crewai)
- [Does CrewAI depend on LangChain?](#q-does-crewai-depend-on-langchain)
- [Is CrewAI open-source?](#q-is-crewai-open-source)
- [Does CrewAI collect data from users?](#q-does-crewai-collect-data-from-users)
### Features and Capabilities
- [Can CrewAI handle complex use cases?](#q-can-crewai-handle-complex-use-cases)
- [Can I use CrewAI with local AI models?](#q-can-i-use-crewai-with-local-ai-models)
- [What makes Crews different from Flows?](#q-what-makes-crews-different-from-flows)
- [How is CrewAI better than LangChain?](#q-how-is-crewai-better-than-langchain)
- [Does CrewAI support fine-tuning or training custom models?](#q-does-crewai-support-fine-tuning-or-training-custom-models)
### Resources and Community
- [Where can I find real-world CrewAI examples?](#q-where-can-i-find-real-world-crewai-examples)
- [How can I contribute to CrewAI?](#q-how-can-i-contribute-to-crewai)
### Enterprise Features
- [What additional features does CrewAI AMP offer?](#q-what-additional-features-does-crewai-amp-offer)
- [Is CrewAI AMP available for cloud and on-premise deployments?](#q-is-crewai-amp-available-for-cloud-and-on-premise-deployments)
- [Can I try CrewAI AMP for free?](#q-can-i-try-crewai-amp-for-free)
### Q: What exactly is CrewAI?
A: CrewAI is a standalone, lean, and fast Python framework built specifically for orchestrating autonomous AI agents. Unlike frameworks like LangChain, CrewAI does not rely on external dependencies, making it leaner, faster, and simpler.
### Q: How do I install CrewAI?
A: Install CrewAI using pip:
```shell
pip install crewai
```
For additional tools, use:
```shell
pip install 'crewai[tools]'
```
### Q: Does CrewAI depend on LangChain?
A: No. CrewAI is built entirely from the ground up, with no dependencies on LangChain or other agent frameworks. This ensures a lean, fast, and flexible experience.
### Q: Can CrewAI handle complex use cases?
A: Yes. CrewAI excels at both simple and highly complex real-world scenarios, offering deep customization options at both high and low levels, from internal prompts to sophisticated workflow orchestration.
### Q: Can I use CrewAI with local AI models?
A: Absolutely! CrewAI supports various language models, including local ones. Tools like Ollama and LM Studio allow seamless integration. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details.
### Q: What makes Crews different from Flows?
A: Crews provide autonomous agent collaboration, ideal for tasks requiring flexible decision-making and dynamic interaction. Flows offer precise, event-driven control, ideal for managing detailed execution paths and secure state management. You can seamlessly combine both for maximum effectiveness.
### Q: How is CrewAI better than LangChain?
A: CrewAI provides simpler, more intuitive APIs, faster execution speeds, more reliable and consistent results, robust documentation, and an active community—addressing common criticisms and limitations associated with LangChain.
### Q: Is CrewAI open-source?
A: Yes, CrewAI is open-source and actively encourages community contributions and collaboration.
### Q: Does CrewAI collect data from users?
A: CrewAI collects anonymous telemetry data strictly for improvement purposes. Sensitive data such as prompts, tasks, or API responses are never collected unless explicitly enabled by the user.
### Q: Where can I find real-world CrewAI examples?
A: Check out practical examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), covering use cases like trip planners, stock analysis, and job postings.
### Q: How can I contribute to CrewAI?
A: Contributions are warmly welcomed! Fork the repository, create your branch, implement your changes, and submit a pull request. See the Contribution section of the README for detailed guidelines.
### Q: What additional features does CrewAI AMP offer?
A: CrewAI AMP provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.
### Q: Is CrewAI AMP available for cloud and on-premise deployments?
A: Yes, CrewAI AMP supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.
### Q: Can I try CrewAI AMP for free?
A: Yes, you can explore part of the CrewAI AMP Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.
### Q: Does CrewAI support fine-tuning or training custom models?
A: Yes, CrewAI can integrate with custom-trained or fine-tuned models, allowing you to enhance your agents with domain-specific knowledge and accuracy.
### Q: Can CrewAI agents interact with external tools and APIs?
A: Absolutely! CrewAI agents can easily integrate with external tools, APIs, and databases, empowering them to leverage real-world data and resources.
### Q: Is CrewAI suitable for production environments?
A: Yes, CrewAI is explicitly designed with production-grade standards, ensuring reliability, stability, and scalability for enterprise deployments.
### Q: How scalable is CrewAI?
A: CrewAI is highly scalable, supporting simple automations and large-scale enterprise workflows involving numerous agents and complex tasks simultaneously.
### Q: Does CrewAI offer debugging and monitoring tools?
A: Yes, CrewAI AMP includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.
### Q: What programming languages does CrewAI support?
A: CrewAI is primarily Python-based but easily integrates with services and APIs written in any programming language through its flexible API integration capabilities.
### Q: Does CrewAI offer educational resources for beginners?
A: Yes, CrewAI provides extensive beginner-friendly tutorials, courses, and documentation through learn.crewai.com, supporting developers at all skill levels.
### Q: Can CrewAI automate human-in-the-loop workflows?
A: Yes, CrewAI fully supports human-in-the-loop workflows, allowing seamless collaboration between human experts and AI agents for enhanced decision-making.
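A minimal sketch (assumes an interactive terminal session; `human_input=True` pauses for feedback before the crew moves on):
```python
from crewai import Agent, Crew, Task

reviewer = Agent(
    role="Editor",
    goal="Polish marketing copy",
    backstory="A meticulous editor.",
)
draft = Task(
    description="Draft a short product announcement.",
    expected_output="One announcement paragraph.",
    agent=reviewer,
    human_input=True,  # ask a human to review the final answer
)
Crew(agents=[reviewer], tasks=[draft]).kickoff()
```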

lib/crewai/pyproject.toml

@@ -0,0 +1,149 @@
[project]
name = "crewai"
dynamic = ["version"]
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
authors = [
{ name = "Joao Moura", email = "joao@crewai.com" }
]
requires-python = ">=3.10, <3.14"
dependencies = [
# Core Dependencies
"pydantic>=2.11.9",
"openai>=1.13.3",
"instructor>=1.3.3",
# Text Processing
"pdfplumber>=0.11.4",
"regex>=2024.9.11",
# Telemetry and Monitoring
"opentelemetry-api>=1.30.0",
"opentelemetry-sdk>=1.30.0",
"opentelemetry-exporter-otlp-proto-http>=1.30.0",
# Data Handling
"chromadb~=1.1.0",
"tokenizers>=0.20.3",
"openpyxl>=3.1.5",
"pyvis>=0.3.2",
# Authentication and Security
"python-dotenv>=1.1.1",
"pyjwt>=2.9.0",
# Configuration and Utils
"click>=8.1.7",
"appdirs>=1.4.4",
"jsonref>=1.1.0",
"json-repair==0.25.2",
"uv>=0.4.25",
"tomli-w>=1.1.0",
"tomli>=2.0.2",
"json5>=0.10.0",
"portalocker==2.7.0",
"pydantic-settings>=2.10.1",
"mcp>=1.16.0",
]
[project.urls]
Homepage = "https://crewai.com"
Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = [
"crewai-tools==1.0.0b3",
]
embeddings = [
"tiktoken~=0.8.0"
]
pdfplumber = [
"pdfplumber>=0.11.4",
]
pandas = [
"pandas>=2.2.3",
]
openpyxl = [
"openpyxl>=3.1.5",
]
mem0 = ["mem0ai>=0.1.94"]
docling = [
"docling>=2.12.0",
]
aisuite = [
"aisuite>=0.1.11",
]
qdrant = [
"qdrant-client[fastembed]>=1.14.3",
]
aws = [
"boto3>=1.40.38",
]
watson = [
"ibm-watsonx-ai>=1.3.39",
]
voyageai = [
"voyageai>=0.3.5",
]
litellm = [
"litellm>=1.74.9",
]
bedrock = [
"boto3>=1.40.45",
]
google-genai = [
"google-genai>=1.2.0",
]
azure-ai-inference = [
"azure-ai-inference>=1.0.0b9",
]
anthropic = [
"anthropic>=0.69.0",
]
# a2a = [
# "a2a-sdk~=0.3.9",
# "httpx-sse>=0.4.0",
# ]
[project.scripts]
crewai = "crewai.cli.cli:crewai"
# PyTorch index configuration, since torch 2.5.0 is not compatible with python 3.13
[[tool.uv.index]]
name = "pytorch-nightly"
url = "https://download.pytorch.org/whl/nightly/cpu"
explicit = true
[[tool.uv.index]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
[tool.uv.sources]
torch = [
{ index = "pytorch-nightly", marker = "python_version >= '3.13'" },
{ index = "pytorch", marker = "python_version < '3.13'" },
]
torchvision = [
{ index = "pytorch-nightly", marker = "python_version >= '3.13'" },
{ index = "pytorch", marker = "python_version < '3.13'" },
]
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.hatch.version]
path = "src/crewai/__init__.py"
# Declare mutually exclusive extras due to conflicting httpx requirements
# a2a requires httpx>=0.28.1, while aisuite requires httpx>=0.27.0,<0.28.0
# [tool.uv]
# conflicts = [
# [
# { extra = "a2a" },
# { extra = "aisuite" },
# ],
# ]
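The optional-dependency groups above map directly to pip extras; for example:
```shell
# Install CrewAI with the Anthropic provider extra declared above
pip install 'crewai[anthropic]'

# Extras can be combined, e.g. LiteLLM fallback plus AWS support
pip install 'crewai[litellm,aws]'
```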


@@ -0,0 +1,87 @@
import threading
from typing import Any
import urllib.request
import warnings
from crewai.agent import Agent
from crewai.crew import Crew
from crewai.crews.crew_output import CrewOutput
from crewai.flow.flow import Flow
from crewai.knowledge.knowledge import Knowledge
from crewai.llm import LLM
from crewai.llms.base_llm import BaseLLM
from crewai.process import Process
from crewai.task import Task
from crewai.tasks.llm_guardrail import LLMGuardrail
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry.telemetry import Telemetry
def _suppress_pydantic_deprecation_warnings() -> None:
"""Suppress Pydantic deprecation warnings using targeted monkey patch."""
original_warn = warnings.warn
def filtered_warn(
message: Any,
category: type | None = None,
stacklevel: int = 1,
source: Any = None,
) -> Any:
if (
category
and hasattr(category, "__module__")
and category.__module__ == "pydantic.warnings"
):
return None
return original_warn(message, category, stacklevel + 1, source)
warnings.warn = filtered_warn # type: ignore[assignment]
_suppress_pydantic_deprecation_warnings()
__version__ = "1.0.0b3"
_telemetry_submitted = False
def _track_install() -> None:
"""Track package installation/first-use via Scarf analytics."""
global _telemetry_submitted
if _telemetry_submitted or Telemetry._is_telemetry_disabled():
return
try:
pixel_url = "https://api.scarf.sh/v2/packages/CrewAI/crewai/docs/00f2dad1-8334-4a39-934e-003b2e1146db"
req = urllib.request.Request(pixel_url) # noqa: S310
req.add_header("User-Agent", f"CrewAI-Python/{__version__}")
with urllib.request.urlopen(req, timeout=2): # noqa: S310
_telemetry_submitted = True
except Exception: # noqa: S110
pass
def _track_install_async() -> None:
"""Track installation in background thread to avoid blocking imports."""
if not Telemetry._is_telemetry_disabled():
thread = threading.Thread(target=_track_install, daemon=True)
thread.start()
_track_install_async()
__all__ = [
"LLM",
"Agent",
"BaseLLM",
"Crew",
"CrewOutput",
"Flow",
"Knowledge",
"LLMGuardrail",
"Process",
"Task",
"TaskOutput",
"__version__",
]
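Everything listed in `__all__` above is importable from the package root; a quick sketch:
```python
from crewai import Agent, Crew, Process, Task

analyst = Agent(
    role="Analyst",
    goal="Explain quarterly numbers",
    backstory="Loves spreadsheets.",
)
job = Task(
    description="Summarize the Q3 figures.",
    expected_output="Three bullet points.",
    agent=analyst,
)
crew = Crew(agents=[analyst], tasks=[job], process=Process.sequential)
# crew.kickoff() runs the task with whatever LLM the environment configures.
```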

File diff suppressed because it is too large.


@@ -0,0 +1,13 @@
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.parser import AgentAction, AgentFinish, OutputParserError, parse
from crewai.agents.tools_handler import ToolsHandler
__all__ = [
"AgentAction",
"AgentFinish",
"CacheHandler",
"OutputParserError",
"ToolsHandler",
"parse",
]


@@ -0,0 +1,45 @@
from __future__ import annotations
from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, Any
from pydantic import ConfigDict, PrivateAttr
from crewai.agents.agent_builder.base_agent import BaseAgent
if TYPE_CHECKING:
from crewai.tools.base_tool import BaseTool
class BaseAgentAdapter(BaseAgent, ABC):
"""Base class for all agent adapters in CrewAI.
This abstract class defines the common interface and functionality that all
agent adapters must implement. It extends BaseAgent to maintain compatibility
with the CrewAI framework while adding adapter-specific requirements.
"""
adapted_structured_output: bool = False
_agent_config: dict[str, Any] | None = PrivateAttr(default=None)
model_config = ConfigDict(arbitrary_types_allowed=True)
def __init__(self, agent_config: dict[str, Any] | None = None, **kwargs: Any):
super().__init__(adapted_agent=True, **kwargs)
self._agent_config = agent_config
@abstractmethod
def configure_tools(self, tools: list[BaseTool] | None = None) -> None:
"""Configure and adapt tools for the specific agent implementation.
Args:
tools: Optional list of BaseTool instances to be configured
"""
def configure_structured_output(self, structured_output: Any) -> None:
"""Configure the structured output for the specific agent implementation.
Args:
structured_output: The structured output to be configured
"""


@@ -0,0 +1,154 @@
"""Base converter adapter for structured output conversion."""
from __future__ import annotations
from abc import ABC, abstractmethod
import json
import re
from typing import TYPE_CHECKING, Final, Literal
from crewai.utilities.converter import generate_model_description
if TYPE_CHECKING:
from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
from crewai.task import Task
_CODE_BLOCK_PATTERN: Final[re.Pattern[str]] = re.compile(
r"```(?:json)?\s*([\s\S]*?)```"
)
_JSON_OBJECT_PATTERN: Final[re.Pattern[str]] = re.compile(r"\{[\s\S]*}")
class BaseConverterAdapter(ABC):
"""Abstract base class for converter adapters in CrewAI.
Defines the common interface for converting agent outputs to structured formats.
All converter adapters must implement the methods defined here.
Attributes:
agent_adapter: The agent adapter instance.
_output_format: The expected output format (json, pydantic, or None).
_schema: The schema description for the expected output.
"""
def __init__(self, agent_adapter: BaseAgentAdapter) -> None:
"""Initialize the converter adapter.
Args:
agent_adapter: The agent adapter to configure for structured output.
"""
self.agent_adapter = agent_adapter
self._output_format: Literal["json", "pydantic"] | None = None
self._schema: str | None = None
@abstractmethod
def configure_structured_output(self, task: Task) -> None:
"""Configure agents to return structured output.
Must support both JSON and Pydantic output formats.
Args:
task: The task requiring structured output.
"""
@abstractmethod
def enhance_system_prompt(self, base_prompt: str) -> str:
"""Enhance the system prompt with structured output instructions.
Args:
base_prompt: The original system prompt.
Returns:
Enhanced prompt with structured output guidance.
"""
def post_process_result(self, result: str) -> str:
"""Post-process the result to ensure proper string format.
Extracts valid JSON from text that may contain markdown or other formatting.
Args:
result: The raw result from agent execution.
Returns:
Processed result as a string.
"""
if not self._output_format:
return result
return self._extract_json_from_text(result)
@staticmethod
def _validate_json(text: str) -> str | None:
"""Validate if text is valid JSON and return it, or None if invalid.
Args:
text: The text to validate as JSON.
Returns:
The text if it's valid JSON, None otherwise.
"""
try:
json.loads(text)
return text
except json.JSONDecodeError:
return None
@staticmethod
def _extract_json_from_text(result: str) -> str:
"""Extract valid JSON from text that may contain markdown or other formatting.
This method provides a comprehensive approach to extracting JSON from LLM responses,
handling cases where JSON may be wrapped in Markdown code blocks or embedded in text.
Args:
result: The text potentially containing JSON.
Returns:
Extracted JSON string if found and valid, otherwise the original result.
"""
if not isinstance(result, str):
return str(result)
if valid := BaseConverterAdapter._validate_json(result):
return valid
for match in _CODE_BLOCK_PATTERN.finditer(result):
if valid := BaseConverterAdapter._validate_json(match.group(1).strip()):
return valid
for match in _JSON_OBJECT_PATTERN.finditer(result):
if valid := BaseConverterAdapter._validate_json(match.group()):
return valid
return result
@staticmethod
def _configure_format_from_task(
task: Task,
) -> tuple[Literal["json", "pydantic"] | None, str | None]:
"""Determine output format and schema from task requirements.
This is a helper method that examines the task's output requirements
and returns the appropriate format type and schema description.
Args:
task: The task containing output format requirements.
Returns:
A tuple of (output_format, schema) where both may be None if no
structured output is required.
"""
if not (task.output_json or task.output_pydantic):
return None, None
if task.output_json:
return "json", generate_model_description(task.output_json)
if task.output_pydantic:
return "pydantic", generate_model_description(task.output_pydantic)
return None, None
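A small sketch of the extraction fallbacks implemented above: a fenced code block is tried first, then any bare JSON object embedded in surrounding text.
```python
tick = "`" * 3  # avoid a literal fence inside this example
fenced = f'Here you go:\n{tick}json\n{{"answer": 42}}\n{tick}'
bare = 'The result is {"answer": 42}, as requested.'

# Both inputs reduce to the embedded JSON object.
assert BaseConverterAdapter._extract_json_from_text(fenced) == '{"answer": 42}'
assert BaseConverterAdapter._extract_json_from_text(bare) == '{"answer": 42}'
```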


@@ -0,0 +1,38 @@
from __future__ import annotations
from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from crewai.tools.base_tool import BaseTool
class BaseToolAdapter(ABC):
"""Base class for all tool adapters in CrewAI.
This abstract class defines the common interface that all tool adapters
must implement. It provides the structure for adapting CrewAI tools to
different frameworks and platforms.
"""
def __init__(self, tools: list[BaseTool] | None = None):
self.original_tools = tools or []
self.converted_tools: list[Any] = []
@abstractmethod
def configure_tools(self, tools: list[BaseTool]) -> None:
"""Configure and convert tools for the specific implementation.
Args:
tools: List of BaseTool instances to be configured and converted
"""
def tools(self) -> list[Any]:
"""Return all converted tools."""
return self.converted_tools
@staticmethod
def sanitize_tool_name(tool_name: str) -> str:
"""Sanitize tool name for API compatibility."""
return tool_name.replace(" ", "_")


@@ -0,0 +1 @@
"""LangGraph adapter for crewAI."""


@@ -0,0 +1,315 @@
"""LangGraph agent adapter for CrewAI integration.
This module contains the LangGraphAgentAdapter class that integrates LangGraph ReAct agents
with CrewAI's agent system. Provides memory persistence, tool integration, and structured
output functionality.
"""
from collections.abc import Callable
from typing import Any, cast
from pydantic import ConfigDict, Field, PrivateAttr
from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
from crewai.agents.agent_adapters.langgraph.langgraph_tool_adapter import (
LangGraphToolAdapter,
)
from crewai.agents.agent_adapters.langgraph.protocols import (
LangGraphCheckPointMemoryModule,
LangGraphPrebuiltModule,
)
from crewai.agents.agent_adapters.langgraph.structured_output_converter import (
LangGraphConverterAdapter,
)
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionErrorEvent,
AgentExecutionStartedEvent,
)
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.tools.base_tool import BaseTool
from crewai.utilities import Logger
from crewai.utilities.converter import Converter
from crewai.utilities.import_utils import require
class LangGraphAgentAdapter(BaseAgentAdapter):
"""Adapter for LangGraph agents to work with CrewAI.
This adapter integrates LangGraph's ReAct agents with CrewAI's agent system,
providing memory persistence, tool integration, and structured output support.
"""
model_config = ConfigDict(arbitrary_types_allowed=True)
_logger: Logger = PrivateAttr(default_factory=Logger)
_tool_adapter: LangGraphToolAdapter = PrivateAttr()
_graph: Any = PrivateAttr(default=None)
_memory: Any = PrivateAttr(default=None)
_max_iterations: int = PrivateAttr(default=10)
function_calling_llm: Any = Field(default=None)
step_callback: Callable[..., Any] | None = Field(default=None)
model: str = Field(default="gpt-4o")
verbose: bool = Field(default=False)
def __init__(
self,
role: str,
goal: str,
backstory: str,
tools: list[BaseTool] | None = None,
llm: Any = None,
max_iterations: int = 10,
agent_config: dict[str, Any] | None = None,
**kwargs,
) -> None:
"""Initialize the LangGraph agent adapter.
Args:
role: The role description for the agent.
goal: The primary goal the agent should achieve.
backstory: Background information about the agent.
tools: Optional list of tools available to the agent.
llm: Language model to use, defaults to gpt-4o.
max_iterations: Maximum number of iterations for task execution.
agent_config: Additional configuration for the LangGraph agent.
**kwargs: Additional arguments passed to the base adapter.
"""
super().__init__(
role=role,
goal=goal,
backstory=backstory,
tools=tools,
llm=llm or self.model,
agent_config=agent_config,
**kwargs,
)
self._tool_adapter = LangGraphToolAdapter(tools=tools)
self._converter_adapter: LangGraphConverterAdapter = LangGraphConverterAdapter(
self
)
self._max_iterations = max_iterations
self._setup_graph()
def _setup_graph(self) -> None:
"""Set up the LangGraph workflow graph.
Initializes the memory saver and creates a ReAct agent with the configured
tools, memory checkpointer, and debug settings.
"""
memory_saver: type[Any] = cast(
LangGraphCheckPointMemoryModule,
require(
"langgraph.checkpoint.memory",
purpose="LangGraph core functionality",
),
).MemorySaver
create_react_agent: Callable[..., Any] = cast(
LangGraphPrebuiltModule,
require(
"langgraph.prebuilt",
purpose="LangGraph core functionality",
),
).create_react_agent
self._memory = memory_saver()
converted_tools: list[Any] = self._tool_adapter.tools()
if self._agent_config:
self._graph = create_react_agent(
model=self.llm,
tools=converted_tools,
checkpointer=self._memory,
debug=self.verbose,
**self._agent_config,
)
else:
self._graph = create_react_agent(
model=self.llm,
tools=converted_tools or [],
checkpointer=self._memory,
debug=self.verbose,
)
def _build_system_prompt(self) -> str:
"""Build a system prompt for the LangGraph agent.
Creates a prompt that includes the agent's role, goal, and backstory,
then enhances it through the converter adapter for structured output.
Returns:
The complete system prompt string.
"""
base_prompt = f"""
You are {self.role}.
Your goal is: {self.goal}
Your backstory: {self.backstory}
When working on tasks, think step-by-step and use the available tools when necessary.
"""
return self._converter_adapter.enhance_system_prompt(base_prompt)
def execute_task(
self,
task: Any,
context: str | None = None,
tools: list[BaseTool] | None = None,
) -> str:
"""Execute a task using the LangGraph workflow.
Configures the agent, processes the task through the LangGraph workflow,
and handles event emission for execution tracking.
Args:
task: The task object to execute.
context: Optional context information for the task.
tools: Optional additional tools for this specific execution.
Returns:
The final answer from the task execution.
Raises:
Exception: If task execution fails.
"""
self.create_agent_executor(tools)
self.configure_structured_output(task)
try:
task_prompt = task.prompt() if hasattr(task, "prompt") else str(task)
if context:
task_prompt = self.i18n.slice("task_with_context").format(
task=task_prompt, context=context
)
crewai_event_bus.emit(
self,
event=AgentExecutionStartedEvent(
agent=self,
tools=self.tools,
task_prompt=task_prompt,
task=task,
),
)
session_id = f"task_{id(task)}"
config: dict[str, dict[str, str]] = {
"configurable": {"thread_id": session_id}
}
result: dict[str, Any] = self._graph.invoke(
{
"messages": [
("system", self._build_system_prompt()),
("user", task_prompt),
]
},
config,
)
messages: list[Any] = result.get("messages", [])
last_message: Any = messages[-1] if messages else None
final_answer: str = ""
if isinstance(last_message, dict):
final_answer = last_message.get("content", "")
elif hasattr(last_message, "content"):
final_answer = getattr(last_message, "content", "")
final_answer = (
self._converter_adapter.post_process_result(final_answer)
or "Task execution completed but no clear answer was provided."
)
crewai_event_bus.emit(
self,
event=AgentExecutionCompletedEvent(
agent=self, task=task, output=final_answer
),
)
return final_answer
except Exception as e:
self._logger.log("error", f"Error executing LangGraph task: {e!s}")
crewai_event_bus.emit(
self,
event=AgentExecutionErrorEvent(
agent=self,
task=task,
error=str(e),
),
)
raise
def create_agent_executor(self, tools: list[BaseTool] | None = None) -> None:
"""Configure the LangGraph agent for execution.
Args:
tools: Optional tools to configure for the agent.
"""
self.configure_tools(tools)
def configure_tools(self, tools: list[BaseTool] | None = None) -> None:
"""Configure tools for the LangGraph agent.
Merges additional tools with existing ones and updates the graph's
available tools through the tool adapter.
Args:
tools: Optional additional tools to configure.
"""
if tools:
all_tools: list[BaseTool] = list(self.tools or []) + list(tools or [])
self._tool_adapter.configure_tools(all_tools)
available_tools: list[Any] = self._tool_adapter.tools()
self._graph.tools = available_tools
def get_delegation_tools(self, agents: list[BaseAgent]) -> list[BaseTool]:
"""Implement delegation tools support for LangGraph.
Creates delegation tools that allow this agent to delegate tasks to other agents.
Args:
agents: List of agents available for delegation.
Returns:
List of delegation tools.
"""
agent_tools: AgentTools = AgentTools(agents=agents)
return agent_tools.tools()
@staticmethod
def get_output_converter(
llm: Any, text: str, model: Any, instructions: str
) -> Converter:
"""Convert output format if needed.
Args:
llm: Language model instance.
text: Text to convert.
model: Model configuration.
instructions: Conversion instructions.
Returns:
Converter instance for output transformation.
"""
return Converter(llm=llm, text=text, model=model, instructions=instructions)
def configure_structured_output(self, task: Any) -> None:
"""Configure the structured output for LangGraph.
Uses the converter adapter to set up structured output formatting
based on the task requirements.
Args:
task: Task object containing output requirements.
"""
self._converter_adapter.configure_structured_output(task)
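A hedged usage sketch for the adapter above (requires the optional langgraph dependencies; values are illustrative):
```python
adapter = LangGraphAgentAdapter(
    role="Planner",
    goal="Break a request into ordered steps",
    backstory="A methodical project planner.",
    llm="gpt-4o",        # any model accepted by create_react_agent
    max_iterations=5,
)
# adapter.execute_task(task) runs the task through the ReAct graph and emits
# AgentExecutionStarted/Completed events on the crewai event bus.
```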


@@ -0,0 +1,99 @@
"""LangGraph tool adapter for CrewAI tool integration.
This module contains the LangGraphToolAdapter class that converts CrewAI tools
to LangGraph-compatible format using langchain_core.tools.
"""
from collections.abc import Awaitable
import inspect
from typing import Any
from crewai.agents.agent_adapters.base_tool_adapter import BaseToolAdapter
from crewai.tools.base_tool import BaseTool
class LangGraphToolAdapter(BaseToolAdapter):
"""Adapts CrewAI tools to LangGraph agent tool compatible format.
Converts CrewAI BaseTool instances to langchain_core.tools format
that can be used by LangGraph agents.
"""
def __init__(self, tools: list[BaseTool] | None = None) -> None:
"""Initialize the tool adapter.
Args:
tools: Optional list of CrewAI tools to adapt.
"""
super().__init__()
self.original_tools: list[BaseTool] = tools or []
self.converted_tools: list[Any] = []
def configure_tools(self, tools: list[BaseTool]) -> None:
"""Configure and convert CrewAI tools to LangGraph-compatible format.
LangGraph expects tools in langchain_core.tools format. This method
converts CrewAI BaseTool instances to StructuredTool instances.
Args:
tools: List of CrewAI tools to convert.
"""
from langchain_core.tools import BaseTool as LangChainBaseTool, StructuredTool
converted_tools: list[Any] = []
if self.original_tools:
all_tools: list[BaseTool] = tools + self.original_tools
else:
all_tools = tools
for tool in all_tools:
if isinstance(tool, LangChainBaseTool):
converted_tools.append(tool)
continue
sanitized_name: str = self.sanitize_tool_name(tool.name)
async def tool_wrapper(
*args: Any, tool: BaseTool = tool, **kwargs: Any
) -> Any:
"""Wrapper function to adapt CrewAI tool calls to LangGraph format.
Args:
*args: Positional arguments for the tool.
tool: The CrewAI tool to wrap.
**kwargs: Keyword arguments for the tool.
Returns:
The result from the tool execution.
"""
output: Any | Awaitable[Any]
if len(args) > 0 and isinstance(args[0], str):
output = tool.run(args[0])
elif "input" in kwargs:
output = tool.run(kwargs["input"])
else:
output = tool.run(**kwargs)
if inspect.isawaitable(output):
result: Any = await output
else:
result = output
return result
converted_tool: StructuredTool = StructuredTool(
name=sanitized_name,
description=tool.description,
func=tool_wrapper,
args_schema=tool.args_schema,
)
converted_tools.append(converted_tool)
self.converted_tools = converted_tools
def tools(self) -> list[Any]:
"""Get the list of converted tools.
Returns:
List of LangGraph-compatible tools.
"""
return self.converted_tools or []


@@ -0,0 +1,55 @@
"""Type protocols for LangGraph modules."""
from typing import Any, Protocol, runtime_checkable
@runtime_checkable
class LangGraphMemorySaver(Protocol):
"""Protocol for LangGraph MemorySaver.
Defines the interface for LangGraph's memory persistence mechanism.
"""
def __init__(self) -> None:
"""Initialize the memory saver."""
...
@runtime_checkable
class LangGraphCheckPointMemoryModule(Protocol):
"""Protocol for LangGraph checkpoint memory module.
Defines the interface for modules containing memory checkpoint functionality.
"""
MemorySaver: type[LangGraphMemorySaver]
@runtime_checkable
class LangGraphPrebuiltModule(Protocol):
"""Protocol for LangGraph prebuilt module.
Defines the interface for modules containing prebuilt agent factories.
"""
def create_react_agent(
self,
model: Any,
tools: list[Any],
checkpointer: Any,
debug: bool = False,
**kwargs: Any,
) -> Any:
"""Create a ReAct agent with the given configuration.
Args:
model: The language model to use for the agent.
tools: List of tools available to the agent.
checkpointer: Memory checkpointer for state persistence.
debug: Whether to enable debug mode.
**kwargs: Additional configuration options.
Returns:
The configured ReAct agent instance.
"""
...


@@ -0,0 +1,77 @@
"""LangGraph structured output converter for CrewAI task integration.
This module contains the LangGraphConverterAdapter class that handles structured
output conversion for LangGraph agents, supporting JSON and Pydantic model formats.
"""
from typing import Any
from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter
class LangGraphConverterAdapter(BaseConverterAdapter):
"""Adapter for handling structured output conversion in LangGraph agents.
Converts task output requirements into system prompt modifications and
post-processing logic to ensure agents return properly structured outputs.
Attributes:
_system_prompt_appendix: Cached system prompt instructions for structured output.
"""
def __init__(self, agent_adapter: Any) -> None:
"""Initialize the converter adapter with a reference to the agent adapter.
Args:
agent_adapter: The LangGraph agent adapter instance.
"""
super().__init__(agent_adapter=agent_adapter)
self.agent_adapter: Any = agent_adapter
self._system_prompt_appendix: str | None = None
def configure_structured_output(self, task: Any) -> None:
"""Configure the structured output for LangGraph.
Analyzes the task's output requirements and sets up the necessary
formatting and validation logic.
Args:
task: The task object containing output format specifications.
"""
self._output_format, self._schema = self._configure_format_from_task(task)
self._system_prompt_appendix = self._generate_system_prompt_appendix()
def _generate_system_prompt_appendix(self) -> str:
"""Generate an appendix for the system prompt to enforce structured output.
Creates instructions that are appended to the system prompt to guide
the agent in producing properly formatted output.
Returns:
System prompt appendix string, or empty string if no structured output.
"""
if not self._output_format or not self._schema:
return ""
return f"""
Important: Your final answer MUST be provided in the following structured format:
{self._schema}
DO NOT include any markdown code blocks, backticks, or other formatting around your response.
The output should be raw JSON that exactly matches the specified schema.
"""
def enhance_system_prompt(self, original_prompt: str) -> str:
"""Add structured output instructions to the system prompt if needed.
Args:
original_prompt: The base system prompt.
Returns:
Enhanced system prompt with structured output instructions.
"""
if not self._system_prompt_appendix:
return original_prompt
return f"{original_prompt}\n{self._system_prompt_appendix}"


@@ -0,0 +1 @@
"""OpenAI agent adapters for crewAI."""


@@ -0,0 +1,244 @@
"""OpenAI agents adapter for CrewAI integration.
This module contains the OpenAIAgentAdapter class that integrates OpenAI Assistants
with CrewAI's agent system, providing tool integration and structured output support.
"""
from typing import Any, cast
from pydantic import ConfigDict, Field, PrivateAttr
from typing_extensions import Unpack
from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
from crewai.agents.agent_adapters.openai_agents.openai_agent_tool_adapter import (
OpenAIAgentToolAdapter,
)
from crewai.agents.agent_adapters.openai_agents.protocols import (
AgentKwargs,
OpenAIAgent as OpenAIAgentProtocol,
OpenAIAgentsModule,
)
from crewai.agents.agent_adapters.openai_agents.structured_output_converter import (
OpenAIConverterAdapter,
)
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionErrorEvent,
AgentExecutionStartedEvent,
)
from crewai.tools import BaseTool
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.utilities import Logger
from crewai.utilities.import_utils import require
openai_agents_module = cast(
OpenAIAgentsModule,
require(
"agents",
purpose="OpenAI agents functionality",
),
)
OpenAIAgent = openai_agents_module.Agent
Runner = openai_agents_module.Runner
enable_verbose_stdout_logging = openai_agents_module.enable_verbose_stdout_logging
class OpenAIAgentAdapter(BaseAgentAdapter):
"""Adapter for OpenAI Assistants.
Integrates OpenAI Assistants API with CrewAI's agent system, providing
tool configuration, structured output handling, and task execution.
"""
model_config = ConfigDict(arbitrary_types_allowed=True)
_openai_agent: OpenAIAgentProtocol = PrivateAttr()
_logger: Logger = PrivateAttr(default_factory=Logger)
_active_thread: str | None = PrivateAttr(default=None)
function_calling_llm: Any = Field(default=None)
step_callback: Any = Field(default=None)
_tool_adapter: OpenAIAgentToolAdapter = PrivateAttr()
_converter_adapter: OpenAIConverterAdapter = PrivateAttr()
def __init__(
self,
**kwargs: Unpack[AgentKwargs],
) -> None:
"""Initialize the OpenAI agent adapter.
Args:
**kwargs: All initialization arguments including role, goal, backstory,
model, tools, and agent_config.
Raises:
ImportError: If OpenAI agent dependencies are not installed.
"""
self.llm = kwargs.pop("model", "gpt-4o-mini")
super().__init__(**kwargs)
self._tool_adapter = OpenAIAgentToolAdapter(tools=kwargs.get("tools"))
self._converter_adapter = OpenAIConverterAdapter(agent_adapter=self)
def _build_system_prompt(self) -> str:
"""Build a system prompt for the OpenAI agent.
Creates a prompt containing the agent's role, goal, and backstory,
then enhances it with structured output instructions if needed.
Returns:
The complete system prompt string.
"""
base_prompt = f"""
You are {self.role}.
Your goal is: {self.goal}
Your backstory: {self.backstory}
When working on tasks, think step-by-step and use the available tools when necessary.
"""
return self._converter_adapter.enhance_system_prompt(base_prompt)
def execute_task(
self,
task: Any,
context: str | None = None,
tools: list[BaseTool] | None = None,
) -> str:
"""Execute a task using the OpenAI Assistant.
Configures the assistant, processes the task, and handles event emission
for execution tracking.
Args:
task: The task object to execute.
context: Optional context information for the task.
tools: Optional additional tools for this execution.
Returns:
The final answer from the task execution.
Raises:
Exception: If task execution fails.
"""
self._converter_adapter.configure_structured_output(task)
self.create_agent_executor(tools)
if self.verbose:
enable_verbose_stdout_logging()
try:
task_prompt: str = task.prompt()
if context:
task_prompt = self.i18n.slice("task_with_context").format(
task=task_prompt, context=context
)
crewai_event_bus.emit(
self,
event=AgentExecutionStartedEvent(
agent=self,
tools=self.tools,
task_prompt=task_prompt,
task=task,
),
)
if not self.agent_executor or not isinstance(self.agent_executor, Runner):
raise ValueError("Agent executor is not configured.")
result: Any = self.agent_executor.run_sync(self._openai_agent, task_prompt)
final_answer: str = self.handle_execution_result(result)
crewai_event_bus.emit(
self,
event=AgentExecutionCompletedEvent(
agent=self, task=task, output=final_answer
),
)
return final_answer
except Exception as e:
self._logger.log("error", f"Error executing OpenAI task: {e!s}")
crewai_event_bus.emit(
self,
event=AgentExecutionErrorEvent(
agent=self,
task=task,
error=str(e),
),
)
raise
def create_agent_executor(self, tools: list[BaseTool] | None = None) -> None:
"""Configure the OpenAI agent for execution.
While OpenAI handles execution differently through Runner,
this method sets up tools and agent configuration.
Args:
tools: Optional tools to configure for the agent.
Notes:
TODO: Properly type agent_executor in BaseAgent to avoid type issues
when assigning Runner class to this attribute.
"""
all_tools: list[BaseTool] = list(self.tools or []) + list(tools or [])
instructions: str = self._build_system_prompt()
self._openai_agent = OpenAIAgent(
name=self.role,
instructions=instructions,
model=self.llm,
**self._agent_config or {},
)
if all_tools:
self.configure_tools(all_tools)
self.agent_executor = Runner
def configure_tools(self, tools: list[BaseTool] | None = None) -> None:
"""Configure tools for the OpenAI Assistant.
Args:
tools: Optional tools to configure for the assistant.
"""
if tools:
self._tool_adapter.configure_tools(tools)
if self._tool_adapter.converted_tools:
self._openai_agent.tools = self._tool_adapter.converted_tools
def handle_execution_result(self, result: Any) -> str:
"""Process OpenAI Assistant execution result.
Converts any structured output to a string through the converter adapter.
Args:
result: The execution result from the OpenAI assistant.
Returns:
Processed result as a string.
"""
return self._converter_adapter.post_process_result(result.final_output)
def get_delegation_tools(self, agents: list[BaseAgent]) -> list[BaseTool]:
"""Implement delegation tools support.
Creates delegation tools that allow this agent to delegate tasks to other agents.
Args:
agents: List of agents available for delegation.
Returns:
List of delegation tools.
"""
agent_tools: AgentTools = AgentTools(agents=agents)
return agent_tools.tools()
def configure_structured_output(self, task: Any) -> None:
"""Configure the structured output for the specific agent implementation.
Args:
task: The task object containing output format specifications.
"""
self._converter_adapter.configure_structured_output(task)
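A hedged usage sketch for this adapter (the `agents` package, i.e. the OpenAI Agents SDK, must be importable since the module requires it eagerly):
```python
adapter = OpenAIAgentAdapter(
    role="Support Agent",
    goal="Answer billing questions",
    backstory="A friendly support specialist.",
    model="gpt-4o-mini",  # popped from kwargs and stored as self.llm
)
# adapter.execute_task(task) builds the OpenAI agent with the system prompt,
# runs it via Runner.run_sync, and post-processes structured output when the
# task declares output_json / output_pydantic.
```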


@@ -0,0 +1,163 @@
"""OpenAI agent tool adapter for CrewAI tool integration.
This module contains the OpenAIAgentToolAdapter class that converts CrewAI tools
to OpenAI Assistant-compatible format using the agents library.
"""
from collections.abc import Awaitable
import inspect
import json
import re
from typing import Any, cast
from crewai.agents.agent_adapters.base_tool_adapter import BaseToolAdapter
from crewai.agents.agent_adapters.openai_agents.protocols import (
OpenAIFunctionTool,
OpenAITool,
)
from crewai.tools import BaseTool
from crewai.utilities.import_utils import require
agents_module = cast(
Any,
require(
"agents",
purpose="OpenAI agents functionality",
),
)
FunctionTool = agents_module.FunctionTool
Tool = agents_module.Tool
class OpenAIAgentToolAdapter(BaseToolAdapter):
"""Adapter for OpenAI Assistant tools.
Converts CrewAI BaseTool instances to OpenAI Assistant FunctionTool format
that can be used by OpenAI agents.
"""
def __init__(self, tools: list[BaseTool] | None = None) -> None:
"""Initialize the tool adapter.
Args:
tools: Optional list of CrewAI tools to adapt.
"""
super().__init__()
self.original_tools: list[BaseTool] = tools or []
self.converted_tools: list[OpenAITool] = []
def configure_tools(self, tools: list[BaseTool]) -> None:
"""Configure tools for the OpenAI Assistant.
Merges provided tools with original tools and converts them to
OpenAI Assistant format.
Args:
tools: List of CrewAI tools to configure.
"""
if self.original_tools:
all_tools: list[BaseTool] = tools + self.original_tools
else:
all_tools = tools
if all_tools:
self.converted_tools = self._convert_tools_to_openai_format(all_tools)
@staticmethod
def _convert_tools_to_openai_format(
tools: list[BaseTool] | None,
) -> list[OpenAITool]:
"""Convert CrewAI tools to OpenAI Assistant tool format.
Args:
tools: List of CrewAI tools to convert.
Returns:
List of OpenAI Assistant FunctionTool instances.
"""
if not tools:
return []
def sanitize_tool_name(name: str) -> str:
"""Convert tool name to match OpenAI's required pattern.
Args:
name: Original tool name.
Returns:
Sanitized tool name matching OpenAI requirements.
"""
return re.sub(r"[^a-zA-Z0-9_-]", "_", name).lower()
def create_tool_wrapper(tool: BaseTool) -> Any:
"""Create a wrapper function that handles the OpenAI function tool interface.
Args:
tool: The CrewAI tool to wrap.
Returns:
Async wrapper function for OpenAI agent integration.
"""
async def wrapper(context_wrapper: Any, arguments: Any) -> Any:
"""Wrapper function to adapt CrewAI tool calls to OpenAI format.
Args:
context_wrapper: OpenAI context wrapper.
arguments: Tool arguments from OpenAI.
Returns:
Tool execution result.
"""
# Get the parameter name from the schema
param_name: str = next(
iter(tool.args_schema.model_json_schema()["properties"].keys())
)
# Handle different argument types
args_dict: dict[str, Any]
if isinstance(arguments, dict):
args_dict = arguments
elif isinstance(arguments, str):
try:
args_dict = json.loads(arguments)
except json.JSONDecodeError:
args_dict = {param_name: arguments}
else:
args_dict = {param_name: str(arguments)}
# Run the tool with the processed arguments
output: Any | Awaitable[Any] = tool._run(**args_dict)
# Await if the tool returned a coroutine
if inspect.isawaitable(output):
result: Any = await output
else:
result = output
# Ensure the result is JSON serializable
if isinstance(result, (dict, list, str, int, float, bool, type(None))):
return result
return str(result)
return wrapper
openai_tools: list[OpenAITool] = []
for tool in tools:
schema: dict[str, Any] = tool.args_schema.model_json_schema()
schema.update({"additionalProperties": False, "type": "object"})
openai_tool: OpenAIFunctionTool = cast(
OpenAIFunctionTool,
FunctionTool(
name=sanitize_tool_name(tool.name),
description=tool.description,
params_json_schema=schema,
on_invoke_tool=create_tool_wrapper(tool),
),
)
openai_tools.append(openai_tool)
return openai_tools


@@ -0,0 +1,74 @@
"""Type protocols for OpenAI agents modules."""
from collections.abc import Callable
from typing import Any, Protocol, TypedDict, runtime_checkable
from crewai.tools.base_tool import BaseTool
class AgentKwargs(TypedDict, total=False):
"""Typed dict for agent initialization kwargs."""
role: str
goal: str
backstory: str
model: str
tools: list[BaseTool] | None
agent_config: dict[str, Any] | None
@runtime_checkable
class OpenAIAgent(Protocol):
"""Protocol for OpenAI Agent."""
def __init__(
self,
name: str,
instructions: str,
model: str,
**kwargs: Any,
) -> None:
"""Initialize the OpenAI agent."""
...
tools: list[Any]
output_type: Any
@runtime_checkable
class OpenAIRunner(Protocol):
"""Protocol for OpenAI Runner."""
@classmethod
def run_sync(cls, agent: OpenAIAgent, message: str) -> Any:
"""Run agent synchronously with a message."""
...
@runtime_checkable
class OpenAIAgentsModule(Protocol):
"""Protocol for OpenAI agents module."""
Agent: type[OpenAIAgent]
Runner: type[OpenAIRunner]
enable_verbose_stdout_logging: Callable[[], None]
@runtime_checkable
class OpenAITool(Protocol):
"""Protocol for OpenAI Tool."""
@runtime_checkable
class OpenAIFunctionTool(Protocol):
"""Protocol for OpenAI FunctionTool."""
def __init__(
self,
name: str,
description: str,
params_json_schema: dict[str, Any],
on_invoke_tool: Any,
) -> None:
"""Initialize the function tool."""
...


@@ -0,0 +1,67 @@
"""OpenAI structured output converter for CrewAI task integration.
This module contains the OpenAIConverterAdapter class that handles structured
output conversion for OpenAI agents, supporting JSON and Pydantic model formats.
"""
from typing import Any
from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter
from crewai.utilities.i18n import I18N
class OpenAIConverterAdapter(BaseConverterAdapter):
"""Adapter for handling structured output conversion in OpenAI agents.
This adapter enhances the OpenAI agent to handle structured output formats
and post-processes the results when needed.
Attributes:
_output_model: The Pydantic model for the output (OpenAI-specific).
"""
def __init__(self, agent_adapter: Any) -> None:
"""Initialize the converter adapter with a reference to the agent adapter.
Args:
agent_adapter: The OpenAI agent adapter instance.
"""
super().__init__(agent_adapter=agent_adapter)
self.agent_adapter: Any = agent_adapter
self._output_model: Any = None
def configure_structured_output(self, task: Any) -> None:
"""Configure the structured output for OpenAI agent based on task requirements.
Args:
task: The task containing output format requirements.
"""
self._output_format, self._schema = self._configure_format_from_task(task)
self._output_model = None
if task.output_json:
self.agent_adapter._openai_agent.output_type = task.output_json
self._output_model = task.output_json
elif task.output_pydantic:
self.agent_adapter._openai_agent.output_type = task.output_pydantic
self._output_model = task.output_pydantic
def enhance_system_prompt(self, base_prompt: str) -> str:
"""Enhance the base system prompt with structured output requirements if needed.
Args:
base_prompt: The original system prompt.
Returns:
Enhanced system prompt with output format instructions if needed.
"""
if not self._output_format:
return base_prompt
output_schema: str = (
I18N()
.slice("formatted_task_instructions")
.format(output_format=self._schema)
)
return f"{base_prompt}\n\n{output_schema}"


@@ -0,0 +1,446 @@
from __future__ import annotations
from abc import ABC, abstractmethod
from collections.abc import Callable
from copy import copy as shallow_copy
from hashlib import md5
from typing import Any, Literal
import uuid
from pydantic import (
UUID4,
BaseModel,
Field,
PrivateAttr,
field_validator,
model_validator,
)
from pydantic_core import PydanticCustomError
from typing_extensions import Self
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.tools_handler import ToolsHandler
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.knowledge_config import KnowledgeConfig
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.rag.embeddings.types import EmbedderConfig
from crewai.security.security_config import SecurityConfig
from crewai.tools.base_tool import BaseTool, Tool
from crewai.utilities.config import process_config
from crewai.utilities.i18n import I18N
from crewai.utilities.logger import Logger
from crewai.utilities.rpm_controller import RPMController
from crewai.utilities.string_utils import interpolate_only
PlatformApp = Literal[
"asana",
"box",
"clickup",
"github",
"gmail",
"google_calendar",
"google_sheets",
"hubspot",
"jira",
"linear",
"notion",
"salesforce",
"shopify",
"slack",
"stripe",
"zendesk",
]
PlatformAppOrAction = PlatformApp | str
class BaseAgent(BaseModel, ABC):
"""Abstract Base Class for all third party agents compatible with CrewAI.
Attributes:
id (UUID4): Unique identifier for the agent.
role (str): Role of the agent.
goal (str): Objective of the agent.
backstory (str): Backstory of the agent.
cache (bool): Whether the agent should use a cache for tool usage.
config (dict[str, Any] | None): Configuration for the agent.
verbose (bool): Verbose mode for the Agent Execution.
max_rpm (int | None): Maximum number of requests per minute for the agent execution.
allow_delegation (bool): Allow delegation of tasks to agents.
tools (list[Any] | None): Tools at the agent's disposal.
max_iter (int): Maximum iterations for an agent to execute a task.
agent_executor: An instance of the CrewAgentExecutor class.
llm (Any): Language model that will run the agent.
crew (Any): Crew to which the agent belongs.
i18n (I18N): Internationalization settings.
cache_handler ([CacheHandler]): An instance of the CacheHandler class.
tools_handler ([ToolsHandler]): An instance of the ToolsHandler class.
max_tokens: Maximum number of tokens for the agent to generate in a response.
knowledge_sources: Knowledge sources for the agent.
knowledge_storage: Custom knowledge storage for the agent.
security_config: Security configuration for the agent, including fingerprinting.
apps: List of enterprise applications that the agent can access through CrewAI AMP Tools.
Methods:
execute_task(task: Any, context: str | None = None, tools: list[BaseTool] | None = None) -> str:
Abstract method to execute a task.
create_agent_executor(tools=None) -> None:
Abstract method to create an agent executor.
get_delegation_tools(agents: list["BaseAgent"]):
Abstract method to set the agent's task tools for handling delegation and asking questions of other agents in the crew.
get_platform_tools(apps: list[PlatformAppOrAction]):
Abstract method to get platform tools for the specified list of applications and/or application/action combinations.
get_output_converter(llm, model, instructions):
Abstract method to get the converter class for the agent to create json/pydantic outputs.
interpolate_inputs(inputs: dict[str, Any]) -> None:
Interpolate inputs into the agent description and backstory.
set_cache_handler(cache_handler: CacheHandler) -> None:
Set the cache handler for the agent.
copy() -> "BaseAgent":
Create a copy of the agent.
set_rpm_controller(rpm_controller: RPMController) -> None:
Set the rpm controller for the agent.
set_private_attrs() -> "BaseAgent":
Set private attributes.
"""
__hash__ = object.__hash__ # type: ignore
_logger: Logger = PrivateAttr(default_factory=lambda: Logger(verbose=False))
_rpm_controller: RPMController | None = PrivateAttr(default=None)
_request_within_rpm_limit: Any = PrivateAttr(default=None)
_original_role: str | None = PrivateAttr(default=None)
_original_goal: str | None = PrivateAttr(default=None)
_original_backstory: str | None = PrivateAttr(default=None)
_token_process: TokenProcess = PrivateAttr(default_factory=TokenProcess)
id: UUID4 = Field(default_factory=uuid.uuid4, frozen=True)
role: str = Field(description="Role of the agent")
goal: str = Field(description="Objective of the agent")
backstory: str = Field(description="Backstory of the agent")
config: dict[str, Any] | None = Field(
description="Configuration for the agent", default=None, exclude=True
)
cache: bool = Field(
default=True, description="Whether the agent should use a cache for tool usage."
)
verbose: bool = Field(
default=False, description="Verbose mode for the Agent Execution"
)
max_rpm: int | None = Field(
default=None,
description="Maximum number of requests per minute for the agent execution to be respected.",
)
allow_delegation: bool = Field(
default=False,
description="Enable agent to delegate and ask questions among each other.",
)
tools: list[BaseTool] | None = Field(
default_factory=list, description="Tools at agents' disposal"
)
max_iter: int = Field(
default=25, description="Maximum iterations for an agent to execute a task"
)
agent_executor: Any = Field(
default=None, description="An instance of the CrewAgentExecutor class."
)
llm: Any = Field(
default=None, description="Language model that will run the agent."
)
crew: Any = Field(default=None, description="Crew to which the agent belongs.")
i18n: I18N = Field(
default_factory=I18N, description="Internationalization settings."
)
cache_handler: CacheHandler | None = Field(
default=None, description="An instance of the CacheHandler class."
)
tools_handler: ToolsHandler = Field(
default_factory=ToolsHandler,
description="An instance of the ToolsHandler class.",
)
tools_results: list[dict[str, Any]] = Field(
default=[], description="Results of the tools used by the agent."
)
max_tokens: int | None = Field(
default=None, description="Maximum number of tokens for the agent's execution."
)
knowledge: Knowledge | None = Field(
default=None, description="Knowledge for the agent."
)
knowledge_sources: list[BaseKnowledgeSource] | None = Field(
default=None,
description="Knowledge sources for the agent.",
)
knowledge_storage: Any | None = Field(
default=None,
description="Custom knowledge storage for the agent.",
)
security_config: SecurityConfig = Field(
default_factory=SecurityConfig,
description="Security configuration for the agent, including fingerprinting.",
)
callbacks: list[Callable] = Field(
default=[], description="Callbacks to be used for the agent"
)
adapted_agent: bool = Field(
default=False, description="Whether the agent is adapted"
)
knowledge_config: KnowledgeConfig | None = Field(
default=None,
description="Knowledge configuration for the agent such as limits and threshold",
)
apps: list[PlatformAppOrAction] | None = Field(
default=None,
description="List of applications or application/action combinations that the agent can access through CrewAI Platform. Can contain app names (e.g., 'gmail') or specific actions (e.g., 'gmail/send_email')",
)
mcps: list[str] | None = Field(
default=None,
description="List of MCP server references. Supports 'https://server.com/path' for external servers and 'crewai-amp:mcp-name' for AMP marketplace. Use '#tool_name' suffix for specific tools.",
)
@model_validator(mode="before")
@classmethod
def process_model_config(cls, values):
return process_config(values, cls)
@field_validator("tools")
@classmethod
def validate_tools(cls, tools: list[Any]) -> list[BaseTool]:
"""Validate and process the tools provided to the agent.
This method ensures that each tool is either an instance of BaseTool
or an object with 'name', 'func', and 'description' attributes. If the
tool meets these criteria, it is processed and added to the list of
tools. Otherwise, a ValueError is raised.
"""
if not tools:
return []
processed_tools = []
required_attrs = ["name", "func", "description"]
for tool in tools:
if isinstance(tool, BaseTool):
processed_tools.append(tool)
elif all(hasattr(tool, attr) for attr in required_attrs):
# Tool has the required attributes, create a Tool instance
processed_tools.append(Tool.from_langchain(tool))
else:
raise ValueError(
f"Invalid tool type: {type(tool)}. "
"Tool must be an instance of BaseTool or "
"an object with 'name', 'func', and 'description' attributes."
)
return processed_tools
@field_validator("apps")
@classmethod
def validate_apps(
cls, apps: list[PlatformAppOrAction] | None
) -> list[PlatformAppOrAction] | None:
if not apps:
return apps
validated_apps = []
for app in apps:
if app.count("/") > 1:
raise ValueError(
f"Invalid app format '{app}'. Apps can only have one '/' for app/action format (e.g., 'gmail/send_email')"
)
validated_apps.append(app)
return list(set(validated_apps))
@field_validator("mcps")
@classmethod
def validate_mcps(cls, mcps: list[str] | None) -> list[str] | None:
if not mcps:
return mcps
validated_mcps = []
for mcp in mcps:
if mcp.startswith(("https://", "crewai-amp:")):
validated_mcps.append(mcp)
else:
raise ValueError(
f"Invalid MCP reference: {mcp}. Must start with 'https://' or 'crewai-amp:'"
)
return list(set(validated_mcps))
@model_validator(mode="after")
def validate_and_set_attributes(self):
# Validate required fields
for field in ["role", "goal", "backstory"]:
if getattr(self, field) is None:
raise ValueError(
f"{field} must be provided either directly or through config"
)
# Set private attributes
self._logger = Logger(verbose=self.verbose)
if self.max_rpm and not self._rpm_controller:
self._rpm_controller = RPMController(
max_rpm=self.max_rpm, logger=self._logger
)
if not self._token_process:
self._token_process = TokenProcess()
# Initialize security_config if not provided
if self.security_config is None:
self.security_config = SecurityConfig()
return self
@field_validator("id", mode="before")
@classmethod
def _deny_user_set_id(cls, v: UUID4 | None) -> None:
if v:
raise PydanticCustomError(
"may_not_set_field", "This field is not to be set by the user.", {}
)
@model_validator(mode="after")
def set_private_attrs(self):
"""Set private attributes."""
self._logger = Logger(verbose=self.verbose)
if self.max_rpm and not self._rpm_controller:
self._rpm_controller = RPMController(
max_rpm=self.max_rpm, logger=self._logger
)
if not self._token_process:
self._token_process = TokenProcess()
return self
@property
def key(self):
source = [
self._original_role or self.role,
self._original_goal or self.goal,
self._original_backstory or self.backstory,
]
return md5("|".join(source).encode(), usedforsecurity=False).hexdigest()
@abstractmethod
def execute_task(
self,
task: Any,
context: str | None = None,
tools: list[BaseTool] | None = None,
) -> str:
pass
@abstractmethod
def create_agent_executor(self, tools=None) -> None:
pass
@abstractmethod
def get_delegation_tools(self, agents: list[BaseAgent]) -> list[BaseTool]:
"""Set the task tools that init BaseAgenTools class."""
@abstractmethod
def get_platform_tools(self, apps: list[PlatformAppOrAction]) -> list[BaseTool]:
"""Get platform tools for the specified list of applications and/or application/action combinations."""
@abstractmethod
def get_mcp_tools(self, mcps: list[str]) -> list[BaseTool]:
"""Get MCP tools for the specified list of MCP server references."""
def copy(self) -> Self: # type: ignore # Signature of "copy" incompatible with supertype "BaseModel"
"""Create a deep copy of the Agent."""
exclude = {
"id",
"_logger",
"_rpm_controller",
"_request_within_rpm_limit",
"_token_process",
"agent_executor",
"tools",
"tools_handler",
"cache_handler",
"llm",
"knowledge_sources",
"knowledge_storage",
"knowledge",
"apps",
"mcps",
"actions",
}
# Copy llm
existing_llm = shallow_copy(self.llm)
copied_knowledge = shallow_copy(self.knowledge)
copied_knowledge_storage = shallow_copy(self.knowledge_storage)
# Properly copy knowledge sources if they exist
existing_knowledge_sources = None
if self.knowledge_sources:
# Create a shared storage instance for all knowledge sources
shared_storage = (
self.knowledge_sources[0].storage if self.knowledge_sources else None
)
existing_knowledge_sources = []
for source in self.knowledge_sources:
copied_source = (
source.model_copy()
if hasattr(source, "model_copy")
else shallow_copy(source)
)
# Ensure all copied sources use the same storage instance
copied_source.storage = shared_storage
existing_knowledge_sources.append(copied_source)
copied_data = self.model_dump(exclude=exclude)
copied_data = {k: v for k, v in copied_data.items() if v is not None}
return type(self)(
**copied_data,
llm=existing_llm,
tools=self.tools,
knowledge_sources=existing_knowledge_sources,
knowledge=copied_knowledge,
knowledge_storage=copied_knowledge_storage,
)
def interpolate_inputs(self, inputs: dict[str, Any]) -> None:
"""Interpolate inputs into the agent description and backstory."""
if self._original_role is None:
self._original_role = self.role
if self._original_goal is None:
self._original_goal = self.goal
if self._original_backstory is None:
self._original_backstory = self.backstory
if inputs:
self.role = interpolate_only(
input_string=self._original_role, inputs=inputs
)
self.goal = interpolate_only(
input_string=self._original_goal, inputs=inputs
)
self.backstory = interpolate_only(
input_string=self._original_backstory, inputs=inputs
)
def set_cache_handler(self, cache_handler: CacheHandler) -> None:
"""Set the cache handler for the agent.
Args:
cache_handler: An instance of the CacheHandler class.
"""
self.tools_handler = ToolsHandler()
if self.cache:
self.cache_handler = cache_handler
self.tools_handler.cache = cache_handler
self.create_agent_executor()
def set_rpm_controller(self, rpm_controller: RPMController) -> None:
"""Set the rpm controller for the agent.
Args:
rpm_controller: An instance of the RPMController class.
"""
if not self._rpm_controller:
self._rpm_controller = rpm_controller
self.create_agent_executor()
def set_knowledge(self, crew_embedder: EmbedderConfig | None = None):
pass
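A short sketch of the `apps` and `mcps` validators above, using the concrete `Agent` subclass (values are illustrative):
```python
from crewai import Agent

agent = Agent(
    role="Ops Bot",
    goal="Automate routine tickets",
    backstory="Handles the team's busywork.",
    apps=["gmail", "gmail/send_email"],   # app or app/action form
    mcps=[
        "https://mcp.example.com/sse",    # external MCP server
        "crewai-amp:search#web_search",   # AMP reference with tool suffix
    ],
)

# Agent(..., apps=["gmail/send/email"]) raises ValueError: at most one '/'.
# Agent(..., mcps=["ftp://mcp.example"]) raises ValueError: wrong scheme.
```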


@@ -0,0 +1,175 @@
from __future__ import annotations
import time
from typing import TYPE_CHECKING
from crewai.events.event_listener import event_listener
from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.utilities.converter import ConverterError
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
from crewai.utilities.printer import Printer
if TYPE_CHECKING:
from crewai.agent import Agent
from crewai.crew import Crew
from crewai.task import Task
from crewai.utilities.i18n import I18N
from crewai.utilities.types import LLMMessage
class CrewAgentExecutorMixin:
crew: Crew
agent: Agent
task: Task
iterations: int
max_iter: int
messages: list[LLMMessage]
_i18n: I18N
_printer: Printer = Printer()
def _create_short_term_memory(self, output) -> None:
"""Create and save a short-term memory item if conditions are met."""
if (
self.crew
and self.agent
and self.task
and "Action: Delegate work to coworker" not in output.text
):
try:
if (
hasattr(self.crew, "_short_term_memory")
and self.crew._short_term_memory
):
self.crew._short_term_memory.save(
value=output.text,
metadata={
"observation": self.task.description,
},
)
except Exception as e:
self.agent._logger.log(
"error", f"Failed to add to short term memory: {e}"
)
def _create_external_memory(self, output) -> None:
"""Create and save a external-term memory item if conditions are met."""
if (
self.crew
and self.agent
and self.task
and hasattr(self.crew, "_external_memory")
and self.crew._external_memory
):
try:
self.crew._external_memory.save(
value=output.text,
metadata={
"description": self.task.description,
"messages": self.messages,
},
)
except Exception as e:
self.agent._logger.log(
"error", f"Failed to add to external memory: {e}"
)
def _create_long_term_memory(self, output) -> None:
"""Create and save long-term and entity memory items based on evaluation."""
if (
self.crew
and self.crew._long_term_memory
and self.crew._entity_memory
and self.task
and self.agent
):
try:
ltm_agent = TaskEvaluator(self.agent)
evaluation = ltm_agent.evaluate(self.task, output.text)
if isinstance(evaluation, ConverterError):
return
long_term_memory = LongTermMemoryItem(
task=self.task.description,
agent=self.agent.role,
quality=evaluation.quality,
datetime=str(time.time()),
expected_output=self.task.expected_output,
metadata={
"suggestions": evaluation.suggestions,
"quality": evaluation.quality,
},
)
self.crew._long_term_memory.save(long_term_memory)
entity_memories = [
EntityMemoryItem(
name=entity.name,
type=entity.type,
description=entity.description,
relationships="\n".join(
[f"- {r}" for r in entity.relationships]
),
)
for entity in evaluation.entities
]
if entity_memories:
self.crew._entity_memory.save(entity_memories)
except AttributeError as e:
self.agent._logger.log(
"error", f"Missing attributes for long term memory: {e}"
)
except Exception as e:
self.agent._logger.log(
"error", f"Failed to add to long term memory: {e}"
)
elif (
self.crew
and self.crew._long_term_memory
and self.crew._entity_memory is None
):
self._printer.print(
content="Long term memory is enabled, but entity memory is not enabled. Please configure entity memory or set memory=True to automatically enable it.",
color="bold_yellow",
)
def _ask_human_input(self, final_answer: str) -> str:
"""Prompt human input with mode-appropriate messaging."""
event_listener.formatter.pause_live_updates()
try:
self._printer.print(
content=f"\033[1m\033[95m ## Final Result:\033[00m \033[92m{final_answer}\033[00m"
)
# Training mode prompt (single iteration)
if self.crew and getattr(self.crew, "_train", False):
prompt = (
"\n\n=====\n"
"## TRAINING MODE: Provide feedback to improve the agent's performance.\n"
"This will be used to train better versions of the agent.\n"
"Please provide detailed feedback about the result quality and reasoning process.\n"
"=====\n"
)
# Regular human-in-the-loop prompt (multiple iterations)
else:
prompt = (
"\n\n=====\n"
"## HUMAN FEEDBACK: Provide feedback on the Final Result and Agent's actions.\n"
"Please follow these guidelines:\n"
" - If you are happy with the result, simply hit Enter without typing anything.\n"
" - Otherwise, provide specific improvement requests.\n"
" - You can provide multiple rounds of feedback until satisfied.\n"
"=====\n"
)
self._printer.print(content=prompt, color="bold_yellow")
response = input()
if response.strip() != "":
self._printer.print(
content="\nProcessing your feedback...", color="cyan"
)
return response
finally:
event_listener.formatter.resume_live_updates()

View File

@@ -0,0 +1,56 @@
"""Base output converter for transforming text into structured formats."""
from __future__ import annotations
from abc import ABC, abstractmethod
from typing import Any
from pydantic import BaseModel, Field
class OutputConverter(BaseModel, ABC):
"""Abstract base class for converting text to structured formats.
Uses language models to transform unstructured text into either Pydantic models
or JSON objects based on provided instructions and target schemas.
Attributes:
text: The input text to be converted.
llm: The language model used for conversion.
model: The target Pydantic model class for structuring output.
instructions: Specific instructions for the conversion process.
max_attempts: Maximum number of conversion attempts (default: 3).
"""
text: str = Field(description="Text to be converted.")
llm: Any = Field(description="The language model to be used to convert the text.")
model: type[BaseModel] = Field(
description="The model to be used to convert the text."
)
instructions: str = Field(description="Conversion instructions to the LLM.")
max_attempts: int = Field(
description="Max number of attempts to try to get the output formatted.",
default=3,
)
@abstractmethod
def to_pydantic(self, current_attempt: int = 1) -> BaseModel:
"""Convert text to a Pydantic model instance.
Args:
current_attempt: Current attempt number for retry logic.
Returns:
Pydantic model instance with structured data.
"""
@abstractmethod
def to_json(self, current_attempt: int = 1) -> dict[str, Any]:
"""Convert text to a JSON dictionary.
Args:
current_attempt: Current attempt number for retry logic.
Returns:
Dictionary containing structured JSON data.
"""

View File

@@ -0,0 +1,75 @@
"""Token usage tracking utilities.
This module provides utilities for tracking token consumption and request
metrics during agent execution.
"""
from crewai.types.usage_metrics import UsageMetrics
class TokenProcess:
"""Track token usage during agent processing.
Attributes:
total_tokens: Total number of tokens used.
prompt_tokens: Number of tokens used in prompts.
cached_prompt_tokens: Number of cached prompt tokens used.
completion_tokens: Number of tokens used in completions.
successful_requests: Number of successful requests made.
"""
def __init__(self) -> None:
"""Initialize token tracking with zero values."""
self.total_tokens: int = 0
self.prompt_tokens: int = 0
self.cached_prompt_tokens: int = 0
self.completion_tokens: int = 0
self.successful_requests: int = 0
def sum_prompt_tokens(self, tokens: int) -> None:
"""Add prompt tokens to the running totals.
Args:
tokens: Number of prompt tokens to add.
"""
self.prompt_tokens += tokens
self.total_tokens += tokens
def sum_completion_tokens(self, tokens: int) -> None:
"""Add completion tokens to the running totals.
Args:
tokens: Number of completion tokens to add.
"""
self.completion_tokens += tokens
self.total_tokens += tokens
def sum_cached_prompt_tokens(self, tokens: int) -> None:
"""Add cached prompt tokens to the running total.
Args:
tokens: Number of cached prompt tokens to add.
"""
self.cached_prompt_tokens += tokens
def sum_successful_requests(self, requests: int) -> None:
"""Add successful requests to the running total.
Args:
requests: Number of successful requests to add.
"""
self.successful_requests += requests
def get_summary(self) -> UsageMetrics:
"""Get a summary of all tracked metrics.
Returns:
UsageMetrics object with current totals.
"""
return UsageMetrics(
total_tokens=self.total_tokens,
prompt_tokens=self.prompt_tokens,
cached_prompt_tokens=self.cached_prompt_tokens,
completion_tokens=self.completion_tokens,
successful_requests=self.successful_requests,
)
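A quick usage sketch of the class above; note that cached prompt tokens are tracked separately and deliberately do not feed into total_tokens.
tp = TokenProcess()
tp.sum_prompt_tokens(120)
tp.sum_completion_tokens(40)
tp.sum_cached_prompt_tokens(80)
tp.sum_successful_requests(1)
metrics = tp.get_summary()
# metrics.total_tokens == 160 (prompt + completion only)
# metrics.cached_prompt_tokens == 80
# metrics.successful_requests == 1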

View File

@@ -0,0 +1,5 @@
from crewai.agents.cache.cache_handler import CacheHandler
__all__ = ["CacheHandler"]

View File

@@ -0,0 +1,51 @@
"""Cache handler for tool usage results."""
from typing import Any
from pydantic import BaseModel, PrivateAttr
from crewai.utilities.rw_lock import RWLock
class CacheHandler(BaseModel):
"""Handles caching of tool execution results.
Provides thread-safe in-memory caching for tool outputs based on tool name and input.
Uses a read-write lock to allow concurrent reads while ensuring exclusive write access.
Notes:
- TODO: Rename 'input' parameter to avoid shadowing builtin.
"""
_cache: dict[str, Any] = PrivateAttr(default_factory=dict)
_lock: RWLock = PrivateAttr(default_factory=RWLock)
def add(self, tool: str, input: str, output: Any) -> None:
"""Add a tool result to the cache.
Args:
tool: Name of the tool.
input: Input string used for the tool.
output: Output result from tool execution.
Notes:
- TODO: Rename 'input' parameter to avoid shadowing builtin.
"""
with self._lock.w_locked():
self._cache[f"{tool}-{input}"] = output
def read(self, tool: str, input: str) -> Any | None:
"""Retrieve a cached tool result.
Args:
tool: Name of the tool.
input: Input string used for the tool.
Returns:
Cached result if found, None otherwise.
Notes:
- TODO: Rename 'input' parameter to avoid shadowing builtin.
"""
with self._lock.r_locked():
return self._cache.get(f"{tool}-{input}")
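Usage sketch: the cache key is the plain string f"{tool}-{input}", so a read must pass exactly the same input string that was used on write.
cache = CacheHandler()
cache.add(tool="search", input='{"q": "weather in SF"}', output="72F and sunny")
cache.read(tool="search", input='{"q": "weather in SF"}')  # -> "72F and sunny"
cache.read(tool="search", input='{"q": "news"}')  # -> None (different key)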

View File

@@ -0,0 +1,28 @@
"""Constants for agent-related modules."""
import re
from typing import Final
# crewai.agents.parser constants
FINAL_ANSWER_ACTION: Final[str] = "Final Answer:"
MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE: Final[str] = (
"I did it wrong. Invalid Format: I missed the 'Action:' after 'Thought:'. I will do right next, and don't use a tool I have already used.\n"
)
MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE: Final[str] = (
"I did it wrong. Invalid Format: I missed the 'Action Input:' after 'Action:'. I will do right next, and don't use a tool I have already used.\n"
)
FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE: Final[str] = (
"I did it wrong. Tried to both perform Action and give a Final Answer at the same time, I must do one or the other"
)
UNABLE_TO_REPAIR_JSON_RESULTS: Final[list[str]] = ['""', "{}"]
ACTION_INPUT_REGEX: Final[re.Pattern[str]] = re.compile(
r"Action\s*\d*\s*:\s*(.*?)\s*Action\s*\d*\s*Input\s*\d*\s*:\s*(.*)", re.DOTALL
)
ACTION_REGEX: Final[re.Pattern[str]] = re.compile(
r"Action\s*\d*\s*:\s*(.*?)", re.DOTALL
)
ACTION_INPUT_ONLY_REGEX: Final[re.Pattern[str]] = re.compile(
r"\s*Action\s*\d*\s*Input\s*\d*\s*:\s*(.*)", re.DOTALL
)
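For illustration, how these patterns decompose a ReAct-style block (the optional \d* groups also tolerate numbered variants such as "Action 1:"):
text = "Thought: I need data\nAction: search\nAction Input: weather in SF"
match = ACTION_INPUT_REGEX.search(text)
match.group(1)  # 'search'
match.group(2)  # 'weather in SF'
ACTION_REGEX.search(text) is not None  # True
ACTION_INPUT_ONLY_REGEX.search("Action Input: 42").group(1)  # '42'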

View File

@@ -0,0 +1,539 @@
"""Agent executor for crew AI agents.
Handles agent execution flow including LLM interactions, tool execution,
and memory management.
"""
from __future__ import annotations
from collections.abc import Callable
from typing import TYPE_CHECKING, Any, Literal, cast
from pydantic import GetCoreSchemaHandler
from pydantic_core import CoreSchema, core_schema
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
from crewai.agents.parser import (
AgentAction,
AgentFinish,
OutputParserError,
)
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.logging_events import (
AgentLogsExecutionEvent,
AgentLogsStartedEvent,
)
from crewai.utilities.agent_utils import (
enforce_rpm_limit,
format_message_for_llm,
get_llm_response,
handle_agent_action_core,
handle_context_length,
handle_max_iterations_exceeded,
handle_output_parser_exception,
handle_unknown_error,
has_reached_max_iterations,
is_context_length_exceeded,
process_llm_response,
)
from crewai.utilities.constants import TRAINING_DATA_FILE
from crewai.utilities.i18n import I18N
from crewai.utilities.printer import Printer
from crewai.utilities.tool_utils import execute_tool_and_check_finality
from crewai.utilities.training_handler import CrewTrainingHandler
if TYPE_CHECKING:
from crewai.agent import Agent
from crewai.agents.tools_handler import ToolsHandler
from crewai.crew import Crew
from crewai.llms.base_llm import BaseLLM
from crewai.task import Task
from crewai.tools.base_tool import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
from crewai.tools.tool_types import ToolResult
from crewai.utilities.prompts import StandardPromptResult, SystemPromptResult
from crewai.utilities.types import LLMMessage
class CrewAgentExecutor(CrewAgentExecutorMixin):
"""Executor for crew agents.
Manages the execution lifecycle of an agent including prompt formatting,
LLM interactions, tool execution, and feedback handling.
"""
def __init__(
self,
llm: BaseLLM | Any,
task: Task,
crew: Crew,
agent: Agent,
prompt: SystemPromptResult | StandardPromptResult,
max_iter: int,
tools: list[CrewStructuredTool],
tools_names: str,
stop_words: list[str],
tools_description: str,
tools_handler: ToolsHandler,
step_callback: Any = None,
original_tools: list[BaseTool] | None = None,
function_calling_llm: BaseLLM | Any | None = None,
respect_context_window: bool = False,
request_within_rpm_limit: Callable[[], bool] | None = None,
callbacks: list[Any] | None = None,
) -> None:
"""Initialize executor.
Args:
llm: Language model instance.
task: Task to execute.
crew: Crew instance.
agent: Agent to execute.
prompt: Prompt templates.
max_iter: Maximum iterations.
tools: Available tools.
tools_names: Tool names string.
stop_words: Stop word list.
tools_description: Tool descriptions.
tools_handler: Tool handler instance.
step_callback: Optional step callback.
original_tools: Original tool list.
function_calling_llm: Optional function calling LLM.
respect_context_window: Respect context limits.
request_within_rpm_limit: RPM limit check function.
callbacks: Optional callbacks list.
"""
self._i18n: I18N = I18N()
self.llm = llm
self.task = task
self.agent = agent
self.crew = crew
self.prompt = prompt
self.tools = tools
self.tools_names = tools_names
self.stop = stop_words
self.max_iter = max_iter
self.callbacks = callbacks or []
self._printer: Printer = Printer()
self.tools_handler = tools_handler
self.original_tools = original_tools or []
self.step_callback = step_callback
self.use_stop_words = self.llm.supports_stop_words()
self.tools_description = tools_description
self.function_calling_llm = function_calling_llm
self.respect_context_window = respect_context_window
self.request_within_rpm_limit = request_within_rpm_limit
self.ask_for_human_input = False
self.messages: list[LLMMessage] = []
self.iterations = 0
self.log_error_after = 3
existing_stop = getattr(self.llm, "stop", [])
self.llm.stop = list(
set(
existing_stop + self.stop
if isinstance(existing_stop, list)
else self.stop
)
)
def invoke(self, inputs: dict[str, Any]) -> dict[str, Any]:
"""Execute the agent with given inputs.
Args:
inputs: Input dictionary containing prompt variables.
Returns:
Dictionary with agent output.
"""
if "system" in self.prompt:
system_prompt = self._format_prompt(
cast(str, self.prompt.get("system", "")), inputs
)
user_prompt = self._format_prompt(
cast(str, self.prompt.get("user", "")), inputs
)
self.messages.append(format_message_for_llm(system_prompt, role="system"))
self.messages.append(format_message_for_llm(user_prompt))
else:
user_prompt = self._format_prompt(self.prompt.get("prompt", ""), inputs)
self.messages.append(format_message_for_llm(user_prompt))
self._show_start_logs()
self.ask_for_human_input = bool(inputs.get("ask_for_human_input", False))
try:
formatted_answer = self._invoke_loop()
except AssertionError:
self._printer.print(
content="Agent failed to reach a final answer. This is likely a bug - please report it.",
color="red",
)
raise
except Exception as e:
handle_unknown_error(self._printer, e)
raise
if self.ask_for_human_input:
formatted_answer = self._handle_human_feedback(formatted_answer)
self._create_short_term_memory(formatted_answer)
self._create_long_term_memory(formatted_answer)
self._create_external_memory(formatted_answer)
return {"output": formatted_answer.output}
def _invoke_loop(self) -> AgentFinish:
"""Execute agent loop until completion.
Returns:
Final answer from the agent.
"""
formatted_answer = None
while not isinstance(formatted_answer, AgentFinish):
try:
if has_reached_max_iterations(self.iterations, self.max_iter):
formatted_answer = handle_max_iterations_exceeded(
formatted_answer,
printer=self._printer,
i18n=self._i18n,
messages=self.messages,
llm=self.llm,
callbacks=self.callbacks,
)
enforce_rpm_limit(self.request_within_rpm_limit)
answer = get_llm_response(
llm=self.llm,
messages=self.messages,
callbacks=self.callbacks,
printer=self._printer,
from_task=self.task,
from_agent=self.agent,
)
formatted_answer = process_llm_response(answer, self.use_stop_words)
if isinstance(formatted_answer, AgentAction):
# Extract agent fingerprint if available
fingerprint_context = {}
if (
self.agent
and hasattr(self.agent, "security_config")
and hasattr(self.agent.security_config, "fingerprint")
):
fingerprint_context = {
"agent_fingerprint": str(
self.agent.security_config.fingerprint
)
}
tool_result = execute_tool_and_check_finality(
agent_action=formatted_answer,
fingerprint_context=fingerprint_context,
tools=self.tools,
i18n=self._i18n,
agent_key=self.agent.key if self.agent else None,
agent_role=self.agent.role if self.agent else None,
tools_handler=self.tools_handler,
task=self.task,
agent=self.agent,
function_calling_llm=self.function_calling_llm,
)
formatted_answer = self._handle_agent_action(
formatted_answer, tool_result
)
self._invoke_step_callback(formatted_answer)
self._append_message(formatted_answer.text)
except OutputParserError as e: # noqa: PERF203
formatted_answer = handle_output_parser_exception(
e=e,
messages=self.messages,
iterations=self.iterations,
log_error_after=self.log_error_after,
printer=self._printer,
)
except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
raise e
if is_context_length_exceeded(e):
handle_context_length(
respect_context_window=self.respect_context_window,
printer=self._printer,
messages=self.messages,
llm=self.llm,
callbacks=self.callbacks,
i18n=self._i18n,
)
continue
handle_unknown_error(self._printer, e)
raise e
finally:
self.iterations += 1
# During the invoke loop, formatted_answer alternates between AgentAction
# (when the agent is using tools) and eventually becomes AgentFinish
# (when the agent reaches a final answer). This check confirms we've
# reached a final answer and helps type checking understand this transition.
if not isinstance(formatted_answer, AgentFinish):
raise RuntimeError(
"Agent execution ended without reaching a final answer. "
f"Got {type(formatted_answer).__name__} instead of AgentFinish."
)
self._show_logs(formatted_answer)
return formatted_answer
def _handle_agent_action(
self, formatted_answer: AgentAction, tool_result: ToolResult
) -> AgentAction | AgentFinish:
"""Process agent action and tool execution.
Args:
formatted_answer: Agent's action to execute.
tool_result: Result from tool execution.
Returns:
Updated action or final answer.
"""
# Special case for add_image_tool
add_image_tool = self._i18n.tools("add_image")
if (
isinstance(add_image_tool, dict)
and formatted_answer.tool.casefold().strip()
== add_image_tool.get("name", "").casefold().strip()
):
self.messages.append({"role": "assistant", "content": tool_result.result})
return formatted_answer
return handle_agent_action_core(
formatted_answer=formatted_answer,
tool_result=tool_result,
messages=self.messages,
step_callback=self.step_callback,
show_logs=self._show_logs,
)
def _invoke_step_callback(
self, formatted_answer: AgentAction | AgentFinish
) -> None:
"""Invoke step callback.
Args:
formatted_answer: Current agent response.
"""
if self.step_callback:
self.step_callback(formatted_answer)
def _append_message(
self, text: str, role: Literal["user", "assistant", "system"] = "assistant"
) -> None:
"""Add message to conversation history.
Args:
text: Message content.
role: Message role (default: assistant).
"""
self.messages.append(format_message_for_llm(text, role=role))
def _show_start_logs(self) -> None:
"""Emit agent start event."""
if self.agent is None:
raise ValueError("Agent cannot be None")
crewai_event_bus.emit(
self.agent,
AgentLogsStartedEvent(
agent_role=self.agent.role,
task_description=(self.task.description if self.task else "Not Found"),
verbose=self.agent.verbose
or (hasattr(self, "crew") and getattr(self.crew, "verbose", False)),
),
)
def _show_logs(self, formatted_answer: AgentAction | AgentFinish) -> None:
"""Emit agent execution event.
Args:
formatted_answer: Agent's response to log.
"""
if self.agent is None:
raise ValueError("Agent cannot be None")
crewai_event_bus.emit(
self.agent,
AgentLogsExecutionEvent(
agent_role=self.agent.role,
formatted_answer=formatted_answer,
verbose=self.agent.verbose
or (hasattr(self, "crew") and getattr(self.crew, "verbose", False)),
),
)
def _handle_crew_training_output(
self, result: AgentFinish, human_feedback: str | None = None
) -> None:
"""Save training data.
Args:
result: Agent's final output.
human_feedback: Optional feedback from human.
"""
agent_id = str(self.agent.id)
train_iteration = (
getattr(self.crew, "_train_iteration", None) if self.crew else None
)
if train_iteration is None or not isinstance(train_iteration, int):
self._printer.print(
content="Invalid or missing train iteration. Cannot save training data.",
color="red",
)
return
training_handler = CrewTrainingHandler(TRAINING_DATA_FILE)
training_data = training_handler.load() or {}
# Initialize or retrieve agent's training data
agent_training_data = training_data.get(agent_id, {})
if human_feedback is not None:
# Save initial output and human feedback
agent_training_data[train_iteration] = {
"initial_output": result.output,
"human_feedback": human_feedback,
}
else:
# Save improved output
if train_iteration in agent_training_data:
agent_training_data[train_iteration]["improved_output"] = result.output
else:
self._printer.print(
content=(
f"No existing training data for agent {agent_id} and iteration "
f"{train_iteration}. Cannot save improved output."
),
color="red",
)
return
# Update the training data and save
training_data[agent_id] = agent_training_data
training_handler.save(training_data)
@staticmethod
def _format_prompt(prompt: str, inputs: dict[str, str]) -> str:
"""Format prompt with input values.
Args:
prompt: Template string.
inputs: Values to substitute.
Returns:
Formatted prompt.
"""
prompt = prompt.replace("{input}", inputs["input"])
prompt = prompt.replace("{tool_names}", inputs["tool_names"])
return prompt.replace("{tools}", inputs["tools"])
def _handle_human_feedback(self, formatted_answer: AgentFinish) -> AgentFinish:
"""Process human feedback.
Args:
formatted_answer: Initial agent result.
Returns:
Final answer after feedback.
"""
human_feedback = self._ask_human_input(formatted_answer.output)
if self._is_training_mode():
return self._handle_training_feedback(formatted_answer, human_feedback)
return self._handle_regular_feedback(formatted_answer, human_feedback)
def _is_training_mode(self) -> bool:
"""Check if training mode is active.
Returns:
True if in training mode.
"""
return bool(self.crew and self.crew._train)
def _handle_training_feedback(
self, initial_answer: AgentFinish, feedback: str
) -> AgentFinish:
"""Process training feedback.
Args:
initial_answer: Initial agent output.
feedback: Training feedback.
Returns:
Improved answer.
"""
self._handle_crew_training_output(initial_answer, feedback)
self.messages.append(
format_message_for_llm(
self._i18n.slice("feedback_instructions").format(feedback=feedback)
)
)
improved_answer = self._invoke_loop()
self._handle_crew_training_output(improved_answer)
self.ask_for_human_input = False
return improved_answer
def _handle_regular_feedback(
self, current_answer: AgentFinish, initial_feedback: str
) -> AgentFinish:
"""Process regular feedback iteratively.
Args:
current_answer: Current agent output.
initial_feedback: Initial user feedback.
Returns:
Final answer after iterations.
"""
feedback = initial_feedback
answer = current_answer
while self.ask_for_human_input:
# If the user provides a blank response, assume they are happy with the result
if feedback.strip() == "":
self.ask_for_human_input = False
else:
answer = self._process_feedback_iteration(feedback)
feedback = self._ask_human_input(answer.output)
return answer
def _process_feedback_iteration(self, feedback: str) -> AgentFinish:
"""Process single feedback iteration.
Args:
feedback: User feedback.
Returns:
Updated agent response.
"""
self.messages.append(
format_message_for_llm(
self._i18n.slice("feedback_instructions").format(feedback=feedback)
)
)
return self._invoke_loop()
@classmethod
def __get_pydantic_core_schema__(
cls, _source_type: Any, _handler: GetCoreSchemaHandler
) -> CoreSchema:
"""Generate Pydantic core schema for BaseClient Protocol.
This allows the Protocol to be used in Pydantic models without
requiring arbitrary_types_allowed=True.
"""
return core_schema.any_schema()
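_format_prompt substitutes only the three placeholders {input}, {tool_names}, and {tools}; since it is a staticmethod, a standalone sketch:
formatted = CrewAgentExecutor._format_prompt(
    "Task: {input}\nTools available: {tool_names}\n{tools}",
    {
        "input": "Summarize the quarterly report",
        "tool_names": "search, calculator",
        "tools": "search(query: str), calculator(expression: str)",
    },
)
# "Task: Summarize the quarterly report\nTools available: search, calculator\n..."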

View File

@@ -0,0 +1,194 @@
"""Agent output parsing module for ReAct-style LLM responses.
This module provides parsing functionality for agent outputs that follow
the ReAct (Reasoning and Acting) format, converting them into structured
AgentAction or AgentFinish objects.
"""
from dataclasses import dataclass
from json_repair import repair_json # type: ignore[import-untyped]
from crewai.agents.constants import (
ACTION_INPUT_ONLY_REGEX,
ACTION_INPUT_REGEX,
ACTION_REGEX,
FINAL_ANSWER_ACTION,
MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE,
MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE,
UNABLE_TO_REPAIR_JSON_RESULTS,
)
from crewai.utilities.i18n import I18N
_I18N = I18N()
@dataclass
class AgentAction:
"""Represents an action to be taken by an agent."""
thought: str
tool: str
tool_input: str
text: str
result: str | None = None
@dataclass
class AgentFinish:
"""Represents the final answer from an agent."""
thought: str
output: str
text: str
class OutputParserError(Exception):
"""Exception raised when output parsing fails.
Attributes:
error: The error message.
"""
def __init__(self, error: str) -> None:
"""Initialize OutputParserError.
Args:
error: The error message.
"""
self.error = error
super().__init__(error)
def parse(text: str) -> AgentAction | AgentFinish:
"""Parse agent output text into AgentAction or AgentFinish.
Expects output to be in one of two formats.
If the output signals that an action should be taken,
should be in the below format. This will result in an AgentAction
being returned.
Thought: agent thought here
Action: search
Action Input: what is the temperature in SF?
If the output signals that a final answer should be given,
should be in the below format. This will result in an AgentFinish
being returned.
Thought: agent thought here
Final Answer: The temperature is 100 degrees
Args:
text: The agent output text to parse.
Returns:
AgentAction or AgentFinish based on the content.
Raises:
OutputParserError: If the text format is invalid.
"""
thought = _extract_thought(text)
includes_answer = FINAL_ANSWER_ACTION in text
action_match = ACTION_INPUT_REGEX.search(text)
if includes_answer:
final_answer = text.split(FINAL_ANSWER_ACTION)[-1].strip()
# Check whether the final answer ends with triple backticks.
if final_answer.endswith("```"):
# Count occurrences of triple backticks in the final answer.
count = final_answer.count("```")
# If count is odd then it's an unmatched trailing set; remove it.
if count % 2 != 0:
final_answer = final_answer[:-3].rstrip()
return AgentFinish(thought=thought, output=final_answer, text=text)
if action_match:
action = action_match.group(1)
clean_action = _clean_action(action)
action_input = action_match.group(2).strip()
tool_input = action_input.strip(" ").strip('"')
safe_tool_input = _safe_repair_json(tool_input)
return AgentAction(
thought=thought, tool=clean_action, tool_input=safe_tool_input, text=text
)
if not ACTION_REGEX.search(text):
raise OutputParserError(
f"{MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE}\n{_I18N.slice('final_answer_format')}",
)
if not ACTION_INPUT_ONLY_REGEX.search(text):
raise OutputParserError(
MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE,
)
err_format = _I18N.slice("format_without_tools")
raise OutputParserError(err_format)
def _extract_thought(text: str) -> str:
"""Extract the thought portion from the text.
Args:
text: The full agent output text.
Returns:
The extracted thought string.
"""
thought_index = text.find("\nAction")
if thought_index == -1:
thought_index = text.find("\nFinal Answer")
if thought_index == -1:
return ""
thought = text[:thought_index].strip()
# Remove any triple backticks from the thought string
return thought.replace("```", "").strip()
def _clean_action(text: str) -> str:
"""Clean action string by removing non-essential formatting characters.
Args:
text: The action text to clean.
Returns:
The cleaned action string.
"""
return text.strip().strip("*").strip()
def _safe_repair_json(tool_input: str) -> str:
"""Safely repair JSON input.
Args:
tool_input: The tool input string to repair.
Returns:
The repaired JSON string or original if repair fails.
"""
# Skip repair if the input starts and ends with square brackets
# Explanation: The JSON parser has issues handling inputs that are enclosed in square brackets ('[]').
# These are typically valid JSON arrays or strings that do not require repair. Attempting to repair such inputs
# might lead to unintended alterations, such as wrapping the entire input in additional layers or modifying
# the structure in a way that changes its meaning. By skipping the repair for inputs that start and end with
# square brackets, we preserve the integrity of these valid JSON structures and avoid unnecessary modifications.
if tool_input.startswith("[") and tool_input.endswith("]"):
return tool_input
# Before repair, handle common LLM issues:
# 1. Replace """ with " to avoid JSON parser errors
tool_input = tool_input.replace('"""', '"')
result = repair_json(tool_input)
if result in UNABLE_TO_REPAIR_JSON_RESULTS:
return tool_input
return str(result)
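End to end, the two formats described in the parse docstring map onto the two dataclasses. A rough sketch (the extracted thought keeps the "Thought:" prefix, since only the text before "\nAction"/"\nFinal Answer" is sliced off, and a tool input that is already valid JSON passes through _safe_repair_json roughly unchanged):
action = parse(
    'Thought: I need data\nAction: search\nAction Input: {"query": "weather in SF"}'
)
# AgentAction(thought='Thought: I need data', tool='search',
#             tool_input='{"query": "weather in SF"}', ...)
finish = parse("Thought: done\nFinal Answer: It is sunny.")
# AgentFinish(thought='Thought: done', output='It is sunny.', ...)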

View File

@@ -0,0 +1,74 @@
"""Tools handler for managing tool execution and caching."""
from __future__ import annotations
import json
from typing import TYPE_CHECKING, Any
from pydantic import GetCoreSchemaHandler
from pydantic_core import CoreSchema, core_schema
from crewai.tools.cache_tools.cache_tools import CacheTools
if TYPE_CHECKING:
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
class ToolsHandler:
"""Callback handler for tool usage.
Attributes:
last_used_tool: The most recently used tool calling instance.
cache: Optional cache handler for storing tool outputs.
"""
def __init__(self, cache: CacheHandler | None = None) -> None:
"""Initialize the callback handler.
Args:
cache: Optional cache handler for storing tool outputs.
"""
self.cache: CacheHandler | None = cache
self.last_used_tool: ToolCalling | InstructorToolCalling | None = None
def on_tool_use(
self,
calling: ToolCalling | InstructorToolCalling,
output: str,
should_cache: bool = True,
) -> None:
"""Run when tool ends running.
Args:
calling: The tool calling instance.
output: The output from the tool execution.
should_cache: Whether to cache the tool output.
"""
self.last_used_tool = calling
if self.cache and should_cache and calling.tool_name != CacheTools().name:
# Convert arguments to string for cache
input_str = ""
if calling.arguments:
if isinstance(calling.arguments, dict):
input_str = json.dumps(calling.arguments)
else:
input_str = str(calling.arguments)
self.cache.add(
tool=calling.tool_name,
input=input_str,
output=output,
)
@classmethod
def __get_pydantic_core_schema__(
cls, _source_type: Any, _handler: GetCoreSchemaHandler
) -> CoreSchema:
"""Generate Pydantic core schema for BaseClient Protocol.
This allows the Protocol to be used in Pydantic models without
requiring arbitrary_types_allowed=True.
"""
return core_schema.any_schema()
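A usage sketch; the ToolCalling fields shown (tool_name plus an arguments dict) are assumptions inferred from how on_tool_use reads them.
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.tools.tool_calling import ToolCalling

handler = ToolsHandler(cache=CacheHandler())
calling = ToolCalling(tool_name="search", arguments={"q": "SF"})  # fields assumed
handler.on_tool_use(calling=calling, output="sunny")
# Dict arguments are serialized with json.dumps, so the entry is stored under
# the key 'search-{"q": "SF"}' and is retrievable via CacheHandler.read.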

View File

@@ -0,0 +1,76 @@
from pathlib import Path
import click
from crewai.cli.utils import copy_template
from crewai.utilities.printer import Printer
_printer = Printer()
def add_crew_to_flow(crew_name: str) -> None:
"""Add a new crew to the current flow."""
# Check if pyproject.toml exists in the current directory
if not Path("pyproject.toml").exists():
_printer.print(
"This command must be run from the root of a flow project.", color="red"
)
raise click.ClickException(
"This command must be run from the root of a flow project."
)
# Determine the flow folder based on the current directory
flow_folder = Path.cwd()
crews_folder = flow_folder / "src" / flow_folder.name / "crews"
if not crews_folder.exists():
_printer.print("Crews folder does not exist in the current flow.", color="red")
raise click.ClickException("Crews folder does not exist in the current flow.")
# Create the crew within the flow's crews directory
create_embedded_crew(crew_name, parent_folder=crews_folder)
click.echo(
f"Crew {crew_name} added to the current flow successfully!",
)
def create_embedded_crew(crew_name: str, parent_folder: Path) -> None:
"""Create a new crew within an existing flow project."""
folder_name = crew_name.replace(" ", "_").replace("-", "_").lower()
class_name = crew_name.replace("_", " ").replace("-", " ").title().replace(" ", "")
crew_folder = parent_folder / folder_name
if crew_folder.exists():
if not click.confirm(
f"Crew {folder_name} already exists. Do you want to override it?"
):
click.secho("Operation cancelled.", fg="yellow")
return
click.secho(f"Overriding crew {folder_name}...", fg="green", bold=True)
else:
click.secho(f"Creating crew {folder_name}...", fg="green", bold=True)
crew_folder.mkdir(parents=True)
# Create config and crew.py files
config_folder = crew_folder / "config"
config_folder.mkdir(exist_ok=True)
templates_dir = Path(__file__).parent / "templates" / "crew"
config_template_files = ["agents.yaml", "tasks.yaml"]
crew_template_file = f"{folder_name}.py" # Updated file name
for file_name in config_template_files:
src_file = templates_dir / "config" / file_name
dst_file = config_folder / file_name
copy_template(src_file, dst_file, crew_name, class_name, folder_name)
src_file = templates_dir / "crew.py"
dst_file = crew_folder / crew_template_file
copy_template(src_file, dst_file, crew_name, class_name, folder_name)
click.secho(
f"Crew {crew_name} added to the flow successfully!", fg="green", bold=True
)
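The folder and class names are derived purely by string normalization, e.g.:
crew_name = "data analysis-crew"
folder_name = crew_name.replace(" ", "_").replace("-", "_").lower()
# 'data_analysis_crew'
class_name = crew_name.replace("_", " ").replace("-", " ").title().replace(" ", "")
# 'DataAnalysisCrew'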

View File

@@ -0,0 +1,5 @@
from crewai.cli.authentication.main import AuthenticationCommand
__all__ = ["AuthenticationCommand"]

View File

@@ -0,0 +1 @@
ALGORITHMS = ["RS256"]

View File

@@ -0,0 +1,187 @@
import time
from typing import Any
import webbrowser
from pydantic import BaseModel, Field
import requests
from rich.console import Console
from crewai.cli.authentication.utils import validate_jwt_token
from crewai.cli.config import Settings
from crewai.cli.shared.token_manager import TokenManager
console = Console()
class Oauth2Settings(BaseModel):
provider: str = Field(
description="OAuth2 provider used for authentication (e.g., workos, okta, auth0)."
)
client_id: str = Field(
description="OAuth2 client ID issued by the provider, used during authentication requests."
)
domain: str = Field(
description="OAuth2 provider's domain (e.g., your-org.auth0.com) used for issuing tokens."
)
audience: str | None = Field(
description="OAuth2 audience value, typically used to identify the target API or resource.",
default=None,
)
@classmethod
def from_settings(cls):
settings = Settings()
return cls(
provider=settings.oauth2_provider,
domain=settings.oauth2_domain,
client_id=settings.oauth2_client_id,
audience=settings.oauth2_audience,
)
class ProviderFactory:
@classmethod
def from_settings(cls, settings: Oauth2Settings | None = None):
settings = settings or Oauth2Settings.from_settings()
import importlib
module = importlib.import_module(
f"crewai.cli.authentication.providers.{settings.provider.lower()}"
)
provider = getattr(module, f"{settings.provider.capitalize()}Provider")
return provider(settings)
class AuthenticationCommand:
def __init__(self):
self.token_manager = TokenManager()
self.oauth2_provider = ProviderFactory.from_settings()
def login(self) -> None:
"""Sign up to CrewAI+"""
console.print("Signing in to CrewAI AMP...\n", style="bold blue")
device_code_data = self._get_device_code()
self._display_auth_instructions(device_code_data)
return self._poll_for_token(device_code_data)
def _get_device_code(self) -> dict[str, Any]:
"""Get the device code to authenticate the user."""
device_code_payload = {
"client_id": self.oauth2_provider.get_client_id(),
"scope": "openid",
"audience": self.oauth2_provider.get_audience(),
}
response = requests.post(
url=self.oauth2_provider.get_authorize_url(),
data=device_code_payload,
timeout=20,
)
response.raise_for_status()
return response.json()
def _display_auth_instructions(self, device_code_data: dict[str, str]) -> None:
"""Display the authentication instructions to the user."""
console.print("1. Navigate to: ", device_code_data["verification_uri_complete"])
console.print("2. Enter the following code: ", device_code_data["user_code"])
webbrowser.open(device_code_data["verification_uri_complete"])
def _poll_for_token(self, device_code_data: dict[str, Any]) -> None:
"""Polls the server for the token until it is received, or max attempts are reached."""
token_payload = {
"grant_type": "urn:ietf:params:oauth:grant-type:device_code",
"device_code": device_code_data["device_code"],
"client_id": self.oauth2_provider.get_client_id(),
}
console.print("\nWaiting for authentication... ", style="bold blue", end="")
attempts = 0
while attempts < 10:
response = requests.post(
self.oauth2_provider.get_token_url(), data=token_payload, timeout=30
)
token_data = response.json()
if response.status_code == 200:
self._validate_and_save_token(token_data)
console.print(
"Success!",
style="bold green",
)
self._login_to_tool_repository()
console.print("\n[bold green]Welcome to CrewAI AMP![/bold green]\n")
return
if token_data["error"] not in ("authorization_pending", "slow_down"):
raise requests.HTTPError(token_data["error_description"])
time.sleep(device_code_data["interval"])
attempts += 1
console.print(
"Timeout: Failed to get the token. Please try again.", style="bold red"
)
def _validate_and_save_token(self, token_data: dict[str, Any]) -> None:
"""Validates the JWT token and saves the token to the token manager."""
jwt_token = token_data["access_token"]
issuer = self.oauth2_provider.get_issuer()
jwt_token_data = {
"jwt_token": jwt_token,
"jwks_url": self.oauth2_provider.get_jwks_url(),
"issuer": issuer,
"audience": self.oauth2_provider.get_audience(),
}
decoded_token = validate_jwt_token(**jwt_token_data)
expires_at = decoded_token.get("exp", 0)
self.token_manager.save_tokens(jwt_token, expires_at)
def _login_to_tool_repository(self) -> None:
"""Login to the tool repository."""
from crewai.cli.tools.main import ToolCommand
try:
console.print(
"Now logging you in to the Tool Repository... ",
style="bold blue",
end="",
)
ToolCommand().login()
console.print(
"Success!\n",
style="bold green",
)
settings = Settings()
console.print(
f"You are authenticated to the tool repository as [bold cyan]'{settings.org_name}'[/bold cyan] ({settings.org_uuid})",
style="green",
)
except Exception:
console.print(
"\n[bold yellow]Warning:[/bold yellow] Authentication with the Tool Repository failed.",
style="yellow",
)
console.print(
"Other features will work normally, but you may experience limitations "
"with downloading and publishing tools."
"\nRun [bold]crewai login[/bold] to try logging in again.\n",
style="yellow",
)
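ProviderFactory resolves the provider class purely by naming convention (module providers.<provider>, class <Provider>Provider); a sketch with hypothetical credential values:
from crewai.cli.authentication.main import Oauth2Settings, ProviderFactory

settings = Oauth2Settings(
    provider="workos",
    client_id="client_xyz",  # hypothetical
    domain="auth.example.com",  # hypothetical
    audience=None,
)
provider = ProviderFactory.from_settings(settings)
type(provider).__name__  # 'WorkosProvider'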

View File

@@ -0,0 +1,34 @@
from crewai.cli.authentication.providers.base_provider import BaseProvider
class Auth0Provider(BaseProvider):
def get_authorize_url(self) -> str:
return f"https://{self._get_domain()}/oauth/device/code"
def get_token_url(self) -> str:
return f"https://{self._get_domain()}/oauth/token"
def get_jwks_url(self) -> str:
return f"https://{self._get_domain()}/.well-known/jwks.json"
def get_issuer(self) -> str:
return f"https://{self._get_domain()}/"
def get_audience(self) -> str:
if self.settings.audience is None:
raise ValueError(
"Audience is required. Please set it in the configuration."
)
return self.settings.audience
def get_client_id(self) -> str:
if self.settings.client_id is None:
raise ValueError(
"Client ID is required. Please set it in the configuration."
)
return self.settings.client_id
def _get_domain(self) -> str:
if self.settings.domain is None:
raise ValueError("Domain is required. Please set it in the configuration.")
return self.settings.domain
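With a hypothetical tenant, the Auth0 provider derives every endpoint from the configured domain (the auth0 module path is assumed from the factory's lowercase convention):
from crewai.cli.authentication.main import Oauth2Settings
from crewai.cli.authentication.providers.auth0 import Auth0Provider

settings = Oauth2Settings(
    provider="auth0",
    client_id="abc123",  # hypothetical
    domain="your-org.auth0.com",  # hypothetical
    audience="https://api.example.com",  # hypothetical
)
provider = Auth0Provider(settings)
provider.get_token_url()  # 'https://your-org.auth0.com/oauth/token'
provider.get_jwks_url()  # 'https://your-org.auth0.com/.well-known/jwks.json'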

View File

@@ -0,0 +1,26 @@
from abc import ABC, abstractmethod
from crewai.cli.authentication.main import Oauth2Settings
class BaseProvider(ABC):
def __init__(self, settings: Oauth2Settings):
self.settings = settings
@abstractmethod
def get_authorize_url(self) -> str: ...
@abstractmethod
def get_token_url(self) -> str: ...
@abstractmethod
def get_jwks_url(self) -> str: ...
@abstractmethod
def get_issuer(self) -> str: ...
@abstractmethod
def get_audience(self) -> str: ...
@abstractmethod
def get_client_id(self) -> str: ...

View File

@@ -0,0 +1,29 @@
from crewai.cli.authentication.providers.base_provider import BaseProvider
class OktaProvider(BaseProvider):
def get_authorize_url(self) -> str:
return f"https://{self.settings.domain}/oauth2/default/v1/device/authorize"
def get_token_url(self) -> str:
return f"https://{self.settings.domain}/oauth2/default/v1/token"
def get_jwks_url(self) -> str:
return f"https://{self.settings.domain}/oauth2/default/v1/keys"
def get_issuer(self) -> str:
return f"https://{self.settings.domain}/oauth2/default"
def get_audience(self) -> str:
if self.settings.audience is None:
raise ValueError(
"Audience is required. Please set it in the configuration."
)
return self.settings.audience
def get_client_id(self) -> str:
if self.settings.client_id is None:
raise ValueError(
"Client ID is required. Please set it in the configuration."
)
return self.settings.client_id

View File

@@ -0,0 +1,30 @@
from crewai.cli.authentication.providers.base_provider import BaseProvider
class WorkosProvider(BaseProvider):
def get_authorize_url(self) -> str:
return f"https://{self._get_domain()}/oauth2/device_authorization"
def get_token_url(self) -> str:
return f"https://{self._get_domain()}/oauth2/token"
def get_jwks_url(self) -> str:
return f"https://{self._get_domain()}/oauth2/jwks"
def get_issuer(self) -> str:
return f"https://{self._get_domain()}"
def get_audience(self) -> str:
return self.settings.audience or ""
def get_client_id(self) -> str:
if self.settings.client_id is None:
raise ValueError(
"Client ID is required. Please set it in the configuration."
)
return self.settings.client_id
def _get_domain(self) -> str:
if self.settings.domain is None:
raise ValueError("Domain is required. Please set it in the configuration.")
return self.settings.domain

View File

@@ -0,0 +1,13 @@
from crewai.cli.shared.token_manager import TokenManager
class AuthError(Exception):
pass
def get_auth_token() -> str:
"""Get the authentication token."""
access_token = TokenManager().get_token()
if not access_token:
raise AuthError("No token found, make sure you are logged in")
return access_token

View File

@@ -0,0 +1,60 @@
import jwt
from jwt import PyJWKClient
def validate_jwt_token(
jwt_token: str, jwks_url: str, issuer: str, audience: str
) -> dict:
"""
Verify the token's signature and claims using PyJWT.
:param jwt_token: The JWT (JWS) string to validate.
:param jwks_url: The URL of the JWKS endpoint.
:param issuer: The expected issuer of the token.
:param audience: The expected audience of the token.
:return: The decoded token.
:raises Exception: If the token is invalid for any reason (e.g., signature mismatch,
expired, incorrect issuer/audience, JWKS fetching error,
missing required claims).
"""
try:
jwk_client = PyJWKClient(jwks_url)
signing_key = jwk_client.get_signing_key_from_jwt(jwt_token)
_unverified_decoded_token = jwt.decode(
jwt_token, options={"verify_signature": False}
)
return jwt.decode(
jwt_token,
signing_key.key,
algorithms=["RS256"],
audience=audience,
issuer=issuer,
leeway=10.0,
options={
"verify_signature": True,
"verify_exp": True,
"verify_nbf": True,
"verify_iat": True,
"require": ["exp", "iat", "iss", "aud", "sub"],
},
)
except jwt.ExpiredSignatureError as e:
raise Exception("Token has expired.") from e
except jwt.InvalidAudienceError as e:
actual_audience = _unverified_decoded_token.get("aud", "[no audience found]")
raise Exception(
f"Invalid token audience. Got: '{actual_audience}'. Expected: '{audience}'"
) from e
except jwt.InvalidIssuerError as e:
actual_issuer = _unverified_decoded_token.get("iss", "[no issuer found]")
raise Exception(
f"Invalid token issuer. Got: '{actual_issuer}'. Expected: '{issuer}'"
) from e
except jwt.MissingRequiredClaimError as e:
raise Exception(f"Token is missing required claims: {e!s}") from e
except jwt.exceptions.PyJWKClientError as e:
raise Exception(f"JWKS or key processing error: {e!s}") from e
except jwt.InvalidTokenError as e:
raise Exception(f"Invalid token: {e!s}") from e
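A typical call site mirrors _validate_and_save_token above (endpoint values hypothetical; access_token stands for a raw JWT string obtained elsewhere):
from crewai.cli.authentication.utils import validate_jwt_token

claims = validate_jwt_token(
    jwt_token=access_token,  # hypothetical JWT from the token endpoint
    jwks_url="https://your-org.auth0.com/.well-known/jwks.json",
    issuer="https://your-org.auth0.com/",
    audience="https://api.example.com",
)
expires_at = claims.get("exp", 0)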

View File

@@ -0,0 +1,497 @@
from importlib.metadata import version as get_version
import os
import subprocess
import click
from crewai.cli.add_crew_to_flow import add_crew_to_flow
from crewai.cli.authentication.main import AuthenticationCommand
from crewai.cli.config import Settings
from crewai.cli.create_crew import create_crew
from crewai.cli.create_flow import create_flow
from crewai.cli.crew_chat import run_chat
from crewai.cli.deploy.main import DeployCommand
from crewai.cli.enterprise.main import EnterpriseConfigureCommand
from crewai.cli.evaluate_crew import evaluate_crew
from crewai.cli.install_crew import install_crew
from crewai.cli.kickoff_flow import kickoff_flow
from crewai.cli.organization.main import OrganizationCommand
from crewai.cli.plot_flow import plot_flow
from crewai.cli.replay_from_task import replay_task_command
from crewai.cli.reset_memories_command import reset_memories_command
from crewai.cli.run_crew import run_crew
from crewai.cli.settings.main import SettingsCommand
from crewai.cli.tools.main import ToolCommand
from crewai.cli.train_crew import train_crew
from crewai.cli.triggers.main import TriggersCommand
from crewai.cli.update_crew import update_crew
from crewai.cli.utils import build_env_with_tool_repository_credentials, read_toml
from crewai.memory.storage.kickoff_task_outputs_storage import (
KickoffTaskOutputsSQLiteStorage,
)
@click.group()
@click.version_option(get_version("crewai"))
def crewai():
"""Top-level command group for crewai."""
@crewai.command(
name="uv",
context_settings=dict(
ignore_unknown_options=True,
),
)
@click.argument("uv_args", nargs=-1, type=click.UNPROCESSED)
def uv(uv_args):
"""A wrapper around uv commands that adds custom tool authentication through env vars."""
env = os.environ.copy()
try:
pyproject_data = read_toml()
sources = pyproject_data.get("tool", {}).get("uv", {}).get("sources", {})
for source_config in sources.values():
if isinstance(source_config, dict):
index = source_config.get("index")
if index:
index_env = build_env_with_tool_repository_credentials(index)
env.update(index_env)
except (FileNotFoundError, KeyError) as e:
raise SystemExit(
"Error. A valid pyproject.toml file is required. Check that a valid pyproject.toml file exists in the current directory."
) from e
except Exception as e:
raise SystemExit(f"Error: {e}") from e
try:
subprocess.run( # noqa: S603
["uv", *uv_args], # noqa: S607
capture_output=False,
env=env,
text=True,
check=True,
)
except subprocess.CalledProcessError as e:
click.secho(f"uv command failed with exit code {e.returncode}", fg="red")
raise SystemExit(e.returncode) from e
@crewai.command()
@click.argument("type", type=click.Choice(["crew", "flow"]))
@click.argument("name")
@click.option("--provider", type=str, help="The provider to use for the crew")
@click.option("--skip_provider", is_flag=True, help="Skip provider validation")
def create(type, name, provider, skip_provider=False):
"""Create a new crew, or flow."""
if type == "crew":
create_crew(name, provider, skip_provider)
elif type == "flow":
create_flow(name)
else:
click.secho("Error: Invalid type. Must be 'crew' or 'flow'.", fg="red")
@crewai.command()
@click.option(
"--tools", is_flag=True, help="Show the installed version of crewai tools"
)
def version(tools):
"""Show the installed version of crewai."""
try:
crewai_version = get_version("crewai")
except Exception:
crewai_version = "unknown version"
click.echo(f"crewai version: {crewai_version}")
if tools:
try:
tools_version = get_version("crewai")
click.echo(f"crewai tools version: {tools_version}")
except Exception:
click.echo("crewai tools not installed")
@crewai.command()
@click.option(
"-n",
"--n_iterations",
type=int,
default=5,
help="Number of iterations to train the crew",
)
@click.option(
"-f",
"--filename",
type=str,
default="trained_agents_data.pkl",
help="Path to a custom file for training",
)
def train(n_iterations: int, filename: str):
"""Train the crew."""
click.echo(f"Training the Crew for {n_iterations} iterations")
train_crew(n_iterations, filename)
@crewai.command()
@click.option(
"-t",
"--task_id",
type=str,
help="Replay the crew from this task ID, including all subsequent tasks.",
)
def replay(task_id: str) -> None:
"""
Replay the crew execution from a specific task.
Args:
task_id (str): The ID of the task to replay from.
"""
try:
click.echo(f"Replaying the crew from task {task_id}")
replay_task_command(task_id)
except Exception as e:
click.echo(f"An error occurred while replaying: {e}", err=True)
@crewai.command()
def log_tasks_outputs() -> None:
"""
Retrieve your latest crew.kickoff() task outputs.
"""
try:
storage = KickoffTaskOutputsSQLiteStorage()
tasks = storage.load()
if not tasks:
click.echo(
"No task outputs found. Only crew kickoff task outputs are logged."
)
return
for index, task in enumerate(tasks, 1):
click.echo(f"Task {index}: {task['task_id']}")
click.echo(f"Description: {task['expected_output']}")
click.echo("------")
except Exception as e:
click.echo(f"An error occurred while logging task outputs: {e}", err=True)
@crewai.command()
@click.option("-l", "--long", is_flag=True, help="Reset LONG TERM memory")
@click.option("-s", "--short", is_flag=True, help="Reset SHORT TERM memory")
@click.option("-e", "--entities", is_flag=True, help="Reset ENTITIES memory")
@click.option("-kn", "--knowledge", is_flag=True, help="Reset KNOWLEDGE storage")
@click.option(
"-akn", "--agent-knowledge", is_flag=True, help="Reset AGENT KNOWLEDGE storage"
)
@click.option(
"-k", "--kickoff-outputs", is_flag=True, help="Reset LATEST KICKOFF TASK OUTPUTS"
)
@click.option("-a", "--all", is_flag=True, help="Reset ALL memories")
def reset_memories(
long: bool,
short: bool,
entities: bool,
knowledge: bool,
kickoff_outputs: bool,
agent_knowledge: bool,
all: bool,
) -> None:
"""
Reset the crew memories (long, short, entity, latest_crew_kickoff_outputs, knowledge, agent_knowledge). This will delete all the data saved.
"""
try:
memory_types = [
long,
short,
entities,
knowledge,
agent_knowledge,
kickoff_outputs,
all,
]
if not any(memory_types):
click.echo(
"Please specify at least one memory type to reset using the appropriate flags."
)
return
reset_memories_command(
long, short, entities, knowledge, agent_knowledge, kickoff_outputs, all
)
except Exception as e:
click.echo(f"An error occurred while resetting memories: {e}", err=True)
@crewai.command()
@click.option(
"-n",
"--n_iterations",
type=int,
default=3,
help="Number of iterations to Test the crew",
)
@click.option(
"-m",
"--model",
type=str,
default="gpt-4o-mini",
help="LLM Model to run the tests on the Crew. For now only accepting only OpenAI models.",
)
def test(n_iterations: int, model: str):
"""Test the crew and evaluate the results."""
click.echo(f"Testing the crew for {n_iterations} iterations with model {model}")
evaluate_crew(n_iterations, model)
@crewai.command(
context_settings=dict(
ignore_unknown_options=True,
allow_extra_args=True,
)
)
@click.pass_context
def install(context):
"""Install the Crew."""
install_crew(context.args)
@crewai.command()
def run():
"""Run the Crew."""
run_crew()
@crewai.command()
def update():
"""Update the pyproject.toml of the Crew project to use uv."""
update_crew()
@crewai.command()
def login():
"""Sign Up/Login to CrewAI AMP."""
Settings().clear_user_settings()
AuthenticationCommand().login()
# DEPLOY CREWAI+ COMMANDS
@crewai.group()
def deploy():
"""Deploy the Crew CLI group."""
@deploy.command(name="create")
@click.option("-y", "--yes", is_flag=True, help="Skip the confirmation prompt")
def deploy_create(yes: bool):
"""Create a Crew deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.create_crew(yes)
@deploy.command(name="list")
def deploy_list():
"""List all deployments."""
deploy_cmd = DeployCommand()
deploy_cmd.list_crews()
@deploy.command(name="push")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_push(uuid: str | None):
"""Deploy the Crew."""
deploy_cmd = DeployCommand()
deploy_cmd.deploy(uuid=uuid)
@deploy.command(name="status")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_status(uuid: str | None):
"""Get the status of a deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.get_crew_status(uuid=uuid)
@deploy.command(name="logs")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_logs(uuid: str | None):
"""Get the logs of a deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.get_crew_logs(uuid=uuid)
@deploy.command(name="remove")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_remove(uuid: str | None):
"""Remove a deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.remove_crew(uuid=uuid)
@crewai.group()
def tool():
"""Tool Repository related commands."""
@tool.command(name="create")
@click.argument("handle")
def tool_create(handle: str):
tool_cmd = ToolCommand()
tool_cmd.create(handle)
@tool.command(name="install")
@click.argument("handle")
def tool_install(handle: str):
tool_cmd = ToolCommand()
tool_cmd.login()
tool_cmd.install(handle)
@tool.command(name="publish")
@click.option(
"--force",
is_flag=True,
show_default=True,
default=False,
help="Bypasses Git remote validations",
)
@click.option("--public", "is_public", flag_value=True, default=False)
@click.option("--private", "is_public", flag_value=False)
def tool_publish(is_public: bool, force: bool):
tool_cmd = ToolCommand()
tool_cmd.login()
tool_cmd.publish(is_public, force)
@crewai.group()
def flow():
"""Flow related commands."""
@flow.command(name="kickoff")
def flow_run():
"""Kickoff the Flow."""
click.echo("Running the Flow")
kickoff_flow()
@flow.command(name="plot")
def flow_plot():
"""Plot the Flow."""
click.echo("Plotting the Flow")
plot_flow()
@flow.command(name="add-crew")
@click.argument("crew_name")
def flow_add_crew(crew_name):
"""Add a crew to an existing flow."""
click.echo(f"Adding crew {crew_name} to the flow")
add_crew_to_flow(crew_name)
@crewai.group()
def triggers():
"""Trigger related commands. Use 'crewai triggers list' to see available triggers, or 'crewai triggers run app_slug/trigger_slug' to execute."""
@triggers.command(name="list")
def triggers_list():
"""List all available triggers from integrations."""
triggers_cmd = TriggersCommand()
triggers_cmd.list_triggers()
@triggers.command(name="run")
@click.argument("trigger_path")
def triggers_run(trigger_path: str):
"""Execute crew with trigger payload. Format: app_slug/trigger_slug"""
triggers_cmd = TriggersCommand()
triggers_cmd.execute_with_trigger(trigger_path)
@crewai.command()
def chat():
"""
Start a conversation with the Crew, collecting user-supplied inputs,
and using the Chat LLM to generate responses.
"""
click.secho(
"\nStarting a conversation with the Crew\nType 'exit' or Ctrl+C to quit.\n",
)
run_chat()
@crewai.group(invoke_without_command=True)
def org():
"""Organization management commands."""
@org.command("list")
def org_list():
"""List available organizations."""
org_command = OrganizationCommand()
org_command.list()
@org.command()
@click.argument("id")
def switch(id):
"""Switch to a specific organization."""
org_command = OrganizationCommand()
org_command.switch(id)
@org.command()
def current():
"""Show current organization when 'crewai org' is called without subcommands."""
org_command = OrganizationCommand()
org_command.current()
@crewai.group()
def enterprise():
"""Enterprise Configuration commands."""
@enterprise.command("configure")
@click.argument("enterprise_url")
def enterprise_configure(enterprise_url: str):
"""Configure CrewAI AMP OAuth2 settings from the provided Enterprise URL."""
enterprise_command = EnterpriseConfigureCommand()
enterprise_command.configure(enterprise_url)
@crewai.group()
def config():
"""CLI Configuration commands."""
@config.command("list")
def config_list():
"""List all CLI configuration parameters."""
config_command = SettingsCommand()
config_command.list()
@config.command("set")
@click.argument("key")
@click.argument("value")
def config_set(key: str, value: str):
"""Set a CLI configuration parameter."""
config_command = SettingsCommand()
config_command.set(key, value)
@config.command("reset")
def config_reset():
"""Reset all CLI configuration parameters to default values."""
config_command = SettingsCommand()
config_command.reset_all_settings()
if __name__ == "__main__":
crewai()
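
A quick way to exercise the command group above without a shell is click's built-in test runner; a minimal sketch, assuming this module is importable as `crewai.cli.cli`:

from click.testing import CliRunner

from crewai.cli.cli import crewai  # import path is an assumption

runner = CliRunner()
result = runner.invoke(crewai, ["config", "list"])  # hypothetical smoke test
assert result.exit_code == 0
print(result.output)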


@@ -0,0 +1,75 @@
import requests
from requests.exceptions import JSONDecodeError
from rich.console import Console
from crewai.cli.authentication.token import get_auth_token
from crewai.cli.plus_api import PlusAPI
from crewai.telemetry.telemetry import Telemetry
console = Console()
class BaseCommand:
def __init__(self):
self._telemetry = Telemetry()
self._telemetry.set_tracer()
class PlusAPIMixin:
def __init__(self, telemetry):
try:
telemetry.set_tracer()
self.plus_api_client = PlusAPI(api_key=get_auth_token())
except Exception:
self._deploy_signup_error_span = telemetry.deploy_signup_error_span()
console.print(
"Please sign up/login to CrewAI+ before using the CLI.",
style="bold red",
)
console.print("Run 'crewai login' to sign up/login.", style="bold green")
raise SystemExit from None
def _validate_response(self, response: requests.Response) -> None:
"""
Handle and display error messages from API responses.
Args:
response (requests.Response): The response from the Plus API
"""
try:
json_response = response.json()
except (JSONDecodeError, ValueError):
console.print(
"Failed to parse response from Enterprise API failed. Details:",
style="bold red",
)
console.print(f"Status Code: {response.status_code}")
console.print(
f"Response:\n{response.content.decode('utf-8', errors='replace')}"
)
raise SystemExit from None
if response.status_code == 422:
console.print(
"Failed to complete operation. Please fix the following errors:",
style="bold red",
)
for field, messages in json_response.items():
for message in messages:
console.print(
f"* [bold red]{field.capitalize()}[/bold red] {message}"
)
raise SystemExit
if not response.ok:
console.print(
"Request to Enterprise API failed. Details:", style="bold red"
)
details = (
json_response.get("error")
or json_response.get("message")
or response.content.decode("utf-8", errors="replace")
)
console.print(f"{details}")
raise SystemExit
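
For orientation, a sketch of how a concrete command composes the two classes above; `list_crews` mirrors the PlusAPI call DeployCommand makes later in this PR, while the class itself is hypothetical:

class ExampleCommand(BaseCommand, PlusAPIMixin):
    """Hypothetical command showing the BaseCommand/PlusAPIMixin wiring."""

    def __init__(self):
        BaseCommand.__init__(self)
        PlusAPIMixin.__init__(self, telemetry=self._telemetry)

    def run(self) -> None:
        response = self.plus_api_client.list_crews()
        self._validate_response(response)  # raises SystemExit on failure
        console.print(response.json())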


@@ -0,0 +1,213 @@
import json
from logging import getLogger
from pathlib import Path
import tempfile
from pydantic import BaseModel, Field
from crewai.cli.constants import (
CREWAI_ENTERPRISE_DEFAULT_OAUTH2_AUDIENCE,
CREWAI_ENTERPRISE_DEFAULT_OAUTH2_CLIENT_ID,
CREWAI_ENTERPRISE_DEFAULT_OAUTH2_DOMAIN,
CREWAI_ENTERPRISE_DEFAULT_OAUTH2_PROVIDER,
DEFAULT_CREWAI_ENTERPRISE_URL,
)
from crewai.cli.shared.token_manager import TokenManager
logger = getLogger(__name__)
DEFAULT_CONFIG_PATH = Path.home() / ".config" / "crewai" / "settings.json"
def get_writable_config_path() -> Path | None:
"""
Find a writable location for the config file with fallback options.
Tries in order:
1. Default: ~/.config/crewai/settings.json
2. Temp directory: /tmp/crewai_settings.json (or OS equivalent)
3. Current directory: ./crewai_settings.json
4. In-memory only (returns None)
Returns:
Path object for writable config location, or None if no writable location found
"""
fallback_paths = [
DEFAULT_CONFIG_PATH, # Default location
Path(tempfile.gettempdir()) / "crewai_settings.json", # Temporary directory
Path.cwd() / "crewai_settings.json", # Current working directory
]
for config_path in fallback_paths:
try:
config_path.parent.mkdir(parents=True, exist_ok=True)
test_file = config_path.parent / ".crewai_write_test"
try:
test_file.write_text("test")
test_file.unlink() # Clean up test file
logger.info(f"Using config path: {config_path}")
return config_path
except Exception: # noqa: S112
continue
except Exception: # noqa: S112
continue
return None
# Settings that are related to the user's account
USER_SETTINGS_KEYS = [
"tool_repository_username",
"tool_repository_password",
"org_name",
"org_uuid",
]
# Settings that are related to the CLI
CLI_SETTINGS_KEYS = [
"enterprise_base_url",
"oauth2_provider",
"oauth2_audience",
"oauth2_client_id",
"oauth2_domain",
]
# Default values for CLI settings
DEFAULT_CLI_SETTINGS = {
"enterprise_base_url": DEFAULT_CREWAI_ENTERPRISE_URL,
"oauth2_provider": CREWAI_ENTERPRISE_DEFAULT_OAUTH2_PROVIDER,
"oauth2_audience": CREWAI_ENTERPRISE_DEFAULT_OAUTH2_AUDIENCE,
"oauth2_client_id": CREWAI_ENTERPRISE_DEFAULT_OAUTH2_CLIENT_ID,
"oauth2_domain": CREWAI_ENTERPRISE_DEFAULT_OAUTH2_DOMAIN,
}
# Readonly settings - cannot be set by the user
READONLY_SETTINGS_KEYS = [
"org_name",
"org_uuid",
]
# Hidden settings - not displayed by the 'list' command and cannot be set by the user
HIDDEN_SETTINGS_KEYS = [
"config_path",
"tool_repository_username",
"tool_repository_password",
]
class Settings(BaseModel):
enterprise_base_url: str | None = Field(
default=DEFAULT_CLI_SETTINGS["enterprise_base_url"],
description="Base URL of the CrewAI AMP instance",
)
tool_repository_username: str | None = Field(
None, description="Username for interacting with the Tool Repository"
)
tool_repository_password: str | None = Field(
None, description="Password for interacting with the Tool Repository"
)
org_name: str | None = Field(
None, description="Name of the currently active organization"
)
org_uuid: str | None = Field(
None, description="UUID of the currently active organization"
)
config_path: Path = Field(default=DEFAULT_CONFIG_PATH, frozen=True, exclude=True)
oauth2_provider: str = Field(
description="OAuth2 provider used for authentication (e.g., workos, okta, auth0).",
default=DEFAULT_CLI_SETTINGS["oauth2_provider"],
)
oauth2_audience: str | None = Field(
description="OAuth2 audience value, typically used to identify the target API or resource.",
default=DEFAULT_CLI_SETTINGS["oauth2_audience"],
)
oauth2_client_id: str = Field(
default=DEFAULT_CLI_SETTINGS["oauth2_client_id"],
description="OAuth2 client ID issued by the provider, used during authentication requests.",
)
oauth2_domain: str = Field(
description="OAuth2 provider's domain (e.g., your-org.auth0.com) used for issuing tokens.",
default=DEFAULT_CLI_SETTINGS["oauth2_domain"],
)
def __init__(self, config_path: Path | None = None, **data):
"""Load Settings from config path with fallback support"""
if config_path is None:
config_path = get_writable_config_path()
# If config_path is None, we're in memory-only mode
if config_path is None:
merged_data = {**data}
# Dummy path for memory-only mode
super().__init__(config_path=Path("/dev/null"), **merged_data)
return
try:
config_path.parent.mkdir(parents=True, exist_ok=True)
except Exception:
merged_data = {**data}
# Dummy path for memory-only mode
super().__init__(config_path=Path("/dev/null"), **merged_data)
return
file_data = {}
if config_path.is_file():
try:
with config_path.open("r") as f:
file_data = json.load(f)
except Exception:
file_data = {}
merged_data = {**file_data, **data}
super().__init__(config_path=config_path, **merged_data)
def clear_user_settings(self) -> None:
"""Clear all user settings"""
self._reset_user_settings()
self.dump()
def reset(self) -> None:
"""Reset all settings to default values"""
self._reset_user_settings()
self._reset_cli_settings()
self._clear_auth_tokens()
self.dump()
def dump(self) -> None:
"""Save current settings to settings.json"""
if str(self.config_path) == "/dev/null":
return
try:
if self.config_path.is_file():
with self.config_path.open("r") as f:
existing_data = json.load(f)
else:
existing_data = {}
updated_data = {**existing_data, **self.model_dump(exclude_unset=True)}
with self.config_path.open("w") as f:
json.dump(updated_data, f, indent=4)
except Exception: # noqa: S110
pass
def _reset_user_settings(self) -> None:
"""Reset all user settings to default values"""
for key in USER_SETTINGS_KEYS:
setattr(self, key, None)
def _reset_cli_settings(self) -> None:
"""Reset all CLI settings to default values"""
for key in CLI_SETTINGS_KEYS:
setattr(self, key, DEFAULT_CLI_SETTINGS.get(key))
def _clear_auth_tokens(self) -> None:
"""Clear all authentication tokens"""
TokenManager().clear_tokens()
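
A minimal usage sketch for the Settings model above: instantiate it (the writable-path fallback is applied automatically), mutate a field, and persist; the username value is purely illustrative:

settings = Settings()  # resolves a writable config path, or memory-only mode
settings.tool_repository_username = "example-user"  # illustrative value
settings.dump()  # merges into settings.json; a no-op in memory-only mode
print(settings.enterprise_base_url)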


@@ -0,0 +1,329 @@
DEFAULT_CREWAI_ENTERPRISE_URL = "https://app.crewai.com"
CREWAI_ENTERPRISE_DEFAULT_OAUTH2_PROVIDER = "workos"
CREWAI_ENTERPRISE_DEFAULT_OAUTH2_AUDIENCE = "client_01JNJQWBJ4SPFN3SWJM5T7BDG8"
CREWAI_ENTERPRISE_DEFAULT_OAUTH2_CLIENT_ID = "client_01JYT06R59SP0NXYGD994NFXXX"
CREWAI_ENTERPRISE_DEFAULT_OAUTH2_DOMAIN = "login.crewai.com"
ENV_VARS = {
"openai": [
{
"prompt": "Enter your OPENAI API key (press Enter to skip)",
"key_name": "OPENAI_API_KEY",
}
],
"anthropic": [
{
"prompt": "Enter your ANTHROPIC API key (press Enter to skip)",
"key_name": "ANTHROPIC_API_KEY",
}
],
"gemini": [
{
"prompt": "Enter your GEMINI API key from https://ai.dev/apikey (press Enter to skip)",
"key_name": "GEMINI_API_KEY",
}
],
"nvidia_nim": [
{
"prompt": "Enter your NVIDIA API key (press Enter to skip)",
"key_name": "NVIDIA_NIM_API_KEY",
}
],
"groq": [
{
"prompt": "Enter your GROQ API key (press Enter to skip)",
"key_name": "GROQ_API_KEY",
}
],
"watson": [
{
"prompt": "Enter your WATSONX URL (press Enter to skip)",
"key_name": "WATSONX_URL",
},
{
"prompt": "Enter your WATSONX API Key (press Enter to skip)",
"key_name": "WATSONX_APIKEY",
},
{
"prompt": "Enter your WATSONX Project Id (press Enter to skip)",
"key_name": "WATSONX_PROJECT_ID",
},
],
"ollama": [
{
"default": True,
"API_BASE": "http://localhost:11434",
}
],
"bedrock": [
{
"prompt": "Enter your AWS Access Key ID (press Enter to skip)",
"key_name": "AWS_ACCESS_KEY_ID",
},
{
"prompt": "Enter your AWS Secret Access Key (press Enter to skip)",
"key_name": "AWS_SECRET_ACCESS_KEY",
},
{
"prompt": "Enter your AWS Region Name (press Enter to skip)",
"key_name": "AWS_REGION_NAME",
},
],
"azure": [
{
"prompt": "Enter your Azure deployment name (must start with 'azure/')",
"key_name": "model",
},
{
"prompt": "Enter your AZURE API key (press Enter to skip)",
"key_name": "AZURE_API_KEY",
},
{
"prompt": "Enter your AZURE API base URL (press Enter to skip)",
"key_name": "AZURE_API_BASE",
},
{
"prompt": "Enter your AZURE API version (press Enter to skip)",
"key_name": "AZURE_API_VERSION",
},
],
"cerebras": [
{
"prompt": "Enter your Cerebras model name (must start with 'cerebras/')",
"key_name": "model",
},
{
"prompt": "Enter your Cerebras API version (press Enter to skip)",
"key_name": "CEREBRAS_API_KEY",
},
],
"huggingface": [
{
"prompt": "Enter your Huggingface API key (HF_TOKEN) (press Enter to skip)",
"key_name": "HF_TOKEN",
},
],
"sambanova": [
{
"prompt": "Enter your SambaNovaCloud API key (press Enter to skip)",
"key_name": "SAMBANOVA_API_KEY",
}
],
}
PROVIDERS = [
"openai",
"anthropic",
"gemini",
"nvidia_nim",
"groq",
"huggingface",
"ollama",
"watson",
"bedrock",
"azure",
"cerebras",
"sambanova",
]
MODELS = {
"openai": [
"gpt-4",
"gpt-4.1",
"gpt-4.1-mini-2025-04-14",
"gpt-4.1-nano-2025-04-14",
"gpt-4o",
"gpt-4o-mini",
"o1-mini",
"o1-preview",
],
"anthropic": [
"claude-3-5-sonnet-20240620",
"claude-3-sonnet-20240229",
"claude-3-opus-20240229",
"claude-3-haiku-20240307",
],
"gemini": [
"gemini/gemini-1.5-flash",
"gemini/gemini-1.5-pro",
"gemini/gemini-2.0-flash-lite-001",
"gemini/gemini-2.0-flash-001",
"gemini/gemini-2.0-flash-thinking-exp-01-21",
"gemini/gemini-2.5-flash-preview-04-17",
"gemini/gemini-2.5-pro-exp-03-25",
"gemini/gemini-gemma-2-9b-it",
"gemini/gemini-gemma-2-27b-it",
"gemini/gemma-3-1b-it",
"gemini/gemma-3-4b-it",
"gemini/gemma-3-12b-it",
"gemini/gemma-3-27b-it",
],
"nvidia_nim": [
"nvidia_nim/nvidia/mistral-nemo-minitron-8b-8k-instruct",
"nvidia_nim/nvidia/nemotron-4-mini-hindi-4b-instruct",
"nvidia_nim/nvidia/llama-3.1-nemotron-70b-instruct",
"nvidia_nim/nvidia/llama3-chatqa-1.5-8b",
"nvidia_nim/nvidia/llama3-chatqa-1.5-70b",
"nvidia_nim/nvidia/vila",
"nvidia_nim/nvidia/neva-22",
"nvidia_nim/nvidia/nemotron-mini-4b-instruct",
"nvidia_nim/nvidia/usdcode-llama3-70b-instruct",
"nvidia_nim/nvidia/nemotron-4-340b-instruct",
"nvidia_nim/meta/codellama-70b",
"nvidia_nim/meta/llama2-70b",
"nvidia_nim/meta/llama3-8b-instruct",
"nvidia_nim/meta/llama3-70b-instruct",
"nvidia_nim/meta/llama-3.1-8b-instruct",
"nvidia_nim/meta/llama-3.1-70b-instruct",
"nvidia_nim/meta/llama-3.1-405b-instruct",
"nvidia_nim/meta/llama-3.2-1b-instruct",
"nvidia_nim/meta/llama-3.2-3b-instruct",
"nvidia_nim/meta/llama-3.2-11b-vision-instruct",
"nvidia_nim/meta/llama-3.2-90b-vision-instruct",
"nvidia_nim/meta/llama-3.1-70b-instruct",
"nvidia_nim/google/gemma-7b",
"nvidia_nim/google/gemma-2b",
"nvidia_nim/google/codegemma-7b",
"nvidia_nim/google/codegemma-1.1-7b",
"nvidia_nim/google/recurrentgemma-2b",
"nvidia_nim/google/gemma-2-9b-it",
"nvidia_nim/google/gemma-2-27b-it",
"nvidia_nim/google/gemma-2-2b-it",
"nvidia_nim/google/deplot",
"nvidia_nim/google/paligemma",
"nvidia_nim/mistralai/mistral-7b-instruct-v0.2",
"nvidia_nim/mistralai/mixtral-8x7b-instruct-v0.1",
"nvidia_nim/mistralai/mistral-large",
"nvidia_nim/mistralai/mixtral-8x22b-instruct-v0.1",
"nvidia_nim/mistralai/mistral-7b-instruct-v0.3",
"nvidia_nim/nv-mistralai/mistral-nemo-12b-instruct",
"nvidia_nim/mistralai/mamba-codestral-7b-v0.1",
"nvidia_nim/microsoft/phi-3-mini-128k-instruct",
"nvidia_nim/microsoft/phi-3-mini-4k-instruct",
"nvidia_nim/microsoft/phi-3-small-8k-instruct",
"nvidia_nim/microsoft/phi-3-small-128k-instruct",
"nvidia_nim/microsoft/phi-3-medium-4k-instruct",
"nvidia_nim/microsoft/phi-3-medium-128k-instruct",
"nvidia_nim/microsoft/phi-3.5-mini-instruct",
"nvidia_nim/microsoft/phi-3.5-moe-instruct",
"nvidia_nim/microsoft/kosmos-2",
"nvidia_nim/microsoft/phi-3-vision-128k-instruct",
"nvidia_nim/microsoft/phi-3.5-vision-instruct",
"nvidia_nim/databricks/dbrx-instruct",
"nvidia_nim/snowflake/arctic",
"nvidia_nim/aisingapore/sea-lion-7b-instruct",
"nvidia_nim/ibm/granite-8b-code-instruct",
"nvidia_nim/ibm/granite-34b-code-instruct",
"nvidia_nim/ibm/granite-3.0-8b-instruct",
"nvidia_nim/ibm/granite-3.0-3b-a800m-instruct",
"nvidia_nim/mediatek/breeze-7b-instruct",
"nvidia_nim/upstage/solar-10.7b-instruct",
"nvidia_nim/writer/palmyra-med-70b-32k",
"nvidia_nim/writer/palmyra-med-70b",
"nvidia_nim/writer/palmyra-fin-70b-32k",
"nvidia_nim/01-ai/yi-large",
"nvidia_nim/deepseek-ai/deepseek-coder-6.7b-instruct",
"nvidia_nim/rakuten/rakutenai-7b-instruct",
"nvidia_nim/rakuten/rakutenai-7b-chat",
"nvidia_nim/baichuan-inc/baichuan2-13b-chat",
],
"groq": [
"groq/llama-3.1-8b-instant",
"groq/llama-3.1-70b-versatile",
"groq/llama-3.1-405b-reasoning",
"groq/gemma2-9b-it",
"groq/gemma-7b-it",
],
"ollama": ["ollama/llama3.1", "ollama/mixtral"],
"watson": [
"watsonx/meta-llama/llama-3-1-70b-instruct",
"watsonx/meta-llama/llama-3-1-8b-instruct",
"watsonx/meta-llama/llama-3-2-11b-vision-instruct",
"watsonx/meta-llama/llama-3-2-1b-instruct",
"watsonx/meta-llama/llama-3-2-90b-vision-instruct",
"watsonx/meta-llama/llama-3-405b-instruct",
"watsonx/mistral/mistral-large",
"watsonx/ibm/granite-3-8b-instruct",
],
"bedrock": [
"bedrock/us.amazon.nova-pro-v1:0",
"bedrock/us.amazon.nova-micro-v1:0",
"bedrock/us.amazon.nova-lite-v1:0",
"bedrock/us.anthropic.claude-3-5-sonnet-20240620-v1:0",
"bedrock/us.anthropic.claude-3-5-haiku-20241022-v1:0",
"bedrock/us.anthropic.claude-3-5-sonnet-20241022-v2:0",
"bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0",
"bedrock/us.anthropic.claude-3-sonnet-20240229-v1:0",
"bedrock/us.anthropic.claude-3-opus-20240229-v1:0",
"bedrock/us.anthropic.claude-3-haiku-20240307-v1:0",
"bedrock/us.meta.llama3-2-11b-instruct-v1:0",
"bedrock/us.meta.llama3-2-3b-instruct-v1:0",
"bedrock/us.meta.llama3-2-90b-instruct-v1:0",
"bedrock/us.meta.llama3-2-1b-instruct-v1:0",
"bedrock/us.meta.llama3-1-8b-instruct-v1:0",
"bedrock/us.meta.llama3-1-70b-instruct-v1:0",
"bedrock/us.meta.llama3-3-70b-instruct-v1:0",
"bedrock/us.meta.llama3-1-405b-instruct-v1:0",
"bedrock/eu.anthropic.claude-3-5-sonnet-20240620-v1:0",
"bedrock/eu.anthropic.claude-3-sonnet-20240229-v1:0",
"bedrock/eu.anthropic.claude-3-haiku-20240307-v1:0",
"bedrock/eu.meta.llama3-2-3b-instruct-v1:0",
"bedrock/eu.meta.llama3-2-1b-instruct-v1:0",
"bedrock/apac.anthropic.claude-3-5-sonnet-20240620-v1:0",
"bedrock/apac.anthropic.claude-3-5-sonnet-20241022-v2:0",
"bedrock/apac.anthropic.claude-3-sonnet-20240229-v1:0",
"bedrock/apac.anthropic.claude-3-haiku-20240307-v1:0",
"bedrock/amazon.nova-pro-v1:0",
"bedrock/amazon.nova-micro-v1:0",
"bedrock/amazon.nova-lite-v1:0",
"bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",
"bedrock/anthropic.claude-3-5-haiku-20241022-v1:0",
"bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
"bedrock/anthropic.claude-3-7-sonnet-20250219-v1:0",
"bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
"bedrock/anthropic.claude-3-opus-20240229-v1:0",
"bedrock/anthropic.claude-3-haiku-20240307-v1:0",
"bedrock/anthropic.claude-v2:1",
"bedrock/anthropic.claude-v2",
"bedrock/anthropic.claude-instant-v1",
"bedrock/meta.llama3-1-405b-instruct-v1:0",
"bedrock/meta.llama3-1-70b-instruct-v1:0",
"bedrock/meta.llama3-1-8b-instruct-v1:0",
"bedrock/meta.llama3-70b-instruct-v1:0",
"bedrock/meta.llama3-8b-instruct-v1:0",
"bedrock/amazon.titan-text-lite-v1",
"bedrock/amazon.titan-text-express-v1",
"bedrock/cohere.command-text-v14",
"bedrock/ai21.j2-mid-v1",
"bedrock/ai21.j2-ultra-v1",
"bedrock/ai21.jamba-instruct-v1:0",
"bedrock/mistral.mistral-7b-instruct-v0:2",
"bedrock/mistral.mixtral-8x7b-instruct-v0:1",
],
"huggingface": [
"huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct",
"huggingface/mistralai/Mixtral-8x7B-Instruct-v0.1",
"huggingface/tiiuae/falcon-180B-chat",
"huggingface/google/gemma-7b-it",
],
"sambanova": [
"sambanova/Meta-Llama-3.3-70B-Instruct",
"sambanova/QwQ-32B-Preview",
"sambanova/Qwen2.5-72B-Instruct",
"sambanova/Qwen2.5-Coder-32B-Instruct",
"sambanova/Meta-Llama-3.1-405B-Instruct",
"sambanova/Meta-Llama-3.1-70B-Instruct",
"sambanova/Meta-Llama-3.1-8B-Instruct",
"sambanova/Llama-3.2-90B-Vision-Instruct",
"sambanova/Llama-3.2-11B-Vision-Instruct",
"sambanova/Meta-Llama-3.2-3B-Instruct",
"sambanova/Meta-Llama-3.2-1B-Instruct",
],
}
DEFAULT_LLM_MODEL = "gpt-4o-mini"
JSON_URL = "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json"
LITELLM_PARAMS = ["api_key", "api_base", "api_version"]
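
To illustrate how the ENV_VARS table is consumed: entries carrying a "prompt" are asked for interactively, while entries flagged "default" are written through as-is. This sketch mirrors the loop in create_crew later in this PR:

provider = "ollama"
collected: dict[str, str] = {}
for details in ENV_VARS[provider]:
    if details.get("default", False):
        # Copy default key/value pairs straight through (e.g. API_BASE).
        collected.update(
            {k: v for k, v in details.items() if k not in ("prompt", "key_name", "default")}
        )
    elif "key_name" in details:
        value = input(f"{details['prompt']}: ")
        if value.strip():
            collected[details["key_name"]] = value
print(collected)  # e.g. {"API_BASE": "http://localhost:11434"} for ollama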


@@ -0,0 +1,273 @@
from pathlib import Path
import shutil
import sys
import click
from crewai.cli.constants import ENV_VARS, MODELS
from crewai.cli.provider import (
get_provider_data,
select_model,
select_provider,
)
from crewai.cli.utils import copy_template, load_env_vars, write_env_file
def create_folder_structure(name, parent_folder=None):
import keyword
import re
name = name.rstrip("/")
if not name.strip():
raise ValueError("Project name cannot be empty or contain only whitespace")
folder_name = name.replace(" ", "_").replace("-", "_").lower()
folder_name = re.sub(r"[^a-zA-Z0-9_]", "", folder_name)
# Check if the name starts with invalid characters or is primarily invalid
if re.match(r"^[^a-zA-Z0-9_-]+", name):
raise ValueError(
f"Project name '{name}' contains no valid characters for a Python module name"
)
if not folder_name:
raise ValueError(
f"Project name '{name}' contains no valid characters for a Python module name"
)
if folder_name[0].isdigit():
raise ValueError(
f"Project name '{name}' would generate folder name '{folder_name}' which cannot start with a digit (invalid Python module name)"
)
if keyword.iskeyword(folder_name):
raise ValueError(
f"Project name '{name}' would generate folder name '{folder_name}' which is a reserved Python keyword"
)
if not folder_name.isidentifier():
raise ValueError(
f"Project name '{name}' would generate invalid Python module name '{folder_name}'"
)
class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "")
class_name = re.sub(r"[^a-zA-Z0-9_]", "", class_name)
if not class_name:
raise ValueError(
f"Project name '{name}' contains no valid characters for a Python class name"
)
if class_name[0].isdigit():
raise ValueError(
f"Project name '{name}' would generate class name '{class_name}' which cannot start with a digit"
)
# Check if the original name (before title casing) is a keyword
original_name_clean = re.sub(
r"[^a-zA-Z0-9_]", "", name.replace("_", "").replace("-", "").lower()
)
if (
keyword.iskeyword(original_name_clean)
or keyword.iskeyword(class_name)
or class_name in ("True", "False", "None")
):
raise ValueError(
f"Project name '{name}' would generate class name '{class_name}' which is a reserved Python keyword"
)
if not class_name.isidentifier():
raise ValueError(
f"Project name '{name}' would generate invalid Python class name '{class_name}'"
)
if parent_folder:
folder_path = Path(parent_folder) / folder_name
else:
folder_path = Path(folder_name)
if folder_path.exists():
if not click.confirm(
f"Folder {folder_name} already exists. Do you want to override it?"
):
click.secho("Operation cancelled.", fg="yellow")
sys.exit(0)
click.secho(f"Overriding folder {folder_name}...", fg="green", bold=True)
shutil.rmtree(folder_path) # Delete the existing folder and its contents
click.secho(
f"Creating {'crew' if parent_folder else 'folder'} {folder_name}...",
fg="green",
bold=True,
)
folder_path.mkdir(parents=True)
(folder_path / "tests").mkdir(exist_ok=True)
(folder_path / "knowledge").mkdir(exist_ok=True)
if not parent_folder:
(folder_path / "src" / folder_name).mkdir(parents=True)
(folder_path / "src" / folder_name / "tools").mkdir(parents=True)
(folder_path / "src" / folder_name / "config").mkdir(parents=True)
return folder_path, folder_name, class_name
def copy_template_files(folder_path, name, class_name, parent_folder):
package_dir = Path(__file__).parent
templates_dir = package_dir / "templates" / "crew"
root_template_files = (
[
".gitignore",
"pyproject.toml",
"README.md",
"knowledge/user_preference.txt",
]
if not parent_folder
else []
)
tools_template_files = ["tools/custom_tool.py", "tools/__init__.py"]
config_template_files = ["config/agents.yaml", "config/tasks.yaml"]
src_template_files = (
["__init__.py", "main.py", "crew.py"] if not parent_folder else ["crew.py"]
)
for file_name in root_template_files:
src_file = templates_dir / file_name
dst_file = folder_path / file_name
copy_template(src_file, dst_file, name, class_name, folder_path.name)
src_folder = (
folder_path / "src" / folder_path.name if not parent_folder else folder_path
)
for file_name in src_template_files:
src_file = templates_dir / file_name
dst_file = src_folder / file_name
copy_template(src_file, dst_file, name, class_name, folder_path.name)
if not parent_folder:
for file_name in tools_template_files + config_template_files:
src_file = templates_dir / file_name
dst_file = src_folder / file_name
copy_template(src_file, dst_file, name, class_name, folder_path.name)
def create_crew(name, provider=None, skip_provider=False, parent_folder=None):
folder_path, folder_name, class_name = create_folder_structure(name, parent_folder)
env_vars = load_env_vars(folder_path)
if not skip_provider:
if not provider:
provider_models = get_provider_data()
if not provider_models:
return
existing_provider = None
for provider, env_keys in ENV_VARS.items():
if any(
"key_name" in details and details["key_name"] in env_vars
for details in env_keys
):
existing_provider = provider
break
if existing_provider:
if not click.confirm(
f"Found existing environment variable configuration for {existing_provider.capitalize()}. Do you want to override it?"
):
click.secho("Keeping existing provider configuration.", fg="yellow")
return
provider_models = get_provider_data()
if not provider_models:
return
while True:
selected_provider = select_provider(provider_models)
if selected_provider is None: # User typed 'q'
click.secho("Exiting...", fg="yellow")
sys.exit(0)
if selected_provider: # Valid selection
break
click.secho(
"No provider selected. Please try again or press 'q' to exit.", fg="red"
)
# Check if the selected provider has predefined models
if MODELS.get(selected_provider):
while True:
selected_model = select_model(selected_provider, provider_models)
if selected_model is None: # User typed 'q'
click.secho("Exiting...", fg="yellow")
sys.exit(0)
if selected_model: # Valid selection
break
click.secho(
"No model selected. Please try again or press 'q' to exit.",
fg="red",
)
env_vars["MODEL"] = selected_model
# Check if the selected provider requires API keys
if selected_provider in ENV_VARS:
provider_env_vars = ENV_VARS[selected_provider]
for details in provider_env_vars:
if details.get("default", False):
# Automatically add default key-value pairs
for key, value in details.items():
if key not in ["prompt", "key_name", "default"]:
env_vars[key] = value
elif "key_name" in details:
# Prompt for non-default key-value pairs
prompt = details["prompt"]
key_name = details["key_name"]
api_key_value = click.prompt(prompt, default="", show_default=False)
if api_key_value.strip():
env_vars[key_name] = api_key_value
if env_vars:
write_env_file(folder_path, env_vars)
click.secho("API keys and model saved to .env file", fg="green")
else:
click.secho(
"No API keys provided. Skipping .env file creation.", fg="yellow"
)
click.secho(f"Selected model: {env_vars.get('MODEL', 'N/A')}", fg="green")
package_dir = Path(__file__).parent
templates_dir = package_dir / "templates" / "crew"
root_template_files = (
[".gitignore", "pyproject.toml", "README.md", "knowledge/user_preference.txt"]
if not parent_folder
else []
)
tools_template_files = ["tools/custom_tool.py", "tools/__init__.py"]
config_template_files = ["config/agents.yaml", "config/tasks.yaml"]
src_template_files = (
["__init__.py", "main.py", "crew.py"] if not parent_folder else ["crew.py"]
)
for file_name in root_template_files:
src_file = templates_dir / file_name
dst_file = folder_path / file_name
copy_template(src_file, dst_file, name, class_name, folder_name)
src_folder = folder_path / "src" / folder_name if not parent_folder else folder_path
for file_name in src_template_files:
src_file = templates_dir / file_name
dst_file = src_folder / file_name
copy_template(src_file, dst_file, name, class_name, folder_name)
if not parent_folder:
for file_name in tools_template_files + config_template_files:
src_file = templates_dir / file_name
dst_file = src_folder / file_name
copy_template(src_file, dst_file, name, class_name, folder_name)
click.secho(f"Crew {name} created successfully!", fg="green", bold=True)


@@ -0,0 +1,99 @@
from pathlib import Path
import click
from crewai.telemetry import Telemetry
def create_flow(name):
"""Create a new flow."""
folder_name = name.replace(" ", "_").replace("-", "_").lower()
class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "")
click.secho(f"Creating flow {folder_name}...", fg="green", bold=True)
project_root = Path(folder_name)
if project_root.exists():
click.secho(f"Error: Folder {folder_name} already exists.", fg="red")
return
# Initialize telemetry
telemetry = Telemetry()
telemetry.flow_creation_span(class_name)
# Create directory structure
(project_root / "src" / folder_name).mkdir(parents=True)
(project_root / "src" / folder_name / "crews").mkdir(parents=True)
(project_root / "src" / folder_name / "tools").mkdir(parents=True)
(project_root / "tests").mkdir(exist_ok=True)
# Create .env file
with open(project_root / ".env", "w") as file:
file.write("OPENAI_API_KEY=YOUR_API_KEY")
package_dir = Path(__file__).parent
templates_dir = package_dir / "templates" / "flow"
# List of template files to copy
root_template_files = [".gitignore", "pyproject.toml", "README.md"]
src_template_files = ["__init__.py", "main.py"]
tools_template_files = ["tools/__init__.py", "tools/custom_tool.py"]
crew_folders = [
"poem_crew",
]
def process_file(src_file, dst_file):
if src_file.suffix in [".pyc", ".pyo", ".pyd"]:
return
try:
with open(src_file, "r", encoding="utf-8") as file:
content = file.read()
except Exception as e:
click.secho(f"Error processing file {src_file}: {e}", fg="red")
return
content = content.replace("{{name}}", name)
content = content.replace("{{flow_name}}", class_name)
content = content.replace("{{folder_name}}", folder_name)
with open(dst_file, "w") as file:
file.write(content)
# Copy and process root template files
for file_name in root_template_files:
src_file = templates_dir / file_name
dst_file = project_root / file_name
process_file(src_file, dst_file)
# Copy and process src template files
for file_name in src_template_files:
src_file = templates_dir / file_name
dst_file = project_root / "src" / folder_name / file_name
process_file(src_file, dst_file)
# Copy tools files
for file_name in tools_template_files:
src_file = templates_dir / file_name
dst_file = project_root / "src" / folder_name / file_name
process_file(src_file, dst_file)
# Copy crew folders
for crew_folder in crew_folders:
src_crew_folder = templates_dir / "crews" / crew_folder
dst_crew_folder = project_root / "src" / folder_name / "crews" / crew_folder
if src_crew_folder.exists():
for src_file in src_crew_folder.rglob("*"):
if src_file.is_file():
relative_path = src_file.relative_to(src_crew_folder)
dst_file = dst_crew_folder / relative_path
dst_file.parent.mkdir(parents=True, exist_ok=True)
process_file(src_file, dst_file)
else:
click.secho(
f"Warning: Crew folder {crew_folder} not found in template.",
fg="yellow",
)
click.secho(f"Flow {name} created successfully!", fg="green", bold=True)


@@ -0,0 +1,537 @@
import json
from pathlib import Path
import platform
import re
import sys
import threading
import time
from typing import Any, Final, Literal
import click
from packaging import version
import tomli
from crewai.cli.utils import read_toml
from crewai.cli.version import get_crewai_version
from crewai.crew import Crew
from crewai.llm import LLM, BaseLLM
from crewai.types.crew_chat import ChatInputField, ChatInputs
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.printer import Printer
from crewai.utilities.types import LLMMessage
_printer = Printer()
MIN_REQUIRED_VERSION: Final[Literal["0.98.0"]] = "0.98.0"
def check_conversational_crews_version(
crewai_version: str, pyproject_data: dict
) -> bool:
"""
Check if the installed crewAI version supports conversational crews.
Args:
crewai_version: The current version of crewAI.
pyproject_data: Dictionary containing pyproject.toml data.
Returns:
bool: True if version check passes, False otherwise.
"""
try:
if version.parse(crewai_version) < version.parse(MIN_REQUIRED_VERSION):
click.secho(
"You are using an older version of crewAI that doesn't support conversational crews. "
"Run 'uv upgrade crewai' to get the latest version.",
fg="red",
)
return False
except version.InvalidVersion:
click.secho("Invalid crewAI version format detected.", fg="red")
return False
return True
def run_chat():
"""
Runs an interactive chat loop using the Crew's chat LLM with function calling.
Incorporates crew_name, crew_description, and input fields to build a tool schema.
Exits if crew_name or crew_description are missing.
"""
crewai_version = get_crewai_version()
pyproject_data = read_toml()
if not check_conversational_crews_version(crewai_version, pyproject_data):
return
crew, crew_name = load_crew_and_name()
chat_llm = initialize_chat_llm(crew)
if not chat_llm:
return
# Indicate that the crew is being analyzed
click.secho(
"\nAnalyzing crew and required inputs - this may take 3 to 30 seconds "
"depending on the complexity of your crew.",
fg="white",
)
# Start loading indicator
loading_complete = threading.Event()
loading_thread = threading.Thread(target=show_loading, args=(loading_complete,))
loading_thread.start()
try:
crew_chat_inputs = generate_crew_chat_inputs(crew, crew_name, chat_llm)
crew_tool_schema = generate_crew_tool_schema(crew_chat_inputs)
system_message = build_system_message(crew_chat_inputs)
# Call the LLM to generate the introductory message
introductory_message = chat_llm.call(
messages=[{"role": "system", "content": system_message}]
)
finally:
# Stop loading indicator
loading_complete.set()
loading_thread.join()
# Indicate that the analysis is complete
click.secho("\nFinished analyzing crew.\n", fg="white")
click.secho(f"Assistant: {introductory_message}\n", fg="green")
messages = [
{"role": "system", "content": system_message},
{"role": "assistant", "content": introductory_message},
]
available_functions = {
crew_chat_inputs.crew_name: create_tool_function(crew, messages),
}
chat_loop(chat_llm, messages, crew_tool_schema, available_functions)
def show_loading(event: threading.Event):
"""Display animated loading dots while processing."""
while not event.is_set():
_printer.print(".", end="")
time.sleep(1)
_printer.print("")
def initialize_chat_llm(crew: Crew) -> LLM | BaseLLM | None:
"""Initializes the chat LLM and handles exceptions."""
try:
return create_llm(crew.chat_llm)
except Exception as e:
click.secho(
f"Unable to find a Chat LLM. Please make sure you set chat_llm on the crew: {e}",
fg="red",
)
return None
def build_system_message(crew_chat_inputs: ChatInputs) -> str:
"""Builds the initial system message for the chat."""
required_fields_str = (
", ".join(
f"{field.name} (desc: {field.description or 'n/a'})"
for field in crew_chat_inputs.inputs
)
or "(No required fields detected)"
)
return (
"You are a helpful AI assistant for the CrewAI platform. "
"Your primary purpose is to assist users with the crew's specific tasks. "
"You can answer general questions, but should guide users back to the crew's purpose afterward. "
"For example, after answering a general question, remind the user of your main purpose, such as generating a research report, and prompt them to specify a topic or task related to the crew's purpose. "
"You have a function (tool) you can call by name if you have all required inputs. "
f"Those required inputs are: {required_fields_str}. "
"Once you have them, call the function. "
"Please keep your responses concise and friendly. "
"If a user asks a question outside the crew's scope, provide a brief answer and remind them of the crew's purpose. "
"After calling the tool, be prepared to take user feedback and make adjustments as needed. "
"If you are ever unsure about a user's request or need clarification, ask the user for more information. "
"Before doing anything else, introduce yourself with a friendly message like: 'Hey! I'm here to help you with [crew's purpose]. Could you please provide me with [inputs] so we can get started?' "
"For example: 'Hey! I'm here to help you with uncovering and reporting cutting-edge developments through thorough research and detailed analysis. Could you please provide me with a topic you're interested in? This will help us generate a comprehensive research report and detailed analysis.'"
f"\nCrew Name: {crew_chat_inputs.crew_name}"
f"\nCrew Description: {crew_chat_inputs.crew_description}"
)
def create_tool_function(crew: Crew, messages: list[dict[str, str]]) -> Any:
"""Creates a wrapper function for running the crew tool with messages."""
def run_crew_tool_with_messages(**kwargs):
return run_crew_tool(crew, messages, **kwargs)
return run_crew_tool_with_messages
def flush_input():
"""Flush any pending input from the user."""
if platform.system() == "Windows":
# Windows platform
import msvcrt
while msvcrt.kbhit():
msvcrt.getch()
else:
# Unix-like platforms (Linux, macOS)
import termios
termios.tcflush(sys.stdin, termios.TCIFLUSH)
def chat_loop(chat_llm, messages, crew_tool_schema, available_functions):
"""Main chat loop for interacting with the user."""
while True:
try:
# Flush any pending input before accepting new input
flush_input()
user_input = get_user_input()
handle_user_input(
user_input, chat_llm, messages, crew_tool_schema, available_functions
)
except KeyboardInterrupt: # noqa: PERF203
click.echo("\nExiting chat. Goodbye!")
break
except Exception as e:
click.secho(f"An error occurred: {e}", fg="red")
break
def get_user_input() -> str:
"""Collect multi-line user input with exit handling."""
click.secho(
"\nYou (type your message below. Press 'Enter' twice when you're done):",
fg="blue",
)
user_input_lines = []
while True:
line = input()
if line.strip().lower() == "exit":
return "exit"
if line == "":
break
user_input_lines.append(line)
return "\n".join(user_input_lines)
def handle_user_input(
user_input: str,
chat_llm: LLM,
messages: list[LLMMessage],
crew_tool_schema: dict[str, Any],
available_functions: dict[str, Any],
) -> None:
if user_input.strip().lower() == "exit":
click.echo("Exiting chat. Goodbye!")
return
if not user_input.strip():
click.echo("Empty message. Please provide input or type 'exit' to quit.")
return
messages.append({"role": "user", "content": user_input})
# Indicate that assistant is processing
click.echo()
click.secho("Assistant is processing your input. Please wait...", fg="green")
# Process assistant's response
final_response = chat_llm.call(
messages=messages,
tools=[crew_tool_schema],
available_functions=available_functions,
)
messages.append({"role": "assistant", "content": final_response})
click.secho(f"\nAssistant: {final_response}\n", fg="green")
def generate_crew_tool_schema(crew_inputs: ChatInputs) -> dict:
"""
Dynamically build a Littellm 'function' schema for the given crew.
crew_name: The name of the crew (used for the function 'name').
crew_inputs: A ChatInputs object containing crew_description
and a list of input fields (each with a name & description).
"""
properties = {}
for field in crew_inputs.inputs:
properties[field.name] = {
"type": "string",
"description": field.description or "No description provided",
}
required_fields = [field.name for field in crew_inputs.inputs]
return {
"type": "function",
"function": {
"name": crew_inputs.crew_name,
"description": crew_inputs.crew_description or "No crew description",
"parameters": {
"type": "object",
"properties": properties,
"required": required_fields,
},
},
}
def run_crew_tool(crew: Crew, messages: list[dict[str, str]], **kwargs):
"""
Runs the crew using crew.kickoff(inputs=kwargs) and returns the output.
Args:
crew (Crew): The crew instance to run.
messages (List[Dict[str, str]]): The chat messages up to this point.
**kwargs: The inputs collected from the user.
Returns:
str: The output from the crew's execution.
Raises:
SystemExit: Exits the chat if an error occurs during crew execution.
"""
try:
# Serialize 'messages' to JSON string before adding to kwargs
kwargs["crew_chat_messages"] = json.dumps(messages)
# Run the crew with the provided inputs
crew_output = crew.kickoff(inputs=kwargs)
# Convert CrewOutput to a string to send back to the user
return str(crew_output)
except Exception as e:
# Exit the chat and show the error message
click.secho("An error occurred while running the crew:", fg="red")
click.secho(str(e), fg="red")
sys.exit(1)
def load_crew_and_name() -> tuple[Crew, str]:
"""
Loads the crew by importing the crew class from the user's project.
Returns:
Tuple[Crew, str]: A tuple containing the Crew instance and the name of the crew.
"""
# Get the current working directory
cwd = Path.cwd()
# Path to the pyproject.toml file
pyproject_path = cwd / "pyproject.toml"
if not pyproject_path.exists():
raise FileNotFoundError("pyproject.toml not found in the current directory.")
# Load the pyproject.toml file using 'tomli'
with pyproject_path.open("rb") as f:
pyproject_data = tomli.load(f)
# Get the project name from the 'project' section
project_name = pyproject_data["project"]["name"]
folder_name = project_name
# Derive the crew class name from the project name
# E.g., if project_name is 'my_project', crew_class_name is 'MyProject'
crew_class_name = project_name.replace("_", " ").title().replace(" ", "")
# Add the 'src' directory to sys.path
src_path = cwd / "src"
if str(src_path) not in sys.path:
sys.path.insert(0, str(src_path))
# Import the crew module
crew_module_name = f"{folder_name}.crew"
try:
crew_module = __import__(crew_module_name, fromlist=[crew_class_name])
except ImportError as e:
raise ImportError(
f"Failed to import crew module {crew_module_name}: {e}"
) from e
# Get the crew class from the module
try:
crew_class = getattr(crew_module, crew_class_name)
except AttributeError as e:
raise AttributeError(
f"Crew class {crew_class_name} not found in module {crew_module_name}"
) from e
# Instantiate the crew
crew_instance = crew_class().crew()
return crew_instance, crew_class_name
def generate_crew_chat_inputs(crew: Crew, crew_name: str, chat_llm) -> ChatInputs:
"""
Generates the ChatInputs required for the crew by analyzing the tasks and agents.
Args:
crew (Crew): The crew object containing tasks and agents.
crew_name (str): The name of the crew.
chat_llm: The chat language model to use for AI calls.
Returns:
ChatInputs: An object containing the crew's name, description, and input fields.
"""
# Extract placeholders from tasks and agents
required_inputs = fetch_required_inputs(crew)
# Generate descriptions for each input using AI
input_fields = []
for input_name in required_inputs:
description = generate_input_description_with_ai(input_name, crew, chat_llm)
input_fields.append(ChatInputField(name=input_name, description=description))
# Generate crew description using AI
crew_description = generate_crew_description_with_ai(crew, chat_llm)
return ChatInputs(
crew_name=crew_name, crew_description=crew_description, inputs=input_fields
)
def fetch_required_inputs(crew: Crew) -> set[str]:
"""Extracts placeholders from the crew's tasks and agents.
Args:
crew (Crew): The crew object.
Returns:
Set[str]: A set of placeholder names.
"""
placeholder_pattern = re.compile(r"\{(.+?)}")
required_inputs: set[str] = set()
# Scan tasks
for task in crew.tasks:
text = f"{task.description or ''} {task.expected_output or ''}"
required_inputs.update(placeholder_pattern.findall(text))
# Scan agents
for agent in crew.agents:
text = f"{agent.role or ''} {agent.goal or ''} {agent.backstory or ''}"
required_inputs.update(placeholder_pattern.findall(text))
return required_inputs
def generate_input_description_with_ai(input_name: str, crew: Crew, chat_llm) -> str:
"""
Generates an input description using AI based on the context of the crew.
Args:
input_name (str): The name of the input placeholder.
crew (Crew): The crew object.
chat_llm: The chat language model to use for AI calls.
Returns:
str: A concise description of the input.
"""
# Gather context from tasks and agents where the input is used
context_texts = []
placeholder_pattern = re.compile(r"\{(.+?)}")
for task in crew.tasks:
if (
f"{{{input_name}}}" in task.description
or f"{{{input_name}}}" in task.expected_output
):
# Replace placeholders with input names
task_description = placeholder_pattern.sub(
lambda m: m.group(1), task.description or ""
)
expected_output = placeholder_pattern.sub(
lambda m: m.group(1), task.expected_output or ""
)
context_texts.append(f"Task Description: {task_description}")
context_texts.append(f"Expected Output: {expected_output}")
for agent in crew.agents:
if (
f"{{{input_name}}}" in agent.role
or f"{{{input_name}}}" in agent.goal
or f"{{{input_name}}}" in agent.backstory
):
# Replace placeholders with input names
agent_role = placeholder_pattern.sub(lambda m: m.group(1), agent.role or "")
agent_goal = placeholder_pattern.sub(lambda m: m.group(1), agent.goal or "")
agent_backstory = placeholder_pattern.sub(
lambda m: m.group(1), agent.backstory or ""
)
context_texts.append(f"Agent Role: {agent_role}")
context_texts.append(f"Agent Goal: {agent_goal}")
context_texts.append(f"Agent Backstory: {agent_backstory}")
context = "\n".join(context_texts)
if not context:
        # No context found for this input; fail loudly rather than invent a description
raise ValueError(f"No context found for input '{input_name}'.")
prompt = (
f"Based on the following context, write a concise description (15 words or less) of the input '{input_name}'.\n"
"Provide only the description, without any extra text or labels. Do not include placeholders like '{topic}' in the description.\n"
"Context:\n"
f"{context}"
)
response = chat_llm.call(messages=[{"role": "user", "content": prompt}])
return response.strip()
def generate_crew_description_with_ai(crew: Crew, chat_llm) -> str:
"""
Generates a brief description of the crew using AI.
Args:
crew (Crew): The crew object.
chat_llm: The chat language model to use for AI calls.
Returns:
str: A concise description of the crew's purpose (15 words or less).
"""
# Gather context from tasks and agents
context_texts = []
placeholder_pattern = re.compile(r"\{(.+?)}")
for task in crew.tasks:
# Replace placeholders with input names
task_description = placeholder_pattern.sub(
lambda m: m.group(1), task.description or ""
)
expected_output = placeholder_pattern.sub(
lambda m: m.group(1), task.expected_output or ""
)
context_texts.append(f"Task Description: {task_description}")
context_texts.append(f"Expected Output: {expected_output}")
for agent in crew.agents:
# Replace placeholders with input names
agent_role = placeholder_pattern.sub(lambda m: m.group(1), agent.role or "")
agent_goal = placeholder_pattern.sub(lambda m: m.group(1), agent.goal or "")
agent_backstory = placeholder_pattern.sub(
lambda m: m.group(1), agent.backstory or ""
)
context_texts.append(f"Agent Role: {agent_role}")
context_texts.append(f"Agent Goal: {agent_goal}")
context_texts.append(f"Agent Backstory: {agent_backstory}")
context = "\n".join(context_texts)
if not context:
raise ValueError("No context found for generating crew description.")
prompt = (
"Based on the following context, write a concise, action-oriented description (15 words or less) of the crew's purpose.\n"
"Provide only the description, without any extra text or labels. Do not include placeholders like '{topic}' in the description.\n"
"Context:\n"
f"{context}"
)
response = chat_llm.call(messages=[{"role": "user", "content": prompt}])
return response.strip()
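
A sketch of the function schema these helpers produce for a crew with a single `{topic}` placeholder; the crew name and descriptions here are illustrative:

from crewai.types.crew_chat import ChatInputField, ChatInputs

example_inputs = ChatInputs(
    crew_name="research_crew",
    crew_description="Researches a topic and writes a report",
    inputs=[ChatInputField(name="topic", description="Subject to research")],
)
schema = generate_crew_tool_schema(example_inputs)
# {'type': 'function',
#  'function': {'name': 'research_crew',
#               'description': 'Researches a topic and writes a report',
#               'parameters': {'type': 'object',
#                              'properties': {'topic': {'type': 'string',
#                                                       'description': 'Subject to research'}},
#                              'required': ['topic']}}}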


@@ -0,0 +1,278 @@
from typing import Any
from rich.console import Console
from crewai.cli import git
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli.utils import fetch_and_json_env_file, get_project_name
console = Console()
class DeployCommand(BaseCommand, PlusAPIMixin):
"""
A class to handle deployment-related operations for CrewAI projects.
"""
def __init__(self):
"""
Initialize the DeployCommand with project name and API client.
"""
BaseCommand.__init__(self)
PlusAPIMixin.__init__(self, telemetry=self._telemetry)
self.project_name = get_project_name(require=True)
def _standard_no_param_error_message(self) -> None:
"""
Display a standard error message when no UUID or project name is available.
"""
console.print(
"No UUID provided, project pyproject.toml not found or with error.",
style="bold red",
)
def _display_deployment_info(self, json_response: dict[str, Any]) -> None:
"""
Display deployment information.
Args:
json_response (Dict[str, Any]): The deployment information to display.
"""
console.print("Deploying the crew...\n", style="bold blue")
for key, value in json_response.items():
console.print(f"{key.title()}: [green]{value}[/green]")
console.print("\nTo check the status of the deployment, run:")
console.print("crewai deploy status")
console.print(" or")
console.print(f'crewai deploy status --uuid "{json_response["uuid"]}"')
def _display_logs(self, log_messages: list[dict[str, Any]]) -> None:
"""
Display log messages.
Args:
log_messages (List[Dict[str, Any]]): The log messages to display.
"""
for log_message in log_messages:
console.print(
f"{log_message['timestamp']} - {log_message['level']}: {log_message['message']}"
)
def deploy(self, uuid: str | None = None) -> None:
"""
Deploy a crew using either UUID or project name.
Args:
uuid (Optional[str]): The UUID of the crew to deploy.
"""
self._start_deployment_span = self._telemetry.start_deployment_span(uuid)
console.print("Starting deployment...", style="bold blue")
if uuid:
response = self.plus_api_client.deploy_by_uuid(uuid)
elif self.project_name:
response = self.plus_api_client.deploy_by_name(self.project_name)
else:
self._standard_no_param_error_message()
return
self._validate_response(response)
self._display_deployment_info(response.json())
def create_crew(self, confirm: bool = False) -> None:
"""
Create a new crew deployment.
"""
self._create_crew_deployment_span = (
self._telemetry.create_crew_deployment_span()
)
console.print("Creating deployment...", style="bold blue")
env_vars = fetch_and_json_env_file()
try:
remote_repo_url = git.Repository().origin_url()
except ValueError:
remote_repo_url = None
if remote_repo_url is None:
console.print("No remote repository URL found.", style="bold red")
console.print(
"Please ensure your project has a valid remote repository.",
style="yellow",
)
return
self._confirm_input(env_vars, remote_repo_url, confirm)
payload = self._create_payload(env_vars, remote_repo_url)
response = self.plus_api_client.create_crew(payload)
self._validate_response(response)
self._display_creation_success(response.json())
def _confirm_input(
self, env_vars: dict[str, str], remote_repo_url: str, confirm: bool
) -> None:
"""
Confirm input parameters with the user.
Args:
env_vars (Dict[str, str]): Environment variables.
remote_repo_url (str): Remote repository URL.
confirm (bool): Whether to confirm input.
"""
if not confirm:
input(f"Press Enter to continue with the following Env vars: {env_vars}")
input(
f"Press Enter to continue with the following remote repository: {remote_repo_url}\n"
)
def _create_payload(
self,
env_vars: dict[str, str],
remote_repo_url: str,
) -> dict[str, Any]:
"""
Create the payload for crew creation.
Args:
remote_repo_url (str): Remote repository URL.
env_vars (Dict[str, str]): Environment variables.
Returns:
Dict[str, Any]: The payload for crew creation.
"""
return {
"deploy": {
"name": self.project_name,
"repo_clone_url": remote_repo_url,
"env": env_vars,
}
}
def _display_creation_success(self, json_response: dict[str, Any]) -> None:
"""
Display success message after crew creation.
Args:
json_response (Dict[str, Any]): The response containing crew information.
"""
console.print("Deployment created successfully!\n", style="bold green")
console.print(
f"Name: {self.project_name} ({json_response['uuid']})", style="bold green"
)
console.print(f"Status: {json_response['status']}", style="bold green")
console.print("\nTo (re)deploy the crew, run:")
console.print("crewai deploy push")
console.print(" or")
console.print(f"crewai deploy push --uuid {json_response['uuid']}")
def list_crews(self) -> None:
"""
List all available crews.
"""
console.print("Listing all Crews\n", style="bold blue")
response = self.plus_api_client.list_crews()
json_response = response.json()
if response.status_code == 200:
self._display_crews(json_response)
else:
self._display_no_crews_message()
def _display_crews(self, crews_data: list[dict[str, Any]]) -> None:
"""
Display the list of crews.
Args:
crews_data (List[Dict[str, Any]]): List of crew data to display.
"""
for crew_data in crews_data:
console.print(
f"- {crew_data['name']} ({crew_data['uuid']}) [blue]{crew_data['status']}[/blue]"
)
def _display_no_crews_message(self) -> None:
"""
Display a message when no crews are available.
"""
console.print("You don't have any Crews yet. Let's create one!", style="yellow")
console.print(" crewai create crew <crew_name>", style="green")
def get_crew_status(self, uuid: str | None = None) -> None:
"""
Get the status of a crew.
Args:
uuid (Optional[str]): The UUID of the crew to check.
"""
console.print("Fetching deployment status...", style="bold blue")
if uuid:
response = self.plus_api_client.crew_status_by_uuid(uuid)
elif self.project_name:
response = self.plus_api_client.crew_status_by_name(self.project_name)
else:
self._standard_no_param_error_message()
return
self._validate_response(response)
self._display_crew_status(response.json())
def _display_crew_status(self, status_data: dict[str, str]) -> None:
"""
Display the status of a crew.
Args:
status_data (Dict[str, str]): The status data to display.
"""
console.print(f"Name:\t {status_data['name']}")
console.print(f"Status:\t {status_data['status']}")
def get_crew_logs(self, uuid: str | None, log_type: str = "deployment") -> None:
"""
Get logs for a crew.
Args:
uuid (Optional[str]): The UUID of the crew to get logs for.
log_type (str): The type of logs to retrieve (default: "deployment").
"""
self._get_crew_logs_span = self._telemetry.get_crew_logs_span(uuid, log_type)
console.print(f"Fetching {log_type} logs...", style="bold blue")
if uuid:
response = self.plus_api_client.crew_by_uuid(uuid, log_type)
elif self.project_name:
response = self.plus_api_client.crew_by_name(self.project_name, log_type)
else:
self._standard_no_param_error_message()
return
self._validate_response(response)
self._display_logs(response.json())
def remove_crew(self, uuid: str | None) -> None:
"""
Remove a crew deployment.
Args:
uuid (Optional[str]): The UUID of the crew to remove.
"""
self._remove_crew_span = self._telemetry.remove_crew_span(uuid)
console.print("Removing deployment...", style="bold blue")
if uuid:
response = self.plus_api_client.delete_crew_by_uuid(uuid)
elif self.project_name:
response = self.plus_api_client.delete_crew_by_name(self.project_name)
else:
self._standard_no_param_error_message()
return
if response.status_code == 204:
console.print(
f"Crew '{self.project_name}' removed successfully.", style="green"
)
else:
console.print(
f"Failed to remove crew '{self.project_name}'", style="bold red"
)
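
Usage sketch tying this back to the CLI wiring at the top of this PR; passing `confirm=True` skips the interactive Enter prompts:

cmd = DeployCommand()          # resolves project name from pyproject.toml
cmd.create_crew(confirm=True)  # registers the deployment
cmd.deploy()                   # deploys by project name (no UUID given)
cmd.get_crew_status()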


@@ -0,0 +1,101 @@
from typing import Any
import requests
from requests.exceptions import JSONDecodeError, RequestException
from rich.console import Console
from crewai.cli.command import BaseCommand
from crewai.cli.settings.main import SettingsCommand
from crewai.cli.version import get_crewai_version
console = Console()
class EnterpriseConfigureCommand(BaseCommand):
def __init__(self):
super().__init__()
self.settings_command = SettingsCommand()
def configure(self, enterprise_url: str) -> None:
try:
enterprise_url = enterprise_url.rstrip("/")
oauth_config = self._fetch_oauth_config(enterprise_url)
self._update_oauth_settings(enterprise_url, oauth_config)
console.print(
f"✅ Successfully configured CrewAI AMP with OAuth2 settings from {enterprise_url}",
style="bold green",
)
except Exception as e:
console.print(
f"❌ Failed to configure Enterprise settings: {e!s}", style="bold red"
)
raise SystemExit(1) from e
def _fetch_oauth_config(self, enterprise_url: str) -> dict[str, Any]:
oauth_endpoint = f"{enterprise_url}/auth/parameters"
try:
console.print(f"🔄 Fetching OAuth2 configuration from {oauth_endpoint}...")
headers = {
"Content-Type": "application/json",
"User-Agent": f"CrewAI-CLI/{get_crewai_version()}",
"X-Crewai-Version": get_crewai_version(),
}
response = requests.get(oauth_endpoint, timeout=30, headers=headers)
response.raise_for_status()
try:
oauth_config = response.json()
except JSONDecodeError as e:
raise ValueError(f"Invalid JSON response from {oauth_endpoint}") from e
required_fields = [
"audience",
"domain",
"device_authorization_client_id",
"provider",
]
missing_fields = [
field for field in required_fields if field not in oauth_config
]
if missing_fields:
raise ValueError(
f"Missing required fields in OAuth2 configuration: {', '.join(missing_fields)}"
)
console.print(
"✅ Successfully retrieved OAuth2 configuration", style="green"
)
return oauth_config
except RequestException as e:
raise ValueError(f"Failed to connect to enterprise URL: {e!s}") from e
except Exception as e:
raise ValueError(f"Error fetching OAuth2 configuration: {e!s}") from e
def _update_oauth_settings(
self, enterprise_url: str, oauth_config: dict[str, Any]
) -> None:
try:
config_mapping = {
"enterprise_base_url": enterprise_url,
"oauth2_provider": oauth_config["provider"],
"oauth2_audience": oauth_config["audience"],
"oauth2_client_id": oauth_config["device_authorization_client_id"],
"oauth2_domain": oauth_config["domain"],
}
console.print("🔄 Updating local OAuth2 configuration...")
for key, value in config_mapping.items():
self.settings_command.set(key, value)
console.print(f" ✓ Set {key}: {value}", style="dim")
except Exception as e:
raise ValueError(f"Failed to update OAuth2 settings: {e!s}") from e
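
For reference, the shape of the JSON that `<enterprise_url>/auth/parameters` must return for configuration to succeed; the values below are placeholders, not real credentials:

example_oauth_config = {
    "provider": "workos",
    "audience": "client_01_PLACEHOLDER",
    "device_authorization_client_id": "client_01_PLACEHOLDER",
    "domain": "login.example.com",
}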


@@ -0,0 +1,30 @@
import subprocess
import click
def evaluate_crew(n_iterations: int, model: str) -> None:
"""
    Test and evaluate the crew by running a command in the UV environment.
Args:
n_iterations (int): The number of iterations to test the crew.
model (str): The model to test the crew with.
"""
command = ["uv", "run", "test", str(n_iterations), model]
try:
if n_iterations <= 0:
raise ValueError("The number of iterations must be a positive integer.")
        # With capture_output=False the child's output streams straight to the
        # console, so there is no captured stderr to re-echo here.
        subprocess.run(command, capture_output=False, text=True, check=True)  # noqa: S603
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while testing the crew: {e}", err=True)
click.echo(e.output, err=True)
except Exception as e:
click.echo(f"An unexpected error occurred: {e}", err=True)


@@ -0,0 +1,89 @@
from functools import lru_cache
import subprocess
class Repository:
def __init__(self, path="."):
self.path = path
if not self.is_git_installed():
raise ValueError("Git is not installed or not found in your PATH.")
if not self.is_git_repo():
raise ValueError(f"{self.path} is not a Git repository.")
self.fetch()
@staticmethod
def is_git_installed() -> bool:
"""Check if Git is installed and available in the system."""
try:
subprocess.run(
["git", "--version"], # noqa: S607
capture_output=True,
check=True,
text=True,
)
return True
except (subprocess.CalledProcessError, FileNotFoundError):
return False
def fetch(self) -> None:
"""Fetch latest updates from the remote."""
subprocess.run(["git", "fetch"], cwd=self.path, check=True) # noqa: S607
def status(self) -> str:
"""Get the git status in porcelain format."""
return subprocess.check_output(
["git", "status", "--branch", "--porcelain"], # noqa: S607
cwd=self.path,
encoding="utf-8",
).strip()
@lru_cache(maxsize=None) # noqa: B019
def is_git_repo(self) -> bool:
"""Check if the current directory is a git repository.
Notes:
- TODO: This method is cached to avoid redundant checks, but using lru_cache on methods can lead to memory leaks
"""
try:
subprocess.check_output(
["git", "rev-parse", "--is-inside-work-tree"], # noqa: S607
cwd=self.path,
encoding="utf-8",
)
return True
except subprocess.CalledProcessError:
return False
def has_uncommitted_changes(self) -> bool:
"""Check if the repository has uncommitted changes."""
return len(self.status().splitlines()) > 1
def is_ahead_or_behind(self) -> bool:
"""Check if the repository is ahead or behind the remote."""
for line in self.status().splitlines():
if line.startswith("##") and ("ahead" in line or "behind" in line):
return True
return False
def is_synced(self) -> bool:
"""Return True if the Git repository is fully synced with the remote, False otherwise."""
if self.has_uncommitted_changes() or self.is_ahead_or_behind():
return False
return True
def origin_url(self) -> str | None:
"""Get the Git repository's remote URL."""
try:
result = subprocess.run(
["git", "remote", "get-url", "origin"], # noqa: S607
cwd=self.path,
capture_output=True,
text=True,
check=True,
)
return result.stdout.strip()
except subprocess.CalledProcessError:
return None
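
The tool publishing command later in this diff consumes this helper as `git.Repository()`; a minimal usage sketch, assuming it runs inside a cloned repository with a reachable remote (note that `fetch()` runs on construction):

```python
from crewai.cli import git

repo = git.Repository(path=".")  # raises ValueError outside a git checkout
if repo.is_synced():
    print("Working tree is clean and in sync with origin.")
else:
    print("Uncommitted, unpushed, or unpulled changes detected.")
print("origin:", repo.origin_url())
```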


@@ -0,0 +1,23 @@
import subprocess
import click
# Be mindful about changing this.
# On some environments we don't use this command but run `uv sync` directly,
# so if you expect this to support more things you will need to replicate
# them there. Ask @joaomdmoura if you are unsure.
def install_crew(proxy_options: list[str]) -> None:
"""
Install the crew by running the UV command to lock and install.
"""
try:
command = ["uv", "sync", *proxy_options]
subprocess.run(command, check=True, capture_output=False, text=True) # noqa: S603
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while running the crew: {e}", err=True)
click.echo(e.output, err=True)
except Exception as e:
click.echo(f"An unexpected error occurred: {e}", err=True)


@@ -0,0 +1,23 @@
import subprocess
import click
def kickoff_flow() -> None:
"""
Kickoff the flow by running a command in the UV environment.
"""
command = ["uv", "run", "kickoff"]
try:
result = subprocess.run(command, capture_output=False, text=True, check=True) # noqa: S603
if result.stderr:
click.echo(result.stderr, err=True)
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while running the flow: {e}", err=True)
click.echo(e.output, err=True)
except Exception as e:
click.echo(f"An unexpected error occurred: {e}", err=True)


@@ -0,0 +1 @@


@@ -0,0 +1,107 @@
from requests import HTTPError
from rich.console import Console
from rich.table import Table
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli.config import Settings
console = Console()
class OrganizationCommand(BaseCommand, PlusAPIMixin):
def __init__(self):
BaseCommand.__init__(self)
PlusAPIMixin.__init__(self, telemetry=self._telemetry)
def list(self):
try:
response = self.plus_api_client.get_organizations()
response.raise_for_status()
orgs = response.json()
if not orgs:
console.print(
"You don't belong to any organizations yet.", style="yellow"
)
return
table = Table(title="Your Organizations")
table.add_column("Name", style="cyan")
table.add_column("ID", style="green")
for org in orgs:
table.add_row(org["name"], org["uuid"])
console.print(table)
except HTTPError as e:
if e.response.status_code == 401:
console.print(
"You are not logged in to any organization. Use 'crewai login' to login.",
style="bold red",
)
return
console.print(
f"Failed to retrieve organization list: {e!s}", style="bold red"
)
raise SystemExit(1) from e
except Exception as e:
console.print(
f"Failed to retrieve organization list: {e!s}", style="bold red"
)
raise SystemExit(1) from e
def switch(self, org_id):
try:
response = self.plus_api_client.get_organizations()
response.raise_for_status()
orgs = response.json()
org = next((o for o in orgs if o["uuid"] == org_id), None)
if not org:
console.print(
f"Organization with id '{org_id}' not found.", style="bold red"
)
return
settings = Settings()
settings.org_name = org["name"]
settings.org_uuid = org["uuid"]
settings.dump()
console.print(
f"Successfully switched to {org['name']} ({org['uuid']})",
style="bold green",
)
except HTTPError as e:
if e.response.status_code == 401:
console.print(
"You are not logged in to any organization. Use 'crewai login' to login.",
style="bold red",
)
return
console.print(
f"Failed to retrieve organization list: {e!s}", style="bold red"
)
raise SystemExit(1) from e
except Exception as e:
console.print(f"Failed to switch organization: {e!s}", style="bold red")
raise SystemExit(1) from e
def current(self):
settings = Settings()
if settings.org_uuid:
console.print(
f"Currently logged in to organization {settings.org_name} ({settings.org_uuid})",
style="bold green",
)
else:
console.print(
"You're not currently logged in to any organization.", style="yellow"
)
console.print(
"Use 'crewai org list' to see available organizations.", style="yellow"
)
console.print(
"Use 'crewai org switch <id>' to switch to an organization.",
style="yellow",
)


@@ -0,0 +1,23 @@
import subprocess
import click
def plot_flow() -> None:
"""
Plot the flow by running a command in the UV environment.
"""
command = ["uv", "run", "plot"]
try:
result = subprocess.run(command, capture_output=False, text=True, check=True) # noqa: S603
if result.stderr:
click.echo(result.stderr, err=True)
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while plotting the flow: {e}", err=True)
click.echo(e.output, err=True)
except Exception as e:
click.echo(f"An unexpected error occurred: {e}", err=True)


@@ -0,0 +1,191 @@
from urllib.parse import urljoin
import requests
from crewai.cli.config import Settings
from crewai.cli.constants import DEFAULT_CREWAI_ENTERPRISE_URL
from crewai.cli.version import get_crewai_version
class PlusAPI:
"""
This class exposes methods for working with the CrewAI+ API.
"""
TOOLS_RESOURCE = "/crewai_plus/api/v1/tools"
ORGANIZATIONS_RESOURCE = "/crewai_plus/api/v1/me/organizations"
CREWS_RESOURCE = "/crewai_plus/api/v1/crews"
AGENTS_RESOURCE = "/crewai_plus/api/v1/agents"
TRACING_RESOURCE = "/crewai_plus/api/v1/tracing"
EPHEMERAL_TRACING_RESOURCE = "/crewai_plus/api/v1/tracing/ephemeral"
INTEGRATIONS_RESOURCE = "/crewai_plus/api/v1/integrations"
def __init__(self, api_key: str) -> None:
self.api_key = api_key
self.headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
"User-Agent": f"CrewAI-CLI/{get_crewai_version()}",
"X-Crewai-Version": get_crewai_version(),
}
settings = Settings()
if settings.org_uuid:
self.headers["X-Crewai-Organization-Id"] = settings.org_uuid
self.base_url = (
str(settings.enterprise_base_url)
if settings.enterprise_base_url
else DEFAULT_CREWAI_ENTERPRISE_URL
)
def _make_request(self, method: str, endpoint: str, **kwargs) -> requests.Response:
url = urljoin(self.base_url, endpoint)
session = requests.Session()
session.trust_env = False
return session.request(method, url, headers=self.headers, **kwargs)
def login_to_tool_repository(self):
return self._make_request("POST", f"{self.TOOLS_RESOURCE}/login")
def get_tool(self, handle: str):
return self._make_request("GET", f"{self.TOOLS_RESOURCE}/{handle}")
def get_agent(self, handle: str):
return self._make_request("GET", f"{self.AGENTS_RESOURCE}/{handle}")
def publish_tool(
self,
handle: str,
is_public: bool,
version: str,
description: str | None,
encoded_file: str,
available_exports: list[str] | None = None,
):
params = {
"handle": handle,
"public": is_public,
"version": version,
"file": encoded_file,
"description": description,
"available_exports": available_exports,
}
return self._make_request("POST", f"{self.TOOLS_RESOURCE}", json=params)
def deploy_by_name(self, project_name: str) -> requests.Response:
return self._make_request(
"POST", f"{self.CREWS_RESOURCE}/by-name/{project_name}/deploy"
)
def deploy_by_uuid(self, uuid: str) -> requests.Response:
return self._make_request("POST", f"{self.CREWS_RESOURCE}/{uuid}/deploy")
def crew_status_by_name(self, project_name: str) -> requests.Response:
return self._make_request(
"GET", f"{self.CREWS_RESOURCE}/by-name/{project_name}/status"
)
def crew_status_by_uuid(self, uuid: str) -> requests.Response:
return self._make_request("GET", f"{self.CREWS_RESOURCE}/{uuid}/status")
def crew_by_name(
self, project_name: str, log_type: str = "deployment"
) -> requests.Response:
return self._make_request(
"GET", f"{self.CREWS_RESOURCE}/by-name/{project_name}/logs/{log_type}"
)
def crew_by_uuid(
self, uuid: str, log_type: str = "deployment"
) -> requests.Response:
return self._make_request(
"GET", f"{self.CREWS_RESOURCE}/{uuid}/logs/{log_type}"
)
def delete_crew_by_name(self, project_name: str) -> requests.Response:
return self._make_request(
"DELETE", f"{self.CREWS_RESOURCE}/by-name/{project_name}"
)
def delete_crew_by_uuid(self, uuid: str) -> requests.Response:
return self._make_request("DELETE", f"{self.CREWS_RESOURCE}/{uuid}")
def list_crews(self) -> requests.Response:
return self._make_request("GET", self.CREWS_RESOURCE)
def create_crew(self, payload) -> requests.Response:
return self._make_request("POST", self.CREWS_RESOURCE, json=payload)
def get_organizations(self) -> requests.Response:
return self._make_request("GET", self.ORGANIZATIONS_RESOURCE)
def initialize_trace_batch(self, payload) -> requests.Response:
return self._make_request(
"POST",
f"{self.TRACING_RESOURCE}/batches",
json=payload,
timeout=30,
)
def initialize_ephemeral_trace_batch(self, payload) -> requests.Response:
return self._make_request(
"POST",
f"{self.EPHEMERAL_TRACING_RESOURCE}/batches",
json=payload,
)
def send_trace_events(self, trace_batch_id: str, payload) -> requests.Response:
return self._make_request(
"POST",
f"{self.TRACING_RESOURCE}/batches/{trace_batch_id}/events",
json=payload,
timeout=30,
)
def send_ephemeral_trace_events(
self, trace_batch_id: str, payload
) -> requests.Response:
return self._make_request(
"POST",
f"{self.EPHEMERAL_TRACING_RESOURCE}/batches/{trace_batch_id}/events",
json=payload,
timeout=30,
)
def finalize_trace_batch(self, trace_batch_id: str, payload) -> requests.Response:
return self._make_request(
"PATCH",
f"{self.TRACING_RESOURCE}/batches/{trace_batch_id}/finalize",
json=payload,
timeout=30,
)
def finalize_ephemeral_trace_batch(
self, trace_batch_id: str, payload
) -> requests.Response:
return self._make_request(
"PATCH",
f"{self.EPHEMERAL_TRACING_RESOURCE}/batches/{trace_batch_id}/finalize",
json=payload,
timeout=30,
)
def mark_trace_batch_as_failed(
self, trace_batch_id: str, error_message: str
) -> requests.Response:
return self._make_request(
"PATCH",
f"{self.TRACING_RESOURCE}/batches/{trace_batch_id}",
json={"status": "failed", "failure_reason": error_message},
timeout=30,
)
def get_triggers(self) -> requests.Response:
"""Get all available triggers from integrations."""
return self._make_request("GET", f"{self.INTEGRATIONS_RESOURCE}/apps")
def get_trigger_payload(
self, app_slug: str, trigger_slug: str
) -> requests.Response:
"""Get sample payload for a specific trigger."""
return self._make_request(
"GET", f"{self.INTEGRATIONS_RESOURCE}/{app_slug}/{trigger_slug}/payload"
)
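
A minimal sketch of driving this client directly; the module path and the API key are assumptions, since only the class body appears in this diff:

```python
from crewai.cli.plus_api import PlusAPI  # assumed module path

client = PlusAPI(api_key="example-key")  # hypothetical credential
response = client.list_crews()
response.raise_for_status()
for crew in response.json():
    print(crew)
```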


@@ -0,0 +1,230 @@
from collections import defaultdict
import json
import os
from pathlib import Path
import time
import certifi
import click
import requests
from crewai.cli.constants import JSON_URL, MODELS, PROVIDERS
def select_choice(prompt_message, choices):
"""
Presents a list of choices to the user and prompts them to select one.
Args:
- prompt_message (str): The message to display to the user before presenting the choices.
- choices (list): A list of options to present to the user.
Returns:
- str: The selected choice from the list, or None if the user chooses to quit.
"""
provider_models = get_provider_data()
if not provider_models:
return None
click.secho(prompt_message, fg="cyan")
for idx, choice in enumerate(choices, start=1):
click.secho(f"{idx}. {choice}", fg="cyan")
click.secho("q. Quit", fg="cyan")
while True:
choice = click.prompt(
"Enter the number of your choice or 'q' to quit", type=str
)
if choice.lower() == "q":
return None
try:
selected_index = int(choice) - 1
if 0 <= selected_index < len(choices):
return choices[selected_index]
except ValueError:
pass
click.secho(
"Invalid selection. Please select a number between 1 and 6 or 'q' to quit.",
fg="red",
)
def select_provider(provider_models):
"""
Presents a list of providers to the user and prompts them to select one.
Args:
- provider_models (dict): A dictionary of provider models.
Returns:
- str: The selected provider
- None: If user explicitly quits
"""
predefined_providers = [p.lower() for p in PROVIDERS]
all_providers = sorted(set(predefined_providers + list(provider_models.keys())))
provider = select_choice(
"Select a provider to set up:", [*predefined_providers, "other"]
)
if provider is None: # User typed 'q'
return None
if provider == "other":
provider = select_choice("Select a provider from the full list:", all_providers)
if provider is None: # User typed 'q'
return None
return provider.lower()
def select_model(provider, provider_models):
"""
Presents a list of models for a given provider to the user and prompts them to select one.
Args:
- provider (str): The provider for which to select a model.
- provider_models (dict): A dictionary of provider models.
Returns:
- str: The selected model, or None if the operation is aborted or an invalid selection is made.
"""
predefined_providers = [p.lower() for p in PROVIDERS]
if provider in predefined_providers:
available_models = MODELS.get(provider, [])
else:
available_models = provider_models.get(provider, [])
if not available_models:
click.secho(f"No models available for provider '{provider}'.", fg="red")
return None
return select_choice(
f"Select a model to use for {provider.capitalize()}:", available_models
)
def load_provider_data(cache_file, cache_expiry):
"""
Loads provider data from a cache file if it exists and is not expired. If the cache is expired or corrupted, it fetches the data from the web.
Args:
- cache_file (Path): The path to the cache file.
- cache_expiry (int): The cache expiry time in seconds.
Returns:
- dict or None: The loaded provider data or None if the operation fails.
"""
current_time = time.time()
if (
cache_file.exists()
and (current_time - cache_file.stat().st_mtime) < cache_expiry
):
data = read_cache_file(cache_file)
if data:
return data
click.secho(
"Cache is corrupted. Fetching provider data from the web...", fg="yellow"
)
else:
click.secho(
"Cache expired or not found. Fetching provider data from the web...",
fg="cyan",
)
return fetch_provider_data(cache_file)
def read_cache_file(cache_file):
"""
Reads and returns the JSON content from a cache file. Returns None if the file contains invalid JSON.
Args:
- cache_file (Path): The path to the cache file.
Returns:
- dict or None: The JSON content of the cache file or None if the JSON is invalid.
"""
try:
with open(cache_file, "r") as f:
return json.load(f)
except json.JSONDecodeError:
return None
def fetch_provider_data(cache_file):
"""
Fetches provider data from a specified URL and caches it to a file.
Args:
- cache_file (Path): The path to the cache file.
Returns:
- dict or None: The fetched provider data or None if the operation fails.
"""
ssl_config = os.environ["SSL_CERT_FILE"] = certifi.where()
try:
response = requests.get(JSON_URL, stream=True, timeout=60, verify=ssl_config)
response.raise_for_status()
data = download_data(response)
with open(cache_file, "w") as f:
json.dump(data, f)
return data
except requests.RequestException as e:
click.secho(f"Error fetching provider data: {e}", fg="red")
except json.JSONDecodeError:
click.secho("Error parsing provider data. Invalid JSON format.", fg="red")
return None
def download_data(response):
"""
Downloads data from a given HTTP response and returns the JSON content.
Args:
- response (requests.Response): The HTTP response object.
Returns:
- dict: The JSON content of the response.
"""
total_size = int(response.headers.get("content-length", 0))
block_size = 8192
data_chunks = []
with click.progressbar(
length=total_size, label="Downloading", show_pos=True
) as progress_bar:
for chunk in response.iter_content(block_size):
if chunk:
data_chunks.append(chunk)
progress_bar.update(len(chunk))
data_content = b"".join(data_chunks)
return json.loads(data_content.decode("utf-8"))
def get_provider_data():
"""
Retrieves provider data from a cache file, filters out models based on provider criteria, and returns a dictionary of providers mapped to their models.
Returns:
- dict or None: A dictionary of providers mapped to their models or None if the operation fails.
"""
cache_dir = Path.home() / ".crewai"
cache_dir.mkdir(exist_ok=True)
cache_file = cache_dir / "provider_cache.json"
cache_expiry = 24 * 3600
data = load_provider_data(cache_file, cache_expiry)
if not data:
return None
provider_models = defaultdict(list)
for model_name, properties in data.items():
provider = properties.get("litellm_provider", "").strip().lower()
if "http" in provider or provider == "other":
continue
if provider:
provider_models[provider].append(model_name)
return provider_models
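
Chained together, these helpers form an interactive provider/model picker; a sketch assuming they are importable from a single module (the path is an assumption):

```python
from crewai.cli.provider import (  # assumed module path
    get_provider_data,
    select_model,
    select_provider,
)

provider_models = get_provider_data()  # cached for 24 hours under ~/.crewai
if provider_models:
    provider = select_provider(provider_models)
    if provider:  # a falsy value means the user quit
        model = select_model(provider, provider_models)
        print(f"Selected: {provider} / {model}")
```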


@@ -0,0 +1,25 @@
import subprocess
import click
def replay_task_command(task_id: str) -> None:
"""
Replay the crew execution from a specific task.
Args:
task_id (str): The ID of the task to replay from.
"""
command = ["uv", "run", "replay", task_id]
try:
result = subprocess.run(command, capture_output=False, text=True, check=True) # noqa: S603
if result.stderr:
click.echo(result.stderr, err=True)
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while replaying the task: {e}", err=True)
click.echo(e.output, err=True)
except Exception as e:
click.echo(f"An unexpected error occurred: {e}", err=True)


@@ -0,0 +1,85 @@
import subprocess
import click
from crewai.cli.utils import get_crews
def reset_memories_command(
long,
short,
entity,
knowledge,
agent_knowledge,
kickoff_outputs,
all,
) -> None:
"""
Reset the crew memories.
Args:
long (bool): Whether to reset the long-term memory.
short (bool): Whether to reset the short-term memory.
entity (bool): Whether to reset the entity memory.
knowledge (bool): Whether to reset the knowledge.
agent_knowledge (bool): Whether to reset the agents' knowledge.
kickoff_outputs (bool): Whether to reset the latest kickoff task outputs.
all (bool): Whether to reset all memories.
"""
try:
if not any(
[long, short, entity, kickoff_outputs, knowledge, agent_knowledge, all]
):
click.echo(
"No memory type specified. Please specify at least one type to reset."
)
return
crews = get_crews()
if not crews:
raise ValueError("No crew found.")
for crew in crews:
if all:
crew.reset_memories(command_type="all")
click.echo(
f"[Crew ({crew.name if crew.name else crew.id})] Reset memories command has been completed."
)
continue
if long:
crew.reset_memories(command_type="long")
click.echo(
f"[Crew ({crew.name if crew.name else crew.id})] Long term memory has been reset."
)
if short:
crew.reset_memories(command_type="short")
click.echo(
f"[Crew ({crew.name if crew.name else crew.id})] Short term memory has been reset."
)
if entity:
crew.reset_memories(command_type="entity")
click.echo(
f"[Crew ({crew.name if crew.name else crew.id})] Entity memory has been reset."
)
if kickoff_outputs:
crew.reset_memories(command_type="kickoff_outputs")
click.echo(
f"[Crew ({crew.name if crew.name else crew.id})] Latest Kickoff outputs stored has been reset."
)
if knowledge:
crew.reset_memories(command_type="knowledge")
click.echo(
f"[Crew ({crew.name if crew.name else crew.id})] Knowledge has been reset."
)
if agent_knowledge:
crew.reset_memories(command_type="agent_knowledge")
click.echo(
f"[Crew ({crew.name if crew.name else crew.id})] Agents knowledge has been reset."
)
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while resetting the memories: {e}", err=True)
click.echo(e.output, err=True)
except Exception as e:
click.echo(f"An unexpected error occurred: {e}", err=True)


@@ -0,0 +1,103 @@
from enum import Enum
import os
import subprocess
import click
from packaging import version
from crewai.cli.utils import build_env_with_tool_repository_credentials, read_toml
from crewai.cli.version import get_crewai_version
class CrewType(Enum):
STANDARD = "standard"
FLOW = "flow"
def run_crew() -> None:
"""
Run the crew or flow by running a command in the UV environment.
Starting from version 0.103.0, this command can be used to run both
standard crews and flows. For flows, it detects the type from pyproject.toml
and automatically runs the appropriate command.
"""
crewai_version = get_crewai_version()
min_required_version = "0.71.0"
pyproject_data = read_toml()
# Check for legacy poetry configuration
if pyproject_data.get("tool", {}).get("poetry") and (
version.parse(crewai_version) < version.parse(min_required_version)
):
click.secho(
f"You are running an older version of crewAI ({crewai_version}) that uses poetry pyproject.toml. "
f"Please run `crewai update` to update your pyproject.toml to use uv.",
fg="red",
)
# Determine crew type
is_flow = pyproject_data.get("tool", {}).get("crewai", {}).get("type") == "flow"
crew_type = CrewType.FLOW if is_flow else CrewType.STANDARD
# Display appropriate message
click.echo(f"Running the {'Flow' if is_flow else 'Crew'}")
# Execute the appropriate command
execute_command(crew_type)
def execute_command(crew_type: CrewType) -> None:
"""
Execute the appropriate command based on crew type.
Args:
crew_type: The type of crew to run
"""
command = ["uv", "run", "kickoff" if crew_type == CrewType.FLOW else "run_crew"]
env = os.environ.copy()
try:
pyproject_data = read_toml()
sources = pyproject_data.get("tool", {}).get("uv", {}).get("sources", {})
for source_config in sources.values():
if isinstance(source_config, dict):
index = source_config.get("index")
if index:
index_env = build_env_with_tool_repository_credentials(index)
env.update(index_env)
except Exception: # noqa: S110
pass
try:
subprocess.run(command, capture_output=False, text=True, check=True, env=env) # noqa: S603
except subprocess.CalledProcessError as e:
handle_error(e, crew_type)
except Exception as e:
click.echo(f"An unexpected error occurred: {e}", err=True)
def handle_error(error: subprocess.CalledProcessError, crew_type: CrewType) -> None:
"""
Handle subprocess errors with appropriate messaging.
Args:
error: The subprocess error that occurred
crew_type: The type of crew that was being run
"""
entity_type = "flow" if crew_type == CrewType.FLOW else "crew"
click.echo(f"An error occurred while running the {entity_type}: {error}", err=True)
if error.output:
click.echo(error.output, err=True, nl=True)
pyproject_data = read_toml()
if pyproject_data.get("tool", {}).get("poetry"):
click.secho(
"It's possible that you are using an old version of crewAI that uses poetry, "
"please run `crewai update` to update your pyproject.toml to use uv.",
fg="yellow",
)
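
The crew-versus-flow dispatch above hinges on a single pyproject.toml key; a minimal sketch of the same check, reusing the `read_toml` helper this module already imports:

```python
from crewai.cli.utils import read_toml

pyproject = read_toml()
kind = pyproject.get("tool", {}).get("crewai", {}).get("type", "crew")
print(f"This project runs as a {kind}.")  # "crew" or "flow"
```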


@@ -0,0 +1,71 @@
from typing import Any
from rich.console import Console
from rich.table import Table
from crewai.cli.command import BaseCommand
from crewai.cli.config import HIDDEN_SETTINGS_KEYS, READONLY_SETTINGS_KEYS, Settings
console = Console()
class SettingsCommand(BaseCommand):
"""A class to handle CLI configuration commands."""
def __init__(self, settings_kwargs: dict[str, Any] | None = None):
super().__init__()
settings_kwargs = settings_kwargs or {}
self.settings = Settings(**settings_kwargs)
def list(self) -> None:
"""List all CLI configuration parameters."""
table = Table(title="CrewAI CLI Configuration")
table.add_column("Setting", style="cyan", no_wrap=True)
table.add_column("Value", style="green")
table.add_column("Description", style="yellow")
# Add all settings to the table
for field_name, field_info in Settings.model_fields.items():
if field_name in HIDDEN_SETTINGS_KEYS:
# Do not display hidden settings
continue
current_value = getattr(self.settings, field_name)
description = field_info.description or "No description available"
display_value = (
str(current_value) if current_value is not None else "Not set"
)
table.add_row(field_name, display_value, description)
console.print(table)
def set(self, key: str, value: str) -> None:
"""Set a CLI configuration parameter."""
readonly_settings = READONLY_SETTINGS_KEYS + HIDDEN_SETTINGS_KEYS
if not hasattr(self.settings, key) or key in readonly_settings:
console.print(
f"Error: Unknown or readonly configuration key '{key}'",
style="bold red",
)
console.print("Available keys:", style="yellow")
for field_name in Settings.model_fields.keys():
if field_name not in readonly_settings:
console.print(f" - {field_name}", style="yellow")
raise SystemExit(1)
setattr(self.settings, key, value)
self.settings.dump()
console.print(f"Successfully set '{key}' to '{value}'", style="bold green")
def reset_all_settings(self) -> None:
"""Reset all CLI configuration parameters to default values."""
self.settings.reset()
console.print(
"Successfully reset all configuration parameters to default values. It is recommended to run [bold yellow]'crewai login'[/bold yellow] to re-authenticate.",
style="bold green",
)
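
The enterprise configure command earlier in this diff drives this class programmatically; the same calls work standalone (the key and value below are illustrative):

```python
from crewai.cli.settings.main import SettingsCommand

cmd = SettingsCommand()
cmd.list()  # render the full settings table
cmd.set("enterprise_base_url", "https://amp.example.com")  # hypothetical URL
```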


@@ -0,0 +1,141 @@
from datetime import datetime
import json
import os
from pathlib import Path
import sys
from cryptography.fernet import Fernet
class TokenManager:
def __init__(self, file_path: str = "tokens.enc") -> None:
"""
Initialize the TokenManager class.
:param file_path: The file path to store the encrypted tokens. Default is "tokens.enc".
"""
self.file_path = file_path
self.key = self._get_or_create_key()
self.fernet = Fernet(self.key)
def _get_or_create_key(self) -> bytes:
"""
Get or create the encryption key.
:return: The encryption key.
"""
key_filename = "secret.key"
key = self.read_secure_file(key_filename)
if key is not None:
return key
new_key = Fernet.generate_key()
self.save_secure_file(key_filename, new_key)
return new_key
def save_tokens(self, access_token: str, expires_at: int) -> None:
"""
Save the access token and its expiration time.
:param access_token: The access token to save.
:param expires_at: The UNIX timestamp of the expiration time.
"""
expiration_time = datetime.fromtimestamp(expires_at)
data = {
"access_token": access_token,
"expiration": expiration_time.isoformat(),
}
encrypted_data = self.fernet.encrypt(json.dumps(data).encode())
self.save_secure_file(self.file_path, encrypted_data)
def get_token(self) -> str | None:
"""
Get the access token if it is valid and not expired.
:return: The access token if valid and not expired, otherwise None.
"""
encrypted_data = self.read_secure_file(self.file_path)
if encrypted_data is None:
return None
decrypted_data = self.fernet.decrypt(encrypted_data) # type: ignore
data = json.loads(decrypted_data)
expiration = datetime.fromisoformat(data["expiration"])
if expiration <= datetime.now():
return None
return data["access_token"]
def clear_tokens(self) -> None:
"""
Clear the tokens.
"""
self.delete_secure_file(self.file_path)
def get_secure_storage_path(self) -> Path:
"""
Get the secure storage path based on the operating system.
:return: The secure storage path.
"""
if sys.platform == "win32":
# Windows: Use %LOCALAPPDATA%
base_path = os.environ.get("LOCALAPPDATA")
elif sys.platform == "darwin":
# macOS: Use ~/Library/Application Support
base_path = os.path.expanduser("~/Library/Application Support")
else:
# Linux and other Unix-like: Use ~/.local/share
base_path = os.path.expanduser("~/.local/share")
app_name = "crewai/credentials"
storage_path = Path(base_path) / app_name
storage_path.mkdir(parents=True, exist_ok=True)
return storage_path
def save_secure_file(self, filename: str, content: bytes) -> None:
"""
Save the content to a secure file.
:param filename: The name of the file.
:param content: The content to save.
"""
storage_path = self.get_secure_storage_path()
file_path = storage_path / filename
with open(file_path, "wb") as f:
f.write(content)
# Set appropriate permissions (read/write for owner only)
os.chmod(file_path, 0o600)
def read_secure_file(self, filename: str) -> bytes | None:
"""
Read the content of a secure file.
:param filename: The name of the file.
:return: The content of the file if it exists, otherwise None.
"""
storage_path = self.get_secure_storage_path()
file_path = storage_path / filename
if not file_path.exists():
return None
with open(file_path, "rb") as f:
return f.read()
def delete_secure_file(self, filename: str) -> None:
"""
Delete the secure file.
:param filename: The name of the file.
"""
storage_path = self.get_secure_storage_path()
file_path = storage_path / filename
if file_path.exists():
file_path.unlink(missing_ok=True)
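
A round-trip sketch of the manager above; the import path is an assumption, and the key and token files land under the per-OS secure storage path:

```python
import time
from crewai.cli.authentication.token import TokenManager  # assumed module path

manager = TokenManager()
manager.save_tokens("example-access-token", expires_at=int(time.time()) + 3600)
print(manager.get_token())  # the token for the next hour, then None
manager.clear_tokens()
```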


@@ -0,0 +1,3 @@
.env
__pycache__/
.DS_Store


@@ -0,0 +1,54 @@
# {{crew_name}} Crew
Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.com). This template is designed to help you set up a multi-agent AI system with ease, leveraging the powerful and flexible framework provided by crewAI. Our goal is to enable your agents to collaborate effectively on complex tasks, maximizing their collective intelligence and capabilities.
## Installation
Ensure you have Python >=3.10 <3.14 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install uv:
```bash
pip install uv
```
Next, navigate to your project directory and install the dependencies:
(Optional) Lock the dependencies and install them by using the CLI command:
```bash
crewai install
```
### Customizing
**Add your `OPENAI_API_KEY` into the `.env` file**
- Modify `src/{{folder_name}}/config/agents.yaml` to define your agents
- Modify `src/{{folder_name}}/config/tasks.yaml` to define your tasks
- Modify `src/{{folder_name}}/crew.py` to add your own logic, tools and specific args
- Modify `src/{{folder_name}}/main.py` to add custom inputs for your agents and tasks
## Running the Project
To kickstart your crew of AI agents and begin task execution, run this from the root folder of your project:
```bash
$ crewai run
```
This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration.
This example, unmodified, will create a `report.md` file in the root folder with the output of a research on LLMs.
## Understanding Your Crew
The {{name}} Crew is composed of multiple AI agents, each with unique roles, goals, and tools. These agents collaborate on a series of tasks, defined in `config/tasks.yaml`, leveraging their collective skills to achieve complex objectives. The `config/agents.yaml` file outlines the capabilities and configurations of each agent in your crew.
## Support
For support, questions, or feedback regarding the {{crew_name}} Crew or crewAI:
- Visit our [documentation](https://docs.crewai.com)
- Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai)
- [Join our Discord](https://discord.com/invite/X4JWnZnxPb)
- [Chat with our docs](https://chatg.pt/DWjSBZn)
Let's create wonders together with the power and simplicity of crewAI.


@@ -0,0 +1,19 @@
researcher:
role: >
{topic} Senior Data Researcher
goal: >
Uncover cutting-edge developments in {topic}
backstory: >
You're a seasoned researcher with a knack for uncovering the latest
developments in {topic}. Known for your ability to find the most relevant
information and present it in a clear and concise manner.
reporting_analyst:
role: >
{topic} Reporting Analyst
goal: >
Create detailed reports based on {topic} data analysis and research findings
backstory: >
You're a meticulous analyst with a keen eye for detail. You're known for
your ability to turn complex data into clear and concise reports, making
it easy for others to understand and act on the information you provide.


@@ -0,0 +1,17 @@
research_task:
description: >
Conduct a thorough research about {topic}
Make sure you find any interesting and relevant information given
the current year is {current_year}.
expected_output: >
A list with 10 bullet points of the most relevant information about {topic}
agent: researcher
reporting_task:
description: >
Review the context you got and expand each topic into a full section for a report.
Make sure the report is detailed and contains any and all relevant information.
expected_output: >
A fully fledged report with the main topics, each with a full section of information.
Formatted as markdown without '```'
agent: reporting_analyst


@@ -0,0 +1,64 @@
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List
# If you want to run a snippet of code before or after the crew starts,
# you can use the @before_kickoff and @after_kickoff decorators
# https://docs.crewai.com/concepts/crews#example-crew-class-with-decorators
@CrewBase
class {{crew_name}}():
"""{{crew_name}} crew"""
agents: List[BaseAgent]
tasks: List[Task]
# Learn more about YAML configuration files here:
# Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
# Tasks: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
# If you would like to add tools to your agents, you can learn more about it here:
# https://docs.crewai.com/concepts/agents#agent-tools
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'], # type: ignore[index]
verbose=True
)
@agent
def reporting_analyst(self) -> Agent:
return Agent(
config=self.agents_config['reporting_analyst'], # type: ignore[index]
verbose=True
)
# To learn more about structured task outputs,
# task dependencies, and task callbacks, check out the documentation:
# https://docs.crewai.com/concepts/tasks#overview-of-a-task
@task
def research_task(self) -> Task:
return Task(
config=self.tasks_config['research_task'], # type: ignore[index]
)
@task
def reporting_task(self) -> Task:
return Task(
config=self.tasks_config['reporting_task'], # type: ignore[index]
output_file='report.md'
)
@crew
def crew(self) -> Crew:
"""Creates the {{crew_name}} crew"""
# To learn how to add knowledge sources to your crew, check out the documentation:
# https://docs.crewai.com/concepts/knowledge#what-is-knowledge
return Crew(
agents=self.agents, # Automatically created by the @agent decorator
tasks=self.tasks, # Automatically created by the @task decorator
process=Process.sequential,
verbose=True,
# process=Process.hierarchical, # In case you wanna use that instead https://docs.crewai.com/how-to/Hierarchical/
)


@@ -0,0 +1,4 @@
User name is John Doe.
User is an AI Engineer.
User is interested in AI Agents.
User is based in San Francisco, California.


@@ -0,0 +1,94 @@
#!/usr/bin/env python
import sys
import warnings
from datetime import datetime
from {{folder_name}}.crew import {{crew_name}}
warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd")
# This main file is intended to be a way for you to run your
# crew locally, so refrain from adding unnecessary logic into this file.
# Replace with the inputs you want to test with; they will automatically
# be interpolated into your tasks and agents configuration.
def run():
"""
Run the crew.
"""
inputs = {
'topic': 'AI LLMs',
'current_year': str(datetime.now().year)
}
try:
{{crew_name}}().crew().kickoff(inputs=inputs)
except Exception as e:
raise Exception(f"An error occurred while running the crew: {e}")
def train():
"""
Train the crew for a given number of iterations.
"""
inputs = {
"topic": "AI LLMs",
'current_year': str(datetime.now().year)
}
try:
{{crew_name}}().crew().train(n_iterations=int(sys.argv[1]), filename=sys.argv[2], inputs=inputs)
except Exception as e:
raise Exception(f"An error occurred while training the crew: {e}")
def replay():
"""
Replay the crew execution from a specific task.
"""
try:
{{crew_name}}().crew().replay(task_id=sys.argv[1])
except Exception as e:
raise Exception(f"An error occurred while replaying the crew: {e}")
def test():
"""
Test the crew execution and return the results.
"""
inputs = {
"topic": "AI LLMs",
"current_year": str(datetime.now().year)
}
try:
{{crew_name}}().crew().test(n_iterations=int(sys.argv[1]), eval_llm=sys.argv[2], inputs=inputs)
except Exception as e:
raise Exception(f"An error occurred while testing the crew: {e}")
def run_with_trigger():
"""
Run the crew with trigger payload.
"""
import json
if len(sys.argv) < 2:
raise Exception("No trigger payload provided. Please provide JSON payload as argument.")
try:
trigger_payload = json.loads(sys.argv[1])
except json.JSONDecodeError:
raise Exception("Invalid JSON payload provided as argument")
inputs = {
"crewai_trigger_payload": trigger_payload,
"topic": "",
"current_year": ""
}
try:
result = {{crew_name}}().crew().kickoff(inputs=inputs)
return result
except Exception as e:
raise Exception(f"An error occurred while running the crew with trigger: {e}")


@@ -0,0 +1,24 @@
[project]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]>=0.203.1,<1.0.0"
]
[project.scripts]
{{folder_name}} = "{{folder_name}}.main:run"
run_crew = "{{folder_name}}.main:run"
train = "{{folder_name}}.main:train"
replay = "{{folder_name}}.main:replay"
test = "{{folder_name}}.main:test"
run_with_trigger = "{{folder_name}}.main:run_with_trigger"
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.crewai]
type = "crew"


@@ -0,0 +1,19 @@
from crewai.tools import BaseTool
from typing import Type
from pydantic import BaseModel, Field
class MyCustomToolInput(BaseModel):
"""Input schema for MyCustomTool."""
argument: str = Field(..., description="Description of the argument.")
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = (
"Clear description for what this tool is useful for, your agent will need this information to use it."
)
args_schema: Type[BaseModel] = MyCustomToolInput
def _run(self, argument: str) -> str:
# Implementation goes here
return "this is an example of a tool output, ignore it and move along."


@@ -0,0 +1,4 @@
.env
__pycache__/
lib/
.DS_Store


@@ -0,0 +1,56 @@
# {{crew_name}} Crew
Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.com). This template is designed to help you set up a multi-agent AI system with ease, leveraging the powerful and flexible framework provided by crewAI. Our goal is to enable your agents to collaborate effectively on complex tasks, maximizing their collective intelligence and capabilities.
## Installation
Ensure you have Python >=3.10 <3.14 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install uv:
```bash
pip install uv
```
Next, navigate to your project directory and install the dependencies:
(Optional) Lock the dependencies and install them by using the CLI command:
```bash
crewai install
```
### Customizing
**Add your `OPENAI_API_KEY` into the `.env` file**
- Modify `src/{{folder_name}}/config/agents.yaml` to define your agents
- Modify `src/{{folder_name}}/config/tasks.yaml` to define your tasks
- Modify `src/{{folder_name}}/crew.py` to add your own logic, tools and specific args
- Modify `src/{{folder_name}}/main.py` to add custom inputs for your agents and tasks
## Running the Project
To kickstart your flow and begin execution, run this from the root folder of your project:
```bash
crewai run
```
This command initializes the {{name}} Flow as defined in your configuration.
This example, unmodified, will create a `report.md` file in the root folder with the output of a research on LLMs.
## Understanding Your Crew
The {{name}} Crew is composed of multiple AI agents, each with unique roles, goals, and tools. These agents collaborate on a series of tasks, defined in `config/tasks.yaml`, leveraging their collective skills to achieve complex objectives. The `config/agents.yaml` file outlines the capabilities and configurations of each agent in your crew.
## Support
For support, questions, or feedback regarding the {{crew_name}} Crew or crewAI:
- Visit our [documentation](https://docs.crewai.com)
- Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai)
- [Join our Discord](https://discord.com/invite/X4JWnZnxPb)
- [Chat with our docs](https://chatg.pt/DWjSBZn)
Let's create wonders together with the power and simplicity of crewAI.


@@ -0,0 +1 @@
"""Poem crew template."""


@@ -0,0 +1,11 @@
poem_writer:
role: >
CrewAI Poem Writer
goal: >
Generate a funny, light-hearted poem about how CrewAI
is awesome with a sentence count of {sentence_count}
backstory: >
You're a creative poet with a talent for capturing the essence of any topic
in a beautiful and engaging way. Known for your ability to craft poems that
resonate with readers, you bring a unique perspective and artistic flair to
every piece you write.


@@ -0,0 +1,7 @@
write_poem:
description: >
Write a poem about how CrewAI is awesome.
Ensure the poem is engaging and adheres to the specified sentence count of {sentence_count}.
expected_output: >
A beautifully crafted poem about CrewAI, with exactly {sentence_count} sentences.
agent: poem_writer


@@ -0,0 +1,53 @@
from typing import List
from crewai import Agent, Crew, Process, Task
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.project import CrewBase, agent, crew, task
# If you want to run a snippet of code before or after the crew starts,
# you can use the @before_kickoff and @after_kickoff decorators
# https://docs.crewai.com/concepts/crews#example-crew-class-with-decorators
@CrewBase
class PoemCrew:
"""Poem Crew"""
agents: List[BaseAgent]
tasks: List[Task]
# Learn more about YAML configuration files here:
# Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
# Tasks: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
agents_config = "config/agents.yaml"
tasks_config = "config/tasks.yaml"
# If you would like to add tools to your crew, you can learn more about it here:
# https://docs.crewai.com/concepts/agents#agent-tools
@agent
def poem_writer(self) -> Agent:
return Agent(
config=self.agents_config["poem_writer"], # type: ignore[index]
)
# To learn more about structured task outputs,
# task dependencies, and task callbacks, check out the documentation:
# https://docs.crewai.com/concepts/tasks#overview-of-a-task
@task
def write_poem(self) -> Task:
return Task(
config=self.tasks_config["write_poem"], # type: ignore[index]
)
@crew
def crew(self) -> Crew:
"""Creates the Research Crew"""
# To learn how to add knowledge sources to your crew, check out the documentation:
# https://docs.crewai.com/concepts/knowledge#what-is-knowledge
return Crew(
agents=self.agents, # Automatically created by the @agent decorator
tasks=self.tasks, # Automatically created by the @task decorator
process=Process.sequential,
verbose=True,
)


@@ -0,0 +1,87 @@
#!/usr/bin/env python
from random import randint
from pydantic import BaseModel
from crewai.flow import Flow, listen, start
from {{folder_name}}.crews.poem_crew.poem_crew import PoemCrew
class PoemState(BaseModel):
sentence_count: int = 1
poem: str = ""
class PoemFlow(Flow[PoemState]):
@start()
def generate_sentence_count(self, crewai_trigger_payload: dict | None = None):
print("Generating sentence count")
# Use trigger payload if available
if crewai_trigger_payload:
# Example: use trigger data to influence sentence count
self.state.sentence_count = crewai_trigger_payload.get('sentence_count', randint(1, 5))
print(f"Using trigger payload: {crewai_trigger_payload}")
else:
self.state.sentence_count = randint(1, 5)
@listen(generate_sentence_count)
def generate_poem(self):
print("Generating poem")
result = (
PoemCrew()
.crew()
.kickoff(inputs={"sentence_count": self.state.sentence_count})
)
print("Poem generated", result.raw)
self.state.poem = result.raw
@listen(generate_poem)
def save_poem(self):
print("Saving poem")
with open("poem.txt", "w") as f:
f.write(self.state.poem)
def kickoff():
poem_flow = PoemFlow()
poem_flow.kickoff()
def plot():
poem_flow = PoemFlow()
poem_flow.plot()
def run_with_trigger():
"""
Run the flow with trigger payload.
"""
import json
import sys
# Get trigger payload from command line argument
if len(sys.argv) < 2:
raise Exception("No trigger payload provided. Please provide JSON payload as argument.")
try:
trigger_payload = json.loads(sys.argv[1])
except json.JSONDecodeError:
raise Exception("Invalid JSON payload provided as argument")
# Create flow and kickoff with trigger payload
# The @start() methods will automatically receive the crewai_trigger_payload parameter
poem_flow = PoemFlow()
try:
result = poem_flow.kickoff({"crewai_trigger_payload": trigger_payload})
return result
except Exception as e:
raise Exception(f"An error occurred while running the flow with trigger: {e}")
if __name__ == "__main__":
kickoff()


@@ -0,0 +1,22 @@
[project]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]>=0.203.1,<1.0.0",
]
[project.scripts]
kickoff = "{{folder_name}}.main:kickoff"
run_crew = "{{folder_name}}.main:kickoff"
plot = "{{folder_name}}.main:plot"
run_with_trigger = "{{folder_name}}.main:run_with_trigger"
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.crewai]
type = "flow"


@@ -0,0 +1,21 @@
from typing import Type
from pydantic import BaseModel, Field
from crewai.tools import BaseTool
class MyCustomToolInput(BaseModel):
"""Input schema for MyCustomTool."""
argument: str = Field(..., description="Description of the argument.")
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = "Clear description for what this tool is useful for, your agent will need this information to use it."
args_schema: Type[BaseModel] = MyCustomToolInput
def _run(self, argument: str) -> str:
# Implementation goes here
return "this is an example of a tool output, ignore it and move along."


@@ -0,0 +1,10 @@
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info
# Virtual environments
.venv


@@ -0,0 +1,48 @@
# {{folder_name}}
{{folder_name}} is a CrewAI Tool. This template is designed to help you create
custom tools to power up your crews.
## Installing
Ensure you have Python >=3.10 <3.14 installed on your system. This project
uses [UV](https://docs.astral.sh/uv/) for dependency management and package
handling, offering a seamless setup and execution experience.
First, if you haven't already, install `uv`:
```bash
pip install uv
```
Next, navigate to your project directory and install the dependencies with:
```bash
crewai install
```
## Publishing
Collaborate by sharing tools within your organization, or publish them publicly
to contribute to the community.
```bash
crewai tool publish {{tool_name}}
```
Others may install your tool in their crews by running:
```bash
crewai tool install {{tool_name}}
```
## Support
For support, questions, or feedback regarding the {{crew_name}} tool or CrewAI:
- Visit our [documentation](https://docs.crewai.com)
- Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai)
- [Join our Discord](https://discord.com/invite/X4JWnZnxPb)
- [Chat with our docs](https://chatg.pt/DWjSBZn)
Let's create wonders together with the power and simplicity of crewAI.


@@ -0,0 +1,12 @@
[project]
name = "{{folder_name}}"
version = "0.1.0"
description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]>=0.203.1"
]
[tool.crewai]
type = "tool"


@@ -0,0 +1,3 @@
from .tool import {{class_name}}
__all__ = ["{{class_name}}"]


@@ -0,0 +1,10 @@
from crewai.tools import BaseTool
class {{class_name}}(BaseTool):
name: str = "Name of my tool"
description: str = "What this tool does. It's vital for effective utilization."
def _run(self, argument: str) -> str:
# Your tool's logic here
return "Tool's result"


@@ -0,0 +1,236 @@
import base64
import os
from pathlib import Path
import subprocess
import tempfile
from typing import Any
import click
from rich.console import Console
from crewai.cli import git
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli.config import Settings
from crewai.cli.utils import (
build_env_with_tool_repository_credentials,
extract_available_exports,
get_project_description,
get_project_name,
get_project_version,
tree_copy,
tree_find_and_replace,
)
console = Console()
class ToolCommand(BaseCommand, PlusAPIMixin):
"""
A class to handle tool repository related operations for CrewAI projects.
"""
def __init__(self):
BaseCommand.__init__(self)
PlusAPIMixin.__init__(self, telemetry=self._telemetry)
def create(self, handle: str):
self._ensure_not_in_project()
folder_name = handle.replace(" ", "_").replace("-", "_").lower()
class_name = handle.replace("_", " ").replace("-", " ").title().replace(" ", "")
project_root = Path(folder_name)
if project_root.exists():
click.secho(f"Folder {folder_name} already exists.", fg="red")
raise SystemExit
os.makedirs(project_root)
click.secho(f"Creating custom tool {folder_name}...", fg="green", bold=True)
template_dir = Path(__file__).parent.parent / "templates" / "tool"
tree_copy(template_dir, project_root)
tree_find_and_replace(project_root, "{{folder_name}}", folder_name)
tree_find_and_replace(project_root, "{{class_name}}", class_name)
old_directory = os.getcwd()
os.chdir(project_root)
try:
self.login()
subprocess.run(["git", "init"], check=True) # noqa: S607
console.print(
f"[green]Created custom tool [bold]{folder_name}[/bold]. Run [bold]cd {project_root}[/bold] to start working.[/green]"
)
finally:
os.chdir(old_directory)
def publish(self, is_public: bool, force: bool = False):
if not git.Repository().is_synced() and not force:
console.print(
"[bold red]Failed to publish tool.[/bold red]\n"
"Local changes need to be resolved before publishing. Please do the following:\n"
"* [bold]Commit[/bold] your changes.\n"
"* [bold]Push[/bold] to sync with the remote.\n"
"* [bold]Pull[/bold] the latest changes from the remote.\n"
"\nOnce your repository is up-to-date, retry publishing the tool."
)
raise SystemExit()
project_name = get_project_name(require=True)
assert isinstance(project_name, str) # noqa: S101
project_version = get_project_version(require=True)
assert isinstance(project_version, str) # noqa: S101
project_description = get_project_description(require=False)
encoded_tarball = None
console.print("[bold blue]Discovering tools from your project...[/bold blue]")
available_exports = extract_available_exports()
if available_exports:
console.print(
f"[green]Found these tools to publish: {', '.join([e['name'] for e in available_exports])}[/green]"
)
self._print_current_organization()
with tempfile.TemporaryDirectory() as temp_build_dir:
subprocess.run( # noqa: S603
["uv", "build", "--sdist", "--out-dir", temp_build_dir], # noqa: S607
check=True,
capture_output=False,
)
tarball_filename = next(
(f for f in os.listdir(temp_build_dir) if f.endswith(".tar.gz")), None
)
if not tarball_filename:
console.print(
"Project build failed. Please ensure that the command `uv build --sdist` completes successfully.",
style="bold red",
)
raise SystemExit
tarball_path = os.path.join(temp_build_dir, tarball_filename)
with open(tarball_path, "rb") as file:
tarball_contents = file.read()
encoded_tarball = base64.b64encode(tarball_contents).decode("utf-8")
console.print("[bold blue]Publishing tool to repository...[/bold blue]")
publish_response = self.plus_api_client.publish_tool(
handle=project_name,
is_public=is_public,
version=project_version,
description=project_description,
encoded_file=f"data:application/x-gzip;base64,{encoded_tarball}",
available_exports=available_exports,
)
self._validate_response(publish_response)
published_handle = publish_response.json()["handle"]
console.print(
f"Successfully published `{published_handle}` ({project_version}).\n\n"
+ "⚠️ Security checks are running in the background. Your tool will be available once these are complete.\n"
+ f"You can monitor the status or access your tool here:\nhttps://app.crewai.com/crewai_plus/tools/{published_handle}",
style="bold green",
)
def install(self, handle: str):
self._print_current_organization()
get_response = self.plus_api_client.get_tool(handle)
if get_response.status_code == 404:
console.print(
"No tool found with this name. Please ensure the tool was published and you have access to it.",
style="bold red",
)
raise SystemExit
if get_response.status_code != 200:
console.print(
"Failed to get tool details. Please try again later.", style="bold red"
)
raise SystemExit
self._add_package(get_response.json())
console.print(f"Successfully installed {handle}", style="bold green")
def login(self) -> None:
login_response = self.plus_api_client.login_to_tool_repository()
if login_response.status_code != 200:
console.print(
"Authentication failed. Verify access to the tool repository, or try `crewai login`. ",
style="bold red",
)
raise SystemExit
login_response_json = login_response.json()
settings = Settings()
settings.tool_repository_username = login_response_json["credential"][
"username"
]
settings.tool_repository_password = login_response_json["credential"][
"password"
]
settings.org_uuid = login_response_json["current_organization"]["uuid"]
settings.org_name = login_response_json["current_organization"]["name"]
settings.dump()
def _add_package(self, tool_details: dict[str, Any]):
is_from_pypi = tool_details.get("source", None) == "pypi"
tool_handle = tool_details["handle"]
repository_handle = tool_details["repository"]["handle"]
repository_url = tool_details["repository"]["url"]
index = f"{repository_handle}={repository_url}"
add_package_command = [
"uv",
"add",
]
if is_from_pypi:
add_package_command.append(tool_handle)
else:
add_package_command.extend(["--index", index, tool_handle])
add_package_result = subprocess.run( # noqa: S603
add_package_command,
capture_output=False,
env=build_env_with_tool_repository_credentials(repository_handle),
text=True,
check=True,
)
if add_package_result.stderr:
click.echo(add_package_result.stderr, err=True)
raise SystemExit
def _ensure_not_in_project(self):
if os.path.isfile("./pyproject.toml"):
console.print(
"[bold red]Oops! It looks like you're inside a project.[/bold red]"
)
console.print(
"You can't create a new tool while inside an existing project."
)
console.print(
"[bold yellow]Tip:[/bold yellow] Navigate to a different directory and try again."
)
raise SystemExit
def _print_current_organization(self) -> None:
settings = Settings()
if settings.org_uuid:
console.print(
f"Current organization: {settings.org_name} ({settings.org_uuid})",
style="bold blue",
)
else:
console.print(
"No organization currently set. We recommend setting one before using: `crewai org switch <org_id>` command.",
style="yellow",
)
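
As a worked example of the naming rules in `create()` above, a single handle is normalized twice, once for the package folder and once for the tool class (the handle is illustrative):

```python
handle = "My Demo-Tool"  # hypothetical handle
folder_name = handle.replace(" ", "_").replace("-", "_").lower()
class_name = handle.replace("_", " ").replace("-", " ").title().replace(" ", "")
print(folder_name)  # my_demo_tool
print(class_name)   # MyDemoTool
```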

Some files were not shown because too many files have changed in this diff Show More