adjust aop to amp docs lang (#4179)

* adjust aop to amp docs lang

* whoop no print
Lorenze Jay committed 2026-01-05 15:30:21 -08:00 (committed via GitHub)
parent f8deb0fd18 · commit 25c0c030ce
203 changed files with 5176 additions and 2715 deletions


@@ -8,6 +8,7 @@ mode: "wide"
## Overview of an Agent
In the CrewAI framework, an `Agent` is an autonomous unit that can:
- Perform specific tasks
- Make decisions based on its role and goal
- Use tools to accomplish objectives
@@ -16,15 +17,19 @@ In the CrewAI framework, an `Agent` is an autonomous unit that can:
- Delegate tasks when allowed
<Tip>
Think of an agent as a specialized team member with specific skills, expertise, and responsibilities. For example, a `Researcher` agent might excel at gathering and analyzing information, while a `Writer` agent might be better at creating content.
</Tip>
<Note type="info" title="Enterprise Enhancement: Visual Agent Builder">
CrewAI AMP includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time.
![Visual Agent Builder Screenshot](/images/enterprise/crew-studio-interface.png)
The Visual Agent Builder enables:
- Intuitive agent configuration with form-based interfaces
- Real-time testing and validation
- Template library with pre-configured agent types
@@ -33,36 +38,36 @@ The Visual Agent Builder enables:
## Agent Attributes
| Attribute | Parameter | Type | Description |
| :-------------------------------------- | :----------------------- | :------------------------------------ | :------------------------------------------------------------------------------------------------------- |
| **Role** | `role` | `str` | Defines the agent's function and expertise within the crew. |
| **Goal** | `goal` | `str` | The individual objective that guides the agent's decision-making. |
| **Backstory** | `backstory` | `str` | Provides context and personality to the agent, enriching interactions. |
| **LLM** _(optional)_ | `llm` | `Union[str, LLM, Any]` | Language model that powers the agent. Defaults to the model specified in `OPENAI_MODEL_NAME` or "gpt-4". |
| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | Capabilities or functions available to the agent. Defaults to an empty list. |
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | `Optional[Any]` | Language model for tool calling, overrides crew's LLM if specified. |
| **Max Iterations** _(optional)_ | `max_iter` | `int` | Maximum iterations before the agent must provide its best answer. Default is 20. |
| **Max RPM** _(optional)_ | `max_rpm` | `Optional[int]` | Maximum requests per minute to avoid rate limits. |
| **Max Execution Time** _(optional)_ | `max_execution_time` | `Optional[int]` | Maximum time (in seconds) for task execution. |
| **Verbose** _(optional)_ | `verbose` | `bool` | Enable detailed execution logs for debugging. Default is False. |
| **Allow Delegation** _(optional)_ | `allow_delegation` | `bool` | Allow the agent to delegate tasks to other agents. Default is False. |
| **Step Callback** _(optional)_ | `step_callback` | `Optional[Any]` | Function called after each agent step, overrides crew callback. |
| **Cache** _(optional)_ | `cache` | `bool` | Enable caching for tool usage. Default is True. |
| **System Template** _(optional)_ | `system_template` | `Optional[str]` | Custom system prompt template for the agent. |
| **Prompt Template** _(optional)_ | `prompt_template` | `Optional[str]` | Custom prompt template for the agent. |
| **Response Template** _(optional)_ | `response_template` | `Optional[str]` | Custom response template for the agent. |
| **Allow Code Execution** _(optional)_ | `allow_code_execution` | `Optional[bool]` | Enable code execution for the agent. Default is False. |
| **Max Retry Limit** _(optional)_ | `max_retry_limit` | `int` | Maximum number of retries when an error occurs. Default is 2. |
| **Respect Context Window** _(optional)_ | `respect_context_window` | `bool` | Keep messages under context window size by summarizing. Default is True. |
| **Code Execution Mode** _(optional)_ | `code_execution_mode` | `Literal["safe", "unsafe"]` | Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct). Default is 'safe'. |
| **Multimodal** _(optional)_ | `multimodal` | `bool` | Whether the agent supports multimodal capabilities. Default is False. |
| **Inject Date** _(optional)_ | `inject_date` | `bool` | Whether to automatically inject the current date into tasks. Default is False. |
| **Date Format** _(optional)_ | `date_format` | `str` | Format string for date when inject_date is enabled. Default is "%Y-%m-%d" (ISO format). |
| **Reasoning** _(optional)_ | `reasoning` | `bool` | Whether the agent should reflect and create a plan before executing a task. Default is False. |
| **Max Reasoning Attempts** _(optional)_ | `max_reasoning_attempts` | `Optional[int]` | Maximum number of reasoning attempts before executing the task. If None, will try until ready. |
| **Embedder** _(optional)_ | `embedder` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | `Optional[List[BaseKnowledgeSource]]` | Knowledge sources available to the agent. |
| **Use System Prompt** _(optional)_ | `use_system_prompt` | `Optional[bool]` | Whether to use system prompt (for o1 model support). Default is True. |
## Creating Agents
@@ -137,7 +142,8 @@ class LatestAiDevelopmentCrew():
```
<Note>
The names you use in your YAML files (`agents.yaml`) should match the method names in your Python code.
</Note>
### Direct Code Definition
@@ -184,6 +190,7 @@ agent = Agent(
Let's break down some key parameter combinations for common use cases:
#### Basic Research Agent
```python Code
research_agent = Agent(
role="Research Analyst",
@@ -195,6 +202,7 @@ research_agent = Agent(
```
#### Code Development Agent
```python Code
dev_agent = Agent(
role="Senior Python Developer",
@@ -208,6 +216,7 @@ dev_agent = Agent(
```
#### Long-Running Analysis Agent
```python Code
analysis_agent = Agent(
role="Data Analyst",
@@ -221,6 +230,7 @@ analysis_agent = Agent(
```
#### Custom Template Agent
```python Code
custom_agent = Agent(
role="Customer Service Representative",
@@ -236,6 +246,7 @@ custom_agent = Agent(
```
#### Date-Aware Agent with Reasoning
```python Code
strategic_agent = Agent(
role="Market Analyst",
@@ -250,6 +261,7 @@ strategic_agent = Agent(
```
#### Reasoning Agent
```python Code
reasoning_agent = Agent(
role="Strategic Planner",
@@ -263,6 +275,7 @@ reasoning_agent = Agent(
```
#### Multimodal Agent
```python Code
multimodal_agent = Agent(
role="Visual Content Analyst",
@@ -276,52 +289,64 @@ multimodal_agent = Agent(
### Parameter Details
#### Critical Parameters
- `role`, `goal`, and `backstory` are required and shape the agent's behavior
- `llm` determines the language model used (default: OpenAI's GPT-4)
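As a minimal sketch of these required fields (the role, goal, and backstory strings are illustrative):

```python Code
from crewai import Agent

# Minimal agent: only role, goal, and backstory are required.
# llm is omitted, so it falls back to OPENAI_MODEL_NAME or "gpt-4".
agent = Agent(
    role="Research Analyst",
    goal="Summarize emerging AI trends for a weekly briefing",
    backstory="An experienced analyst who distills complex topics clearly.",
)
```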
#### Memory and Context
- `memory`: Enable to maintain conversation history
- `respect_context_window`: Prevents token limit issues
- `knowledge_sources`: Add domain-specific knowledge bases
#### Execution Control
- `max_iter`: Maximum attempts before giving best answer
- `max_execution_time`: Timeout in seconds
- `max_rpm`: Rate limiting for API calls
- `max_retry_limit`: Retries on error
#### Code Execution
- `allow_code_execution`: Must be True to run code
- `code_execution_mode`:
- `"safe"`: Uses Docker (recommended for production)
- `"unsafe"`: Direct execution (use only in trusted environments)
<Note>
This runs a default Docker image. If you want to configure the Docker image, check out the Code Interpreter Tool in the tools section and pass that tool to the agent via its `tools` parameter.
</Note>
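A hedged sketch combining these settings (parameter names are from the attributes table above; the role and limits are illustrative):

```python Code
from crewai import Agent

dev_agent = Agent(
    role="Senior Python Developer",
    goal="Write and verify small data-processing scripts",
    backstory="A careful engineer who tests everything before shipping.",
    allow_code_execution=True,   # must be True to run code
    code_execution_mode="safe",  # "safe" runs generated code inside Docker
    max_execution_time=300,      # guard against runaway loops (seconds)
    max_retry_limit=2,           # retries if execution errors out
)
```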
#### Advanced Features
- `multimodal`: Enable multimodal capabilities for processing text and visual content
- `reasoning`: Enable agent to reflect and create plans before executing tasks
- `inject_date`: Automatically inject current date into task descriptions
#### Templates
- `system_template`: Defines agent's core behavior
- `prompt_template`: Structures input format
- `response_template`: Formats agent responses
<Note>
When using custom templates, ensure that both `system_template` and `prompt_template` are defined. The `response_template` is optional but recommended for consistent output formatting.
</Note>
<Note>
When using custom templates, you can use variables like `{role}`, `{goal}`, and `{backstory}` in your templates. These will be automatically populated during execution.
</Note>
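A minimal template sketch: `{role}`, `{goal}`, and `{backstory}` are the documented template variables, while the `{{ .Prompt }}` and `{{ .Response }}` tokens follow the style of CrewAI's template examples and may vary by model format:

```python Code
from crewai import Agent

templated_agent = Agent(
    role="Technical Writer",
    goal="Explain features in plain language",
    backstory="A writer who favors concrete examples.",
    # {role}, {goal}, and {backstory} are populated automatically at execution
    system_template="You are {role}. {backstory} Your goal is: {goal}.",
    prompt_template="{{ .Prompt }}",
    response_template="{{ .Response }}",  # optional, keeps output consistent
)
```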
## Agent Tools
Agents can be equipped with various tools to enhance their capabilities. CrewAI supports tools from:
- [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools)
- [LangChain Tools](https://python.langchain.com/docs/integrations/tools)
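As an illustrative sketch, here is an agent equipped with tools from `crewai_tools` (assumes the `crewai-tools` package is installed and a `SERPER_API_KEY` is set for the search tool):

```python Code
from crewai import Agent
from crewai_tools import ScrapeWebsiteTool, SerperDevTool

researcher = Agent(
    role="Research Analyst",
    goal="Find and summarize recent developments in AI agents",
    backstory="A meticulous researcher who always cites sources.",
    tools=[SerperDevTool(), ScrapeWebsiteTool()],  # web search + page scraping
)
```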
@@ -360,7 +385,8 @@ analyst = Agent(
```
<Note>
When `memory` is enabled, the agent will maintain context across multiple interactions, improving its ability to handle complex, multi-step tasks.
</Note>
## Context Window Management
@@ -390,6 +416,7 @@ smart_agent = Agent(
```
**What happens when context limits are exceeded:**
- ⚠️ **Warning message**: `"Context length exceeded. Summarizing content to fit the model context window."`
- 🔄 **Automatic summarization**: CrewAI intelligently summarizes the conversation history
- ✅ **Continued execution**: Task execution continues seamlessly with the summarized context
@@ -411,6 +438,7 @@ strict_agent = Agent(
```
**What happens when context limits are exceeded:**
- ❌ **Error message**: `"Context length exceeded. Consider using smaller text or RAG tools from crewai_tools."`
- 🛑 **Execution stops**: Task execution halts immediately
- 🔧 **Manual intervention required**: You need to modify your approach
@@ -418,6 +446,7 @@ strict_agent = Agent(
### Choosing the Right Setting
#### Use `respect_context_window=True` (Default) when:
- **Processing large documents** that might exceed context limits
- **Long-running conversations** where some summarization is acceptable
- **Research tasks** where general context is more important than exact details
@@ -436,6 +465,7 @@ document_processor = Agent(
```
#### Use `respect_context_window=False` when:
- **Precision is critical** and information loss is unacceptable
- **Legal or medical tasks** requiring complete context
- **Code review** where missing details could introduce bugs
@@ -458,6 +488,7 @@ precision_agent = Agent(
When dealing with very large datasets, consider these strategies:
#### 1. Use RAG Tools
```python Code
from crewai_tools import RagTool
@@ -475,6 +506,7 @@ rag_agent = Agent(
```
#### 2. Use Knowledge Sources
```python Code
# Use knowledge sources instead of large prompts
knowledge_agent = Agent(
@@ -498,6 +530,7 @@ knowledge_agent = Agent(
### Troubleshooting Context Issues
**If you're getting context limit errors:**
```python Code
# Quick fix: Enable automatic handling
agent.respect_context_window = True
@@ -511,6 +544,7 @@ agent.tools = [RagTool()]
```
**If automatic summarization loses important information:**
```python Code
# Disable auto-summarization and use RAG instead
agent = Agent(
@@ -524,7 +558,10 @@ agent = Agent(
```
<Note>
The context window management feature works automatically in the background. You don't need to call any special functions - just set `respect_context_window` to your preferred behavior and CrewAI handles the rest!
</Note>
## Direct Agent Interaction with `kickoff()`
@@ -556,10 +593,10 @@ print(result.raw)
### Parameters and Return Values
| Parameter | Type | Description |
| :---------------- | :--------------------------------- | :------------------------------------------------------------------------ |
| `messages` | `Union[str, List[Dict[str, str]]]` | Either a string query or a list of message dictionaries with role/content |
| `response_format` | `Optional[Type[Any]]` | Optional Pydantic model for structured output |
The method returns a `LiteAgentOutput` object with the following properties:
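A hedged sketch of structured output via `response_format` (assuming, per the parameters table above, that the returned `LiteAgentOutput` exposes the parsed model on a `pydantic` field):

```python Code
from pydantic import BaseModel
from crewai import Agent

class TrendSummary(BaseModel):
    headline: str
    key_points: list[str]

analyst = Agent(
    role="Research Analyst",
    goal="Summarize AI trends",
    backstory="An analyst who writes concise briefings.",
)

result = analyst.kickoff(
    "Summarize this week's AI developments in three bullet points.",
    response_format=TrendSummary,  # output parsed into the Pydantic model
)
print(result.pydantic.headline)  # assumption: parsed model lives on .pydantic
```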
@@ -621,28 +658,34 @@ asyncio.run(main())
```
<Note>
The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler execution flow while preserving all of the agent's configuration (role, goal, backstory, tools, etc.).
</Note>
## Important Considerations and Best Practices
### Security and Code Execution
- When using `allow_code_execution`, be cautious with user input and always validate it
- Use `code_execution_mode: "safe"` (Docker) in production environments
- Consider setting appropriate `max_execution_time` limits to prevent infinite loops
### Performance Optimization
- Use `respect_context_window: true` to prevent token limit issues
- Set appropriate `max_rpm` to avoid rate limiting
- Enable `cache: true` to improve performance for repetitive tasks
- Adjust `max_iter` and `max_retry_limit` based on task complexity
### Memory and Context Management
- Leverage `knowledge_sources` for domain-specific information
- Configure `embedder` when using custom embedding models
- Use custom templates (`system_template`, `prompt_template`, `response_template`) for fine-grained control over agent behavior
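A hedged sketch of an embedder configuration (the `provider`/`config` dictionary shape follows CrewAI's embedder conventions; the model name is illustrative):

```python Code
from crewai import Agent

knowledge_agent = Agent(
    role="Domain Expert",
    goal="Answer questions grounded in internal documentation",
    backstory="An expert who relies on curated company knowledge.",
    embedder={
        "provider": "openai",                           # embedding backend
        "config": {"model": "text-embedding-3-small"},  # illustrative model
    },
)
```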
### Advanced Features
- Enable `reasoning: true` for agents that need to plan and reflect before executing complex tasks
- Set appropriate `max_reasoning_attempts` to control planning iterations (None for unlimited attempts)
- Use `inject_date: true` to provide agents with current date awareness for time-sensitive tasks
@@ -650,6 +693,7 @@ The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler e
- Enable `multimodal: true` for agents that need to process both text and visual content
### Agent Collaboration
- Enable `allow_delegation: true` when agents need to work together
- Use `step_callback` to monitor and log agent interactions
- Consider using different LLMs for different purposes:
@@ -657,6 +701,7 @@ The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler e
- `function_calling_llm` for efficient tool usage
### Date Awareness and Reasoning
- Use `inject_date: true` to provide agents with current date awareness for time-sensitive tasks
- Customize the date format with `date_format` using standard Python datetime format codes
- Valid format codes include: %Y (year), %m (month), %d (day), %B (full month name), etc.
@@ -664,22 +709,26 @@ The `kickoff()` method uses a `LiteAgent` internally, which provides a simpler e
- Enable `reasoning: true` for complex tasks that benefit from upfront planning and reflection
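A small sketch of these settings together (parameter names are from the attributes table; the format string is one example):

```python Code
from crewai import Agent

market_agent = Agent(
    role="Market Analyst",
    goal="Report on market conditions as of the current date",
    backstory="An analyst who anchors every finding to a specific date.",
    inject_date=True,         # today's date is added to task descriptions
    date_format="%B %d, %Y",  # e.g. "January 05, 2026"
    reasoning=True,           # plan and reflect before executing
)
```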
### Model Compatibility
- Set `use_system_prompt: false` for older models that don't support system messages
- Ensure your chosen `llm` supports the features you need (like function calling)
## Troubleshooting Common Issues
1. **Rate Limiting**: If you're hitting API rate limits:
- Implement appropriate `max_rpm`
- Use caching for repetitive operations
- Consider batching requests
2. **Context Window Errors**: If you're exceeding context limits:
- Enable `respect_context_window`
- Use more efficient prompts
- Clear agent memory periodically
3. **Code Execution Issues**: If code execution fails:
- Verify Docker is installed for safe mode
- Check execution permissions
- Review code sandbox settings
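Putting the rate-limit and context mitigations from items 1 and 2 together, a hedged configuration sketch (the values are illustrative starting points, not recommendations):

```python Code
from crewai import Agent

resilient_agent = Agent(
    role="Research Analyst",
    goal="Gather information without tripping API or context limits",
    backstory="A thorough researcher who paces their requests.",
    max_rpm=10,                   # cap requests per minute (issue 1)
    cache=True,                   # reuse repeated tool results (issue 1)
    respect_context_window=True,  # summarize instead of overflowing (issue 2)
    max_retry_limit=2,            # bounded retries on errors
)
```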


@@ -5,7 +5,12 @@ icon: terminal
mode: "wide"
---
<Warning>
Since release 0.140.0, CrewAI AMP has been migrating its login provider, and the CLI authentication flow was updated accordingly. Users who log in with Google, or who created their account after July 3rd, 2025, will be unable to log in with older versions of the `crewai` library.
</Warning>
## Overview
@@ -41,6 +46,7 @@ crewai create [OPTIONS] TYPE NAME
- `NAME`: Name of the crew or flow
Example:
```shell Terminal
crewai create crew my_new_crew
crewai create flow my_new_flow
@@ -57,6 +63,7 @@ crewai version [OPTIONS]
- `--tools`: (Optional) Show the installed version of CrewAI tools
Example:
```shell Terminal
crewai version
crewai version --tools
@@ -74,6 +81,7 @@ crewai train [OPTIONS]
- `-f, --filename TEXT`: Path to a custom file for training (default: "trained_agents_data.pkl")
Example:
```shell Terminal
crewai train -n 10 -f my_training_data.pkl
```
@@ -89,6 +97,7 @@ crewai replay [OPTIONS]
- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks
Example:
```shell Terminal
crewai replay -t task_123456
```
@@ -118,6 +127,7 @@ crewai reset-memories [OPTIONS]
- `-a, --all`: Reset ALL memories
Example:
```shell Terminal
crewai reset-memories --long --short
crewai reset-memories --all
@@ -135,6 +145,7 @@ crewai test [OPTIONS]
- `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")
Example:
```shell Terminal
crewai test -n 5 -m gpt-3.5-turbo
```
@@ -148,12 +159,16 @@ crewai run
```
<Note>
Starting from version 0.103.0, the `crewai run` command can be used to run both standard crews and flows. For flows, it automatically detects the type from pyproject.toml and runs the appropriate command. This is now the recommended way to run both crews and flows.
</Note>
<Note>
Make sure to run these commands from the directory where your CrewAI project is set up. Some commands may require additional configuration or setup within your project structure.
</Note>
### 9. Chat
@@ -165,6 +180,7 @@ After receiving the results, you can continue interacting with the assistant for
```shell Terminal
crewai chat
```
<Note>
Ensure you execute these commands from your CrewAI project's root directory.
</Note>
@@ -182,28 +198,30 @@ def crew(self) -> Crew:
chat_llm="gpt-4o", # LLM for chat orchestration
)
```
</Note>
### 10. Deploy
Deploy the crew or flow to [CrewAI AMP](https://app.crewai.com).
- **Authentication**: You need to be authenticated to deploy to CrewAI AMP.
You can login or create an account with:
```shell Terminal
crewai login
```
- **Create a deployment**: Once you are authenticated, you can create a deployment for your crew or flow from the root of your local project.
```shell Terminal
crewai deploy create
```
- Reads your local project configuration.
- Prompts you to confirm the environment variables (like `OPENAI_API_KEY`, `SERPER_API_KEY`) found locally. These will be securely stored with the deployment on the Enterprise platform. Ensure your sensitive keys are correctly configured locally (e.g., in a `.env` file) before running this.
### 11. Organization Management
Manage your CrewAI AMP organizations.
```shell Terminal
crewai org [COMMAND] [OPTIONS]
@@ -212,65 +230,80 @@ crewai org [COMMAND] [OPTIONS]
#### Commands:
- `list`: List all organizations you belong to
```shell Terminal
crewai org list
```
- `current`: Display your currently active organization
```shell Terminal
crewai org current
```
- `switch`: Switch to a specific organization
```shell Terminal
crewai org switch <organization_id>
```
<Note>
You must be authenticated to CrewAI AMP to use these organization management commands.
</Note>
- **Create a deployment** (continued):
- Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI AMP.
```shell Terminal
crewai deploy push
```
- Initiates the deployment process on the CrewAI AMP platform.
- Upon successful initiation, it will output the Deployment created successfully! message along with the Deployment Name and a unique Deployment ID (UUID).
- **Deployment Status**: You can check the status of your deployment with:
```shell Terminal
crewai deploy status
```
This fetches the latest deployment status of your most recent deployment attempt (e.g., `Building Images for Crew`, `Deploy Enqueued`, `Online`).
- **Deployment Logs**: You can check the logs of your deployment with:
```shell Terminal
crewai deploy logs
```
This streams the deployment logs to your terminal.
- **List deployments**: You can list all your deployments with:
```shell Terminal
crewai deploy list
```
This lists all your deployments.
- **Delete a deployment**: You can delete a deployment with:
```shell Terminal
crewai deploy remove
```
This deletes the deployment from the CrewAI AMP platform.
- **Help Command**: You can get help with the CLI with:
```shell Terminal
crewai deploy --help
```
This shows the help message for the CrewAI Deploy CLI.
Watch this video tutorial for a step-by-step demonstration of deploying your crew to [CrewAI AMP](http://app.crewai.com) using the CLI.
<iframe
className="w-full aspect-video rounded-xl"
@@ -283,25 +316,27 @@ Watch this video tutorial for a step-by-step demonstration of deploying your cre
### 11. Login
Authenticate with CrewAI AMP using a secure device code flow (no email entry required).
```shell Terminal
crewai login
```
What happens:
- A verification URL and short code are displayed in your terminal
- Your browser opens to the verification URL
- Enter/confirm the code to complete authentication
Notes:
- The OAuth2 provider and domain are configured via `crewai config` (defaults use `login.crewai.com`)
- After successful login, the CLI also attempts to authenticate to the Tool Repository automatically
- If you reset your configuration, run `crewai login` again to re-authenticate
### 12. API Keys
When running the `crewai create crew` command, the CLI will show you a list of available LLM providers to choose from, followed by model selection for your chosen provider.
Once you've selected an LLM provider and model, you will be prompted for API keys.
@@ -309,11 +344,11 @@ Once you've selected an LLM provider and model, you will be prompted for API key
Here's a list of the most popular LLM providers suggested by the CLI:
- OpenAI
- Groq
- Anthropic
- Google Gemini
- SambaNova
When you select a provider, the CLI will then show you available models for that provider and prompt you to enter your API key.
@@ -325,7 +360,7 @@ When you select a provider, the CLI will prompt you to enter the Key name and th
See the following link for each provider's key name:
- [LiteLLM Providers](https://docs.litellm.ai/docs/providers)
### 13. Configuration Management
@@ -338,23 +373,26 @@ crewai config [COMMAND] [OPTIONS]
#### Commands:
- `list`: Display all CLI configuration parameters
```shell Terminal
crewai config list
```
- `set`: Set a CLI configuration parameter
```shell Terminal
crewai config set <key> <value>
```
- `reset`: Reset all CLI configuration parameters to default values
```shell Terminal
crewai config reset
```
#### Available Configuration Parameters
- `enterprise_base_url`: Base URL of the CrewAI AMP instance
- `oauth2_provider`: OAuth2 provider used for authentication (e.g., workos, okta, auth0)
- `oauth2_audience`: OAuth2 audience value, typically used to identify the target API or resource
- `oauth2_client_id`: OAuth2 client ID issued by the provider, used during authentication requests
@@ -363,43 +401,48 @@ crewai config reset
#### Examples
Display current configuration:
```shell Terminal
crewai config list
```
Example output:
| Setting             | Value                  | Description                                  |
| :------------------ | :--------------------- | :------------------------------------------- |
| enterprise_base_url | https://app.crewai.com | Base URL of the CrewAI AMP instance          |
| org_name            | Not set                | Name of the currently active organization    |
| org_uuid            | Not set                | UUID of the currently active organization    |
| oauth2_provider     | workos                 | OAuth2 provider (e.g., workos, okta, auth0)  |
| oauth2_audience     | client_01YYY           | Audience identifying the target API/resource |
| oauth2_client_id    | client_01XXX           | OAuth2 client ID issued by the provider      |
| oauth2_domain       | login.crewai.com       | Provider domain (e.g., your-org.auth0.com)   |
Set the enterprise base URL:
```shell Terminal
crewai config set enterprise_base_url https://my-enterprise.crewai.com
```
Set OAuth2 provider:
```shell Terminal
crewai config set oauth2_provider auth0
```
Set OAuth2 domain:
```shell Terminal
crewai config set oauth2_domain my-company.auth0.com
```
Reset all configuration to defaults:
```shell Terminal
crewai config reset
```
<Tip>
After resetting configuration, re-run `crewai login` to authenticate again.
</Tip>
### 14. Trace Management
@@ -413,16 +456,19 @@ crewai traces [COMMAND]
#### Commands:
- `enable`: Enable trace collection for crew/flow executions
```shell Terminal
crewai traces enable
```
- `disable`: Disable trace collection for crew/flow executions
```shell Terminal
crewai traces disable
```
- `status`: Show current trace collection status
```shell Terminal
crewai traces status
```
@@ -432,19 +478,23 @@ crewai traces status
Trace collection is controlled by checking three settings in priority order:
1. **Explicit flag in code** (highest priority - can enable OR disable):
```python
crew = Crew(agents=[...], tasks=[...], tracing=True) # Always enable
crew = Crew(agents=[...], tasks=[...], tracing=False) # Always disable
crew = Crew(agents=[...], tasks=[...]) # Check lower priorities (default)
```
- `tracing=True` will **always enable** tracing (overrides everything)
- `tracing=False` will **always disable** tracing (overrides everything)
- `tracing=None` or omitted will check lower priority settings
2. **Environment variable** (second priority):
```env
CREWAI_TRACING_ENABLED=true
```
- Checked only if `tracing` is not explicitly set to `True` or `False` in code
- Set to `true` or `1` to enable tracing
@@ -462,21 +512,30 @@ Trace collection is controlled by checking three settings in priority order:
- Run `crewai traces enable`
**To disable tracing**, use any ONE of these methods:
- Set `tracing=False` in your Crew/Flow code (overrides everything), OR
- Remove or set to `false` the `CREWAI_TRACING_ENABLED` env var, OR
- Run `crewai traces disable`
Higher priority settings override lower ones.
</Note>
<Tip>
For more information about tracing, see the [Tracing documentation](/observability/tracing).
</Tip>
<Tip>
CrewAI CLI handles authentication to the Tool Repository automatically when adding packages to your project. Just prefix any `uv` command with `crewai` to use it, e.g. `crewai uv add requests`. For more information, see the [Tool Repository](https://docs.crewai.com/enterprise/features/tool-repository) docs.
</Tip>
<Note>
Configuration settings are stored in `~/.config/crewai/settings.json`. Some settings like organization name and UUID are read-only and managed through authentication and organization commands. Tool repository related settings are hidden and cannot be set directly by users.
</Note>


@@ -1,6 +1,6 @@
---
title: "Event Listeners"
description: "Tap into CrewAI events to build custom integrations and monitoring"
icon: spinner
mode: "wide"
---
@@ -20,11 +20,12 @@ CrewAI uses an event bus architecture to emit events throughout the execution li
When specific actions occur in CrewAI (like a Crew starting execution, an Agent completing a task, or a tool being used), the system emits corresponding events. You can register handlers for these events to execute custom code when they occur.
<Note type="info" title="Enterprise Enhancement: Prompt Tracing">
CrewAI AMP provides a built-in Prompt Tracing feature that leverages the event system to track, store, and visualize all prompts, completions, and associated metadata. This provides powerful debugging capabilities and transparency into your agent operations.
![Prompt Tracing Dashboard](/images/enterprise/traces-overview.png)
With Prompt Tracing you can:
- View the complete history of all prompts sent to your LLM
- Track token usage and costs
- Debug agent reasoning failures
@@ -274,7 +275,6 @@ The structure of the event object depends on the event type, but all events inhe
Additional fields vary by event type. For example, `CrewKickoffCompletedEvent` includes `crew_name` and `output` fields.
## Advanced Usage: Scoped Handlers
For temporary event handling (useful for testing or specific operations), you can use the `scoped_handlers` context manager:
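A hedged sketch, assuming `crewai_event_bus` and the event classes are importable from `crewai.utilities.events` as in the examples above:

```python Code
from crewai.utilities.events import CrewKickoffStartedEvent, crewai_event_bus

# Handlers registered inside the context are removed when the block exits.
with crewai_event_bus.scoped_handlers():

    @crewai_event_bus.on(CrewKickoffStartedEvent)
    def temp_handler(source, event):
        print(f"Temporary handler saw kickoff of: {event.crew_name}")

    # ... kick off a crew here; temp_handler only fires inside this block
```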


@@ -14,11 +14,12 @@ Tasks provide all necessary details for execution, such as a description, the ag
Tasks within CrewAI can be collaborative, requiring multiple agents to work together. This is managed through the task properties and orchestrated by the Crew's process, enhancing teamwork and efficiency.
<Note type="info" title="Enterprise Enhancement: Visual Task Builder">
CrewAI AMP includes a Visual Task Builder in Crew Studio that simplifies complex task creation and chaining. Design your task flows visually and test them in real-time without writing code.
![Task Builder Screenshot](/images/enterprise/crew-studio-interface.png)
The Visual Task Builder enables:
- Drag-and-drop task creation
- Visual task dependencies and flow
- Real-time testing and validation
@@ -28,10 +29,12 @@ The Visual Task Builder enables:
### Task Execution Flow
Tasks can be executed in two ways:
- **Sequential**: Tasks are executed in the order they are defined
- **Hierarchical**: Tasks are assigned to agents based on their roles and expertise
The execution flow is defined when creating the crew:
```python Code
crew = Crew(
agents=[agent1, agent2],
@@ -42,30 +45,31 @@ crew = Crew(
## Task Attributes
| Attribute                               | Parameters              | Type                                    | Description                                                                                                      |
| :-------------------------------------- | :---------------------- | :-------------------------------------- | :--------------------------------------------------------------------------------------------------------------- |
| **Description**                         | `description`            | `str`                                   | A clear, concise statement of what the task entails.                                                             |
| **Expected Output**                     | `expected_output`        | `str`                                   | A detailed description of what the task's completion looks like.                                                 |
| **Name** _(optional)_                   | `name`                   | `Optional[str]`                         | A name identifier for the task.                                                                                  |
| **Agent** _(optional)_                  | `agent`                  | `Optional[BaseAgent]`                   | The agent responsible for executing the task.                                                                    |
| **Tools** _(optional)_                  | `tools`                  | `List[BaseTool]`                        | The tools/resources the agent is limited to use for this task.                                                   |
| **Context** _(optional)_                | `context`                | `Optional[List["Task"]]`                | Other tasks whose outputs will be used as context for this task.                                                 |
| **Async Execution** _(optional)_        | `async_execution`        | `Optional[bool]`                        | Whether the task should be executed asynchronously. Defaults to False.                                           |
| **Human Input** _(optional)_            | `human_input`            | `Optional[bool]`                        | Whether the task should have a human review the final answer of the agent. Defaults to False.                    |
| **Markdown** _(optional)_               | `markdown`               | `Optional[bool]`                        | Whether the task should instruct the agent to return the final answer formatted in Markdown. Defaults to False.  |
| **Config** _(optional)_                 | `config`                 | `Optional[Dict[str, Any]]`              | Task-specific configuration parameters.                                                                          |
| **Output File** _(optional)_            | `output_file`            | `Optional[str]`                         | File path for storing the task output.                                                                           |
| **Create Directory** _(optional)_       | `create_directory`       | `Optional[bool]`                        | Whether to create the directory for output_file if it doesn't exist. Defaults to True.                           |
| **Output JSON** _(optional)_            | `output_json`            | `Optional[Type[BaseModel]]`             | A Pydantic model to structure the JSON output.                                                                   |
| **Output Pydantic** _(optional)_        | `output_pydantic`        | `Optional[Type[BaseModel]]`             | A Pydantic model for task output.                                                                                |
| **Callback** _(optional)_               | `callback`               | `Optional[Any]`                         | Function/object to be executed after task completion.                                                            |
| **Guardrail** _(optional)_              | `guardrail`              | `Optional[Callable]`                    | Function to validate task output before proceeding to next task.                                                 |
| **Guardrails** _(optional)_             | `guardrails`             | `Optional[List[Callable] \| List[str]]` | List of guardrails to validate task output before proceeding to next task.                                       |
| **Guardrail Max Retries** _(optional)_  | `guardrail_max_retries`  | `Optional[int]`                         | Maximum number of retries when guardrail validation fails. Defaults to 3.                                        |
<Note type="warning" title="Deprecated: max_retries">
The task attribute `max_retries` is deprecated and will be removed in v1.0.0.
Use `guardrail_max_retries` instead to control retry attempts when a guardrail fails.
</Note>
## Creating Tasks
@@ -87,7 +91,7 @@ crew.kickoff(inputs={'topic': 'AI Agents'})
Here's an example of how to configure tasks using YAML:
````yaml tasks.yaml
research_task:
description: >
Conduct a thorough research about {topic}
@@ -107,7 +111,7 @@ reporting_task:
agent: reporting_analyst
markdown: true
output_file: report.md
````
To use this YAML configuration in your code, create a crew class that inherits from `CrewBase`:
@@ -165,7 +169,8 @@ class LatestAiDevelopmentCrew():
```
<Note>
The names you use in your YAML files (`agents.yaml` and `tasks.yaml`) should match the method names in your Python code.
</Note>
### Direct Code Definition (Alternative)
@@ -202,7 +207,8 @@ reporting_task = Task(
```
<Tip>
Directly specify an `agent` for assignment, or let CrewAI's `hierarchical` process decide based on roles, availability, and other factors.
</Tip>
## Task Output
@@ -224,7 +230,7 @@ By default, the `TaskOutput` will only include the `raw` output. A `TaskOutput`
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the task. |
| **Agent** | `agent` | `str` | The agent that executed the task. |
| **Output Format** | `output_format` | `OutputFormat` | The format of the task output, with options including RAW, JSON, and Pydantic. The default is RAW. |
| **Messages** | `messages` | `list[LLMMessage]` | The messages from the last task execution. |
### Task Methods and Properties
@@ -287,12 +293,13 @@ formatted_task = Task(
```
When `markdown=True`, the agent will receive additional instructions to format the output using:
- `#` for headers
- `**text**` for bold text
- `*text*` for italic text
- `-` or `*` for bullet points
- `` `code` `` for inline code
- ```` ```language ```` for code blocks
### YAML Configuration with Markdown
@@ -303,7 +310,7 @@ analysis_task:
expected_output: >
A comprehensive analysis with charts and key findings
agent: analyst
markdown: true # Enable markdown formatting
output_file: analysis.md
```
@@ -315,7 +322,9 @@ analysis_task:
- **Cross-Platform Compatibility**: Markdown is universally supported
<Note>
The markdown formatting instructions are automatically added to the task prompt when `markdown=True`, so you don't need to specify formatting requirements in your task description.
</Note>
## Task Dependencies and Context
@@ -383,6 +392,7 @@ blog_task = Task(
Instead of writing custom validation functions, you can use string descriptions that leverage LLM-based validation. When you provide a string to the `guardrail` or `guardrails` parameter, CrewAI automatically creates an `LLMGuardrail` that uses the agent's LLM to validate the output based on your description.
**Requirements**:
- The task must have an `agent` assigned (the guardrail uses the agent's LLM)
- Provide a clear, descriptive string explaining the validation criteria
@@ -399,11 +409,13 @@ blog_task = Task(
```
LLM-based guardrails are particularly useful for:
- **Complex validation logic** that's difficult to express programmatically
- **Subjective criteria** like tone, style, or quality assessments
- **Natural language requirements** that are easier to describe than code
The LLM guardrail will:
1. Analyze the task output against your description
2. Return `(True, output)` if the output complies with the criteria
3. Return `(False, feedback)` with specific feedback if validation fails
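A minimal sketch of a string-based guardrail (the task and validation criteria are illustrative; the assigned agent's LLM performs the validation):

```python Code
from crewai import Agent, Task

writer = Agent(
    role="Content Writer",
    goal="Write accessible technical content",
    backstory="A clear, engaging technical writer.",
)

blog_task = Task(
    description="Write a short blog post about AI agents",
    expected_output="A blog post under 200 words",
    agent=writer,  # required: the guardrail uses this agent's LLM
    # A plain-language description becomes an LLMGuardrail automatically.
    guardrail="Ensure the post is under 200 words and avoids unexplained jargon",
)
```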
@@ -431,6 +443,7 @@ research_task = Task(
You can apply multiple guardrails to a task using the `guardrails` parameter. Multiple guardrails are executed sequentially, with each guardrail receiving the output from the previous one. This allows you to chain validation and transformation steps.
The `guardrails` parameter accepts:
- A list of guardrail functions or string descriptions
- A single guardrail function or string (same as `guardrail`)
@@ -480,6 +493,7 @@ blog_task = Task(
```
In this example, the guardrails execute in order:
1. `validate_word_count` checks the word count
2. `validate_no_profanity` checks for inappropriate language (using the output from step 1)
3. `format_output` formats the final result (using the output from step 2)
@@ -522,6 +536,7 @@ This approach combines the precision of programmatic validation with the flexibi
### Guardrail Function Requirements
1. **Function Signature**:
- Must accept exactly one parameter (the task output)
- Should return a tuple of `(bool, Any)`
- Type hints are recommended but optional
@@ -530,11 +545,10 @@ This approach combines the precision of programmatic validation with the flexibi
- On success: it returns a tuple of `(bool, Any)`. For example: `(True, validated_result)`
- On failure: it returns a tuple of `(bool, str)`. For example: `(False, "Error message explaining the failure")`
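A minimal guardrail matching this contract (the validation rule itself is illustrative):

```python Code
from typing import Any, Tuple

from crewai import TaskOutput

def validate_not_empty(result: TaskOutput) -> Tuple[bool, Any]:
    # Exactly one parameter in, a (bool, Any) tuple out.
    if result.raw.strip():
        return (True, result.raw)
    return (False, "Output was empty; produce a non-empty answer.")
```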
### Error Handling Best Practices
1. **Structured Error Responses**:
```python Code
from crewai import TaskOutput, LLMGuardrail
@@ -550,11 +564,13 @@ def validate_with_context(result: TaskOutput) -> Tuple[bool, Any]:
```
2. **Error Categories**:
- Use specific error codes
- Include relevant context
- Provide actionable feedback
3. **Validation Chain**:
```python Code
from typing import Any, Dict, List, Tuple, Union
from crewai import TaskOutput
@@ -581,6 +597,7 @@ def complex_validation(result: TaskOutput) -> Tuple[bool, Any]:
### Handling Guardrail Results
When a guardrail returns `(False, error)`:
1. The error is sent back to the agent
2. The agent attempts to fix the issue
3. The process repeats until:
@@ -588,6 +605,7 @@ When a guardrail returns `(False, error)`:
- Maximum retries are reached (`guardrail_max_retries`)
Example with retry handling:
```python Code
from typing import Optional, Tuple, Union
from crewai import TaskOutput, Task
@@ -613,10 +631,12 @@ task = Task(
## Getting Structured Consistent Outputs from Tasks
<Note>
Note that the output of a crew's final task becomes the final output of the crew itself.
</Note>
### Using `output_pydantic`
The `output_pydantic` property allows you to define a Pydantic model that the task output should conform to. This ensures that the output is not only structured but also validated according to the Pydantic model.
Here's an example demonstrating how to use output_pydantic:
@@ -686,18 +706,22 @@ print("Accessing Properties - Option 5")
print("Blog:", result)
```
In this example:
- A Pydantic model Blog is defined with title and content fields.
- The task task1 uses the output_pydantic property to specify that its output should conform to the Blog model.
- After executing the crew, you can access the structured output in multiple ways as shown.
#### Explanation of Accessing the Output
1. Dictionary-Style Indexing: You can directly access the fields using `result["field_name"]`. This works because the CrewOutput class implements the `__getitem__` method.
2. Directly from Pydantic Model: Access the attributes directly from the result.pydantic object.
3. Using to_dict() Method: Convert the output to a dictionary and access the fields.
4. Printing the Entire Object: Simply print the result object to see the structured output.
### Using `output_json`
The `output_json` property allows you to define the expected output in JSON format. This ensures that the task's output is a valid JSON structure that can be easily parsed and used in your application.
Here's an example demonstrating how to use `output_json`:
@@ -757,14 +781,15 @@ print("Blog:", result)
```
In this example:
- A Pydantic model Blog is defined with title and content fields, which is used to specify the structure of the JSON output.
- The task task1 uses the output_json property to indicate that it expects a JSON output conforming to the Blog model.
- After executing the crew, you can access the structured JSON output in two ways as shown.
#### Explanation of Accessing the Output
1. Accessing Properties Using Dictionary-Style Indexing: You can access the fields directly using `result["field_name"]`. This is possible because the CrewOutput class implements the `__getitem__` method, allowing you to treat the output like a dictionary. In this option, we're retrieving the title and content from the result.
2. Printing the Entire Blog Object: By printing `result`, you get the string representation of the CrewOutput object. Since the `__str__` method is implemented to return the JSON output, this will display the entire output as a formatted string representing the Blog object.
---
@@ -954,8 +979,6 @@ While creating and executing tasks, certain validation mechanisms are in place t
These validations help maintain the consistency and reliability of task executions within the CrewAI framework.
## Creating Directories when Saving Files
The `create_directory` parameter controls whether CrewAI should automatically create directories when saving task outputs to files. This feature is particularly useful for organizing outputs and ensuring that file paths are correctly structured, especially when working with complex project hierarchies.
@@ -1002,7 +1025,7 @@ analysis_task:
A comprehensive financial report with quarterly insights
agent: financial_analyst
output_file: reports/quarterly/q4_2024_analysis.pdf
create_directory: true # Automatically create 'reports/quarterly/' directory
audit_task:
description: >
@@ -1011,18 +1034,20 @@ audit_task:
A compliance audit report
agent: auditor
output_file: audit/compliance_report.md
create_directory: false # Directory must already exist
```
### Use Cases
**Automatic Directory Creation (`create_directory=True`):**
- Development and prototyping environments
- Dynamic report generation with date-based folders
- Automated workflows where directory structure may vary
- Multi-tenant applications with user-specific folders
**Manual Directory Management (`create_directory=False`):**
- Production environments with strict file system controls
- Security-sensitive applications where directories must be pre-configured
- Systems with specific permission requirements


@@ -17,9 +17,10 @@ This includes tools from the [CrewAI Toolkit](https://github.com/joaomdmoura/cre
enabling everything from simple searches to complex interactions and effective teamwork among agents.
<Note type="info" title="Enterprise Enhancement: Tools Repository">
CrewAI AMP provides a comprehensive Tools Repository with pre-built integrations for common business systems and APIs. Deploy agents with enterprise tools in minutes instead of days.
The Enterprise Tools Repository includes:
- Pre-built connectors for popular enterprise systems
- Custom tool creation interface
- Version control and sharing capabilities