Compare commits

...

24 Commits

Author SHA1 Message Date
Lucas Gomide
ccd98cc511 docs: update Python version requirement from <=3.13 to <3.14
This correctly reflects support for all 3.13.x patch versions
2025-06-10 13:36:36 -03:00
Lucas Gomide
5c51349a85 Support async tool executions (#2983)
* test: fix structured tool tests

No tests were being executed from this file

* feat: support running async tools

Some tools require async execution. This commit allows us to collect tool results from coroutines

* docs: add docs about asynchronous tool support
2025-06-10 12:17:06 -04:00
Richard Luo
5b740467cb docs: fix the guide on persistence (#2849)
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-09 14:09:56 -04:00
hegasz
e9d9dd2a79 Fix missing manager_agent tokens in usage_metrics from kickoff (#2848)
* fix(metrics): prevent usage_metrics from dropping manager_agent tokens

* Add test to verify hierarchical kickoff aggregates manager and agent usage metrics

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-09 13:16:05 -04:00
Lorenze Jay
3e74cb4832 docs: add integrations documentation and images for enterprise features (#2981)
- Introduced a new documentation file for Integrations, detailing supported services and setup instructions.
- Updated the main docs.json to include the new "integrations" feature in the contextual options.
- Added several images related to integrations to enhance the documentation.

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-09 12:46:09 -04:00
Lucas Gomide
db3c8a49bd feat: improve docs and logging for Multi-Org actions in CLI (#2980)
* docs: add organization management in our CLI docs

* feat: improve user feedback when user is not authenticated

* feat: improve logging about the current organization while publishing/installing a Tool

* feat: improve logging when Agent repository is not found during fetch

* fix linter offences

* test: fix auth token error
2025-06-09 12:21:12 -04:00
Lucas Gomide
8a37b535ed docs: improve docs about planning LLM usage (#2977) 2025-06-09 10:17:04 -04:00
Lucas Gomide
e6ac1311e7 build: upgrade LiteLLM to support latest OpenAI version (#2963)
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-09 08:55:12 -04:00
Akshit Madan
b0d89698fd docs: added Maxim support for Agent Observability (#2861)
* docs: added Maxim support for Agent Observability

* enhanced the maxim integration doc page as per the github PR reviewer bot suggestions

* Update maxim-observability.mdx

* Update maxim-observability.mdx

- Fixed Python version, >=3.10
- added expected_output field in Task
- Removed marketing links and added github link

* added maxim in observability

---------

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-08 13:39:01 -04:00
Lucas Gomide
21d063a46c Support multi org in CLI (#2969)
* feat: support listing, switching, and viewing your current organization

* feat: store the current org after logging in

* feat: filtering agents, tools and their actions by organization_uuid if present

* fix linter offenses

* refactor: propagate the current org through a header instead of params

* refactor: rename org column name to ID instead of Handle

---------

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-06 15:28:09 -04:00
Mike Plachta
02912a653e Increasing the default X-axis spacing for flow plotting (#2967)
* Increasing the default X-axis spacing for flow plotting

* removing unused imports
2025-06-06 09:43:38 -07:00
Greyson LaLonde
f1cfba7527 docs: update hallucination guardrail examples
- Add basic usage example showing guardrail uses task's expected_output as default context
- Add explicit context example for custom reference content
2025-06-05 12:34:52 -04:00
Lucas Gomide
3e075cd48d docs: add minimum UV version required to use the Tool repository (#2965)
* docs: add minimum UV version required to use the Tool repository

* docs: remove memory from Agent docs

The Agent does not support the `memory` attribute
2025-06-05 11:37:19 -04:00
Lucas Gomide
e03ec4d60f fix: remove duplicated message about Tool result (#2964)
We are currently inserting tool results into LLM messages twice, which may unnecessarily increase processing costs, especially for longer outputs.
2025-06-05 09:42:10 -04:00
Lorenze Jay
ba740c6157 Update version to 0.126.0 and dependencies in pyproject.toml and lock files (#2961)
2025-06-04 17:49:07 -07:00
Tony Kipkemboi
34c813ed79 Add enterprise testing image (#2960) 2025-06-04 15:05:35 -07:00
Tony Kipkemboi
545cc2ffe4 docs: Fix missing await keywords in async crew kickoff methods and add llm selection guide (#2959) 2025-06-04 14:12:52 -07:00
Mike Plachta
47b97d9b7f Azure embeddings documentation for knowledge (#2957)
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-04 13:27:50 -07:00
Lucas Gomide
bf8fbb0a44 Minor adjustments on Tool publish and docs (#2958)
* fix: fix tool publisher logger when available_exports is found

* docs: update docs and templates since we support Python 3.13
2025-06-04 11:58:26 -04:00
Lucas Gomide
552921cf83 feat: load Tool from Agent repository by their own module (#2940)
Previously, we only supported tools from the crewai-tools open-source repository. Now, we're introducing improved support for private tool repositories.
2025-06-04 09:53:25 -04:00
Lorenze Jay
372874fb3a agent add knowledge sources fix and test (#2948) 2025-06-04 06:47:15 -07:00
Lucas Gomide
2bd6b72aae Persist available tools from a Tool repository (#2851)
* feat: add capability to see and expose public Tool classes

* feat: persist available Tools from repository on publish

* ci: explicitly ignore templates from ruff check

Ruff only applies --exclude to files it discovers itself, so we have to manually skip the same files excluded in `ruff.toml`

* style: fix linter issues

* refactor: rename available_tools_classes to available_exports

* feat: provide more context about exportable tools

* feat: allow installing a Tool from PyPI

* test: fix tests

* feat: add env_vars attribute to BaseTool

* remove TODO: security check, since we handle that on the enterprise side
2025-06-03 10:09:02 -04:00
siddharth Sambharia
f02e0060fa feat/portkey-ai-docs-udpated (#2936) 2025-06-03 08:15:28 -04:00
Lucas Gomide
66b7628972 Support Python 3.13 (#2844)
* ci: support python 3.13 on CI

* docs: update docs about supported Python versions

* build: add requires-python <3.14

* build: explicit tokenizers dependency

Added an explicit tokenizers>=0.20.3 dependency to ensure a version compatible with Python 3.13 is used.

* build: drop fastembed as it is no longer used

* build: attempt to build PyTorch on Python 3.13

* feat: upgrade fastavro, pyarrow and lancedb

* build: ensure tiktoken greater than 0.8.0 for Python 3.13 compatibility
2025-06-02 18:12:24 -04:00
62 changed files with 5275 additions and 1236 deletions

View File

@@ -30,4 +30,7 @@ jobs:
- name: Run Ruff on Changed Files
if: ${{ steps.changed-files.outputs.files != '' }}
run: |
echo "${{ steps.changed-files.outputs.files }}" | tr " " "\n" | xargs -I{} ruff check "{}"
echo "${{ steps.changed-files.outputs.files }}" \
| tr ' ' '\n' \
| grep -v 'src/crewai/cli/templates/' \
| xargs -I{} ruff check "{}"

View File

@@ -14,7 +14,7 @@ jobs:
timeout-minutes: 15
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12']
python-version: ['3.10', '3.11', '3.12', '3.13']
steps:
- name: Checkout code
uses: actions/checkout@v4

View File

@@ -161,7 +161,7 @@ To get started with CrewAI, follow these simple steps:
### 1. Installation
Ensure you have Python >=3.10 <3.13 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <3.14 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, install CrewAI:

View File

@@ -43,7 +43,6 @@ The Visual Agent Builder enables:
| **Max Iterations** _(optional)_ | `max_iter` | `int` | Maximum iterations before the agent must provide its best answer. Default is 20. |
| **Max RPM** _(optional)_ | `max_rpm` | `Optional[int]` | Maximum requests per minute to avoid rate limits. |
| **Max Execution Time** _(optional)_ | `max_execution_time` | `Optional[int]` | Maximum time (in seconds) for task execution. |
| **Memory** _(optional)_ | `memory` | `bool` | Whether the agent should maintain memory of interactions. Default is True. |
| **Verbose** _(optional)_ | `verbose` | `bool` | Enable detailed execution logs for debugging. Default is False. |
| **Allow Delegation** _(optional)_ | `allow_delegation` | `bool` | Allow the agent to delegate tasks to other agents. Default is False. |
| **Step Callback** _(optional)_ | `step_callback` | `Optional[Any]` | Function called after each agent step, overrides crew callback. |
@@ -156,7 +155,6 @@ agent = Agent(
"you excel at finding patterns in complex datasets.",
llm="gpt-4", # Default: OPENAI_MODEL_NAME or "gpt-4"
function_calling_llm=None, # Optional: Separate LLM for tool calling
memory=True, # Default: True
verbose=False, # Default: False
allow_delegation=False, # Default: False
max_iter=20, # Default: 20 iterations
@@ -537,7 +535,6 @@ The context window management feature works automatically in the background. You
- Adjust `max_iter` and `max_retry_limit` based on task complexity
### Memory and Context Management
- Use `memory: true` for tasks requiring historical context
- Leverage `knowledge_sources` for domain-specific information
- Configure `embedder` when using custom embedding models
- Use custom templates (`system_template`, `prompt_template`, `response_template`) for fine-grained control over agent behavior
@@ -585,7 +582,6 @@ The context window management feature works automatically in the background. You
- Review code sandbox settings
4. **Memory Issues**: If agent responses seem inconsistent:
- Verify memory is enabled
- Check knowledge source configuration
- Review conversation history management

View File

@@ -200,6 +200,37 @@ Deploy the crew or flow to [CrewAI Enterprise](https://app.crewai.com).
```
- Reads your local project configuration.
- Prompts you to confirm the environment variables (like `OPENAI_API_KEY`, `SERPER_API_KEY`) found locally. These will be securely stored with the deployment on the Enterprise platform. Ensure your sensitive keys are correctly configured locally (e.g., in a `.env` file) before running this.
### 11. Organization Management
Manage your CrewAI Enterprise organizations.
```shell Terminal
crewai org [COMMAND] [OPTIONS]
```
#### Commands:
- `list`: List all organizations you belong to
```shell Terminal
crewai org list
```
- `current`: Display your currently active organization
```shell Terminal
crewai org current
```
- `switch`: Switch to a specific organization
```shell Terminal
crewai org switch <organization_id>
```
<Note>
You must be authenticated to CrewAI Enterprise to use these organization management commands.
</Note>
- **Create a deployment** (continued):
- Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI Enterprise.

View File

@@ -325,12 +325,12 @@ for result in results:
# Example of using kickoff_async
inputs = {'topic': 'AI in healthcare'}
async_result = my_crew.kickoff_async(inputs=inputs)
async_result = await my_crew.kickoff_async(inputs=inputs)
print(async_result)
# Example of using kickoff_for_each_async
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = my_crew.kickoff_for_each_async(inputs=inputs_array)
async_results = await my_crew.kickoff_for_each_async(inputs=inputs_array)
for async_result in async_results:
print(async_result)
```
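Since `await` is only valid inside a coroutine, here is a minimal sketch of driving the corrected call from synchronous code (assuming `my_crew` is the `Crew` built earlier in this example):
```python
import asyncio

async def run_async_kickoff(crew, inputs):
    # await the coroutine returned by kickoff_async so it actually executes
    return await crew.kickoff_async(inputs=inputs)

# Drive the coroutine from synchronous code:
# result = asyncio.run(run_async_kickoff(my_crew, {"topic": "AI in healthcare"}))
# print(result)
```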

View File

@@ -602,6 +602,30 @@ agent = Agent(
)
```
#### Configuring Azure OpenAI Embeddings
When using Azure OpenAI embeddings:
1. Make sure you deploy the embedding model on the Azure platform first
2. Then use the following configuration:
```python
agent = Agent(
role="Researcher",
goal="Research topics",
backstory="Expert researcher",
knowledge_sources=[knowledge_source],
embedder={
"provider": "azure",
"config": {
"api_key": "your-azure-api-key",
"model": "text-embedding-ada-002", # change to the model you are using and is deployed in Azure
"api_base": "https://your-azure-endpoint.openai.azure.com/",
"api_version": "2024-02-01"
}
}
)
```
## Advanced Features
### Query Rewriting

View File

@@ -29,6 +29,10 @@ my_crew = Crew(
From this point on, your crew will have planning enabled, and the tasks will be planned before each iteration.
<Warning>
When planning is enabled, crewAI will use `gpt-4o-mini` as the default LLM for planning, which requires a valid OpenAI API key. Since your agents might be using different LLMs, this could cause confusion if you don't have an OpenAI API key configured or if you're experiencing unexpected behavior related to LLM API calls.
</Warning>
#### Planning LLM
Now you can define the LLM that will be used to plan the tasks.
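As a sketch of what that looks like in practice (the agent, task, and model name below are illustrative, not from the diff):
```python
from crewai import Agent, Crew, Task

researcher = Agent(
    role="Researcher",
    goal="Summarize recent AI news",
    backstory="A diligent analyst who produces concise briefings",
)

summary_task = Task(
    description="Summarize this week's most important AI headlines",
    expected_output="A short bullet list of headlines with one-line summaries",
    agent=researcher,
)

# planning=True plans the tasks before each iteration;
# planning_llm overrides the default gpt-4o-mini planner
my_crew = Crew(
    agents=[researcher],
    tasks=[summary_task],
    planning=True,
    planning_llm="gpt-4o",
)
```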

View File

@@ -32,6 +32,7 @@ The Enterprise Tools Repository includes:
- **Customizability**: Provides the flexibility to develop custom tools or utilize existing ones, catering to the specific needs of agents.
- **Error Handling**: Incorporates robust error handling mechanisms to ensure smooth operation.
- **Caching Mechanism**: Features intelligent caching to optimize performance and reduce redundant operations.
- **Asynchronous Support**: Handles both synchronous and asynchronous tools, enabling non-blocking operations.
## Using CrewAI Tools
@@ -177,6 +178,62 @@ class MyCustomTool(BaseTool):
return "Tool's result"
```
## Asynchronous Tool Support
CrewAI supports asynchronous tools, allowing you to implement tools that perform non-blocking operations like network requests, file I/O, or other async operations without blocking the main execution thread.
### Creating Async Tools
You can create async tools in two ways:
#### 1. Using the `tool` Decorator with Async Functions
```python Code
import asyncio

from crewai.tools import tool
@tool("fetch_data_async")
async def fetch_data_async(query: str) -> str:
"""Asynchronously fetch data based on the query."""
# Simulate async operation
await asyncio.sleep(1)
return f"Data retrieved for {query}"
```
#### 2. Implementing Async Methods in Custom Tool Classes
```python Code
import asyncio

from crewai.tools import BaseTool
class AsyncCustomTool(BaseTool):
name: str = "async_custom_tool"
description: str = "An asynchronous custom tool"
async def _run(self, query: str = "") -> str:
"""Asynchronously run the tool"""
# Your async implementation here
await asyncio.sleep(1)
return f"Processed {query} asynchronously"
```
### Using Async Tools
Async tools work seamlessly in both standard Crew workflows and Flow-based workflows:
```python Code
from crewai import Agent, Crew
from crewai.flow.flow import Flow, start

# In standard Crew
agent = Agent(role="researcher", tools=[async_custom_tool])
# In Flow
class MyFlow(Flow):
@start()
async def begin(self):
crew = Crew(agents=[agent])
result = await crew.kickoff_async()
return result
```
The CrewAI framework automatically handles the execution of both synchronous and asynchronous tools, so you don't need to worry about how to call them differently.
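As an illustration, a sync and an async tool can sit side by side on the same agent and be exercised with a normal, synchronous kickoff (a sketch; the tool, agent, and task definitions below are illustrative):
```python Code
import asyncio

from crewai import Agent, Crew, Task
from crewai.tools import tool

@tool("lookup_sync")
def lookup_sync(query: str) -> str:
    """Synchronously look up a value."""
    return f"sync result for {query}"

@tool("lookup_async")
async def lookup_async(query: str) -> str:
    """Asynchronously look up a value."""
    await asyncio.sleep(0.1)
    return f"async result for {query}"

researcher = Agent(
    role="researcher",
    goal="Answer questions using the lookup tools",
    backstory="A researcher comfortable with both tools",
    tools=[lookup_sync, lookup_async],  # mixed sync/async tools on one agent
)

task = Task(
    description="Look up 'CrewAI' with both tools and compare the results",
    expected_output="A short comparison of the two lookups",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
# crew.kickoff()  # sync and async tools are handled the same way
```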
### Utilizing the `tool` Decorator
```python Code

View File

@@ -9,7 +9,12 @@
},
"favicon": "images/favicon.svg",
"contextual": {
"options": ["copy", "view", "chatgpt", "claude"]
"options": [
"copy",
"view",
"chatgpt",
"claude"
]
},
"navigation": {
"tabs": [
@@ -201,6 +206,7 @@
"observability/arize-phoenix",
"observability/langfuse",
"observability/langtrace",
"observability/maxim",
"observability/mlflow",
"observability/openlit",
"observability/opik",
@@ -213,6 +219,7 @@
"group": "Learn",
"pages": [
"learn/overview",
"learn/llm-selection-guide",
"learn/conditional-tasks",
"learn/coding-agents",
"learn/create-custom-tools",
@@ -255,7 +262,8 @@
"enterprise/features/tool-repository",
"enterprise/features/webhook-streaming",
"enterprise/features/traces",
"enterprise/features/hallucination-guardrail"
"enterprise/features/hallucination-guardrail",
"enterprise/features/integrations"
]
},
{

View File

@@ -25,8 +25,13 @@ AI hallucinations occur when language models generate content that appears plaus
from crewai.tasks.hallucination_guardrail import HallucinationGuardrail
from crewai import LLM
# Initialize the guardrail with reference context
# Basic usage - will use task's expected_output as context
guardrail = HallucinationGuardrail(
llm=LLM(model="gpt-4o-mini")
)
# With explicit reference context
context_guardrail = HallucinationGuardrail(
context="AI helps with various tasks including analysis and generation.",
llm=LLM(model="gpt-4o-mini")
)
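To attach one of these guardrails to a task, the guardrail is passed on the task itself; a minimal sketch, assuming `Task` accepts it via its `guardrail` parameter and reusing `context_guardrail` from above:
```python
from crewai import Agent, Task

reporter = Agent(
    role="Reporter",
    goal="Write grounded summaries",
    backstory="A careful writer who avoids unsupported claims",
)

summary_task = Task(
    description="Summarize what AI helps with, using only the reference context",
    expected_output="A two-sentence, fact-grounded summary",
    agent=reporter,
    guardrail=context_guardrail,  # assumed wiring: the guardrail validates this task's output
)
```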

View File

@@ -0,0 +1,185 @@
---
title: Integrations
description: "Connected applications for your agents to take actions."
icon: "plug"
---
## Overview
Enable your agents to authenticate with any OAuth-enabled provider and take actions. From Salesforce and HubSpot to Google and GitHub, we've got you covered with 16+ integrated services.
<Frame>
![Integrations](/images/enterprise/crew_connectors.png)
</Frame>
## Supported Integrations
### **Communication & Collaboration**
- **Gmail** - Manage emails and drafts
- **Slack** - Workspace notifications and alerts
- **Microsoft** - Office 365 and Teams integration
### **Project Management**
- **Jira** - Issue tracking and project management
- **ClickUp** - Task and productivity management
- **Asana** - Team task and project coordination
- **Notion** - Page and database management
- **Linear** - Software project and bug tracking
- **GitHub** - Repository and issue management
### **Customer Relationship Management**
- **Salesforce** - CRM account and opportunity management
- **HubSpot** - Sales pipeline and contact management
- **Zendesk** - Customer support ticket management
### **Business & Finance**
- **Stripe** - Payment processing and customer management
- **Shopify** - E-commerce store and product management
### **Productivity & Storage**
- **Google Sheets** - Spreadsheet data synchronization
- **Google Calendar** - Event and schedule management
- **Box** - File storage and document management
and more to come!
## Prerequisites
Before using Authentication Integrations, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account. You can get started with a free trial.
## Setting Up Integrations
### 1. Connect Your Account
1. Navigate to [CrewAI Enterprise](https://app.crewai.com)
2. Go to **Integrations** tab - https://app.crewai.com/crewai_plus/connectors
3. Click **Connect** on your desired service from the Authentication Integrations section
4. Complete the OAuth authentication flow
5. Grant necessary permissions for your use case
6. Get your Enterprise Token from your [CrewAI Enterprise](https://app.crewai.com) account page - https://app.crewai.com/crewai_plus/settings/account
<Frame>
![Integrations](/images/enterprise/enterprise_action_auth_token.png)
</Frame>
### 2. Install Integration Tools
All you need is the latest version of the `crewai-tools` package.
```bash
uv add crewai-tools
```
## Usage Examples
### Basic Usage
<Tip>
All the services you are authenticated with will be available as tools, so all you need to do is add `CrewaiEnterpriseTools` to your agent and you are good to go.
</Tip>
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools
# Get enterprise tools (Gmail tool will be included)
enterprise_tools = CrewaiEnterpriseTools(
enterprise_token="your_enterprise_token"
)
# print the tools
print(enterprise_tools)
# Create an agent with Gmail capabilities
email_agent = Agent(
role="Email Manager",
goal="Manage and organize email communications",
backstory="An AI assistant specialized in email management and communication.",
tools=[enterprise_tools]
)
# Task to send an email
email_task = Task(
description="Draft and send a follow-up email to john@example.com about the project update",
agent=email_agent,
expected_output="Confirmation that email was sent successfully"
)
# Run the task
crew = Crew(
agents=[email_agent],
tasks=[email_task]
)
# Run the crew
crew.kickoff()
```
### Filtering Tools
```python
from crewai_tools import CrewaiEnterpriseTools
enterprise_tools = CrewaiEnterpriseTools(
actions_list=["gmail_find_email"] # only gmail_find_email tool will be available
)
gmail_tool = enterprise_tools[0]
gmail_agent = Agent(
role="Gmail Manager",
goal="Manage gmail communications and notifications",
backstory="An AI assistant that helps coordinate gmail communications.",
tools=[gmail_tool]
)
notification_task = Task(
description="Find the email from john@example.com",
agent=gmail_agent,
expected_output="Email found from john@example.com"
)
# Run the task
crew = Crew(
agents=[gmail_agent],
tasks=[notification_task]
)
crew.kickoff()
```
## Best Practices
### Security
- **Principle of Least Privilege**: Only grant the minimum permissions required for your agents' tasks
- **Regular Audits**: Periodically review connected integrations and their permissions
- **Secure Credentials**: Never hardcode credentials; use CrewAI's secure authentication flow
### Filtering Tools
On a deployed crew, you can specify which actions are available for each integration from the settings page of the service you connected to.
<Frame>
![Integrations](/images/enterprise/filtering_enterprise_action_tools.png)
</Frame>
### Scoped Deployments for Multi-User Organizations
You can deploy your crew and scope each integration to a specific user. For example, a crew that connects to Google can use a specific user's Gmail account.
<Tip>
This is useful for multi-user organizations where you want to scope the integration to a specific user.
</Tip>
Use the `user_bearer_token` to scope the integration to a specific user, so that when the crew is kicked off it will use that user's bearer token to authenticate with the integration. If the user is not logged in, the crew will not use any connected integrations. Use the default bearer token to authenticate with the integrations that are deployed with the crew.
<Frame>
![Integrations](/images/enterprise/user_bearer_token.png)
</Frame>
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with integration setup or troubleshooting.
</Card>

View File

@@ -21,6 +21,7 @@ Before using the Tool Repository, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account
- [CrewAI CLI](https://docs.crewai.com/concepts/cli#cli) installed
- uv>=0.5.0 installed. Check out [how to upgrade](https://docs.astral.sh/uv/getting-started/installation/#upgrading-uv)
- [Git](https://git-scm.com) installed and configured
- Access permissions to publish or install tools in your CrewAI Enterprise organization

View File

@@ -277,22 +277,23 @@ This pattern allows you to combine direct data passing with state updates for ma
One of CrewAI's most powerful features is the ability to persist flow state across executions. This enables workflows that can be paused, resumed, and even recovered after failures.
### The @persist Decorator
### The @persist() Decorator
The `@persist` decorator automates state persistence, saving your flow's state at key points in execution.
The `@persist()` decorator automates state persistence, saving your flow's state at key points in execution.
#### Class-Level Persistence
When applied at the class level, `@persist` saves state after every method execution:
When applied at the class level, `@persist()` saves state after every method execution:
```python
from crewai.flow.flow import Flow, listen, persist, start
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class CounterState(BaseModel):
value: int = 0
@persist # Apply to the entire flow class
@persist() # Apply to the entire flow class
class PersistentCounterFlow(Flow[CounterState]):
@start()
def increment(self):
@@ -319,10 +320,11 @@ print(f"Second run result: {result2}") # Will be higher due to persisted state
#### Method-Level Persistence
For more granular control, you can apply `@persist` to specific methods:
For more granular control, you can apply `@persist()` to specific methods:
```python
from crewai.flow.flow import Flow, listen, persist, start
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
class SelectivePersistFlow(Flow):
@start()
@@ -330,7 +332,7 @@ class SelectivePersistFlow(Flow):
self.state["count"] = 1
return "First step"
@persist # Only persist after this method
@persist() # Only persist after this method
@listen(first_step)
def important_step(self, prev_result):
self.state["count"] += 1

Binary files not shown: five new images added (1.2 MiB, 288 KiB, 54 KiB, 362 KiB, 49 KiB).

View File

@@ -22,7 +22,7 @@ Watch this video tutorial for a step-by-step demonstration of the installation p
<Note>
**Python Version Requirements**
CrewAI requires `Python >=3.10 and <3.13`. Here's how to check your version:
CrewAI requires `Python >=3.10 and <3.14`. Here's how to check your version:
```bash
python3 --version
```

View File

@@ -108,6 +108,7 @@ crew_2 = Crew(agents=[coding_agent], tasks=[task_2])
# Async function to kickoff multiple crews asynchronously and wait for all to finish
async def async_multiple_crews():
# Create coroutines for concurrent execution
result_1 = crew_1.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
result_2 = crew_2.kickoff_async(inputs={"ages": [20, 22, 24, 28, 30]})
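The hunk above only shows the coroutines being created; here is a sketch of how they are typically awaited together (assuming `crew_1` and `crew_2` are the crews configured earlier in this example):
```python
import asyncio

async def async_multiple_crews():
    # Create coroutines for concurrent execution
    result_1 = crew_1.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
    result_2 = crew_2.kickoff_async(inputs={"ages": [20, 22, 24, 28, 30]})

    # Wait for both crews to finish
    results = await asyncio.gather(result_1, result_2)
    for i, result in enumerate(results, start=1):
        print(f"Crew {i} result: {result}")

# asyncio.run(async_multiple_crews())
```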

View File

@@ -0,0 +1,729 @@
---
title: 'Strategic LLM Selection Guide'
description: 'Strategic framework for choosing the right LLM for your CrewAI agents and writing effective task and agent definitions'
icon: 'brain-circuit'
---
## The CrewAI Approach to LLM Selection
Rather than prescriptive model recommendations, we advocate for a **thinking framework** that helps you make informed decisions based on your specific use case, constraints, and requirements. The LLM landscape evolves rapidly, with new models emerging regularly and existing ones being updated frequently. What matters most is developing a systematic approach to evaluation that remains relevant regardless of which specific models are available.
<Note>
This guide focuses on strategic thinking rather than specific model recommendations, as the LLM landscape evolves rapidly.
</Note>
## Quick Decision Framework
<Steps>
<Step title="Analyze Your Tasks">
Begin by deeply understanding what your tasks actually require. Consider the cognitive complexity involved, the depth of reasoning needed, the format of expected outputs, and the amount of context the model will need to process. This foundational analysis will guide every subsequent decision.
</Step>
<Step title="Map Model Capabilities">
Once you understand your requirements, map them to model strengths. Different model families excel at different types of work; some are optimized for reasoning and analysis, others for creativity and content generation, and others for speed and efficiency.
</Step>
<Step title="Consider Constraints">
Factor in your real-world operational constraints including budget limitations, latency requirements, data privacy needs, and infrastructure capabilities. The theoretically best model may not be the practically best choice for your situation.
</Step>
<Step title="Test and Iterate">
Start with reliable, well-understood models and optimize based on actual performance in your specific use case. Real-world results often differ from theoretical benchmarks, so empirical testing is crucial.
</Step>
</Steps>
## Core Selection Framework
### a. Task-First Thinking
The most critical step in LLM selection is understanding what your task actually demands. Too often, teams select models based on general reputation or benchmark scores without carefully analyzing their specific requirements. This approach leads to either over-engineering simple tasks with expensive, complex models, or under-powering sophisticated work with models that lack the necessary capabilities.
<Tabs>
<Tab title="Reasoning Complexity">
- **Simple Tasks** represent the majority of everyday AI work and include basic instruction following, straightforward data processing, and simple formatting operations. These tasks typically have clear inputs and outputs with minimal ambiguity. The cognitive load is low, and the model primarily needs to follow explicit instructions rather than engage in complex reasoning.
- **Complex Tasks** require multi-step reasoning, strategic thinking, and the ability to handle ambiguous or incomplete information. These might involve analyzing multiple data sources, developing comprehensive strategies, or solving problems that require breaking down into smaller components. The model needs to maintain context across multiple reasoning steps and often must make inferences that aren't explicitly stated.
- **Creative Tasks** demand a different type of cognitive capability focused on generating novel, engaging, and contextually appropriate content. This includes storytelling, marketing copy creation, and creative problem-solving. The model needs to understand nuance, tone, and audience while producing content that feels authentic and engaging rather than formulaic.
</Tab>
<Tab title="Output Requirements">
- **Structured Data** tasks require precision and consistency in format adherence. When working with JSON, XML, or database formats, the model must reliably produce syntactically correct output that can be programmatically processed. These tasks often have strict validation requirements and little tolerance for format errors, making reliability more important than creativity.
- **Creative Content** outputs demand a balance of technical competence and creative flair. The model needs to understand audience, tone, and brand voice while producing content that engages readers and achieves specific communication goals. Quality here is often subjective and requires models that can adapt their writing style to different contexts and purposes.
- **Technical Content** sits between structured data and creative content, requiring both precision and clarity. Documentation, code generation, and technical analysis need to be accurate and comprehensive while remaining accessible to the intended audience. The model must understand complex technical concepts and communicate them effectively.
</Tab>
<Tab title="Context Needs">
- **Short Context** scenarios involve focused, immediate tasks where the model needs to process limited information quickly. These are often transactional interactions where speed and efficiency matter more than deep understanding. The model doesn't need to maintain extensive conversation history or process large documents.
- **Long Context** requirements emerge when working with substantial documents, extended conversations, or complex multi-part tasks. The model needs to maintain coherence across thousands of tokens while referencing earlier information accurately. This capability becomes crucial for document analysis, comprehensive research, and sophisticated dialogue systems.
- **Very Long Context** scenarios push the boundaries of what's currently possible, involving massive document processing, extensive research synthesis, or complex multi-session interactions. These use cases require models specifically designed for extended context handling and often involve trade-offs between context length and processing speed.
</Tab>
</Tabs>
### b. Model Capability Mapping
Understanding model capabilities requires looking beyond marketing claims and benchmark scores to understand the fundamental strengths and limitations of different model architectures and training approaches.
<AccordionGroup>
<Accordion title="Reasoning Models" icon="brain">
Reasoning models represent a specialized category designed specifically for complex, multi-step thinking tasks. These models excel when problems require careful analysis, strategic planning, or systematic problem decomposition. They typically employ techniques like chain-of-thought reasoning or tree-of-thought processing to work through complex problems step by step.
The strength of reasoning models lies in their ability to maintain logical consistency across extended reasoning chains and to break down complex problems into manageable components. They're particularly valuable for strategic planning, complex analysis, and situations where the quality of reasoning matters more than speed of response.
However, reasoning models often come with trade-offs in terms of speed and cost. They may also be less suitable for creative tasks or simple operations where their sophisticated reasoning capabilities aren't needed. Consider these models when your tasks involve genuine complexity that benefits from systematic, step-by-step analysis.
</Accordion>
<Accordion title="General Purpose Models" icon="microchip">
General purpose models offer the most balanced approach to LLM selection, providing solid performance across a wide range of tasks without extreme specialization in any particular area. These models are trained on diverse datasets and optimized for versatility rather than peak performance in specific domains.
The primary advantage of general purpose models is their reliability and predictability across different types of work. They handle most standard business tasks competently, from research and analysis to content creation and data processing. This makes them excellent choices for teams that need consistent performance across varied workflows.
While general purpose models may not achieve the peak performance of specialized alternatives in specific domains, they offer operational simplicity and reduced complexity in model management. They're often the best starting point for new projects, allowing teams to understand their specific needs before potentially optimizing with more specialized models.
</Accordion>
<Accordion title="Fast & Efficient Models" icon="bolt">
Fast and efficient models prioritize speed, cost-effectiveness, and resource efficiency over sophisticated reasoning capabilities. These models are optimized for high-throughput scenarios where quick responses and low operational costs are more important than nuanced understanding or complex reasoning.
These models excel in scenarios involving routine operations, simple data processing, function calling, and high-volume tasks where the cognitive requirements are relatively straightforward. They're particularly valuable for applications that need to process many requests quickly or operate within tight budget constraints.
The key consideration with efficient models is ensuring that their capabilities align with your task requirements. While they can handle many routine operations effectively, they may struggle with tasks requiring nuanced understanding, complex reasoning, or sophisticated content generation. They're best used for well-defined, routine operations where speed and cost matter more than sophistication.
</Accordion>
<Accordion title="Creative Models" icon="pen">
Creative models are specifically optimized for content generation, writing quality, and creative thinking tasks. These models typically excel at understanding nuance, tone, and style while producing engaging, contextually appropriate content that feels natural and authentic.
The strength of creative models lies in their ability to adapt writing style to different audiences, maintain consistent voice and tone, and generate content that engages readers effectively. They often perform better on tasks involving storytelling, marketing copy, brand communications, and other content where creativity and engagement are primary goals.
When selecting creative models, consider not just their ability to generate text, but their understanding of audience, context, and purpose. The best creative models can adapt their output to match specific brand voices, target different audience segments, and maintain consistency across extended content pieces.
</Accordion>
<Accordion title="Open Source Models" icon="code">
Open source models offer unique advantages in terms of cost control, customization potential, data privacy, and deployment flexibility. These models can be run locally or on private infrastructure, providing complete control over data handling and model behavior.
The primary benefits of open source models include elimination of per-token costs, ability to fine-tune for specific use cases, complete data privacy, and independence from external API providers. They're particularly valuable for organizations with strict data privacy requirements, budget constraints, or specific customization needs.
However, open source models require more technical expertise to deploy and maintain effectively. Teams need to consider infrastructure costs, model management complexity, and the ongoing effort required to keep models updated and optimized. The total cost of ownership may be higher than cloud-based alternatives when factoring in technical overhead.
</Accordion>
</AccordionGroup>
## Strategic Configuration Patterns
### a. Multi-Model Approach
<Tip>
Use different models for different purposes within the same crew to optimize both performance and cost.
</Tip>
The most sophisticated CrewAI implementations often employ multiple models strategically, assigning different models to different agents based on their specific roles and requirements. This approach allows teams to optimize for both performance and cost by using the most appropriate model for each type of work.
Planning agents benefit from reasoning models that can handle complex strategic thinking and multi-step analysis. These agents often serve as the "brain" of the operation, developing strategies and coordinating other agents' work. Content agents, on the other hand, perform best with creative models that excel at writing quality and audience engagement. Processing agents handling routine operations can use efficient models that prioritize speed and cost-effectiveness.
**Example: Research and Analysis Crew**
```python
from crewai import Agent, Task, Crew, LLM
# High-capability reasoning model for strategic planning
manager_llm = LLM(model="gemini-2.5-flash-preview-05-20", temperature=0.1)
# Creative model for content generation
content_llm = LLM(model="claude-3-5-sonnet-20241022", temperature=0.7)
# Efficient model for data processing
processing_llm = LLM(model="gpt-4o-mini", temperature=0)
research_manager = Agent(
role="Research Strategy Manager",
goal="Develop comprehensive research strategies and coordinate team efforts",
backstory="Expert research strategist with deep analytical capabilities",
llm=manager_llm, # High-capability model for complex reasoning
verbose=True
)
content_writer = Agent(
role="Research Content Writer",
goal="Transform research findings into compelling, well-structured reports",
backstory="Skilled writer who excels at making complex topics accessible",
llm=content_llm, # Creative model for engaging content
verbose=True
)
data_processor = Agent(
role="Data Analysis Specialist",
goal="Extract and organize key data points from research sources",
backstory="Detail-oriented analyst focused on accuracy and efficiency",
llm=processing_llm, # Fast, cost-effective model for routine tasks
verbose=True
)
crew = Crew(
agents=[research_manager, content_writer, data_processor],
tasks=[...], # Your specific tasks
manager_llm=manager_llm, # Manager uses the reasoning model
verbose=True
)
```
The key to successful multi-model implementation is understanding how different agents interact and ensuring that model capabilities align with agent responsibilities. This requires careful planning but can result in significant improvements in both output quality and operational efficiency.
### b. Component-Specific Selection
<Tabs>
<Tab title="Manager LLM">
The manager LLM plays a crucial role in hierarchical CrewAI processes, serving as the coordination point for multiple agents and tasks. This model needs to excel at delegation, task prioritization, and maintaining context across multiple concurrent operations.
Effective manager LLMs require strong reasoning capabilities to make good delegation decisions, consistent performance to ensure predictable coordination, and excellent context management to track the state of multiple agents simultaneously. The model needs to understand the capabilities and limitations of different agents while optimizing task allocation for efficiency and quality.
Cost considerations are particularly important for manager LLMs since they're involved in every operation. The model needs to provide sufficient capability for effective coordination while remaining cost-effective for frequent use. This often means finding models that offer good reasoning capabilities without the premium pricing of the most sophisticated options.
</Tab>
<Tab title="Function Calling LLM">
Function calling LLMs handle tool usage across all agents, making them critical for crews that rely heavily on external tools and APIs. These models need to excel at understanding tool capabilities, extracting parameters accurately, and handling tool responses effectively.
The most important characteristics for function calling LLMs are precision and reliability rather than creativity or sophisticated reasoning. The model needs to consistently extract the correct parameters from natural language requests and handle tool responses appropriately. Speed is also important since tool usage often involves multiple round trips that can impact overall performance.
Many teams find that specialized function calling models or general purpose models with strong tool support work better than creative or reasoning-focused models for this role. The key is ensuring that the model can reliably bridge the gap between natural language instructions and structured tool calls.
</Tab>
<Tab title="Agent-Specific Overrides">
Individual agents can override crew-level LLM settings when their specific needs differ significantly from the general crew requirements. This capability allows for fine-tuned optimization while maintaining operational simplicity for most agents.
Consider agent-specific overrides when an agent's role requires capabilities that differ substantially from other crew members. For example, a creative writing agent might benefit from a model optimized for content generation, while a data analysis agent might perform better with a reasoning-focused model.
The challenge with agent-specific overrides is balancing optimization with operational complexity. Each additional model adds complexity to deployment, monitoring, and cost management. Teams should focus overrides on agents where the performance improvement justifies the additional complexity.
</Tab>
</Tabs>
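A sketch of how these component-level choices map onto crew configuration (the model names and roles here are illustrative):
```python
from crewai import Agent, Crew, Process, LLM

# Coordination: reasoning-capable but cost-aware manager
manager_llm = LLM(model="gpt-4o", temperature=0.1)

# Tool usage: precise, fast model shared across agents for function calling
function_calling_llm = LLM(model="gpt-4o-mini", temperature=0)

# Agent-specific override for a creative role
writer = Agent(
    role="Report Writer",
    goal="Turn findings into a polished report",
    backstory="A writer focused on clarity and tone",
    llm=LLM(model="claude-3-5-sonnet", temperature=0.7),
)

crew = Crew(
    agents=[writer],
    tasks=[...],  # your specific tasks
    process=Process.hierarchical,
    manager_llm=manager_llm,                   # drives delegation decisions
    function_calling_llm=function_calling_llm, # default model for tool calls
)
```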
## Task Definition Framework
### a. Focus on Clarity Over Complexity
Effective task definition is often more important than model selection in determining the quality of CrewAI outputs. Well-defined tasks provide clear direction and context that enable even modest models to perform well, while poorly defined tasks can cause even sophisticated models to produce unsatisfactory results.
<AccordionGroup>
<Accordion title="Effective Task Descriptions" icon="list-check">
The best task descriptions strike a balance between providing sufficient detail and maintaining clarity. They should define the specific objective clearly enough that there's no ambiguity about what success looks like, while explaining the approach or methodology in enough detail that the agent understands how to proceed.
Effective task descriptions include relevant context and constraints that help the agent understand the broader purpose and any limitations they need to work within. They break complex work into focused steps that can be executed systematically, rather than presenting overwhelming, multi-faceted objectives that are difficult to approach systematically.
Common mistakes include being too vague about objectives, failing to provide necessary context, setting unclear success criteria, or combining multiple unrelated tasks into a single description. The goal is to provide enough information for the agent to succeed while maintaining focus on a single, clear objective.
</Accordion>
<Accordion title="Expected Output Guidelines" icon="bullseye">
Expected output guidelines serve as a contract between the task definition and the agent, clearly specifying what the deliverable should look like and how it will be evaluated. These guidelines should describe both the format and structure needed, as well as the key elements that must be included for the output to be considered complete.
The best output guidelines provide concrete examples of quality indicators and define completion criteria clearly enough that both the agent and human reviewers can assess whether the task has been completed successfully. This reduces ambiguity and helps ensure consistent results across multiple task executions.
Avoid generic output descriptions that could apply to any task, missing format specifications that leave agents guessing about structure, unclear quality standards that make evaluation difficult, or failing to provide examples or templates that help agents understand expectations.
</Accordion>
</AccordionGroup>
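Putting those guidelines into practice, a well-scoped task might look like the sketch below (the topic, criteria, and `research_agent` are illustrative):
```python
from crewai import Task

market_research_task = Task(
    description=(
        "Research the current state of the EU electric-vehicle market. "
        "Focus on 2023-2024 sales trends, the top three manufacturers by share, "
        "and the main regulatory drivers. Use only reputable, citable sources."
    ),
    expected_output=(
        "A 500-word briefing with three sections (trends, manufacturers, regulation), "
        "each backed by at least two cited sources and ending with one key takeaway."
    ),
    agent=research_agent,  # assumes an agent defined elsewhere
)
```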
### b. Task Sequencing Strategy
<Tabs>
<Tab title="Sequential Dependencies">
Sequential task dependencies are essential when tasks build upon previous outputs, information flows from one task to another, or quality depends on the completion of prerequisite work. This approach ensures that each task has access to the information and context it needs to succeed.
Implementing sequential dependencies effectively requires using the context parameter to chain related tasks, building complexity gradually through task progression, and ensuring that each task produces outputs that serve as meaningful inputs for subsequent tasks. The goal is to maintain logical flow between dependent tasks while avoiding unnecessary bottlenecks.
Sequential dependencies work best when there's a clear logical progression from one task to another and when the output of one task genuinely improves the quality or feasibility of subsequent tasks. However, they can create bottlenecks if not managed carefully, so it's important to identify which dependencies are truly necessary versus those that are merely convenient.
</Tab>
<Tab title="Parallel Execution">
Parallel execution becomes valuable when tasks are independent of each other, time efficiency is important, or different expertise areas are involved that don't require coordination. This approach can significantly reduce overall execution time while allowing specialized agents to work on their areas of strength simultaneously.
Successful parallel execution requires identifying tasks that can truly run independently, grouping related but separate work streams effectively, and planning for result integration when parallel tasks need to be combined into a final deliverable. The key is ensuring that parallel tasks don't create conflicts or redundancies that reduce overall quality.
Consider parallel execution when you have multiple independent research streams, different types of analysis that don't depend on each other, or content creation tasks that can be developed simultaneously. However, be mindful of resource allocation and ensure that parallel execution doesn't overwhelm your available model capacity or budget.
</Tab>
</Tabs>
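In CrewAI terms, sequential chaining uses the task `context` parameter and parallel branches use `async_execution`; a sketch assuming `analyst` and `writer` agents defined elsewhere:
```python
from crewai import Task

# Two independent research streams run in parallel
tech_research = Task(
    description="Research technical trends in edge AI hardware",
    expected_output="Five trends as bullets, each with a source",
    agent=analyst,
    async_execution=True,
)
market_research = Task(
    description="Research market adoption of edge AI in manufacturing",
    expected_output="Five adoption signals as bullets, each with a source",
    agent=analyst,
    async_execution=True,
)

# The report depends on both streams, so it receives their outputs via context
final_report = Task(
    description="Combine the technical and market findings into one report",
    expected_output="A two-page report integrating both research streams",
    agent=writer,
    context=[tech_research, market_research],  # sequential dependency on the parallel tasks
)
```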
## Optimizing Agent Configuration for LLM Performance
### a. Role-Driven LLM Selection
<Warning>
Generic agent roles make it impossible to select the right LLM. Specific roles enable targeted model optimization.
</Warning>
The specificity of your agent roles directly determines which LLM capabilities matter most for optimal performance. This creates a strategic opportunity to match precise model strengths with agent responsibilities.
**Generic vs. Specific Role Impact on LLM Choice:**
When defining roles, think about the specific domain knowledge, working style, and decision-making frameworks that would be most valuable for the tasks the agent will handle. The more specific and contextual the role definition, the better the model can embody that role effectively.
```python
# ✅ Specific role - clear LLM requirements
specific_agent = Agent(
role="SaaS Revenue Operations Analyst", # Clear domain expertise needed
goal="Analyze recurring revenue metrics and identify growth opportunities",
backstory="Specialist in SaaS business models with deep understanding of ARR, churn, and expansion revenue",
llm=LLM(model="gpt-4o") # Reasoning model justified for complex analysis
)
```
**Role-to-Model Mapping Strategy:**
- **"Research Analyst"** → Reasoning model (GPT-4o, Claude Sonnet) for complex analysis
- **"Content Editor"** → Creative model (Claude, GPT-4o) for writing quality
- **"Data Processor"** → Efficient model (GPT-4o-mini, Gemini Flash) for structured tasks
- **"API Coordinator"** → Function-calling optimized model (GPT-4o, Claude) for tool usage
### b. Backstory as Model Context Amplifier
<Info>
Strategic backstories multiply your chosen LLM's effectiveness by providing domain-specific context that generic prompting cannot achieve.
</Info>
A well-crafted backstory transforms your LLM choice from generic capability to specialized expertise. This is especially crucial for cost optimization - a well-contextualized efficient model can outperform a premium model without proper context.
**Context-Driven Performance Example:**
```python
# Context amplifies model effectiveness
domain_expert = Agent(
role="B2B SaaS Marketing Strategist",
goal="Develop comprehensive go-to-market strategies for enterprise software",
backstory="""
You have 10+ years of experience scaling B2B SaaS companies from Series A to IPO.
You understand the nuances of enterprise sales cycles, the importance of product-market
fit in different verticals, and how to balance growth metrics with unit economics.
You've worked with companies like Salesforce, HubSpot, and emerging unicorns, giving
you perspective on both established and disruptive go-to-market strategies.
""",
llm=LLM(model="claude-3-5-sonnet", temperature=0.3) # Balanced creativity with domain knowledge
)
# This context enables Claude to perform like a domain expert
# Without it, even this model would produce generic marketing advice
```
**Backstory Elements That Enhance LLM Performance:**
- **Domain Experience**: "10+ years in enterprise SaaS sales"
- **Specific Expertise**: "Specializes in technical due diligence for Series B+ rounds"
- **Working Style**: "Prefers data-driven decisions with clear documentation"
- **Quality Standards**: "Insists on citing sources and showing analytical work"
### c. Holistic Agent-LLM Optimization
The most effective agent configurations create synergy between role specificity, backstory depth, and LLM selection. Each element reinforces the others to maximize model performance.
**Optimization Framework:**
```python
# Example: Technical Documentation Agent
tech_writer = Agent(
role="API Documentation Specialist", # Specific role for clear LLM requirements
goal="Create comprehensive, developer-friendly API documentation",
backstory="""
You're a technical writer with 8+ years documenting REST APIs, GraphQL endpoints,
and SDK integration guides. You've worked with developer tools companies and
understand what developers need: clear examples, comprehensive error handling,
and practical use cases. You prioritize accuracy and usability over marketing fluff.
""",
llm=LLM(
model="claude-3-5-sonnet", # Excellent for technical writing
temperature=0.1 # Low temperature for accuracy
),
tools=[code_analyzer_tool, api_scanner_tool],
verbose=True
)
```
**Alignment Checklist:**
- ✅ **Role Specificity**: Clear domain and responsibilities
- ✅ **LLM Match**: Model strengths align with role requirements
- ✅ **Backstory Depth**: Provides domain context the LLM can leverage
- ✅ **Tool Integration**: Tools support the agent's specialized function
- ✅ **Parameter Tuning**: Temperature and settings optimize for role needs
The key is creating agents where every configuration choice reinforces your LLM selection strategy, maximizing performance while optimizing costs.
## Practical Implementation Checklist
Rather than repeating the strategic framework, here's a tactical checklist for implementing your LLM selection decisions in CrewAI:
<Steps>
<Step title="Audit Your Current Setup" icon="clipboard-check">
**What to Review:**
- Are all agents using the same LLM by default?
- Which agents handle the most complex reasoning tasks?
- Which agents primarily do data processing or formatting?
- Are any agents heavily tool-dependent?
**Action**: Document current agent roles and identify optimization opportunities.
</Step>
<Step title="Implement Crew-Level Strategy" icon="users-gear">
**Set Your Baseline:**
```python
# Start with a reliable default for the crew
default_crew_llm = LLM(model="gpt-4o-mini") # Cost-effective baseline
crew = Crew(
agents=[...],
tasks=[...],
memory=True
)
```
**Action**: Establish your crew's default LLM before optimizing individual agents.
</Step>
<Step title="Optimize High-Impact Agents" icon="star">
**Identify and Upgrade Key Agents:**
```python
# Manager or coordination agents
manager_agent = Agent(
role="Project Manager",
llm=LLM(model="gemini-2.5-flash-preview-05-20"), # Premium for coordination
# ... rest of config
)
# Creative or customer-facing agents
content_agent = Agent(
role="Content Creator",
llm=LLM(model="claude-3-5-sonnet"), # Best for writing
# ... rest of config
)
```
**Action**: Upgrade 20% of your agents that handle 80% of the complexity.
</Step>
<Step title="Validate with Enterprise Testing" icon="test-tube">
**Once you deploy your agents to production:**
- Use [CrewAI Enterprise platform](https://app.crewai.com) to A/B test your model selections
- Run multiple iterations with real inputs to measure consistency and performance
- Compare cost vs. performance across your optimized setup
- Share results with your team for collaborative decision-making
**Action**: Replace guesswork with data-driven validation using the testing platform.
</Step>
</Steps>
### When to Use Different Model Types
<Tabs>
<Tab title="Reasoning Models">
Reasoning models become essential when tasks require genuine multi-step logical thinking, strategic planning, or high-level decision making that benefits from systematic analysis. These models excel when problems need to be broken down into components and analyzed systematically rather than handled through pattern matching or simple instruction following.
Consider reasoning models for business strategy development, complex data analysis that requires drawing insights from multiple sources, multi-step problem solving where each step depends on previous analysis, and strategic planning tasks that require considering multiple variables and their interactions.
However, reasoning models often come with higher costs and slower response times, so they're best reserved for tasks where their sophisticated capabilities provide genuine value rather than being used for simple operations that don't require complex reasoning.
</Tab>
<Tab title="Creative Models">
Creative models become valuable when content generation is the primary output and the quality, style, and engagement level of that content directly impact success. These models excel when writing quality and style matter significantly, creative ideation or brainstorming is needed, or brand voice and tone are important considerations.
Use creative models for blog post writing and article creation, marketing copy that needs to engage and persuade, creative storytelling and narrative development, and brand communications where voice and tone are crucial. These models often understand nuance and context better than general purpose alternatives.
Creative models may be less suitable for technical or analytical tasks where precision and factual accuracy are more important than engagement and style. They're best used when the creative and communicative aspects of the output are primary success factors.
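For example (model and settings are assumptions, not endorsements), a writing-focused agent might pair a creative-leaning model with a slightly higher temperature:
```python
from crewai import Agent, LLM

# Illustrative only: a content agent tuned for voice and engagement
content_writer = Agent(
    role="Brand Content Writer",
    goal="Draft engaging blog posts in the company's voice",
    backstory="You specialize in persuasive, on-brand storytelling.",
    llm=LLM(model="claude-3-5-sonnet", temperature=0.7),  # higher temperature for varied phrasing
    verbose=True
)
```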
</Tab>
<Tab title="Efficient Models">
Efficient models are ideal for high-frequency, routine operations where speed and cost optimization are priorities. These models work best when tasks have clear, well-defined parameters and don't require sophisticated reasoning or creative capabilities.
Consider efficient models for data processing and transformation tasks, simple formatting and organization operations, function calling and tool usage where precision matters more than sophistication, and high-volume operations where cost per operation is a significant factor.
The key with efficient models is ensuring that their capabilities align with task requirements. They can handle many routine operations effectively but may struggle with tasks requiring nuanced understanding, complex reasoning, or sophisticated content generation.
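A minimal sketch (model choice and settings are assumptions): a routine formatting agent on a small, low-cost model with temperature 0 for deterministic output:
```python
from crewai import Agent, LLM

# Illustrative only: a high-volume formatting agent on an efficient model
formatter = Agent(
    role="Report Formatter",
    goal="Convert raw findings into a consistent markdown summary",
    backstory="You apply the same formatting rules to every input.",
    llm=LLM(model="gpt-4o-mini", temperature=0),  # deterministic, cost-effective
    verbose=False
)
```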
</Tab>
<Tab title="Open Source Models">
Open source models become attractive when budget constraints are significant, data privacy requirements exist, customization needs are important, or local deployment is required for operational or compliance reasons.
Consider open source models for internal company tools where data privacy is paramount, privacy-sensitive applications that can't use external APIs, cost-optimized deployments where per-token pricing is prohibitive, and situations requiring custom model modifications or fine-tuning.
However, open source models require more technical expertise to deploy and maintain effectively. Consider the total cost of ownership including infrastructure, technical overhead, and ongoing maintenance when evaluating open source options.
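As a sketch of local deployment (assuming an Ollama server on its default port; adjust the model name and URL to your setup):
```python
from crewai import Agent, LLM

# Illustrative only: a locally hosted open-source model for privacy-sensitive work
local_llm = LLM(
    model="ollama/llama3.1",           # model name is an example
    base_url="http://localhost:11434"  # default Ollama endpoint (assumed)
)

internal_agent = Agent(
    role="Internal Knowledge Assistant",
    goal="Answer questions using internal documentation only",
    backstory="You operate entirely on-premises for data-privacy reasons.",
    llm=local_llm,
    verbose=True
)
```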
</Tab>
</Tabs>
## Common CrewAI Model Selection Pitfalls
<AccordionGroup>
<Accordion title="The 'One Model Fits All' Trap" icon="triangle-exclamation">
**The Problem**: Using the same LLM for all agents in a crew, regardless of their specific roles and responsibilities. This is often the default approach but rarely optimal.
**Real Example**: Using GPT-4o for both a strategic planning manager and a data extraction agent. The manager needs reasoning capabilities worth the premium cost, but the data extractor could perform just as well with GPT-4o-mini at a fraction of the price.
**CrewAI Solution**: Leverage agent-specific LLM configuration to match model capabilities with agent roles:
```python
# Strategic agent gets premium model
manager = Agent(role="Strategy Manager", llm=LLM(model="gpt-4o"))
# Processing agent gets efficient model
processor = Agent(role="Data Processor", llm=LLM(model="gpt-4o-mini"))
```
</Accordion>
<Accordion title="Ignoring Crew-Level vs Agent-Level LLM Hierarchy" icon="shuffle">
**The Problem**: Not understanding how CrewAI's LLM hierarchy works - crew LLM, manager LLM, and agent LLM settings can conflict or be poorly coordinated.
**Real Example**: Setting a crew to use Claude, but having agents configured with GPT models, creating inconsistent behavior and unnecessary model switching overhead.
**CrewAI Solution**: Plan your LLM hierarchy strategically:
```python
# Agents inherit the crew LLM unless specifically overridden
agent1 = Agent(llm=LLM(model="claude-3-5-sonnet"))  # Override for specific needs
agent2 = Agent(...)  # Uses the crew-level default

crew = Crew(
    agents=[agent1, agent2],
    tasks=[task1, task2],
    manager_llm=LLM(model="gpt-4o"),  # For crew coordination
    process=Process.hierarchical      # Required when using manager_llm
)
```
</Accordion>
<Accordion title="Function Calling Model Mismatch" icon="screwdriver-wrench">
**The Problem**: Choosing models based on general capabilities while ignoring function calling performance for tool-heavy CrewAI workflows.
**Real Example**: Selecting a creative-focused model for an agent that primarily needs to call APIs, search tools, or process structured data. The agent struggles with tool parameter extraction and reliable function calls.
**CrewAI Solution**: Prioritize function calling capabilities for tool-heavy agents:
```python
# For agents that use many tools
tool_agent = Agent(
role="API Integration Specialist",
tools=[search_tool, api_tool, data_tool],
llm=LLM(model="gpt-4o"), # Excellent function calling
    # Or: llm=LLM(model="claude-3-5-sonnet")  # Also strong with tools
)
```
</Accordion>
<Accordion title="Premature Optimization Without Testing" icon="gear">
**The Problem**: Making complex model selection decisions based on theoretical performance without validating with actual CrewAI workflows and tasks.
**Real Example**: Implementing elaborate model switching logic based on task types without testing if the performance gains justify the operational complexity.
**CrewAI Solution**: Start simple, then optimize based on real performance data:
```python
# Start with this
crew = Crew(agents=[...], tasks=[...], llm=LLM(model="gpt-4o-mini"))
# Test performance, then optimize specific agents as needed
# Use Enterprise platform testing to validate improvements
```
</Accordion>
<Accordion title="Overlooking Context and Memory Limitations" icon="brain">
**The Problem**: Not considering how model context windows interact with CrewAI's memory and context sharing between agents.
**Real Example**: Using a short-context model for agents that need to maintain conversation history across multiple task iterations, or in crews with extensive agent-to-agent communication.
**CrewAI Solution**: Match context capabilities to crew communication patterns.
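For instance (model names are illustrative), you might give the context-heavy synthesizer a long-context model while short-lived agents keep a smaller one:
```python
from crewai import Agent, LLM

# Illustrative only: the agent that accumulates the most shared context
# gets a model with a large context window
synthesizer = Agent(
    role="Research Synthesizer",
    goal="Combine findings from all prior tasks into one coherent report",
    backstory="You work with large amounts of accumulated crew context.",
    llm=LLM(model="gemini-2.5-pro")  # long-context model (example)
)
```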
</Accordion>
</AccordionGroup>
## Testing and Iteration Strategy
<Steps>
<Step title="Start Simple" icon="play">
Begin with reliable, general-purpose models that are well-understood and widely supported. This provides a stable foundation for understanding your specific requirements and performance expectations before optimizing for specialized needs.
</Step>
<Step title="Measure What Matters" icon="chart-line">
Develop metrics that align with your specific use case and business requirements rather than relying solely on general benchmarks. Focus on measuring outcomes that directly impact your success rather than theoretical performance indicators.
</Step>
<Step title="Iterate Based on Results" icon="arrows-rotate">
Make model changes based on observed performance in your specific context rather than theoretical considerations or general recommendations. Real-world performance often differs significantly from benchmark results or general reputation.
</Step>
<Step title="Consider Total Cost" icon="calculator">
Evaluate the complete cost of ownership including model costs, development time, maintenance overhead, and operational complexity. The cheapest model per token may not be the most cost-effective choice when considering all factors.
</Step>
</Steps>
<Tip>
Focus on understanding your requirements first, then select models that best match those needs. The best LLM choice is the one that consistently delivers the results you need within your operational constraints.
</Tip>
### Enterprise-Grade Model Validation
For teams serious about optimizing their LLM selection, the **CrewAI Enterprise platform** provides sophisticated testing capabilities that go far beyond basic CLI testing. The platform enables comprehensive model evaluation that helps you make data-driven decisions about your LLM strategy.
<Frame>
![Enterprise Testing Interface](/images/enterprise/enterprise-testing.png)
</Frame>
**Advanced Testing Features:**
- **Multi-Model Comparison**: Test multiple LLMs simultaneously across the same tasks and inputs. Compare GPT-4o, Claude, Llama, and models served through providers like Groq and Cerebras in parallel to identify the best fit for your specific use case.
- **Statistical Rigor**: Configure multiple iterations with consistent inputs to measure reliability and performance variance. This helps identify models that not only perform well but do so consistently across runs.
- **Real-World Validation**: Use your actual crew inputs and scenarios rather than synthetic benchmarks. The platform allows you to test with your specific industry context, company information, and real use cases for more accurate evaluation.
- **Comprehensive Analytics**: Access detailed performance metrics, execution times, and cost analysis across all tested models. This enables data-driven decision making rather than relying on general model reputation or theoretical capabilities.
- **Team Collaboration**: Share testing results and model performance data across your team, enabling collaborative decision-making and consistent model selection strategies across projects.
Go to [app.crewai.com](https://app.crewai.com) to get started!
<Info>
The Enterprise platform transforms model selection from guesswork into a data-driven process, enabling you to validate the principles in this guide with your actual use cases and requirements.
</Info>
## Key Principles Summary
<CardGroup cols={2}>
<Card title="Task-Driven Selection" icon="bullseye">
Choose models based on what the task actually requires, not theoretical capabilities or general reputation.
</Card>
<Card title="Capability Matching" icon="puzzle-piece">
Align model strengths with agent roles and responsibilities for optimal performance.
</Card>
<Card title="Strategic Consistency" icon="link">
Maintain coherent model selection strategy across related components and workflows.
</Card>
<Card title="Practical Testing" icon="flask">
Validate choices through real-world usage rather than benchmarks alone.
</Card>
<Card title="Iterative Improvement" icon="arrow-up">
Start simple and optimize based on actual performance and needs.
</Card>
<Card title="Operational Balance" icon="scale-balanced">
Balance performance requirements with cost and complexity constraints.
</Card>
</CardGroup>
<Check>
Remember: The best LLM choice is the one that consistently delivers the results you need within your operational constraints. Focus on understanding your requirements first, then select models that best match those needs.
</Check>
## Current Model Landscape (June 2025)
<Warning>
**Snapshot in Time**: The following model rankings represent current leaderboard standings as of June 2025, compiled from [LMSys Arena](https://arena.lmsys.org/), [Artificial Analysis](https://artificialanalysis.ai/), and other leading benchmarks. LLM performance, availability, and pricing change rapidly. Always conduct your own evaluations with your specific use cases and data.
</Warning>
### Leading Models by Category
The tables below show a representative sample of current top-performing models across different categories, with guidance on their suitability for CrewAI agents:
<Note>
These tables/metrics showcase selected leading models in each category and are not exhaustive. Many excellent models exist beyond those listed here. The goal is to illustrate the types of capabilities to look for rather than provide a complete catalog.
</Note>
<Tabs>
<Tab title="Reasoning & Planning">
**Best for Manager LLMs and Complex Analysis**
| Model | Intelligence Score | Cost ($/M tokens) | Speed | Best Use in CrewAI |
|:------|:------------------|:------------------|:------|:------------------|
| **o3** | 70 | $17.50 | Fast | Manager LLM for complex multi-agent coordination |
| **Gemini 2.5 Pro** | 69 | $3.44 | Fast | Strategic planning agents, research coordination |
| **DeepSeek R1** | 68 | $0.96 | Moderate | Cost-effective reasoning for budget-conscious crews |
| **Claude 4 Sonnet** | 53 | $6.00 | Fast | Analysis agents requiring nuanced understanding |
| **Qwen3 235B (Reasoning)** | 62 | $2.63 | Moderate | Open-source alternative for reasoning tasks |
These models excel at multi-step reasoning and are ideal for agents that need to develop strategies, coordinate other agents, or analyze complex information.
</Tab>
<Tab title="Coding & Technical">
**Best for Development and Tool-Heavy Workflows**
| Model | Coding Performance | Tool Use Score | Cost ($/M tokens) | Best Use in CrewAI |
|:------|:------------------|:---------------|:------------------|:------------------|
| **Claude 4 Sonnet** | Excellent | 72.7% | $6.00 | Primary coding agent, technical documentation |
| **Claude 4 Opus** | Excellent | 72.5% | $30.00 | Complex software architecture, code review |
| **DeepSeek V3** | Very Good | High | $0.48 | Cost-effective coding for routine development |
| **Qwen2.5 Coder 32B** | Very Good | Medium | $0.15 | Budget-friendly coding agent |
| **Llama 3.1 405B** | Good | 81.1% | $3.50 | Function calling LLM for tool-heavy workflows |
These models are optimized for code generation, debugging, and technical problem-solving, making them ideal for development-focused crews.
</Tab>
<Tab title="Speed & Efficiency">
**Best for High-Throughput and Real-Time Applications**
| Model | Speed (tokens/s) | Latency (TTFT) | Cost ($/M tokens) | Best Use in CrewAI |
|:------|:-----------------|:---------------|:------------------|:------------------|
| **Llama 4 Scout** | 2,600 | 0.33s | $0.27 | High-volume processing agents |
| **Gemini 2.5 Flash** | 376 | 0.30s | $0.26 | Real-time response agents |
| **DeepSeek R1 Distill** | 383 | Variable | $0.04 | Cost-optimized high-speed processing |
| **Llama 3.3 70B** | 2,500 | 0.52s | $0.60 | Balanced speed and capability |
| **Nova Micro** | High | 0.30s | $0.04 | Simple, fast task execution |
These models prioritize speed and efficiency, perfect for agents handling routine operations or requiring quick responses. **Pro tip**: Pairing these models with fast inference providers like Groq can achieve even better performance, especially for open-source models like Llama.
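As a rough illustration of that pairing (the provider-prefixed model identifier is an assumption and changes frequently, and a `GROQ_API_KEY` would be required at runtime):
```python
from crewai import Agent, LLM

# Illustrative only: an efficient open-source model served through a fast inference provider
fast_llm = LLM(model="groq/llama-3.3-70b-versatile", temperature=0)

triage_agent = Agent(
    role="Ticket Triage Assistant",
    goal="Classify incoming tickets quickly and consistently",
    backstory="You handle high volumes of short, routine requests.",
    llm=fast_llm,
    verbose=False
)
```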
</Tab>
<Tab title="Balanced Performance">
**Best All-Around Models for General Crews**
| Model | Overall Score | Versatility | Cost ($/M tokens) | Best Use in CrewAI |
|:------|:--------------|:------------|:------------------|:------------------|
| **GPT-4.1** | 53 | Excellent | $3.50 | General-purpose crew LLM |
| **Claude 3.7 Sonnet** | 48 | Very Good | $6.00 | Balanced reasoning and creativity |
| **Gemini 2.0 Flash** | 48 | Good | $0.17 | Cost-effective general use |
| **Llama 4 Maverick** | 51 | Good | $0.37 | Open-source general purpose |
| **Qwen3 32B** | 44 | Good | $1.23 | Budget-friendly versatility |
These models offer good performance across multiple dimensions, suitable for crews with diverse task requirements.
</Tab>
</Tabs>
### Selection Framework for Current Models
<AccordionGroup>
<Accordion title="High-Performance Crews" icon="rocket">
**When performance is the priority**: Use top-tier models like **o3**, **Gemini 2.5 Pro**, or **Claude 4 Sonnet** for manager LLMs and critical agents. These models excel at complex reasoning and coordination but come with higher costs.
**Strategy**: Implement a multi-model approach where premium models handle strategic thinking while efficient models handle routine operations.
</Accordion>
<Accordion title="Cost-Conscious Crews" icon="dollar-sign">
**When budget is a primary constraint**: Focus on models like **DeepSeek R1**, **Llama 4 Scout**, or **Gemini 2.0 Flash**. These provide strong performance at significantly lower costs.
**Strategy**: Use cost-effective models for most agents, reserving premium models only for the most critical decision-making roles.
</Accordion>
<Accordion title="Specialized Workflows" icon="screwdriver-wrench">
**For specific domain expertise**: Choose models optimized for your primary use case. **Claude 4** series for coding, **Gemini 2.5 Pro** for research, **Llama 405B** for function calling.
**Strategy**: Select models based on your crew's primary function, ensuring the core capability aligns with model strengths.
</Accordion>
<Accordion title="Enterprise & Privacy" icon="shield">
**For data-sensitive operations**: Consider open-source models like **Llama 4** series, **DeepSeek V3**, or **Qwen3** that can be deployed locally while maintaining competitive performance.
**Strategy**: Deploy open-source models on private infrastructure, accepting potential performance trade-offs for data control.
</Accordion>
</AccordionGroup>
### Key Considerations for Model Selection
- **Performance Trends**: The current landscape shows strong competition between reasoning-focused models (o3, Gemini 2.5 Pro) and balanced models (Claude 4, GPT-4.1). Specialized models like DeepSeek R1 offer excellent cost-performance ratios.
- **Speed vs. Intelligence Trade-offs**: Models like Llama 4 Scout prioritize speed (2,600 tokens/s) while maintaining reasonable intelligence, whereas models like o3 maximize reasoning capability at the cost of speed and price.
- **Open Source Viability**: The gap between open-source and proprietary models continues to narrow, with models like Llama 4 Maverick and DeepSeek V3 offering competitive performance at attractive price points. Fast inference providers particularly shine with open-source models, often delivering better speed-to-cost ratios than proprietary alternatives.
<Info>
**Testing is Essential**: Leaderboard rankings provide general guidance, but your specific use case, prompting style, and evaluation criteria may produce different results. Always test candidate models with your actual tasks and data before making final decisions.
</Info>
### Practical Implementation Strategy
<Steps>
<Step title="Start with Proven Models">
Begin with well-established models like **GPT-4.1**, **Claude 3.7 Sonnet**, or **Gemini 2.0 Flash** that offer good performance across multiple dimensions and have extensive real-world validation.
</Step>
<Step title="Identify Specialized Needs">
Determine if your crew has specific requirements (coding, reasoning, speed) that would benefit from specialized models like **Claude 4 Sonnet** for development or **o3** for complex analysis. For speed-critical applications, consider fast inference providers like **Groq** alongside model selection.
</Step>
<Step title="Implement Multi-Model Strategy">
Use different models for different agents based on their roles. High-capability models for managers and complex tasks, efficient models for routine operations.
</Step>
<Step title="Monitor and Optimize">
Track performance metrics relevant to your use case and be prepared to adjust model selections as new models are released or pricing changes.
</Step>
</Steps>

View File

@@ -0,0 +1,152 @@
---
title: Maxim Integration
description: Start Agent monitoring, evaluation, and observability
icon: bars-staggered
---
# Maxim Integration
Maxim AI provides comprehensive agent monitoring, evaluation, and observability for your CrewAI applications. With Maxim's one-line integration, you can easily trace and analyse agent interactions, performance metrics, and more.
## Features: One Line Integration
- **End-to-End Agent Tracing**: Monitor the complete lifecycle of your agents
- **Performance Analytics**: Track latency, tokens consumed, and costs
- **Hyperparameter Monitoring**: View the configuration details of your agent runs
- **Tool Call Tracking**: Observe when and how agents use their tools
- **Advanced Visualisation**: Understand agent trajectories through intuitive dashboards
## Getting Started
### Prerequisites
- Python version >=3.10
- A Maxim account ([sign up here](https://getmaxim.ai/))
- A CrewAI project
### Installation
Install the Maxim SDK via pip:
```bash
pip install "maxim-py>=3.6.2"
```
Or add it to your `requirements.txt`:
```
maxim-py>=3.6.2
```
### Basic Setup
### 1. Set up environment variables
```bash
# Create a `.env` file in your project root:
# Maxim API Configuration
MAXIM_API_KEY=your_api_key_here
MAXIM_LOG_REPO_ID=your_repo_id_here
```
### 2. Import the required packages
```python
from crewai import Agent, Task, Crew, Process
from maxim import Maxim
from maxim.logger.crewai import instrument_crewai
```
### 3. Initialise Maxim with your API key
```python
# Initialize Maxim and its logger
maxim = Maxim()
logger = maxim.logger()
# Instrument CrewAI with just one line
instrument_crewai(logger)
```
### 4. Create and run your CrewAI application as usual
```python
# Create your agent
researcher = Agent(
role='Senior Research Analyst',
goal='Uncover cutting-edge developments in AI',
backstory="You are an expert researcher at a tech think tank...",
verbose=True,
    llm=llm  # your configured LLM instance or model string
)
# Define the task
research_task = Task(
description="Research the latest AI advancements...",
expected_output="",
agent=researcher
)
# Configure and run the crew
crew = Crew(
agents=[researcher],
tasks=[research_task],
verbose=True
)
try:
result = crew.kickoff()
finally:
maxim.cleanup() # Ensure cleanup happens even if errors occur
```
That's it! All your CrewAI agent interactions will now be logged and available in your Maxim dashboard.
Check this Google Colab Notebook for a quick reference - [Notebook](https://colab.research.google.com/drive/1ZKIZWsmgQQ46n8TH9zLsT1negKkJA6K8?usp=sharing)
## Viewing Your Traces
After running your CrewAI application:
![Example trace in Maxim showing agent interactions](https://raw.githubusercontent.com/maximhq/maxim-docs/master/images/Screenshot2025-05-14at12.10.58PM.png)
1. Log in to your [Maxim Dashboard](https://getmaxim.ai/dashboard)
2. Navigate to your repository
3. View detailed agent traces, including:
- Agent conversations
- Tool usage patterns
- Performance metrics
- Cost analytics
## Troubleshooting
### Common Issues
- **No traces appearing**: Ensure your API key and repository ID are correct
- Ensure you've **called `instrument_crewai()`** ***before*** running your crew. This initializes logging hooks correctly.
- Set `debug=True` in your `instrument_crewai()` call to surface any internal errors:
```python
instrument_crewai(logger, debug=True)
```
- Configure your agents with `verbose=True` to capture detailed logs:
```python
agent = Agent(..., verbose=True)
```
- Double-check that `instrument_crewai()` is called **before** creating or executing agents. This might be obvious, but it's a common oversight.
### Support
If you encounter any issues:
- Check the [Maxim Documentation](https://getmaxim.ai/docs)
- Visit the [Maxim GitHub](https://github.com/maximhq)

View File

@@ -7,196 +7,818 @@ icon: key
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-CrewAI.png" alt="Portkey CrewAI Header Image" width="70%" />
[Portkey](https://portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) is a 2-line upgrade to make your CrewAI agents reliable, cost-efficient, and fast.
Portkey adds 4 core production capabilities to any CrewAI agent:
1. Routing to **200+ LLMs**
2. Making each LLM call more robust
3. Full-stack tracing with cost and performance analytics
4. Real-time guardrails to enforce behavior
## Getting Started
Portkey enhances CrewAI with production-readiness features, turning your experimental agent crews into robust systems by providing:
- **Complete observability** of every agent step, tool use, and interaction
- **Built-in reliability** with fallbacks, retries, and load balancing
- **Cost tracking and optimization** to manage your AI spend
- **Access to 200+ LLMs** through a single integration
- **Guardrails** to keep agent behavior safe and compliant
- **Version-controlled prompts** for consistent agent performance
### Installation & Setup
<Steps>
<Step title="Install CrewAI and Portkey">
```bash
pip install -qU crewai portkey-ai
```
</Step>
<Step title="Configure the LLM Client">
To build CrewAI Agents with Portkey, you'll need two keys:
- **Portkey API Key**: Sign up on the [Portkey app](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) and copy your API key
- **Virtual Key**: Virtual Keys securely manage your LLM API keys in one place. Store your LLM provider API keys securely in Portkey's vault
<Step title="Install the required packages">
```bash
pip install -U crewai portkey-ai
```
</Step>
```python
from crewai import LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
<Step title="Generate API Key" icon="lock">
Create a Portkey API key with optional budget/rate limits from the [Portkey dashboard](https://app.portkey.ai/). You can also attach configurations for reliability, caching, and more to this key. More on this later.
</Step>
gpt_llm = LLM(
model="gpt-4",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy", # We are using Virtual key
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_VIRTUAL_KEY", # Enter your Virtual key from Portkey
)
)
```
</Step>
<Step title="Create and Run Your First Agent">
```python
from crewai import Agent, Task, Crew
# Define your agents with roles and goals
coder = Agent(
role='Software developer',
goal='Write clear, concise code on demand',
backstory='An expert coder with a keen eye for software trends.',
llm=gpt_llm
)
# Create tasks for your agents
task1 = Task(
description="Define the HTML for making a simple website with heading- Hello World! Portkey is working!",
expected_output="A clear and concise HTML code",
agent=coder
)
# Instantiate your crew
crew = Crew(
agents=[coder],
tasks=[task1],
)
result = crew.kickoff()
print(result)
```
</Step>
</Steps>
## Key Features
| Feature | Description |
|:--------|:------------|
| 🌐 Multi-LLM Support | Access OpenAI, Anthropic, Gemini, Azure, and 250+ providers through a unified interface |
| 🛡️ Production Reliability | Implement retries, timeouts, load balancing, and fallbacks |
| 📊 Advanced Observability | Track 40+ metrics including costs, tokens, latency, and custom metadata |
| 🔍 Comprehensive Logging | Debug with detailed execution traces and function call logs |
| 🚧 Security Controls | Set budget limits and implement role-based access control |
| 🔄 Performance Analytics | Capture and analyze feedback for continuous improvement |
| 💾 Intelligent Caching | Reduce costs and latency with semantic or simple caching |
## Production Features with Portkey Configs
All features mentioned below are available through Portkey's Config system, which lets you define routing strategies using simple JSON objects in your LLM API calls. You can create and manage Configs directly in your code or through the Portkey Dashboard. Each Config has a unique ID for easy reference.
<Frame>
<img src="https://raw.githubusercontent.com/Portkey-AI/docs-core/refs/heads/main/images/libraries/libraries-3.avif"/>
</Frame>
### 1. Use 250+ LLMs
Access various LLMs like Anthropic, Gemini, Mistral, Azure OpenAI, and more with minimal code changes. Switch between providers or use them together seamlessly. [Learn more about Universal API](https://portkey.ai/docs/product/ai-gateway/universal-api)
Easily switch between different LLM providers:
```python
from crewai import LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

# Anthropic Configuration
anthropic_llm = LLM(
    model="claude-3-5-sonnet-latest",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",  # We are using a Virtual Key, so this is a placeholder
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY",  # You don't need a provider when using Virtual Keys
        trace_id="anthropic_agent"
    )
)

# Azure OpenAI Configuration
azure_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_AZURE_VIRTUAL_KEY",
        trace_id="azure_agent"
    )
)

# Use them in your Crew Agents like this:
@agent
def lead_market_analyst(self) -> Agent:
    return Agent(
        config=self.agents_config['lead_market_analyst'],
        verbose=True,
        memory=False,
        llm=anthropic_llm  # or azure_llm
    )
```
<Info>
**What are Virtual Keys?** Virtual keys in Portkey securely store your LLM provider API keys (OpenAI, Anthropic, etc.) in an encrypted vault. They allow for easier key rotation and budget management. [Learn more about virtual keys here](https://portkey.ai/docs/product/ai-gateway/virtual-keys).
</Info>
## Production Features
### 1. Enhanced Observability
Portkey provides comprehensive observability for your CrewAI agents, helping you understand exactly what's happening during each execution.
<Tabs>
<Tab title="Traces">
<Frame>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/refs/heads/main/CrewAI%20Product%2011.1.webp"/>
</Frame>
Traces provide a hierarchical view of your crew's execution, showing the sequence of LLM calls, tool invocations, and state transitions.
```python
# Add trace_id to enable hierarchical tracing in Portkey
portkey_llm = LLM(
model="gpt-4o",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_AZURE_VIRTUAL_KEY", #You don't need provider when using Virtual keys
trace_id="azure_agent"
virtual_key="YOUR_OPENAI_VIRTUAL_KEY",
trace_id="unique-session-id" # Add unique trace ID
)
)
```
</Tab>
<Tab title="Logs">
<Frame>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/refs/heads/main/CrewAI%20Portkey%20Docs%20Metadata.png"/>
</Frame>
Portkey logs every interaction with LLMs, including:
- Complete request and response payloads
- Latency and token usage metrics
- Cost calculations
- Tool calls and function executions
All logs can be filtered by metadata, trace IDs, models, and more, making it easy to debug specific crew runs.
</Tab>
<Tab title="Metrics & Dashboards">
<Frame>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/refs/heads/main/CrewAI%20Dashboard.png"/>
</Frame>
Portkey provides built-in dashboards that help you:
- Track cost and token usage across all crew runs
- Analyze performance metrics like latency and success rates
- Identify bottlenecks in your agent workflows
- Compare different crew configurations and LLMs
You can filter and segment all metrics by custom metadata to analyze specific crew types, user groups, or use cases.
</Tab>
<Tab title="Metadata Filtering">
<Frame>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/refs/heads/main/Metadata%20Filters%20from%20CrewAI.png" alt="Analytics with metadata filters" />
</Frame>
Add custom metadata to your CrewAI LLM configuration to enable powerful filtering and segmentation:
```python
portkey_llm = LLM(
model="gpt-4o",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_OPENAI_VIRTUAL_KEY",
metadata={
"crew_type": "research_crew",
"environment": "production",
"_user": "user_123", # Special _user field for user analytics
"request_source": "mobile_app"
}
)
)
```
This metadata can be used to filter logs, traces, and metrics on the Portkey dashboard, allowing you to analyze specific crew runs, users, or environments.
</Tab>
</Tabs>
### 2. Reliability - Keep Your Crews Running Smoothly
When running crews in production, things can go wrong - API rate limits, network issues, or provider outages. Portkey's reliability features ensure your agents keep running smoothly even when problems occur.
It's simple to enable fallbacks in your CrewAI setup by using a Portkey Config:
```python
from crewai import LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
# Create LLM with fallback configuration
portkey_llm = LLM(
model="gpt-4o",
max_tokens=1000,
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
config={
"strategy": {
"mode": "fallback"
},
"targets": [
{
"provider": "openai",
"api_key": "YOUR_OPENAI_API_KEY",
"override_params": {"model": "gpt-4o"}
},
{
"provider": "anthropic",
"api_key": "YOUR_ANTHROPIC_API_KEY",
"override_params": {"model": "claude-3-opus-20240229"}
}
]
}
)
)
# Use this LLM configuration with your agents
```
This configuration will automatically try Claude if the GPT-4o request fails, ensuring your crew can continue operating.
<CardGroup cols="2">
<Card title="Automatic Retries" icon="rotate" href="https://portkey.ai/docs/product/ai-gateway/automatic-retries">
Handles temporary failures automatically. If an LLM call fails, Portkey will retry the same request for the specified number of times - perfect for rate limits or network blips.
</Card>
<Card title="Request Timeouts" icon="clock" href="https://portkey.ai/docs/product/ai-gateway/request-timeouts">
Prevent your agents from hanging. Set timeouts to ensure you get responses (or can fail gracefully) within your required timeframes.
</Card>
<Card title="Conditional Routing" icon="route" href="https://portkey.ai/docs/product/ai-gateway/conditional-routing">
Send different requests to different providers. Route complex reasoning to GPT-4, creative tasks to Claude, and quick responses to Gemini based on your needs.
</Card>
<Card title="Fallbacks" icon="shield" href="https://portkey.ai/docs/product/ai-gateway/fallbacks">
Keep running even if your primary provider fails. Automatically switch to backup providers to maintain availability.
</Card>
<Card title="Load Balancing" icon="scale-balanced" href="https://portkey.ai/docs/product/ai-gateway/load-balancing">
Spread requests across multiple API keys or providers. Great for high-volume crew operations and staying within rate limits.
</Card>
</CardGroup>
### 3. Prompting in CrewAI
Portkey's Prompt Engineering Studio helps you create, manage, and optimize the prompts used in your CrewAI agents. Instead of hardcoding prompts or instructions, use Portkey's prompt rendering API to dynamically fetch and apply your versioned prompts.
<Frame caption="Manage prompts in Portkey's Prompt Library">
![Prompt Playground Interface](https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/refs/heads/main/CrewAI%20Portkey%20Docs.webp)
</Frame>
<Tabs>
<Tab title="Prompt Playground">
Prompt Playground is a place to compare, test and deploy perfect prompts for your AI application. It's where you experiment with different models, test variables, compare outputs, and refine your prompt engineering strategy before deploying to production. It allows you to:
1. Iteratively develop prompts before using them in your agents
2. Test prompts with different variables and models
3. Compare outputs between different prompt versions
4. Collaborate with team members on prompt development
This visual environment makes it easier to craft effective prompts for each step in your CrewAI agents' workflow.
</Tab>
<Tab title="Using Prompt Templates">
The Prompt Render API retrieves your prompt templates with all parameters configured:
```python
from crewai import Agent, LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL, Portkey
# Initialize Portkey admin client
portkey_admin = Portkey(api_key="YOUR_PORTKEY_API_KEY")
# Retrieve prompt using the render API
prompt_data = portkey_admin.prompts.render(
prompt_id="YOUR_PROMPT_ID",
variables={
"agent_role": "Senior Research Scientist",
}
)
backstory_agent = prompt_data.data.messages[0]["content"]
# Set up LLM with Portkey integration
portkey_llm = LLM(
model="gpt-4o",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_OPENAI_VIRTUAL_KEY"
)
)
# Create agent using the rendered prompt
researcher = Agent(
role="Senior Research Scientist",
goal="Discover groundbreaking insights about the assigned topic",
backstory=backstory_agent, # Use the rendered prompt
verbose=True,
llm=portkey_llm
)
```
</Tab>
<Tab title="Prompt Versioning">
You can:
- Create multiple versions of the same prompt
- Compare performance between versions
- Roll back to previous versions if needed
- Specify which version to use in your code:
```python
# Use a specific prompt version
prompt_data = portkey_admin.prompts.render(
prompt_id="YOUR_PROMPT_ID@version_number",
variables={
"agent_role": "Senior Research Scientist",
"agent_goal": "Discover groundbreaking insights"
}
)
```
</Tab>
<Tab title="Mustache Templating for variables">
Portkey prompts use Mustache-style templating for easy variable substitution:
```
You are a {{agent_role}} with expertise in {{domain}}.
Your mission is to {{agent_goal}} by leveraging your knowledge
and experience in the field.
Always maintain a {{tone}} tone and focus on providing {{focus_area}}.
```
When rendering, simply pass the variables:
```python
prompt_data = portkey_admin.prompts.render(
prompt_id="YOUR_PROMPT_ID",
variables={
"agent_role": "Senior Research Scientist",
"domain": "artificial intelligence",
"agent_goal": "discover groundbreaking insights",
"tone": "professional",
"focus_area": "practical applications"
}
)
```
</Tab>
</Tabs>
<Card title="Prompt Engineering Studio" icon="wand-magic-sparkles" href="https://portkey.ai/docs/product/prompt-library">
Learn more about Portkey's prompt management features
</Card>
### 4. Guardrails for Safe Crews
Guardrails ensure your CrewAI agents operate safely and respond appropriately in all situations.
**Why Use Guardrails?**
CrewAI agents can experience various failure modes:
- Generating harmful or inappropriate content
- Leaking sensitive information like PII
- Hallucinating incorrect information
- Generating outputs in incorrect formats
Portkey's guardrails add protections for both inputs and outputs.
**Implementing Guardrails**
```python
from crewai import Agent, LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
# Create LLM with guardrails
portkey_llm = LLM(
model="gpt-4o",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_OPENAI_VIRTUAL_KEY",
config={
"input_guardrails": ["guardrails-id-xxx", "guardrails-id-yyy"],
"output_guardrails": ["guardrails-id-zzz"]
}
)
)
# Create agent with guardrailed LLM
researcher = Agent(
role="Senior Research Scientist",
goal="Discover groundbreaking insights about the assigned topic",
backstory="You are an expert researcher with deep domain knowledge.",
verbose=True,
llm=portkey_llm
)
```
Portkey's guardrails can:
- Detect and redact PII in both inputs and outputs
- Filter harmful or inappropriate content
- Validate response formats against schemas
- Check for hallucinations against ground truth
- Apply custom business logic and rules
<Card title="Learn More About Guardrails" icon="shield-check" href="https://portkey.ai/docs/product/guardrails">
Explore Portkey's guardrail features to enhance agent safety
</Card>
### 5. User Tracking with Metadata
Track individual users through your CrewAI agents using Portkey's metadata system.
**What is Metadata in Portkey?**
Metadata allows you to associate custom data with each request, enabling filtering, segmentation, and analytics. The special `_user` field is specifically designed for user tracking.
```python
from crewai import Agent, LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
# Configure LLM with user tracking
portkey_llm = LLM(
model="gpt-4o",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_OPENAI_VIRTUAL_KEY",
metadata={
"_user": "user_123", # Special _user field for user analytics
"user_tier": "premium",
"user_company": "Acme Corp",
"session_id": "abc-123"
}
)
)
# Create agent with tracked LLM
researcher = Agent(
role="Senior Research Scientist",
goal="Discover groundbreaking insights about the assigned topic",
backstory="You are an expert researcher with deep domain knowledge.",
verbose=True,
llm=portkey_llm
)
```
**Filter Analytics by User**
With metadata in place, you can filter analytics by user and analyze performance metrics on a per-user basis:
<Frame caption="Filter analytics by user">
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/refs/heads/main/Metadata%20Filters%20from%20CrewAI.png"/>
</Frame>
This enables:
- Per-user cost tracking and budgeting
- Personalized user analytics
- Team or organization-level metrics
- Environment-specific monitoring (staging vs. production)
<Card title="Learn More About Metadata" icon="tags" href="https://portkey.ai/docs/product/observability/metadata">
Explore how to use custom metadata to enhance your analytics
</Card>
### 6. Caching for Efficient Crews
Implement caching to make your CrewAI agents more efficient and cost-effective:
<Tabs>
<Tab title="Simple Caching">
```python
from crewai import Agent, LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
# Configure LLM with simple caching
portkey_llm = LLM(
model="gpt-4o",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_OPENAI_VIRTUAL_KEY",
config={
"cache": {
"mode": "simple"
}
}
)
)
# Create agent with cached LLM
researcher = Agent(
role="Senior Research Scientist",
goal="Discover groundbreaking insights about the assigned topic",
backstory="You are an expert researcher with deep domain knowledge.",
verbose=True,
llm=portkey_llm
)
```
Simple caching performs exact matches on input prompts, caching identical requests to avoid redundant model executions.
</Tab>
<Tab title="Semantic Caching">
```python
from crewai import Agent, LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
# Configure LLM with semantic caching
portkey_llm = LLM(
model="gpt-4o",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_OPENAI_VIRTUAL_KEY",
config={
"cache": {
"mode": "semantic"
}
}
)
)
# Create agent with semantically cached LLM
researcher = Agent(
role="Senior Research Scientist",
goal="Discover groundbreaking insights about the assigned topic",
backstory="You are an expert researcher with deep domain knowledge.",
verbose=True,
llm=portkey_llm
)
```
Semantic caching considers the contextual similarity between input requests, caching responses for semantically similar inputs.
</Tab>
</Tabs>
### 7. Model Interoperability
CrewAI supports multiple LLM providers, and Portkey extends this capability by providing access to over 200 LLMs through a unified interface. You can easily switch between different models without changing your core agent logic:
```python
from crewai import Agent, LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
# Set up LLMs with different providers
openai_llm = LLM(
model="gpt-4o",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_OPENAI_VIRTUAL_KEY"
)
)
anthropic_llm = LLM(
model="claude-3-5-sonnet-latest",
max_tokens=1000,
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY"
)
)
# Choose which LLM to use for each agent based on your needs
researcher = Agent(
role="Senior Research Scientist",
goal="Discover groundbreaking insights about the assigned topic",
backstory="You are an expert researcher with deep domain knowledge.",
verbose=True,
llm=openai_llm # Use anthropic_llm for Anthropic
)
```
Portkey provides access to LLMs from providers including:
- OpenAI (GPT-4o, GPT-4 Turbo, etc.)
- Anthropic (Claude 3.5 Sonnet, Claude 3 Opus, etc.)
- Mistral AI (Mistral Large, Mistral Medium, etc.)
- Google Vertex AI (Gemini 1.5 Pro, etc.)
- Cohere (Command, Command-R, etc.)
- AWS Bedrock (Claude, Titan, etc.)
- Local/Private Models
<Card title="Supported Providers" icon="server" href="https://portkey.ai/docs/integrations/llms">
See the full list of LLM providers supported by Portkey
</Card>
## Set Up Enterprise Governance for CrewAI
**Why Enterprise Governance?**
If you are using CrewAI inside your organization, you need to consider several governance aspects:
- **Cost Management**: Controlling and tracking AI spending across teams
- **Access Control**: Managing which teams can use specific models
- **Usage Analytics**: Understanding how AI is being used across the organization
- **Security & Compliance**: Maintaining enterprise security standards
- **Reliability**: Ensuring consistent service across all users
Portkey adds a comprehensive governance layer to address these enterprise needs. Let's implement these controls step by step.
<Steps>
<Step title="Create Virtual Key">
Virtual Keys are Portkey's secure way to manage your LLM provider API keys. They provide essential controls like:
- Budget limits for API usage
- Rate limiting capabilities
- Secure API key storage
To create a virtual key:
Go to [Virtual Keys](https://app.portkey.ai/virtual-keys) in the Portkey App, create a new Virtual Key for your provider, then save and copy the virtual key ID.
<Frame>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/refs/heads/main/Virtual%20Key%20from%20Portkey%20Docs.png" width="500"/>
</Frame>
<Note>
Save your virtual key ID - you'll need it for the next step.
</Note>
</Step>
<Step title="Create Default Config">
Configs in Portkey define how your requests are routed, with features like advanced routing, fallbacks, and retries.
To create your config:
1. Go to [Configs](https://app.portkey.ai/configs) in Portkey dashboard
2. Create new config with:
```json
{
"virtual_key": "YOUR_VIRTUAL_KEY_FROM_STEP1",
"override_params": {
"model": "gpt-4o" // Your preferred model name
}
}
```
3. Save and note the Config name for the next step
<Frame>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/refs/heads/main/CrewAI%20Portkey%20Docs%20Config.png" width="500"/>
</Frame>
</Step>
<Step title="Configure Portkey API Key">
Now create a Portkey API key and attach the config you created in Step 2:
1. Go to [API Keys](https://app.portkey.ai/api-keys) in Portkey and Create new API key
2. Select your config from `Step 2`
3. Generate and save your API key
<Frame>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/refs/heads/main/CrewAI%20API%20Key.png" width="500"/>
</Frame>
</Step>
<Step title="Connect to CrewAI">
After setting up your Portkey API key with the attached config, connect it to your CrewAI agents:
```python
from crewai import Agent, LLM
from portkey_ai import PORTKEY_GATEWAY_URL
# Configure LLM with your API key
portkey_llm = LLM(
model="gpt-4o",
base_url=PORTKEY_GATEWAY_URL,
api_key="YOUR_PORTKEY_API_KEY"
)
# Create agent with Portkey-enabled LLM
researcher = Agent(
role="Senior Research Scientist",
goal="Discover groundbreaking insights about the assigned topic",
backstory="You are an expert researcher with deep domain knowledge.",
verbose=True,
llm=portkey_llm
)
```
</Step>
</Steps>
<AccordionGroup>
<Accordion title="Step 1: Implement Budget Controls & Rate Limits">
### Step 1: Implement Budget Controls & Rate Limits
Virtual Keys enable granular control over LLM access at the team/department level. This helps you:
- Set up [budget limits](https://portkey.ai/docs/product/ai-gateway/virtual-keys/budget-limits)
- Prevent unexpected usage spikes using Rate limits
- Track departmental spending
#### Setting Up Department-Specific Controls:
1. Navigate to [Virtual Keys](https://app.portkey.ai/virtual-keys) in Portkey dashboard
2. Create new Virtual Key for each department with budget limits and rate limits
3. Configure department-specific limits
<Frame>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/refs/heads/main/Virtual%20Key%20from%20Portkey%20Docs.png" width="500"/>
</Frame>
</Accordion>
<Accordion title="Step 2: Define Model Access Rules">
### Step 2: Define Model Access Rules
As your AI usage scales, controlling which teams can access specific models becomes crucial. Portkey Configs provide this control layer with features like:
#### Access Control Features:
- **Model Restrictions**: Limit access to specific models
- **Data Protection**: Implement guardrails for sensitive data
- **Reliability Controls**: Add fallbacks and retry logic
#### Example Configuration:
Here's a basic configuration to route requests to OpenAI, specifically using GPT-4o:
```json
{
"strategy": {
"mode": "single"
},
"targets": [
{
"virtual_key": "YOUR_OPENAI_VIRTUAL_KEY",
"override_params": {
"model": "gpt-4o"
}
}
]
}
```
Create your config on the [Configs page](https://app.portkey.ai/configs) in your Portkey dashboard. For detailed information on creating and managing Configs, see the [Portkey documentation](https://docs.portkey.ai/product/ai-gateway/configs).
<Note>
Configs can be updated anytime to adjust controls without affecting running applications.
</Note>
</Accordion>
<Accordion title="Step 3: Implement Access Controls">
### Step 3: Implement Access Controls
Create User-specific API keys that automatically:
- Track usage per user/team with the help of virtual keys
- Apply appropriate configs to route requests
- Collect relevant metadata to filter logs
- Enforce access permissions
Create API keys through:
- [Portkey App](https://app.portkey.ai/)
- [API Key Management API](/api-reference/admin-api/control-plane/api-keys/create-api-key)
Example using Python SDK:
```python
from portkey_ai import Portkey
portkey = Portkey(api_key="YOUR_ADMIN_API_KEY")
api_key = portkey.api_keys.create(
name="engineering-team",
type="organisation",
workspace_id="YOUR_WORKSPACE_ID",
defaults={
"config_id": "your-config-id",
"metadata": {
"environment": "production",
"department": "engineering"
}
},
scopes=["logs.view", "configs.read"]
)
```
<img src="https://github.com/siddharthsambharia-portkey/Portkey-Product-Images/blob/main/Portkey-Dashboard.png?raw=true" width="70%" alt="Portkey Dashboard" />
For detailed key management instructions, see our [API Keys documentation](/api-reference/admin-api/control-plane/api-keys/create-api-key).
</Accordion>
<Accordion title="Step 4: Deploy & Monitor">
### Step 4: Deploy & Monitor
After distributing API keys to your team members, your enterprise-ready CrewAI setup is ready to go. Each team member can now use their designated API keys with appropriate access levels and budget controls.
Monitor usage in Portkey dashboard:
- Cost tracking by department
- Model usage patterns
- Request volumes
- Error rates
</Accordion>
</AccordionGroup>
<Note>
### Enterprise Features Now Available
**Your CrewAI integration now has:**
- Departmental budget controls
- Model access governance
- Usage tracking & attribution
- Security guardrails
- Reliability features
</Note>
## Frequently Asked Questions
<AccordionGroup>
<Accordion title="How does Portkey enhance CrewAI?">
Portkey adds production-readiness to CrewAI through comprehensive observability (traces, logs, metrics), reliability features (fallbacks, retries, caching), and access to 200+ LLMs through a unified interface. This makes it easier to debug, optimize, and scale your agent applications.
</Accordion>
<Accordion title="Can I use Portkey with existing CrewAI applications?">
Yes! Portkey integrates seamlessly with existing CrewAI applications. You just need to update your LLM configuration code with the Portkey-enabled version. The rest of your agent and crew code remains unchanged.
</Accordion>
<Accordion title="Does Portkey work with all CrewAI features?">
Portkey supports all CrewAI features, including agents, tools, human-in-the-loop workflows, and all task process types (sequential, hierarchical, etc.). It adds observability and reliability without limiting any of the framework's functionality.
</Accordion>
<Accordion title="Can I track usage across multiple agents in a crew?">
Yes, Portkey allows you to use a consistent `trace_id` across multiple agents in a crew to track the entire workflow. This is especially useful for complex crews where you want to understand the full execution path across multiple agents.
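For example (the key values here are placeholders), you can build every agent's LLM with the same `trace_id` so the whole crew run shows up as one trace:
```python
from crewai import LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

CREW_TRACE_ID = "research-crew-run-42"  # any stable ID for this crew execution

def portkey_llm(model: str, virtual_key: str) -> LLM:
    # Every LLM built here shares the same trace_id, grouping the crew's calls in one trace
    return LLM(
        model=model,
        base_url=PORTKEY_GATEWAY_URL,
        api_key="dummy",  # Virtual Key is used instead
        extra_headers=createHeaders(
            api_key="YOUR_PORTKEY_API_KEY",
            virtual_key=virtual_key,
            trace_id=CREW_TRACE_ID,
        ),
    )

researcher_llm = portkey_llm("gpt-4o", "YOUR_OPENAI_VIRTUAL_KEY")
writer_llm = portkey_llm("claude-3-5-sonnet-latest", "YOUR_ANTHROPIC_VIRTUAL_KEY")
```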
</Accordion>
<Accordion title="How do I filter logs and traces for specific crew runs?">
Portkey allows you to add custom metadata to your LLM configuration, which you can then use for filtering. Add fields like `crew_name`, `crew_type`, or `session_id` to easily find and analyze specific crew executions.
</Accordion>
<Accordion title="Can I use my own API keys with Portkey?">
Yes! Portkey uses your own API keys for the various LLM providers. It securely stores them as virtual keys, allowing you to easily manage and rotate keys without changing your code.
</Accordion>
</AccordionGroup>
## Resources
- [📘 Portkey Documentation](https://docs.portkey.ai)
- [📊 Portkey Dashboard](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai)
- [🐦 Twitter](https://twitter.com/portkeyai)
- [💬 Discord Community](https://discord.gg/DD7vgKK299)
<CardGroup cols="3">
<Card title="CrewAI Docs" icon="book" href="https://docs.crewai.com/">
<p>Official CrewAI documentation</p>
</Card>
<Card title="Book a Demo" icon="calendar" href="https://calendly.com/portkey-ai">
<p>Get personalized guidance on implementing this integration</p>
</Card>
</CardGroup>

View File

@@ -1,9 +1,9 @@
[project]
name = "crewai"
version = "0.121.1"
version = "0.126.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<3.13"
requires-python = ">=3.10,<3.14"
authors = [
{ name = "Joao Moura", email = "joao@crewai.com" }
]
@@ -11,7 +11,7 @@ dependencies = [
# Core Dependencies
"pydantic>=2.4.2",
"openai>=1.13.3",
"litellm==1.68.0",
"litellm==1.72.0",
"instructor>=1.3.3",
# Text Processing
"pdfplumber>=0.11.4",
@@ -22,6 +22,8 @@ dependencies = [
"opentelemetry-exporter-otlp-proto-http>=1.30.0",
# Data Handling
"chromadb>=0.5.23",
"tokenizers>=0.20.3",
"onnxruntime==1.22.0",
"openpyxl>=3.1.5",
"pyvis>=0.3.2",
# Authentication and Security
@@ -45,12 +47,11 @@ Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = ["crewai-tools~=0.45.0"]
tools = ["crewai-tools~=0.46.0"]
embeddings = [
"tiktoken~=0.7.0"
"tiktoken~=0.8.0"
]
agentops = ["agentops>=0.3.0"]
fastembed = ["fastembed>=0.4.1"]
pdfplumber = [
"pdfplumber>=0.11.4",
]
@@ -100,6 +101,27 @@ exclude = ["cli/templates"]
[tool.bandit]
exclude_dirs = ["src/crewai/cli/templates"]
# PyTorch index configuration, since torch 2.5.0 is not compatible with python 3.13
[[tool.uv.index]]
name = "pytorch-nightly"
url = "https://download.pytorch.org/whl/nightly/cpu"
explicit = true
[[tool.uv.index]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
[tool.uv.sources]
torch = [
{ index = "pytorch-nightly", marker = "python_version >= '3.13'" },
{ index = "pytorch", marker = "python_version < '3.13'" },
]
torchvision = [
{ index = "pytorch-nightly", marker = "python_version >= '3.13'" },
{ index = "pytorch", marker = "python_version < '3.13'" },
]
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

View File

@@ -18,7 +18,7 @@ warnings.filterwarnings(
category=UserWarning,
module="pydantic.main",
)
__version__ = "0.121.1"
__version__ = "0.126.0"
__all__ = [
"Agent",
"Crew",

View File

@@ -200,6 +200,7 @@ class Agent(BaseAgent):
collection_name=self.role,
storage=self.knowledge_storage or None,
)
self.knowledge.add_sources()
except (TypeError, ValueError) as e:
raise ValueError(f"Invalid Knowledge Configuration: {str(e)}")
@@ -243,21 +244,28 @@ class Agent(BaseAgent):
"""
if self.reasoning:
try:
from crewai.utilities.reasoning_handler import AgentReasoning, AgentReasoningOutput
from crewai.utilities.reasoning_handler import (
AgentReasoning,
AgentReasoningOutput,
)
reasoning_handler = AgentReasoning(task=task, agent=self)
reasoning_output: AgentReasoningOutput = reasoning_handler.handle_agent_reasoning()
reasoning_output: AgentReasoningOutput = (
reasoning_handler.handle_agent_reasoning()
)
# Add the reasoning plan to the task description
task.description += f"\n\nReasoning Plan:\n{reasoning_output.plan.plan}"
except Exception as e:
if hasattr(self, '_logger'):
self._logger.log("error", f"Error during reasoning process: {str(e)}")
if hasattr(self, "_logger"):
self._logger.log(
"error", f"Error during reasoning process: {str(e)}"
)
else:
print(f"Error during reasoning process: {str(e)}")
self._inject_date_to_task(task)
if self.tools_handler:
self.tools_handler.last_used_tool = {} # type: ignore # Incompatible types in assignment (expression has type "dict[Never, Never]", variable has type "ToolCalling")
@@ -622,22 +630,33 @@ class Agent(BaseAgent):
"""Inject the current date into the task description if inject_date is enabled."""
if self.inject_date:
from datetime import datetime
try:
valid_format_codes = ['%Y', '%m', '%d', '%H', '%M', '%S', '%B', '%b', '%A', '%a']
valid_format_codes = [
"%Y",
"%m",
"%d",
"%H",
"%M",
"%S",
"%B",
"%b",
"%A",
"%a",
]
is_valid = any(code in self.date_format for code in valid_format_codes)
if not is_valid:
raise ValueError(f"Invalid date format: {self.date_format}")
current_date: str = datetime.now().strftime(self.date_format)
task.description += f"\n\nCurrent Date: {current_date}"
except Exception as e:
if hasattr(self, '_logger'):
if hasattr(self, "_logger"):
self._logger.log("warning", f"Failed to inject date: {str(e)}")
else:
print(f"Warning: Failed to inject date: {str(e)}")
def _validate_docker_installation(self) -> None:
"""Check if Docker is installed and running."""
if not shutil.which("docker"):

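The `_inject_date_to_task` hunk above now validates `date_format` against common `strftime` codes before appending the current date to the task description. A minimal sketch of an agent opting into this behavior (role, goal, and backstory are placeholders):
```python
from crewai import Agent

scheduler = Agent(
    role="Scheduler",
    goal="Plan the week's meetings",
    backstory="Keeps the calendar tidy.",
    inject_date=True,         # appends "Current Date: ..." to each task description
    date_format="%Y-%m-%d",   # must contain at least one valid strftime code such as %Y
)
```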
View File

@@ -16,6 +16,7 @@ from .deploy.main import DeployCommand
from .evaluate_crew import evaluate_crew
from .install_crew import install_crew
from .kickoff_flow import kickoff_flow
from .organization.main import OrganizationCommand
from .plot_flow import plot_flow
from .replay_from_task import replay_task_command
from .reset_memories_command import reset_memories_command
@@ -353,5 +354,33 @@ def chat():
run_chat()
@crewai.group(invoke_without_command=True)
def org():
"""Organization management commands."""
pass
@org.command()
def list():
"""List available organizations."""
org_command = OrganizationCommand()
org_command.list()
@org.command()
@click.argument("id")
def switch(id):
"""Switch to a specific organization."""
org_command = OrganizationCommand()
org_command.switch(id)
@org.command()
def current():
"""Show current organization when 'crewai org' is called without subcommands."""
org_command = OrganizationCommand()
org_command.current()
if __name__ == "__main__":
crewai()

View File

@@ -14,6 +14,12 @@ class Settings(BaseModel):
tool_repository_password: Optional[str] = Field(
None, description="Password for interacting with the Tool Repository"
)
org_name: Optional[str] = Field(
None, description="Name of the currently active organization"
)
org_uuid: Optional[str] = Field(
None, description="UUID of the currently active organization"
)
config_path: Path = Field(default=DEFAULT_CONFIG_PATH, exclude=True)
def __init__(self, config_path: Path = DEFAULT_CONFIG_PATH, **data):

View File

@@ -0,0 +1 @@

View File

@@ -0,0 +1,76 @@
from rich.console import Console
from rich.table import Table
from requests import HTTPError
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli.config import Settings
console = Console()
class OrganizationCommand(BaseCommand, PlusAPIMixin):
def __init__(self):
BaseCommand.__init__(self)
PlusAPIMixin.__init__(self, telemetry=self._telemetry)
def list(self):
try:
response = self.plus_api_client.get_organizations()
response.raise_for_status()
orgs = response.json()
if not orgs:
console.print("You don't belong to any organizations yet.", style="yellow")
return
table = Table(title="Your Organizations")
table.add_column("Name", style="cyan")
table.add_column("ID", style="green")
for org in orgs:
table.add_row(org["name"], org["uuid"])
console.print(table)
except HTTPError as e:
if e.response.status_code == 401:
console.print("You are not logged in to any organization. Use 'crewai login' to login.", style="bold red")
return
console.print(f"Failed to retrieve organization list: {str(e)}", style="bold red")
raise SystemExit(1)
except Exception as e:
console.print(f"Failed to retrieve organization list: {str(e)}", style="bold red")
raise SystemExit(1)
def switch(self, org_id):
try:
response = self.plus_api_client.get_organizations()
response.raise_for_status()
orgs = response.json()
org = next((o for o in orgs if o["uuid"] == org_id), None)
if not org:
console.print(f"Organization with id '{org_id}' not found.", style="bold red")
return
settings = Settings()
settings.org_name = org["name"]
settings.org_uuid = org["uuid"]
settings.dump()
console.print(f"Successfully switched to {org['name']} ({org['uuid']})", style="bold green")
except HTTPError as e:
if e.response.status_code == 401:
console.print("You are not logged in to any organization. Use 'crewai login' to login.", style="bold red")
return
console.print(f"Failed to retrieve organization list: {str(e)}", style="bold red")
raise SystemExit(1)
except Exception as e:
console.print(f"Failed to switch organization: {str(e)}", style="bold red")
raise SystemExit(1)
def current(self):
settings = Settings()
if settings.org_uuid:
console.print(f"Currently logged in to organization {settings.org_name} ({settings.org_uuid})", style="bold green")
else:
console.print("You're not currently logged in to any organization.", style="yellow")
console.print("Use 'crewai org list' to see available organizations.", style="yellow")
console.print("Use 'crewai org switch <id>' to switch to an organization.", style="yellow")

View File

@@ -1,9 +1,10 @@
from os import getenv
from typing import Optional
from typing import List, Optional
from urllib.parse import urljoin
import requests
from crewai.cli.config import Settings
from crewai.cli.version import get_crewai_version
@@ -13,6 +14,7 @@ class PlusAPI:
"""
TOOLS_RESOURCE = "/crewai_plus/api/v1/tools"
ORGANIZATIONS_RESOURCE = "/crewai_plus/api/v1/me/organizations"
CREWS_RESOURCE = "/crewai_plus/api/v1/crews"
AGENTS_RESOURCE = "/crewai_plus/api/v1/agents"
@@ -24,6 +26,9 @@ class PlusAPI:
"User-Agent": f"CrewAI-CLI/{get_crewai_version()}",
"X-Crewai-Version": get_crewai_version(),
}
settings = Settings()
if settings.org_uuid:
self.headers["X-Crewai-Organization-Id"] = settings.org_uuid
self.base_url = getenv("CREWAI_BASE_URL", "https://app.crewai.com")
def _make_request(self, method: str, endpoint: str, **kwargs) -> requests.Response:
@@ -48,6 +53,7 @@ class PlusAPI:
version: str,
description: Optional[str],
encoded_file: str,
available_exports: Optional[List[str]] = None,
):
params = {
"handle": handle,
@@ -55,6 +61,7 @@ class PlusAPI:
"version": version,
"file": encoded_file,
"description": description,
"available_exports": available_exports,
}
return self._make_request("POST", f"{self.TOOLS_RESOURCE}", json=params)
@@ -101,3 +108,7 @@ class PlusAPI:
def create_crew(self, payload) -> requests.Response:
return self._make_request("POST", self.CREWS_RESOURCE, json=payload)
def get_organizations(self) -> requests.Response:
return self._make_request("GET", self.ORGANIZATIONS_RESOURCE)

View File

@@ -4,7 +4,7 @@ Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.co
## Installation
Ensure you have Python >=3.10 <3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <3.14 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install uv:

View File

@@ -3,9 +3,9 @@ name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]>=0.121.1,<1.0.0"
"crewai[tools]>=0.126.0,<1.0.0"
]
[project.scripts]

View File

@@ -4,7 +4,7 @@ Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.co
## Installation
Ensure you have Python >=3.10 <3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <3.14 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install uv:

View File

@@ -3,9 +3,9 @@ name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]>=0.121.1,<1.0.0",
"crewai[tools]>=0.126.0,<1.0.0",
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ custom tools to power up your crews.
## Installing
Ensure you have Python >=3.10 <3.13 installed on your system. This project
Ensure you have Python >=3.10 <3.14 installed on your system. This project
uses [UV](https://docs.astral.sh/uv/) for dependency management and package
handling, offering a seamless setup and execution experience.

View File

@@ -3,9 +3,9 @@ name = "{{folder_name}}"
version = "0.1.0"
description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.13"
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]>=0.121.1"
"crewai[tools]>=0.126.0"
]
[tool.crewai]

View File

@@ -0,0 +1,3 @@
from .tool import {{class_name}}
__all__ = ["{{class_name}}"]

View File

@@ -3,6 +3,7 @@ import os
import subprocess
import tempfile
from pathlib import Path
from typing import Any
import click
from rich.console import Console
@@ -11,6 +12,7 @@ from crewai.cli import git
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli.config import Settings
from crewai.cli.utils import (
extract_available_exports,
get_project_description,
get_project_name,
get_project_version,
@@ -82,6 +84,15 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
project_description = get_project_description(require=False)
encoded_tarball = None
console.print("[bold blue]Discovering tools from your project...[/bold blue]")
available_exports = extract_available_exports()
if available_exports:
console.print(
f"[green]Found these tools to publish: {', '.join([e['name'] for e in available_exports])}[/green]"
)
self._print_current_organization()
with tempfile.TemporaryDirectory() as temp_build_dir:
subprocess.run(
["uv", "build", "--sdist", "--out-dir", temp_build_dir],
@@ -105,12 +116,14 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
encoded_tarball = base64.b64encode(tarball_contents).decode("utf-8")
console.print("[bold blue]Publishing tool to repository...[/bold blue]")
publish_response = self.plus_api_client.publish_tool(
handle=project_name,
is_public=is_public,
version=project_version,
description=project_description,
encoded_file=f"data:application/x-gzip;base64,{encoded_tarball}",
available_exports=available_exports,
)
self._validate_response(publish_response)
@@ -124,6 +137,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
)
def install(self, handle: str):
self._print_current_organization()
get_response = self.plus_api_client.get_tool(handle)
if get_response.status_code == 404:
@@ -161,13 +175,20 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
settings.tool_repository_password = login_response_json["credential"][
"password"
]
settings.org_uuid = login_response_json["current_organization"][
"uuid"
]
settings.org_name = login_response_json["current_organization"][
"name"
]
settings.dump()
console.print(
"Successfully authenticated to the tool repository.", style="bold green"
f"Successfully authenticated to the tool repository as {settings.org_name} ({settings.org_uuid}).", style="bold green"
)
def _add_package(self, tool_details):
def _add_package(self, tool_details: dict[str, Any]):
is_from_pypi = tool_details.get("source", None) == "pypi"
tool_handle = tool_details["handle"]
repository_handle = tool_details["repository"]["handle"]
repository_url = tool_details["repository"]["url"]
@@ -176,10 +197,13 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
add_package_command = [
"uv",
"add",
"--index",
index,
tool_handle,
]
if is_from_pypi:
add_package_command.append(tool_handle)
else:
add_package_command.extend(["--index", index, tool_handle])
add_package_result = subprocess.run(
add_package_command,
capture_output=False,
@@ -218,3 +242,10 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
)
return env
def _print_current_organization(self):
settings = Settings()
if settings.org_uuid:
console.print(f"Current organization: {settings.org_name} ({settings.org_uuid})", style="bold blue")
else:
console.print("No organization currently set. We recommend setting one before using: `crewai org switch <org_id>` command.", style="yellow")

View File

@@ -1,8 +1,10 @@
import importlib.util
import os
import shutil
import sys
from functools import reduce
from inspect import isfunction, ismethod
from inspect import getmro, isclass, isfunction, ismethod
from pathlib import Path
from typing import Any, Dict, List, get_type_hints
import click
@@ -339,3 +341,112 @@ def fetch_crews(module_attr) -> list[Crew]:
if crew_instance := get_crew_instance(attr):
crew_instances.append(crew_instance)
return crew_instances
def is_valid_tool(obj):
from crewai.tools.base_tool import Tool
if isclass(obj):
try:
return any(base.__name__ == "BaseTool" for base in getmro(obj))
except (TypeError, AttributeError):
return False
return isinstance(obj, Tool)
def extract_available_exports(dir_path: str = "src"):
"""
Extract available tool classes from the project's __init__.py files.
Only includes classes that inherit from BaseTool or functions decorated with @tool.
Returns:
list: A list of valid tool class names or ["BaseTool"] if none found
"""
try:
init_files = Path(dir_path).glob("**/__init__.py")
available_exports = []
for init_file in init_files:
tools = _load_tools_from_init(init_file)
available_exports.extend(tools)
if not available_exports:
_print_no_tools_warning()
raise SystemExit(1)
return available_exports
except Exception as e:
console.print(f"[red]Error: Could not extract tool classes: {str(e)}[/red]")
console.print(
"Please ensure your project contains valid tools (classes inheriting from BaseTool or functions with @tool decorator)."
)
raise SystemExit(1)
def _load_tools_from_init(init_file: Path) -> list[dict[str, Any]]:
"""
Load and validate tools from a given __init__.py file.
"""
spec = importlib.util.spec_from_file_location("temp_module", init_file)
if not spec or not spec.loader:
return []
module = importlib.util.module_from_spec(spec)
sys.modules["temp_module"] = module
try:
spec.loader.exec_module(module)
if not hasattr(module, "__all__"):
console.print(
f"[bold yellow]Warning: No __all__ defined in {init_file}[/bold yellow]"
)
raise SystemExit(1)
return [
{
"name": name,
}
for name in module.__all__
if hasattr(module, name) and is_valid_tool(getattr(module, name))
]
except Exception as e:
console.print(f"[red]Warning: Could not load {init_file}: {str(e)}[/red]")
raise SystemExit(1)
finally:
sys.modules.pop("temp_module", None)
def _print_no_tools_warning():
"""
Display warning and usage instructions if no tools were found.
"""
console.print(
"\n[bold yellow]Warning: No valid tools were exposed in your __init__.py file![/bold yellow]"
)
console.print(
"Your __init__.py file must contain all classes that inherit from [bold]BaseTool[/bold] "
"or functions decorated with [bold]@tool[/bold]."
)
console.print(
"\nExample:\n[dim]# In your __init__.py file[/dim]\n"
"[green]__all__ = ['YourTool', 'your_tool_function'][/green]\n\n"
"[dim]# In your tool.py file[/dim]\n"
"[green]from crewai.tools import BaseTool, tool\n\n"
"# Tool class example\n"
"class YourTool(BaseTool):\n"
' name = "your_tool"\n'
' description = "Your tool description"\n'
" # ... rest of implementation\n\n"
"# Decorated function example\n"
"@tool\n"
"def your_tool_function(text: str) -> str:\n"
' """Your tool description"""\n'
" # ... implementation\n"
" return result\n"
)

View File

@@ -655,8 +655,6 @@ class Crew(FlowTrackable, BaseModel):
if self.planning:
self._handle_crew_planning()
metrics: List[UsageMetrics] = []
if self.process == Process.sequential:
result = self._run_sequential_process()
elif self.process == Process.hierarchical:
@@ -669,11 +667,8 @@ class Crew(FlowTrackable, BaseModel):
for after_callback in self.after_kickoff_callbacks:
result = after_callback(result)
metrics += [agent._token_process.get_summary() for agent in self.agents]
self.usage_metrics = self.calculate_usage_metrics()
self.usage_metrics = UsageMetrics()
for metric in metrics:
self.usage_metrics.add_usage_metrics(metric)
return result
except Exception as e:
crewai_event_bus.emit(

View File

@@ -17,7 +17,7 @@ Example
import ast
import inspect
from typing import Any, Dict, List, Optional, Tuple, Union
from typing import Any, Dict, List, Tuple, Union
from .utils import (
build_ancestor_dict,
@@ -140,7 +140,7 @@ def compute_positions(
flow: Any,
node_levels: Dict[str, int],
y_spacing: float = 150,
x_spacing: float = 150
x_spacing: float = 300
) -> Dict[str, Tuple[float, float]]:
"""
Compute the (x, y) positions for each node in the flow graph.
@@ -154,7 +154,7 @@ def compute_positions(
y_spacing : float, optional
Vertical spacing between levels, by default 150.
x_spacing : float, optional
Horizontal spacing between nodes, by default 150.
Horizontal spacing between nodes, by default 300.
Returns
-------

View File

@@ -1,93 +0,0 @@
from pathlib import Path
from typing import List, Optional, Union
import numpy as np
from .base_embedder import BaseEmbedder
try:
from fastembed_gpu import TextEmbedding # type: ignore
FASTEMBED_AVAILABLE = True
except ImportError:
try:
from fastembed import TextEmbedding
FASTEMBED_AVAILABLE = True
except ImportError:
FASTEMBED_AVAILABLE = False
class FastEmbed(BaseEmbedder):
"""
A wrapper class for text embedding models using FastEmbed
"""
def __init__(
self,
model_name: str = "BAAI/bge-small-en-v1.5",
cache_dir: Optional[Union[str, Path]] = None,
):
"""
Initialize the embedding model
Args:
model_name: Name of the model to use
cache_dir: Directory to cache the model
gpu: Whether to use GPU acceleration
"""
if not FASTEMBED_AVAILABLE:
raise ImportError(
"FastEmbed is not installed. Please install it with: "
"uv pip install fastembed or uv pip install fastembed-gpu for GPU support"
)
self.model = TextEmbedding(
model_name=model_name,
cache_dir=str(cache_dir) if cache_dir else None,
)
def embed_chunks(self, chunks: List[str]) -> List[np.ndarray]:
"""
Generate embeddings for a list of text chunks
Args:
chunks: List of text chunks to embed
Returns:
List of embeddings
"""
embeddings = list(self.model.embed(chunks))
return embeddings
def embed_texts(self, texts: List[str]) -> List[np.ndarray]:
"""
Generate embeddings for a list of texts
Args:
texts: List of texts to embed
Returns:
List of embeddings
"""
embeddings = list(self.model.embed(texts))
return embeddings
def embed_text(self, text: str) -> np.ndarray:
"""
Generate embedding for a single text
Args:
text: Text to embed
Returns:
Embedding array
"""
return self.embed_texts([text])[0]
@property
def dimension(self) -> int:
"""Get the dimension of the embeddings"""
# Generate a test embedding to get dimensions
test_embed = self.embed_text("test")
return len(test_embed)

View File

@@ -1 +1,7 @@
from .base_tool import BaseTool, tool
from .base_tool import BaseTool, tool, EnvVar
__all__ = [
"BaseTool",
"tool",
"EnvVar",
]

View File

@@ -1,7 +1,7 @@
import asyncio
from abc import ABC, abstractmethod
from inspect import signature
from typing import Any, Callable, Type, get_args, get_origin
from typing import Any, Callable, Type, get_args, get_origin, Optional, List
from pydantic import (
BaseModel,
@@ -14,6 +14,11 @@ from pydantic import BaseModel as PydanticBaseModel
from crewai.tools.structured_tool import CrewStructuredTool
class EnvVar(BaseModel):
name: str
description: str
required: bool = True
default: Optional[str] = None
class BaseTool(BaseModel, ABC):
class _ArgsSchemaPlaceholder(PydanticBaseModel):
@@ -25,6 +30,8 @@ class BaseTool(BaseModel, ABC):
"""The unique name of the tool that clearly communicates its purpose."""
description: str
"""Used to tell the model how/when/why to use the tool."""
env_vars: List[EnvVar] = []
"""List of environment variables used by the tool."""
args_schema: Type[PydanticBaseModel] = Field(
default_factory=_ArgsSchemaPlaceholder, validate_default=True
)

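With `EnvVar` added to `base_tool.py` (and re-exported from `crewai.tools` in the `__init__.py` hunk above), a custom tool can declare the environment variables it depends on. A minimal sketch; `WeatherTool` and its variable name are hypothetical:
```python
from crewai.tools import BaseTool, EnvVar

class WeatherTool(BaseTool):
    name: str = "weather_lookup"
    description: str = "Look up the current weather for a city."
    env_vars: list[EnvVar] = [
        EnvVar(
            name="WEATHER_API_KEY",
            description="API key for the weather provider",
            required=True,
        ),
    ]

    def _run(self, city: str) -> str:
        # Hypothetical implementation, for illustration only
        return f"Weather in {city}: sunny"
```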
View File

@@ -1,5 +1,7 @@
from __future__ import annotations
import asyncio
import inspect
import textwrap
from typing import Any, Callable, Optional, Union, get_type_hints
@@ -239,7 +241,17 @@ class CrewStructuredTool:
) -> Any:
"""Main method for tool execution."""
parsed_args = self._parse_args(input)
return self.func(**parsed_args, **kwargs)
if inspect.iscoroutinefunction(self.func):
result = asyncio.run(self.func(**parsed_args, **kwargs))
return result
result = self.func(**parsed_args, **kwargs)
if asyncio.iscoroutine(result):
return asyncio.run(result)
return result
@property
def args(self) -> dict:

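The `invoke` change above runs coroutine tool functions (and stray coroutine results) through `asyncio.run` instead of returning an unawaited coroutine. A minimal sketch of an async tool that benefits, assuming the `@tool` decorator from `crewai.tools`; the sleep stands in for a real async call:
```python
import asyncio

from crewai.tools import tool

@tool("Site status checker")
async def check_site_status(url: str) -> str:
    """Pretend to probe a URL asynchronously."""
    await asyncio.sleep(0.1)  # stand-in for an async HTTP request
    return f"{url} is reachable"
```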
View File

@@ -20,7 +20,10 @@ from crewai.utilities.errors import AgentRepositoryError
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException,
)
from rich.console import Console
from crewai.cli.config import Settings
console = Console()
def parse_tools(tools: List[BaseTool]) -> List[CrewStructuredTool]:
"""Parse tools to be used for the task."""
@@ -215,9 +218,6 @@ def handle_agent_action_core(
if show_logs:
show_logs(formatted_answer)
if messages is not None:
messages.append({"role": "assistant", "content": tool_result.result})
return formatted_answer
@@ -438,6 +438,13 @@ def show_agent_logs(
)
def _print_current_organization():
settings = Settings()
if settings.org_uuid:
console.print(f"Fetching agent from organization: {settings.org_name} ({settings.org_uuid})", style="bold blue")
else:
console.print("No organization currently set. We recommend setting one before using: `crewai org switch <org_id>` command.", style="yellow")
def load_agent_from_repository(from_repository: str) -> Dict[str, Any]:
attributes: Dict[str, Any] = {}
if from_repository:
@@ -447,15 +454,18 @@ def load_agent_from_repository(from_repository: str) -> Dict[str, Any]:
from crewai.cli.plus_api import PlusAPI
client = PlusAPI(api_key=get_auth_token())
_print_current_organization()
response = client.get_agent(from_repository)
if response.status_code == 404:
raise AgentRepositoryError(
f"Agent {from_repository} does not exist, make sure the name is correct or the agent is available on your organization"
f"Agent {from_repository} does not exist, make sure the name is correct or the agent is available on your organization."
f"\nIf you are using the wrong organization, switch to the correct one using `crewai org switch <org_id>` command.",
)
if response.status_code != 200:
raise AgentRepositoryError(
f"Agent {from_repository} could not be loaded: {response.text}"
f"\nIf you are using the wrong organization, switch to the correct one using `crewai org switch <org_id>` command.",
)
agent = response.json()
@@ -464,7 +474,7 @@ def load_agent_from_repository(from_repository: str) -> Dict[str, Any]:
attributes[key] = []
for tool in value:
try:
module = importlib.import_module("crewai_tools")
module = importlib.import_module(tool["module"])
tool_class = getattr(module, tool["name"])
attributes[key].append(tool_class())
except Exception as e:

View File

@@ -309,7 +309,9 @@ def test_cache_hitting():
def handle_tool_end(source, event):
received_events.append(event)
with (patch.object(CacheHandler, "read") as read,):
with (
patch.object(CacheHandler, "read") as read,
):
read.return_value = "0"
task = Task(
description="What is 2 times 6? Ignore correctness and just return the result of the multiplication tool, you must use the tool.",
@@ -1628,13 +1630,13 @@ def test_agent_execute_task_with_ollama():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_with_knowledge_sources():
# Create a knowledge source with some content
content = "Brandon's favorite color is red and he likes Mexican food."
string_source = StringKnowledgeSource(content=content)
with patch("crewai.knowledge") as MockKnowledge:
mock_knowledge_instance = MockKnowledge.return_value
mock_knowledge_instance.sources = [string_source]
mock_knowledge_instance.search.return_value = [{"content": content}]
MockKnowledge.add_sources.return_value = [string_source]
agent = Agent(
role="Information Agent",
@@ -1644,7 +1646,6 @@ def test_agent_with_knowledge_sources():
knowledge_sources=[string_source],
)
# Create a task that requires the agent to use the knowledge
task = Task(
description="What is Brandon's favorite color?",
expected_output="Brandon's favorite color.",
@@ -1652,10 +1653,11 @@ def test_agent_with_knowledge_sources():
)
crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
# Assert that the agent provides the correct information
assert "red" in result.raw.lower()
with patch.object(Knowledge, "add_sources") as mock_add_sources:
result = crew.kickoff()
assert mock_add_sources.called, "add_sources() should have been called"
mock_add_sources.assert_called_once()
assert "red" in result.raw.lower()
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -2036,7 +2038,7 @@ def mock_get_auth_token():
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
def test_agent_from_repository(mock_get_agent, mock_get_auth_token):
from crewai_tools import SerperDevTool
from crewai_tools import SerperDevTool, XMLSearchTool
mock_get_response = MagicMock()
mock_get_response.status_code = 200
@@ -2044,7 +2046,10 @@ def test_agent_from_repository(mock_get_agent, mock_get_auth_token):
"role": "test role",
"goal": "test goal",
"backstory": "test backstory",
"tools": [{"name": "SerperDevTool"}],
"tools": [
{"module": "crewai_tools", "name": "SerperDevTool"},
{"module": "crewai_tools", "name": "XMLSearchTool"},
],
}
mock_get_agent.return_value = mock_get_response
agent = Agent(from_repository="test_agent")
@@ -2052,8 +2057,9 @@ def test_agent_from_repository(mock_get_agent, mock_get_auth_token):
assert agent.role == "test role"
assert agent.goal == "test goal"
assert agent.backstory == "test backstory"
assert len(agent.tools) == 1
assert len(agent.tools) == 2
assert isinstance(agent.tools[0], SerperDevTool)
assert isinstance(agent.tools[1], XMLSearchTool)
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
@@ -2066,7 +2072,7 @@ def test_agent_from_repository_override_attributes(mock_get_agent, mock_get_auth
"role": "test role",
"goal": "test goal",
"backstory": "test backstory",
"tools": [{"name": "SerperDevTool"}],
"tools": [{"name": "SerperDevTool", "module": "crewai_tools"}],
}
mock_get_agent.return_value = mock_get_response
agent = Agent(from_repository="test_agent", role="Custom Role")
@@ -2086,7 +2092,7 @@ def test_agent_from_repository_with_invalid_tools(mock_get_agent, mock_get_auth_
"role": "test role",
"goal": "test goal",
"backstory": "test backstory",
"tools": [{"name": "DoesNotExist"}],
"tools": [{"name": "DoesNotExist", "module": "crewai_tools",}],
}
mock_get_agent.return_value = mock_get_response
with pytest.raises(
@@ -2120,3 +2126,60 @@ def test_agent_from_repository_agent_not_found(mock_get_agent, mock_get_auth_tok
match="Agent test_agent does not exist, make sure the name is correct or the agent is available on your organization",
):
Agent(from_repository="test_agent")
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
@patch("crewai.utilities.agent_utils.Settings")
@patch("crewai.utilities.agent_utils.console")
def test_agent_from_repository_displays_org_info(mock_console, mock_settings, mock_get_agent, mock_get_auth_token):
mock_settings_instance = MagicMock()
mock_settings_instance.org_uuid = "test-org-uuid"
mock_settings_instance.org_name = "Test Organization"
mock_settings.return_value = mock_settings_instance
mock_get_response = MagicMock()
mock_get_response.status_code = 200
mock_get_response.json.return_value = {
"role": "test role",
"goal": "test goal",
"backstory": "test backstory",
"tools": []
}
mock_get_agent.return_value = mock_get_response
agent = Agent(from_repository="test_agent")
mock_console.print.assert_any_call(
"Fetching agent from organization: Test Organization (test-org-uuid)",
style="bold blue"
)
assert agent.role == "test role"
assert agent.goal == "test goal"
assert agent.backstory == "test backstory"
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
@patch("crewai.utilities.agent_utils.Settings")
@patch("crewai.utilities.agent_utils.console")
def test_agent_from_repository_without_org_set(mock_console, mock_settings, mock_get_agent, mock_get_auth_token):
mock_settings_instance = MagicMock()
mock_settings_instance.org_uuid = None
mock_settings_instance.org_name = None
mock_settings.return_value = mock_settings_instance
mock_get_response = MagicMock()
mock_get_response.status_code = 401
mock_get_response.text = "Unauthorized access"
mock_get_agent.return_value = mock_get_response
with pytest.raises(
AgentRepositoryError,
match="Agent test_agent could not be loaded: Unauthorized access"
):
Agent(from_repository="test_agent")
mock_console.print.assert_any_call(
"No organization currently set. We recommend setting one before using: `crewai org switch <org_id>` command.",
style="yellow"
)

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -231,7 +231,7 @@ class TestDeployCommand(unittest.TestCase):
[project]
name = "test_project"
version = "0.1.0"
requires-python = ">=3.10,<3.13"
requires-python = ">=3.10,<3.14"
dependencies = ["crewai"]
""",
)
@@ -250,7 +250,7 @@ class TestDeployCommand(unittest.TestCase):
[project]
name = "test_project"
version = "0.1.0"
requires-python = ">=3.10,<3.13"
requires-python = ">=3.10,<3.14"
dependencies = ["crewai"]
""",
)

View File

@@ -0,0 +1 @@

View File

@@ -0,0 +1,244 @@
import unittest
from unittest.mock import MagicMock, patch, call
import pytest
from click.testing import CliRunner
import requests
from crewai.cli.organization.main import OrganizationCommand
from crewai.cli.cli import list, switch, current
@pytest.fixture
def runner():
return CliRunner()
@pytest.fixture
def org_command():
with patch.object(OrganizationCommand, '__init__', return_value=None):
command = OrganizationCommand()
yield command
@pytest.fixture
def mock_settings():
with patch('crewai.cli.organization.main.Settings') as mock_settings_class:
mock_settings_instance = MagicMock()
mock_settings_class.return_value = mock_settings_instance
yield mock_settings_instance
@patch('crewai.cli.cli.OrganizationCommand')
def test_org_list_command(mock_org_command_class, runner):
mock_org_instance = MagicMock()
mock_org_command_class.return_value = mock_org_instance
result = runner.invoke(list)
assert result.exit_code == 0
mock_org_command_class.assert_called_once()
mock_org_instance.list.assert_called_once()
@patch('crewai.cli.cli.OrganizationCommand')
def test_org_switch_command(mock_org_command_class, runner):
mock_org_instance = MagicMock()
mock_org_command_class.return_value = mock_org_instance
result = runner.invoke(switch, ['test-id'])
assert result.exit_code == 0
mock_org_command_class.assert_called_once()
mock_org_instance.switch.assert_called_once_with('test-id')
@patch('crewai.cli.cli.OrganizationCommand')
def test_org_current_command(mock_org_command_class, runner):
mock_org_instance = MagicMock()
mock_org_command_class.return_value = mock_org_instance
result = runner.invoke(current)
assert result.exit_code == 0
mock_org_command_class.assert_called_once()
mock_org_instance.current.assert_called_once()
class TestOrganizationCommand(unittest.TestCase):
def setUp(self):
with patch.object(OrganizationCommand, '__init__', return_value=None):
self.org_command = OrganizationCommand()
self.org_command.plus_api_client = MagicMock()
@patch('crewai.cli.organization.main.console')
@patch('crewai.cli.organization.main.Table')
def test_list_organizations_success(self, mock_table, mock_console):
mock_response = MagicMock()
mock_response.raise_for_status = MagicMock()
mock_response.json.return_value = [
{"name": "Org 1", "uuid": "org-123"},
{"name": "Org 2", "uuid": "org-456"}
]
self.org_command.plus_api_client = MagicMock()
self.org_command.plus_api_client.get_organizations.return_value = mock_response
mock_console.print = MagicMock()
self.org_command.list()
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_table.assert_called_once_with(title="Your Organizations")
mock_table.return_value.add_column.assert_has_calls([
call("Name", style="cyan"),
call("ID", style="green")
])
mock_table.return_value.add_row.assert_has_calls([
call("Org 1", "org-123"),
call("Org 2", "org-456")
])
@patch('crewai.cli.organization.main.console')
def test_list_organizations_empty(self, mock_console):
mock_response = MagicMock()
mock_response.raise_for_status = MagicMock()
mock_response.json.return_value = []
self.org_command.plus_api_client = MagicMock()
self.org_command.plus_api_client.get_organizations.return_value = mock_response
self.org_command.list()
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_console.print.assert_called_once_with(
"You don't belong to any organizations yet.",
style="yellow"
)
@patch('crewai.cli.organization.main.console')
def test_list_organizations_api_error(self, mock_console):
self.org_command.plus_api_client = MagicMock()
self.org_command.plus_api_client.get_organizations.side_effect = requests.exceptions.RequestException("API Error")
with pytest.raises(SystemExit):
self.org_command.list()
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_console.print.assert_called_once_with(
"Failed to retrieve organization list: API Error",
style="bold red"
)
@patch('crewai.cli.organization.main.console')
@patch('crewai.cli.organization.main.Settings')
def test_switch_organization_success(self, mock_settings_class, mock_console):
mock_response = MagicMock()
mock_response.raise_for_status = MagicMock()
mock_response.json.return_value = [
{"name": "Org 1", "uuid": "org-123"},
{"name": "Test Org", "uuid": "test-id"}
]
self.org_command.plus_api_client = MagicMock()
self.org_command.plus_api_client.get_organizations.return_value = mock_response
mock_settings_instance = MagicMock()
mock_settings_class.return_value = mock_settings_instance
self.org_command.switch("test-id")
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_settings_instance.dump.assert_called_once()
assert mock_settings_instance.org_name == "Test Org"
assert mock_settings_instance.org_uuid == "test-id"
mock_console.print.assert_called_once_with(
"Successfully switched to Test Org (test-id)",
style="bold green"
)
@patch('crewai.cli.organization.main.console')
def test_switch_organization_not_found(self, mock_console):
mock_response = MagicMock()
mock_response.raise_for_status = MagicMock()
mock_response.json.return_value = [
{"name": "Org 1", "uuid": "org-123"},
{"name": "Org 2", "uuid": "org-456"}
]
self.org_command.plus_api_client = MagicMock()
self.org_command.plus_api_client.get_organizations.return_value = mock_response
self.org_command.switch("non-existent-id")
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_console.print.assert_called_once_with(
"Organization with id 'non-existent-id' not found.",
style="bold red"
)
@patch('crewai.cli.organization.main.console')
@patch('crewai.cli.organization.main.Settings')
def test_current_organization_with_org(self, mock_settings_class, mock_console):
mock_settings_instance = MagicMock()
mock_settings_instance.org_name = "Test Org"
mock_settings_instance.org_uuid = "test-id"
mock_settings_class.return_value = mock_settings_instance
self.org_command.current()
self.org_command.plus_api_client.get_organizations.assert_not_called()
mock_console.print.assert_called_once_with(
"Currently logged in to organization Test Org (test-id)",
style="bold green"
)
@patch('crewai.cli.organization.main.console')
@patch('crewai.cli.organization.main.Settings')
def test_current_organization_without_org(self, mock_settings_class, mock_console):
mock_settings_instance = MagicMock()
mock_settings_instance.org_uuid = None
mock_settings_class.return_value = mock_settings_instance
self.org_command.current()
assert mock_console.print.call_count == 3
mock_console.print.assert_any_call(
"You're not currently logged in to any organization.",
style="yellow"
)
@patch('crewai.cli.organization.main.console')
def test_list_organizations_unauthorized(self, mock_console):
mock_response = MagicMock()
mock_http_error = requests.exceptions.HTTPError(
"401 Client Error: Unauthorized",
response=MagicMock(status_code=401)
)
mock_response.raise_for_status.side_effect = mock_http_error
self.org_command.plus_api_client.get_organizations.return_value = mock_response
self.org_command.list()
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_console.print.assert_called_once_with(
"You are not logged in to any organization. Use 'crewai login' to login.",
style="bold red"
)
@patch('crewai.cli.organization.main.console')
def test_switch_organization_unauthorized(self, mock_console):
mock_response = MagicMock()
mock_http_error = requests.exceptions.HTTPError(
"401 Client Error: Unauthorized",
response=MagicMock(status_code=401)
)
mock_response.raise_for_status.side_effect = mock_http_error
self.org_command.plus_api_client.get_organizations.return_value = mock_response
self.org_command.switch("test-id")
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_console.print.assert_called_once_with(
"You are not logged in to any organization. Use 'crewai login' to login.",
style="bold red"
)

View File

@@ -1,6 +1,6 @@
import os
import unittest
from unittest.mock import MagicMock, patch
from unittest.mock import MagicMock, patch, ANY
from crewai.cli.plus_api import PlusAPI
@@ -9,6 +9,7 @@ class TestPlusAPI(unittest.TestCase):
def setUp(self):
self.api_key = "test_api_key"
self.api = PlusAPI(self.api_key)
self.org_uuid = "test-org-uuid"
def test_init(self):
self.assertEqual(self.api.api_key, self.api_key)
@@ -29,17 +30,96 @@ class TestPlusAPI(unittest.TestCase):
)
self.assertEqual(response, mock_response)
def assert_request_with_org_id(self, mock_make_request, method: str, endpoint: str, **kwargs):
mock_make_request.assert_called_once_with(
method, f"https://app.crewai.com{endpoint}", headers={'Authorization': ANY, 'Content-Type': ANY, 'User-Agent': ANY, 'X-Crewai-Version': ANY, 'X-Crewai-Organization-Id': self.org_uuid}, **kwargs
)
@patch("crewai.cli.plus_api.Settings")
@patch("requests.Session.request")
def test_login_to_tool_repository_with_org_uuid(self, mock_make_request, mock_settings_class):
mock_settings = MagicMock()
mock_settings.org_uuid = self.org_uuid
mock_settings_class.return_value = mock_settings
# re-initialize Client
self.api = PlusAPI(self.api_key)
mock_response = MagicMock()
mock_make_request.return_value = mock_response
response = self.api.login_to_tool_repository()
self.assert_request_with_org_id(
mock_make_request,
'POST',
'/crewai_plus/api/v1/tools/login'
)
self.assertEqual(response, mock_response)
@patch("crewai.cli.plus_api.PlusAPI._make_request")
def test_get_agent(self, mock_make_request):
mock_response = MagicMock()
mock_make_request.return_value = mock_response
response = self.api.get_agent("test_agent_handle")
mock_make_request.assert_called_once_with(
"GET", "/crewai_plus/api/v1/agents/test_agent_handle"
)
self.assertEqual(response, mock_response)
@patch("crewai.cli.plus_api.Settings")
@patch("requests.Session.request")
def test_get_agent_with_org_uuid(self, mock_make_request, mock_settings_class):
mock_settings = MagicMock()
mock_settings.org_uuid = self.org_uuid
mock_settings_class.return_value = mock_settings
# re-initialize Client
self.api = PlusAPI(self.api_key)
mock_response = MagicMock()
mock_make_request.return_value = mock_response
response = self.api.get_agent("test_agent_handle")
self.assert_request_with_org_id(
mock_make_request,
"GET",
"/crewai_plus/api/v1/agents/test_agent_handle"
)
self.assertEqual(response, mock_response)
@patch("crewai.cli.plus_api.PlusAPI._make_request")
def test_get_tool(self, mock_make_request):
mock_response = MagicMock()
mock_make_request.return_value = mock_response
response = self.api.get_tool("test_tool_handle")
mock_make_request.assert_called_once_with(
"GET", "/crewai_plus/api/v1/tools/test_tool_handle"
)
self.assertEqual(response, mock_response)
@patch("crewai.cli.plus_api.Settings")
@patch("requests.Session.request")
def test_get_tool_with_org_uuid(self, mock_make_request, mock_settings_class):
mock_settings = MagicMock()
mock_settings.org_uuid = self.org_uuid
mock_settings_class.return_value = mock_settings
# re-initialize Client
self.api = PlusAPI(self.api_key)
# Set up mock response
mock_response = MagicMock()
mock_make_request.return_value = mock_response
response = self.api.get_tool("test_tool_handle")
self.assert_request_with_org_id(
mock_make_request,
"GET",
"/crewai_plus/api/v1/tools/test_tool_handle"
)
self.assertEqual(response, mock_response)
@patch("crewai.cli.plus_api.PlusAPI._make_request")
def test_publish_tool(self, mock_make_request):
@@ -61,11 +141,53 @@ class TestPlusAPI(unittest.TestCase):
"version": version,
"file": encoded_file,
"description": description,
"available_exports": None,
}
mock_make_request.assert_called_once_with(
"POST", "/crewai_plus/api/v1/tools", json=params
)
self.assertEqual(response, mock_response)
@patch("crewai.cli.plus_api.Settings")
@patch("requests.Session.request")
def test_publish_tool_with_org_uuid(self, mock_make_request, mock_settings_class):
mock_settings = MagicMock()
mock_settings.org_uuid = self.org_uuid
mock_settings_class.return_value = mock_settings
# re-initialize Client
self.api = PlusAPI(self.api_key)
# Set up mock response
mock_response = MagicMock()
mock_make_request.return_value = mock_response
handle = "test_tool_handle"
public = True
version = "1.0.0"
description = "Test tool description"
encoded_file = "encoded_test_file"
response = self.api.publish_tool(
handle, public, version, description, encoded_file
)
# Expected params including organization_uuid
expected_params = {
"handle": handle,
"public": public,
"version": version,
"file": encoded_file,
"description": description,
"available_exports": None,
}
self.assert_request_with_org_id(
mock_make_request,
"POST",
"/crewai_plus/api/v1/tools",
json=expected_params
)
self.assertEqual(response, mock_response)
@patch("crewai.cli.plus_api.PlusAPI._make_request")
def test_publish_tool_without_description(self, mock_make_request):
@@ -87,6 +209,7 @@ class TestPlusAPI(unittest.TestCase):
"version": version,
"file": encoded_file,
"description": description,
"available_exports": None,
}
mock_make_request.assert_called_once_with(
"POST", "/crewai_plus/api/v1/tools", json=params

View File

@@ -1,6 +1,7 @@
import os
import shutil
import tempfile
from pathlib import Path
import pytest
@@ -100,3 +101,163 @@ def test_tree_copy_to_existing_directory(temp_tree):
assert os.path.isfile(os.path.join(dest_dir, "file1.txt"))
finally:
shutil.rmtree(dest_dir)
@pytest.fixture
def temp_project_dir():
"""Create a temporary directory for testing tool extraction."""
with tempfile.TemporaryDirectory() as temp_dir:
yield Path(temp_dir)
def create_init_file(directory, content):
return create_file(directory / "__init__.py", content)
def test_extract_available_exports_empty_project(temp_project_dir, capsys):
with pytest.raises(SystemExit):
utils.extract_available_exports(dir_path=temp_project_dir)
captured = capsys.readouterr()
assert "No valid tools were exposed in your __init__.py file" in captured.out
def test_extract_available_exports_no_init_file(temp_project_dir, capsys):
(temp_project_dir / "some_file.py").write_text("print('hello')")
with pytest.raises(SystemExit):
utils.extract_available_exports(dir_path=temp_project_dir)
captured = capsys.readouterr()
assert "No valid tools were exposed in your __init__.py file" in captured.out
def test_extract_available_exports_empty_init_file(temp_project_dir, capsys):
create_init_file(temp_project_dir, "")
with pytest.raises(SystemExit):
utils.extract_available_exports(dir_path=temp_project_dir)
captured = capsys.readouterr()
assert "Warning: No __all__ defined in" in captured.out
def test_extract_available_exports_no_all_variable(temp_project_dir, capsys):
create_init_file(
temp_project_dir,
"from crewai.tools import BaseTool\n\nclass MyTool(BaseTool):\n pass",
)
with pytest.raises(SystemExit):
utils.extract_available_exports(dir_path=temp_project_dir)
captured = capsys.readouterr()
assert "Warning: No __all__ defined in" in captured.out
def test_extract_available_exports_valid_base_tool_class(temp_project_dir):
create_init_file(
temp_project_dir,
"""from crewai.tools import BaseTool
class MyTool(BaseTool):
name: str = "my_tool"
description: str = "A test tool"
__all__ = ['MyTool']
""",
)
tools = utils.extract_available_exports(dir_path=temp_project_dir)
assert [{"name": "MyTool"}] == tools
def test_extract_available_exports_valid_tool_decorator(temp_project_dir):
create_init_file(
temp_project_dir,
"""from crewai.tools import tool
@tool
def my_tool_function(text: str) -> str:
\"\"\"A test tool function\"\"\"
return text
__all__ = ['my_tool_function']
""",
)
tools = utils.extract_available_exports(dir_path=temp_project_dir)
assert [{"name": "my_tool_function"}] == tools
def test_extract_available_exports_multiple_valid_tools(temp_project_dir):
create_init_file(
temp_project_dir,
"""from crewai.tools import BaseTool, tool
class MyTool(BaseTool):
name: str = "my_tool"
description: str = "A test tool"
@tool
def my_tool_function(text: str) -> str:
\"\"\"A test tool function\"\"\"
return text
__all__ = ['MyTool', 'my_tool_function']
""",
)
tools = utils.extract_available_exports(dir_path=temp_project_dir)
assert [{"name": "MyTool"}, {"name": "my_tool_function"}] == tools
def test_extract_available_exports_with_invalid_tool_decorator(temp_project_dir):
create_init_file(
temp_project_dir,
"""from crewai.tools import BaseTool
class MyTool(BaseTool):
name: str = "my_tool"
description: str = "A test tool"
def not_a_tool():
pass
__all__ = ['MyTool', 'not_a_tool']
""",
)
tools = utils.extract_available_exports(dir_path=temp_project_dir)
assert [{"name": "MyTool"}] == tools
def test_extract_available_exports_import_error(temp_project_dir, capsys):
create_init_file(
temp_project_dir,
"""from nonexistent_module import something
class MyTool(BaseTool):
pass
__all__ = ['MyTool']
""",
)
with pytest.raises(SystemExit):
utils.extract_available_exports(dir_path=temp_project_dir)
captured = capsys.readouterr()
assert "nonexistent_module" in captured.out
def test_extract_available_exports_syntax_error(temp_project_dir, capsys):
create_init_file(
temp_project_dir,
"""from crewai.tools import BaseTool
class MyTool(BaseTool):
# Missing closing parenthesis
def __init__(self, name:
pass
__all__ = ['MyTool']
""",
)
with pytest.raises(SystemExit):
utils.extract_available_exports(dir_path=temp_project_dir)
captured = capsys.readouterr()
assert "was never closed" in captured.out

View File

@@ -56,7 +56,8 @@ def test_create_success(mock_subprocess, capsys, tool_command):
@patch("crewai.cli.tools.main.subprocess.run")
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_success(mock_get, mock_subprocess_run, capsys, tool_command):
@patch("crewai.cli.tools.main.ToolCommand._print_current_organization")
def test_install_success(mock_print_org, mock_get, mock_subprocess_run, capsys, tool_command):
mock_get_response = MagicMock()
mock_get_response.status_code = 200
mock_get_response.json.return_value = {
@@ -85,6 +86,39 @@ def test_install_success(mock_get, mock_subprocess_run, capsys, tool_command):
env=unittest.mock.ANY,
)
# Verify _print_current_organization was called
mock_print_org.assert_called_once()
@patch("crewai.cli.tools.main.subprocess.run")
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_success_from_pypi(mock_get, mock_subprocess_run, capsys, tool_command):
mock_get_response = MagicMock()
mock_get_response.status_code = 200
mock_get_response.json.return_value = {
"handle": "sample-tool",
"repository": {"handle": "sample-repo", "url": "https://example.com/repo"},
"source": "pypi",
}
mock_get.return_value = mock_get_response
mock_subprocess_run.return_value = MagicMock(stderr=None)
tool_command.install("sample-tool")
output = capsys.readouterr().out
assert "Successfully installed sample-tool" in output
mock_get.assert_has_calls([mock.call("sample-tool"), mock.call().json()])
mock_subprocess_run.assert_any_call(
[
"uv",
"add",
"sample-tool",
],
capture_output=False,
text=True,
check=True,
env=unittest.mock.ANY,
)
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_tool_not_found(mock_get, capsys, tool_command):
@@ -135,7 +169,11 @@ def test_publish_when_not_in_sync(mock_is_synced, capsys, tool_command):
)
@patch("crewai.cli.plus_api.PlusAPI.publish_tool")
@patch("crewai.cli.tools.main.git.Repository.is_synced", return_value=False)
@patch("crewai.cli.tools.main.extract_available_exports", return_value=[{"name": "SampleTool"}])
@patch("crewai.cli.tools.main.ToolCommand._print_current_organization")
def test_publish_when_not_in_sync_and_force(
mock_print_org,
mock_available_exports,
mock_is_synced,
mock_publish,
mock_open,
@@ -168,7 +206,9 @@ def test_publish_when_not_in_sync_and_force(
version="1.0.0",
description="A sample tool",
encoded_file=unittest.mock.ANY,
available_exports=[{"name": "SampleTool"}],
)
mock_print_org.assert_called_once()
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@@ -183,7 +223,9 @@ def test_publish_when_not_in_sync_and_force(
)
@patch("crewai.cli.plus_api.PlusAPI.publish_tool")
@patch("crewai.cli.tools.main.git.Repository.is_synced", return_value=True)
@patch("crewai.cli.tools.main.extract_available_exports", return_value=[{"name": "SampleTool"}])
def test_publish_success(
mock_available_exports,
mock_is_synced,
mock_publish,
mock_open,
@@ -216,6 +258,7 @@ def test_publish_success(
version="1.0.0",
description="A sample tool",
encoded_file=unittest.mock.ANY,
available_exports=[{"name": "SampleTool"}],
)
@@ -230,7 +273,9 @@ def test_publish_success(
read_data=b"sample tarball content",
)
@patch("crewai.cli.plus_api.PlusAPI.publish_tool")
@patch("crewai.cli.tools.main.extract_available_exports", return_value=[{"name": "SampleTool"}])
def test_publish_failure(
mock_available_exports,
mock_publish,
mock_open,
mock_listdir,
@@ -266,7 +311,9 @@ def test_publish_failure(
read_data=b"sample tarball content",
)
@patch("crewai.cli.plus_api.PlusAPI.publish_tool")
@patch("crewai.cli.tools.main.extract_available_exports", return_value=[{"name": "SampleTool"}])
def test_publish_api_error(
mock_available_exports,
mock_publish,
mock_open,
mock_listdir,
@@ -289,3 +336,27 @@ def test_publish_api_error(
assert "Request to Enterprise API failed" in output
mock_publish.assert_called_once()
@patch("crewai.cli.tools.main.Settings")
def test_print_current_organization_with_org(mock_settings, capsys, tool_command):
mock_settings_instance = MagicMock()
mock_settings_instance.org_uuid = "test-org-uuid"
mock_settings_instance.org_name = "Test Organization"
mock_settings.return_value = mock_settings_instance
tool_command._print_current_organization()
output = capsys.readouterr().out
assert "Current organization: Test Organization (test-org-uuid)" in output
@patch("crewai.cli.tools.main.Settings")
def test_print_current_organization_without_org(mock_settings, capsys, tool_command):
mock_settings_instance = MagicMock()
mock_settings_instance.org_uuid = None
mock_settings_instance.org_name = None
mock_settings.return_value = mock_settings_instance
tool_command._print_current_organization()
output = capsys.readouterr().out
assert "No organization currently set" in output
assert "org switch <org_id>" in output

View File

@@ -1765,6 +1765,50 @@ def test_agent_usage_metrics_are_captured_for_hierarchical_process():
)
def test_hierarchical_kickoff_usage_metrics_include_manager(researcher):
"""Ensure Crew.kickoff() sums UsageMetrics from both regular and manager agents."""
# ── 1. Build the manager and a simple task ──────────────────────────────────
manager = Agent(
role="Manager",
goal="Coordinate everything.",
backstory="Keeps the project on track.",
allow_delegation=False,
)
task = Task(
description="Say hello",
expected_output="Hello",
agent=researcher, # *regular* agent
)
# ── 2. Stub out each agent's _token_process.get_summary() ───────────────────
researcher_metrics = UsageMetrics(total_tokens=120, prompt_tokens=80, completion_tokens=40, successful_requests=2)
manager_metrics = UsageMetrics(total_tokens=30, prompt_tokens=20, completion_tokens=10, successful_requests=1)
# Replace the internal _token_process objects with simple mocks
researcher._token_process = MagicMock(get_summary=MagicMock(return_value=researcher_metrics))
manager._token_process = MagicMock(get_summary=MagicMock(return_value=manager_metrics))
# ── 3. Create the crew (hierarchical!) and kick it off ──────────────────────
crew = Crew(
agents=[researcher], # regular agents
manager_agent=manager, # manager to be included
tasks=[task],
process=Process.hierarchical,
)
# We don't care about LLM output here; patch execute_sync to avoid network
with patch.object(Task, "execute_sync", return_value=TaskOutput(description="dummy", raw="Hello", agent=researcher.role)):
crew.kickoff()
# ── 4. Assert the aggregated numbers are the *sum* of both agents ───────────
assert crew.usage_metrics.total_tokens == researcher_metrics.total_tokens + manager_metrics.total_tokens
assert crew.usage_metrics.prompt_tokens == researcher_metrics.prompt_tokens + manager_metrics.prompt_tokens
assert crew.usage_metrics.completion_tokens == researcher_metrics.completion_tokens + manager_metrics.completion_tokens
assert crew.usage_metrics.successful_requests == researcher_metrics.successful_requests + manager_metrics.successful_requests
@pytest.mark.vcr(filter_headers=["authorization"])
def test_hierarchical_crew_creation_tasks_with_agents(researcher, writer):
"""
@@ -4564,5 +4608,3 @@ def test_reset_agent_knowledge_with_only_agent_knowledge(researcher,writer):
crew.reset_memories(command_type='agent_knowledge')
mock_reset_agent_knowledge.assert_called_once_with([mock_ks_research,mock_ks_writer])


@@ -25,122 +25,206 @@ def schema_class():
return TestSchema
-class InternalCrewStructuredTool:
-    def test_initialization(self, basic_function, schema_class):
-        """Test basic initialization of CrewStructuredTool"""
-        tool = CrewStructuredTool(
-            name="test_tool",
-            description="Test tool description",
-            func=basic_function,
-            args_schema=schema_class,
-        )
+def test_initialization(basic_function, schema_class):
+    """Test basic initialization of CrewStructuredTool"""
+    tool = CrewStructuredTool(
+        name="test_tool",
+        description="Test tool description",
+        func=basic_function,
+        args_schema=schema_class,
+    )

-        assert tool.name == "test_tool"
-        assert tool.description == "Test tool description"
-        assert tool.func == basic_function
-        assert tool.args_schema == schema_class
+    assert tool.name == "test_tool"
+    assert tool.description == "Test tool description"
+    assert tool.func == basic_function
+    assert tool.args_schema == schema_class

-    def test_from_function(self, basic_function):
-        """Test creating tool from function"""
-        tool = CrewStructuredTool.from_function(
-            func=basic_function, name="test_tool", description="Test description"
-        )
+def test_from_function(basic_function):
+    """Test creating tool from function"""
+    tool = CrewStructuredTool.from_function(
+        func=basic_function, name="test_tool", description="Test description"
+    )

-        assert tool.name == "test_tool"
-        assert tool.description == "Test description"
-        assert tool.func == basic_function
-        assert isinstance(tool.args_schema, type(BaseModel))
+    assert tool.name == "test_tool"
+    assert tool.description == "Test description"
+    assert tool.func == basic_function
+    assert isinstance(tool.args_schema, type(BaseModel))

-    def test_validate_function_signature(self, basic_function, schema_class):
-        """Test function signature validation"""
-        tool = CrewStructuredTool(
-            name="test_tool",
-            description="Test tool",
-            func=basic_function,
-            args_schema=schema_class,
-        )
+def test_validate_function_signature(basic_function, schema_class):
+    """Test function signature validation"""
+    tool = CrewStructuredTool(
+        name="test_tool",
+        description="Test tool",
+        func=basic_function,
+        args_schema=schema_class,
+    )

-        # Should not raise any exceptions
-        tool._validate_function_signature()
+    # Should not raise any exceptions
+    tool._validate_function_signature()

-    @pytest.mark.asyncio
-    async def test_ainvoke(self, basic_function):
-        """Test asynchronous invocation"""
-        tool = CrewStructuredTool.from_function(func=basic_function, name="test_tool")
+@pytest.mark.asyncio
+async def test_ainvoke(basic_function):
+    """Test asynchronous invocation"""
+    tool = CrewStructuredTool.from_function(func=basic_function, name="test_tool")

-        result = await tool.ainvoke(input={"param1": "test"})
-        assert result == "test 0"
+    result = await tool.ainvoke(input={"param1": "test"})
+    assert result == "test 0"

-    def test_parse_args_dict(self, basic_function):
-        """Test parsing dictionary arguments"""
-        tool = CrewStructuredTool.from_function(func=basic_function, name="test_tool")
+def test_parse_args_dict(basic_function):
+    """Test parsing dictionary arguments"""
+    tool = CrewStructuredTool.from_function(func=basic_function, name="test_tool")

-        parsed = tool._parse_args({"param1": "test", "param2": 42})
-        assert parsed["param1"] == "test"
-        assert parsed["param2"] == 42
+    parsed = tool._parse_args({"param1": "test", "param2": 42})
+    assert parsed["param1"] == "test"
+    assert parsed["param2"] == 42

-    def test_parse_args_string(self, basic_function):
-        """Test parsing string arguments"""
-        tool = CrewStructuredTool.from_function(func=basic_function, name="test_tool")
+def test_parse_args_string(basic_function):
+    """Test parsing string arguments"""
+    tool = CrewStructuredTool.from_function(func=basic_function, name="test_tool")

-        parsed = tool._parse_args('{"param1": "test", "param2": 42}')
-        assert parsed["param1"] == "test"
-        assert parsed["param2"] == 42
+    parsed = tool._parse_args('{"param1": "test", "param2": 42}')
+    assert parsed["param1"] == "test"
+    assert parsed["param2"] == 42

-    def test_complex_types(self):
-        """Test handling of complex parameter types"""
+def test_complex_types():
+    """Test handling of complex parameter types"""

-        def complex_func(nested: dict, items: list) -> str:
-            """Process complex types."""
-            return f"Processed {len(items)} items with {len(nested)} nested keys"
+    def complex_func(nested: dict, items: list) -> str:
+        """Process complex types."""
+        return f"Processed {len(items)} items with {len(nested)} nested keys"

-        tool = CrewStructuredTool.from_function(
-            func=complex_func, name="test_tool", description="Test complex types"
-        )
-        result = tool.invoke({"nested": {"key": "value"}, "items": [1, 2, 3]})
-        assert result == "Processed 3 items with 1 nested keys"
+    tool = CrewStructuredTool.from_function(
+        func=complex_func, name="test_tool", description="Test complex types"
+    )
+    result = tool.invoke({"nested": {"key": "value"}, "items": [1, 2, 3]})
+    assert result == "Processed 3 items with 1 nested keys"

-    def test_schema_inheritance(self):
-        """Test tool creation with inherited schema"""
+def test_schema_inheritance():
+    """Test tool creation with inherited schema"""

-        def extended_func(base_param: str, extra_param: int) -> str:
-            """Test function with inherited schema."""
-            return f"{base_param} {extra_param}"
+    def extended_func(base_param: str, extra_param: int) -> str:
+        """Test function with inherited schema."""
+        return f"{base_param} {extra_param}"

-        class BaseSchema(BaseModel):
-            base_param: str
+    class BaseSchema(BaseModel):
+        base_param: str

-        class ExtendedSchema(BaseSchema):
-            extra_param: int
+    class ExtendedSchema(BaseSchema):
+        extra_param: int

-        tool = CrewStructuredTool.from_function(
-            func=extended_func, name="test_tool", args_schema=ExtendedSchema
-        )
+    tool = CrewStructuredTool.from_function(
+        func=extended_func, name="test_tool", args_schema=ExtendedSchema
+    )

-        result = tool.invoke({"base_param": "test", "extra_param": 42})
-        assert result == "test 42"
+    result = tool.invoke({"base_param": "test", "extra_param": 42})
+    assert result == "test 42"

-    def test_default_values_in_schema(self):
-        """Test handling of default values in schema"""
+def test_default_values_in_schema():
+    """Test handling of default values in schema"""

-        def default_func(
-            required_param: str,
-            optional_param: str = "default",
-            nullable_param: Optional[int] = None,
-        ) -> str:
-            """Test function with default values."""
-            return f"{required_param} {optional_param} {nullable_param}"
+    def default_func(
+        required_param: str,
+        optional_param: str = "default",
+        nullable_param: Optional[int] = None,
+    ) -> str:
+        """Test function with default values."""
+        return f"{required_param} {optional_param} {nullable_param}"

-        tool = CrewStructuredTool.from_function(
-            func=default_func, name="test_tool", description="Test defaults"
-        )
+    tool = CrewStructuredTool.from_function(
+        func=default_func, name="test_tool", description="Test defaults"
+    )

-        # Test with minimal parameters
-        result = tool.invoke({"required_param": "test"})
-        assert result == "test default None"
+    # Test with minimal parameters
+    result = tool.invoke({"required_param": "test"})
+    assert result == "test default None"

-        # Test with all parameters
-        result = tool.invoke(
-            {"required_param": "test", "optional_param": "custom", "nullable_param": 42}
-        )
-        assert result == "test custom 42"
+    # Test with all parameters
+    result = tool.invoke(
+        {"required_param": "test", "optional_param": "custom", "nullable_param": 42}
+    )
+    assert result == "test custom 42"
@pytest.fixture
def custom_tool_decorator():
    from crewai.tools import tool

    @tool("custom_tool", result_as_answer=True)
    async def custom_tool():
        """This is a tool that does something"""
        return "Hello World from Custom Tool"

    return custom_tool


@pytest.fixture
def custom_tool():
    from crewai.tools import BaseTool

    class CustomTool(BaseTool):
        name: str = "my_tool"
        description: str = "This is a tool that does something"
        result_as_answer: bool = True

        async def _run(self):
            return "Hello World from Custom Tool"

    return CustomTool()


def build_simple_crew(tool):
    from crewai import Agent, Task, Crew

    agent1 = Agent(role="Simple role", goal="Simple goal", backstory="Simple backstory", tools=[tool])
    say_hi_task = Task(
        description="Use the custom tool result as answer.", agent=agent1, expected_output="Use the tool result"
    )
    crew = Crew(agents=[agent1], tasks=[say_hi_task])
    return crew


@pytest.mark.vcr(filter_headers=["authorization"])
def test_async_tool_using_within_isolated_crew(custom_tool):
    crew = build_simple_crew(custom_tool)
    result = crew.kickoff()
    assert result.raw == "Hello World from Custom Tool"


@pytest.mark.vcr(filter_headers=["authorization"])
def test_async_tool_using_decorator_within_isolated_crew(custom_tool_decorator):
    crew = build_simple_crew(custom_tool_decorator)
    result = crew.kickoff()
    assert result.raw == "Hello World from Custom Tool"


@pytest.mark.vcr(filter_headers=["authorization"])
def test_async_tool_within_flow(custom_tool):
    from crewai.flow.flow import Flow

    class StructuredExampleFlow(Flow):
        from crewai.flow.flow import start

        @start()
        async def start(self):
            crew = build_simple_crew(custom_tool)
            result = await crew.kickoff_async()
            return result

    flow = StructuredExampleFlow()
    result = flow.kickoff()
    assert result.raw == "Hello World from Custom Tool"


@pytest.mark.vcr(filter_headers=["authorization"])
def test_async_tool_using_decorator_within_flow(custom_tool_decorator):
    from crewai.flow.flow import Flow

    class StructuredExampleFlow(Flow):
        from crewai.flow.flow import start

        @start()
        async def start(self):
            crew = build_simple_crew(custom_tool_decorator)
            result = await crew.kickoff_async()
            return result

    flow = StructuredExampleFlow()
    result = flow.kickoff()
    assert result.raw == "Hello World from Custom Tool"

uv.lock (generated, 1738 changed lines): diff suppressed because it is too large.