Compare commits


11 Commits

Author SHA1 Message Date
Brandon Hancock (bhancock_ai)
a024f576b3 Merge branch 'main' into feat/agent-delegation-control 2025-03-14 11:04:35 -04:00
Brandon Hancock
f232f11ad9 getting ready 2025-03-14 10:31:18 -04:00
Brandon Hancock
c334feea7e Clean up tests 2025-03-14 10:09:21 -04:00
Brandon Hancock
4b6498de8b fix tests 2025-03-14 09:35:13 -04:00
Brandon Hancock
0a6098fb50 more fixes 2025-03-13 15:47:02 -04:00
Brandon Hancock
358befe2c1 wip 2025-03-13 15:45:11 -04:00
Brandon Hancock
cb86594f92 More sequences 2025-03-13 15:33:23 -04:00
Brandon Hancock
403890d8e8 wip 2025-03-13 11:02:28 -04:00
Brandon Hancock
bd27d03bc7 more test improvements 2025-03-13 10:38:18 -04:00
Brandon Hancock
3e563365a2 fix failing tests 2025-03-13 10:32:07 -04:00
Brandon Hancock
f4186fad14 wip 2025-03-13 10:23:09 -04:00
38 changed files with 992 additions and 927 deletions

View File

@@ -1,187 +0,0 @@
---
title: Changelog
description: View the latest updates and changes to CrewAI
icon: timeline
---
<Update label="2024-03-17" description="v0.108.0">
**Features**
- Converted tabs to spaces in `crew.py` template
- Enhanced LLM Streaming Response Handling and Event System
- Included `model_name`
- Enhanced Event Listener with rich visualization and improved logging
- Added fingerprints
**Bug Fixes**
- Fixed Mistral issues
- Fixed a bug in documentation
- Fixed type check error in fingerprint property
**Documentation Updates**
- Improved tool documentation
- Updated installation guide for the `uv` tool package
- Added instructions for upgrading crewAI with the `uv` tool
- Added documentation for `ApifyActorsTool`
</Update>
<Update label="2024-03-10" description="v0.105.0">
**Core Improvements & Fixes**
- Fixed issues with missing template variables and user memory configuration
- Improved async flow support and addressed agent response formatting
- Enhanced memory reset functionality and fixed CLI memory commands
- Fixed type issues, tool calling properties, and telemetry decoupling
**New Features & Enhancements**
- Added Flow state export and improved state utilities
- Enhanced agent knowledge setup with optional crew embedder
- Introduced event emitter for better observability and LLM call tracking
- Added support for Python 3.10 and ChatOllama from langchain_ollama
- Integrated context window size support for the o3-mini model
- Added support for multiple router calls
**Documentation & Guides**
- Improved documentation layout and hierarchical structure
- Added QdrantVectorSearchTool guide and clarified event listener usage
- Fixed typos in prompts and updated Amazon Bedrock model listings
</Update>
<Update label="2024-02-12" description="v0.102.0">
**Core Improvements & Fixes**
- Enhanced LLM Support: Improved structured LLM output, parameter handling, and formatting for Anthropic models
- Crew & Agent Stability: Fixed issues with cloning agents/crews using knowledge sources, multiple task outputs in conditional tasks, and ignored Crew task callbacks
- Memory & Storage Fixes: Fixed short-term memory handling with Bedrock, ensured correct embedder initialization, and added a reset memories function in the crew class
- Training & Execution Reliability: Fixed broken training and interpolation issues with dict and list input types
**New Features & Enhancements**
- Advanced Knowledge Management: Improved naming conventions and enhanced embedding configuration with custom embedder support
- Expanded Logging & Observability: Added JSON format support for logging and integrated MLflow tracing documentation
- Data Handling Improvements: Updated excel_knowledge_source.py to process multi-tab files
- General Performance & Codebase Clean-Up: Streamlined enterprise code alignment and resolved linting issues
- Adding new tool: `QdrantVectorSearchTool`
**Documentation & Guides**
- Updated AI & Memory Docs: Improved Bedrock, Google AI, and long-term memory documentation
- Task & Workflow Clarity: Added "Human Input" row to Task Attributes, Langfuse guide, and FileWriterTool documentation
- Fixed Various Typos & Formatting Issues
</Update>
<Update label="2024-01-28" description="v0.100.0">
**Features**
- Add Composio docs
- Add SageMaker as a LLM provider
**Fixes**
- Overall LLM connection issues
- Using safe accessors on training
- Add version check to crew_chat.py
**Documentation**
- New docs for crewai chat
- Improve formatting and clarity in CLI and Composio Tool docs
</Update>
<Update label="2024-01-20" description="v0.98.0">
**Features**
- Conversation crew v1
- Add unique ID to flow states
- Add @persist decorator with FlowPersistence interface
**Integrations**
- Add SambaNova integration
- Add NVIDIA NIM provider in cli
- Introducing VoyageAI
**Fixes**
- Fix API Key Behavior and Entity Handling in Mem0 Integration
- Fixed core invoke loop logic and relevant tests
- Make tool inputs actual objects and not strings
- Add important missing parts to creating tools
- Drop litellm version to prevent windows issue
- Before kickoff if inputs are none
- Fixed typos, nested pydantic model issue, and docling issues
</Update>
<Update label="2024-01-04" description="v0.95.0">
**New Features**
- Adding Multimodal Abilities to Crew
- Programmatic Guardrails
- HITL multiple rounds
- Gemini 2.0 Support
- CrewAI Flows Improvements
- Add Workflow Permissions
- Add support for langfuse with litellm
- Portkey Integration with CrewAI
- Add interpolate_only method and improve error handling
- Docling Support
- Weaviate Support
**Fixes**
- output_file not respecting system path
- disk I/O error when resetting short-term memory
- CrewJSONEncoder now accepts enums
- Python max version
- Interpolation for output_file in Task
- Handle coworker role name case/whitespace properly
- Add tiktoken as explicit dependency and document Rust requirement
- Include agent knowledge in planning process
- Change storage initialization to None for KnowledgeStorage
- Fix optional storage checks
- include event emitter in flows
- Docstring, Error Handling, and Type Hints Improvements
- Suppressed userWarnings from litellm pydantic issues
</Update>
<Update label="2023-12-05" description="v0.86.0">
**Changes**
- Remove all references to pipeline and pipeline router
- Add Nvidia NIM as provider in Custom LLM
- Add knowledge demo + improve knowledge docs
- Add HITL multiple rounds of followup
- New docs about yaml crew with decorators
- Simplify template crew
</Update>
<Update label="2023-12-04" description="v0.85.0">
**Features**
- Added knowledge to agent level
- Feat/remove langchain
- Improve typed task outputs
- Log in to Tool Repository on crewai login
**Fixes**
- Fixes issues with result as answer not properly exiting LLM loop
- Fix missing key name when running with ollama provider
- Fix spelling issue found
**Documentation**
- Update readme for running mypy
- Add knowledge to mint.json
- Update Github actions
- Update Agents docs to include two approaches for creating an agent
- Improvements to LLM Configuration and Usage
</Update>
<Update label="2023-11-25" description="v0.83.0">
**New Features**
- New before_kickoff and after_kickoff crew callbacks
- Support to pre-seed agents with Knowledge
- Add support for retrieving user preferences and memories using Mem0
**Fixes**
- Fix Async Execution
- Upgrade chroma and adjust embedder function generator
- Update CLI Watson supported models + docs
- Reduce level for Bandit
- Fixing all tests
**Documentation**
- Update Docs
</Update>
<Update label="2023-11-13" description="v0.80.0">
**Fixes**
- Fixing Tokens callback replacement bug
- Fixing Step callback issue
- Add cached prompt tokens info on usage metrics
- Fix crew_train_success test
</Update>

View File

@@ -150,8 +150,6 @@ result = crew.kickoff(
Here are examples of how to use different types of knowledge sources:
Note: Please ensure that you create the ./knowledge folder. All source files (e.g., .txt, .pdf, .xlsx, .json) should be placed in this folder for centralized management.
### Text File Knowledge Source
```python
from crewai.knowledge.source.text_file_knowledge_source import TextFileKnowledgeSource
@@ -462,12 +460,12 @@ class SpaceNewsKnowledgeSource(BaseKnowledgeSource):
data = response.json()
articles = data.get('results', [])
formatted_data = self.validate_content(articles)
formatted_data = self._format_articles(articles)
return {self.api_endpoint: formatted_data}
except Exception as e:
raise ValueError(f"Failed to fetch space news: {str(e)}")
def validate_content(self, articles: list) -> str:
def _format_articles(self, articles: list) -> str:
"""Format articles into readable text."""
formatted = "Space News Articles:\n\n"
for article in articles:
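Pieced together from this hunk, the renamed helper is now a private formatting step; a hedged reconstruction of the surrounding class (the scaffolding and endpoint are assumed from the docs context, only the renamed lines appear in the diff):

```python
import requests
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource

class SpaceNewsKnowledgeSource(BaseKnowledgeSource):
    """Knowledge source that fetches recent space news articles."""

    api_endpoint: str = "https://api.spaceflightnewsapi.net/v4/articles"

    def load_content(self) -> dict:
        try:
            response = requests.get(self.api_endpoint)
            response.raise_for_status()
            data = response.json()
            articles = data.get("results", [])
            # Renamed here: validate_content -> _format_articles, marking it
            # as an internal helper rather than part of the public surface.
            formatted_data = self._format_articles(articles)
            return {self.api_endpoint: formatted_data}
        except Exception as e:
            raise ValueError(f"Failed to fetch space news: {str(e)}")

    def _format_articles(self, articles: list) -> str:
        """Format articles into readable text."""
        formatted = "Space News Articles:\n\n"
        for article in articles:
            formatted += f"- {article.get('title', 'Untitled')}\n"
        return formatted
```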

View File

@@ -158,11 +158,7 @@ In this section, you'll find detailed examples that help you select, configure,
<Accordion title="Anthropic">
```toml Code
# Required
ANTHROPIC_API_KEY=sk-ant-...
# Optional
ANTHROPIC_API_BASE=<custom-base-url>
```
Example usage in your CrewAI project:
@@ -254,40 +250,6 @@ In this section, you'll find detailed examples that help you select, configure,
model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
)
```
Before using Amazon Bedrock, make sure you have `boto3` installed in your environment.
[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API, enabling secure and responsible AI application development.
| Model | Context Window | Best For |
|-------------------------|----------------------|-------------------------------------------------------------------|
| Amazon Nova Pro | Up to 300k tokens | High-performance model balancing accuracy, speed, and cost-effectiveness across diverse tasks. |
| Amazon Nova Micro | Up to 128k tokens | High-performance, cost-effective text-only model optimized for lowest latency responses. |
| Amazon Nova Lite | Up to 300k tokens | High-performance, affordable multimodal processing for images, video, and text with real-time capabilities. |
| Claude 3.7 Sonnet | Up to 128k tokens | High-performance, best for complex reasoning, coding & AI agents |
| Claude 3.5 Sonnet v2 | Up to 200k tokens | State-of-the-art model specialized in software engineering, agentic capabilities, and computer interaction at optimized cost. |
| Claude 3.5 Sonnet | Up to 200k tokens | High-performance model delivering superior intelligence and reasoning across diverse tasks with optimal speed-cost balance. |
| Claude 3.5 Haiku | Up to 200k tokens | Fast, compact multimodal model optimized for quick responses and seamless human-like interactions |
| Claude 3 Sonnet | Up to 200k tokens | Multimodal model balancing intelligence and speed for high-volume deployments. |
| Claude 3 Haiku | Up to 200k tokens | Compact, high-speed multimodal model optimized for quick responses and natural conversational interactions |
| Claude 3 Opus | Up to 200k tokens | Most advanced multimodal model excelling at complex tasks with human-like reasoning and superior contextual understanding. |
| Claude 2.1 | Up to 200k tokens | Enhanced version with expanded context window, improved reliability, and reduced hallucinations for long-form and RAG applications |
| Claude | Up to 100k tokens | Versatile model excelling in sophisticated dialogue, creative content, and precise instruction following. |
| Claude Instant | Up to 100k tokens | Fast, cost-effective model for everyday tasks like dialogue, analysis, summarization, and document Q&A |
| Llama 3.1 405B Instruct | Up to 128k tokens | Advanced LLM for synthetic data generation, distillation, and inference for chatbots, coding, and domain-specific tasks. |
| Llama 3.1 70B Instruct | Up to 128k tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| Llama 3.1 8B Instruct | Up to 128k tokens | Advanced state-of-the-art model with language understanding, superior reasoning, and text generation. |
| Llama 3 70B Instruct | Up to 8k tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| Llama 3 8B Instruct | Up to 8k tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| Titan Text G1 - Lite | Up to 4k tokens | Lightweight, cost-effective model optimized for English tasks and fine-tuning with focus on summarization and content generation. |
| Titan Text G1 - Express | Up to 8k tokens | Versatile model for general language tasks, chat, and RAG applications with support for English and 100+ languages. |
| Cohere Command | Up to 4k tokens | Model specialized in following user commands and delivering practical enterprise solutions. |
| Jurassic-2 Mid | Up to 8,191 tokens | Cost-effective model balancing quality and affordability for diverse language tasks like Q&A, summarization, and content generation. |
| Jurassic-2 Ultra | Up to 8,191 tokens | Model for advanced text generation and comprehension, excelling in complex tasks like analysis and content creation. |
| Jamba-Instruct | Up to 256k tokens | Model with extended context window optimized for cost-effective text generation, summarization, and Q&A. |
| Mistral 7B Instruct | Up to 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| Mistral 8x7B Instruct | Up to 32k tokens | An MoE LLM that follows instructions, completes requests, and generates creative text. |
</Accordion>
<Accordion title="Amazon SageMaker">
@@ -406,46 +368,6 @@ In this section, you'll find detailed examples that help you select, configure,
| baichuan-inc/baichuan2-13b-chat | 4,096 tokens | Support Chinese and English chat, coding, math, instruction following, solving quizzes |
</Accordion>
<Accordion title="Local NVIDIA NIM Deployed using WSL2">
NVIDIA NIM enables you to run powerful LLMs locally on your Windows machine using WSL2 (Windows Subsystem for Linux).
This approach allows you to leverage your NVIDIA GPU for private, secure, and cost-effective AI inference without relying on cloud services.
Perfect for development, testing, or production scenarios where data privacy or offline capabilities are required.
Here is a step-by-step guide to setting up a local NVIDIA NIM model:
1. Follow installation instructions from [NVIDIA Website](https://docs.nvidia.com/nim/wsl2/latest/getting-started.html)
2. Install the local model. For Llama 3.1-8b follow [instructions](https://build.nvidia.com/meta/llama-3_1-8b-instruct/deploy)
3. Configure your crewai local models:
```python Code
from crewai.llm import LLM
local_nvidia_nim_llm = LLM(
model="openai/meta/llama-3.1-8b-instruct", # it's an openai-api compatible model
base_url="http://localhost:8000/v1",
api_key="<your_api_key|any text if you have not configured it>", # api_key is required, but you can use any text
)
# Then you can use it in your crew:
@CrewBase
class MyCrew():
# ...
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'],
llm=local_nvidia_nim_llm
)
# ...
```
</Accordion>
<Accordion title="Groq">
Set the following environment variables in your `.env` file:
@@ -786,5 +708,5 @@ Learn how to get the most out of your LLM configuration:
<Tip>
Use larger context models for extensive tasks
</Tip>
</Tab>
</Tabs>
```

View File

@@ -60,8 +60,7 @@ my_crew = Crew(
```python Code
from crewai import Crew, Process
from crewai.memory import LongTermMemory, ShortTermMemory, EntityMemory
from crewai.memory.storage.rag_storage import RAGStorage
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
from crewai.memory.storage import LTMSQLiteStorage, RAGStorage
from typing import List, Optional
# Assemble your crew with memory capabilities
@@ -120,7 +119,7 @@ Example using environment variables:
import os
from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
from crewai.memory.storage import LTMSQLiteStorage
# Configure storage path using environment variable
storage_path = os.getenv("CREWAI_STORAGE_DIR", "./storage")
@@ -149,7 +148,7 @@ crew = Crew(memory=True) # Uses default storage locations
```python
from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
from crewai.memory.storage import LTMSQLiteStorage
# Configure custom storage paths
crew = Crew(
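The consolidated import in use — a minimal sketch, assuming `LTMSQLiteStorage` keeps its `db_path` parameter as shown elsewhere on this page:

```python
import os

from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage import LTMSQLiteStorage  # consolidated import path

# Resolve the storage directory the same way the surrounding docs do
storage_path = os.getenv("CREWAI_STORAGE_DIR", "./storage")

crew = Crew(
    memory=True,
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(db_path=f"{storage_path}/memory.db")
    ),
)
```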

View File

@@ -106,7 +106,6 @@ Here is a list of the available tools and their descriptions:
| Tool | Description |
| :------------------------------- | :--------------------------------------------------------------------------------------------- |
| **ApifyActorsTool** | A tool that integrates Apify Actors with your workflows for web scraping and automation tasks. |
| **BrowserbaseLoadTool** | A tool for interacting with and extracting data from web browsers. |
| **CodeDocsSearchTool** | A RAG tool optimized for searching through code documentation and related technical documents. |
| **CodeInterpreterTool** | A tool for interpreting python code. |

View File

@@ -1,223 +0,0 @@
{
"$schema": "https://mintlify.com/docs.json",
"theme": "palm",
"name": "CrewAI",
"colors": {
"primary": "#EB6658",
"light": "#F3A78B",
"dark": "#C94C3C"
},
"favicon": "favicon.svg",
"navigation": {
"tabs": [
{
"tab": "Get Started",
"groups": [
{
"group": "Get Started",
"pages": [
"introduction",
"installation",
"quickstart",
"changelog"
]
},
{
"group": "Guides",
"pages": [
{
"group": "Concepts",
"pages": [
"guides/concepts/evaluating-use-cases"
]
},
{
"group": "Agents",
"pages": [
"guides/agents/crafting-effective-agents"
]
},
{
"group": "Crews",
"pages": [
"guides/crews/first-crew"
]
},
{
"group": "Flows",
"pages": [
"guides/flows/first-flow",
"guides/flows/mastering-flow-state"
]
},
{
"group": "Advanced",
"pages": [
"guides/advanced/customizing-prompts",
"guides/advanced/fingerprinting"
]
}
]
},
{
"group": "Core Concepts",
"pages": [
"concepts/agents",
"concepts/tasks",
"concepts/crews",
"concepts/flows",
"concepts/knowledge",
"concepts/llms",
"concepts/processes",
"concepts/collaboration",
"concepts/training",
"concepts/memory",
"concepts/planning",
"concepts/testing",
"concepts/cli",
"concepts/tools",
"concepts/event-listener",
"concepts/langchain-tools",
"concepts/llamaindex-tools"
]
},
{
"group": "How to Guides",
"pages": [
"how-to/create-custom-tools",
"how-to/sequential-process",
"how-to/hierarchical-process",
"how-to/custom-manager-agent",
"how-to/llm-connections",
"how-to/customizing-agents",
"how-to/multimodal-agents",
"how-to/coding-agents",
"how-to/force-tool-output-as-result",
"how-to/human-input-on-execution",
"how-to/kickoff-async",
"how-to/kickoff-for-each",
"how-to/replay-tasks-from-latest-crew-kickoff",
"how-to/conditional-tasks",
"how-to/agentops-observability",
"how-to/langtrace-observability",
"how-to/mlflow-observability",
"how-to/openlit-observability",
"how-to/portkey-observability",
"how-to/langfuse-observability"
]
},
{
"group": "Tools",
"pages": [
"tools/aimindtool",
"tools/apifyactorstool",
"tools/bravesearchtool",
"tools/browserbaseloadtool",
"tools/codedocssearchtool",
"tools/codeinterpretertool",
"tools/composiotool",
"tools/csvsearchtool",
"tools/dalletool",
"tools/directorysearchtool",
"tools/directoryreadtool",
"tools/docxsearchtool",
"tools/exasearchtool",
"tools/filereadtool",
"tools/filewritetool",
"tools/firecrawlcrawlwebsitetool",
"tools/firecrawlscrapewebsitetool",
"tools/firecrawlsearchtool",
"tools/githubsearchtool",
"tools/hyperbrowserloadtool",
"tools/linkupsearchtool",
"tools/llamaindextool",
"tools/serperdevtool",
"tools/s3readertool",
"tools/s3writertool",
"tools/scrapegraphscrapetool",
"tools/scrapeelementfromwebsitetool",
"tools/jsonsearchtool",
"tools/mdxsearchtool",
"tools/mysqltool",
"tools/multiontool",
"tools/nl2sqltool",
"tools/patronustools",
"tools/pdfsearchtool",
"tools/pgsearchtool",
"tools/qdrantvectorsearchtool",
"tools/ragtool",
"tools/scrapewebsitetool",
"tools/scrapflyscrapetool",
"tools/seleniumscrapingtool",
"tools/snowflakesearchtool",
"tools/spidertool",
"tools/txtsearchtool",
"tools/visiontool",
"tools/weaviatevectorsearchtool",
"tools/websitesearchtool",
"tools/xmlsearchtool",
"tools/youtubechannelsearchtool",
"tools/youtubevideosearchtool"
]
},
{
"group": "Telemetry",
"pages": [
"telemetry"
]
}
]
},
{
"tab": "Examples",
"groups": [
{
"group": "Examples",
"pages": [
"examples/example"
]
}
]
}
],
"global": {
"anchors": [
{
"anchor": "Community",
"href": "https://community.crewai.com",
"icon": "discourse"
}
]
}
},
"logo": {
"light": "crew_only_logo.png",
"dark": "crew_only_logo.png"
},
"appearance": {
"default": "dark",
"strict": false
},
"navbar": {
"primary": {
"type": "github",
"href": "https://github.com/crewAIInc/crewAI"
}
},
"search": {
"prompt": "Search CrewAI docs"
},
"seo": {
"indexing": "navigable"
},
"footer": {
"socials": {
"website": "https://crewai.com",
"x": "https://x.com/crewAIInc",
"github": "https://github.com/crewAIInc/crewAI",
"linkedin": "https://www.linkedin.com/company/crewai-inc",
"youtube": "https://youtube.com/@crewAIInc",
"reddit": "https://www.reddit.com/r/crewAIInc/"
}
}
}

View File

@@ -1,5 +1,4 @@
---
title: Customizing Prompts
---title: Customizing Prompts
description: Dive deeper into low-level prompt customization for CrewAI, enabling super custom and complex use cases for different models and languages.
icon: message-pen
---

docs/mint.json (new file, 223 lines)
View File

@@ -0,0 +1,223 @@
{
"name": "CrewAI",
"theme": "venus",
"logo": {
"dark": "crew_only_logo.png",
"light": "crew_only_logo.png"
},
"favicon": "favicon.svg",
"colors": {
"primary": "#EB6658",
"light": "#F3A78B",
"dark": "#C94C3C",
"anchors": {
"from": "#737373",
"to": "#EB6658"
}
},
"seo": {
"indexHiddenPages": false
},
"modeToggle": {
"default": "dark",
"isHidden": false
},
"feedback": {
"suggestEdit": true,
"raiseIssue": true,
"thumbsRating": true
},
"topbarCtaButton": {
"type": "github",
"url": "https://github.com/crewAIInc/crewAI"
},
"primaryTab": {
"name": "Get Started"
},
"tabs": [
{
"name": "Examples",
"url": "examples"
}
],
"anchors": [
{
"name": "Community",
"icon": "discourse",
"url": "https://community.crewai.com"
},
{
"name": "Changelog",
"icon": "timeline",
"url": "https://github.com/crewAIInc/crewAI/releases"
}
],
"navigation": [
{
"group": "Get Started",
"pages": [
"introduction",
"installation",
"quickstart"
]
},
{
"group": "Guides",
"pages": [
{
"group": "Concepts",
"pages": [
"guides/concepts/evaluating-use-cases"
]
},
{
"group": "Agents",
"pages": [
"guides/agents/crafting-effective-agents"
]
},
{
"group": "Crews",
"pages": [
"guides/crews/first-crew"
]
},
{
"group": "Flows",
"pages": [
"guides/flows/first-flow",
"guides/flows/mastering-flow-state"
]
},
{
"group": "Advanced",
"pages": [
"guides/advanced/customizing-prompts",
"guides/advanced/fingerprinting"
]
}
]
},
{
"group": "Core Concepts",
"pages": [
"concepts/agents",
"concepts/tasks",
"concepts/crews",
"concepts/flows",
"concepts/knowledge",
"concepts/llms",
"concepts/processes",
"concepts/collaboration",
"concepts/training",
"concepts/memory",
"concepts/planning",
"concepts/testing",
"concepts/cli",
"concepts/tools",
"concepts/langchain-tools",
"concepts/llamaindex-tools"
]
},
{
"group": "How to Guides",
"pages": [
"how-to/create-custom-tools",
"how-to/sequential-process",
"how-to/hierarchical-process",
"how-to/custom-manager-agent",
"how-to/llm-connections",
"how-to/customizing-agents",
"how-to/multimodal-agents",
"how-to/coding-agents",
"how-to/force-tool-output-as-result",
"how-to/human-input-on-execution",
"how-to/kickoff-async",
"how-to/kickoff-for-each",
"how-to/replay-tasks-from-latest-crew-kickoff",
"how-to/conditional-tasks",
"how-to/agentops-observability",
"how-to/langtrace-observability",
"how-to/mlflow-observability",
"how-to/openlit-observability",
"how-to/portkey-observability",
"how-to/langfuse-observability"
]
},
{
"group": "Examples",
"pages": [
"examples/example"
]
},
{
"group": "Tools",
"pages": [
"tools/aimindtool",
"tools/bravesearchtool",
"tools/browserbaseloadtool",
"tools/codedocssearchtool",
"tools/codeinterpretertool",
"tools/composiotool",
"tools/csvsearchtool",
"tools/dalletool",
"tools/directorysearchtool",
"tools/directoryreadtool",
"tools/docxsearchtool",
"tools/exasearchtool",
"tools/filereadtool",
"tools/filewritetool",
"tools/firecrawlcrawlwebsitetool",
"tools/firecrawlscrapewebsitetool",
"tools/firecrawlsearchtool",
"tools/githubsearchtool",
"tools/hyperbrowserloadtool",
"tools/linkupsearchtool",
"tools/llamaindextool",
"tools/serperdevtool",
"tools/s3readertool",
"tools/s3writertool",
"tools/scrapegraphscrapetool",
"tools/scrapeelementfromwebsitetool",
"tools/jsonsearchtool",
"tools/mdxsearchtool",
"tools/mysqltool",
"tools/multiontool",
"tools/nl2sqltool",
"tools/patronustools",
"tools/pdfsearchtool",
"tools/pgsearchtool",
"tools/qdrantvectorsearchtool",
"tools/ragtool",
"tools/scrapewebsitetool",
"tools/scrapflyscrapetool",
"tools/seleniumscrapingtool",
"tools/snowflakesearchtool",
"tools/spidertool",
"tools/txtsearchtool",
"tools/visiontool",
"tools/weaviatevectorsearchtool",
"tools/websitesearchtool",
"tools/xmlsearchtool",
"tools/youtubechannelsearchtool",
"tools/youtubevideosearchtool"
]
},
{
"group": "Telemetry",
"pages": [
"telemetry"
]
}
],
"search": {
"prompt": "Search CrewAI docs"
},
"footerSocials": {
"website": "https://crewai.com",
"x": "https://x.com/crewAIInc",
"github": "https://github.com/crewAIInc/crewAI",
"linkedin": "https://www.linkedin.com/company/crewai-inc",
"youtube": "https://youtube.com/@crewAIInc"
}
}

View File

@@ -300,7 +300,7 @@ email_summarizer:
```
<Tip>
Note how we use the same name for the task in the `tasks.yaml` (`email_summarizer_task`) file as the method name in the `crew.py` (`email_summarizer_task`) file.
Note how we use the same name for the agent in the `tasks.yaml` (`email_summarizer_task`) file as the method name in the `crew.py` (`email_summarizer_task`) file.
</Tip>
```yaml tasks.yaml

View File

@@ -1,99 +0,0 @@
---
title: Apify Actors
description: "`ApifyActorsTool` lets you call Apify Actors to provide your CrewAI workflows with web scraping, crawling, data extraction, and web automation capabilities."
# hack to use custom Apify icon
icon: "); -webkit-mask-image: url('https://upload.wikimedia.org/wikipedia/commons/a/ae/Apify.svg');/*"
---
# `ApifyActorsTool`
Integrate [Apify Actors](https://apify.com/actors) into your CrewAI workflows.
## Description
The `ApifyActorsTool` connects [Apify Actors](https://apify.com/actors), cloud-based programs for web scraping and automation, to your CrewAI workflows.
Use any of the 4,000+ Actors on [Apify Store](https://apify.com/store) for use cases such as extracting data from social media, search engines, online maps, e-commerce sites, travel portals, or general websites.
For details, see the [Apify CrewAI integration](https://docs.apify.com/platform/integrations/crewai) in Apify documentation.
## Steps to get started
<Steps>
<Step title="Install dependencies">
Install `crewai[tools]` and `langchain-apify` using pip: `pip install 'crewai[tools]' langchain-apify`.
</Step>
<Step title="Obtain an Apify API token">
Sign up for the [Apify Console](https://console.apify.com/) and get your [Apify API token](https://console.apify.com/settings/integrations).
</Step>
<Step title="Configure environment">
Set your Apify API token as the `APIFY_API_TOKEN` environment variable to enable the tool's functionality.
</Step>
</Steps>
## Usage example
Use the `ApifyActorsTool` manually to run the [RAG Web Browser Actor](https://apify.com/apify/rag-web-browser) to perform a web search:
```python
from crewai_tools import ApifyActorsTool
# Initialize the tool with an Apify Actor
tool = ApifyActorsTool(actor_name="apify/rag-web-browser")
# Run the tool with input parameters
results = tool.run(run_input={"query": "What is CrewAI?", "maxResults": 5})
# Process the results
for result in results:
print(f"URL: {result['metadata']['url']}")
print(f"Content: {result.get('markdown', 'N/A')[:100]}...")
```
### Expected output
Here is the output from running the code above:
```text
URL: https://www.example.com/crewai-intro
Content: CrewAI is a framework for building AI-powered workflows...
URL: https://docs.crewai.com/
Content: Official documentation for CrewAI...
```
The `ApifyActorsTool` automatically fetches the Actor definition and input schema from Apify using the provided `actor_name` and then constructs the tool description and argument schema. This means you need to specify only a valid `actor_name`, and the tool handles the rest when used with agents—no need to specify the `run_input`. Here's how it works:
```python
from crewai import Agent
from crewai_tools import ApifyActorsTool
rag_browser = ApifyActorsTool(actor_name="apify/rag-web-browser")
agent = Agent(
role="Research Analyst",
goal="Find and summarize information about specific topics",
backstory="You are an experienced researcher with attention to detail",
tools=[rag_browser],
)
```
You can run other Actors from [Apify Store](https://apify.com/store) simply by changing the `actor_name` and, when using it manually, adjusting the `run_input` based on the Actor input schema.
For an example of usage with agents, see the [CrewAI Actor template](https://apify.com/templates/python-crewai).
## Configuration
The `ApifyActorsTool` requires these inputs to work:
- **`actor_name`**
The ID of the Apify Actor to run, e.g., `"apify/rag-web-browser"`. Browse all Actors on [Apify Store](https://apify.com/store).
- **`run_input`**
A dictionary of input parameters for the Actor when running the tool manually.
- For example, for the `apify/rag-web-browser` Actor: `{"query": "search term", "maxResults": 5}`
- See the Actor's [input schema](https://apify.com/apify/rag-web-browser/input-schema) for the list of input parameters.
## Resources
- **[Apify](https://apify.com/)**: Explore the Apify platform.
- **[How to build an AI agent on Apify](https://blog.apify.com/how-to-build-an-ai-agent/)** - A complete step-by-step guide to creating, publishing, and monetizing AI agents on the Apify platform.
- **[RAG Web Browser Actor](https://apify.com/apify/rag-web-browser)**: A popular Actor that performs web search for LLMs.
- **[CrewAI Integration Guide](https://docs.apify.com/platform/integrations/crewai)**: Follow the official guide for integrating Apify and CrewAI.

View File

@@ -7,10 +7,8 @@ icon: file-code
# `JSONSearchTool`
<Note>
The JSONSearchTool is currently in an experimental phase. This means the tool
is under active development, and users might encounter unexpected behavior or
changes. We highly encourage feedback on any issues or suggestions for
improvements.
The JSONSearchTool is currently in an experimental phase. This means the tool is under active development, and users might encounter unexpected behavior or changes.
We highly encourage feedback on any issues or suggestions for improvements.
</Note>
## Description
@@ -62,7 +60,7 @@ tool = JSONSearchTool(
# stream=true,
},
},
"embedding_model": {
"embedder": {
"provider": "google", # or openai, ollama, ...
"config": {
"model": "models/embedding-001",
@@ -72,4 +70,4 @@ tool = JSONSearchTool(
},
}
)
```
```

View File

@@ -8,8 +8,8 @@ icon: vector-square
## Description
The `RagTool` is designed to answer questions by leveraging the power of Retrieval-Augmented Generation (RAG) through EmbedChain.
It provides a dynamic knowledge base that can be queried to retrieve relevant information from various data sources.
The `RagTool` is designed to answer questions by leveraging the power of Retrieval-Augmented Generation (RAG) through EmbedChain.
It provides a dynamic knowledge base that can be queried to retrieve relevant information from various data sources.
This tool is particularly useful for applications that require access to a vast array of information and need to provide contextually relevant answers.
## Example
@@ -138,7 +138,7 @@ config = {
"model": "gpt-4",
}
},
"embedding_model": {
"embedder": {
"provider": "openai",
"config": {
"model": "text-embedding-ada-002"
@@ -151,4 +151,4 @@ rag_tool = RagTool(config=config, summarize=True)
## Conclusion
The `RagTool` provides a powerful way to create and query knowledge bases from various data sources. By leveraging Retrieval-Augmented Generation, it enables agents to access and retrieve relevant information efficiently, enhancing their ability to provide accurate and contextually appropriate responses.
The `RagTool` provides a powerful way to create and query knowledge bases from various data sources. By leveraging Retrieval-Augmented Generation, it enables agents to access and retrieve relevant information efficiently, enhancing their ability to provide accurate and contextually appropriate responses.
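With the key renamed from `embedding_model` to `embedder`, the full config from this page would read roughly as follows (a hedged sketch; the provider and model values are taken from the hunk above):

```python
from crewai_tools import RagTool

config = {
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4"},
    },
    # Renamed in this change: "embedding_model" -> "embedder"
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-ada-002"},
    },
}

rag_tool = RagTool(config=config, summarize=True)
```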

View File

@@ -1,6 +1,6 @@
[project]
name = "crewai"
version = "0.108.0"
version = "0.105.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<3.13"
@@ -17,9 +17,9 @@ dependencies = [
"pdfplumber>=0.11.4",
"regex>=2024.9.11",
# Telemetry and Monitoring
"opentelemetry-api>=1.30.0",
"opentelemetry-sdk>=1.30.0",
"opentelemetry-exporter-otlp-proto-http>=1.30.0",
"opentelemetry-api>=1.22.0",
"opentelemetry-sdk>=1.22.0",
"opentelemetry-exporter-otlp-proto-http>=1.22.0",
# Data Handling
"chromadb>=0.5.23",
"openpyxl>=3.1.5",

View File

@@ -14,7 +14,7 @@ warnings.filterwarnings(
category=UserWarning,
module="pydantic.main",
)
__version__ = "0.108.0"
__version__ = "0.105.0"
__all__ = [
"Agent",
"Crew",

View File

@@ -1,7 +1,7 @@
import re
import shutil
import subprocess
from typing import Any, Dict, List, Literal, Optional, Sequence, Union
from typing import Any, Dict, List, Literal, Optional, Sequence, Union, cast
from pydantic import Field, InstanceOf, PrivateAttr, model_validator
@@ -50,6 +50,7 @@ class Agent(BaseAgent):
max_rpm: Maximum number of requests per minute for the agent execution to be respected.
verbose: Whether the agent execution should be in verbose mode.
allow_delegation: Whether the agent is allowed to delegate tasks to other agents.
delegate_to: List of agents this agent can delegate to. If None and allow_delegation is True, can delegate to all agents.
tools: Tools at agents disposal
step_callback: Callback to be executed after each step of the agent execution.
knowledge_sources: Knowledge sources for the agent.
@@ -342,10 +343,17 @@ class Agent(BaseAgent):
callbacks=[TokenCalcHandler(self._token_process)],
)
def get_delegation_tools(self, agents: List[BaseAgent]):
agent_tools = AgentTools(agents=agents)
tools = agent_tools.tools()
return tools
def get_delegation_tools(self, agents: Sequence[BaseAgent]) -> Sequence[BaseTool]:
# If delegate_to is specified, use those agents instead of all agents
agents_to_use: List[BaseAgent]
if self.delegate_to is not None:
agents_to_use = cast(List[BaseAgent], list(self.delegate_to))
else:
agents_to_use = list(agents) # Convert to list to match expected type
agent_tools = AgentTools(agents=agents_to_use)
delegation_tools = agent_tools.tools()
return delegation_tools
def get_multimodal_tools(self) -> Sequence[BaseTool]:
from crewai.tools.agent_tools.add_image_tool import AddImageTool

View File

@@ -2,7 +2,7 @@ import uuid
from abc import ABC, abstractmethod
from copy import copy as shallow_copy
from hashlib import md5
from typing import Any, Dict, List, Optional, TypeVar
from typing import Any, Dict, List, Optional, Sequence, TypeVar
from pydantic import (
UUID4,
@@ -42,6 +42,7 @@ class BaseAgent(ABC, BaseModel):
verbose (bool): Verbose mode for the Agent Execution.
max_rpm (Optional[int]): Maximum number of requests per minute for the agent execution.
allow_delegation (bool): Allow delegation of tasks to agents.
delegate_to (Optional[List["BaseAgent"]]): List of agents this agent can delegate to. If None and allow_delegation is True, can delegate to all agents.
tools (Optional[List[Any]]): Tools at the agent's disposal.
max_iter (int): Maximum iterations for an agent to execute a task.
agent_executor (InstanceOf): An instance of the CrewAgentExecutor class.
@@ -63,7 +64,7 @@ class BaseAgent(ABC, BaseModel):
Abstract method to create an agent executor.
_parse_tools(tools: List[BaseTool]) -> List[Any]:
Abstract method to parse tools.
get_delegation_tools(agents: List["BaseAgent"]):
get_delegation_tools(agents: Sequence["BaseAgent"]) -> Sequence[BaseTool]:
Abstract method to set the agent's task tools for handling delegation and asking questions of other agents in the crew.
get_output_converter(llm, model, instructions):
Abstract method to get the converter class for the agent to create json/pydantic outputs.
@@ -113,6 +114,10 @@ class BaseAgent(ABC, BaseModel):
default=False,
description="Enable agent to delegate and ask questions among each other.",
)
delegate_to: Optional[List["BaseAgent"]] = Field(
default=None,
description="List of agents this agent can delegate to. If None and allow_delegation is True, can delegate to all agents.",
)
tools: Optional[List[BaseTool]] = Field(
default_factory=list, description="Tools at agents' disposal"
)
@@ -258,7 +263,7 @@ class BaseAgent(ABC, BaseModel):
pass
@abstractmethod
def get_delegation_tools(self, agents: List["BaseAgent"]) -> List[BaseTool]:
def get_delegation_tools(self, agents: Sequence["BaseAgent"]) -> Sequence[BaseTool]:
"""Set the task tools that init BaseAgenTools class."""
pass
@@ -285,6 +290,7 @@ class BaseAgent(ABC, BaseModel):
"knowledge_sources",
"knowledge_storage",
"knowledge",
"delegate_to",
}
# Copy llm
@@ -310,6 +316,10 @@ class BaseAgent(ABC, BaseModel):
copied_source.storage = shared_storage
existing_knowledge_sources.append(copied_source)
existing_delegate_to = None
if self.delegate_to:
existing_delegate_to = list(self.delegate_to)
copied_data = self.model_dump(exclude=exclude)
copied_data = {k: v for k, v in copied_data.items() if v is not None}
copied_agent = type(self)(
@@ -319,6 +329,7 @@ class BaseAgent(ABC, BaseModel):
knowledge_sources=existing_knowledge_sources,
knowledge=copied_knowledge,
knowledge_storage=copied_knowledge_storage,
delegate_to=existing_delegate_to,
)
return copied_agent
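Taken together with the `Agent` changes above, the new field suggests a usage pattern like this — a minimal sketch, assuming `delegate_to` is exposed on the public `Agent` constructor as the diff indicates:

```python
from crewai import Agent

writer = Agent(role="Writer", goal="Draft copy", backstory="Clear communicator")

reviewer = Agent(
    role="Reviewer",
    goal="Check drafts",
    backstory="Detail-oriented editor",
    allow_delegation=True,
    # Restrict delegation to the writer only; with delegate_to=None
    # (the default) the reviewer could delegate to every agent in the crew.
    delegate_to=[writer],
)
```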

View File

@@ -124,9 +124,9 @@ class CrewAgentParser:
)
def _extract_thought(self, text: str) -> str:
thought_index = text.find("\nAction")
thought_index = text.find("\n\nAction")
if thought_index == -1:
thought_index = text.find("\nFinal Answer")
thought_index = text.find("\n\nFinal Answer")
if thought_index == -1:
return ""
thought = text[:thought_index].strip()
@@ -136,7 +136,7 @@ class CrewAgentParser:
def _clean_action(self, text: str) -> str:
"""Clean action string by removing non-essential formatting characters."""
return text.strip().strip("*").strip()
return re.sub(r"^\s*\*+\s*|\s*\*+\s*$", "", text).strip()
def _safe_repair_json(self, tool_input: str) -> str:
UNABLE_TO_REPAIR_JSON_RESULTS = ['""', "{}"]
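For context, the cleaner change swaps chained `strip` calls for a single regex pass; a quick hedged illustration with hypothetical inputs:

```python
import re

def clean_action(text: str) -> str:
    # One pass: drop a leading and a trailing run of asterisks together
    # with the whitespace around them, then trim the remainder.
    return re.sub(r"^\s*\*+\s*|\s*\*+\s*$", "", text).strip()

assert clean_action("** Search the web **") == "Search the web"
assert clean_action("* Delegate work to coworker") == "Delegate work to coworker"
assert clean_action("Search the web") == "Search the web"
```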

View File

@@ -1,5 +1,4 @@
import subprocess
from functools import lru_cache
class Repository:
@@ -36,7 +35,6 @@ class Repository:
encoding="utf-8",
).strip()
@lru_cache(maxsize=None)
def is_git_repo(self) -> bool:
"""Check if the current directory is a git repository."""
try:

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.108.0,<1.0.0"
"crewai[tools]>=0.105.0,<1.0.0"
]
[project.scripts]

View File

@@ -5,12 +5,11 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.108.0,<1.0.0",
"crewai[tools]>=0.105.0,<1.0.0",
]
[project.scripts]
kickoff = "{{folder_name}}.main:kickoff"
run_crew = "{{folder_name}}.main:kickoff"
plot = "{{folder_name}}.main:plot"
[build-system]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.108.0"
"crewai[tools]>=0.105.0"
]
[tool.crewai]

View File

@@ -6,7 +6,7 @@ import warnings
from concurrent.futures import Future
from copy import copy as shallow_copy
from hashlib import md5
from typing import Any, Callable, Dict, List, Optional, Set, Tuple, Union
from typing import Any, Callable, Dict, List, Optional, Sequence, Set, Tuple, Union
from pydantic import (
UUID4,
@@ -36,6 +36,7 @@ from crewai.security import Fingerprint, SecurityConfig
from crewai.task import Task
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
from crewai.tools import BaseTool
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.tools.base_tool import Tool
from crewai.types.usage_metrics import UsageMetrics
@@ -759,22 +760,27 @@ class Crew(BaseModel):
def _create_manager_agent(self):
i18n = I18N(prompt_file=self.prompt_file)
if self.manager_agent is not None:
# Ensure delegation is enabled for the manager agent
self.manager_agent.allow_delegation = True
# Set the delegate_to property to all agents in the crew
# If delegate_to is already set, it will be used instead of all agents
if self.manager_agent.delegate_to is None:
self.manager_agent.delegate_to = self.agents
manager = self.manager_agent
if manager.tools is not None and len(manager.tools) > 0:
self._logger.log(
"warning", "Manager agent should not have tools", color="orange"
)
manager.tools = []
raise Exception("Manager agent should not have tools")
else:
self.manager_llm = create_llm(self.manager_llm)
# Create delegation tools
delegation_tools = AgentTools(agents=self.agents).tools()
manager = Agent(
role=i18n.retrieve("hierarchical_manager_agent", "role"),
goal=i18n.retrieve("hierarchical_manager_agent", "goal"),
backstory=i18n.retrieve("hierarchical_manager_agent", "backstory"),
tools=AgentTools(agents=self.agents).tools(),
tools=delegation_tools,
allow_delegation=True,
delegate_to=self.agents,
llm=self.manager_llm,
verbose=self.verbose,
)
@@ -818,8 +824,8 @@ class Crew(BaseModel):
)
# Determine which tools to use - task tools take precedence over agent tools
tools_for_task = task.tools or agent_to_use.tools or []
tools_for_task = self._prepare_tools(agent_to_use, task, tools_for_task)
initial_tools = task.tools or agent_to_use.tools or []
prepared_tools = self._prepare_tools(agent_to_use, task, initial_tools)
self._log_task_start(task, agent_to_use.role)
@@ -838,7 +844,7 @@ class Crew(BaseModel):
future = task.execute_async(
agent=agent_to_use,
context=context,
tools=tools_for_task,
tools=prepared_tools,
)
futures.append((task, future, task_index))
else:
@@ -850,7 +856,7 @@ class Crew(BaseModel):
task_output = task.execute_sync(
agent=agent_to_use,
context=context,
tools=tools_for_task,
tools=prepared_tools,
)
task_outputs.append(task_output)
self._process_task_result(task, task_output)
@@ -888,8 +894,8 @@ class Crew(BaseModel):
return None
def _prepare_tools(
self, agent: BaseAgent, task: Task, tools: List[Tool]
) -> List[Tool]:
self, agent: BaseAgent, task: Task, tools: Sequence[BaseTool]
) -> list[BaseTool]:
# Add delegation tools if agent allows delegation
if agent.allow_delegation:
if self.process == Process.hierarchical:
@@ -904,13 +910,15 @@ class Crew(BaseModel):
tools = self._add_delegation_tools(task, tools)
# Add code execution tools if agent allows code execution
if agent.allow_code_execution:
if hasattr(agent, "allow_code_execution") and getattr(
agent, "allow_code_execution", False
):
tools = self._add_code_execution_tools(agent, tools)
if agent and agent.multimodal:
if hasattr(agent, "multimodal") and getattr(agent, "multimodal", False):
tools = self._add_multimodal_tools(agent, tools)
return tools
return list(tools)
def _get_agent_to_use(self, task: Task) -> Optional[BaseAgent]:
if self.process == Process.hierarchical:
@@ -918,8 +926,8 @@ class Crew(BaseModel):
return task.agent
def _merge_tools(
self, existing_tools: List[Tool], new_tools: List[Tool]
) -> List[Tool]:
self, existing_tools: Sequence[BaseTool], new_tools: Sequence[BaseTool]
) -> Sequence[BaseTool]:
"""Merge new tools into existing tools list, avoiding duplicates by tool name."""
if not new_tools:
return existing_tools
@@ -936,21 +944,42 @@ class Crew(BaseModel):
return tools
def _inject_delegation_tools(
self, tools: List[Tool], task_agent: BaseAgent, agents: List[BaseAgent]
self,
tools: Sequence[BaseTool],
task_agent: BaseAgent,
agents: Sequence[BaseAgent],
):
delegation_tools = task_agent.get_delegation_tools(agents)
return self._merge_tools(tools, delegation_tools)
def _add_multimodal_tools(self, agent: BaseAgent, tools: List[Tool]):
multimodal_tools = agent.get_multimodal_tools()
return self._merge_tools(tools, multimodal_tools)
def _add_multimodal_tools(
self, agent: BaseAgent, tools: Sequence[BaseTool]
) -> Sequence[BaseTool]:
if hasattr(agent, "get_multimodal_tools"):
multimodal_tools = getattr(agent, "get_multimodal_tools")()
return self._merge_tools(tools, multimodal_tools)
return tools
def _add_code_execution_tools(self, agent: BaseAgent, tools: List[Tool]):
code_tools = agent.get_code_execution_tools()
return self._merge_tools(tools, code_tools)
def _add_code_execution_tools(
self, agent: BaseAgent, tools: Sequence[BaseTool]
) -> Sequence[BaseTool]:
if hasattr(agent, "get_code_execution_tools"):
code_tools = getattr(agent, "get_code_execution_tools")()
return self._merge_tools(tools, code_tools)
return tools
def _add_delegation_tools(
self, task: Task, tools: Sequence[BaseTool]
) -> Sequence[BaseTool]:
# If the agent has specific agents to delegate to, use those
if task.agent and task.agent.delegate_to is not None:
agents_for_delegation = task.agent.delegate_to
else:
# Otherwise use all agents except the current one
agents_for_delegation = [
agent for agent in self.agents if agent != task.agent
]
def _add_delegation_tools(self, task: Task, tools: List[Tool]):
agents_for_delegation = [agent for agent in self.agents if agent != task.agent]
if len(self.agents) > 1 and len(agents_for_delegation) > 0 and task.agent:
if not tools:
tools = []
@@ -965,7 +994,7 @@ class Crew(BaseModel):
task_name=task.name, task=task.description, agent=role, status="started"
)
def _update_manager_tools(self, task: Task, tools: List[Tool]):
def _update_manager_tools(self, task: Task, tools: Sequence[BaseTool]):
if self.manager_agent:
if task.agent:
tools = self._inject_delegation_tools(tools, task.agent, [task.agent])
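Two behavior changes are worth calling out in this file: a custom manager agent that carries tools now raises instead of having them silently cleared, and delegation tools are derived from `delegate_to` when it is set. A hedged sketch of the hierarchical setup this implies (agent and task values are illustrative):

```python
from crewai import Agent, Crew, Process, Task

researcher = Agent(role="Researcher", goal="Dig up facts", backstory="Curious analyst")
writer = Agent(role="Writer", goal="Draft reports", backstory="Clear communicator")

# The custom manager must not carry tools of its own; with this change
# _create_manager_agent raises instead of logging a warning and clearing them.
manager = Agent(role="Project Manager", goal="Coordinate the team", backstory="Seasoned lead")

crew = Crew(
    agents=[researcher, writer],
    manager_agent=manager,  # delegate_to is filled with crew.agents if unset
    process=Process.hierarchical,
    tasks=[Task(description="Produce a short report", expected_output="A report")],
)
```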

View File

@@ -1,7 +1,7 @@
import os
from typing import Any, Dict, List
from mem0 import Memory, MemoryClient
from mem0 import MemoryClient
from crewai.memory.storage.interface import Storage
@@ -32,16 +32,13 @@ class Mem0Storage(Storage):
mem0_org_id = config.get("org_id")
mem0_project_id = config.get("project_id")
# Initialize MemoryClient or Memory based on the presence of the mem0_api_key
if mem0_api_key:
if mem0_org_id and mem0_project_id:
self.memory = MemoryClient(
api_key=mem0_api_key, org_id=mem0_org_id, project_id=mem0_project_id
)
else:
self.memory = MemoryClient(api_key=mem0_api_key)
# Initialize MemoryClient with available parameters
if mem0_org_id and mem0_project_id:
self.memory = MemoryClient(
api_key=mem0_api_key, org_id=mem0_org_id, project_id=mem0_project_id
)
else:
self.memory = Memory() # Fallback to Memory if no Mem0 API key is provided
self.memory = MemoryClient(api_key=mem0_api_key)
def _sanitize_role(self, role: str) -> str:
"""

View File

@@ -19,8 +19,6 @@ from typing import (
Tuple,
Type,
Union,
get_args,
get_origin,
)
from pydantic import (
@@ -180,29 +178,15 @@ class Task(BaseModel):
"""
if v is not None:
sig = inspect.signature(v)
positional_args = [
param
for param in sig.parameters.values()
if param.default is inspect.Parameter.empty
]
if len(positional_args) != 1:
if len(sig.parameters) != 1:
raise ValueError("Guardrail function must accept exactly one parameter")
# Check return annotation if present, but don't require it
return_annotation = sig.return_annotation
if return_annotation != inspect.Signature.empty:
return_annotation_args = get_args(return_annotation)
if not (
get_origin(return_annotation) is tuple
and len(return_annotation_args) == 2
and return_annotation_args[0] is bool
and (
return_annotation_args[1] is Any
or return_annotation_args[1] is str
or return_annotation_args[1] is TaskOutput
or return_annotation_args[1] == Union[str, TaskOutput]
)
return_annotation == Tuple[bool, Any]
or str(return_annotation) == "Tuple[bool, Any]"
):
raise ValueError(
"If return type is annotated, it must be Tuple[bool, Any]"

View File

@@ -281,16 +281,8 @@ class Telemetry:
return self._safe_telemetry_operation(operation)
def task_ended(self, span: Span, task: Task, crew: Crew):
"""Records the completion of a task execution in a crew.
"""Records task execution in a crew."""
Args:
span (Span): The OpenTelemetry span tracking the task execution
task (Task): The task that was completed
crew (Crew): The crew context in which the task was executed
Note:
If share_crew is enabled, this will also record the task output
"""
def operation():
if crew.share_crew:
self._add_attribute(
@@ -305,13 +297,8 @@ class Telemetry:
self._safe_telemetry_operation(operation)
def tool_repeated_usage(self, llm: Any, tool_name: str, attempts: int):
"""Records when a tool is used repeatedly, which might indicate an issue.
"""Records the repeated usage 'error' of a tool by an agent."""
Args:
llm (Any): The language model being used
tool_name (str): Name of the tool being repeatedly used
attempts (int): Number of attempts made with this tool
"""
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Repeated Usage")
@@ -330,13 +317,8 @@ class Telemetry:
self._safe_telemetry_operation(operation)
def tool_usage(self, llm: Any, tool_name: str, attempts: int):
"""Records the usage of a tool by an agent.
"""Records the usage of a tool by an agent."""
Args:
llm (Any): The language model being used
tool_name (str): Name of the tool being used
attempts (int): Number of attempts made with this tool
"""
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage")
@@ -355,11 +337,8 @@ class Telemetry:
self._safe_telemetry_operation(operation)
def tool_usage_error(self, llm: Any):
"""Records when a tool usage results in an error.
"""Records the usage of a tool by an agent."""
Args:
llm (Any): The language model being used when the error occurred
"""
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage Error")
@@ -378,14 +357,6 @@ class Telemetry:
def individual_test_result_span(
self, crew: Crew, quality: float, exec_time: int, model_name: str
):
"""Records individual test results for a crew execution.
Args:
crew (Crew): The crew being tested
quality (float): Quality score of the execution
exec_time (int): Execution time in seconds
model_name (str): Name of the model used
"""
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Individual Test Result")
@@ -412,14 +383,6 @@ class Telemetry:
inputs: dict[str, Any] | None,
model_name: str,
):
"""Records the execution of a test suite for a crew.
Args:
crew (Crew): The crew being tested
iterations (int): Number of test iterations
inputs (dict[str, Any] | None): Input parameters for the test
model_name (str): Name of the model used in testing
"""
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Test Execution")
@@ -445,7 +408,6 @@ class Telemetry:
self._safe_telemetry_operation(operation)
def deploy_signup_error_span(self):
"""Records when an error occurs during the deployment signup process."""
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Deploy Signup Error")
@@ -455,11 +417,6 @@ class Telemetry:
self._safe_telemetry_operation(operation)
def start_deployment_span(self, uuid: Optional[str] = None):
"""Records the start of a deployment process.
Args:
uuid (Optional[str]): Unique identifier for the deployment
"""
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Start Deployment")
@@ -471,7 +428,6 @@ class Telemetry:
self._safe_telemetry_operation(operation)
def create_crew_deployment_span(self):
"""Records the creation of a new crew deployment."""
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Create Crew Deployment")
@@ -481,12 +437,6 @@ class Telemetry:
self._safe_telemetry_operation(operation)
def get_crew_logs_span(self, uuid: Optional[str], log_type: str = "deployment"):
"""Records the retrieval of crew logs.
Args:
uuid (Optional[str]): Unique identifier for the crew
log_type (str, optional): Type of logs being retrieved. Defaults to "deployment".
"""
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Get Crew Logs")
@@ -499,11 +449,6 @@ class Telemetry:
self._safe_telemetry_operation(operation)
def remove_crew_span(self, uuid: Optional[str] = None):
"""Records the removal of a crew.
Args:
uuid (Optional[str]): Unique identifier for the crew being removed
"""
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Remove Crew")
@@ -629,11 +574,6 @@ class Telemetry:
self._safe_telemetry_operation(operation)
def flow_creation_span(self, flow_name: str):
"""Records the creation of a new flow.
Args:
flow_name (str): Name of the flow being created
"""
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Flow Creation")
@@ -644,12 +584,6 @@ class Telemetry:
self._safe_telemetry_operation(operation)
def flow_plotting_span(self, flow_name: str, node_names: list[str]):
"""Records flow visualization/plotting activity.
Args:
flow_name (str): Name of the flow being plotted
node_names (list[str]): List of node names in the flow
"""
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Flow Plotting")
@@ -661,12 +595,6 @@ class Telemetry:
self._safe_telemetry_operation(operation)
def flow_execution_span(self, flow_name: str, node_names: list[str]):
"""Records the execution of a flow.
Args:
flow_name (str): Name of the flow being executed
node_names (list[str]): List of nodes being executed in the flow
"""
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Flow Execution")

View File

@@ -1,3 +1,5 @@
from typing import List
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tools.base_tool import BaseTool
from crewai.utilities import I18N
@@ -9,11 +11,11 @@ from .delegate_work_tool import DelegateWorkTool
class AgentTools:
"""Manager class for agent-related tools"""
def __init__(self, agents: list[BaseAgent], i18n: I18N = I18N()):
def __init__(self, agents: List[BaseAgent], i18n: I18N = I18N()):
self.agents = agents
self.i18n = i18n
def tools(self) -> list[BaseTool]:
def tools(self) -> List[BaseTool]:
"""Get all available agent tools"""
coworkers = ", ".join([f"{agent.role}" for agent in self.agents])

View File

@@ -1,5 +1,5 @@
import logging
from typing import Optional
from typing import Optional, Sequence
from pydantic import Field
@@ -14,7 +14,7 @@ logger = logging.getLogger(__name__)
class BaseAgentTool(BaseTool):
"""Base class for agent-related tools"""
agents: list[BaseAgent] = Field(description="List of available agents")
agents: Sequence[BaseAgent] = Field(description="List of available agents")
i18n: I18N = Field(
default_factory=I18N, description="Internationalization settings"
)
@@ -47,10 +47,7 @@ class BaseAgentTool(BaseTool):
return coworker
def _execute(
self,
agent_name: Optional[str],
task: str,
context: Optional[str] = None
self, agent_name: Optional[str], task: str, context: Optional[str] = None
) -> str:
"""
Execute delegation to an agent with case-insensitive and whitespace-tolerant matching.
@@ -77,33 +74,43 @@ class BaseAgentTool(BaseTool):
# when it should look like this:
# {"task": "....", "coworker": "...."}
sanitized_name = self.sanitize_agent_name(agent_name)
logger.debug(f"Sanitized agent name from '{agent_name}' to '{sanitized_name}'")
logger.debug(
f"Sanitized agent name from '{agent_name}' to '{sanitized_name}'"
)
available_agents = [agent.role for agent in self.agents]
logger.debug(f"Available agents: {available_agents}")
agent = [ # type: ignore # Incompatible types in assignment (expression has type "list[BaseAgent]", variable has type "str | None")
agent = [ # type: ignore # Incompatible types in assignment (expression has type "Sequence[BaseAgent]", variable has type "str | None")
available_agent
for available_agent in self.agents
if self.sanitize_agent_name(available_agent.role) == sanitized_name
]
logger.debug(f"Found {len(agent)} matching agents for role '{sanitized_name}'")
logger.debug(
f"Found {len(agent)} matching agents for role '{sanitized_name}'"
)
except (AttributeError, ValueError) as e:
# Handle specific exceptions that might occur during role name processing
return self.i18n.errors("agent_tool_unexisting_coworker").format(
coworkers="\n".join(
[f"- {self.sanitize_agent_name(agent.role)}" for agent in self.agents]
[
f"- {self.sanitize_agent_name(agent.role)}"
for agent in self.agents
]
),
error=str(e)
error=str(e),
)
if not agent:
# No matching agent found after sanitization
return self.i18n.errors("agent_tool_unexisting_coworker").format(
coworkers="\n".join(
[f"- {self.sanitize_agent_name(agent.role)}" for agent in self.agents]
[
f"- {self.sanitize_agent_name(agent.role)}"
for agent in self.agents
]
),
error=f"No agent found with role '{sanitized_name}'"
error=f"No agent found with role '{sanitized_name}'",
)
agent = agent[0]
@@ -114,11 +121,12 @@ class BaseAgentTool(BaseTool):
expected_output=agent.i18n.slice("manager_request"),
i18n=agent.i18n,
)
logger.debug(f"Created task for agent '{self.sanitize_agent_name(agent.role)}': {task}")
logger.debug(
f"Created task for agent '{self.sanitize_agent_name(agent.role)}': {task}"
)
return agent.execute_task(task_with_assigned_agent, context)
except Exception as e:
# Handle task creation or execution errors
return self.i18n.errors("agent_tool_execution_error").format(
agent_role=self.sanitize_agent_name(agent.role),
error=str(e)
agent_role=self.sanitize_agent_name(agent.role), error=str(e)
)
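The `list[BaseAgent]` to `Sequence[BaseAgent]` widening is the substantive change in this file; the rest is formatting. `Sequence` is covariant and read-only, so a caller holding a list of a `BaseAgent` subclass type-checks cleanly. A stand-in illustration (runtime behavior is identical either way):

from typing import List, Sequence

class BaseAgent: ...
class Agent(BaseAgent): ...

def takes_list(agents: List[BaseAgent]) -> int:
    return len(agents)

def takes_sequence(agents: Sequence[BaseAgent]) -> int:
    return len(agents)

concrete: List[Agent] = [Agent(), Agent()]
takes_sequence(concrete)  # accepted: Sequence is covariant
takes_list(concrete)      # mypy error: list is invariant (still runs, though)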

View File

@@ -248,13 +248,18 @@ def to_langchain(
def tool(*args):
"""
Decorator to create a tool from a function.
Ensures the decorated function is always wrapped as a BaseTool.
"""
def _make_with_name(tool_name: str) -> Callable:
def _make_with_name(tool_name: str) -> Callable[[Callable], BaseTool]:
def _make_tool(f: Callable) -> BaseTool:
# If f is already a BaseTool, return it
if isinstance(f, BaseTool):
return f
if f.__doc__ is None:
raise ValueError("Function must have a docstring")
if f.__annotations__ is None:
if not f.__annotations__:
raise ValueError("Function must have type annotations")
class_name = "".join(tool_name.split()).title()
@@ -278,7 +283,10 @@ def tool(*args):
return _make_tool
if len(args) == 1 and callable(args[0]):
if isinstance(args[0], BaseTool):
return args[0]
return _make_with_name(args[0].__name__)(args[0])
if len(args) == 1 and isinstance(args[0], str):
elif len(args) == 1 and isinstance(args[0], str):
return _make_with_name(args[0])
raise ValueError("Invalid arguments")
else:
raise ValueError("Invalid arguments")
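The practical effect of the new `isinstance(f, BaseTool)` early returns: wrapping something that is already a tool is now a no-op rather than an error. Typical decorator usage, which still requires a docstring and type annotations (import path as published by crewai):

from crewai.tools import BaseTool, tool

@tool("Word Counter")
def word_counter(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

assert isinstance(word_counter, BaseTool)

# Re-wrapping an existing BaseTool now returns it unchanged:
assert tool("Word Counter")(word_counter) is word_counter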

View File

@@ -337,23 +337,11 @@ class ToolUsage:
return "\n--\n".join(descriptions)
def _function_calling(self, tool_string: str):
supports_function_calling = (
self.function_calling_llm.supports_function_calling()
model = (
InstructorToolCalling
if self.function_calling_llm.supports_function_calling()
else ToolCalling
)
if not supports_function_calling:
import warnings
warnings.warn(
"The model you're using doesn't natively support function calling. "
"CrewAI will attempt to use a workaround, but this may be less reliable. "
"Consider using a model with native function calling support for better results.",
UserWarning,
stacklevel=2,
)
model = InstructorToolCalling if supports_function_calling else ToolCalling
converter = Converter(
text=f"Only tools available:\n###\n{self._render()}\n\nReturn a valid schema for the tool, the tool name must be exactly equal one of the options, use this text to inform the valid output schema:\n\n### TEXT \n{tool_string}",
llm=self.function_calling_llm,

View File

@@ -67,13 +67,16 @@ class CrewAIEventsBus:
source: The object emitting the event
event: The event instance to emit
"""
for event_type, handlers in self._handlers.items():
if isinstance(event, event_type):
for handler in handlers:
handler(source, event)
event_type = type(event)
if event_type in self._handlers:
for handler in self._handlers[event_type]:
handler(source, event)
self._signal.send(source, event=event)
def clear_handlers(self) -> None:
"""Clear all registered event handlers - useful for testing"""
self._handlers.clear()
def register_handler(
self, event_type: Type[EventTypes], handler: Callable[[Any, EventTypes], None]
) -> None:
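This `emit` rewrite is a behavior change, not just a cleanup: handlers are now looked up by the event's exact type, so a handler registered on a base class no longer fires for subclass events — which is why the wildcard-handler test file is deleted near the end of this diff. A stand-in sketch of the difference:

class CrewEvent: ...
class TestEvent(CrewEvent): ...

handlers = {CrewEvent: [lambda source, event: print("base handler fired")]}
event = TestEvent()

# Old dispatch: isinstance check — the CrewEvent handler fires for TestEvent.
for event_type, registered in handlers.items():
    if isinstance(event, event_type):
        for handler in registered:
            handler("source", event)

# New dispatch: exact-type lookup — nothing fires for TestEvent.
for handler in handlers.get(type(event), []):
    handler("source", event)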

View File

@@ -1,6 +1,6 @@
from typing import Any, Dict, Optional, Union
from pydantic import BaseModel, ConfigDict
from pydantic import BaseModel
from .base_events import CrewEvent
@@ -52,11 +52,9 @@ class MethodExecutionFailedEvent(FlowEvent):
flow_name: str
method_name: str
error: Exception
error: Any
type: str = "method_execution_failed"
model_config = ConfigDict(arbitrary_types_allowed=True)
class FlowFinishedEvent(FlowEvent):
"""Event emitted when a flow completes execution"""

View File

@@ -96,10 +96,6 @@ class CrewPlanner:
tasks_summary = []
for idx, task in enumerate(self.tasks):
knowledge_list = self._get_agent_knowledge(task)
agent_tools = (
f"[{', '.join(str(tool) for tool in task.agent.tools)}]" if task.agent and task.agent.tools else '"agent has no tools"',
f',\n "agent_knowledge": "[\\"{knowledge_list[0]}\\"]"' if knowledge_list and str(knowledge_list) != "None" else ""
)
task_summary = f"""
Task Number {idx + 1} - {task.description}
"task_description": {task.description}
@@ -107,7 +103,10 @@ class CrewPlanner:
"agent": {task.agent.role if task.agent else "None"}
"agent_goal": {task.agent.goal if task.agent else "None"}
"task_tools": {task.tools}
"agent_tools": {"".join(agent_tools)}"""
"agent_tools": %s%s""" % (
f"[{', '.join(str(tool) for tool in task.agent.tools)}]" if task.agent and task.agent.tools else '"agent has no tools"',
f',\n "agent_knowledge": "[\\"{knowledge_list[0]}\\"]"' if knowledge_list and str(knowledge_list) != "None" else ""
)
tasks_summary.append(task_summary)
return " ".join(tasks_summary)

View File

@@ -1797,3 +1797,136 @@ def test_litellm_anthropic_error_handling():
# Verify the LLM call was only made once (no retries)
mock_llm_call.assert_called_once()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_delegation_to_specific_agents():
"""Test that an agent can delegate to specific agents using the delegate_to property."""
# Create agents in order so we can reference them in delegate_to
agent2 = Agent(
role="Agent 2",
goal="Goal for Agent 2",
backstory="Backstory for Agent 2",
allow_delegation=True,
)
agent3 = Agent(
role="Agent 3",
goal="Goal for Agent 3",
backstory="Backstory for Agent 3",
allow_delegation=True,
)
# Create agent1 without specific delegation first to test default behavior
agent1 = Agent(
role="Agent 1",
goal="Goal for Agent 1",
backstory="Backstory for Agent 1",
allow_delegation=True,
)
# Test default behavior (delegate to all agents)
all_agents = [agent1, agent2, agent3]
delegation_tools = agent1.get_delegation_tools(all_agents)
# Verify that tools for all agents are returned
assert len(delegation_tools) == 2 # Delegate and Ask tools
# Check that the tools can delegate to all agents
delegate_tool = delegation_tools[0]
ask_tool = delegation_tools[1]
# Verify the tools description includes all agents
assert "Agent 1" in delegate_tool.description
assert "Agent 2" in delegate_tool.description
assert "Agent 3" in delegate_tool.description
assert "Agent 1" in ask_tool.description
assert "Agent 2" in ask_tool.description
assert "Agent 3" in ask_tool.description
# Test delegation to specific agents by creating a new agent with delegate_to
agent1_with_specific_delegation = Agent(
role="Agent 1",
goal="Goal for Agent 1",
backstory="Backstory for Agent 1",
allow_delegation=True,
delegate_to=[agent2], # Only delegate to agent2
)
specific_delegation_tools = agent1_with_specific_delegation.get_delegation_tools(
all_agents
)
# Verify that tools for only the specified agent are returned
assert len(specific_delegation_tools) == 2 # Delegate and Ask tools
# Check that the tools can only delegate to agent2
specific_delegate_tool = specific_delegation_tools[0]
specific_ask_tool = specific_delegation_tools[1]
# Verify the tools description includes only agent2
assert "Agent 2" in specific_delegate_tool.description
assert "Agent 1" not in specific_delegate_tool.description
assert "Agent 3" not in specific_delegate_tool.description
assert "Agent 2" in specific_ask_tool.description
assert "Agent 1" not in specific_ask_tool.description
assert "Agent 3" not in specific_ask_tool.description
def test_agent_copy_with_delegate_to():
"""Test that the delegate_to attribute is properly copied when copying an agent."""
# Create a few agents for delegation
agent1 = Agent(
role="Researcher",
goal="Research topics",
backstory="Experienced researcher",
)
agent2 = Agent(
role="Writer",
goal="Write content",
backstory="Professional writer",
)
agent3 = Agent(
role="Manager",
goal="Manage the team",
backstory="Expert manager",
allow_delegation=True,
delegate_to=[agent1, agent2], # This manager can delegate to agent1 and agent2
)
# Make a copy of the manager agent
copied_agent3 = agent3.copy()
# Verify the copied agent has the same delegation settings
assert copied_agent3.allow_delegation == agent3.allow_delegation
assert (
copied_agent3.delegate_to is not agent3.delegate_to
) # Should be different objects
assert copied_agent3.delegate_to is not None
assert agent3.delegate_to is not None
assert len(copied_agent3.delegate_to) == len(agent3.delegate_to)
assert all(a in copied_agent3.delegate_to for a in agent3.delegate_to)
# Modify the original agent's delegate_to list
assert agent3.delegate_to is not None
agent3.delegate_to.pop()
# Verify the copied agent's delegate_to list is not affected
assert copied_agent3.delegate_to is not None
assert agent3.delegate_to is not None
assert len(copied_agent3.delegate_to) == 2
assert len(agent3.delegate_to) == 1
# Test copying an agent with delegate_to=None
agent4 = Agent(
role="Solo Worker",
goal="Work independently",
backstory="Independent worker",
allow_delegation=False,
delegate_to=None,
)
copied_agent4 = agent4.copy()
assert copied_agent4.delegate_to == agent4.delegate_to
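Taken together, the two tests above pin down the public surface of `delegate_to`: a list restricts delegation to exactly those agents, `None` (the default) preserves the old delegate-to-everyone behavior, and copies are deep enough that mutating the original list leaves the copy untouched. In short:

from crewai import Agent

editor = Agent(role="Editor", goal="Edit content", backstory="Expert editor",
               allow_delegation=True)
writer = Agent(role="Writer", goal="Write content", backstory="Expert writer",
               allow_delegation=True,
               delegate_to=[editor])  # may only delegate to the editor

# One delegate-work tool plus one ask-question tool, scoped to delegate_to.
tools = writer.get_delegation_tools([writer, editor])
assert len(tools) == 2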

View File

@@ -724,13 +724,14 @@ def test_task_tools_override_agent_tools():
crew.kickoff()
# Verify task tools override agent tools
assert task.tools is not None
assert len(task.tools) == 1 # AnotherTestTool
assert any(isinstance(tool, AnotherTestTool) for tool in task.tools)
assert not any(isinstance(tool, TestTool) for tool in task.tools)
# Verify agent tools remain unchanged
assert new_researcher.tools is not None
assert len(new_researcher.tools) == 1
assert isinstance(new_researcher.tools[0], TestTool)
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -868,11 +869,17 @@ def test_crew_verbose_output(capsys):
event_listener.formatter.verbose = False
crew.kickoff()
captured = capsys.readouterr()
# Filter out event listener logs, escape codes, and now also 'tools:' lines
filtered_output = "\n".join(
line
for line in captured.out.split("\n")
if not line.startswith("[") and line.strip() and not line.startswith("\x1b")
if not line.startswith("[")
and line.strip()
and not line.startswith("\x1b")
and not "tools:" in line.lower() # Exclude 'tools:' lines
)
assert filtered_output == ""
@@ -1599,6 +1606,8 @@ def test_crew_function_calling_llm():
crew = Crew(agents=[agent1], tasks=[essay])
result = crew.kickoff()
assert result.raw == "Howdy!"
assert agent1.tools is not None
assert len(agent1.tools) == 1
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -4025,3 +4034,442 @@ def test_crew_with_knowledge_sources_works_with_copy():
assert len(crew_copy.tasks) == len(crew.tasks)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_with_specific_delegation():
"""Test that agents in a crew can delegate to specific agents using the delegate_to property."""
# Create editor agent first since it will be referenced in writer's delegate_to
editor = Agent(
role="Editor",
goal="Edit content",
backstory="You're an expert editor",
allow_delegation=True,
)
# Create writer with delegate_to set during initialization
writer = Agent(
role="Writer",
goal="Write content",
backstory="You're an expert writer",
allow_delegation=True,
delegate_to=[editor], # Writer can only delegate to Editor
)
# Create researcher with delegate_to set during initialization
researcher = Agent(
role="Researcher",
goal="Research information",
backstory="You're an expert researcher",
allow_delegation=True,
delegate_to=[writer], # Researcher can only delegate to Writer
)
# Create tasks
task1 = Task(
description="Research a topic",
expected_output="Research results",
agent=researcher,
)
task2 = Task(
description="Write an article",
expected_output="Written article",
agent=writer,
)
# Create crew
crew = Crew(
agents=[researcher, writer, editor],
tasks=[task1, task2],
)
# Test that the _add_delegation_tools method respects the delegate_to property
tools = []
tools_with_delegation = crew._add_delegation_tools(task1, tools)
# Verify that delegation tools are added
assert len(tools_with_delegation) > 0
# Find the delegation tool
delegate_tool = None
for tool in tools_with_delegation:
if "Delegate" in tool.name:
delegate_tool = tool
break
assert delegate_tool is not None
# Verify that the delegation tool only includes the writer
assert "Writer" in delegate_tool.description
assert "Editor" not in delegate_tool.description
assert "Researcher" not in delegate_tool.description
# Test delegation for the writer
tools = []
tools_with_delegation = crew._add_delegation_tools(task2, tools)
# Find the delegation tool
delegate_tool = None
for tool in tools_with_delegation:
if "Delegate" in tool.name:
delegate_tool = tool
break
assert delegate_tool is not None
# Verify that the delegation tool only includes the editor
assert "Editor" in delegate_tool.description
assert "Writer" not in delegate_tool.description
assert "Researcher" not in delegate_tool.description
@pytest.mark.vcr(filter_headers=["authorization"])
def test_manager_agent_with_tools_and_delegation():
"""Test that a manager agent can have tools and still delegate to all agents."""
from crewai.tools.base_tool import BaseTool
# Create a simple tool for the manager
class SimpleTestTool(BaseTool):
name: str = "Simple Test Tool"
description: str = "A simple test tool"
def _run(self) -> str:
return "Tool executed"
# Create agents
researcher = Agent(
role="Researcher",
goal="Research information",
backstory="You're an expert researcher",
)
writer = Agent(
role="Writer",
goal="Write content",
backstory="You're an expert writer",
)
# Create a manager agent with tools
manager = Agent(
role="Manager",
goal="Manage the team",
backstory="You're an expert manager",
tools=[SimpleTestTool()],
allow_delegation=True,
)
# Create a crew with the manager agent
crew = Crew(
agents=[researcher, writer],
manager_agent=manager,
process=Process.hierarchical,
)
# Explicitly call _create_manager_agent to set up delegation
crew._create_manager_agent()
# Verify that the manager agent has tools
assert manager.tools is not None
assert len(manager.tools) == 1
assert manager.tools[0].name == "Simple Test Tool"
# Verify that the manager agent can delegate to all agents
assert manager.allow_delegation is True
assert manager.delegate_to == crew.agents
# Create a task
task = Task(
description="Complete a project",
expected_output="Project completed",
)
# Create a crew with the task
crew = Crew(
agents=[researcher, writer],
manager_agent=manager,
tasks=[task],
process=Process.hierarchical,
)
# Mock the execute_task method to avoid actual execution
with patch.object(Agent, "execute_task", return_value="Task executed"):
# Run the crew
result = crew.kickoff()
# Verify that the result is as expected
assert result.raw == "Task executed"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_with_default_delegation():
"""Test that an agent with allow_delegation=True but without delegate_to specified can delegate to all agents in the crew."""
# Create agents
researcher = Agent(
role="Researcher",
goal="Research information",
backstory="You're an expert researcher",
allow_delegation=True, # Allow delegation but don't specify delegate_to
)
writer = Agent(
role="Writer",
goal="Write content",
backstory="You're an expert writer",
allow_delegation=True, # Allow delegation but don't specify delegate_to
)
editor = Agent(
role="Editor",
goal="Edit content",
backstory="You're an expert editor",
allow_delegation=True, # Allow delegation but don't specify delegate_to
)
# Create tasks
task1 = Task(
description="Research a topic",
expected_output="Research results",
agent=researcher,
)
task2 = Task(
description="Write content based on research",
expected_output="Written content",
agent=writer,
)
task3 = Task(
description="Edit the content",
expected_output="Edited content",
agent=editor,
)
# Create crew
crew = Crew(
agents=[researcher, writer, editor],
tasks=[task1, task2, task3],
)
# Verify that all agents have allow_delegation=True
for agent in crew.agents:
assert agent.allow_delegation is True
# Verify that delegate_to is None (default delegation to all)
assert agent.delegate_to is None
# Get delegation tools for researcher
delegation_tools = researcher.get_delegation_tools(crew.agents)
# Verify that tools for all agents are returned
assert len(delegation_tools) == 2 # Delegate and Ask tools
# Check that the tools can delegate to all agents
delegate_tool = delegation_tools[0]
ask_tool = delegation_tools[1]
# Verify the tools description includes all agents
assert "Researcher" in delegate_tool.description
assert "Writer" in delegate_tool.description
assert "Editor" in delegate_tool.description
assert "Researcher" in ask_tool.description
assert "Writer" in ask_tool.description
assert "Editor" in ask_tool.description
@pytest.mark.vcr(filter_headers=["authorization"])
def test_update_manager_tools_functionality():
"""Test that _update_manager_tools correctly adds delegation tools to the manager agent."""
# Create agents
researcher = Agent(
role="Researcher",
goal="Research information",
backstory="You're an expert researcher",
)
writer = Agent(
role="Writer",
goal="Write content",
backstory="You're an expert writer",
)
# Create a manager agent
manager = Agent(
role="Manager",
goal="Manage the team",
backstory="You're an expert manager",
allow_delegation=True,
)
# Create a crew with the manager agent
crew = Crew(
agents=[researcher, writer],
manager_agent=manager,
process=Process.hierarchical,
)
# Ensure the manager agent is set up
crew._create_manager_agent()
# Case 1: Task with an assigned agent
task_with_agent = Task(
description="Research a topic",
expected_output="Research results",
agent=researcher,
)
# Create an initial set of tools
from crewai.tools.base_tool import BaseTool
class TestTool(BaseTool):
name: str = "Test Tool"
description: str = "A test tool"
def _run(self) -> str:
return "Tool executed"
initial_tools = [TestTool()]
# Test _update_manager_tools with a task that has an agent
updated_tools = crew._update_manager_tools(task_with_agent, initial_tools)
# Verify that delegation tools for the task's agent were added
assert len(updated_tools) > len(initial_tools)
assert any(
f"Delegate a specific task to one of the following coworkers: {researcher.role}"
in tool.description
for tool in updated_tools
)
assert any(
f"Ask a specific question to one of the following coworkers: {researcher.role}"
in tool.description
for tool in updated_tools
)
# Case 2: Task without an assigned agent
task_without_agent = Task(
description="General task",
expected_output="Task completed",
)
# Test _update_manager_tools with a task that doesn't have an agent
updated_tools = crew._update_manager_tools(task_without_agent, initial_tools)
# Verify that delegation tools for all agents were added
assert len(updated_tools) > len(initial_tools)
assert any(
f"Delegate a specific task to one of the following coworkers: {researcher.role}, {writer.role}"
in tool.description
for tool in updated_tools
)
assert any(
f"Ask a specific question to one of the following coworkers: {researcher.role}, {writer.role}"
in tool.description
for tool in updated_tools
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_manager_tools_during_task_execution():
"""Test that manager tools are correctly added during task execution in a hierarchical process."""
# Create agents
researcher = Agent(
role="Researcher",
goal="Research information",
backstory="You're an expert researcher",
)
writer = Agent(
role="Writer",
goal="Write content",
backstory="You're an expert writer",
)
# Create tasks
task_with_agent = Task(
description="Research a topic",
expected_output="Research results",
agent=researcher,
)
task_without_agent = Task(
description="General task",
expected_output="Task completed",
)
# Create a crew with hierarchical process
crew_with_agent_task = Crew(
agents=[researcher, writer],
tasks=[task_with_agent],
process=Process.hierarchical,
manager_llm="gpt-4o",
)
crew_without_agent_task = Crew(
agents=[researcher, writer],
tasks=[task_without_agent],
process=Process.hierarchical,
manager_llm="gpt-4o",
)
# Mock task execution to capture the tools
mock_task_output = TaskOutput(
description="Mock description", raw="mocked output", agent="mocked agent"
)
# Test case 1: Task with an assigned agent
with patch.object(
Task, "execute_sync", return_value=mock_task_output
) as mock_execute_sync:
# Set the output attribute to avoid None errors
task_with_agent.output = mock_task_output
# Execute the crew
crew_with_agent_task.kickoff()
# Verify execute_sync was called
mock_execute_sync.assert_called_once()
# Get the tools argument from the call
_, kwargs = mock_execute_sync.call_args
tools = kwargs["tools"]
# Verify that delegation tools for the task's agent were added
assert any(
f"Delegate a specific task to one of the following coworkers: {researcher.role}"
in tool.description
for tool in tools
)
assert any(
f"Ask a specific question to one of the following coworkers: {researcher.role}"
in tool.description
for tool in tools
)
# Test case 2: Task without an assigned agent
with patch.object(
Task, "execute_sync", return_value=mock_task_output
) as mock_execute_sync:
# Set the output attribute to avoid None errors
task_without_agent.output = mock_task_output
# Execute the crew
crew_without_agent_task.kickoff()
# Verify execute_sync was called
mock_execute_sync.assert_called_once()
# Get the tools argument from the call
_, kwargs = mock_execute_sync.call_args
tools = kwargs["tools"]
# Verify that delegation tools for all agents were added
assert any(
f"Delegate a specific task to one of the following coworkers: {researcher.role}, {writer.role}"
in tool.description
for tool in tools
)
assert any(
f"Ask a specific question to one of the following coworkers: {researcher.role}, {writer.role}"
in tool.description
for tool in tools
)

View File

@@ -3,8 +3,6 @@
import hashlib
import json
import os
from functools import partial
from typing import Tuple, Union
from unittest.mock import MagicMock, patch
import pytest
@@ -217,75 +215,6 @@ def test_multiple_output_type_error():
)
def test_guardrail_type_error():
desc = "Give me a list of 5 interesting ideas to explore for na article, what makes them unique and interesting."
expected_output = "Bullet point list of 5 interesting ideas."
# Lambda function
Task(
description=desc,
expected_output=expected_output,
guardrail=lambda x: (True, x),
)
# Function
def guardrail_fn(x: TaskOutput) -> tuple[bool, TaskOutput]:
return (True, x)
Task(
description=desc,
expected_output=expected_output,
guardrail=guardrail_fn,
)
class Object:
def guardrail_fn(self, x: TaskOutput) -> tuple[bool, TaskOutput]:
return (True, x)
@classmethod
def guardrail_class_fn(cls, x: TaskOutput) -> tuple[bool, str]:
return (True, x)
@staticmethod
def guardrail_static_fn(x: TaskOutput) -> tuple[bool, Union[str, TaskOutput]]:
return (True, x)
obj = Object()
# Method
Task(
description=desc,
expected_output=expected_output,
guardrail=obj.guardrail_fn,
)
# Class method
Task(
description=desc,
expected_output=expected_output,
guardrail=Object.guardrail_class_fn,
)
# Static method
Task(
description=desc,
expected_output=expected_output,
guardrail=Object.guardrail_static_fn,
)
def error_fn(x: TaskOutput, y: bool) -> Tuple[bool, TaskOutput]:
return (y, x)
Task(
description=desc,
expected_output=expected_output,
guardrail=partial(error_fn, y=True),
)
with pytest.raises(ValidationError):
Task(
description=desc,
expected_output=expected_output,
guardrail=error_fn,
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_output_pydantic_sequential():
class ScoreOutput(BaseModel):

View File

@@ -1,34 +0,0 @@
from unittest.mock import Mock
from crewai.utilities.events.base_events import CrewEvent
from crewai.utilities.events.crewai_event_bus import crewai_event_bus
class TestEvent(CrewEvent):
pass
def test_specific_event_handler():
mock_handler = Mock()
@crewai_event_bus.on(TestEvent)
def handler(source, event):
mock_handler(source, event)
event = TestEvent(type="test_event")
crewai_event_bus.emit("source_object", event)
mock_handler.assert_called_once_with("source_object", event)
def test_wildcard_event_handler():
mock_handler = Mock()
@crewai_event_bus.on(CrewEvent)
def handler(source, event):
mock_handler(source, event)
event = TestEvent(type="test_event")
crewai_event_bus.emit("source_object", event)
mock_handler.assert_called_once_with("source_object", event)

uv.lock (generated)
View File

@@ -619,7 +619,7 @@ wheels = [
[[package]]
name = "crewai"
version = "0.108.0"
version = "0.105.0"
source = { editable = "." }
dependencies = [
{ name = "appdirs" },