Compare commits


1 Commit

Author: Brandon Hancock
SHA1: a6cd115b25
Message: Update pyproject.toml and uv.lock to drop crewai-tools as a default requirement in crewai repo
Date: 2024-12-05 12:59:17 -05:00
173 changed files with 46594 additions and 14459 deletions

View File

@@ -65,6 +65,7 @@ body:
         - '3.10'
         - '3.11'
         - '3.12'
+        - '3.13'
     validations:
       required: true
   - type: input
@@ -112,4 +113,4 @@ body:
       label: Additional context
       description: Add any other context about the problem here.
     validations:
       required: true

View File

@@ -13,4 +13,4 @@ jobs:
       pip install ruff
     - name: Run Ruff Linter
-      run: ruff check
+      run: ruff check --exclude "templates","__init__.py"

View File

@@ -1,10 +1,5 @@
name: Mark stale issues and pull requests name: Mark stale issues and pull requests
permissions:
contents: write
issues: write
pull-requests: write
on: on:
schedule: schedule:
- cron: '10 12 * * *' - cron: '10 12 * * *'
@@ -13,6 +8,9 @@ on:
jobs: jobs:
stale: stale:
runs-on: ubuntu-latest runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps: steps:
- uses: actions/stale@v9 - uses: actions/stale@v9
with: with:

View File

@@ -1,60 +1,32 @@
 name: Run Tests
-on:
-  pull_request:
-  push:
-    branches:
-      - main
+on: [pull_request]
 permissions:
   contents: write
-env:
-  OPENAI_API_KEY: fake-api-key
 jobs:
   tests:
     runs-on: ubuntu-latest
     timeout-minutes: 15
+    env:
+      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+      MODEL: gpt-4o-mini
     steps:
       - name: Checkout code
         uses: actions/checkout@v4
-      - name: Install UV
+      - name: Install uv
         uses: astral-sh/setup-uv@v3
         with:
          enable-cache: true
       - name: Set up Python
-        run: uv python install 3.12.8
+        run: uv python install 3.11.9
       - name: Install the project
         run: uv sync --dev --all-extras
-      - name: Run General Tests
-        run: uv run pytest tests -k "not main_branch_tests" -vv
+      - name: Run tests
+        run: uv run pytest tests -vv
-  main_branch_tests:
-    if: github.ref == 'refs/heads/main'
-    runs-on: ubuntu-latest
-    needs: tests
-    timeout-minutes: 15
-    env:
-      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
-    steps:
-      - name: Checkout code
-        uses: actions/checkout@v4
-      - name: Install UV
-        uses: astral-sh/setup-uv@v3
-        with:
-          enable-cache: true
-      - name: Set up Python
-        run: uv python install 3.12.8
-      - name: Install the project
-        run: uv sync --dev --all-extras
-      - name: Run Main Branch Specific Tests
-        run: uv run pytest tests/main_branch_tests -vv
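A note on the hunk above: the removed `-k "not main_branch_tests"` flag deselects tests by substring match on their node IDs. A hedged sketch of the marker-based alternative in a hypothetical `conftest.py`; the `main_branch` marker name is an assumption, nothing in this diff defines it:

```python
# Hypothetical conftest.py sketch: skip main-branch-only tests via a marker
# instead of deselecting them by name with -k. Assumes tests are tagged
# with @pytest.mark.main_branch, which this repo's diff does not show.
import pytest

def pytest_collection_modifyitems(config, items):
    if config.getoption("-m"):
        return  # an explicit -m selection takes precedence
    skip_main = pytest.mark.skip(reason="runs only in the main-branch job")
    for item in items:
        if "main_branch" in item.keywords:
            item.add_marker(skip_main)
```

With such a setup, the general job would run `uv run pytest tests` and the main-branch job `uv run pytest -m main_branch`.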

.gitignore (vendored, 1 change)
View File

@@ -21,4 +21,3 @@ crew_tasks_output.json
 .mypy_cache
 .ruff_cache
 .venv
-agentops.log

View File

@@ -1,7 +1,9 @@
repos: repos:
- repo: https://github.com/astral-sh/ruff-pre-commit - repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.8.2 rev: v0.4.4
hooks: hooks:
- id: ruff - id: ruff
args: ["--fix"] args: ["--fix"]
exclude: "templates"
- id: ruff-format - id: ruff-format
exclude: "templates"

View File

@@ -1,9 +0,0 @@
exclude = [
"templates",
"__init__.py",
]
[lint]
select = [
"I", # isort rules
]

README.md (177 changes)
View File

@@ -4,7 +4,7 @@
 # **CrewAI**
-🤖 **CrewAI**: Production-grade framework for orchestrating sophisticated AI agent systems. From simple automations to complex real-world applications, CrewAI provides precise control and deep customization. By fostering collaborative intelligence through flexible, production-ready architecture, CrewAI empowers agents to work together seamlessly, tackling complex business challenges with predictable, consistent results.
+🤖 **CrewAI**: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
 <h3>
@@ -22,17 +22,13 @@
 - [Why CrewAI?](#why-crewai)
 - [Getting Started](#getting-started)
 - [Key Features](#key-features)
-- [Understanding Flows and Crews](#understanding-flows-and-crews)
-- [CrewAI vs LangGraph](#how-crewai-compares)
 - [Examples](#examples)
   - [Quick Tutorial](#quick-tutorial)
   - [Write Job Descriptions](#write-job-descriptions)
   - [Trip Planner](#trip-planner)
   - [Stock Analysis](#stock-analysis)
-  - [Using Crews and Flows Together](#using-crews-and-flows-together)
 - [Connecting Your Crew to a Model](#connecting-your-crew-to-a-model)
 - [How CrewAI Compares](#how-crewai-compares)
-- [Frequently Asked Questions (FAQ)](#frequently-asked-questions-faq)
 - [Contribution](#contribution)
 - [Telemetry](#telemetry)
 - [License](#license)
@@ -40,51 +36,22 @@
 ## Why CrewAI?
 The power of AI collaboration has too much to offer.
-CrewAI is a standalone framework, built from the ground up without dependencies on Langchain or other agent frameworks. It's designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
+CrewAI is designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
 ## Getting Started
-### Learning Resources
-Learn CrewAI through our comprehensive courses:
-- [Multi AI Agent Systems with CrewAI](https://www.deeplearning.ai/short-courses/multi-ai-agent-systems-with-crewai/) - Master the fundamentals of multi-agent systems
-- [Practical Multi AI Agents and Advanced Use Cases](https://www.deeplearning.ai/short-courses/practical-multi-ai-agents-and-advanced-use-cases-with-crewai/) - Deep dive into advanced implementations
-### Understanding Flows and Crews
-CrewAI offers two powerful, complementary approaches that work seamlessly together to build sophisticated AI applications:
-1. **Crews**: Teams of AI agents with true autonomy and agency, working together to accomplish complex tasks through role-based collaboration. Crews enable:
-   - Natural, autonomous decision-making between agents
-   - Dynamic task delegation and collaboration
-   - Specialized roles with defined goals and expertise
-   - Flexible problem-solving approaches
-2. **Flows**: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide:
-   - Fine-grained control over execution paths for real-world scenarios
-   - Secure, consistent state management between tasks
-   - Clean integration of AI agents with production Python code
-   - Conditional branching for complex business logic
-The true power of CrewAI emerges when combining Crews and Flows. This synergy allows you to:
-- Build complex, production-grade applications
-- Balance autonomy with precise control
-- Handle sophisticated real-world scenarios
-- Maintain clean, maintainable code structure
-### Getting Started with Installation
 To get started with CrewAI, follow these simple steps:
 ### 1. Installation
-Ensure you have Python >=3.10 <3.13 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
+Ensure you have Python >=3.10 <=3.13 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
 First, install CrewAI:
 ```shell
 pip install crewai
 ```
 If you want to install the 'crewai' package along with its optional features that include additional tools for agents, you can do so by using the following command:
 ```shell
@@ -92,22 +59,6 @@ pip install 'crewai[tools]'
 ```
 The command above installs the basic package and also adds extra components which require more dependencies to function.
-### Troubleshooting Dependencies
-If you encounter issues during installation or usage, here are some common solutions:
-#### Common Issues
-1. **ModuleNotFoundError: No module named 'tiktoken'**
-   - Install tiktoken explicitly: `pip install 'crewai[embeddings]'`
-   - If using embedchain or other tools: `pip install 'crewai[tools]'`
-2. **Failed building wheel for tiktoken**
-   - Ensure Rust compiler is installed (see installation steps above)
-   - For Windows: Verify Visual C++ Build Tools are installed
-   - Try upgrading pip: `pip install --upgrade pip`
-   - If issues persist, use a pre-built wheel: `pip install tiktoken --prefer-binary`
 ### 2. Setting Up Your Crew with the YAML Configuration
 To create a new CrewAI project, run the following CLI (Command Line Interface) command:
@@ -313,16 +264,13 @@ In addition to the sequential process, you can use the hierarchical process, whi
 ## Key Features
-**Note**: CrewAI is a standalone framework built from the ground up, without dependencies on Langchain or other agent frameworks.
-- **Deep Customization**: Build sophisticated agents with full control over the system - from overriding inner prompts to accessing low-level APIs. Customize roles, goals, tools, and behaviors while maintaining clean abstractions.
-- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enabling complex problem-solving in real-world scenarios.
-- **Flexible Task Management**: Define and customize tasks with granular control, from simple operations to complex multi-step processes.
-- **Production-Grade Architecture**: Support for both high-level abstractions and low-level customization, with robust error handling and state management.
-- **Predictable Results**: Ensure consistent, accurate outputs through programmatic guardrails, agent training capabilities, and flow-based execution control. See our [documentation on guardrails](https://docs.crewai.com/how-to/guardrails/) for implementation details.
-- **Model Flexibility**: Run your crew using OpenAI or open source models with production-ready integrations. See [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) for detailed configuration options.
-- **Event-Driven Flows**: Build complex, real-world workflows with precise control over execution paths, state management, and conditional logic.
-- **Process Orchestration**: Achieve any workflow pattern through flows - from simple sequential and hierarchical processes to complex, custom orchestration patterns with conditional branching and parallel execution.
+- **Role-Based Agent Design**: Customize agents with specific roles, goals, and tools.
+- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enhancing problem-solving efficiency.
+- **Flexible Task Management**: Define tasks with customizable tools and assign them to agents dynamically.
+- **Processes Driven**: Currently only supports `sequential` task execution and `hierarchical` processes, but more complex processes like consensual and autonomous are being worked on.
+- **Save output as file**: Save the output of individual tasks as a file, so you can use it later.
+- **Parse output as Pydantic or Json**: Parse the output of individual tasks as a Pydantic model or as a Json if you want to.
+- **Works with Open Source Models**: Run your crew using Open AI or open source models refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models, even ones running locally!
 ![CrewAI Mind Map](./docs/crewAI-mindmap.png "CrewAI Mind Map")
@@ -357,98 +305,6 @@ You can test different real life examples of AI crews in the [CrewAI-examples re
 [![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/maxresdefault.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")
-### Using Crews and Flows Together
-CrewAI's power truly shines when combining Crews with Flows to create sophisticated automation pipelines. Here's how you can orchestrate multiple Crews within a Flow:
-```python
-from crewai.flow.flow import Flow, listen, start, router
-from crewai import Crew, Agent, Task
-from pydantic import BaseModel
-# Define structured state for precise control
-class MarketState(BaseModel):
-    sentiment: str = "neutral"
-    confidence: float = 0.0
-    recommendations: list = []
-class AdvancedAnalysisFlow(Flow[MarketState]):
-    @start()
-    def fetch_market_data(self):
-        # Demonstrate low-level control with structured state
-        self.state.sentiment = "analyzing"
-        return {"sector": "tech", "timeframe": "1W"}  # These parameters match the task description template
-    @listen(fetch_market_data)
-    def analyze_with_crew(self, market_data):
-        # Show crew agency through specialized roles
-        analyst = Agent(
-            role="Senior Market Analyst",
-            goal="Conduct deep market analysis with expert insight",
-            backstory="You're a veteran analyst known for identifying subtle market patterns"
-        )
-        researcher = Agent(
-            role="Data Researcher",
-            goal="Gather and validate supporting market data",
-            backstory="You excel at finding and correlating multiple data sources"
-        )
-        analysis_task = Task(
-            description="Analyze {sector} sector data for the past {timeframe}",
-            expected_output="Detailed market analysis with confidence score",
-            agent=analyst
-        )
-        research_task = Task(
-            description="Find supporting data to validate the analysis",
-            expected_output="Corroborating evidence and potential contradictions",
-            agent=researcher
-        )
-        # Demonstrate crew autonomy
-        analysis_crew = Crew(
-            agents=[analyst, researcher],
-            tasks=[analysis_task, research_task],
-            process=Process.sequential,
-            verbose=True
-        )
-        return analysis_crew.kickoff(inputs=market_data)  # Pass market_data as named inputs
-    @router(analyze_with_crew)
-    def determine_next_steps(self):
-        # Show flow control with conditional routing
-        if self.state.confidence > 0.8:
-            return "high_confidence"
-        elif self.state.confidence > 0.5:
-            return "medium_confidence"
-        return "low_confidence"
-    @listen("high_confidence")
-    def execute_strategy(self):
-        # Demonstrate complex decision making
-        strategy_crew = Crew(
-            agents=[
-                Agent(role="Strategy Expert",
-                      goal="Develop optimal market strategy")
-            ],
-            tasks=[
-                Task(description="Create detailed strategy based on analysis",
-                     expected_output="Step-by-step action plan")
-            ]
-        )
-        return strategy_crew.kickoff()
-    @listen("medium_confidence", "low_confidence")
-    def request_additional_analysis(self):
-        self.state.recommendations.append("Gather more data")
-        return "Additional analysis required"
-```
-This example demonstrates how to:
-1. Use Python code for basic data operations
-2. Create and execute Crews as steps in your workflow
-3. Use Flow decorators to manage the sequence of operations
-4. Implement conditional branching based on Crew results
 ## Connecting Your Crew to a Model
 CrewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
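A note on the removed Flow example above: it calls `Process.sequential`, but its import block only brings in `Crew, Agent, Task` from `crewai`. If that snippet is ever restored, the `crewai` import likely needs `Process` as well; a corrected import block, assuming no other changes to the example:

```python
# Corrected imports for the removed Flow example: the snippet uses
# Process.sequential but never imports Process from crewai.
from crewai.flow.flow import Flow, listen, start, router
from crewai import Crew, Agent, Task, Process
from pydantic import BaseModel
```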
@@ -457,13 +313,9 @@ Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-
 ## How CrewAI Compares
-**CrewAI's Advantage**: CrewAI combines autonomous agent intelligence with precise workflow control through its unique Crews and Flows architecture. The framework excels at both high-level orchestration and low-level customization, enabling complex, production-grade systems with granular control.
-- **LangGraph**: While LangGraph provides a foundation for building agent workflows, its approach requires significant boilerplate code and complex state management patterns. The framework's tight coupling with LangChain can limit flexibility when implementing custom agent behaviors or integrating with external systems.
-*P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb)).*
-- **Autogen**: While Autogen excels at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
+**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.
+- **Autogen**: While Autogen does good in creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
 - **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
@@ -588,8 +440,5 @@ A: CrewAI uses anonymous telemetry to collect usage data for improvement purpose
 ### Q: Where can I find examples of CrewAI in action?
 A: You can find various real-life examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.
-### Q: What is the difference between Crews and Flows?
-A: Crews and Flows serve different but complementary purposes in CrewAI. Crews are teams of AI agents working together to accomplish specific tasks through role-based collaboration, delivering accurate and predictable results. Flows, on the other hand, are event-driven workflows that can orchestrate both Crews and regular Python code, allowing you to build complex automation pipelines with secure state management and conditional execution paths.
 ### Q: How can I contribute to CrewAI?
 A: Contributions are welcome! You can fork the repository, create a new branch for your feature, add your improvement, and send a pull request. Check the Contribution section in the README for more details.

View File

@@ -101,8 +101,6 @@ from crewai_tools import SerperDevTool
 class LatestAiDevelopmentCrew():
     """LatestAiDevelopment crew"""
-    agents_config = "config/agents.yaml"
     @agent
     def researcher(self) -> Agent:
         return Agent(

View File

@@ -161,7 +161,6 @@ The CLI will initially prompt for API keys for the following services:
 * Groq
 * Anthropic
 * Google Gemini
-* SambaNova
 When you select a provider, the CLI will prompt you to enter your API key.

View File

@@ -32,6 +32,7 @@ A crew in crewAI represents a collaborative group of agents working together to
 | **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
 | **Output Log File** _(optional)_ | `output_log_file` | Whether you want to have a file with the complete crew output and execution. You can set it using True and it will default to the folder you are currently in and it will be called logs.txt or passing a string with the full path and name of the file. |
 | **Manager Agent** _(optional)_ | `manager_agent` | `manager` sets a custom agent that will be used as a manager. |
+| **Manager Callbacks** _(optional)_ | `manager_callbacks` | `manager_callbacks` takes a list of callback handlers to be executed by the manager agent when a hierarchical process is used. |
 | **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
 | **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
 | **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |
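A note on the `manager_callbacks` row added above: the table only says it takes a list of callback handlers executed by the manager agent under the hierarchical process. A minimal sketch of how such a list might be passed; the handler class and its interface are assumptions, not defined by this diff:

```python
# Hedged sketch of passing manager_callbacks to a hierarchical crew.
# LoggingHandler is hypothetical; the real handler interface expected by
# manager_callbacks is not shown in this diff.
from crewai import Agent, Crew, Process, Task

class LoggingHandler:
    """Hypothetical callback handler invoked by the manager agent."""
    def __call__(self, event):
        print(f"manager event: {event}")

analyst = Agent(
    role="Analyst",
    goal="Summarize findings",
    backstory="A careful analyst.",
)
summary = Task(
    description="Summarize the findings.",
    expected_output="A short summary.",
    agent=analyst,
)
crew = Crew(
    agents=[analyst],
    tasks=[summary],
    process=Process.hierarchical,
    manager_llm="gpt-4o-mini",             # hierarchical runs need a manager LLM
    manager_callbacks=[LoggingHandler()],  # executed by the manager agent
)
```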
@@ -40,155 +41,6 @@ A crew in crewAI represents a collaborative group of agents working together to
 **Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
 </Tip>
-## Creating Crews
-There are two ways to create crews in CrewAI: using **YAML configuration (recommended)** or defining them **directly in code**.
-### YAML Configuration (Recommended)
-Using YAML configuration provides a cleaner, more maintainable way to define crews and is consistent with how agents and tasks are defined in CrewAI projects.
-After creating your CrewAI project as outlined in the [Installation](/installation) section, you can define your crew in a class that inherits from `CrewBase` and uses decorators to define agents, tasks, and the crew itself.
-#### Example Crew Class with Decorators
-```python code
-from crewai import Agent, Crew, Task, Process
-from crewai.project import CrewBase, agent, task, crew, before_kickoff, after_kickoff
-@CrewBase
-class YourCrewName:
-    """Description of your crew"""
-    # Paths to your YAML configuration files
-    # To see an example agent and task defined in YAML, checkout the following:
-    # - Task: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
-    # - Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
-    agents_config = 'config/agents.yaml'
-    tasks_config = 'config/tasks.yaml'
-    @before_kickoff
-    def prepare_inputs(self, inputs):
-        # Modify inputs before the crew starts
-        inputs['additional_data'] = "Some extra information"
-        return inputs
-    @after_kickoff
-    def process_output(self, output):
-        # Modify output after the crew finishes
-        output.raw += "\nProcessed after kickoff."
-        return output
-    @agent
-    def agent_one(self) -> Agent:
-        return Agent(
-            config=self.agents_config['agent_one'],
-            verbose=True
-        )
-    @agent
-    def agent_two(self) -> Agent:
-        return Agent(
-            config=self.agents_config['agent_two'],
-            verbose=True
-        )
-    @task
-    def task_one(self) -> Task:
-        return Task(
-            config=self.tasks_config['task_one']
-        )
-    @task
-    def task_two(self) -> Task:
-        return Task(
-            config=self.tasks_config['task_two']
-        )
-    @crew
-    def crew(self) -> Crew:
-        return Crew(
-            agents=self.agents,  # Automatically collected by the @agent decorator
-            tasks=self.tasks,    # Automatically collected by the @task decorator.
-            process=Process.sequential,
-            verbose=True,
-        )
-```
-<Note>
-Tasks will be executed in the order they are defined.
-</Note>
-The `CrewBase` class, along with these decorators, automates the collection of agents and tasks, reducing the need for manual management.
-#### Decorators overview from `annotations.py`
-CrewAI provides several decorators in the `annotations.py` file that are used to mark methods within your crew class for special handling:
-- `@CrewBase`: Marks the class as a crew base class.
-- `@agent`: Denotes a method that returns an `Agent` object.
-- `@task`: Denotes a method that returns a `Task` object.
-- `@crew`: Denotes the method that returns the `Crew` object.
-- `@before_kickoff`: (Optional) Marks a method to be executed before the crew starts.
-- `@after_kickoff`: (Optional) Marks a method to be executed after the crew finishes.
-These decorators help in organizing your crew's structure and automatically collecting agents and tasks without manually listing them.
-### Direct Code Definition (Alternative)
-Alternatively, you can define the crew directly in code without using YAML configuration files.
-```python code
-from crewai import Agent, Crew, Task, Process
-from crewai_tools import YourCustomTool
-class YourCrewName:
-    def agent_one(self) -> Agent:
-        return Agent(
-            role="Data Analyst",
-            goal="Analyze data trends in the market",
-            backstory="An experienced data analyst with a background in economics",
-            verbose=True,
-            tools=[YourCustomTool()]
-        )
-    def agent_two(self) -> Agent:
-        return Agent(
-            role="Market Researcher",
-            goal="Gather information on market dynamics",
-            backstory="A diligent researcher with a keen eye for detail",
-            verbose=True
-        )
-    def task_one(self) -> Task:
-        return Task(
-            description="Collect recent market data and identify trends.",
-            expected_output="A report summarizing key trends in the market.",
-            agent=self.agent_one()
-        )
-    def task_two(self) -> Task:
-        return Task(
-            description="Research factors affecting market dynamics.",
-            expected_output="An analysis of factors influencing the market.",
-            agent=self.agent_two()
-        )
-    def crew(self) -> Crew:
-        return Crew(
-            agents=[self.agent_one(), self.agent_two()],
-            tasks=[self.task_one(), self.task_two()],
-            process=Process.sequential,
-            verbose=True
-        )
-```
-In this example:
-- Agents and tasks are defined directly within the class without decorators.
-- We manually create and manage the list of agents and tasks.
-- This approach provides more control but can be less maintainable for larger projects.
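A note on the removed `CrewBase` example above: to run such a crew you instantiate the class and kick off the crew it builds. A minimal usage sketch, assuming the class and YAML files from the removed example exist in your project:

```python
# Hedged usage sketch for the removed CrewBase example; the module path
# and the "topic" input key are hypothetical and depend on your project's
# YAML task definitions.
from your_project.crew import YourCrewName  # hypothetical module path

result = YourCrewName().crew().kickoff(inputs={"topic": "AI agents"})
print(result.raw)  # raw text of the final crew output
```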
 ## Crew Output
@@ -336,4 +188,4 @@ Then, to replay from a specific task, use:
 crewai replay -t <task_id>
 ```
 These commands let you replay from your latest kickoff tasks, still retaining context from previously executed tasks.

View File

@@ -138,7 +138,7 @@ print("---- Final Output ----")
print(final_output) print(final_output)
```` ````
```text Output ``` text Output
---- Final Output ---- ---- Final Output ----
Second method received: Output from first_method Second method received: Output from first_method
```` ````

View File

@@ -4,10 +4,12 @@ description: What is knowledge in CrewAI and how to use it.
 icon: book
 ---
+# Using Knowledge in CrewAI
 ## What is Knowledge?
 Knowledge in CrewAI is a powerful system that allows AI agents to access and utilize external information sources during their tasks.
 Think of it as giving your agents a reference library they can consult while working.
 <Info>
 Key benefits of using Knowledge:
@@ -34,20 +36,7 @@ CrewAI supports various types of knowledge sources out of the box:
   </Card>
 </CardGroup>
-## Supported Knowledge Parameters
-| Parameter | Type | Required | Description |
-| :--------------------------- | :---------------------------------- | :------- | :---------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `sources` | **List[BaseKnowledgeSource]** | Yes | List of knowledge sources that provide content to be stored and queried. Can include PDF, CSV, Excel, JSON, text files, or string content. |
-| `collection_name` | **str** | No | Name of the collection where the knowledge will be stored. Used to identify different sets of knowledge. Defaults to "knowledge" if not provided. |
-| `storage` | **Optional[KnowledgeStorage]** | No | Custom storage configuration for managing how the knowledge is stored and retrieved. If not provided, a default storage will be created. |
-## Quickstart Example
-<Tip>
-For file-Based Knowledge Sources, make sure to place your files in a `knowledge` directory at the root of your project.
-Also, use relative paths from the `knowledge` directory when creating the source.
-</Tip>
+## Quick Start
 Here's an example using string-based knowledge:
@@ -58,7 +47,8 @@ from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSourc
 # Create a knowledge source
 content = "Users name is John. He is 30 years old and lives in San Francisco."
 string_source = StringKnowledgeSource(
     content=content,
+    metadata={"preference": "personal"}
 )
 # Create an LLM with a temperature of 0 to ensure deterministic outputs
@@ -84,334 +74,62 @@ crew = Crew(
     tasks=[task],
     verbose=True,
     process=Process.sequential,
-    knowledge_sources=[string_source],  # Enable knowledge by adding the sources here. You can also add more sources to the sources list.
+    knowledge={
+        "sources": [string_source],
+        "metadata": {"preference": "personal"}
+    },  # Enable knowledge by adding the sources here. You can also add more sources to the sources list.
 )
 result = crew.kickoff(inputs={"question": "What city does John live in and how old is he?"})
 ```
-Here's another example with the `CrewDoclingSource`. The CrewDoclingSource is actually quite versatile and can handle multiple file formats including TXT, PDF, DOCX, HTML, and more.
-```python Code
-from crewai import LLM, Agent, Crew, Process, Task
-from crewai.knowledge.source.crew_docling_source import CrewDoclingSource
-# Create a knowledge source
-content_source = CrewDoclingSource(
-    file_paths=[
-        "https://lilianweng.github.io/posts/2024-11-28-reward-hacking",
-        "https://lilianweng.github.io/posts/2024-07-07-hallucination",
-    ],
-)
-# Create an LLM with a temperature of 0 to ensure deterministic outputs
-llm = LLM(model="gpt-4o-mini", temperature=0)
-# Create an agent with the knowledge store
-agent = Agent(
-    role="About papers",
-    goal="You know everything about the papers.",
-    backstory="""You are a master at understanding papers and their content.""",
-    verbose=True,
-    allow_delegation=False,
-    llm=llm,
-)
-task = Task(
-    description="Answer the following questions about the papers: {question}",
-    expected_output="An answer to the question.",
-    agent=agent,
-)
-crew = Crew(
-    agents=[agent],
-    tasks=[task],
-    verbose=True,
-    process=Process.sequential,
-    knowledge_sources=[
-        content_source
-    ],  # Enable knowledge by adding the sources here. You can also add more sources to the sources list.
-)
-result = crew.kickoff(
-    inputs={
-        "question": "What is the reward hacking paper about? Be sure to provide sources."
-    }
-)
-```
-## More Examples
-Here are examples of how to use different types of knowledge sources:
-### Text File Knowledge Source
-```python
-from crewai.knowledge.source.crew_docling_source import CrewDoclingSource
-# Create a text file knowledge source
-text_source = CrewDoclingSource(
-    file_paths=["document.txt", "another.txt"]
-)
-# Create crew with text file source on agents or crew level
-agent = Agent(
-    ...
-    knowledge_sources=[text_source]
-)
-crew = Crew(
-    ...
-    knowledge_sources=[text_source]
-)
-```
-### PDF Knowledge Source
-```python
-from crewai.knowledge.source.pdf_knowledge_source import PDFKnowledgeSource
-# Create a PDF knowledge source
-pdf_source = PDFKnowledgeSource(
-    file_paths=["document.pdf", "another.pdf"]
-)
-# Create crew with PDF knowledge source on agents or crew level
-agent = Agent(
-    ...
-    knowledge_sources=[pdf_source]
-)
-crew = Crew(
-    ...
-    knowledge_sources=[pdf_source]
-)
-```
-### CSV Knowledge Source
-```python
-from crewai.knowledge.source.csv_knowledge_source import CSVKnowledgeSource
-# Create a CSV knowledge source
-csv_source = CSVKnowledgeSource(
-    file_paths=["data.csv"]
-)
-# Create crew with CSV knowledge source or on agent level
-agent = Agent(
-    ...
-    knowledge_sources=[csv_source]
-)
-crew = Crew(
-    ...
-    knowledge_sources=[csv_source]
-)
-```
-### Excel Knowledge Source
-```python
-from crewai.knowledge.source.excel_knowledge_source import ExcelKnowledgeSource
-# Create an Excel knowledge source
-excel_source = ExcelKnowledgeSource(
-    file_paths=["spreadsheet.xlsx"]
-)
-# Create crew with Excel knowledge source on agents or crew level
-agent = Agent(
-    ...
-    knowledge_sources=[excel_source]
-)
-crew = Crew(
-    ...
-    knowledge_sources=[excel_source]
-)
-```
-### JSON Knowledge Source
-```python
-from crewai.knowledge.source.json_knowledge_source import JSONKnowledgeSource
-# Create a JSON knowledge source
-json_source = JSONKnowledgeSource(
-    file_paths=["data.json"]
-)
-# Create crew with JSON knowledge source on agents or crew level
-agent = Agent(
-    ...
-    knowledge_sources=[json_source]
-)
-crew = Crew(
-    ...
-    knowledge_sources=[json_source]
-)
```
 ## Knowledge Configuration
-### Chunking Configuration
-Knowledge sources automatically chunk content for better processing.
-You can configure chunking behavior in your knowledge sources:
-```python
-from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
-source = StringKnowledgeSource(
-    content="Your content here",
-    chunk_size=4000,     # Maximum size of each chunk (default: 4000)
-    chunk_overlap=200    # Overlap between chunks (default: 200)
-)
-```
-The chunking configuration helps in:
-- Breaking down large documents into manageable pieces
-- Maintaining context through chunk overlap
-- Optimizing retrieval accuracy
-### Embeddings Configuration
-You can also configure the embedder for the knowledge store.
-This is useful if you want to use a different embedder for the knowledge store than the one used for the agents.
-The `embedder` parameter supports various embedding model providers that include:
-- `openai`: OpenAI's embedding models
-- `google`: Google's text embedding models
-- `azure`: Azure OpenAI embeddings
-- `ollama`: Local embeddings with Ollama
-- `vertexai`: Google Cloud VertexAI embeddings
-- `cohere`: Cohere's embedding models
-- `bedrock`: AWS Bedrock embeddings
-- `huggingface`: Hugging Face models
-- `watson`: IBM Watson embeddings
-Here's an example of how to configure the embedder for the knowledge store using Google's `text-embedding-004` model:
-<CodeGroup>
-```python Example
-from crewai import Agent, Task, Crew, Process, LLM
-from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
-import os
-# Get the GEMINI API key
-GEMINI_API_KEY = os.environ.get("GEMINI_API_KEY")
-# Create a knowledge source
-content = "Users name is John. He is 30 years old and lives in San Francisco."
-string_source = StringKnowledgeSource(
-    content=content,
-)
-# Create an LLM with a temperature of 0 to ensure deterministic outputs
-gemini_llm = LLM(
-    model="gemini/gemini-1.5-pro-002",
-    api_key=GEMINI_API_KEY,
-    temperature=0,
-)
-# Create an agent with the knowledge store
-agent = Agent(
-    role="About User",
-    goal="You know everything about the user.",
-    backstory="""You are a master at understanding people and their preferences.""",
-    verbose=True,
-    allow_delegation=False,
-    llm=gemini_llm,
-)
-task = Task(
-    description="Answer the following questions about the user: {question}",
-    expected_output="An answer to the question.",
-    agent=agent,
-)
-crew = Crew(
-    agents=[agent],
-    tasks=[task],
-    verbose=True,
-    process=Process.sequential,
-    knowledge_sources=[string_source],
-    embedder={
-        "provider": "google",
-        "config": {
-            "model": "models/text-embedding-004",
-            "api_key": GEMINI_API_KEY,
-        }
-    }
-)
-result = crew.kickoff(inputs={"question": "What city does John live in and how old is he?"})
-```
-```text Output
-# Agent: About User
-## Task: Answer the following questions about the user: What city does John live in and how old is he?
-# Agent: About User
-## Final Answer:
-John is 30 years old and lives in San Francisco.
-```
-</CodeGroup>
-## Clearing Knowledge
-If you need to clear the knowledge stored in CrewAI, you can use the `crewai reset-memories` command with the `--knowledge` option.
-```bash Command
-crewai reset-memories --knowledge
-```
-This is useful when you've updated your knowledge sources and want to ensure that the agents are using the most recent information.
-## Agent-Specific Knowledge
-While knowledge can be provided at the crew level using `crew.knowledge_sources`, individual agents can also have their own knowledge sources using the `knowledge_sources` parameter:
-```python Code
-from crewai import Agent, Task, Crew
-from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
-# Create agent-specific knowledge about a product
-product_specs = StringKnowledgeSource(
-    content="""The XPS 13 laptop features:
-    - 13.4-inch 4K display
-    - Intel Core i7 processor
-    - 16GB RAM
-    - 512GB SSD storage
-    - 12-hour battery life""",
-    metadata={"category": "product_specs"}
-)
-# Create a support agent with product knowledge
-support_agent = Agent(
-    role="Technical Support Specialist",
-    goal="Provide accurate product information and support.",
-    backstory="You are an expert on our laptop products and specifications.",
-    knowledge_sources=[product_specs]  # Agent-specific knowledge
-)
-# Create a task that requires product knowledge
-support_task = Task(
-    description="Answer this customer question: {question}",
-    agent=support_agent
-)
-# Create and run the crew
-crew = Crew(
-    agents=[support_agent],
-    tasks=[support_task]
-)
-# Get answer about the laptop's specifications
-result = crew.kickoff(
-    inputs={"question": "What is the storage capacity of the XPS 13?"}
-)
-```
-<Info>
-Benefits of agent-specific knowledge:
-- Give agents specialized information for their roles
-- Maintain separation of concerns between agents
-- Combine with crew-level knowledge for layered information access
-</Info>
+### Metadata and Filtering
+Knowledge sources support metadata for better organization and filtering. Metadata is used to filter the knowledge sources when querying the knowledge store.
+```python Code
+knowledge_source = StringKnowledgeSource(
+    content="Users name is John. He is 30 years old and lives in San Francisco.",
+    metadata={"preference": "personal"}  # Metadata is used to filter the knowledge sources
+)
+```
+### Chunking Configuration
+Control how content is split for processing by setting the chunk size and overlap.
+```python Code
+knowledge_source = StringKnowledgeSource(
+    content="Long content...",
+    chunk_size=4000,    # Characters per chunk (default)
+    chunk_overlap=200   # Overlap between chunks (default)
+)
+```
+## Embedder Configuration
+You can also configure the embedder for the knowledge store. This is useful if you want to use a different embedder for the knowledge store than the one used for the agents.
+```python Code
+...
+string_source = StringKnowledgeSource(
+    content="Users name is John. He is 30 years old and lives in San Francisco.",
+    metadata={"preference": "personal"}
+)
+crew = Crew(
+    ...
+    knowledge={
+        "sources": [string_source],
+        "metadata": {"preference": "personal"},
+        "embedder_config": {
+            "provider": "openai",  # Default embedder provider; can be "ollama", "gemini", e.t.c.
+            "config": {"model": "text-embedding-3-small"}  # Default embedder model; can be "mxbai-embed-large", "nomic-embed-tex", e.t.c.
+        },
+    },
+)
+```
 ## Custom Knowledge Sources
@@ -431,10 +149,10 @@ from pydantic import BaseModel, Field
 class SpaceNewsKnowledgeSource(BaseKnowledgeSource):
     """Knowledge source that fetches data from Space News API."""
     api_endpoint: str = Field(description="API endpoint URL")
     limit: int = Field(default=10, description="Number of articles to fetch")
     def load_content(self) -> Dict[Any, str]:
         """Fetch and format space news articles."""
         try:
@@ -442,26 +160,26 @@ class SpaceNewsKnowledgeSource(BaseKnowledgeSource):
                 f"{self.api_endpoint}?limit={self.limit}"
             )
             response.raise_for_status()
             data = response.json()
             articles = data.get('results', [])
             formatted_data = self._format_articles(articles)
             return {self.api_endpoint: formatted_data}
         except Exception as e:
             raise ValueError(f"Failed to fetch space news: {str(e)}")
     def _format_articles(self, articles: list) -> str:
         """Format articles into readable text."""
         formatted = "Space News Articles:\n\n"
         for article in articles:
             formatted += f"""
                 Title: {article['title']}
                 Published: {article['published_at']}
                 Summary: {article['summary']}
                 News Site: {article['news_site']}
                 URL: {article['url']}
                 -------------------"""
         return formatted
     def add(self) -> None:
@@ -470,20 +188,25 @@ class SpaceNewsKnowledgeSource(BaseKnowledgeSource):
         for _, text in content.items():
             chunks = self._chunk_text(text)
             self.chunks.extend(chunks)
-        self._save_documents()
+        self.save_documents(metadata={
+            "source": "space_news_api",
+            "timestamp": datetime.now().isoformat(),
+            "article_count": self.limit
+        })
 # Create knowledge source
 recent_news = SpaceNewsKnowledgeSource(
     api_endpoint="https://api.spaceflightnewsapi.net/v4/articles",
     limit=10,
+    metadata={"category": "recent_news", "source": "spaceflight_news"}
 )
 # Create specialized agent
 space_analyst = Agent(
     role="Space News Analyst",
     goal="Answer questions about space news accurately and comprehensively",
     backstory="""You are a space industry analyst with expertise in space exploration,
     satellite technology, and space industry trends. You excel at answering questions
     about space news and providing detailed, accurate information.""",
     knowledge_sources=[recent_news],
@@ -510,14 +233,13 @@ result = crew.kickoff(
     inputs={"user_question": "What are the latest developments in space exploration?"}
 )
 ```
 ```output Output
 # Agent: Space News Analyst
 ## Task: Answer this question about space news: What are the latest developments in space exploration?
 # Agent: Space News Analyst
 ## Final Answer:
 The latest developments in space exploration, based on recent space news articles, include the following:
 1. SpaceX has received the final regulatory approvals to proceed with the second integrated Starship/Super Heavy launch, scheduled for as soon as the morning of Nov. 17, 2023. This is a significant step in SpaceX's ambitious plans for space exploration and colonization. [Source: SpaceNews](https://spacenews.com/starship-cleared-for-nov-17-launch/)
@@ -533,27 +255,23 @@ The latest developments in space exploration, based on recent space news article
 6. The National Natural Science Foundation of China has outlined a five-year project for researchers to study the assembly of ultra-large spacecraft. This could lead to significant advancements in spacecraft technology and space exploration capabilities. [Source: SpaceNews](https://spacenews.com/china-researching-challenges-of-kilometer-scale-ultra-large-spacecraft/)
 7. The Center for AEroSpace Autonomy Research (CAESAR) at Stanford University is focusing on spacecraft autonomy. The center held a kickoff event on May 22, 2024, to highlight the industry, academia, and government collaboration it seeks to foster. This could lead to significant advancements in autonomous spacecraft technology. [Source: SpaceNews](https://spacenews.com/stanford-center-focuses-on-spacecraft-autonomy/)
 ```
 </CodeGroup>
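A note on the `add()` change above: the `+` side calls `datetime.now().isoformat()`, yet no `datetime` import is visible in the excerpt. If the import is genuinely absent from the file (the excerpt may simply omit it), the fix is one line:

```python
# Assumed missing import for the new add() implementation above, which
# calls datetime.now().isoformat() when saving document metadata.
from datetime import datetime
```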
 #### Key Components Explained
 1. **Custom Knowledge Source (`SpaceNewsKnowledgeSource`)**:
    - Extends `BaseKnowledgeSource` for integration with CrewAI
    - Configurable API endpoint and article limit
    - Implements three key methods:
      - `load_content()`: Fetches articles from the API
      - `_format_articles()`: Structures the articles into readable text
-     - `add()`: Processes and stores the content
+     - `add()`: Processes and stores the content with metadata
 2. **Agent Configuration**:
    - Specialized role as a Space News Analyst
    - Uses the knowledge source to access space news
 3. **Task Setup**:
    - Takes a user question as input through `{user_question}`
    - Designed to provide detailed answers based on the knowledge source
@@ -562,7 +280,6 @@ The latest developments in space exploration, based on recent space news article
    - Handles input/output through the kickoff method
 This example demonstrates how to:
 - Create a custom knowledge source that fetches real-time data
 - Process and format external data for AI consumption
 - Use the knowledge source to answer specific user questions
@@ -570,26 +287,26 @@ This example demonstrates how to:
 #### About the Spaceflight News API
-The example uses the [Spaceflight News API](https://api.spaceflightnewsapi.net/v4/docs/), which:
+The example uses the [Spaceflight News API](https://api.spaceflightnewsapi.net/v4/documentation), which:
 - Provides free access to space-related news articles
 - Requires no authentication
 - Returns structured data about space news
 - Supports pagination and filtering
 You can customize the API query by modifying the endpoint URL:
 ```python
 # Fetch more articles
 recent_news = SpaceNewsKnowledgeSource(
     api_endpoint="https://api.spaceflightnewsapi.net/v4/articles",
     limit=20,  # Increase the number of articles
+    metadata={"category": "recent_news"}
 )
 # Add search parameters
 recent_news = SpaceNewsKnowledgeSource(
     api_endpoint="https://api.spaceflightnewsapi.net/v4/articles?search=NASA",  # Search for NASA news
     limit=10,
+    metadata={"category": "nasa_news"}
 )
 ```
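A note on the example above: `_format_articles` assumes each result exposes `title`, `published_at`, `summary`, `news_site`, and `url`. A hedged standalone check against the same public endpoint; the field names come from the example code, not from the API's own documentation:

```python
# Hedged sanity check of the response shape the custom source assumes.
import requests

resp = requests.get(
    "https://api.spaceflightnewsapi.net/v4/articles", params={"limit": 1}, timeout=10
)
resp.raise_for_status()
article = resp.json().get("results", [])[0]
for key in ("title", "published_at", "summary", "news_site", "url"):
    print(key, "->", article.get(key, "<missing>"))
```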
@@ -597,14 +314,16 @@ recent_news = SpaceNewsKnowledgeSource(
 <AccordionGroup>
   <Accordion title="Content Organization">
+    - Use descriptive metadata for better filtering
     - Keep chunk sizes appropriate for your content type
     - Consider content overlap for context preservation
     - Organize related information into separate knowledge sources
   </Accordion>
   <Accordion title="Performance Tips">
-    - Adjust chunk sizes based on content complexity
+    - Use metadata filtering to narrow search scope
+    - Adjust chunk sizes based on content complexity
     - Configure appropriate embedding models
     - Consider using local embedding providers for faster processing
   </Accordion>
 </AccordionGroup>

View File

@@ -29,7 +29,7 @@ Large Language Models (LLMs) are the core intelligence behind CrewAI agents. The
 ## Available Models and Their Capabilities
-Here's a detailed breakdown of supported models and their capabilities, you can compare performance at [lmarena.ai](https://lmarena.ai/?leaderboard) and [artificialanalysis.ai](https://artificialanalysis.ai/):
+Here's a detailed breakdown of supported models and their capabilities:
 <Tabs>
   <Tab title="OpenAI">
@@ -43,128 +43,24 @@ Here's a detailed breakdown of supported models and their capabilities, you can
     1 token ≈ 4 characters in English. For example, 8,192 tokens ≈ 32,768 characters or about 6,000 words.
   </Note>
 </Tab>
<Tab title="Nvidia NIM">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| nvidia/mistral-nemo-minitron-8b-8k-instruct | 8,192 tokens | State-of-the-art small language model delivering superior accuracy for chatbot, virtual assistants, and content generation. |
| nvidia/nemotron-4-mini-hindi-4b-instruct| 4,096 tokens | A bilingual Hindi-English SLM for on-device inference, tailored specifically for Hindi Language. |
| "nvidia/llama-3.1-nemotron-70b-instruct | 128k tokens | Llama-3.1-Nemotron-70B-Instruct is a large language model customized by NVIDIA in order to improve the helpfulness of LLM generated responses. |
| nvidia/llama3-chatqa-1.5-8b | 128k tokens | Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines. |
| nvidia/llama3-chatqa-1.5-70b | 128k tokens | Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines. |
| nvidia/vila | 128k tokens | Multi-modal vision-language model that understands text/img/video and creates informative responses |
| nvidia/neva-22 | 4,096 tokens | Multi-modal vision-language model that understands text/images and generates informative responses |
| nvidia/nemotron-mini-4b-instruct | 8,192 tokens | General-purpose tasks |
| nvidia/usdcode-llama3-70b-instruct | 128k tokens | State-of-the-art LLM that answers OpenUSD knowledge queries and generates USD-Python code. |
| nvidia/nemotron-4-340b-instruct | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| meta/codellama-70b | 100k tokens | LLM capable of generating code from natural language and vice versa. |
| meta/llama2-70b | 4,096 tokens | Cutting-edge large language AI model capable of generating text and code in response to prompts. |
| meta/llama3-8b-instruct | 8,192 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| meta/llama3-70b-instruct | 8,192 tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| meta/llama-3.1-8b-instruct | 128k tokens | Advanced state-of-the-art model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.1-70b-instruct | 128k tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| meta/llama-3.1-405b-instruct | 128k tokens | Advanced LLM for synthetic data generation, distillation, and inference for chatbots, coding, and domain-specific tasks. |
| meta/llama-3.2-1b-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.2-3b-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.2-11b-vision-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.2-90b-vision-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.1-70b-instruct | 128k tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| google/gemma-7b | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/gemma-2b | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/codegemma-7b | 8,192 tokens | Cutting-edge model built on Google's Gemma-7B specialized for code generation and code completion. |
| google/codegemma-1.1-7b | 8,192 tokens | Advanced programming model for code generation, completion, reasoning, and instruction following. |
| google/recurrentgemma-2b | 8,192 tokens | Novel recurrent architecture based language model for faster inference when generating long sequences. |
| google/gemma-2-9b-it | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/gemma-2-27b-it | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/gemma-2-2b-it | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/deplot | 512 tokens | One-shot visual language understanding model that translates images of plots into tables. |
| google/paligemma | 8,192 tokens | Vision language model adept at comprehending text and visual inputs to produce informative responses. |
| mistralai/mistral-7b-instruct-v0.2 | 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| mistralai/mixtral-8x7b-instruct-v0.1 | 8,192 tokens | An MOE LLM that follows instructions, completes requests, and generates creative text. |
| mistralai/mistral-large | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| mistralai/mixtral-8x22b-instruct-v0.1 | 8,192 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| mistralai/mistral-7b-instruct-v0.3 | 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| nv-mistralai/mistral-nemo-12b-instruct | 128k tokens | Most advanced language model for reasoning, code, multilingual tasks; runs on a single GPU. |
| mistralai/mamba-codestral-7b-v0.1 | 256k tokens | Model for writing and interacting with code across a wide range of programming languages and tasks. |
| microsoft/phi-3-mini-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-mini-4k-instruct | 4,096 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-small-8k-instruct | 8,192 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-small-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-medium-4k-instruct | 4,096 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-medium-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3.5-mini-instruct | 128K tokens | Lightweight multilingual LLM powering AI applications in latency bound, memory/compute constrained environments |
| microsoft/phi-3.5-moe-instruct | 128K tokens | Advanced LLM based on Mixture of Experts architecture to deliver compute-efficient content generation |
| microsoft/kosmos-2 | 1,024 tokens | Groundbreaking multimodal model designed to understand and reason about visual elements in images. |
| microsoft/phi-3-vision-128k-instruct | 128k tokens | Cutting-edge open multimodal model excelling in high-quality reasoning from images. |
| microsoft/phi-3.5-vision-instruct | 128k tokens | Cutting-edge open multimodal model excelling in high-quality reasoning from images. |
| databricks/dbrx-instruct | 12k tokens | A general-purpose LLM with state-of-the-art performance in language understanding, coding, and RAG. |
| snowflake/arctic | 1,024 tokens | Delivers high efficiency inference for enterprise applications focused on SQL generation and coding. |
| aisingapore/sea-lion-7b-instruct | 4,096 tokens | LLM to represent and serve the linguistic and cultural diversity of Southeast Asia |
| ibm/granite-8b-code-instruct | 4,096 tokens | Software programming LLM for code generation, completion, explanation, and multi-turn conversion. |
| ibm/granite-34b-code-instruct | 8,192 tokens | Software programming LLM for code generation, completion, explanation, and multi-turn conversion. |
| ibm/granite-3.0-8b-instruct | 4,096 tokens | Advanced Small Language Model supporting RAG, summarization, classification, code, and agentic AI |
| ibm/granite-3.0-3b-a800m-instruct | 4,096 tokens | Highly efficient Mixture of Experts model for RAG, summarization, entity extraction, and classification |
| mediatek/breeze-7b-instruct | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| upstage/solar-10.7b-instruct | 4,096 tokens | Excels in NLP tasks, particularly in instruction-following, reasoning, and mathematics. |
| writer/palmyra-med-70b-32k | 32k tokens | Leading LLM for accurate, contextually relevant responses in the medical domain. |
| writer/palmyra-med-70b | 32k tokens | Leading LLM for accurate, contextually relevant responses in the medical domain. |
| writer/palmyra-fin-70b-32k | 32k tokens | Specialized LLM for financial analysis, reporting, and data processing |
| 01-ai/yi-large | 32k tokens | Powerful model trained on English and Chinese for diverse tasks including chatbots and creative writing. |
| deepseek-ai/deepseek-coder-6.7b-instruct | 2k tokens | Powerful coding model offering advanced capabilities in code generation, completion, and infilling |
| rakuten/rakutenai-7b-instruct | 1,024 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| rakuten/rakutenai-7b-chat | 1,024 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| baichuan-inc/baichuan2-13b-chat | 4,096 tokens | Support Chinese and English chat, coding, math, instruction following, solving quizzes |
<Note>
NVIDIA's NIM support for models is expanding continuously! For the most up-to-date list of available models, please visit build.nvidia.com.
</Note>
</Tab>
<Tab title="Gemini">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| gemini-2.0-flash-exp | 1M tokens | Higher quality at faster speed, multimodal model, good for most tasks |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
<Tip>
Google's Gemini models are all multimodal, supporting audio, images, video, and text, with context caching, JSON schema, function calling, etc.
These models are available via API_KEY from
[The Gemini API](https://ai.google.dev/gemini-api/docs) and also from
[Google Cloud Vertex](https://cloud.google.com/vertex-ai/generative-ai/docs/migrate/migrate-google-ai) as part of the
[Model Garden](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models).
</Tip>
</Tab>
<Tab title="Groq"> <Tab title="Groq">
| Model | Context Window | Best For | | Model | Context Window | Best For |
|-------|---------------|-----------| |-------|---------------|-----------|
| Llama 3.1 70B/8B | 131,072 tokens | High-performance, large context tasks | | Llama 3.1 70B/8B | 131,072 tokens | High-performance, large context tasks |
| Llama 3.2 Series | 8,192 tokens | General-purpose tasks | | Llama 3.2 Series | 8,192 tokens | General-purpose tasks |
| Mixtral 8x7B | 32,768 tokens | Balanced performance and context | | Mixtral 8x7B | 32,768 tokens | Balanced performance and context |
| Gemma Series | 8,192 tokens | Efficient, smaller-scale tasks |
<Tip> <Tip>
Groq is known for its fast inference speeds, making it suitable for real-time applications. Groq is known for its fast inference speeds, making it suitable for real-time applications.
</Tip> </Tip>
</Tab> </Tab>
<Tab title="SambaNova">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| Llama 3.1 70B/8B | Up to 131,072 tokens | High-performance, large context tasks |
| Llama 3.1 405B | 8,192 tokens | High-performance and output quality |
| Llama 3.2 Series | 8,192 tokens | General-purpose tasks, multimodal |
| Llama 3.3 70B | Up to 131,072 tokens | High-performance and output quality |
| Qwen2 family | 8,192 tokens | High-performance and output quality |
<Tip>
[SambaNova](https://cloud.sambanova.ai/) has several models with fast inference speed at full precision.
</Tip>
</Tab>
<Tab title="Others"> <Tab title="Others">
| Provider | Context Window | Key Features | | Provider | Context Window | Key Features |
|----------|---------------|--------------| |----------|---------------|--------------|
| Deepseek Chat | 128,000 tokens | Specialized in technical discussions | | Deepseek Chat | 128,000 tokens | Specialized in technical discussions |
| Claude 3 | Up to 200K tokens | Strong reasoning, code understanding | | Claude 3 | Up to 200K tokens | Strong reasoning, code understanding |
| Gemma Series | 8,192 tokens | Efficient, smaller-scale tasks | | Gemini | Varies by model | Multimodal capabilities |
<Info> <Info>
Provider selection should consider factors like: Provider selection should consider factors like:
@@ -232,10 +128,10 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
# llm: anthropic/claude-2.1 # llm: anthropic/claude-2.1
# llm: anthropic/claude-2.0 # llm: anthropic/claude-2.0
# Google Models - Strong reasoning, large cacheable context window, multimodal # Google Models - Good for general tasks
# llm: gemini/gemini-pro
# llm: gemini/gemini-1.5-pro-latest # llm: gemini/gemini-1.5-pro-latest
# llm: gemini/gemini-1.5-flash-latest # llm: gemini/gemini-1.0-pro-latest
# llm: gemini/gemini-1.5-flash-8b-latest
# AWS Bedrock Models - Enterprise-grade # AWS Bedrock Models - Enterprise-grade
# llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0 # llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
@@ -454,18 +350,13 @@ Learn how to get the most out of your LLM configuration:
<Accordion title="Google"> <Accordion title="Google">
```python Code ```python Code
# Option 1. Gemini accessed with an API key.
# https://ai.google.dev/gemini-api/docs/api-key
GEMINI_API_KEY=<your-api-key> GEMINI_API_KEY=<your-api-key>
# Option 2. Vertex AI IAM credentials for Gemini, Anthropic, and anything in the Model Garden.
# https://cloud.google.com/vertex-ai/generative-ai/docs/overview
``` ```
Example usage: Example usage:
```python Code ```python Code
llm = LLM( llm = LLM(
model="gemini/gemini-1.5-pro-latest", model="gemini/gemini-pro",
temperature=0.7 temperature=0.7
) )
``` ```
@@ -521,20 +412,6 @@ Learn how to get the most out of your LLM configuration:
``` ```
</Accordion> </Accordion>
<Accordion title="Nvidia NIM">
```python Code
NVIDIA_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="nvidia_nim/meta/llama3-70b-instruct",
temperature=0.7
)
```
</Accordion>
<Accordion title="Groq"> <Accordion title="Groq">
```python Code ```python Code
GROQ_API_KEY=<your-api-key> GROQ_API_KEY=<your-api-key>
@@ -625,6 +502,20 @@ Learn how to get the most out of your LLM configuration:
``` ```
</Accordion> </Accordion>
<Accordion title="Nvidia NIM">
```python Code
NVIDIA_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="nvidia_nim/meta/llama3-70b-instruct",
temperature=0.7
)
```
</Accordion>
<Accordion title="SambaNova"> <Accordion title="SambaNova">
```python Code ```python Code
SAMBANOVA_API_KEY=<your-api-key> SAMBANOVA_API_KEY=<your-api-key>

View File

@@ -134,23 +134,6 @@ crew = Crew(
) )
``` ```
## Memory Configuration Options
If you want to access a specific organization and project, you can set the `org_id` and `project_id` parameters in the memory configuration.
```python Code
from crewai import Crew
crew = Crew(
agents=[...],
tasks=[...],
verbose=True,
memory=True,
memory_config={
"provider": "mem0",
"config": {"user_id": "john", "org_id": "my_org_id", "project_id": "my_project_id"},
},
)
```
## Additional Embedding Providers ## Additional Embedding Providers

View File

@@ -6,7 +6,7 @@ icon: list-check
## Overview of a Task ## Overview of a Task
In the CrewAI framework, a `Task` is a specific assignment completed by an `Agent`. In the CrewAI framework, a `Task` is a specific assignment completed by an `Agent`.
Tasks provide all necessary details for execution, such as a description, the agent responsible, required tools, and more, facilitating a wide range of action complexities. Tasks provide all necessary details for execution, such as a description, the agent responsible, required tools, and more, facilitating a wide range of action complexities.
@@ -263,148 +263,8 @@ analysis_task = Task(
) )
``` ```
## Task Guardrails
Task guardrails provide a way to validate and transform task outputs before they
are passed to the next task. This feature helps ensure data quality and provides
feedback to agents when their output doesn't meet specific criteria.
### Using Task Guardrails
To add a guardrail to a task, provide a validation function through the `guardrail` parameter:
```python Code
from typing import Tuple, Union, Dict, Any
def validate_blog_content(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
"""Validate blog content meets requirements."""
try:
# Check word count
word_count = len(result.split())
if word_count > 200:
return (False, {
"error": "Blog content exceeds 200 words",
"code": "WORD_COUNT_ERROR",
"context": {"word_count": word_count}
})
# Additional validation logic here
return (True, result.strip())
except Exception as e:
return (False, {
"error": "Unexpected error during validation",
"code": "SYSTEM_ERROR"
})
blog_task = Task(
description="Write a blog post about AI",
expected_output="A blog post under 200 words",
agent=blog_agent,
guardrail=validate_blog_content # Add the guardrail function
)
```
### Guardrail Function Requirements
1. **Function Signature**:
- Must accept exactly one parameter (the task output)
- Should return a tuple of `(bool, Any)`
- Type hints are recommended but optional
2. **Return Values**:
- Success: Return `(True, validated_result)`
- Failure: Return `(False, error_details)`
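A minimal sketch that satisfies both requirements — exactly one parameter in, a `(bool, Any)` tuple out:
```python Code
from typing import Any, Tuple

def require_nonempty(result: str) -> Tuple[bool, Any]:
    """Smallest valid guardrail: accept the task output, return (success, data)."""
    if result and result.strip():
        return (True, result.strip())    # success: validated result flows onward
    return (False, "Output was empty")   # failure: error details go back to the agent
```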
### Error Handling Best Practices
1. **Structured Error Responses**:
```python Code
def validate_with_context(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
try:
# Main validation logic
validated_data = perform_validation(result)
return (True, validated_data)
except ValidationError as e:
return (False, {
"error": str(e),
"code": "VALIDATION_ERROR",
"context": {"input": result}
})
except Exception as e:
return (False, {
"error": "Unexpected error",
"code": "SYSTEM_ERROR"
})
```
2. **Error Categories**:
- Use specific error codes
- Include relevant context
- Provide actionable feedback
3. **Validation Chain**:
```python Code
from typing import Any, Dict, List, Tuple, Union
def complex_validation(result: str) -> Tuple[bool, Union[str, Dict[str, Any]]]:
"""Chain multiple validation steps."""
# Step 1: Basic validation
if not result:
return (False, {"error": "Empty result", "code": "EMPTY_INPUT"})
# Step 2: Content validation
try:
validated = validate_content(result)
if not validated:
return (False, {"error": "Invalid content", "code": "CONTENT_ERROR"})
# Step 3: Format validation
formatted = format_output(validated)
return (True, formatted)
except Exception as e:
return (False, {
"error": str(e),
"code": "VALIDATION_ERROR",
"context": {"step": "content_validation"}
})
```
### Handling Guardrail Results
When a guardrail returns `(False, error)`:
1. The error is sent back to the agent
2. The agent attempts to fix the issue
3. The process repeats until:
- The guardrail returns `(True, result)`
- Maximum retries are reached
Example with retry handling:
```python Code
import json
from typing import Any, Dict, Tuple, Union
def validate_json_output(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
"""Validate and parse JSON output."""
try:
# Try to parse as JSON
data = json.loads(result)
return (True, data)
except json.JSONDecodeError as e:
return (False, {
"error": "Invalid JSON format",
"code": "JSON_ERROR",
"context": {"line": e.lineno, "column": e.colno}
})
task = Task(
description="Generate a JSON report",
expected_output="A valid JSON object",
agent=analyst,
guardrail=validate_json_output,
max_retries=3 # Limit retry attempts
)
```
## Getting Structured Consistent Outputs from Tasks ## Getting Structured Consistent Outputs from Tasks
When you need to ensure that a task outputs a structured and consistent format, you can use the `output_pydantic` or `output_json` properties on a task. These properties allow you to define the expected output structure, making it easier to parse and utilize the results in your application.
<Note> <Note>
It's also important to note that the output of the final task of a crew becomes the final output of the actual crew itself. It's also important to note that the output of the final task of a crew becomes the final output of the actual crew itself.
@@ -748,114 +608,6 @@ While creating and executing tasks, certain validation mechanisms are in place t
These validations help in maintaining the consistency and reliability of task executions within the crewAI framework. These validations help in maintaining the consistency and reliability of task executions within the crewAI framework.
## Task Guardrails
Task guardrails provide a powerful way to validate, transform, or filter task outputs before they are passed to the next task. Guardrails are optional functions that execute before the next task starts, allowing you to ensure that task outputs meet specific requirements or formats.
### Basic Usage
```python Code
import json
from typing import Tuple, Union
from crewai import Task
def validate_json_output(result: str) -> Tuple[bool, Union[dict, str]]:
"""Validate that the output is valid JSON."""
try:
json_data = json.loads(result)
return (True, json_data)
except json.JSONDecodeError:
return (False, "Output must be valid JSON")
task = Task(
description="Generate JSON data",
expected_output="Valid JSON object",
guardrail=validate_json_output
)
```
### How Guardrails Work
1. **Optional Attribute**: Guardrails are an optional attribute at the task level, allowing you to add validation only where needed.
2. **Execution Timing**: The guardrail function is executed before the next task starts, ensuring valid data flow between tasks.
3. **Return Format**: Guardrails must return a tuple of `(success, data)`:
- If `success` is `True`, `data` is the validated/transformed result
- If `success` is `False`, `data` is the error message
4. **Result Routing**:
- On success (`True`), the result is automatically passed to the next task
- On failure (`False`), the error is sent back to the agent to generate a new answer
### Common Use Cases
#### Data Format Validation
```python Code
def validate_email_format(result: str) -> Tuple[bool, str]:
"""Ensure the output contains a valid email address."""
import re
email_pattern = r'^[\w\.-]+@[\w\.-]+\.\w+$'
if re.match(email_pattern, result.strip()):
return (True, result.strip())
return (False, "Output must be a valid email address")
```
#### Content Filtering
```python Code
def filter_sensitive_info(result: str) -> Tuple[bool, str]:
"""Remove or validate sensitive information."""
sensitive_patterns = ['SSN:', 'password:', 'secret:']
for pattern in sensitive_patterns:
if pattern.lower() in result.lower():
return (False, f"Output contains sensitive information ({pattern})")
return (True, result)
```
#### Data Transformation
```python Code
def normalize_phone_number(result: str) -> Tuple[bool, str]:
"""Ensure phone numbers are in a consistent format."""
import re
digits = re.sub(r'\D', '', result)
if len(digits) == 10:
formatted = f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"
return (True, formatted)
return (False, "Output must be a 10-digit phone number")
```
### Advanced Features
#### Chaining Multiple Validations
```python Code
def chain_validations(*validators):
"""Chain multiple validators together."""
def combined_validator(result):
for validator in validators:
success, data = validator(result)
if not success:
return (False, data)
result = data
return (True, result)
return combined_validator
# Usage
task = Task(
description="Get user contact info",
expected_output="Email and phone",
guardrail=chain_validations(
validate_email_format,
filter_sensitive_info
)
)
```
#### Custom Retry Logic
```python Code
task = Task(
description="Generate data",
expected_output="Valid data",
guardrail=validate_data,
max_retries=5 # Override default retry limit
)
```
## Creating Directories when Saving Files ## Creating Directories when Saving Files
You can now specify if a task should create directories when saving its output to a file. This is particularly useful for organizing outputs and ensuring that file paths are correctly structured. You can now specify if a task should create directories when saving its output to a file. This is particularly useful for organizing outputs and ensuring that file paths are correctly structured.
@@ -877,7 +629,7 @@ save_output_task = Task(
## Conclusion ## Conclusion
Tasks are the driving force behind the actions of agents in CrewAI. Tasks are the driving force behind the actions of agents in CrewAI.
By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit. By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit.
Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential, Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential,
ensuring agents are effectively prepared for their assignments and that tasks are executed as intended. ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.

View File

@@ -172,48 +172,6 @@ def my_tool(question: str) -> str:
return "Result from your custom tool" return "Result from your custom tool"
``` ```
### Structured Tools
The `CrewStructuredTool` class wraps functions as tools, providing flexibility and validation while reducing boilerplate. It supports custom schemas and dynamic logic for seamless integration of complex functionalities.
#### Example:
Using `CrewStructuredTool.from_function`, you can wrap a function that interacts with an external API or system, providing a structured interface. This enables robust validation and consistent execution, making it easier to integrate complex functionalities into your applications, as demonstrated in the following example:
```python
from crewai.tools.structured_tool import CrewStructuredTool
from pydantic import BaseModel
# Define the schema for the tool's input using Pydantic
class APICallInput(BaseModel):
endpoint: str
parameters: dict
# Wrapper function to execute the API call
def tool_wrapper(*args, **kwargs):
# Here, you would typically call the API using the parameters
# For demonstration, we'll return a placeholder string
return f"Call the API at {kwargs['endpoint']} with parameters {kwargs['parameters']}"
# Create and return the structured tool
def create_structured_tool():
return CrewStructuredTool.from_function(
name='Wrapper API',
description="A tool to wrap API calls with structured input.",
args_schema=APICallInput,
func=tool_wrapper,
)
# Example usage
structured_tool = create_structured_tool()
# Execute the tool with structured input
result = structured_tool._run(**{
"endpoint": "https://example.com/api",
"parameters": {"key1": "value1", "key2": "value2"}
})
print(result) # Output: Call the API at https://example.com/api with parameters {'key1': 'value1', 'key2': 'value2'}
```
### Custom Caching Mechanism ### Custom Caching Mechanism
<Tip> <Tip>

View File

@@ -57,7 +57,7 @@ This feature is useful for debugging and understanding how agents interact with
<Step title="Install AgentOps"> <Step title="Install AgentOps">
Install AgentOps with: Install AgentOps with:
```bash ```bash
pip install 'crewai[agentops]' pip install crewai[agentops]
``` ```
or or
```bash ```bash

View File

@@ -32,7 +32,6 @@ LiteLLM supports a wide range of providers, including but not limited to:
- Cloudflare Workers AI - Cloudflare Workers AI
- DeepInfra - DeepInfra
- Groq - Groq
- SambaNova
- [NVIDIA NIMs](https://docs.api.nvidia.com/nim/reference/models-1) - [NVIDIA NIMs](https://docs.api.nvidia.com/nim/reference/models-1)
- And many more! - And many more!

View File

@@ -1,138 +0,0 @@
---
title: Using Multimodal Agents
description: Learn how to enable and use multimodal capabilities in your agents for processing images and other non-text content within the CrewAI framework.
icon: image
---
# Using Multimodal Agents
CrewAI supports multimodal agents that can process both text and non-text content like images. This guide will show you how to enable and use multimodal capabilities in your agents.
## Enabling Multimodal Capabilities
To create a multimodal agent, simply set the `multimodal` parameter to `True` when initializing your agent:
```python
from crewai import Agent
agent = Agent(
role="Image Analyst",
goal="Analyze and extract insights from images",
backstory="An expert in visual content interpretation with years of experience in image analysis",
multimodal=True # This enables multimodal capabilities
)
```
When you set `multimodal=True`, the agent is automatically configured with the necessary tools for handling non-text content, including the `AddImageTool`.
## Working with Images
The multimodal agent comes pre-configured with the `AddImageTool`, which allows it to process images. You don't need to manually add this tool - it's automatically included when you enable multimodal capabilities.
Here's a complete example showing how to use a multimodal agent to analyze an image:
```python
from crewai import Agent, Task, Crew
# Create a multimodal agent
image_analyst = Agent(
role="Product Analyst",
goal="Analyze product images and provide detailed descriptions",
backstory="Expert in visual product analysis with deep knowledge of design and features",
multimodal=True
)
# Create a task for image analysis
task = Task(
description="Analyze the product image at https://example.com/product.jpg and provide a detailed description",
agent=image_analyst
)
# Create and run the crew
crew = Crew(
agents=[image_analyst],
tasks=[task]
)
result = crew.kickoff()
```
### Advanced Usage with Context
You can provide additional context or specific questions about the image when creating tasks for multimodal agents. The task description can include specific aspects you want the agent to focus on:
```python
from crewai import Agent, Task, Crew
# Create a multimodal agent for detailed analysis
expert_analyst = Agent(
role="Visual Quality Inspector",
goal="Perform detailed quality analysis of product images",
backstory="Senior quality control expert with expertise in visual inspection",
multimodal=True # AddImageTool is automatically included
)
# Create a task with specific analysis requirements
inspection_task = Task(
description="""
Analyze the product image at https://example.com/product.jpg with focus on:
1. Quality of materials
2. Manufacturing defects
3. Compliance with standards
Provide a detailed report highlighting any issues found.
""",
agent=expert_analyst
)
# Create and run the crew
crew = Crew(
agents=[expert_analyst],
tasks=[inspection_task]
)
result = crew.kickoff()
```
### Tool Details
When working with multimodal agents, the `AddImageTool` is automatically configured with the following schema:
```python
class AddImageToolSchema:
image_url: str # Required: The URL or path of the image to process
action: Optional[str] = None # Optional: Additional context or specific questions about the image
```
The multimodal agent will automatically handle the image processing through its built-in tools, allowing it to:
- Access images via URLs or local file paths
- Process image content with optional context or specific questions
- Provide analysis and insights based on the visual information and task requirements
## Best Practices
When working with multimodal agents, keep these best practices in mind:
1. **Image Access**
- Ensure your images are accessible via URLs that the agent can reach
- For local images, consider hosting them temporarily or using absolute file paths
- Verify that image URLs are valid and accessible before running tasks
2. **Task Description**
- Be specific about what aspects of the image you want the agent to analyze
- Include clear questions or requirements in the task description
- Consider using the optional `action` parameter for focused analysis
3. **Resource Management**
- Image processing may require more computational resources than text-only tasks
- Some language models may require base64 encoding for image data
- Consider batch processing for multiple images to optimize performance
4. **Environment Setup**
- Verify that your environment has the necessary dependencies for image processing
- Ensure your language model supports multimodal capabilities
- Test with small images first to validate your setup
5. **Error Handling**
- Implement proper error handling for image loading failures
- Have fallback strategies for when image processing fails
- Monitor and log image processing operations for debugging
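As one way to apply the error-handling tip above — the pre-flight URL check here is illustrative and not part of the CrewAI API:
```python
import requests

from crewai import Agent, Crew, Task

def image_is_reachable(url: str) -> bool:
    """Hypothetical helper: verify the image URL responds before kickoff."""
    try:
        return requests.head(url, timeout=5).status_code == 200
    except requests.RequestException:
        return False

image_url = "https://example.com/product.jpg"  # placeholder URL

if image_is_reachable(image_url):
    analyst = Agent(
        role="Image Analyst",
        goal="Analyze and describe product images",
        backstory="Expert in visual content interpretation",
        multimodal=True,
    )
    task = Task(
        description=f"Analyze the product image at {image_url}",
        agent=analyst,
    )
    result = Crew(agents=[analyst], tasks=[task]).kickoff()
else:
    print(f"Skipping task: {image_url} is not reachable")
```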

View File

@@ -1,211 +0,0 @@
# Portkey Integration with CrewAI
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-CrewAI.png" alt="Portkey CrewAI Header Image" width="70%" />
[Portkey](https://portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) is a 2-line upgrade to make your CrewAI agents reliable, cost-efficient, and fast.
Portkey adds 4 core production capabilities to any CrewAI agent:
1. Routing to **200+ LLMs**
2. Making each LLM call more robust
3. Full-stack tracing & cost, performance analytics
4. Real-time guardrails to enforce behavior
## Getting Started
1. **Install Required Packages:**
```bash
pip install -qU crewai portkey-ai
```
2. **Configure the LLM Client:**
To build CrewAI Agents with Portkey, you'll need two keys:
- **Portkey API Key**: Sign up on the [Portkey app](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) and copy your API key
- **Virtual Key**: Virtual Keys securely manage your LLM API keys in one place. Store your LLM provider API keys securely in Portkey's vault
```python
from crewai import LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
gpt_llm = LLM(
model="gpt-4",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy", # We are using Virtual key
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_VIRTUAL_KEY", # Enter your Virtual key from Portkey
)
)
```
3. **Create and Run Your First Agent:**
```python
from crewai import Agent, Task, Crew
# Define your agents with roles and goals
coder = Agent(
role='Software developer',
goal='Write clear, concise code on demand',
backstory='An expert coder with a keen eye for software trends.',
llm=gpt_llm
)
# Create tasks for your agents
task1 = Task(
description="Define the HTML for making a simple website with heading- Hello World! Portkey is working!",
expected_output="A clear and concise HTML code",
agent=coder
)
# Instantiate your crew
crew = Crew(
agents=[coder],
tasks=[task1],
)
result = crew.kickoff()
print(result)
```
## Key Features
| Feature | Description |
|---------|-------------|
| 🌐 Multi-LLM Support | Access OpenAI, Anthropic, Gemini, Azure, and 250+ providers through a unified interface |
| 🛡️ Production Reliability | Implement retries, timeouts, load balancing, and fallbacks |
| 📊 Advanced Observability | Track 40+ metrics including costs, tokens, latency, and custom metadata |
| 🔍 Comprehensive Logging | Debug with detailed execution traces and function call logs |
| 🚧 Security Controls | Set budget limits and implement role-based access control |
| 🔄 Performance Analytics | Capture and analyze feedback for continuous improvement |
| 💾 Intelligent Caching | Reduce costs and latency with semantic or simple caching |
## Production Features with Portkey Configs
All features mentioned below are available through Portkey's Config system, which allows you to define routing strategies using simple JSON objects in your LLM API calls. You can create and manage Configs directly in your code or through the Portkey Dashboard. Each Config has a unique ID for easy reference.
<Frame>
<img src="https://raw.githubusercontent.com/Portkey-AI/docs-core/refs/heads/main/images/libraries/libraries-3.avif"/>
</Frame>
### 1. Use 250+ LLMs
Access various LLMs like Anthropic, Gemini, Mistral, Azure OpenAI, and more with minimal code changes. Switch between providers or use them together seamlessly. [Learn more about Universal API](https://portkey.ai/docs/product/ai-gateway/universal-api)
Easily switch between different LLM providers:
```python
# Anthropic Configuration
anthropic_llm = LLM(
model="claude-3-5-sonnet-latest",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY", #You don't need provider when using Virtual keys
trace_id="anthropic_agent"
)
)
# Azure OpenAI Configuration
azure_llm = LLM(
model="gpt-4",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_AZURE_VIRTUAL_KEY", #You don't need provider when using Virtual keys
trace_id="azure_agent"
)
)
```
### 2. Caching
Improve response times and reduce costs with two powerful caching modes:
- **Simple Cache**: Perfect for exact matches
- **Semantic Cache**: Matches responses for requests that are semantically similar
[Learn more about Caching](https://portkey.ai/docs/product/ai-gateway/cache-simple-and-semantic)
```py
config = {
"cache": {
"mode": "semantic", # or "simple" for exact matching
}
}
```
### 3. Production Reliability
Portkey provides comprehensive reliability features:
- **Automatic Retries**: Handle temporary failures gracefully
- **Request Timeouts**: Prevent hanging operations
- **Conditional Routing**: Route requests based on specific conditions
- **Fallbacks**: Set up automatic provider failovers
- **Load Balancing**: Distribute requests efficiently
[Learn more about Reliability Features](https://portkey.ai/docs/product/ai-gateway/)
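A rough sketch of a Config combining these features — the field names follow Portkey's documented Config schema, but consult the Portkey docs for the authoritative shape:
```py
config = {
    "retry": {"attempts": 3},          # retry transient failures automatically
    "strategy": {"mode": "fallback"},  # fail over across the targets below
    "targets": [
        {"virtual_key": "YOUR_PRIMARY_VIRTUAL_KEY"},   # placeholder keys
        {"virtual_key": "YOUR_FALLBACK_VIRTUAL_KEY"},
    ],
}
```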
### 4. Metrics
Agent runs are complex. Portkey automatically logs **40+ comprehensive metrics** for your AI agents, including cost, tokens used, latency, etc. Whether you need a broad overview or granular insights into your agent runs, Portkey's customizable filters provide the metrics you need.
- Cost per agent interaction
- Response times and latency
- Token usage and efficiency
- Success/failure rates
- Cache hit rates
<img src="https://github.com/siddharthsambharia-portkey/Portkey-Product-Images/blob/main/Portkey-Dashboard.png?raw=true" width="70%" alt="Portkey Dashboard" />
### 5. Detailed Logging
Logs are essential for understanding agent behavior, diagnosing issues, and improving performance. They provide a detailed record of agent activities and tool use, which is crucial for debugging and optimizing processes.
Access a dedicated section to view records of agent executions, including parameters, outcomes, function calls, and errors. Filter logs based on multiple parameters such as trace ID, model, tokens used, and metadata.
<details>
<summary><b>Traces</b></summary>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Traces.png" alt="Portkey Traces" width="70%" />
</details>
<details>
<summary><b>Logs</b></summary>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Logs.png" alt="Portkey Logs" width="70%" />
</details>
### 6. Enterprise Security Features
- Set budget limits and rate limits per Virtual Key (disposable API keys)
- Implement role-based access control
- Track system changes with audit logs
- Configure data retention policies
For detailed information on creating and managing Configs, visit the [Portkey documentation](https://docs.portkey.ai/product/ai-gateway/configs).
## Resources
- [📘 Portkey Documentation](https://docs.portkey.ai)
- [📊 Portkey Dashboard](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai)
- [🐦 Twitter](https://twitter.com/portkeyai)
- [💬 Discord Community](https://discord.gg/DD7vgKK299)

View File

@@ -1,202 +0,0 @@
---
title: Portkey Observability and Guardrails
description: How to use Portkey with CrewAI
icon: key
---
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-CrewAI.png" alt="Portkey CrewAI Header Image" width="70%" />
[Portkey](https://portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) is a 2-line upgrade to make your CrewAI agents reliable, cost-efficient, and fast.
Portkey adds 4 core production capabilities to any CrewAI agent:
1. Routing to **200+ LLMs**
2. Making each LLM call more robust
3. Full-stack tracing & cost, performance analytics
4. Real-time guardrails to enforce behavior
## Getting Started
<Steps>
<Step title="Install CrewAI and Portkey">
```bash
pip install -qU crewai portkey-ai
```
</Step>
<Step title="Configure the LLM Client">
To build CrewAI Agents with Portkey, you'll need two keys:
- **Portkey API Key**: Sign up on the [Portkey app](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) and copy your API key
- **Virtual Key**: Virtual Keys securely manage your LLM API keys in one place. Store your LLM provider API keys securely in Portkey's vault
```python
from crewai import LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
gpt_llm = LLM(
model="gpt-4",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy", # We are using Virtual key
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_VIRTUAL_KEY", # Enter your Virtual key from Portkey
)
)
```
</Step>
<Step title="Create and Run Your First Agent">
```python
from crewai import Agent, Task, Crew
# Define your agents with roles and goals
coder = Agent(
role='Software developer',
goal='Write clear, concise code on demand',
backstory='An expert coder with a keen eye for software trends.',
llm=gpt_llm
)
# Create tasks for your agents
task1 = Task(
description="Define the HTML for making a simple website with heading- Hello World! Portkey is working!",
expected_output="A clear and concise HTML code",
agent=coder
)
# Instantiate your crew
crew = Crew(
agents=[coder],
tasks=[task1],
)
result = crew.kickoff()
print(result)
```
</Step>
</Steps>
## Key Features
| Feature | Description |
|:--------|:------------|
| 🌐 Multi-LLM Support | Access OpenAI, Anthropic, Gemini, Azure, and 250+ providers through a unified interface |
| 🛡️ Production Reliability | Implement retries, timeouts, load balancing, and fallbacks |
| 📊 Advanced Observability | Track 40+ metrics including costs, tokens, latency, and custom metadata |
| 🔍 Comprehensive Logging | Debug with detailed execution traces and function call logs |
| 🚧 Security Controls | Set budget limits and implement role-based access control |
| 🔄 Performance Analytics | Capture and analyze feedback for continuous improvement |
| 💾 Intelligent Caching | Reduce costs and latency with semantic or simple caching |
## Production Features with Portkey Configs
All features mentioned below are available through Portkey's Config system, which allows you to define routing strategies using simple JSON objects in your LLM API calls. You can create and manage Configs directly in your code or through the Portkey Dashboard. Each Config has a unique ID for easy reference.
<Frame>
<img src="https://raw.githubusercontent.com/Portkey-AI/docs-core/refs/heads/main/images/libraries/libraries-3.avif"/>
</Frame>
### 1. Use 250+ LLMs
Access various LLMs like Anthropic, Gemini, Mistral, Azure OpenAI, and more with minimal code changes. Switch between providers or use them together seamlessly. [Learn more about Universal API](https://portkey.ai/docs/product/ai-gateway/universal-api)
Easily switch between different LLM providers:
```python
# Anthropic Configuration
anthropic_llm = LLM(
model="claude-3-5-sonnet-latest",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY", #You don't need provider when using Virtual keys
trace_id="anthropic_agent"
)
)
# Azure OpenAI Configuration
azure_llm = LLM(
model="gpt-4",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_AZURE_VIRTUAL_KEY", #You don't need provider when using Virtual keys
trace_id="azure_agent"
)
)
```
### 2. Caching
Improve response times and reduce costs with two powerful caching modes:
- **Simple Cache**: Perfect for exact matches
- **Semantic Cache**: Matches responses for requests that are semantically similar
[Learn more about Caching](https://portkey.ai/docs/product/ai-gateway/cache-simple-and-semantic)
```py
config = {
"cache": {
"mode": "semantic", # or "simple" for exact matching
}
}
```
### 3. Production Reliability
Portkey provides comprehensive reliability features:
- **Automatic Retries**: Handle temporary failures gracefully
- **Request Timeouts**: Prevent hanging operations
- **Conditional Routing**: Route requests based on specific conditions
- **Fallbacks**: Set up automatic provider failovers
- **Load Balancing**: Distribute requests efficiently
[Learn more about Reliability Features](https://portkey.ai/docs/product/ai-gateway/)
### 4. Metrics
Agent runs are complex. Portkey automatically logs **40+ comprehensive metrics** for your AI agents, including cost, tokens used, latency, etc. Whether you need a broad overview or granular insights into your agent runs, Portkey's customizable filters provide the metrics you need.
- Cost per agent interaction
- Response times and latency
- Token usage and efficiency
- Success/failure rates
- Cache hit rates
<img src="https://github.com/siddharthsambharia-portkey/Portkey-Product-Images/blob/main/Portkey-Dashboard.png?raw=true" width="70%" alt="Portkey Dashboard" />
### 5. Detailed Logging
Logs are essential for understanding agent behavior, diagnosing issues, and improving performance. They provide a detailed record of agent activities and tool use, which is crucial for debugging and optimizing processes.
Access a dedicated section to view records of agent executions, including parameters, outcomes, function calls, and errors. Filter logs based on multiple parameters such as trace ID, model, tokens used, and metadata.
<details>
<summary><b>Traces</b></summary>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Traces.png" alt="Portkey Traces" width="70%" />
</details>
<details>
<summary><b>Logs</b></summary>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Logs.png" alt="Portkey Logs" width="70%" />
</details>
### 6. Enterprise Security Features
- Set budget limits and rate limits per Virtual Key (disposable API keys)
- Implement role-based access control
- Track system changes with audit logs
- Configure data retention policies
For detailed information on creating and managing Configs, visit the [Portkey documentation](https://docs.portkey.ai/product/ai-gateway/configs).
## Resources
- [📘 Portkey Documentation](https://docs.portkey.ai)
- [📊 Portkey Dashboard](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai)
- [🐦 Twitter](https://twitter.com/portkeyai)
- [💬 Discord Community](https://discord.gg/DD7vgKK299)

View File

@@ -7,7 +7,7 @@ icon: wrench
<Note> <Note>
**Python Version Requirements** **Python Version Requirements**
CrewAI requires `Python >=3.10 and <3.13`. Here's how to check your version: CrewAI requires `Python >=3.10 and <=3.13`. Here's how to check your version:
```bash ```bash
python3 --version python3 --version
``` ```

View File

@@ -100,8 +100,7 @@
"how-to/conditional-tasks", "how-to/conditional-tasks",
"how-to/agentops-observability", "how-to/agentops-observability",
"how-to/langtrace-observability", "how-to/langtrace-observability",
"how-to/openlit-observability", "how-to/openlit-observability"
"how-to/portkey-observability"
] ]
}, },
{ {

View File

@@ -1,46 +1,34 @@
[project] [project]
name = "crewai" name = "crewai"
version = "0.95.0" version = "0.85.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks." description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md" readme = "README.md"
requires-python = ">=3.10,<3.13" requires-python = ">=3.10,<=3.13"
authors = [ authors = [
{ name = "Joao Moura", email = "joao@crewai.com" } { name = "Joao Moura", email = "joao@crewai.com" }
] ]
dependencies = [ dependencies = [
# Core Dependencies
"pydantic>=2.4.2", "pydantic>=2.4.2",
"openai>=1.13.3", "openai>=1.13.3",
"litellm>=1.44.22",
"instructor>=1.3.3",
# Text Processing
"pdfplumber>=0.11.4",
"regex>=2024.9.11",
# Telemetry and Monitoring
"opentelemetry-api>=1.22.0", "opentelemetry-api>=1.22.0",
"opentelemetry-sdk>=1.22.0", "opentelemetry-sdk>=1.22.0",
"opentelemetry-exporter-otlp-proto-http>=1.22.0", "opentelemetry-exporter-otlp-proto-http>=1.22.0",
"instructor>=1.3.3",
# Data Handling "regex>=2024.9.11",
"chromadb>=0.5.23",
"openpyxl>=3.1.5",
"pyvis>=0.3.2",
# Authentication and Security
"auth0-python>=4.7.1",
"python-dotenv>=1.0.0",
# Configuration and Utils
"click>=8.1.7", "click>=8.1.7",
"python-dotenv>=1.0.0",
"appdirs>=1.4.4", "appdirs>=1.4.4",
"jsonref>=1.1.0", "jsonref>=1.1.0",
"json-repair>=0.25.2", "json-repair>=0.25.2",
"auth0-python>=4.7.1",
"litellm>=1.44.22",
"pyvis>=0.3.2",
"uv>=0.4.25", "uv>=0.4.25",
"tomli-w>=1.1.0", "tomli-w>=1.1.0",
"tomli>=2.0.2", "tomli>=2.0.2",
"blinker>=1.9.0" "chromadb>=0.5.18",
"pdfplumber>=0.11.4",
"openpyxl>=3.1.5",
] ]
[project.urls] [project.urls]
@@ -49,10 +37,7 @@ Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI" Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies] [project.optional-dependencies]
tools = ["crewai-tools>=0.25.5"] tools = ["crewai-tools>=0.14.0"]
embeddings = [
"tiktoken~=0.7.0"
]
agentops = ["agentops>=0.3.0"] agentops = ["agentops>=0.3.0"]
fastembed = ["fastembed>=0.4.1"] fastembed = ["fastembed>=0.4.1"]
pdfplumber = [ pdfplumber = [
@@ -65,13 +50,10 @@ openpyxl = [
"openpyxl>=3.1.5", "openpyxl>=3.1.5",
] ]
mem0 = ["mem0ai>=0.1.29"] mem0 = ["mem0ai>=0.1.29"]
docling = [
"docling>=2.12.0",
]
[tool.uv] [tool.uv]
dev-dependencies = [ dev-dependencies = [
"ruff>=0.8.2", "ruff>=0.4.10",
"mypy>=1.10.0", "mypy>=1.10.0",
"pre-commit>=3.6.0", "pre-commit>=3.6.0",
"mkdocs>=1.4.3", "mkdocs>=1.4.3",
@@ -81,6 +63,7 @@ dev-dependencies = [
"mkdocs-material-extensions>=1.3.1", "mkdocs-material-extensions>=1.3.1",
"pillow>=10.2.0", "pillow>=10.2.0",
"cairosvg>=2.7.1", "cairosvg>=2.7.1",
"crewai-tools>=0.14.0",
"pytest>=8.0.0", "pytest>=8.0.0",
"pytest-vcr>=1.0.2", "pytest-vcr>=1.0.2",
"python-dotenv>=1.0.0", "python-dotenv>=1.0.0",

View File

@@ -14,7 +14,7 @@ warnings.filterwarnings(
category=UserWarning, category=UserWarning,
module="pydantic.main", module="pydantic.main",
) )
__version__ = "0.95.0" __version__ = "0.85.0"
__all__ = [ __all__ = [
"Agent", "Agent",
"Crew", "Crew",

View File

@@ -8,7 +8,7 @@ from pydantic import Field, InstanceOf, PrivateAttr, model_validator
from crewai.agents import CacheHandler from crewai.agents import CacheHandler
from crewai.agents.agent_builder.base_agent import BaseAgent from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.crew_agent_executor import CrewAgentExecutor from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.cli.constants import ENV_VARS, LITELLM_PARAMS from crewai.cli.constants import ENV_VARS
from crewai.knowledge.knowledge import Knowledge from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.knowledge.utils.knowledge_utils import extract_knowledge_context from crewai.knowledge.utils.knowledge_utils import extract_knowledge_context
@@ -17,27 +17,33 @@ from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.task import Task from crewai.task import Task
from crewai.tools import BaseTool from crewai.tools import BaseTool
from crewai.tools.agent_tools.agent_tools import AgentTools from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.tools.base_tool import Tool
from crewai.utilities import Converter, Prompts from crewai.utilities import Converter, Prompts
from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE
from crewai.utilities.converter import generate_model_description from crewai.utilities.converter import generate_model_description
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.token_counter_callback import TokenCalcHandler from crewai.utilities.token_counter_callback import TokenCalcHandler
from crewai.utilities.training_handler import CrewTrainingHandler from crewai.utilities.training_handler import CrewTrainingHandler
agentops = None
try: def mock_agent_ops_provider():
import agentops # type: ignore # Name "agentops" is already defined def track_agent(*args, **kwargs):
from agentops import track_agent # type: ignore
except ImportError:
def track_agent():
def noop(f): def noop(f):
return f return f
return noop return noop
return track_agent
agentops = None
if os.environ.get("AGENTOPS_API_KEY"):
try:
from agentops import track_agent
except ImportError:
track_agent = mock_agent_ops_provider()
else:
track_agent = mock_agent_ops_provider()
@track_agent() @track_agent()
class Agent(BaseAgent): class Agent(BaseAgent):
@@ -116,10 +122,6 @@ class Agent(BaseAgent):
default=2, default=2,
description="Maximum number of retries for an agent to execute a task when an error occurs.", description="Maximum number of retries for an agent to execute a task when an error occurs.",
) )
multimodal: bool = Field(
default=False,
description="Whether the agent is multimodal.",
)
code_execution_mode: Literal["safe", "unsafe"] = Field( code_execution_mode: Literal["safe", "unsafe"] = Field(
default="safe", default="safe",
description="Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution).", description="Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution).",
@@ -140,9 +142,98 @@ class Agent(BaseAgent):
def post_init_setup(self): def post_init_setup(self):
self._set_knowledge() self._set_knowledge()
self.agent_ops_agent_name = self.role self.agent_ops_agent_name = self.role
unaccepted_attributes = [
"AWS_ACCESS_KEY_ID",
"AWS_SECRET_ACCESS_KEY",
"AWS_REGION_NAME",
]
self.llm = create_llm(self.llm) # Handle different cases for self.llm
self.function_calling_llm = create_llm(self.function_calling_llm) if isinstance(self.llm, str):
# If it's a string, create an LLM instance
self.llm = LLM(model=self.llm)
elif isinstance(self.llm, LLM):
# If it's already an LLM instance, keep it as is
pass
elif self.llm is None:
# Determine the model name from environment variables or use default
model_name = (
os.environ.get("OPENAI_MODEL_NAME")
or os.environ.get("MODEL")
or "gpt-4o-mini"
)
llm_params = {"model": model_name}
api_base = os.environ.get("OPENAI_API_BASE") or os.environ.get(
"OPENAI_BASE_URL"
)
if api_base:
llm_params["base_url"] = api_base
set_provider = model_name.split("/")[0] if "/" in model_name else "openai"
# Iterate over all environment variables to find matching API keys or use defaults
for provider, env_vars in ENV_VARS.items():
if provider == set_provider:
for env_var in env_vars:
# Check if the environment variable is set
key_name = env_var.get("key_name")
if key_name and key_name not in unaccepted_attributes:
env_value = os.environ.get(key_name)
if env_value:
# Map key names containing "API_KEY" to "api_key"
key_name = (
"api_key" if "API_KEY" in key_name else key_name
)
# Map key names containing "API_BASE" to "api_base"
key_name = (
"api_base" if "API_BASE" in key_name else key_name
)
# Map key names containing "API_VERSION" to "api_version"
key_name = (
"api_version"
if "API_VERSION" in key_name
else key_name
)
llm_params[key_name] = env_value
# Check for default values if the environment variable is not set
elif env_var.get("default", False):
for key, value in env_var.items():
if key not in ["prompt", "key_name", "default"]:
# Only add default if the key is already set in os.environ
if key in os.environ:
llm_params[key] = value
self.llm = LLM(**llm_params)
else:
# For any other type, attempt to extract relevant attributes
llm_params = {
"model": getattr(self.llm, "model_name", None)
or getattr(self.llm, "deployment_name", None)
or str(self.llm),
"temperature": getattr(self.llm, "temperature", None),
"max_tokens": getattr(self.llm, "max_tokens", None),
"logprobs": getattr(self.llm, "logprobs", None),
"timeout": getattr(self.llm, "timeout", None),
"max_retries": getattr(self.llm, "max_retries", None),
"api_key": getattr(self.llm, "api_key", None),
"base_url": getattr(self.llm, "base_url", None),
"organization": getattr(self.llm, "organization", None),
}
# Remove None values to avoid passing unnecessary parameters
llm_params = {k: v for k, v in llm_params.items() if v is not None}
self.llm = LLM(**llm_params)
# Similar handling for function_calling_llm
if self.function_calling_llm:
if isinstance(self.function_calling_llm, str):
self.function_calling_llm = LLM(model=self.function_calling_llm)
elif not isinstance(self.function_calling_llm, LLM):
self.function_calling_llm = LLM(
model=getattr(self.function_calling_llm, "model_name", None)
or getattr(self.function_calling_llm, "deployment_name", None)
or str(self.function_calling_llm)
)
if not self.agent_executor: if not self.agent_executor:
self._setup_agent_executor() self._setup_agent_executor()
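For readers skimming this hunk: the attribute-probing branch normalizes an arbitrary third-party LLM object into plain constructor kwargs by checking well-known attribute names and dropping unset ones. A minimal standalone sketch of that idea (the class and attribute names below are illustrative, not a CrewAI API):

```python
# Probe an arbitrary LLM-like object for well-known attributes and keep
# only the ones that are actually set. `ThirdPartyLLM` is a stand-in.
class ThirdPartyLLM:
    model_name = "gpt-4o-mini"
    temperature = 0.7
    max_tokens = None  # unset values are dropped below

def extract_llm_params(obj) -> dict:
    candidates = {
        "model": getattr(obj, "model_name", None)
        or getattr(obj, "deployment_name", None)
        or str(obj),
        "temperature": getattr(obj, "temperature", None),
        "max_tokens": getattr(obj, "max_tokens", None),
    }
    # Remove None values to avoid passing unnecessary parameters
    return {k: v for k, v in candidates.items() if v is not None}

print(extract_llm_params(ThirdPartyLLM()))
# {'model': 'gpt-4o-mini', 'temperature': 0.7}
```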
@@ -332,11 +423,6 @@ class Agent(BaseAgent):
tools = agent_tools.tools() tools = agent_tools.tools()
return tools return tools
def get_multimodal_tools(self) -> List[Tool]:
from crewai.tools.agent_tools.add_image_tool import AddImageTool
return [AddImageTool()]
def get_code_execution_tools(self): def get_code_execution_tools(self):
try: try:
from crewai_tools import CodeInterpreterTool from crewai_tools import CodeInterpreterTool

View File

@@ -19,10 +19,15 @@ class CrewAgentExecutorMixin:
agent: Optional["BaseAgent"] agent: Optional["BaseAgent"]
task: Optional["Task"] task: Optional["Task"]
iterations: int iterations: int
have_forced_answer: bool
max_iter: int max_iter: int
_i18n: I18N _i18n: I18N
_printer: Printer = Printer() _printer: Printer = Printer()
def _should_force_answer(self) -> bool:
"""Determine if a forced answer is required based on iteration count."""
return (self.iterations >= self.max_iter) and not self.have_forced_answer
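The predicate above only fires once: it is true exactly when the iteration cap has been hit and no forced answer has been issued yet. A tiny standalone check (`Gate` is an illustrative stand-in for the executor state, not a CrewAI class):

```python
class Gate:
    def __init__(self, max_iter: int):
        self.iterations = 0
        self.max_iter = max_iter
        self.have_forced_answer = False

    def should_force_answer(self) -> bool:
        return (self.iterations >= self.max_iter) and not self.have_forced_answer

gate = Gate(max_iter=2)
assert not gate.should_force_answer()   # under the cap
gate.iterations = 2
assert gate.should_force_answer()       # cap reached, not yet forced
gate.have_forced_answer = True
assert not gate.should_force_answer()   # never force twice
```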
def _create_short_term_memory(self, output) -> None: def _create_short_term_memory(self, output) -> None:
"""Create and save a short-term memory item if conditions are met.""" """Create and save a short-term memory item if conditions are met."""
if ( if (

View File

@@ -1,7 +1,7 @@
import json import json
import re import re
from dataclasses import dataclass from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Optional, Union from typing import Any, Dict, List, Union
from crewai.agents.agent_builder.base_agent import BaseAgent from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
@@ -50,7 +50,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
original_tools: List[Any] = [], original_tools: List[Any] = [],
function_calling_llm: Any = None, function_calling_llm: Any = None,
respect_context_window: bool = False, respect_context_window: bool = False,
request_within_rpm_limit: Optional[Callable[[], bool]] = None, request_within_rpm_limit: Any = None,
callbacks: List[Any] = [], callbacks: List[Any] = [],
): ):
self._i18n: I18N = I18N() self._i18n: I18N = I18N()
@@ -77,6 +77,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self.messages: List[Dict[str, str]] = [] self.messages: List[Dict[str, str]] = []
self.iterations = 0 self.iterations = 0
self.log_error_after = 3 self.log_error_after = 3
self.have_forced_answer = False
self.tool_name_to_tool_map: Dict[str, BaseTool] = { self.tool_name_to_tool_map: Dict[str, BaseTool] = {
tool.name: tool for tool in self.tools tool.name: tool for tool in self.tools
} }
@@ -107,151 +108,93 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self._create_long_term_memory(formatted_answer) self._create_long_term_memory(formatted_answer)
return {"output": formatted_answer.output} return {"output": formatted_answer.output}
def _invoke_loop(self): def _invoke_loop(self, formatted_answer=None):
""" try:
Main loop to invoke the agent's thought process until it reaches a conclusion while not isinstance(formatted_answer, AgentFinish):
or the maximum number of iterations is reached. if not self.request_within_rpm_limit or self.request_within_rpm_limit():
""" answer = self.llm.call(
formatted_answer = None self.messages,
while not isinstance(formatted_answer, AgentFinish): callbacks=self.callbacks,
try:
if self._has_reached_max_iterations():
formatted_answer = self._handle_max_iterations_exceeded(
formatted_answer
)
break
self._enforce_rpm_limit()
answer = self._get_llm_response()
formatted_answer = self._process_llm_response(answer)
if isinstance(formatted_answer, AgentAction):
tool_result = self._execute_tool_and_check_finality(
formatted_answer
)
formatted_answer = self._handle_agent_action(
formatted_answer, tool_result
) )
self._invoke_step_callback(formatted_answer) if answer is None or answer == "":
self._append_message(formatted_answer.text, role="assistant") self._printer.print(
content="Received None or empty response from LLM call.",
color="red",
)
raise ValueError(
"Invalid response from LLM call - None or empty."
)
except OutputParserException as e: if not self.use_stop_words:
formatted_answer = self._handle_output_parser_exception(e) try:
self._format_answer(answer)
except OutputParserException as e:
if (
FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE
in e.error
):
answer = answer.split("Observation:")[0].strip()
except Exception as e: self.iterations += 1
if self._is_context_length_exceeded(e): formatted_answer = self._format_answer(answer)
self._handle_context_length()
continue if isinstance(formatted_answer, AgentAction):
else: tool_result = self._execute_tool_and_check_finality(
raise e formatted_answer
)
formatted_answer.text += f"\nObservation: {tool_result.result}"
formatted_answer.result = tool_result.result
if tool_result.result_as_answer:
return AgentFinish(
thought="",
output=tool_result.result,
text=formatted_answer.text,
)
self._show_logs(formatted_answer)
if self.step_callback:
self.step_callback(formatted_answer)
if self._should_force_answer():
if self.have_forced_answer:
return AgentFinish(
thought="",
output=self._i18n.errors(
"force_final_answer_error"
).format(formatted_answer.text),
text=formatted_answer.text,
)
else:
formatted_answer.text += (
f'\n{self._i18n.errors("force_final_answer")}'
)
self.have_forced_answer = True
self.messages.append(
self._format_msg(formatted_answer.text, role="assistant")
)
except OutputParserException as e:
self.messages.append({"role": "user", "content": e.error})
if self.iterations > self.log_error_after:
self._printer.print(
content=f"Error parsing LLM output, agent will retry: {e.error}",
color="red",
)
return self._invoke_loop(formatted_answer)
except Exception as e:
if LLMContextLengthExceededException(str(e))._is_context_limit_error(
str(e)
):
self._handle_context_length()
return self._invoke_loop(formatted_answer)
else:
raise e
self._show_logs(formatted_answer) self._show_logs(formatted_answer)
return formatted_answer return formatted_answer
def _has_reached_max_iterations(self) -> bool:
"""Check if the maximum number of iterations has been reached."""
return self.iterations >= self.max_iter
def _enforce_rpm_limit(self) -> None:
"""Enforce the requests per minute (RPM) limit if applicable."""
if self.request_within_rpm_limit:
self.request_within_rpm_limit()
def _get_llm_response(self) -> str:
"""Call the LLM and return the response, handling any invalid responses."""
answer = self.llm.call(
self.messages,
callbacks=self.callbacks,
)
if not answer:
self._printer.print(
content="Received None or empty response from LLM call.",
color="red",
)
raise ValueError("Invalid response from LLM call - None or empty.")
return answer
def _process_llm_response(self, answer: str) -> Union[AgentAction, AgentFinish]:
"""Process the LLM response and format it into an AgentAction or AgentFinish."""
if not self.use_stop_words:
try:
# Preliminary parsing to check for errors.
self._format_answer(answer)
except OutputParserException as e:
if FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE in e.error:
answer = answer.split("Observation:")[0].strip()
self.iterations += 1
return self._format_answer(answer)
def _handle_agent_action(
self, formatted_answer: AgentAction, tool_result: ToolResult
) -> Union[AgentAction, AgentFinish]:
"""Handle the AgentAction, execute tools, and process the results."""
add_image_tool = self._i18n.tools("add_image")
if (
isinstance(add_image_tool, dict)
and formatted_answer.tool.casefold().strip()
== add_image_tool.get("name", "").casefold().strip()
):
self.messages.append(tool_result.result)
return formatted_answer # Continue the loop
if self.step_callback:
self.step_callback(tool_result)
formatted_answer.text += f"\nObservation: {tool_result.result}"
formatted_answer.result = tool_result.result
if tool_result.result_as_answer:
return AgentFinish(
thought="",
output=tool_result.result,
text=formatted_answer.text,
)
self._show_logs(formatted_answer)
return formatted_answer
def _invoke_step_callback(self, formatted_answer) -> None:
"""Invoke the step callback if it exists."""
if self.step_callback:
self.step_callback(formatted_answer)
def _append_message(self, text: str, role: str = "assistant") -> None:
"""Append a message to the message list with the given role."""
self.messages.append(self._format_msg(text, role=role))
def _handle_output_parser_exception(self, e: OutputParserException) -> AgentAction:
"""Handle OutputParserException by updating messages and formatted_answer."""
self.messages.append({"role": "user", "content": e.error})
formatted_answer = AgentAction(
text=e.error,
tool="",
tool_input="",
thought="",
)
if self.iterations > self.log_error_after:
self._printer.print(
content=f"Error parsing LLM output, agent will retry: {e.error}",
color="red",
)
return formatted_answer
def _is_context_length_exceeded(self, exception: Exception) -> bool:
"""Check if the exception is due to context length exceeding."""
return LLMContextLengthExceededException(
str(exception)
)._is_context_limit_error(str(exception))
def _show_start_logs(self): def _show_start_logs(self):
if self.agent is None: if self.agent is None:
raise ValueError("Agent cannot be None") raise ValueError("Agent cannot be None")
@@ -356,7 +299,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self._i18n.slice("summarizer_system_message"), role="system" self._i18n.slice("summarizer_system_message"), role="system"
), ),
self._format_msg( self._format_msg(
self._i18n.slice("summarize_instruction").format(group=group), self._i18n.slice("sumamrize_instruction").format(group=group),
), ),
], ],
callbacks=self.callbacks, callbacks=self.callbacks,
@@ -467,6 +410,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
""" """
while self.ask_for_human_input: while self.ask_for_human_input:
human_feedback = self._ask_human_input(formatted_answer.output) human_feedback = self._ask_human_input(formatted_answer.output)
print("Human feedback: ", human_feedback)
if self.crew and self.crew._train: if self.crew and self.crew._train:
self._handle_crew_training_output(formatted_answer, human_feedback) self._handle_crew_training_output(formatted_answer, human_feedback)
@@ -531,45 +475,3 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self.ask_for_human_input = False self.ask_for_human_input = False
return formatted_answer return formatted_answer
def _handle_max_iterations_exceeded(self, formatted_answer):
"""
Handles the case when the maximum number of iterations is exceeded.
Performs one more LLM call to get the final answer.
Parameters:
formatted_answer: The last formatted answer from the agent.
Returns:
The final formatted answer after exceeding max iterations.
"""
self._printer.print(
content="Maximum iterations reached. Requesting final answer.",
color="yellow",
)
if formatted_answer and hasattr(formatted_answer, "text"):
assistant_message = (
formatted_answer.text + f'\n{self._i18n.errors("force_final_answer")}'
)
else:
assistant_message = self._i18n.errors("force_final_answer")
self.messages.append(self._format_msg(assistant_message, role="assistant"))
# Perform one more LLM call to get the final answer
answer = self.llm.call(
self.messages,
callbacks=self.callbacks,
)
if answer is None or answer == "":
self._printer.print(
content="Received None or empty response from LLM call.",
color="red",
)
raise ValueError("Invalid response from LLM call - None or empty.")
formatted_answer = self._format_answer(answer)
# Return the formatted answer, regardless of its type
return formatted_answer
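Taken together, the hunks above replace recursion with a flat loop: iterate until the model returns a final answer, and once the cap is hit make one last forced call instead of looping further. A toy sketch of that control flow, with all names illustrative rather than CrewAI's:

```python
from dataclasses import dataclass

@dataclass
class Finish:
    output: str

def invoke_loop(llm_call, parse, max_iter: int = 3):
    formatted_answer = None
    iterations = 0
    while not isinstance(formatted_answer, Finish):
        if iterations >= max_iter:
            # one forced final call, then stop regardless of the result
            formatted_answer = parse(llm_call("Give your final answer now."))
            break
        formatted_answer = parse(llm_call("Continue reasoning."))
        iterations += 1
    return formatted_answer

replies = iter(["step 1", "step 2", "step 3", "final"])
result = invoke_loop(
    lambda _prompt: next(replies),
    lambda text: Finish(text) if text == "final" else text,
)
print(result)  # Finish(output='final') after the forced fourth call
```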

View File

@@ -1,6 +1,5 @@
import re import re
from typing import Any, Union from typing import Any, Union
from json_repair import repair_json from json_repair import repair_json
from crewai.utilities import I18N from crewai.utilities import I18N

View File

@@ -5,10 +5,9 @@ from typing import Any, Dict
import requests import requests
from rich.console import Console from rich.console import Console
from crewai.cli.tools.main import ToolCommand
from .constants import AUTH0_AUDIENCE, AUTH0_CLIENT_ID, AUTH0_DOMAIN from .constants import AUTH0_AUDIENCE, AUTH0_CLIENT_ID, AUTH0_DOMAIN
from .utils import TokenManager, validate_token from .utils import TokenManager, validate_token
from crewai.cli.tools.main import ToolCommand
console = Console() console = Console()
@@ -80,9 +79,7 @@ class AuthenticationCommand:
style="yellow", style="yellow",
) )
console.print( console.print("\n[bold green]Welcome to CrewAI Enterprise![/bold green]\n")
"\n[bold green]Welcome to CrewAI Enterprise![/bold green]\n"
)
return return
if token_data["error"] not in ("authorization_pending", "slow_down"): if token_data["error"] not in ("authorization_pending", "slow_down"):

View File

@@ -1,9 +1,10 @@
from .utils import TokenManager from .utils import TokenManager
def get_auth_token() -> str: def get_auth_token() -> str:
"""Get the authentication token.""" """Get the authentication token."""
access_token = TokenManager().get_token() access_token = TokenManager().get_token()
if not access_token: if not access_token:
raise Exception() raise Exception()
return access_token return access_token

View File

@@ -1,13 +1,11 @@
import os from typing import Optional
from importlib.metadata import version as get_version
from typing import Optional, Tuple
import click import click
import pkg_resources
from crewai.cli.add_crew_to_flow import add_crew_to_flow from crewai.cli.add_crew_to_flow import add_crew_to_flow
from crewai.cli.create_crew import create_crew from crewai.cli.create_crew import create_crew
from crewai.cli.create_flow import create_flow from crewai.cli.create_flow import create_flow
from crewai.cli.crew_chat import run_chat
from crewai.memory.storage.kickoff_task_outputs_storage import ( from crewai.memory.storage.kickoff_task_outputs_storage import (
KickoffTaskOutputsSQLiteStorage, KickoffTaskOutputsSQLiteStorage,
) )
@@ -27,7 +25,6 @@ from .update_crew import update_crew
@click.group() @click.group()
@click.version_option(get_version("crewai"))
def crewai(): def crewai():
"""Top-level command group for crewai.""" """Top-level command group for crewai."""
@@ -53,17 +50,14 @@ def create(type, name, provider, skip_provider=False):
) )
def version(tools): def version(tools):
"""Show the installed version of crewai.""" """Show the installed version of crewai."""
try: crewai_version = pkg_resources.get_distribution("crewai").version
crewai_version = get_version("crewai")
except Exception:
crewai_version = "unknown version"
click.echo(f"crewai version: {crewai_version}") click.echo(f"crewai version: {crewai_version}")
if tools: if tools:
try: try:
tools_version = get_version("crewai-tools") tools_version = pkg_resources.get_distribution("crewai-tools").version
click.echo(f"crewai tools version: {tools_version}") click.echo(f"crewai tools version: {tools_version}")
except Exception: except pkg_resources.DistributionNotFound:
click.echo("crewai tools not installed") click.echo("crewai tools not installed")
@@ -344,15 +338,5 @@ def flow_add_crew(crew_name):
add_crew_to_flow(crew_name) add_crew_to_flow(crew_name)
@crewai.command()
def chat():
"""
Start a conversation with the Crew, collecting user-supplied inputs,
and using the Chat LLM to generate responses.
"""
click.echo("Starting a conversation with the Crew")
run_chat()
if __name__ == "__main__": if __name__ == "__main__":
crewai() crewai()

View File

@@ -1,9 +1,8 @@
import requests import requests
from requests.exceptions import JSONDecodeError from requests.exceptions import JSONDecodeError
from rich.console import Console from rich.console import Console
from crewai.cli.authentication.token import get_auth_token
from crewai.cli.plus_api import PlusAPI from crewai.cli.plus_api import PlusAPI
from crewai.cli.authentication.token import get_auth_token
from crewai.telemetry.telemetry import Telemetry from crewai.telemetry.telemetry import Telemetry
console = Console() console = Console()

View File

@@ -1,19 +1,13 @@
import json import json
from pathlib import Path from pathlib import Path
from typing import Optional
from pydantic import BaseModel, Field from pydantic import BaseModel, Field
from typing import Optional
DEFAULT_CONFIG_PATH = Path.home() / ".config" / "crewai" / "settings.json" DEFAULT_CONFIG_PATH = Path.home() / ".config" / "crewai" / "settings.json"
class Settings(BaseModel): class Settings(BaseModel):
tool_repository_username: Optional[str] = Field( tool_repository_username: Optional[str] = Field(None, description="Username for interacting with the Tool Repository")
None, description="Username for interacting with the Tool Repository" tool_repository_password: Optional[str] = Field(None, description="Password for interacting with the Tool Repository")
)
tool_repository_password: Optional[str] = Field(
None, description="Password for interacting with the Tool Repository"
)
config_path: Path = Field(default=DEFAULT_CONFIG_PATH, exclude=True) config_path: Path = Field(default=DEFAULT_CONFIG_PATH, exclude=True)
def __init__(self, config_path: Path = DEFAULT_CONFIG_PATH, **data): def __init__(self, config_path: Path = DEFAULT_CONFIG_PATH, **data):

View File

@@ -17,12 +17,6 @@ ENV_VARS = {
"key_name": "GEMINI_API_KEY", "key_name": "GEMINI_API_KEY",
} }
], ],
"nvidia_nim": [
{
"prompt": "Enter your NVIDIA API key (press Enter to skip)",
"key_name": "NVIDIA_NIM_API_KEY",
}
],
"groq": [ "groq": [
{ {
"prompt": "Enter your GROQ API key (press Enter to skip)", "prompt": "Enter your GROQ API key (press Enter to skip)",
@@ -91,12 +85,6 @@ ENV_VARS = {
"key_name": "CEREBRAS_API_KEY", "key_name": "CEREBRAS_API_KEY",
}, },
], ],
"sambanova": [
{
"prompt": "Enter your SambaNovaCloud API key (press Enter to skip)",
"key_name": "SAMBANOVA_API_KEY",
}
],
} }
@@ -104,14 +92,12 @@ PROVIDERS = [
"openai", "openai",
"anthropic", "anthropic",
"gemini", "gemini",
"nvidia_nim",
"groq", "groq",
"ollama", "ollama",
"watson", "watson",
"bedrock", "bedrock",
"azure", "azure",
"cerebras", "cerebras",
"sambanova",
] ]
MODELS = { MODELS = {
@@ -128,75 +114,6 @@ MODELS = {
"gemini/gemini-gemma-2-9b-it", "gemini/gemini-gemma-2-9b-it",
"gemini/gemini-gemma-2-27b-it", "gemini/gemini-gemma-2-27b-it",
], ],
"nvidia_nim": [
"nvidia_nim/nvidia/mistral-nemo-minitron-8b-8k-instruct",
"nvidia_nim/nvidia/nemotron-4-mini-hindi-4b-instruct",
"nvidia_nim/nvidia/llama-3.1-nemotron-70b-instruct",
"nvidia_nim/nvidia/llama3-chatqa-1.5-8b",
"nvidia_nim/nvidia/llama3-chatqa-1.5-70b",
"nvidia_nim/nvidia/vila",
"nvidia_nim/nvidia/neva-22",
"nvidia_nim/nvidia/nemotron-mini-4b-instruct",
"nvidia_nim/nvidia/usdcode-llama3-70b-instruct",
"nvidia_nim/nvidia/nemotron-4-340b-instruct",
"nvidia_nim/meta/codellama-70b",
"nvidia_nim/meta/llama2-70b",
"nvidia_nim/meta/llama3-8b-instruct",
"nvidia_nim/meta/llama3-70b-instruct",
"nvidia_nim/meta/llama-3.1-8b-instruct",
"nvidia_nim/meta/llama-3.1-70b-instruct",
"nvidia_nim/meta/llama-3.1-405b-instruct",
"nvidia_nim/meta/llama-3.2-1b-instruct",
"nvidia_nim/meta/llama-3.2-3b-instruct",
"nvidia_nim/meta/llama-3.2-11b-vision-instruct",
"nvidia_nim/meta/llama-3.2-90b-vision-instruct",
"nvidia_nim/meta/llama-3.1-70b-instruct",
"nvidia_nim/google/gemma-7b",
"nvidia_nim/google/gemma-2b",
"nvidia_nim/google/codegemma-7b",
"nvidia_nim/google/codegemma-1.1-7b",
"nvidia_nim/google/recurrentgemma-2b",
"nvidia_nim/google/gemma-2-9b-it",
"nvidia_nim/google/gemma-2-27b-it",
"nvidia_nim/google/gemma-2-2b-it",
"nvidia_nim/google/deplot",
"nvidia_nim/google/paligemma",
"nvidia_nim/mistralai/mistral-7b-instruct-v0.2",
"nvidia_nim/mistralai/mixtral-8x7b-instruct-v0.1",
"nvidia_nim/mistralai/mistral-large",
"nvidia_nim/mistralai/mixtral-8x22b-instruct-v0.1",
"nvidia_nim/mistralai/mistral-7b-instruct-v0.3",
"nvidia_nim/nv-mistralai/mistral-nemo-12b-instruct",
"nvidia_nim/mistralai/mamba-codestral-7b-v0.1",
"nvidia_nim/microsoft/phi-3-mini-128k-instruct",
"nvidia_nim/microsoft/phi-3-mini-4k-instruct",
"nvidia_nim/microsoft/phi-3-small-8k-instruct",
"nvidia_nim/microsoft/phi-3-small-128k-instruct",
"nvidia_nim/microsoft/phi-3-medium-4k-instruct",
"nvidia_nim/microsoft/phi-3-medium-128k-instruct",
"nvidia_nim/microsoft/phi-3.5-mini-instruct",
"nvidia_nim/microsoft/phi-3.5-moe-instruct",
"nvidia_nim/microsoft/kosmos-2",
"nvidia_nim/microsoft/phi-3-vision-128k-instruct",
"nvidia_nim/microsoft/phi-3.5-vision-instruct",
"nvidia_nim/databricks/dbrx-instruct",
"nvidia_nim/snowflake/arctic",
"nvidia_nim/aisingapore/sea-lion-7b-instruct",
"nvidia_nim/ibm/granite-8b-code-instruct",
"nvidia_nim/ibm/granite-34b-code-instruct",
"nvidia_nim/ibm/granite-3.0-8b-instruct",
"nvidia_nim/ibm/granite-3.0-3b-a800m-instruct",
"nvidia_nim/mediatek/breeze-7b-instruct",
"nvidia_nim/upstage/solar-10.7b-instruct",
"nvidia_nim/writer/palmyra-med-70b-32k",
"nvidia_nim/writer/palmyra-med-70b",
"nvidia_nim/writer/palmyra-fin-70b-32k",
"nvidia_nim/01-ai/yi-large",
"nvidia_nim/deepseek-ai/deepseek-coder-6.7b-instruct",
"nvidia_nim/rakuten/rakutenai-7b-instruct",
"nvidia_nim/rakuten/rakutenai-7b-chat",
"nvidia_nim/baichuan-inc/baichuan2-13b-chat",
],
"groq": [ "groq": [
"groq/llama-3.1-8b-instant", "groq/llama-3.1-8b-instant",
"groq/llama-3.1-70b-versatile", "groq/llama-3.1-70b-versatile",
@@ -239,24 +156,6 @@ MODELS = {
"bedrock/mistral.mistral-7b-instruct-v0:2", "bedrock/mistral.mistral-7b-instruct-v0:2",
"bedrock/mistral.mixtral-8x7b-instruct-v0:1", "bedrock/mistral.mixtral-8x7b-instruct-v0:1",
], ],
"sambanova": [
"sambanova/Meta-Llama-3.3-70B-Instruct",
"sambanova/QwQ-32B-Preview",
"sambanova/Qwen2.5-72B-Instruct",
"sambanova/Qwen2.5-Coder-32B-Instruct",
"sambanova/Meta-Llama-3.1-405B-Instruct",
"sambanova/Meta-Llama-3.1-70B-Instruct",
"sambanova/Meta-Llama-3.1-8B-Instruct",
"sambanova/Llama-3.2-90B-Vision-Instruct",
"sambanova/Llama-3.2-11B-Vision-Instruct",
"sambanova/Meta-Llama-3.2-3B-Instruct",
"sambanova/Meta-Llama-3.2-1B-Instruct",
],
} }
DEFAULT_LLM_MODEL = "gpt-4o-mini"
JSON_URL = "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json" JSON_URL = "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json"
LITELLM_PARAMS = ["api_key", "api_base", "api_version"]

View File

@@ -1,413 +0,0 @@
import json
import re
import sys
from pathlib import Path
from typing import Any, Dict, List, Optional, Set, Tuple
import click
import tomli
from crewai.crew import Crew
from crewai.llm import LLM
from crewai.types.crew_chat import ChatInputField, ChatInputs
from crewai.utilities.llm_utils import create_llm
def run_chat():
"""
Runs an interactive chat loop using the Crew's chat LLM with function calling.
Incorporates crew_name, crew_description, and input fields to build a tool schema.
Exits if crew_name or crew_description are missing.
"""
crew, crew_name = load_crew_and_name()
chat_llm = initialize_chat_llm(crew)
if not chat_llm:
return
crew_chat_inputs = generate_crew_chat_inputs(crew, crew_name, chat_llm)
crew_tool_schema = generate_crew_tool_schema(crew_chat_inputs)
system_message = build_system_message(crew_chat_inputs)
# Call the LLM to generate the introductory message
introductory_message = chat_llm.call(
messages=[{"role": "system", "content": system_message}]
)
click.secho(f"\nAssistant: {introductory_message}\n", fg="green")
messages = [
{"role": "system", "content": system_message},
{"role": "assistant", "content": introductory_message},
]
available_functions = {
crew_chat_inputs.crew_name: create_tool_function(crew, messages),
}
click.secho(
"\nEntering an interactive chat loop with function-calling.\n"
"Type 'exit' or Ctrl+C to quit.\n",
fg="cyan",
)
chat_loop(chat_llm, messages, crew_tool_schema, available_functions)
def initialize_chat_llm(crew: Crew) -> Optional[LLM]:
"""Initializes the chat LLM and handles exceptions."""
try:
return create_llm(crew.chat_llm)
except Exception as e:
click.secho(
f"Unable to find a Chat LLM. Please make sure you set chat_llm on the crew: {e}",
fg="red",
)
return None
def build_system_message(crew_chat_inputs: ChatInputs) -> str:
"""Builds the initial system message for the chat."""
required_fields_str = (
", ".join(
f"{field.name} (desc: {field.description or 'n/a'})"
for field in crew_chat_inputs.inputs
)
or "(No required fields detected)"
)
return (
"You are a helpful AI assistant for the CrewAI platform. "
"Your primary purpose is to assist users with the crew's specific tasks. "
"You can answer general questions, but should guide users back to the crew's purpose afterward. "
"For example, after answering a general question, remind the user of your main purpose, such as generating a research report, and prompt them to specify a topic or task related to the crew's purpose. "
"You have a function (tool) you can call by name if you have all required inputs. "
f"Those required inputs are: {required_fields_str}. "
"Once you have them, call the function. "
"Please keep your responses concise and friendly. "
"If a user asks a question outside the crew's scope, provide a brief answer and remind them of the crew's purpose. "
"After calling the tool, be prepared to take user feedback and make adjustments as needed. "
"If you are ever unsure about a user's request or need clarification, ask the user for more information."
"Before doing anything else, introduce yourself with a friendly message like: 'Hey! I'm here to help you with [crew's purpose]. Could you please provide me with [inputs] so we can get started?' "
"For example: 'Hey! I'm here to help you with uncovering and reporting cutting-edge developments through thorough research and detailed analysis. Could you please provide me with a topic you're interested in? This will help us generate a comprehensive research report and detailed analysis.'"
f"\nCrew Name: {crew_chat_inputs.crew_name}"
f"\nCrew Description: {crew_chat_inputs.crew_description}"
)
def create_tool_function(crew: Crew, messages: List[Dict[str, str]]) -> Any:
"""Creates a wrapper function for running the crew tool with messages."""
def run_crew_tool_with_messages(**kwargs):
return run_crew_tool(crew, messages, **kwargs)
return run_crew_tool_with_messages
def chat_loop(chat_llm, messages, crew_tool_schema, available_functions):
"""Main chat loop for interacting with the user."""
while True:
try:
user_input = click.prompt("You", type=str)
if user_input.strip().lower() in ["exit", "quit"]:
click.echo("Exiting chat. Goodbye!")
break
messages.append({"role": "user", "content": user_input})
final_response = chat_llm.call(
messages=messages,
tools=[crew_tool_schema],
available_functions=available_functions,
)
messages.append({"role": "assistant", "content": final_response})
click.secho(f"\nAssistant: {final_response}\n", fg="green")
except KeyboardInterrupt:
click.echo("\nExiting chat. Goodbye!")
break
except Exception as e:
click.secho(f"An error occurred: {e}", fg="red")
break
def generate_crew_tool_schema(crew_inputs: ChatInputs) -> dict:
"""
Dynamically build a LiteLLM 'function' schema for the given crew.
crew_inputs: A ChatInputs object containing crew_name (used for the
function 'name'), crew_description, and a list of input
fields (each with a name & description).
"""
properties = {}
for field in crew_inputs.inputs:
properties[field.name] = {
"type": "string",
"description": field.description or "No description provided",
}
required_fields = [field.name for field in crew_inputs.inputs]
return {
"type": "function",
"function": {
"name": crew_inputs.crew_name,
"description": crew_inputs.crew_description or "No crew description",
"parameters": {
"type": "object",
"properties": properties,
"required": required_fields,
},
},
}
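To make the shape of the generated schema concrete, here is a standalone miniature of the builder above, using plain dataclasses as stand-ins for ChatInputField/ChatInputs:

```python
import json
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InputField:          # stand-in for crewai.types.crew_chat.ChatInputField
    name: str
    description: Optional[str] = None

@dataclass
class Inputs:              # stand-in for crewai.types.crew_chat.ChatInputs
    crew_name: str
    crew_description: str
    inputs: List[InputField] = field(default_factory=list)

def build_schema(ci: Inputs) -> dict:
    properties = {
        f.name: {"type": "string",
                 "description": f.description or "No description provided"}
        for f in ci.inputs
    }
    return {
        "type": "function",
        "function": {
            "name": ci.crew_name,
            "description": ci.crew_description or "No crew description",
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": [f.name for f in ci.inputs],
            },
        },
    }

schema = build_schema(
    Inputs("research_crew", "Researches a topic and writes a report",
           [InputField("topic", "Subject to research")])
)
print(json.dumps(schema, indent=2))
```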
def run_crew_tool(crew: Crew, messages: List[Dict[str, str]], **kwargs):
"""
Runs the crew using crew.kickoff(inputs=kwargs) and returns the output.
Args:
crew (Crew): The crew instance to run.
messages (List[Dict[str, str]]): The chat messages up to this point.
**kwargs: The inputs collected from the user.
Returns:
str: The output from the crew's execution.
Raises:
SystemExit: Exits the chat if an error occurs during crew execution.
"""
try:
# Serialize 'messages' to JSON string before adding to kwargs
kwargs["crew_chat_messages"] = json.dumps(messages)
# Run the crew with the provided inputs
crew_output = crew.kickoff(inputs=kwargs)
# Convert CrewOutput to a string to send back to the user
result = str(crew_output)
return result
except Exception as e:
# Exit the chat and show the error message
click.secho("An error occurred while running the crew:", fg="red")
click.secho(str(e), fg="red")
sys.exit(1)
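The interesting detail above is how chat history reaches the crew: the transcript is JSON-serialized and passed as an ordinary kickoff input. In miniature (`fake_kickoff` is a stand-in, not a CrewAI API):

```python
import json

def fake_kickoff(inputs: dict) -> str:
    # the crew side can recover the transcript from its inputs
    history = json.loads(inputs["crew_chat_messages"])
    return f"crew saw {len(history)} prior messages about {inputs['topic']}"

messages = [{"role": "user", "content": "hi"},
            {"role": "assistant", "content": "hello"}]
print(fake_kickoff({"topic": "AI", "crew_chat_messages": json.dumps(messages)}))
# crew saw 2 prior messages about AI
```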
def load_crew_and_name() -> Tuple[Crew, str]:
"""
Loads the crew by importing the crew class from the user's project.
Returns:
Tuple[Crew, str]: A tuple containing the Crew instance and the name of the crew.
"""
# Get the current working directory
cwd = Path.cwd()
# Path to the pyproject.toml file
pyproject_path = cwd / "pyproject.toml"
if not pyproject_path.exists():
raise FileNotFoundError("pyproject.toml not found in the current directory.")
# Load the pyproject.toml file using 'tomli'
with pyproject_path.open("rb") as f:
pyproject_data = tomli.load(f)
# Get the project name from the 'project' section
project_name = pyproject_data["project"]["name"]
folder_name = project_name
# Derive the crew class name from the project name
# E.g., if project_name is 'my_project', crew_class_name is 'MyProject'
crew_class_name = project_name.replace("_", " ").title().replace(" ", "")
# Add the 'src' directory to sys.path
src_path = cwd / "src"
if str(src_path) not in sys.path:
sys.path.insert(0, str(src_path))
# Import the crew module
crew_module_name = f"{folder_name}.crew"
try:
crew_module = __import__(crew_module_name, fromlist=[crew_class_name])
except ImportError as e:
raise ImportError(f"Failed to import crew module {crew_module_name}: {e}")
# Get the crew class from the module
try:
crew_class = getattr(crew_module, crew_class_name)
except AttributeError:
raise AttributeError(
f"Crew class {crew_class_name} not found in module {crew_module_name}"
)
# Instantiate the crew
crew_instance = crew_class().crew()
return crew_instance, crew_class_name
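The naming convention this loader relies on is worth seeing in isolation; the derivation below reproduces the exact string pipeline used above:

```python
def derive_crew_class_name(project_name: str) -> str:
    # 'my_project' -> 'My Project' -> 'MyProject'
    return project_name.replace("_", " ").title().replace(" ", "")

assert derive_crew_class_name("my_project") == "MyProject"
assert derive_crew_class_name("poem_flow") == "PoemFlow"
```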
def generate_crew_chat_inputs(crew: Crew, crew_name: str, chat_llm) -> ChatInputs:
"""
Generates the ChatInputs required for the crew by analyzing the tasks and agents.
Args:
crew (Crew): The crew object containing tasks and agents.
crew_name (str): The name of the crew.
chat_llm: The chat language model to use for AI calls.
Returns:
ChatInputs: An object containing the crew's name, description, and input fields.
"""
# Extract placeholders from tasks and agents
required_inputs = fetch_required_inputs(crew)
# Generate descriptions for each input using AI
input_fields = []
for input_name in required_inputs:
description = generate_input_description_with_ai(input_name, crew, chat_llm)
input_fields.append(ChatInputField(name=input_name, description=description))
# Generate crew description using AI
crew_description = generate_crew_description_with_ai(crew, chat_llm)
return ChatInputs(
crew_name=crew_name, crew_description=crew_description, inputs=input_fields
)
def fetch_required_inputs(crew: Crew) -> Set[str]:
"""
Extracts placeholders from the crew's tasks and agents.
Args:
crew (Crew): The crew object.
Returns:
Set[str]: A set of placeholder names.
"""
placeholder_pattern = re.compile(r"\{(.+?)\}")
required_inputs: Set[str] = set()
# Scan tasks
for task in crew.tasks:
text = f"{task.description or ''} {task.expected_output or ''}"
required_inputs.update(placeholder_pattern.findall(text))
# Scan agents
for agent in crew.agents:
text = f"{agent.role or ''} {agent.goal or ''} {agent.backstory or ''}"
required_inputs.update(placeholder_pattern.findall(text))
return required_inputs
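The same regex drives both the extraction here and the brace-stripping substitution in the description generators below. A quick demonstration on sample task text:

```python
import re

placeholder_pattern = re.compile(r"\{(.+?)\}")

text = "Conduct a thorough research about {topic} in {current_year}"
print(sorted(set(placeholder_pattern.findall(text))))
# ['current_year', 'topic']
print(placeholder_pattern.sub(lambda m: m.group(1), text))
# Conduct a thorough research about topic in current_year
```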
def generate_input_description_with_ai(input_name: str, crew: Crew, chat_llm) -> str:
"""
Generates an input description using AI based on the context of the crew.
Args:
input_name (str): The name of the input placeholder.
crew (Crew): The crew object.
chat_llm: The chat language model to use for AI calls.
Returns:
str: A concise description of the input.
"""
# Gather context from tasks and agents where the input is used
context_texts = []
placeholder_pattern = re.compile(r"\{(.+?)\}")
for task in crew.tasks:
if (
f"{{{input_name}}}" in task.description
or f"{{{input_name}}}" in task.expected_output
):
# Replace placeholders with input names
task_description = placeholder_pattern.sub(
lambda m: m.group(1), task.description
)
expected_output = placeholder_pattern.sub(
lambda m: m.group(1), task.expected_output
)
context_texts.append(f"Task Description: {task_description}")
context_texts.append(f"Expected Output: {expected_output}")
for agent in crew.agents:
if (
f"{{{input_name}}}" in agent.role
or f"{{{input_name}}}" in agent.goal
or f"{{{input_name}}}" in agent.backstory
):
# Replace placeholders with input names
agent_role = placeholder_pattern.sub(lambda m: m.group(1), agent.role)
agent_goal = placeholder_pattern.sub(lambda m: m.group(1), agent.goal)
agent_backstory = placeholder_pattern.sub(
lambda m: m.group(1), agent.backstory
)
context_texts.append(f"Agent Role: {agent_role}")
context_texts.append(f"Agent Goal: {agent_goal}")
context_texts.append(f"Agent Backstory: {agent_backstory}")
context = "\n".join(context_texts)
if not context:
# If no context is found for the input, raise an exception as per instruction
raise ValueError(f"No context found for input '{input_name}'.")
prompt = (
f"Based on the following context, write a concise description (15 words or less) of the input '{input_name}'.\n"
"Provide only the description, without any extra text or labels. Do not include placeholders like '{topic}' in the description.\n"
"Context:\n"
f"{context}"
)
response = chat_llm.call(messages=[{"role": "user", "content": prompt}])
description = response.strip()
return description
def generate_crew_description_with_ai(crew: Crew, chat_llm) -> str:
"""
Generates a brief description of the crew using AI.
Args:
crew (Crew): The crew object.
chat_llm: The chat language model to use for AI calls.
Returns:
str: A concise description of the crew's purpose (15 words or less).
"""
# Gather context from tasks and agents
context_texts = []
placeholder_pattern = re.compile(r"\{(.+?)\}")
for task in crew.tasks:
# Replace placeholders with input names
task_description = placeholder_pattern.sub(
lambda m: m.group(1), task.description
)
expected_output = placeholder_pattern.sub(
lambda m: m.group(1), task.expected_output
)
context_texts.append(f"Task Description: {task_description}")
context_texts.append(f"Expected Output: {expected_output}")
for agent in crew.agents:
# Replace placeholders with input names
agent_role = placeholder_pattern.sub(lambda m: m.group(1), agent.role)
agent_goal = placeholder_pattern.sub(lambda m: m.group(1), agent.goal)
agent_backstory = placeholder_pattern.sub(lambda m: m.group(1), agent.backstory)
context_texts.append(f"Agent Role: {agent_role}")
context_texts.append(f"Agent Goal: {agent_goal}")
context_texts.append(f"Agent Backstory: {agent_backstory}")
context = "\n".join(context_texts)
if not context:
raise ValueError("No context found for generating crew description.")
prompt = (
"Based on the following context, write a concise, action-oriented description (15 words or less) of the crew's purpose.\n"
"Provide only the description, without any extra text or labels. Do not include placeholders like '{topic}' in the description.\n"
"Context:\n"
f"{context}"
)
response = chat_llm.call(messages=[{"role": "user", "content": prompt}])
crew_description = response.strip()
return crew_description

View File

@@ -1,10 +1,8 @@
from os import getenv
from typing import Optional from typing import Optional
from urllib.parse import urljoin
import requests import requests
from os import getenv
from crewai.cli.version import get_crewai_version from crewai.cli.version import get_crewai_version
from urllib.parse import urljoin
class PlusAPI: class PlusAPI:

View File

@@ -1,12 +1,11 @@
import subprocess import subprocess
import click import click
from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage
from crewai.memory.entity.entity_memory import EntityMemory from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage
def reset_memories_command( def reset_memories_command(

View File

@@ -4,7 +4,7 @@ Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.co
## Installation ## Installation
Ensure you have Python >=3.10 <3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience. Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install uv: First, if you haven't already, install uv:

View File

@@ -2,7 +2,7 @@ research_task:
description: > description: >
Conduct a thorough research about {topic} Conduct a thorough research about {topic}
Make sure you find any interesting and relevant information given Make sure you find any interesting and relevant information given
the current year is {current_year}. the current year is 2024.
expected_output: > expected_output: >
A list with 10 bullet points of the most relevant information about {topic} A list with 10 bullet points of the most relevant information about {topic}
agent: researcher agent: researcher
@@ -12,6 +12,6 @@ reporting_task:
Review the context you got and expand each topic into a full section for a report. Review the context you got and expand each topic into a full section for a report.
Make sure the report is detailed and contains any and all relevant information. Make sure the report is detailed and contains any and all relevant information.
expected_output: > expected_output: >
A fully fledged report with the main topics, each with a full section of information. A fully fledge reports with the mains topics, each with a full section of information.
Formatted as markdown without '```' Formatted as markdown without '```'
agent: reporting_analyst agent: reporting_analyst

View File

@@ -1,26 +1,37 @@
from crewai import Agent, Crew, Process, Task from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task from crewai.project import CrewBase, agent, crew, task, before_kickoff, after_kickoff
# Uncomment the following line to use an example of a custom tool
# from {{folder_name}}.tools.custom_tool import MyCustomTool
# Uncomment the following line to use an example of a knowledge source
# from crewai.knowledge.source.text_file_knowledge_source import TextFileKnowledgeSource
# If you want to run a snippet of code before or after the crew starts, # Check our tools documentations for more information on how to use them
# you can use the @before_kickoff and @after_kickoff decorators # from crewai_tools import SerperDevTool
# https://docs.crewai.com/concepts/crews#example-crew-class-with-decorators
@CrewBase @CrewBase
class {{crew_name}}(): class {{crew_name}}():
"""{{crew_name}} crew""" """{{crew_name}} crew"""
# Learn more about YAML configuration files here:
# Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
# Tasks: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
agents_config = 'config/agents.yaml' agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml' tasks_config = 'config/tasks.yaml'
# If you would like to add tools to your agents, you can learn more about it here: @before_kickoff # Optional hook to be executed before the crew starts
# https://docs.crewai.com/concepts/agents#agent-tools def pull_data_example(self, inputs):
# Example of pulling data from an external API, dynamically changing the inputs
inputs['extra_data'] = "This is extra data"
return inputs
@after_kickoff # Optional hook to be executed after the crew has finished
def log_results(self, output):
# Example of logging results, dynamically changing the output
print(f"Results: {output}")
return output
@agent @agent
def researcher(self) -> Agent: def researcher(self) -> Agent:
return Agent( return Agent(
config=self.agents_config['researcher'], config=self.agents_config['researcher'],
# tools=[MyCustomTool()], # Example of custom tool, loaded on the beginning of file
verbose=True verbose=True
) )
@@ -31,9 +42,6 @@ class {{crew_name}}():
verbose=True verbose=True
) )
# To learn more about structured task outputs,
# task dependencies, and task callbacks, check out the documentation:
# https://docs.crewai.com/concepts/tasks#overview-of-a-task
@task @task
def research_task(self) -> Task: def research_task(self) -> Task:
return Task( return Task(
@@ -50,8 +58,14 @@ class {{crew_name}}():
@crew @crew
def crew(self) -> Crew: def crew(self) -> Crew:
"""Creates the {{crew_name}} crew""" """Creates the {{crew_name}} crew"""
# To learn how to add knowledge sources to your crew, check out the documentation: # You can add knowledge sources here
# https://docs.crewai.com/concepts/knowledge#what-is-knowledge # knowledge_path = "user_preference.txt"
# sources = [
# TextFileKnowledgeSource(
# file_path="knowledge/user_preference.txt",
# metadata={"preference": "personal"}
# ),
# ]
return Crew( return Crew(
agents=self.agents, # Automatically created by the @agent decorator agents=self.agents, # Automatically created by the @agent decorator
@@ -59,4 +73,5 @@ class {{crew_name}}():
process=Process.sequential, process=Process.sequential,
verbose=True, verbose=True,
# process=Process.hierarchical, # In case you wanna use that instead https://docs.crewai.com/how-to/Hierarchical/ # process=Process.hierarchical, # In case you wanna use that instead https://docs.crewai.com/how-to/Hierarchical/
# knowledge_sources=sources, # In the case you want to add knowledge sources
) )

View File

@@ -2,8 +2,6 @@
import sys import sys
import warnings import warnings
from datetime import datetime
from {{folder_name}}.crew import {{crew_name}} from {{folder_name}}.crew import {{crew_name}}
warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd") warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd")
@@ -18,14 +16,9 @@ def run():
Run the crew. Run the crew.
""" """
inputs = { inputs = {
'topic': 'AI LLMs', 'topic': 'AI LLMs'
'current_year': str(datetime.now().year)
} }
{{crew_name}}().crew().kickoff(inputs=inputs)
try:
{{crew_name}}().crew().kickoff(inputs=inputs)
except Exception as e:
raise Exception(f"An error occurred while running the crew: {e}")
def train(): def train():
@@ -62,4 +55,4 @@ def test():
{{crew_name}}().crew().test(n_iterations=int(sys.argv[1]), openai_model_name=sys.argv[2], inputs=inputs) {{crew_name}}().crew().test(n_iterations=int(sys.argv[1]), openai_model_name=sys.argv[2], inputs=inputs)
except Exception as e: except Exception as e:
raise Exception(f"An error occurred while testing the crew: {e}") raise Exception(f"An error occurred while replaying the crew: {e}")

View File

@@ -3,9 +3,9 @@ name = "{{folder_name}}"
version = "0.1.0" version = "0.1.0"
description = "{{name}} using crewAI" description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }] authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13" requires-python = ">=3.10,<=3.13"
dependencies = [ dependencies = [
"crewai[tools]>=0.95.0,<1.0.0" "crewai[tools]>=0.85.0,<1.0.0"
] ]
[project.scripts] [project.scripts]
@@ -18,6 +18,3 @@ test = "{{folder_name}}.main:test"
[build-system] [build-system]
requires = ["hatchling"] requires = ["hatchling"]
build-backend = "hatchling.build" build-backend = "hatchling.build"
[tool.crewai]
type = "crew"

View File

@@ -10,7 +10,7 @@ class MyCustomToolInput(BaseModel):
class MyCustomTool(BaseTool): class MyCustomTool(BaseTool):
name: str = "Name of my tool" name: str = "Name of my tool"
description: str = ( description: str = (
"Clear description for what this tool is useful for, your agent will need this information to use it." "Clear description for what this tool is useful for, you agent will need this information to use it."
) )
args_schema: Type[BaseModel] = MyCustomToolInput args_schema: Type[BaseModel] = MyCustomToolInput

View File

@@ -4,7 +4,7 @@ Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.co
## Installation ## Installation
Ensure you have Python >=3.10 <3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience. Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install uv: First, if you haven't already, install uv:

View File

@@ -1,47 +1,31 @@
from crewai import Agent, Crew, Process, Task from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task from crewai.project import CrewBase, agent, crew, task
# If you want to run a snippet of code before or after the crew starts,
# you can use the @before_kickoff and @after_kickoff decorators
# https://docs.crewai.com/concepts/crews#example-crew-class-with-decorators
@CrewBase @CrewBase
class PoemCrew: class PoemCrew():
"""Poem Crew""" """Poem Crew"""
# Learn more about YAML configuration files here: agents_config = 'config/agents.yaml'
# Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended tasks_config = 'config/tasks.yaml'
# Tasks: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
agents_config = "config/agents.yaml"
tasks_config = "config/tasks.yaml"
# If you would like to add tools to your crew, you can learn more about it here: @agent
# https://docs.crewai.com/concepts/agents#agent-tools def poem_writer(self) -> Agent:
@agent return Agent(
def poem_writer(self) -> Agent: config=self.agents_config['poem_writer'],
return Agent( )
config=self.agents_config["poem_writer"],
)
# To learn more about structured task outputs, @task
# task dependencies, and task callbacks, check out the documentation: def write_poem(self) -> Task:
# https://docs.crewai.com/concepts/tasks#overview-of-a-task return Task(
@task config=self.tasks_config['write_poem'],
def write_poem(self) -> Task: )
return Task(
config=self.tasks_config["write_poem"],
)
@crew @crew
def crew(self) -> Crew: def crew(self) -> Crew:
"""Creates the Research Crew""" """Creates the Research Crew"""
# To learn how to add knowledge sources to your crew, check out the documentation: return Crew(
# https://docs.crewai.com/concepts/knowledge#what-is-knowledge agents=self.agents, # Automatically created by the @agent decorator
tasks=self.tasks, # Automatically created by the @task decorator
return Crew( process=Process.sequential,
agents=self.agents, # Automatically created by the @agent decorator verbose=True,
tasks=self.tasks, # Automatically created by the @task decorator )
process=Process.sequential,
verbose=True,
)

View File

@@ -5,7 +5,7 @@ from pydantic import BaseModel
from crewai.flow.flow import Flow, listen, start from crewai.flow.flow import Flow, listen, start
from {{folder_name}}.crews.poem_crew.poem_crew import PoemCrew from .crews.poem_crew.poem_crew import PoemCrew
class PoemState(BaseModel): class PoemState(BaseModel):

View File

@@ -3,9 +3,9 @@ name = "{{folder_name}}"
version = "0.1.0" version = "0.1.0"
description = "{{name}} using crewAI" description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }] authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13" requires-python = ">=3.10,<=3.13"
dependencies = [ dependencies = [
"crewai[tools]>=0.95.0,<1.0.0", "crewai[tools]>=0.85.0,<1.0.0",
] ]
[project.scripts] [project.scripts]
@@ -15,6 +15,3 @@ plot = "{{folder_name}}.main:plot"
[build-system] [build-system]
requires = ["hatchling"] requires = ["hatchling"]
build-backend = "hatchling.build" build-backend = "hatchling.build"
[tool.crewai]
type = "flow"

View File

@@ -13,7 +13,7 @@ class MyCustomToolInput(BaseModel):
class MyCustomTool(BaseTool): class MyCustomTool(BaseTool):
name: str = "Name of my tool" name: str = "Name of my tool"
description: str = ( description: str = (
"Clear description for what this tool is useful for, your agent will need this information to use it." "Clear description for what this tool is useful for, you agent will need this information to use it."
) )
args_schema: Type[BaseModel] = MyCustomToolInput args_schema: Type[BaseModel] = MyCustomToolInput

View File

@@ -5,7 +5,7 @@ custom tools to power up your crews.
## Installing ## Installing
Ensure you have Python >=3.10 <3.13 installed on your system. This project Ensure you have Python >=3.10 <=3.13 installed on your system. This project
uses [UV](https://docs.astral.sh/uv/) for dependency management and package uses [UV](https://docs.astral.sh/uv/) for dependency management and package
handling, offering a seamless setup and execution experience. handling, offering a seamless setup and execution experience.

View File

@@ -3,10 +3,8 @@ name = "{{folder_name}}"
version = "0.1.0" version = "0.1.0"
description = "Power up your crews with {{folder_name}}" description = "Power up your crews with {{folder_name}}"
readme = "README.md" readme = "README.md"
requires-python = ">=3.10,<3.13" requires-python = ">=3.10,<=3.13"
dependencies = [ dependencies = [
"crewai[tools]>=0.95.0" "crewai[tools]>=0.85.0"
] ]
[tool.crewai]
type = "tool"

View File

@@ -117,7 +117,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
published_handle = publish_response.json()["handle"] published_handle = publish_response.json()["handle"]
console.print( console.print(
f"Successfully published {published_handle} ({project_version}).\nInstall it in other projects with crewai tool install {published_handle}", f"Succesfully published {published_handle} ({project_version}).\nInstall it in other projects with crewai tool install {published_handle}",
style="bold green", style="bold green",
) )
@@ -138,7 +138,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
self._add_package(get_response.json()) self._add_package(get_response.json())
console.print(f"Successfully installed {handle}", style="bold green") console.print(f"Succesfully installed {handle}", style="bold green")
def login(self): def login(self):
login_response = self.plus_api_client.login_to_tool_repository() login_response = self.plus_api_client.login_to_tool_repository()

View File

@@ -33,6 +33,26 @@ def copy_template(src, dst, name, class_name, folder_name):
click.secho(f" - Created {dst}", fg="green") click.secho(f" - Created {dst}", fg="green")
# Drop the simple_toml_parser when we move to python3.11
def simple_toml_parser(content):
result = {}
current_section = result
for line in content.split("\n"):
line = line.strip()
if line.startswith("[") and line.endswith("]"):
# New section
section = line[1:-1].split(".")
current_section = result
for key in section:
current_section = current_section.setdefault(key, {})
elif "=" in line:
key, value = line.split("=", 1)
key = key.strip()
value = value.strip().strip('"')
current_section[key] = value
return result
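A quick check of the fallback parser above against a minimal pyproject snippet (the function is copied verbatim so the block runs standalone). It only handles flat quoted-string values, which is all the CLI reads from it:

```python
def simple_toml_parser(content):
    result = {}
    current_section = result
    for line in content.split("\n"):
        line = line.strip()
        if line.startswith("[") and line.endswith("]"):
            # New section
            section = line[1:-1].split(".")
            current_section = result
            for key in section:
                current_section = current_section.setdefault(key, {})
        elif "=" in line:
            key, value = line.split("=", 1)
            key = key.strip()
            value = value.strip().strip('"')
            current_section[key] = value
    return result

sample = '[project]\nname = "demo"\n\n[tool.crewai]\ntype = "crew"\n'
print(simple_toml_parser(sample))
# {'project': {'name': 'demo'}, 'tool': {'crewai': {'type': 'crew'}}}
```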
def read_toml(file_path: str = "pyproject.toml"): def read_toml(file_path: str = "pyproject.toml"):
"""Read the content of a TOML file and return it as a dictionary.""" """Read the content of a TOML file and return it as a dictionary."""
with open(file_path, "rb") as f: with open(file_path, "rb") as f:
@@ -43,7 +63,7 @@ def read_toml(file_path: str = "pyproject.toml"):
def parse_toml(content): def parse_toml(content):
if sys.version_info >= (3, 11): if sys.version_info >= (3, 11):
return tomllib.loads(content) return tomllib.loads(content)
return tomli.loads(content) return simple_toml_parser(content)
def get_project_name( def get_project_name(

View File

@@ -1,6 +1,6 @@
import importlib.metadata import importlib.metadata
def get_crewai_version() -> str: def get_crewai_version() -> str:
"""Get the version number of CrewAI running the CLI""" """Get the version number of CrewAI running the CLI"""
return importlib.metadata.version("crewai") return importlib.metadata.version("crewai")

View File

@@ -1,11 +1,11 @@
import asyncio import asyncio
import json import json
import re import os
import uuid import uuid
import warnings import warnings
from concurrent.futures import Future from concurrent.futures import Future
from hashlib import md5 from hashlib import md5
from typing import Any, Callable, Dict, List, Optional, Set, Tuple, Union from typing import Any, Callable, Dict, List, Optional, Tuple, Union
from pydantic import ( from pydantic import (
UUID4, UUID4,
@@ -36,8 +36,6 @@ from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput from crewai.tasks.task_output import TaskOutput
from crewai.telemetry import Telemetry from crewai.telemetry import Telemetry
from crewai.tools.agent_tools.agent_tools import AgentTools from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.tools.base_tool import Tool
from crewai.types.crew_chat import ChatInputs
from crewai.types.usage_metrics import UsageMetrics from crewai.types.usage_metrics import UsageMetrics
from crewai.utilities import I18N, FileHandler, Logger, RPMController from crewai.utilities import I18N, FileHandler, Logger, RPMController
from crewai.utilities.constants import TRAINING_DATA_FILE from crewai.utilities.constants import TRAINING_DATA_FILE
@@ -51,10 +49,12 @@ from crewai.utilities.planning_handler import CrewPlanner
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
from crewai.utilities.training_handler import CrewTrainingHandler from crewai.utilities.training_handler import CrewTrainingHandler
try: agentops = None
import agentops # type: ignore if os.environ.get("AGENTOPS_API_KEY"):
except ImportError: try:
agentops = None import agentops # type: ignore
except ImportError:
pass
warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd") warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd")
@@ -205,10 +205,6 @@ class Crew(BaseModel):
default=None, default=None,
description="Knowledge sources for the crew. Add knowledge sources to the knowledge object.", description="Knowledge sources for the crew. Add knowledge sources to the knowledge object.",
) )
chat_llm: Optional[Any] = Field(
default=None,
description="LLM used to handle chatting with the crew.",
)
_knowledge: Optional[Knowledge] = PrivateAttr( _knowledge: Optional[Knowledge] = PrivateAttr(
default=None, default=None,
) )
@@ -540,6 +536,9 @@ class Crew(BaseModel):
if not agent.function_calling_llm: # type: ignore # "BaseAgent" has no attribute "function_calling_llm" if not agent.function_calling_llm: # type: ignore # "BaseAgent" has no attribute "function_calling_llm"
agent.function_calling_llm = self.function_calling_llm # type: ignore # "BaseAgent" has no attribute "function_calling_llm" agent.function_calling_llm = self.function_calling_llm # type: ignore # "BaseAgent" has no attribute "function_calling_llm"
if agent.allow_code_execution: # type: ignore # BaseAgent" has no attribute "allow_code_execution"
agent.tools += agent.get_code_execution_tools() # type: ignore # "BaseAgent" has no attribute "get_code_execution_tools"; maybe "get_delegation_tools"?
if not agent.step_callback: # type: ignore # "BaseAgent" has no attribute "step_callback" if not agent.step_callback: # type: ignore # "BaseAgent" has no attribute "step_callback"
agent.step_callback = self.step_callback # type: ignore # "BaseAgent" has no attribute "step_callback" agent.step_callback = self.step_callback # type: ignore # "BaseAgent" has no attribute "step_callback"
@@ -676,6 +675,7 @@ class Crew(BaseModel):
) )
manager.tools = [] manager.tools = []
raise Exception("Manager agent should not have tools") raise Exception("Manager agent should not have tools")
manager.tools = self.manager_agent.get_delegation_tools(self.agents)
else: else:
self.manager_llm = ( self.manager_llm = (
getattr(self.manager_llm, "model_name", None) getattr(self.manager_llm, "model_name", None)
@@ -687,7 +687,6 @@ class Crew(BaseModel):
goal=i18n.retrieve("hierarchical_manager_agent", "goal"), goal=i18n.retrieve("hierarchical_manager_agent", "goal"),
backstory=i18n.retrieve("hierarchical_manager_agent", "backstory"), backstory=i18n.retrieve("hierarchical_manager_agent", "backstory"),
tools=AgentTools(agents=self.agents).tools(), tools=AgentTools(agents=self.agents).tools(),
allow_delegation=True,
llm=self.manager_llm, llm=self.manager_llm,
verbose=self.verbose, verbose=self.verbose,
) )
@@ -730,10 +729,7 @@ class Crew(BaseModel):
f"No agent available for task: {task.description}. Ensure that either the task has an assigned agent or a manager agent is provided." f"No agent available for task: {task.description}. Ensure that either the task has an assigned agent or a manager agent is provided."
) )
# Determine which tools to use - task tools take precedence over agent tools self._prepare_agent_tools(task)
tools_for_task = task.tools or agent_to_use.tools or []
tools_for_task = self._prepare_tools(agent_to_use, task, tools_for_task)
self._log_task_start(task, agent_to_use.role) self._log_task_start(task, agent_to_use.role)
if isinstance(task, ConditionalTask): if isinstance(task, ConditionalTask):
@@ -750,7 +746,7 @@ class Crew(BaseModel):
future = task.execute_async( future = task.execute_async(
agent=agent_to_use, agent=agent_to_use,
context=context, context=context,
tools=tools_for_task, tools=agent_to_use.tools,
) )
futures.append((task, future, task_index)) futures.append((task, future, task_index))
else: else:
@@ -762,7 +758,7 @@ class Crew(BaseModel):
task_output = task.execute_sync( task_output = task.execute_sync(
agent=agent_to_use, agent=agent_to_use,
context=context, context=context,
tools=tools_for_task, tools=agent_to_use.tools,
) )
task_outputs = [task_output] task_outputs = [task_output]
self._process_task_result(task, task_output) self._process_task_result(task, task_output)
@@ -799,77 +795,45 @@ class Crew(BaseModel):
return skipped_task_output return skipped_task_output
return None return None
def _prepare_tools( def _prepare_agent_tools(self, task: Task):
self, agent: BaseAgent, task: Task, tools: List[Tool] if self.process == Process.hierarchical:
) -> List[Tool]: if self.manager_agent:
# Add delegation tools if agent allows delegation self._update_manager_tools(task)
if agent.allow_delegation: else:
if self.process == Process.hierarchical: raise ValueError("Manager agent is required for hierarchical process.")
if self.manager_agent: elif task.agent and task.agent.allow_delegation:
tools = self._update_manager_tools(task, tools) self._add_delegation_tools(task)
else:
raise ValueError(
"Manager agent is required for hierarchical process."
)
elif agent and agent.allow_delegation:
tools = self._add_delegation_tools(task, tools)
# Add code execution tools if agent allows code execution
if agent.allow_code_execution:
tools = self._add_code_execution_tools(agent, tools)
if agent and agent.multimodal:
tools = self._add_multimodal_tools(agent, tools)
return tools
def _get_agent_to_use(self, task: Task) -> Optional[BaseAgent]: def _get_agent_to_use(self, task: Task) -> Optional[BaseAgent]:
if self.process == Process.hierarchical: if self.process == Process.hierarchical:
return self.manager_agent return self.manager_agent
return task.agent return task.agent
def _merge_tools( def _add_delegation_tools(self, task: Task):
self, existing_tools: List[Tool], new_tools: List[Tool]
) -> List[Tool]:
"""Merge new tools into existing tools list, avoiding duplicates by tool name."""
if not new_tools:
return existing_tools
# Create mapping of tool names to new tools
new_tool_map = {tool.name: tool for tool in new_tools}
# Remove any existing tools that will be replaced
tools = [tool for tool in existing_tools if tool.name not in new_tool_map]
# Add all new tools
tools.extend(new_tools)
return tools
def _inject_delegation_tools(
self, tools: List[Tool], task_agent: BaseAgent, agents: List[BaseAgent]
):
delegation_tools = task_agent.get_delegation_tools(agents)
return self._merge_tools(tools, delegation_tools)
def _add_multimodal_tools(self, agent: BaseAgent, tools: List[Tool]):
multimodal_tools = agent.get_multimodal_tools()
return self._merge_tools(tools, multimodal_tools)
def _add_code_execution_tools(self, agent: BaseAgent, tools: List[Tool]):
code_tools = agent.get_code_execution_tools()
return self._merge_tools(tools, code_tools)
def _add_delegation_tools(self, task: Task, tools: List[Tool]):
agents_for_delegation = [agent for agent in self.agents if agent != task.agent] agents_for_delegation = [agent for agent in self.agents if agent != task.agent]
if len(self.agents) > 1 and len(agents_for_delegation) > 0 and task.agent: if len(self.agents) > 1 and len(agents_for_delegation) > 0 and task.agent:
if not tools: delegation_tools = task.agent.get_delegation_tools(agents_for_delegation)
tools = []
tools = self._inject_delegation_tools( # Add tools if they are not already in task.tools
tools, task.agent, agents_for_delegation for new_tool in delegation_tools:
) # Find the index of the tool with the same name
return tools existing_tool_index = next(
(
index
for index, tool in enumerate(task.tools or [])
if tool.name == new_tool.name
),
None,
)
if not task.tools:
task.tools = []
if existing_tool_index is not None:
# Replace the existing tool
task.tools[existing_tool_index] = new_tool
else:
# Add the new tool
task.tools.append(new_tool)
def _log_task_start(self, task: Task, role: str = "None"): def _log_task_start(self, task: Task, role: str = "None"):
if self.output_log_file: if self.output_log_file:
@@ -877,15 +841,14 @@ class Crew(BaseModel):
task_name=task.name, task=task.description, agent=role, status="started" task_name=task.name, task=task.description, agent=role, status="started"
) )
def _update_manager_tools(self, task: Task, tools: List[Tool]): def _update_manager_tools(self, task: Task):
if self.manager_agent: if self.manager_agent:
if task.agent: if task.agent:
tools = self._inject_delegation_tools(tools, task.agent, [task.agent]) self.manager_agent.tools = task.agent.get_delegation_tools([task.agent])
else: else:
tools = self._inject_delegation_tools( self.manager_agent.tools = self.manager_agent.get_delegation_tools(
tools, self.manager_agent, self.agents self.agents
) )
return tools
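The left-hand column routes every tool injection through `_merge_tools`, which dedupes by tool name instead of mutating `task.tools` in place as the right-hand column does. The core merge, reduced to a standalone sketch; the `Tool` stub here is illustrative, not crewai's class:

    from typing import List

    class Tool:
        # Minimal stand-in for crewai's Tool; only the name matters here.
        def __init__(self, name: str):
            self.name = name

    def merge_tools(existing: List[Tool], new: List[Tool]) -> List[Tool]:
        # New tools win on name collisions; everything else is preserved.
        if not new:
            return existing
        new_by_name = {t.name: t for t in new}
        merged = [t for t in existing if t.name not in new_by_name]
        merged.extend(new)
        return merged

    tools = merge_tools([Tool("search"), Tool("delegate")], [Tool("delegate")])
    assert [t.name for t in tools] == ["search", "delegate"]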
def _get_context(self, task: Task, task_outputs: List[TaskOutput]): def _get_context(self, task: Task, task_outputs: List[TaskOutput]):
context = ( context = (
@@ -997,31 +960,6 @@ class Crew(BaseModel):
return self._knowledge.query(query) return self._knowledge.query(query)
return None return None
def fetch_inputs(self) -> Set[str]:
"""
Gathers placeholders (e.g., {something}) referenced in tasks or agents.
Scans each task's 'description' + 'expected_output', and each agent's
'role', 'goal', and 'backstory'.
Returns a set of all discovered placeholder names.
"""
placeholder_pattern = re.compile(r"\{(.+?)\}")
required_inputs: Set[str] = set()
# Scan tasks for inputs
for task in self.tasks:
# description and expected_output might contain e.g. {topic}, {user_name}, etc.
text = f"{task.description or ''} {task.expected_output or ''}"
required_inputs.update(placeholder_pattern.findall(text))
# Scan agents for inputs
for agent in self.agents:
# role, goal, backstory might have placeholders like {role_detail}, etc.
text = f"{agent.role or ''} {agent.goal or ''} {agent.backstory or ''}"
required_inputs.update(placeholder_pattern.findall(text))
return required_inputs
def copy(self): def copy(self):
"""Create a deep copy of the Crew.""" """Create a deep copy of the Crew."""
@@ -1077,7 +1015,7 @@ class Crew(BaseModel):
def _interpolate_inputs(self, inputs: Dict[str, Any]) -> None: def _interpolate_inputs(self, inputs: Dict[str, Any]) -> None:
"""Interpolates the inputs in the tasks and agents.""" """Interpolates the inputs in the tasks and agents."""
[ [
task.interpolate_inputs_and_add_conversation_history( task.interpolate_inputs(
# type: ignore # "interpolate_inputs" of "Task" does not return a value (it only ever returns None) # type: ignore # "interpolate_inputs" of "Task" does not return a value (it only ever returns None)
inputs inputs
) )
@@ -1094,7 +1032,6 @@ class Crew(BaseModel):
agentops.end_session( agentops.end_session(
end_state="Success", end_state="Success",
end_state_reason="Finished Execution", end_state_reason="Finished Execution",
is_auto_end=True,
) )
self._telemetry.end_crew(self, final_string_output) self._telemetry.end_crew(self, final_string_output)

View File

@@ -14,15 +14,8 @@ from typing import (
cast, cast,
) )
from blinker import Signal
from pydantic import BaseModel, ValidationError from pydantic import BaseModel, ValidationError
from crewai.flow.flow_events import (
FlowFinishedEvent,
FlowStartedEvent,
MethodExecutionFinishedEvent,
MethodExecutionStartedEvent,
)
from crewai.flow.flow_visualizer import plot_flow from crewai.flow.flow_visualizer import plot_flow
from crewai.flow.utils import get_possible_return_constants from crewai.flow.utils import get_possible_return_constants
from crewai.telemetry import Telemetry from crewai.telemetry import Telemetry
@@ -30,47 +23,7 @@ from crewai.telemetry import Telemetry
T = TypeVar("T", bound=Union[BaseModel, Dict[str, Any]]) T = TypeVar("T", bound=Union[BaseModel, Dict[str, Any]])
def start(condition: Optional[Union[str, dict, Callable]] = None) -> Callable: def start(condition=None):
"""
Marks a method as a flow's starting point.
This decorator designates a method as an entry point for the flow execution.
It can optionally specify conditions that trigger the start based on other
method executions.
Parameters
----------
condition : Optional[Union[str, dict, Callable]], optional
Defines when the start method should execute. Can be:
- str: Name of a method that triggers this start
- dict: Contains "type" ("AND"/"OR") and "methods" (list of triggers)
- Callable: A method reference that triggers this start
Default is None, meaning unconditional start.
Returns
-------
Callable
A decorator function that marks the method as a flow start point.
Raises
------
ValueError
If the condition format is invalid.
Examples
--------
>>> @start() # Unconditional start
>>> def begin_flow(self):
... pass
>>> @start("method_name") # Start after specific method
>>> def conditional_start(self):
... pass
>>> @start(and_("method1", "method2")) # Start after multiple methods
>>> def complex_start(self):
... pass
"""
def decorator(func): def decorator(func):
func.__is_start_method__ = True func.__is_start_method__ = True
if condition is not None: if condition is not None:
@@ -95,42 +48,8 @@ def start(condition: Optional[Union[str, dict, Callable]] = None) -> Callable:
return decorator return decorator
def listen(condition: Union[str, dict, Callable]) -> Callable: def listen(condition):
"""
Creates a listener that executes when specified conditions are met.
This decorator sets up a method to execute in response to other method
executions in the flow. It supports both simple and complex triggering
conditions.
Parameters
----------
condition : Union[str, dict, Callable]
Specifies when the listener should execute. Can be:
- str: Name of a method that triggers this listener
- dict: Contains "type" ("AND"/"OR") and "methods" (list of triggers)
- Callable: A method reference that triggers this listener
Returns
-------
Callable
A decorator function that sets up the method as a listener.
Raises
------
ValueError
If the condition format is invalid.
Examples
--------
>>> @listen("process_data") # Listen to single method
>>> def handle_processed_data(self):
... pass
>>> @listen(or_("success", "failure")) # Listen to multiple methods
>>> def handle_completion(self):
... pass
"""
def decorator(func): def decorator(func):
if isinstance(condition, str): if isinstance(condition, str):
func.__trigger_methods__ = [condition] func.__trigger_methods__ = [condition]
@@ -154,103 +73,16 @@ def listen(condition: Union[str, dict, Callable]) -> Callable:
return decorator return decorator
def router(condition: Union[str, dict, Callable]) -> Callable: def router(method):
"""
Creates a routing method that directs flow execution based on conditions.
This decorator marks a method as a router, which can dynamically determine
the next steps in the flow based on its return value. Routers are triggered
by specified conditions and can return constants that determine which path
the flow should take.
Parameters
----------
condition : Union[str, dict, Callable]
Specifies when the router should execute. Can be:
- str: Name of a method that triggers this router
- dict: Contains "type" ("AND"/"OR") and "methods" (list of triggers)
- Callable: A method reference that triggers this router
Returns
-------
Callable
A decorator function that sets up the method as a router.
Raises
------
ValueError
If the condition format is invalid.
Examples
--------
>>> @router("check_status")
>>> def route_based_on_status(self):
... if self.state.status == "success":
... return SUCCESS
... return FAILURE
>>> @router(and_("validate", "process"))
>>> def complex_routing(self):
... if all([self.state.valid, self.state.processed]):
... return CONTINUE
... return STOP
"""
def decorator(func): def decorator(func):
func.__is_router__ = True func.__is_router__ = True
if isinstance(condition, str): func.__router_for__ = method.__name__
func.__trigger_methods__ = [condition]
func.__condition_type__ = "OR"
elif (
isinstance(condition, dict)
and "type" in condition
and "methods" in condition
):
func.__trigger_methods__ = condition["methods"]
func.__condition_type__ = condition["type"]
elif callable(condition) and hasattr(condition, "__name__"):
func.__trigger_methods__ = [condition.__name__]
func.__condition_type__ = "OR"
else:
raise ValueError(
"Condition must be a method, string, or a result of or_() or and_()"
)
return func return func
return decorator return decorator
def or_(*conditions: Union[str, dict, Callable]) -> dict: def or_(*conditions):
"""
Combines multiple conditions with OR logic for flow control.
Creates a condition that is satisfied when any of the specified conditions
are met. This is used with @start, @listen, or @router decorators to create
complex triggering conditions.
"""
Parameters
----------
*conditions : Union[str, dict, Callable]
Variable number of conditions that can be:
- str: Method names
- dict: Existing condition dictionaries
- Callable: Method references
Returns
-------
dict
A condition dictionary with format:
{"type": "OR", "methods": list_of_method_names}
Raises
------
ValueError
If any condition is invalid.
Examples
--------
>>> @listen(or_("success", "timeout"))
>>> def handle_completion(self):
... pass
"""
methods = [] methods = []
for condition in conditions: for condition in conditions:
if isinstance(condition, dict) and "methods" in condition: if isinstance(condition, dict) and "methods" in condition:
@@ -264,39 +96,7 @@ def or_(*conditions: Union[str, dict, Callable]) -> dict:
return {"type": "OR", "methods": methods} return {"type": "OR", "methods": methods}
def and_(*conditions: Union[str, dict, Callable]) -> dict: def and_(*conditions):
"""
Combines multiple conditions with AND logic for flow control.
Creates a condition that is satisfied only when all specified conditions
are met. This is used with @start, @listen, or @router decorators to create
complex triggering conditions.
Parameters
----------
*conditions : Union[str, dict, Callable]
Variable number of conditions that can be:
- str: Method names
- dict: Existing condition dictionaries
- Callable: Method references
Returns
-------
dict
A condition dictionary with format:
{"type": "AND", "methods": list_of_method_names}
Raises
------
ValueError
If any condition is invalid.
Examples
--------
>>> @listen(and_("validated", "processed"))
>>> def handle_complete_data(self):
... pass
"""
methods = [] methods = []
for condition in conditions: for condition in conditions:
if isinstance(condition, dict) and "methods" in condition: if isinstance(condition, dict) and "methods" in condition:
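Taken together, these decorators compose as in the sketch below, which is valid against both signatures shown above: @start() marks the entry point, @listen fires when its trigger completes, a router's string return value becomes the trigger for downstream listeners, and or_() fans in multiple triggers. Method and path names are illustrative:

    from crewai.flow.flow import Flow, listen, or_, router, start

    class ExampleFlow(Flow):
        @start()
        def fetch(self):
            return "data"

        @listen("fetch")
        def validate(self, data):
            return f"validated {data}"

        @router(validate)
        def route(self):
            # The returned constant selects which listeners fire next.
            return "ok"

        @listen(or_("ok", "retry"))
        def finish(self):
            return "done"

    result = ExampleFlow().kickoff()  # -> "done"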
@@ -316,8 +116,8 @@ class FlowMeta(type):
start_methods = [] start_methods = []
listeners = {} listeners = {}
routers = {}
router_paths = {} router_paths = {}
routers = set()
for attr_name, attr_value in dct.items(): for attr_name, attr_value in dct.items():
if hasattr(attr_value, "__is_start_method__"): if hasattr(attr_value, "__is_start_method__"):
@@ -330,11 +130,18 @@ class FlowMeta(type):
methods = attr_value.__trigger_methods__ methods = attr_value.__trigger_methods__
condition_type = getattr(attr_value, "__condition_type__", "OR") condition_type = getattr(attr_value, "__condition_type__", "OR")
listeners[attr_name] = (condition_type, methods) listeners[attr_name] = (condition_type, methods)
if hasattr(attr_value, "__is_router__") and attr_value.__is_router__:
routers.add(attr_name) elif hasattr(attr_value, "__is_router__"):
possible_returns = get_possible_return_constants(attr_value) routers[attr_value.__router_for__] = attr_name
if possible_returns: possible_returns = get_possible_return_constants(attr_value)
router_paths[attr_name] = possible_returns if possible_returns:
router_paths[attr_name] = possible_returns
# Register router as a listener to its triggering method
trigger_method_name = attr_value.__router_for__
methods = [trigger_method_name]
condition_type = "OR"
listeners[attr_name] = (condition_type, methods)
setattr(cls, "_start_methods", start_methods) setattr(cls, "_start_methods", start_methods)
setattr(cls, "_listeners", listeners) setattr(cls, "_listeners", listeners)
@@ -349,10 +156,9 @@ class Flow(Generic[T], metaclass=FlowMeta):
_start_methods: List[str] = [] _start_methods: List[str] = []
_listeners: Dict[str, tuple[str, List[str]]] = {} _listeners: Dict[str, tuple[str, List[str]]] = {}
_routers: Set[str] = set() _routers: Dict[str, str] = {}
_router_paths: Dict[str, List[str]] = {} _router_paths: Dict[str, List[str]] = {}
initial_state: Union[Type[T], T, None] = None initial_state: Union[Type[T], T, None] = None
event_emitter = Signal("event_emitter")
def __class_getitem__(cls: Type["Flow"], item: Type[T]) -> Type["Flow"]: def __class_getitem__(cls: Type["Flow"], item: Type[T]) -> Type["Flow"]:
class _FlowGeneric(cls): # type: ignore class _FlowGeneric(cls): # type: ignore
@@ -396,10 +202,20 @@ class Flow(Generic[T], metaclass=FlowMeta):
return self._method_outputs return self._method_outputs
def _initialize_state(self, inputs: Dict[str, Any]) -> None: def _initialize_state(self, inputs: Dict[str, Any]) -> None:
if isinstance(self._state, BaseModel): """
# Structured state Initializes or updates the state with the provided inputs.
try:
Args:
inputs: Dictionary of inputs to initialize or update the state.
Raises:
ValueError: If inputs do not match the structured state model.
TypeError: If state is neither a BaseModel instance nor a dictionary.
"""
if isinstance(self._state, BaseModel):
# Structured state management
try:
# Define a function to create the dynamic class
def create_model_with_extra_forbid( def create_model_with_extra_forbid(
base_model: Type[BaseModel], base_model: Type[BaseModel],
) -> Type[BaseModel]: ) -> Type[BaseModel]:
@@ -409,33 +225,50 @@ class Flow(Generic[T], metaclass=FlowMeta):
return ModelWithExtraForbid return ModelWithExtraForbid
# Create the dynamic class
ModelWithExtraForbid = create_model_with_extra_forbid( ModelWithExtraForbid = create_model_with_extra_forbid(
self._state.__class__ self._state.__class__
) )
# Create a new instance using the combined state and inputs
self._state = cast( self._state = cast(
T, ModelWithExtraForbid(**{**self._state.model_dump(), **inputs}) T, ModelWithExtraForbid(**{**self._state.model_dump(), **inputs})
) )
except ValidationError as e: except ValidationError as e:
raise ValueError(f"Invalid inputs for structured state: {e}") from e raise ValueError(f"Invalid inputs for structured state: {e}") from e
elif isinstance(self._state, dict): elif isinstance(self._state, dict):
# Unstructured state management
self._state.update(inputs) self._state.update(inputs)
else: else:
raise TypeError("State must be a BaseModel instance or a dictionary.") raise TypeError("State must be a BaseModel instance or a dictionary.")
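Both sides validate merged inputs by building a dynamic subclass of the state's class with extra="forbid", so unknown keys raise instead of silently attaching to the model. Distilled to a standalone sketch, assuming pydantic v2; the model names are illustrative:

    from pydantic import BaseModel, ConfigDict, ValidationError

    class State(BaseModel):
        counter: int = 0

    def with_extra_forbid(base: type[BaseModel]) -> type[BaseModel]:
        # Dynamic subclass that rejects fields the base model doesn't declare.
        class ModelWithExtraForbid(base):
            model_config = ConfigDict(extra="forbid")
        return ModelWithExtraForbid

    Strict = with_extra_forbid(State)
    Strict(**{**State().model_dump(), "counter": 2})   # ok
    try:
        Strict(counter=2, unknown=1)
    except ValidationError as e:
        print("rejected:", e.error_count(), "error")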
def kickoff(self, inputs: Optional[Dict[str, Any]] = None) -> Any: def kickoff(self, inputs: Optional[Dict[str, Any]] = None) -> Any:
self.event_emitter.send( """
self, Starts the execution of the flow synchronously.
event=FlowStartedEvent(
type="flow_started",
flow_name=self.__class__.__name__,
),
)
Args:
inputs: Optional dictionary of inputs to initialize or update the state.
Returns:
The final output from the flow execution.
"""
if inputs is not None: if inputs is not None:
self._initialize_state(inputs) self._initialize_state(inputs)
return asyncio.run(self.kickoff_async()) return asyncio.run(self.kickoff_async())
async def kickoff_async(self, inputs: Optional[Dict[str, Any]] = None) -> Any: async def kickoff_async(self, inputs: Optional[Dict[str, Any]] = None) -> Any:
"""
Starts the execution of the flow asynchronously.
Args:
inputs: Optional dictionary of inputs to initialize or update the state.
Returns:
The final output from the flow execution.
"""
if inputs is not None:
self._initialize_state(inputs)
if not self._start_methods: if not self._start_methods:
raise ValueError("No start method defined") raise ValueError("No start method defined")
@@ -443,42 +276,22 @@ class Flow(Generic[T], metaclass=FlowMeta):
self.__class__.__name__, list(self._methods.keys()) self.__class__.__name__, list(self._methods.keys())
) )
# Create tasks for all start methods
tasks = [ tasks = [
self._execute_start_method(start_method) self._execute_start_method(start_method)
for start_method in self._start_methods for start_method in self._start_methods
] ]
# Run all start methods concurrently
await asyncio.gather(*tasks) await asyncio.gather(*tasks)
final_output = self._method_outputs[-1] if self._method_outputs else None # Return the final output (from the last executed method)
if self._method_outputs:
self.event_emitter.send( return self._method_outputs[-1]
self, else:
event=FlowFinishedEvent( return None # Or raise an exception if no methods were executed
type="flow_finished",
flow_name=self.__class__.__name__,
result=final_output,
),
)
return final_output
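The left-hand column wires a blinker Signal into the flow lifecycle, emitting typed events at start and finish. Consumers attach as below; the receiver and dict payload are illustrative, standing in for the event dataclasses from crewai.flow.flow_events on that side:

    from blinker import Signal

    event_emitter = Signal("event_emitter")

    def on_flow_event(sender, event):
        # blinker passes the sender positionally and send() kwargs by name.
        print(f"{event['type']} from {sender}")

    event_emitter.connect(on_flow_event)
    event_emitter.send("ExampleFlow", event={"type": "flow_started"})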
async def _execute_start_method(self, start_method_name: str) -> None: async def _execute_start_method(self, start_method_name: str) -> None:
"""
Executes a flow's start method and its triggered listeners.
This internal method handles the execution of methods marked with @start
decorator and manages the subsequent chain of listener executions.
Parameters
----------
start_method_name : str
The name of the start method to execute.
Notes
-----
- Executes the start method and captures its result
- Triggers execution of any listeners waiting on this start method
- Part of the flow's initialization sequence
"""
result = await self._execute_method( result = await self._execute_method(
start_method_name, self._methods[start_method_name] start_method_name, self._methods[start_method_name]
) )
@@ -492,181 +305,70 @@ class Flow(Generic[T], metaclass=FlowMeta):
if asyncio.iscoroutinefunction(method) if asyncio.iscoroutinefunction(method)
else method(*args, **kwargs) else method(*args, **kwargs)
) )
self._method_outputs.append(result) self._method_outputs.append(result) # Store the output
# Track method execution counts
self._method_execution_counts[method_name] = ( self._method_execution_counts[method_name] = (
self._method_execution_counts.get(method_name, 0) + 1 self._method_execution_counts.get(method_name, 0) + 1
) )
return result return result
async def _execute_listeners(self, trigger_method: str, result: Any) -> None: async def _execute_listeners(self, trigger_method: str, result: Any) -> None:
""" listener_tasks = []
Executes all listeners and routers triggered by a method completion.
This internal method manages the execution flow by: if trigger_method in self._routers:
1. First executing all triggered routers sequentially router_method = self._methods[self._routers[trigger_method]]
2. Then executing all triggered listeners in parallel path = await self._execute_method(
self._routers[trigger_method], router_method
Parameters
----------
trigger_method : str
The name of the method that triggered these listeners.
result : Any
The result from the triggering method, passed to listeners
that accept parameters.
Notes
-----
- Routers are executed sequentially to maintain flow control
- Each router's result becomes the new trigger_method
- Normal listeners are executed in parallel for efficiency
- Listeners can receive the trigger method's result as a parameter
"""
# First, handle routers repeatedly until no router triggers anymore
while True:
routers_triggered = self._find_triggered_methods(
trigger_method, router_only=True
) )
if not routers_triggered: trigger_method = path
break
for router_name in routers_triggered:
await self._execute_single_listener(router_name, result)
# After executing router, the router's result is the path
# The last router executed sets the trigger_method
# The router result is the last element in self._method_outputs
trigger_method = self._method_outputs[-1]
# Now that no more routers are triggered by current trigger_method,
# execute normal listeners
listeners_triggered = self._find_triggered_methods(
trigger_method, router_only=False
)
if listeners_triggered:
tasks = [
self._execute_single_listener(listener_name, result)
for listener_name in listeners_triggered
]
await asyncio.gather(*tasks)
def _find_triggered_methods(
self, trigger_method: str, router_only: bool
) -> List[str]:
"""
Finds all methods that should be triggered based on conditions.
This internal method evaluates both OR and AND conditions to determine
which methods should be executed next in the flow.
Parameters
----------
trigger_method : str
The name of the method that just completed execution.
router_only : bool
If True, only consider router methods.
If False, only consider non-router methods.
Returns
-------
List[str]
Names of methods that should be triggered.
Notes
-----
- Handles both OR and AND conditions:
* OR: Triggers if any condition is met
* AND: Triggers only when all conditions are met
- Maintains state for AND conditions using _pending_and_listeners
- Separates router and normal listener evaluation
"""
triggered = []
for listener_name, (condition_type, methods) in self._listeners.items(): for listener_name, (condition_type, methods) in self._listeners.items():
is_router = listener_name in self._routers
if router_only != is_router:
continue
if condition_type == "OR": if condition_type == "OR":
# If the trigger_method matches any in methods, run this
if trigger_method in methods: if trigger_method in methods:
triggered.append(listener_name) # Schedule the listener without preventing re-execution
listener_tasks.append(
self._execute_single_listener(listener_name, result)
)
elif condition_type == "AND": elif condition_type == "AND":
# Initialize pending methods for this listener if not already done # Initialize pending methods for this listener if not already done
if listener_name not in self._pending_and_listeners: if listener_name not in self._pending_and_listeners:
self._pending_and_listeners[listener_name] = set(methods) self._pending_and_listeners[listener_name] = set(methods)
# Remove the trigger method from pending methods # Remove the trigger method from pending methods
if trigger_method in self._pending_and_listeners[listener_name]: self._pending_and_listeners[listener_name].discard(trigger_method)
self._pending_and_listeners[listener_name].discard(trigger_method)
if not self._pending_and_listeners[listener_name]: if not self._pending_and_listeners[listener_name]:
# All required methods have been executed # All required methods have been executed
triggered.append(listener_name) listener_tasks.append(
self._execute_single_listener(listener_name, result)
)
# Reset pending methods for this listener # Reset pending methods for this listener
self._pending_and_listeners.pop(listener_name, None) self._pending_and_listeners.pop(listener_name, None)
return triggered # Run all listener tasks concurrently and wait for them to complete
if listener_tasks:
await asyncio.gather(*listener_tasks)
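Both versions share the same AND-condition bookkeeping: each AND listener keeps a pending set of its trigger methods and fires only once the set drains. The idea in isolation, with illustrative names:

    # AND-condition bookkeeping: a listener fires only once every trigger
    # in its set has completed at least once.
    listeners = {"report": ("AND", ["fetch", "validate"])}
    pending = {}

    def on_method_finished(name):
        fired = []
        for listener, (ctype, methods) in listeners.items():
            if ctype != "AND" or name not in methods:
                continue
            pending.setdefault(listener, set(methods)).discard(name)
            if not pending[listener]:
                fired.append(listener)
                pending.pop(listener)
        return fired

    assert on_method_finished("fetch") == []
    assert on_method_finished("validate") == ["report"]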
async def _execute_single_listener(self, listener_name: str, result: Any) -> None: async def _execute_single_listener(self, listener_name: str, result: Any) -> None:
"""
Executes a single listener method with proper event handling.
This internal method manages the execution of an individual listener,
including parameter inspection, event emission, and error handling.
Parameters
----------
listener_name : str
The name of the listener method to execute.
result : Any
The result from the triggering method, which may be passed
to the listener if it accepts parameters.
Notes
-----
- Inspects method signature to determine if it accepts the trigger result
- Emits events for method execution start and finish
- Handles errors gracefully with detailed logging
- Recursively triggers listeners of this listener
- Supports both parameterized and parameter-less listeners
Error Handling
--------------
Catches and logs any exceptions during execution, preventing
individual listener failures from breaking the entire flow.
"""
try: try:
method = self._methods[listener_name] method = self._methods[listener_name]
self.event_emitter.send(
self,
event=MethodExecutionStartedEvent(
type="method_execution_started",
method_name=listener_name,
flow_name=self.__class__.__name__,
),
)
sig = inspect.signature(method) sig = inspect.signature(method)
params = list(sig.parameters.values()) params = list(sig.parameters.values())
# Exclude 'self' parameter
method_params = [p for p in params if p.name != "self"] method_params = [p for p in params if p.name != "self"]
if method_params: if method_params:
# If listener expects parameters, pass the result
listener_result = await self._execute_method( listener_result = await self._execute_method(
listener_name, method, result listener_name, method, result
) )
else: else:
# If listener does not expect parameters, call without arguments
listener_result = await self._execute_method(listener_name, method) listener_result = await self._execute_method(listener_name, method)
self.event_emitter.send( # Execute listeners of this listener
self,
event=MethodExecutionFinishedEvent(
type="method_execution_finished",
method_name=listener_name,
flow_name=self.__class__.__name__,
),
)
# Execute listeners (and possibly routers) of this listener
await self._execute_listeners(listener_name, listener_result) await self._execute_listeners(listener_name, listener_result)
except Exception as e: except Exception as e:
print( print(
f"[Flow._execute_single_listener] Error in method {listener_name}: {e}" f"[Flow._execute_single_listener] Error in method {listener_name}: {e}"
@@ -679,4 +381,5 @@ class Flow(Generic[T], metaclass=FlowMeta):
self._telemetry.flow_plotting_span( self._telemetry.flow_plotting_span(
self.__class__.__name__, list(self._methods.keys()) self.__class__.__name__, list(self._methods.keys())
) )
plot_flow(self, filename) plot_flow(self, filename)

View File

@@ -1,33 +0,0 @@
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Optional
@dataclass
class Event:
type: str
flow_name: str
timestamp: datetime = field(init=False)
def __post_init__(self):
self.timestamp = datetime.now()
@dataclass
class FlowStartedEvent(Event):
pass
@dataclass
class MethodExecutionStartedEvent(Event):
method_name: str
@dataclass
class MethodExecutionFinishedEvent(Event):
method_name: str
@dataclass
class FlowFinishedEvent(Event):
result: Optional[Any] = None

View File

@@ -1,14 +1,12 @@
# flow_visualizer.py # flow_visualizer.py
import os import os
from pathlib import Path
from pyvis.network import Network from pyvis.network import Network
from crewai.flow.config import COLORS, NODE_STYLES from crewai.flow.config import COLORS, NODE_STYLES
from crewai.flow.html_template_handler import HTMLTemplateHandler from crewai.flow.html_template_handler import HTMLTemplateHandler
from crewai.flow.legend_generator import generate_legend_items_html, get_legend_items from crewai.flow.legend_generator import generate_legend_items_html, get_legend_items
from crewai.flow.path_utils import safe_path_join, validate_path_exists
from crewai.flow.utils import calculate_node_levels from crewai.flow.utils import calculate_node_levels
from crewai.flow.visualization_utils import ( from crewai.flow.visualization_utils import (
add_edges, add_edges,
@@ -18,209 +16,89 @@ from crewai.flow.visualization_utils import (
class FlowPlot: class FlowPlot:
"""Handles the creation and rendering of flow visualization diagrams."""
def __init__(self, flow): def __init__(self, flow):
"""
Initialize FlowPlot with a flow object.
Parameters
----------
flow : Flow
A Flow instance to visualize.
Raises
------
ValueError
If flow object is invalid or missing required attributes.
"""
if not hasattr(flow, '_methods'):
raise ValueError("Invalid flow object: missing '_methods' attribute")
if not hasattr(flow, '_listeners'):
raise ValueError("Invalid flow object: missing '_listeners' attribute")
if not hasattr(flow, '_start_methods'):
raise ValueError("Invalid flow object: missing '_start_methods' attribute")
self.flow = flow self.flow = flow
self.colors = COLORS self.colors = COLORS
self.node_styles = NODE_STYLES self.node_styles = NODE_STYLES
def plot(self, filename): def plot(self, filename):
""" net = Network(
Generate and save an HTML visualization of the flow. directed=True,
height="750px",
width="100%",
bgcolor=self.colors["bg"],
layout=None,
)
Parameters # Set options to disable physics
---------- net.set_options(
filename : str
Name of the output file (without extension).
Raises
------
ValueError
If filename is invalid or network generation fails.
IOError
If file operations fail or visualization cannot be generated.
RuntimeError
If network visualization generation fails.
"""
if not filename or not isinstance(filename, str):
raise ValueError("Filename must be a non-empty string")
try:
# Initialize network
net = Network(
directed=True,
height="750px",
width="100%",
bgcolor=self.colors["bg"],
layout=None,
)
# Set options to disable physics
net.set_options(
"""
var options = {
"nodes": {
"font": {
"multi": "html"
}
},
"physics": {
"enabled": false
}
}
""" """
) var options = {
"nodes": {
"font": {
"multi": "html"
}
},
"physics": {
"enabled": false
}
}
"""
)
# Calculate levels for nodes # Calculate levels for nodes
try: node_levels = calculate_node_levels(self.flow)
node_levels = calculate_node_levels(self.flow)
except Exception as e:
raise ValueError(f"Failed to calculate node levels: {str(e)}")
# Compute positions # Compute positions
try: node_positions = compute_positions(self.flow, node_levels)
node_positions = compute_positions(self.flow, node_levels)
except Exception as e:
raise ValueError(f"Failed to compute node positions: {str(e)}")
# Add nodes to the network # Add nodes to the network
try: add_nodes_to_network(net, self.flow, node_positions, self.node_styles)
add_nodes_to_network(net, self.flow, node_positions, self.node_styles)
except Exception as e:
raise RuntimeError(f"Failed to add nodes to network: {str(e)}")
# Add edges to the network # Add edges to the network
try: add_edges(net, self.flow, node_positions, self.colors)
add_edges(net, self.flow, node_positions, self.colors)
except Exception as e:
raise RuntimeError(f"Failed to add edges to network: {str(e)}")
# Generate HTML network_html = net.generate_html()
try: final_html_content = self._generate_final_html(network_html)
network_html = net.generate_html()
final_html_content = self._generate_final_html(network_html)
except Exception as e:
raise RuntimeError(f"Failed to generate network visualization: {str(e)}")
# Save the final HTML content to the file # Save the final HTML content to the file
try: with open(f"{filename}.html", "w", encoding="utf-8") as f:
with open(f"{filename}.html", "w", encoding="utf-8") as f: f.write(final_html_content)
f.write(final_html_content) print(f"Plot saved as {filename}.html")
print(f"Plot saved as {filename}.html")
except IOError as e:
raise IOError(f"Failed to save flow visualization to {filename}.html: {str(e)}")
except (ValueError, RuntimeError, IOError) as e: self._cleanup_pyvis_lib()
raise e
except Exception as e:
raise RuntimeError(f"Unexpected error during flow visualization: {str(e)}")
finally:
self._cleanup_pyvis_lib()
def _generate_final_html(self, network_html): def _generate_final_html(self, network_html):
""" # Extract just the body content from the generated HTML
Generate the final HTML content with network visualization and legend. current_dir = os.path.dirname(__file__)
template_path = os.path.join(
current_dir, "assets", "crewai_flow_visual_template.html"
)
logo_path = os.path.join(current_dir, "assets", "crewai_logo.svg")
Parameters html_handler = HTMLTemplateHandler(template_path, logo_path)
---------- network_body = html_handler.extract_body_content(network_html)
network_html : str
HTML content generated by pyvis Network.
Returns # Generate the legend items HTML
------- legend_items = get_legend_items(self.colors)
str legend_items_html = generate_legend_items_html(legend_items)
Complete HTML content with styling and legend. final_html_content = html_handler.generate_final_html(
network_body, legend_items_html
Raises )
------ return final_html_content
IOError
If template or logo files cannot be accessed.
ValueError
If network_html is invalid.
"""
if not network_html:
raise ValueError("Invalid network HTML content")
try:
# Extract just the body content from the generated HTML
current_dir = os.path.dirname(__file__)
template_path = safe_path_join("assets", "crewai_flow_visual_template.html", root=current_dir)
logo_path = safe_path_join("assets", "crewai_logo.svg", root=current_dir)
if not os.path.exists(template_path):
raise IOError(f"Template file not found: {template_path}")
if not os.path.exists(logo_path):
raise IOError(f"Logo file not found: {logo_path}")
html_handler = HTMLTemplateHandler(template_path, logo_path)
network_body = html_handler.extract_body_content(network_html)
# Generate the legend items HTML
legend_items = get_legend_items(self.colors)
legend_items_html = generate_legend_items_html(legend_items)
final_html_content = html_handler.generate_final_html(
network_body, legend_items_html
)
return final_html_content
except Exception as e:
raise IOError(f"Failed to generate visualization HTML: {str(e)}")
def _cleanup_pyvis_lib(self): def _cleanup_pyvis_lib(self):
""" # Clean up the generated lib folder
Clean up the generated lib folder from pyvis. lib_folder = os.path.join(os.getcwd(), "lib")
This method safely removes the temporary lib directory created by pyvis
during network visualization generation.
"""
try: try:
lib_folder = safe_path_join("lib", root=os.getcwd())
if os.path.exists(lib_folder) and os.path.isdir(lib_folder): if os.path.exists(lib_folder) and os.path.isdir(lib_folder):
import shutil import shutil
shutil.rmtree(lib_folder) shutil.rmtree(lib_folder)
except ValueError as e:
print(f"Error validating lib folder path: {e}")
except Exception as e: except Exception as e:
print(f"Error cleaning up lib folder: {e}") print(f"Error cleaning up {lib_folder}: {e}")
def plot_flow(flow, filename="flow_plot"): def plot_flow(flow, filename="flow_plot"):
"""
Convenience function to create and save a flow visualization.
Parameters
----------
flow : Flow
Flow instance to visualize.
filename : str, optional
Output filename without extension, by default "flow_plot".
Raises
------
ValueError
If flow object or filename is invalid.
IOError
If file operations fail.
"""
visualizer = FlowPlot(flow) visualizer = FlowPlot(flow)
visualizer.plot(filename) visualizer.plot(filename)
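Both versions drive the same pyvis pipeline: build a directed Network, disable physics so the precomputed (x, y) layout is honored, then splice net.generate_html() into a custom template. A minimal standalone sketch of that pipeline, using the same options string format the diff passes:

    from pyvis.network import Network

    net = Network(directed=True, height="750px", width="100%", bgcolor="#ffffff")
    net.set_options("""
    var options = { "nodes": { "font": { "multi": "html" } },
                    "physics": { "enabled": false } }
    """)
    # Fixed coordinates only stick because physics is off.
    net.add_node("start", label="Start", x=0, y=0)
    net.add_node("next", label="Next", x=0, y=150)
    net.add_edge("start", "next")
    html = net.generate_html()  # splice into your own template before saving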

View File

@@ -1,53 +1,26 @@
import base64 import base64
import re import re
from pathlib import Path
from crewai.flow.path_utils import safe_path_join, validate_path_exists
class HTMLTemplateHandler: class HTMLTemplateHandler:
"""Handles HTML template processing and generation for flow visualization diagrams."""
def __init__(self, template_path, logo_path): def __init__(self, template_path, logo_path):
""" self.template_path = template_path
Initialize HTMLTemplateHandler with validated template and logo paths. self.logo_path = logo_path
Parameters
----------
template_path : str
Path to the HTML template file.
logo_path : str
Path to the logo image file.
Raises
------
ValueError
If template or logo paths are invalid or files don't exist.
"""
try:
self.template_path = validate_path_exists(template_path, "file")
self.logo_path = validate_path_exists(logo_path, "file")
except ValueError as e:
raise ValueError(f"Invalid template or logo path: {e}")
def read_template(self): def read_template(self):
"""Read and return the HTML template file contents."""
with open(self.template_path, "r", encoding="utf-8") as f: with open(self.template_path, "r", encoding="utf-8") as f:
return f.read() return f.read()
def encode_logo(self): def encode_logo(self):
"""Convert the logo SVG file to base64 encoded string."""
with open(self.logo_path, "rb") as logo_file: with open(self.logo_path, "rb") as logo_file:
logo_svg_data = logo_file.read() logo_svg_data = logo_file.read()
return base64.b64encode(logo_svg_data).decode("utf-8") return base64.b64encode(logo_svg_data).decode("utf-8")
def extract_body_content(self, html): def extract_body_content(self, html):
"""Extract and return content between body tags from HTML string."""
match = re.search("<body.*?>(.*?)</body>", html, re.DOTALL) match = re.search("<body.*?>(.*?)</body>", html, re.DOTALL)
return match.group(1) if match else "" return match.group(1) if match else ""
def generate_legend_items_html(self, legend_items): def generate_legend_items_html(self, legend_items):
"""Generate HTML markup for the legend items."""
legend_items_html = "" legend_items_html = ""
for item in legend_items: for item in legend_items:
if "border" in item: if "border" in item:
@@ -75,7 +48,6 @@ class HTMLTemplateHandler:
return legend_items_html return legend_items_html
def generate_final_html(self, network_body, legend_items_html, title="Flow Plot"): def generate_final_html(self, network_body, legend_items_html, title="Flow Plot"):
"""Combine all components into final HTML document with network visualization."""
html_template = self.read_template() html_template = self.read_template()
logo_svg_base64 = self.encode_logo() logo_svg_base64 = self.encode_logo()
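Stripped of the path validation, the handler's job is two small tricks: base64-encode the SVG logo so it can be inlined as a data URI, and regex out the <body> of pyvis' generated page for re-templating. A sketch, with a placeholder path:

    import base64
    import re

    def encode_logo(path: str) -> str:
        # Inline the SVG so the generated HTML has no file dependencies.
        with open(path, "rb") as f:
            return base64.b64encode(f.read()).decode("utf-8")

    def extract_body(html: str) -> str:
        match = re.search(r"<body.*?>(.*?)</body>", html, re.DOTALL)
        return match.group(1) if match else ""

    # Usage: f'<img src="data:image/svg+xml;base64,{encode_logo("logo.svg")}"/>'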

View File

@@ -1,4 +1,3 @@
def get_legend_items(colors): def get_legend_items(colors):
return [ return [
{"label": "Start Method", "color": colors["start"]}, {"label": "Start Method", "color": colors["start"]},

View File

@@ -1,135 +0,0 @@
"""
Path utilities for secure file operations in CrewAI flow module.
This module provides utilities for secure path handling to prevent directory
traversal attacks and ensure paths remain within allowed boundaries.
"""
import os
from pathlib import Path
from typing import List, Union
def safe_path_join(*parts: str, root: Union[str, Path, None] = None) -> str:
"""
Safely join path components and ensure the result is within allowed boundaries.
Parameters
----------
*parts : str
Variable number of path components to join.
root : Union[str, Path, None], optional
Root directory to use as base. If None, uses current working directory.
Returns
-------
str
String representation of the resolved path.
Raises
------
ValueError
If the resulting path would be outside the root directory
or if any path component is invalid.
"""
if not parts:
raise ValueError("No path components provided")
try:
# Convert all parts to strings and clean them
clean_parts = [str(part).strip() for part in parts if part]
if not clean_parts:
raise ValueError("No valid path components provided")
# Establish root directory
root_path = Path(root).resolve() if root else Path.cwd()
# Join and resolve the full path
full_path = Path(root_path, *clean_parts).resolve()
# Check if the resolved path is within root
if not str(full_path).startswith(str(root_path)):
raise ValueError(
f"Invalid path: Potential directory traversal. Path must be within {root_path}"
)
return str(full_path)
except Exception as e:
if isinstance(e, ValueError):
raise
raise ValueError(f"Invalid path components: {str(e)}")
def validate_path_exists(path: Union[str, Path], file_type: str = "file") -> str:
"""
Validate that a path exists and is of the expected type.
Parameters
----------
path : Union[str, Path]
Path to validate.
file_type : str, optional
Expected type ('file' or 'directory'), by default 'file'.
Returns
-------
str
Validated path as string.
Raises
------
ValueError
If path doesn't exist or is not of expected type.
"""
try:
path_obj = Path(path).resolve()
if not path_obj.exists():
raise ValueError(f"Path does not exist: {path}")
if file_type == "file" and not path_obj.is_file():
raise ValueError(f"Path is not a file: {path}")
elif file_type == "directory" and not path_obj.is_dir():
raise ValueError(f"Path is not a directory: {path}")
return str(path_obj)
except Exception as e:
if isinstance(e, ValueError):
raise
raise ValueError(f"Invalid path: {str(e)}")
def list_files(directory: Union[str, Path], pattern: str = "*") -> List[str]:
"""
Safely list files in a directory matching a pattern.
Parameters
----------
directory : Union[str, Path]
Directory to search in.
pattern : str, optional
Glob pattern to match files against, by default "*".
Returns
-------
List[str]
List of matching file paths.
Raises
------
ValueError
If directory is invalid or inaccessible.
"""
try:
dir_path = Path(directory).resolve()
if not dir_path.is_dir():
raise ValueError(f"Not a directory: {directory}")
return [str(p) for p in dir_path.glob(pattern) if p.is_file()]
except Exception as e:
if isinstance(e, ValueError):
raise
raise ValueError(f"Error listing files: {str(e)}")

View File

@@ -1,25 +1,9 @@
"""
Utility functions for flow visualization and dependency analysis.
This module provides core functionality for analyzing and manipulating flow structures,
including node level calculation, ancestor tracking, and return value analysis.
Functions in this module are primarily used by the visualization system to create
accurate and informative flow diagrams.
Example
-------
>>> flow = Flow()
>>> node_levels = calculate_node_levels(flow)
>>> ancestors = build_ancestor_dict(flow)
"""
import ast import ast
import inspect import inspect
import textwrap import textwrap
from typing import Any, Dict, List, Optional, Set, Union
def get_possible_return_constants(function: Any) -> Optional[List[str]]: def get_possible_return_constants(function):
try: try:
source = inspect.getsource(function) source = inspect.getsource(function)
except OSError: except OSError:
@@ -47,80 +31,23 @@ def get_possible_return_constants(function: Any) -> Optional[List[str]]:
print(f"Source code:\n{source}") print(f"Source code:\n{source}")
return None return None
return_values = set() return_values = []
dict_definitions = {}
class DictionaryAssignmentVisitor(ast.NodeVisitor):
def visit_Assign(self, node):
# Check if this assignment is assigning a dictionary literal to a variable
if isinstance(node.value, ast.Dict) and len(node.targets) == 1:
target = node.targets[0]
if isinstance(target, ast.Name):
var_name = target.id
dict_values = []
# Extract string values from the dictionary
for val in node.value.values:
if isinstance(val, ast.Constant) and isinstance(val.value, str):
dict_values.append(val.value)
# If non-string, skip or just ignore
if dict_values:
dict_definitions[var_name] = dict_values
self.generic_visit(node)
class ReturnVisitor(ast.NodeVisitor): class ReturnVisitor(ast.NodeVisitor):
def visit_Return(self, node): def visit_Return(self, node):
# Direct string return # Check if the return value is a constant (Python 3.8+)
if isinstance(node.value, ast.Constant) and isinstance( if isinstance(node.value, ast.Constant):
node.value.value, str return_values.append(node.value.value)
):
return_values.add(node.value.value)
# Dictionary-based return, like return paths[result]
elif isinstance(node.value, ast.Subscript):
# Check if we're subscripting a known dictionary variable
if isinstance(node.value.value, ast.Name):
var_name = node.value.value.id
if var_name in dict_definitions:
# Add all possible dictionary values
for v in dict_definitions[var_name]:
return_values.add(v)
self.generic_visit(node)
# First pass: identify dictionary assignments
DictionaryAssignmentVisitor().visit(code_ast)
# Second pass: identify returns
ReturnVisitor().visit(code_ast) ReturnVisitor().visit(code_ast)
return return_values
return list(return_values) if return_values else None
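Both variants walk the function's AST for return statements; the fuller one additionally resolves dictionary lookups like return paths[result] back to the dict's literal values. The core visitor, reduced; the example function is illustrative:

    import ast
    import inspect
    import textwrap

    def possible_returns(func):
        tree = ast.parse(textwrap.dedent(inspect.getsource(func)))
        found = set()

        class ReturnVisitor(ast.NodeVisitor):
            def visit_Return(self, node):
                # Collect only literal string returns, like the flow code does.
                if isinstance(node.value, ast.Constant) and isinstance(node.value.value, str):
                    found.add(node.value.value)
                self.generic_visit(node)

        ReturnVisitor().visit(tree)
        return sorted(found)

    def route(ok):
        if ok:
            return "success"
        return "failure"

    assert possible_returns(route) == ["failure", "success"]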
def calculate_node_levels(flow: Any) -> Dict[str, int]: def calculate_node_levels(flow):
""" levels = {}
Calculate the hierarchical level of each node in the flow. queue = []
visited = set()
Performs a breadth-first traversal of the flow graph to assign levels pending_and_listeners = {}
to nodes, starting with start methods at level 0.
Parameters
----------
flow : Any
The flow instance containing methods, listeners, and router configurations.
Returns
-------
Dict[str, int]
Dictionary mapping method names to their hierarchical levels.
Notes
-----
- Start methods are assigned level 0
- Each subsequent connected node is assigned level = parent_level + 1
- Handles both OR and AND conditions for listeners
- Processes router paths separately
"""
levels: Dict[str, int] = {}
queue: List[str] = []
visited: Set[str] = set()
pending_and_listeners: Dict[str, Set[str]] = {}
# Make all start methods at level 0 # Make all start methods at level 0
for method_name, method in flow._methods.items(): for method_name, method in flow._methods.items():
@@ -134,7 +61,10 @@ def calculate_node_levels(flow: Any) -> Dict[str, int]:
current_level = levels[current] current_level = levels[current]
visited.add(current) visited.add(current)
for listener_name, (condition_type, trigger_methods) in flow._listeners.items(): for listener_name, (
condition_type,
trigger_methods,
) in flow._listeners.items():
if condition_type == "OR": if condition_type == "OR":
if current in trigger_methods: if current in trigger_methods:
if ( if (
@@ -159,7 +89,7 @@ def calculate_node_levels(flow: Any) -> Dict[str, int]:
queue.append(listener_name) queue.append(listener_name)
# Handle router connections # Handle router connections
if current in flow._routers: if current in flow._routers.values():
router_method_name = current router_method_name = current
paths = flow._router_paths.get(router_method_name, []) paths = flow._router_paths.get(router_method_name, [])
for path in paths: for path in paths:
@@ -175,24 +105,10 @@ def calculate_node_levels(flow: Any) -> Dict[str, int]:
levels[listener_name] = current_level + 1 levels[listener_name] = current_level + 1
if listener_name not in visited: if listener_name not in visited:
queue.append(listener_name) queue.append(listener_name)
return levels return levels
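Level assignment is a plain BFS from the start methods: every node reachable from level n lands at n + 1, or keeps a smaller level found earlier. In isolation, over a generic adjacency map with illustrative names:

    from collections import deque

    def node_levels(start_nodes, edges):
        # edges: node -> list of downstream nodes it triggers
        levels = {n: 0 for n in start_nodes}
        queue = deque(start_nodes)
        while queue:
            current = queue.popleft()
            for child in edges.get(current, []):
                if child not in levels or levels[child] > levels[current] + 1:
                    levels[child] = levels[current] + 1
                    queue.append(child)
        return levels

    print(node_levels(["begin"], {"begin": ["a", "b"], "a": ["end"], "b": ["end"]}))
    # {'begin': 0, 'a': 1, 'b': 1, 'end': 2}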
def count_outgoing_edges(flow: Any) -> Dict[str, int]: def count_outgoing_edges(flow):
"""
Count the number of outgoing edges for each method in the flow.
Parameters
----------
flow : Any
The flow instance to analyze.
Returns
-------
Dict[str, int]
Dictionary mapping method names to their outgoing edge count.
"""
counts = {} counts = {}
for method_name in flow._methods: for method_name in flow._methods:
counts[method_name] = 0 counts[method_name] = 0
@@ -204,53 +120,16 @@ def count_outgoing_edges(flow: Any) -> Dict[str, int]:
return counts return counts
def build_ancestor_dict(flow: Any) -> Dict[str, Set[str]]: def build_ancestor_dict(flow):
""" ancestors = {node: set() for node in flow._methods}
Build a dictionary mapping each node to its ancestor nodes. visited = set()
Parameters
----------
flow : Any
The flow instance to analyze.
Returns
-------
Dict[str, Set[str]]
Dictionary mapping each node to a set of its ancestor nodes.
"""
ancestors: Dict[str, Set[str]] = {node: set() for node in flow._methods}
visited: Set[str] = set()
for node in flow._methods: for node in flow._methods:
if node not in visited: if node not in visited:
dfs_ancestors(node, ancestors, visited, flow) dfs_ancestors(node, ancestors, visited, flow)
return ancestors return ancestors
def dfs_ancestors( def dfs_ancestors(node, ancestors, visited, flow):
node: str,
ancestors: Dict[str, Set[str]],
visited: Set[str],
flow: Any
) -> None:
"""
Perform depth-first search to build ancestor relationships.
Parameters
----------
node : str
Current node being processed.
ancestors : Dict[str, Set[str]]
Dictionary tracking ancestor relationships.
visited : Set[str]
Set of already visited nodes.
flow : Any
The flow instance being analyzed.
Notes
-----
This function modifies the ancestors dictionary in-place to build
the complete ancestor graph.
"""
if node in visited: if node in visited:
return return
visited.add(node) visited.add(node)
@@ -263,7 +142,7 @@ def dfs_ancestors(
dfs_ancestors(listener_name, ancestors, visited, flow) dfs_ancestors(listener_name, ancestors, visited, flow)
# Handle router methods separately # Handle router methods separately
if node in flow._routers: if node in flow._routers.values():
router_method_name = node router_method_name = node
paths = flow._router_paths.get(router_method_name, []) paths = flow._router_paths.get(router_method_name, [])
for path in paths: for path in paths:
@@ -274,48 +153,12 @@ def dfs_ancestors(
dfs_ancestors(listener_name, ancestors, visited, flow) dfs_ancestors(listener_name, ancestors, visited, flow)
def is_ancestor(node: str, ancestor_candidate: str, ancestors: Dict[str, Set[str]]) -> bool: def is_ancestor(node, ancestor_candidate, ancestors):
"""
Check if one node is an ancestor of another.
Parameters
----------
node : str
The node to check ancestors for.
ancestor_candidate : str
The potential ancestor node.
ancestors : Dict[str, Set[str]]
Dictionary containing ancestor relationships.
Returns
-------
bool
True if ancestor_candidate is an ancestor of node, False otherwise.
"""
return ancestor_candidate in ancestors.get(node, set()) return ancestor_candidate in ancestors.get(node, set())
def build_parent_children_dict(flow: Any) -> Dict[str, List[str]]: def build_parent_children_dict(flow):
""" parent_children = {}
Build a dictionary mapping parent nodes to their children.
Parameters
----------
flow : Any
The flow instance to analyze.
Returns
-------
Dict[str, List[str]]
Dictionary mapping parent method names to lists of their child method names.
Notes
-----
- Maps listeners to their trigger methods
- Maps router methods to their paths and listeners
- Children lists are sorted for consistent ordering
"""
parent_children: Dict[str, List[str]] = {}
# Map listeners to their trigger methods # Map listeners to their trigger methods
for listener_name, (_, trigger_methods) in flow._listeners.items(): for listener_name, (_, trigger_methods) in flow._listeners.items():
@@ -339,24 +182,7 @@ def build_parent_children_dict(flow: Any) -> Dict[str, List[str]]:
return parent_children return parent_children
def get_child_index(parent: str, child: str, parent_children: Dict[str, List[str]]) -> int: def get_child_index(parent, child, parent_children):
"""
Get the index of a child node in its parent's sorted children list.
Parameters
----------
parent : str
The parent node name.
child : str
The child node name to find the index for.
parent_children : Dict[str, List[str]]
Dictionary mapping parents to their children lists.
Returns
-------
int
Zero-based index of the child in its parent's sorted children list.
"""
children = parent_children.get(parent, []) children = parent_children.get(parent, [])
children.sort() children.sort()
return children.index(child) return children.index(child)

View File

@@ -1,23 +1,5 @@
"""
Utilities for creating visual representations of flow structures.
This module provides functions for generating network visualizations of flows,
including node placement, edge creation, and visual styling. It handles the
conversion of flow structures into visual network graphs with appropriate
styling and layout.
Example
-------
>>> flow = Flow()
>>> net = Network(directed=True)
>>> node_positions = compute_positions(flow, node_levels)
>>> add_nodes_to_network(net, flow, node_positions, node_styles)
>>> add_edges(net, flow, node_positions, colors)
"""
import ast import ast
import inspect import inspect
from typing import Any, Dict, List, Optional, Tuple, Union
from .utils import ( from .utils import (
build_ancestor_dict, build_ancestor_dict,
@@ -27,25 +9,8 @@ from .utils import (
) )
def method_calls_crew(method: Any) -> bool: def method_calls_crew(method):
""" """Check if the method calls `.crew()`."""
Check if the method contains a call to `.crew()`.
Parameters
----------
method : Any
The method to analyze for crew() calls.
Returns
-------
bool
True if the method calls .crew(), False otherwise.
Notes
-----
Uses AST analysis to detect method calls, specifically looking for
attribute access of 'crew'.
"""
try: try:
source = inspect.getsource(method) source = inspect.getsource(method)
source = inspect.cleandoc(source) source = inspect.cleandoc(source)
@@ -55,7 +20,6 @@ def method_calls_crew(method: Any) -> bool:
return False return False
class CrewCallVisitor(ast.NodeVisitor): class CrewCallVisitor(ast.NodeVisitor):
"""AST visitor to detect .crew() method calls."""
def __init__(self): def __init__(self):
self.found = False self.found = False
@@ -70,34 +34,7 @@ def method_calls_crew(method: Any) -> bool:
return visitor.found return visitor.found
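The visitor pattern above is easy to exercise in isolation. A self-contained sketch of the same AST technique, generalized to any attribute call (names here are illustrative, not from the patch):

import ast
import inspect

def calls_attribute(method, attr_name: str) -> bool:
    """Return True if the method's source contains a call like obj.<attr_name>(...)."""
    try:
        source = inspect.cleandoc(inspect.getsource(method))
        tree = ast.parse(source)
    except (OSError, TypeError, SyntaxError):
        return False  # source unavailable, e.g. built-ins or REPL definitions

    class Visitor(ast.NodeVisitor):
        def __init__(self):
            self.found = False

        def visit_Call(self, node):
            if isinstance(node.func, ast.Attribute) and node.func.attr == attr_name:
                self.found = True
            self.generic_visit(node)

    visitor = Visitor()
    visitor.visit(tree)
    return visitor.found

def kickoff_flow(self):
    return self.crew().kickoff()

calls_attribute(kickoff_flow, "crew")  # True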
def add_nodes_to_network( def add_nodes_to_network(net, flow, node_positions, node_styles):
net: Any,
flow: Any,
node_positions: Dict[str, Tuple[float, float]],
node_styles: Dict[str, Dict[str, Any]]
) -> None:
"""
Add nodes to the network visualization with appropriate styling.
Parameters
----------
net : Any
The pyvis Network instance to add nodes to.
flow : Any
The flow instance containing method information.
node_positions : Dict[str, Tuple[float, float]]
Dictionary mapping node names to their (x, y) positions.
node_styles : Dict[str, Dict[str, Any]]
Dictionary containing style configurations for different node types.
Notes
-----
Node types include:
- Start methods
- Router methods
- Crew methods
- Regular methods
"""
def human_friendly_label(method_name): def human_friendly_label(method_name):
return method_name.replace("_", " ").title() return method_name.replace("_", " ").title()
@@ -136,33 +73,9 @@ def add_nodes_to_network(
) )
def compute_positions( def compute_positions(flow, node_levels, y_spacing=150, x_spacing=150):
flow: Any, level_nodes = {}
node_levels: Dict[str, int], node_positions = {}
y_spacing: float = 150,
x_spacing: float = 150
) -> Dict[str, Tuple[float, float]]:
"""
Compute the (x, y) positions for each node in the flow graph.
Parameters
----------
flow : Any
The flow instance to compute positions for.
node_levels : Dict[str, int]
Dictionary mapping node names to their hierarchical levels.
y_spacing : float, optional
Vertical spacing between levels, by default 150.
x_spacing : float, optional
Horizontal spacing between nodes, by default 150.
Returns
-------
Dict[str, Tuple[float, float]]
Dictionary mapping node names to their (x, y) coordinates.
"""
level_nodes: Dict[int, List[str]] = {}
node_positions: Dict[str, Tuple[float, float]] = {}
for method_name, level in node_levels.items(): for method_name, level in node_levels.items():
level_nodes.setdefault(level, []).append(method_name) level_nodes.setdefault(level, []).append(method_name)
@@ -177,44 +90,16 @@ def compute_positions(
return node_positions return node_positions
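A worked example of the layout math (a sketch; the per-level centering happens in the unchanged middle of the function, so the exact x offsets below are an assumption):

node_levels = {"start": 0, "branch_a": 1, "branch_b": 1}
positions = compute_positions(flow, node_levels)
# Level 0 has one node and level 1 has two; with the default 150px spacings,
# a layout centered on x=0 gives roughly:
#   start    -> (0, 0)
#   branch_a -> (-75, 150)
#   branch_b -> (75, 150)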
def add_edges( def add_edges(net, flow, node_positions, colors):
net: Any,
flow: Any,
node_positions: Dict[str, Tuple[float, float]],
colors: Dict[str, str]
) -> None:
edge_smooth: Dict[str, Union[str, float]] = {"type": "continuous"} # Default value
"""
Add edges to the network visualization with appropriate styling.
Parameters
----------
net : Any
The pyvis Network instance to add edges to.
flow : Any
The flow instance containing edge information.
node_positions : Dict[str, Tuple[float, float]]
Dictionary mapping node names to their positions.
colors : Dict[str, str]
Dictionary mapping edge types to their colors.
Notes
-----
- Handles both normal listener edges and router edges
- Applies appropriate styling (color, dashes) based on edge type
- Adds curvature to edges when needed (cycles or multiple children)
"""
ancestors = build_ancestor_dict(flow) ancestors = build_ancestor_dict(flow)
parent_children = build_parent_children_dict(flow) parent_children = build_parent_children_dict(flow)
# Edges for normal listeners
for method_name in flow._listeners: for method_name in flow._listeners:
condition_type, trigger_methods = flow._listeners[method_name] condition_type, trigger_methods = flow._listeners[method_name]
is_and_condition = condition_type == "AND" is_and_condition = condition_type == "AND"
for trigger in trigger_methods: for trigger in trigger_methods:
# Check if nodes exist before adding edges if trigger in flow._methods or trigger in flow._routers.values():
if trigger in node_positions and method_name in node_positions:
is_router_edge = any( is_router_edge = any(
trigger in paths for paths in flow._router_paths.values() trigger in paths for paths in flow._router_paths.values()
) )
@@ -239,7 +124,7 @@ def add_edges(
else: else:
edge_smooth = {"type": "cubicBezier"} edge_smooth = {"type": "cubicBezier"}
else: else:
edge_smooth.update({"type": "continuous"}) edge_smooth = False
edge_style = { edge_style = {
"color": edge_color, "color": edge_color,
@@ -250,22 +135,7 @@ def add_edges(
} }
net.add_edge(trigger, method_name, **edge_style) net.add_edge(trigger, method_name, **edge_style)
else:
# Nodes not found in node_positions. Check if it's a known router outcome and a known method.
is_router_edge = any(
trigger in paths for paths in flow._router_paths.values()
)
# Check if method_name is a known method
method_known = method_name in flow._methods
# If it's a known router edge and the method is known, don't warn.
# This means the path is legitimate, just not reflected as nodes here.
if not (is_router_edge and method_known):
print(
f"Warning: No node found for '{trigger}' or '{method_name}'. Skipping edge."
)
# Edges for router return paths
for router_method_name, paths in flow._router_paths.items(): for router_method_name, paths in flow._router_paths.items():
for path in paths: for path in paths:
for listener_name, ( for listener_name, (
@@ -273,49 +143,36 @@ def add_edges(
trigger_methods, trigger_methods,
) in flow._listeners.items(): ) in flow._listeners.items():
if path in trigger_methods: if path in trigger_methods:
if ( is_cycle_edge = is_ancestor(trigger, method_name, ancestors)
router_method_name in node_positions parent_has_multiple_children = (
and listener_name in node_positions len(parent_children.get(router_method_name, [])) > 1
): )
is_cycle_edge = is_ancestor( needs_curvature = is_cycle_edge or parent_has_multiple_children
router_method_name, listener_name, ancestors
)
parent_has_multiple_children = (
len(parent_children.get(router_method_name, [])) > 1
)
needs_curvature = is_cycle_edge or parent_has_multiple_children
if needs_curvature: if needs_curvature:
source_pos = node_positions.get(router_method_name) source_pos = node_positions.get(router_method_name)
target_pos = node_positions.get(listener_name) target_pos = node_positions.get(listener_name)
if source_pos and target_pos: if source_pos and target_pos:
dx = target_pos[0] - source_pos[0] dx = target_pos[0] - source_pos[0]
smooth_type = "curvedCCW" if dx <= 0 else "curvedCW" smooth_type = "curvedCCW" if dx <= 0 else "curvedCW"
index = get_child_index( index = get_child_index(
router_method_name, listener_name, parent_children router_method_name, listener_name, parent_children
)
edge_smooth = {
"type": smooth_type,
"roundness": 0.2 + (0.1 * index),
}
else:
edge_smooth = {"type": "cubicBezier"}
else:
edge_smooth.update({"type": "continuous"})
edge_style = {
"color": colors["router_edge"],
"width": 2,
"arrows": "to",
"dashes": True,
"smooth": edge_smooth,
}
net.add_edge(router_method_name, listener_name, **edge_style)
else:
# Same check here: known router edge and known method?
method_known = listener_name in flow._methods
if not method_known:
print(
f"Warning: No node found for '{router_method_name}' or '{listener_name}'. Skipping edge."
) )
edge_smooth = {
"type": smooth_type,
"roundness": 0.2 + (0.1 * index),
}
else:
edge_smooth = {"type": "cubicBezier"}
else:
edge_smooth = False
edge_style = {
"color": colors["router_edge"],
"width": 2,
"arrows": "to",
"dashes": True,
"smooth": edge_smooth,
}
net.add_edge(router_method_name, listener_name, **edge_style)

View File

@@ -1,10 +1,11 @@
import os import os
from typing import Any, Dict, List, Optional
from typing import List, Optional, Dict, Any
from pydantic import BaseModel, ConfigDict, Field from pydantic import BaseModel, ConfigDict, Field
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage
from crewai.utilities.constants import DEFAULT_SCORE_THRESHOLD
os.environ["TOKENIZERS_PARALLELISM"] = "false" # removes logging from fastembed os.environ["TOKENIZERS_PARALLELISM"] = "false" # removes logging from fastembed
@@ -14,13 +15,13 @@ class Knowledge(BaseModel):
Knowledge is a collection of sources and setup for the vector store to save and query relevant context. Knowledge is a collection of sources and setup for the vector store to save and query relevant context.
Args: Args:
sources: List[BaseKnowledgeSource] = Field(default_factory=list) sources: List[BaseKnowledgeSource] = Field(default_factory=list)
storage: Optional[KnowledgeStorage] = Field(default=None) storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
embedder_config: Optional[Dict[str, Any]] = None embedder_config: Optional[Dict[str, Any]] = None
""" """
sources: List[BaseKnowledgeSource] = Field(default_factory=list) sources: List[BaseKnowledgeSource] = Field(default_factory=list)
model_config = ConfigDict(arbitrary_types_allowed=True) model_config = ConfigDict(arbitrary_types_allowed=True)
storage: Optional[KnowledgeStorage] = Field(default=None) storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
embedder_config: Optional[Dict[str, Any]] = None embedder_config: Optional[Dict[str, Any]] = None
collection_name: Optional[str] = None collection_name: Optional[str] = None
@@ -45,20 +46,19 @@ class Knowledge(BaseModel):
source.storage = self.storage source.storage = self.storage
source.add() source.add()
def query(self, query: List[str], limit: int = 3) -> List[Dict[str, Any]]: def query(
self, query: List[str], limit: int = 3, preference: Optional[str] = None
) -> List[Dict[str, Any]]:
""" """
Query across all knowledge sources to find the most relevant information. Query across all knowledge sources to find the most relevant information.
Returns the top_k most relevant chunks. Returns the top_k most relevant chunks.
Raises:
ValueError: If storage is not initialized.
""" """
if self.storage is None:
raise ValueError("Storage is not initialized.")
results = self.storage.search( results = self.storage.search(
query, query,
limit, limit,
filter={"preference": preference} if preference else None,
score_threshold=DEFAULT_SCORE_THRESHOLD,
) )
return results return results
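Typical usage of the reworked query() — a sketch that assumes an embedder is configured (e.g. an OpenAI key) and that Knowledge wires up its storage on construction, as it does when used inside a crew:

from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

knowledge = Knowledge(
    collection_name="product_docs",
    sources=[
        StringKnowledgeSource(
            content="CrewAI supports sequential and hierarchical processes."
        )
    ],
)

hits = knowledge.query(["which processes are supported?"], limit=3)
for hit in hits:
    print(hit)  # each hit is a dict carrying the matched chunk plus its score/metadata

Note the old preference filter is gone; results are now gated by DEFAULT_SCORE_THRESHOLD instead.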

View File

@@ -1,42 +1,30 @@
from abc import ABC, abstractmethod from abc import ABC, abstractmethod
from pathlib import Path from pathlib import Path
from typing import Dict, List, Optional, Union from typing import Union, List, Dict, Any
from pydantic import Field, field_validator from pydantic import Field
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.utilities.logger import Logger
from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage
from crewai.utilities.constants import KNOWLEDGE_DIRECTORY from crewai.utilities.constants import KNOWLEDGE_DIRECTORY
from crewai.utilities.logger import Logger
class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC): class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
"""Base class for knowledge sources that load content from files.""" """Base class for knowledge sources that load content from files."""
_logger: Logger = Logger(verbose=True) _logger: Logger = Logger(verbose=True)
file_path: Optional[Union[Path, List[Path], str, List[str]]] = Field( file_path: Union[Path, List[Path], str, List[str]] = Field(
default=None, ..., description="The path to the file"
description="[Deprecated] The path to the file. Use file_paths instead.",
)
file_paths: Optional[Union[Path, List[Path], str, List[str]]] = Field(
default_factory=list, description="The path to the file"
) )
content: Dict[Path, str] = Field(init=False, default_factory=dict) content: Dict[Path, str] = Field(init=False, default_factory=dict)
storage: Optional[KnowledgeStorage] = Field(default=None) storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
safe_file_paths: List[Path] = Field(default_factory=list) safe_file_paths: List[Path] = Field(default_factory=list)
@field_validator("file_path", "file_paths", mode="before")
def validate_file_path(cls, v, info):
"""Validate that at least one of file_path or file_paths is provided."""
# Single check if both are None, O(1) instead of nested conditions
if v is None and info.data.get("file_path" if info.field_name == "file_paths" else "file_paths") is None:
raise ValueError("Either file_path or file_paths must be provided")
return v
def model_post_init(self, _): def model_post_init(self, _):
"""Post-initialization method to load content.""" """Post-initialization method to load content."""
self.safe_file_paths = self._process_file_paths() self.safe_file_paths = self._process_file_paths()
self.validate_content() self.validate_paths()
self.content = self.load_content() self.content = self.load_content()
@abstractmethod @abstractmethod
@@ -44,13 +32,13 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
"""Load and preprocess file content. Should be overridden by subclasses. Assume that the file path is relative to the project root in the knowledge directory.""" """Load and preprocess file content. Should be overridden by subclasses. Assume that the file path is relative to the project root in the knowledge directory."""
pass pass
def validate_content(self): def validate_paths(self):
"""Validate the paths.""" """Validate the paths."""
for path in self.safe_file_paths: for path in self.safe_file_paths:
if not path.exists(): if not path.exists():
self._logger.log( self._logger.log(
"error", "error",
f"File not found: {path}. Try adding sources to the knowledge directory. If it's inside the knowledge directory, use the relative path.", f"File not found: {path}. Try adding sources to the knowledge directory. If its inside the knowledge directory, use the relative path.",
color="red", color="red",
) )
raise FileNotFoundError(f"File not found: {path}") raise FileNotFoundError(f"File not found: {path}")
@@ -61,12 +49,10 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
color="red", color="red",
) )
def _save_documents(self): def save_documents(self, metadata: Dict[str, Any]):
"""Save the documents to the storage.""" """Save the documents to the storage."""
if self.storage: chunk_metadatas = [metadata.copy() for _ in self.chunks]
self.storage.save(self.chunks) self.storage.save(self.chunks, chunk_metadatas)
else:
raise ValueError("No storage found to save documents.")
def convert_to_path(self, path: Union[Path, str]) -> Path: def convert_to_path(self, path: Union[Path, str]) -> Path:
"""Convert a path to a Path object.""" """Convert a path to a Path object."""
@@ -74,30 +60,13 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
def _process_file_paths(self) -> List[Path]: def _process_file_paths(self) -> List[Path]:
"""Convert file_path to a list of Path objects.""" """Convert file_path to a list of Path objects."""
paths = (
if hasattr(self, "file_path") and self.file_path is not None: [self.file_path]
self._logger.log( if isinstance(self.file_path, (str, Path))
"warning", else self.file_path
"The 'file_path' attribute is deprecated and will be removed in a future version. Please use 'file_paths' instead.",
color="yellow",
)
self.file_paths = self.file_path
if self.file_paths is None:
raise ValueError("Your source must be provided with a file_paths: []")
# Convert single path to list
path_list: List[Union[Path, str]] = (
[self.file_paths]
if isinstance(self.file_paths, (str, Path))
else list(self.file_paths)
if isinstance(self.file_paths, list)
else []
) )
if not path_list: if not isinstance(paths, list):
raise ValueError( raise ValueError("file_path must be a Path, str, or a list of these types")
"file_path/file_paths must be a Path, str, or a list of these types"
)
return [self.convert_to_path(path) for path in path_list] return [self.convert_to_path(path) for path in paths]
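In practice the normalization accepts any of these spellings (a sketch; file names are illustrative and resolved relative to the knowledge directory):

TextFileKnowledgeSource(file_paths="notes.txt")              # single str is wrapped in a list
TextFileKnowledgeSource(file_paths=[Path("a.md"), "b.md"])   # mixed list of Path and str
TextFileKnowledgeSource(file_path="notes.txt")               # deprecated: still works, logs a warning
# Passing neither raises a ValueError.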

View File

@@ -1,5 +1,5 @@
from abc import ABC, abstractmethod from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional from typing import List, Dict, Any, Optional
import numpy as np import numpy as np
from pydantic import BaseModel, ConfigDict, Field from pydantic import BaseModel, ConfigDict, Field
@@ -16,12 +16,12 @@ class BaseKnowledgeSource(BaseModel, ABC):
chunk_embeddings: List[np.ndarray] = Field(default_factory=list) chunk_embeddings: List[np.ndarray] = Field(default_factory=list)
model_config = ConfigDict(arbitrary_types_allowed=True) model_config = ConfigDict(arbitrary_types_allowed=True)
storage: Optional[KnowledgeStorage] = Field(default=None) storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
metadata: Dict[str, Any] = Field(default_factory=dict) # Currently unused metadata: Dict[str, Any] = Field(default_factory=dict)
collection_name: Optional[str] = Field(default=None) collection_name: Optional[str] = Field(default=None)
@abstractmethod @abstractmethod
def validate_content(self) -> Any: def load_content(self) -> Dict[Any, str]:
"""Load and preprocess content from the source.""" """Load and preprocess content from the source."""
pass pass
@@ -41,12 +41,9 @@ class BaseKnowledgeSource(BaseModel, ABC):
for i in range(0, len(text), self.chunk_size - self.chunk_overlap) for i in range(0, len(text), self.chunk_size - self.chunk_overlap)
] ]
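A worked example of that stride: with chunk_size=1000 and chunk_overlap=100 the window advances 900 characters at a time, so chunk 0 is text[0:1000], chunk 1 is text[900:1900], chunk 2 is text[1800:2800], and each chunk repeats the final 100 characters of its predecessor. (The concrete sizes here are illustrative; see the field defaults for the real values.)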
def _save_documents(self): def save_documents(self, metadata: Dict[str, Any]):
""" """
Save the documents to the storage. Save the documents to the storage.
This method should be called after the chunks and embeddings are generated. This method should be called after the chunks and embeddings are generated.
""" """
if self.storage: self.storage.save(self.chunks, metadata)
self.storage.save(self.chunks)
else:
raise ValueError("No storage found to save documents.")

View File

@@ -1,133 +0,0 @@
from pathlib import Path
from typing import Iterator, List, Optional, Union
from urllib.parse import urlparse
try:
from docling.datamodel.base_models import InputFormat
from docling.document_converter import DocumentConverter
from docling.exceptions import ConversionError
from docling_core.transforms.chunker.hierarchical_chunker import HierarchicalChunker
from docling_core.types.doc.document import DoclingDocument
DOCLING_AVAILABLE = True
except ImportError:
DOCLING_AVAILABLE = False
from pydantic import Field
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.utilities.constants import KNOWLEDGE_DIRECTORY
from crewai.utilities.logger import Logger
class CrewDoclingSource(BaseKnowledgeSource):
"""Default Source class for converting documents to markdown or json
This will auto support PDF, DOCX, and TXT, XLSX, Images, and HTML files without any additional dependencies and follows the docling package as the source of truth.
"""
def __init__(self, *args, **kwargs):
if not DOCLING_AVAILABLE:
raise ImportError(
"The docling package is required to use CrewDoclingSource. "
"Please install it using: uv add docling"
)
super().__init__(*args, **kwargs)
_logger: Logger = Logger(verbose=True)
file_path: Optional[List[Union[Path, str]]] = Field(default=None)
file_paths: List[Union[Path, str]] = Field(default_factory=list)
chunks: List[str] = Field(default_factory=list)
safe_file_paths: List[Union[Path, str]] = Field(default_factory=list)
content: List[DoclingDocument] = Field(default_factory=list)
document_converter: DocumentConverter = Field(
default_factory=lambda: DocumentConverter(
allowed_formats=[
InputFormat.MD,
InputFormat.ASCIIDOC,
InputFormat.PDF,
InputFormat.DOCX,
InputFormat.HTML,
InputFormat.IMAGE,
InputFormat.XLSX,
InputFormat.PPTX,
]
)
)
def model_post_init(self, _) -> None:
if self.file_path:
self._logger.log(
"warning",
"The 'file_path' attribute is deprecated and will be removed in a future version. Please use 'file_paths' instead.",
color="yellow",
)
self.file_paths = self.file_path
self.safe_file_paths = self.validate_content()
self.content = self._load_content()
def _load_content(self) -> List[DoclingDocument]:
try:
return self._convert_source_to_docling_documents()
except ConversionError as e:
self._logger.log(
"error",
f"Error loading content: {e}. Supported formats: {self.document_converter.allowed_formats}",
"red",
)
raise e
except Exception as e:
self._logger.log("error", f"Error loading content: {e}")
raise e
def add(self) -> None:
if self.content is None:
return
for doc in self.content:
new_chunks_iterable = self._chunk_doc(doc)
self.chunks.extend(list(new_chunks_iterable))
self._save_documents()
def _convert_source_to_docling_documents(self) -> List[DoclingDocument]:
conv_results_iter = self.document_converter.convert_all(self.safe_file_paths)
return [result.document for result in conv_results_iter]
def _chunk_doc(self, doc: DoclingDocument) -> Iterator[str]:
chunker = HierarchicalChunker()
for chunk in chunker.chunk(doc):
yield chunk.text
def validate_content(self) -> List[Union[Path, str]]:
processed_paths: List[Union[Path, str]] = []
for path in self.file_paths:
if isinstance(path, str):
if path.startswith(("http://", "https://")):
try:
if self._validate_url(path):
processed_paths.append(path)
else:
raise ValueError(f"Invalid URL format: {path}")
except Exception as e:
raise ValueError(f"Invalid URL: {path}. Error: {str(e)}")
else:
local_path = Path(KNOWLEDGE_DIRECTORY + "/" + path)
if local_path.exists():
processed_paths.append(local_path)
else:
raise FileNotFoundError(f"File not found: {local_path}")
else:
# this is an instance of Path
processed_paths.append(path)
return processed_paths
def _validate_url(self, url: str) -> bool:
try:
result = urlparse(url)
return all(
[
result.scheme in ("http", "https"),
result.netloc,
len(result.netloc.split(".")) >= 2, # Ensure domain has TLD
]
)
except Exception:
return False

View File

@@ -1,6 +1,6 @@
import csv import csv
from pathlib import Path
from typing import Dict, List from typing import Dict, List
from pathlib import Path
from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource
@@ -30,7 +30,7 @@ class CSVKnowledgeSource(BaseFileKnowledgeSource):
) )
new_chunks = self._chunk_text(content_str) new_chunks = self._chunk_text(content_str)
self.chunks.extend(new_chunks) self.chunks.extend(new_chunks)
self._save_documents() self.save_documents(metadata=self.metadata)
def _chunk_text(self, text: str) -> List[str]: def _chunk_text(self, text: str) -> List[str]:
"""Utility method to split text into chunks.""" """Utility method to split text into chunks."""

View File

@@ -1,6 +1,5 @@
from pathlib import Path
from typing import Dict, List from typing import Dict, List
from pathlib import Path
from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource
@@ -45,7 +44,7 @@ class ExcelKnowledgeSource(BaseFileKnowledgeSource):
new_chunks = self._chunk_text(content_str) new_chunks = self._chunk_text(content_str)
self.chunks.extend(new_chunks) self.chunks.extend(new_chunks)
self._save_documents() self.save_documents(metadata=self.metadata)
def _chunk_text(self, text: str) -> List[str]: def _chunk_text(self, text: str) -> List[str]:
"""Utility method to split text into chunks.""" """Utility method to split text into chunks."""

View File

@@ -1,6 +1,6 @@
import json import json
from pathlib import Path
from typing import Any, Dict, List from typing import Any, Dict, List
from pathlib import Path
from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource
@@ -42,7 +42,7 @@ class JSONKnowledgeSource(BaseFileKnowledgeSource):
) )
new_chunks = self._chunk_text(content_str) new_chunks = self._chunk_text(content_str)
self.chunks.extend(new_chunks) self.chunks.extend(new_chunks)
self._save_documents() self.save_documents(metadata=self.metadata)
def _chunk_text(self, text: str) -> List[str]: def _chunk_text(self, text: str) -> List[str]:
"""Utility method to split text into chunks.""" """Utility method to split text into chunks."""

View File

@@ -1,5 +1,5 @@
from typing import List, Dict
from pathlib import Path from pathlib import Path
from typing import Dict, List
from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource
@@ -43,7 +43,7 @@ class PDFKnowledgeSource(BaseFileKnowledgeSource):
for _, text in self.content.items(): for _, text in self.content.items():
new_chunks = self._chunk_text(text) new_chunks = self._chunk_text(text)
self.chunks.extend(new_chunks) self.chunks.extend(new_chunks)
self._save_documents() self.save_documents(metadata=self.metadata)
def _chunk_text(self, text: str) -> List[str]: def _chunk_text(self, text: str) -> List[str]:
"""Utility method to split text into chunks.""" """Utility method to split text into chunks."""

View File

@@ -13,9 +13,9 @@ class StringKnowledgeSource(BaseKnowledgeSource):
def model_post_init(self, _): def model_post_init(self, _):
"""Post-initialization method to validate content.""" """Post-initialization method to validate content."""
self.validate_content() self.load_content()
def validate_content(self): def load_content(self):
"""Validate string content.""" """Validate string content."""
if not isinstance(self.content, str): if not isinstance(self.content, str):
raise ValueError("StringKnowledgeSource only accepts string content") raise ValueError("StringKnowledgeSource only accepts string content")
@@ -24,7 +24,7 @@ class StringKnowledgeSource(BaseKnowledgeSource):
"""Add string content to the knowledge source, chunk it, compute embeddings, and save them.""" """Add string content to the knowledge source, chunk it, compute embeddings, and save them."""
new_chunks = self._chunk_text(self.content) new_chunks = self._chunk_text(self.content)
self.chunks.extend(new_chunks) self.chunks.extend(new_chunks)
self._save_documents() self.save_documents(metadata=self.metadata)
def _chunk_text(self, text: str) -> List[str]: def _chunk_text(self, text: str) -> List[str]:
"""Utility method to split text into chunks.""" """Utility method to split text into chunks."""

View File

@@ -1,5 +1,5 @@
from pathlib import Path
from typing import Dict, List from typing import Dict, List
from pathlib import Path
from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource
@@ -24,7 +24,7 @@ class TextFileKnowledgeSource(BaseFileKnowledgeSource):
for _, text in self.content.items(): for _, text in self.content.items():
new_chunks = self._chunk_text(text) new_chunks = self._chunk_text(text)
self.chunks.extend(new_chunks) self.chunks.extend(new_chunks)
self._save_documents() self.save_documents(metadata=self.metadata)
def _chunk_text(self, text: str) -> List[str]: def _chunk_text(self, text: str) -> List[str]:
"""Utility method to split text into chunks.""" """Utility method to split text into chunks."""

View File

@@ -1,5 +1,5 @@
from abc import ABC, abstractmethod from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional from typing import Dict, Any, List, Optional
class BaseKnowledgeStorage(ABC): class BaseKnowledgeStorage(ABC):

View File

@@ -1,22 +1,18 @@
import contextlib import contextlib
import hashlib
import io import io
import logging import logging
import os
import shutil
from typing import Any, Dict, List, Optional, Union, cast
import chromadb import chromadb
import chromadb.errors import os
from chromadb.api import ClientAPI
from chromadb.api.types import OneOrMany
from chromadb.config import Settings
from crewai.knowledge.storage.base_knowledge_storage import BaseKnowledgeStorage import chromadb.errors
from crewai.utilities import EmbeddingConfigurator
from crewai.utilities.constants import KNOWLEDGE_DIRECTORY
from crewai.utilities.logger import Logger
from crewai.utilities.paths import db_storage_path from crewai.utilities.paths import db_storage_path
from typing import Optional, List, Dict, Any, Union
from crewai.utilities import EmbeddingConfigurator
from crewai.knowledge.storage.base_knowledge_storage import BaseKnowledgeStorage
import hashlib
from chromadb.config import Settings
from chromadb.api import ClientAPI
from crewai.utilities.logger import Logger
@contextlib.contextmanager @contextlib.contextmanager
@@ -107,77 +103,53 @@ class KnowledgeStorage(BaseKnowledgeStorage):
raise Exception("Failed to create or get collection") raise Exception("Failed to create or get collection")
def reset(self): def reset(self):
base_path = os.path.join(db_storage_path(), KNOWLEDGE_DIRECTORY) if self.app:
if not self.app: self.app.reset()
else:
base_path = os.path.join(db_storage_path(), "knowledge")
self.app = chromadb.PersistentClient( self.app = chromadb.PersistentClient(
path=base_path, path=base_path,
settings=Settings(allow_reset=True), settings=Settings(allow_reset=True),
) )
self.app.reset()
self.app.reset()
shutil.rmtree(base_path)
self.app = None
self.collection = None
def save( def save(
self, self,
documents: List[str], documents: List[str],
metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, metadata: Union[Dict[str, Any], List[Dict[str, Any]]],
): ):
if not self.collection: if self.collection:
try:
metadatas = [metadata] if isinstance(metadata, dict) else metadata
ids = [
hashlib.sha256(doc.encode("utf-8")).hexdigest() for doc in documents
]
self.collection.upsert(
documents=documents,
metadatas=metadatas,
ids=ids,
)
except chromadb.errors.InvalidDimensionException as e:
Logger(verbose=True).log(
"error",
"Embedding dimension mismatch. This usually happens when mixing different embedding models. Try resetting the collection using `crewai reset-memories -a`",
"red",
)
raise ValueError(
"Embedding dimension mismatch. Make sure you're using the same embedding model "
"across all operations with this collection."
"Try resetting the collection using `crewai reset-memories -a`"
) from e
except Exception as e:
Logger(verbose=True).log(
"error", f"Failed to upsert documents: {e}", "red"
)
raise
else:
raise Exception("Collection not initialized") raise Exception("Collection not initialized")
try:
# Create a dictionary to store unique documents
unique_docs = {}
# Generate IDs and create a mapping of id -> (document, metadata)
for idx, doc in enumerate(documents):
doc_id = hashlib.sha256(doc.encode("utf-8")).hexdigest()
doc_metadata = None
if metadata is not None:
if isinstance(metadata, list):
doc_metadata = metadata[idx]
else:
doc_metadata = metadata
unique_docs[doc_id] = (doc, doc_metadata)
# Prepare filtered lists for ChromaDB
filtered_docs = []
filtered_metadata = []
filtered_ids = []
# Build the filtered lists
for doc_id, (doc, meta) in unique_docs.items():
filtered_docs.append(doc)
filtered_metadata.append(meta)
filtered_ids.append(doc_id)
# If we have no metadata at all, set it to None
final_metadata: Optional[OneOrMany[chromadb.Metadata]] = (
None if all(m is None for m in filtered_metadata) else filtered_metadata
)
self.collection.upsert(
documents=filtered_docs,
metadatas=final_metadata,
ids=filtered_ids,
)
except chromadb.errors.InvalidDimensionException as e:
Logger(verbose=True).log(
"error",
"Embedding dimension mismatch. This usually happens when mixing different embedding models. Try resetting the collection using `crewai reset-memories -a`",
"red",
)
raise ValueError(
"Embedding dimension mismatch. Make sure you're using the same embedding model "
"across all operations with this collection."
"Try resetting the collection using `crewai reset-memories -a`"
) from e
except Exception as e:
Logger(verbose=True).log("error", f"Failed to upsert documents: {e}", "red")
raise
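A worked example of what the new deduplication actually hands to upsert (a sketch):

documents = ["alpha", "beta", "alpha"]
metadata = [{"source": "a"}, {"source": "b"}, {"source": "c"}]
# IDs are sha256 digests of the text, so both "alpha" entries share one ID.
# Dict insertion order means the last occurrence wins, leaving exactly two upserts:
#   sha256("alpha") -> ("alpha", {"source": "c"})
#   sha256("beta")  -> ("beta",  {"source": "b"})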
def _create_default_embedding_function(self): def _create_default_embedding_function(self):
from chromadb.utils.embedding_functions.openai_embedding_function import ( from chromadb.utils.embedding_functions.openai_embedding_function import (
OpenAIEmbeddingFunction, OpenAIEmbeddingFunction,

View File

@@ -1,27 +1,17 @@
import json
import logging import logging
import os
import sys import sys
import threading import threading
import warnings import warnings
from contextlib import contextmanager from contextlib import contextmanager
from typing import Any, Dict, List, Optional, Union, cast from typing import Any, Dict, List, Optional, Union
from dotenv import load_dotenv
with warnings.catch_warnings():
warnings.simplefilter("ignore", UserWarning)
import litellm
from litellm import Choices, get_supported_openai_params
from litellm.types.utils import ModelResponse
import litellm
from litellm import get_supported_openai_params
from crewai.utilities.exceptions.context_window_exceeding_exception import ( from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException, LLMContextLengthExceededException,
) )
load_dotenv()
class FilteredStream: class FilteredStream:
def __init__(self, original_stream): def __init__(self, original_stream):
@@ -30,7 +20,6 @@ class FilteredStream:
def write(self, s) -> int: def write(self, s) -> int:
with self._lock: with self._lock:
# Filter out extraneous messages from LiteLLM
if ( if (
"Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new" "Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new"
in s in s
@@ -53,11 +42,6 @@ LLM_CONTEXT_WINDOW_SIZES = {
"gpt-4-turbo": 128000, "gpt-4-turbo": 128000,
"o1-preview": 128000, "o1-preview": 128000,
"o1-mini": 128000, "o1-mini": 128000,
# gemini
"gemini-2.0-flash": 1048576,
"gemini-1.5-pro": 2097152,
"gemini-1.5-flash": 1048576,
"gemini-1.5-flash-8b": 1048576,
# deepseek # deepseek
"deepseek-chat": 128000, "deepseek-chat": 128000,
# groq # groq
@@ -74,42 +58,24 @@ LLM_CONTEXT_WINDOW_SIZES = {
"llama3-70b-8192": 8192, "llama3-70b-8192": 8192,
"llama3-8b-8192": 8192, "llama3-8b-8192": 8192,
"mixtral-8x7b-32768": 32768, "mixtral-8x7b-32768": 32768,
"llama-3.3-70b-versatile": 128000,
"llama-3.3-70b-instruct": 128000,
# sambanova
"Meta-Llama-3.3-70B-Instruct": 131072,
"QwQ-32B-Preview": 8192,
"Qwen2.5-72B-Instruct": 8192,
"Qwen2.5-Coder-32B-Instruct": 8192,
"Meta-Llama-3.1-405B-Instruct": 8192,
"Meta-Llama-3.1-70B-Instruct": 131072,
"Meta-Llama-3.1-8B-Instruct": 131072,
"Llama-3.2-90B-Vision-Instruct": 16384,
"Llama-3.2-11B-Vision-Instruct": 16384,
"Meta-Llama-3.2-3B-Instruct": 4096,
"Meta-Llama-3.2-1B-Instruct": 16384,
} }
DEFAULT_CONTEXT_WINDOW_SIZE = 8192
CONTEXT_WINDOW_USAGE_RATIO = 0.75
@contextmanager @contextmanager
def suppress_warnings(): def suppress_warnings():
with warnings.catch_warnings(): with warnings.catch_warnings():
warnings.filterwarnings("ignore") warnings.filterwarnings("ignore")
warnings.filterwarnings(
"ignore", message="open_text is deprecated*", category=DeprecationWarning
)
# Redirect stdout and stderr # Redirect stdout and stderr
old_stdout = sys.stdout old_stdout = sys.stdout
old_stderr = sys.stderr old_stderr = sys.stderr
sys.stdout = FilteredStream(old_stdout) sys.stdout = FilteredStream(old_stdout)
sys.stderr = FilteredStream(old_stderr) sys.stderr = FilteredStream(old_stderr)
try: try:
yield yield
finally: finally:
# Restore stdout and stderr
sys.stdout = old_stdout sys.stdout = old_stdout
sys.stderr = old_stderr sys.stderr = old_stderr
@@ -130,12 +96,13 @@ class LLM:
logit_bias: Optional[Dict[int, float]] = None, logit_bias: Optional[Dict[int, float]] = None,
response_format: Optional[Dict[str, Any]] = None, response_format: Optional[Dict[str, Any]] = None,
seed: Optional[int] = None, seed: Optional[int] = None,
logprobs: Optional[int] = None, logprobs: Optional[bool] = None,
top_logprobs: Optional[int] = None, top_logprobs: Optional[int] = None,
base_url: Optional[str] = None, base_url: Optional[str] = None,
api_version: Optional[str] = None, api_version: Optional[str] = None,
api_key: Optional[str] = None, api_key: Optional[str] = None,
callbacks: List[Any] = [], callbacks: List[Any] = [],
**kwargs,
): ):
self.model = model self.model = model
self.timeout = timeout self.timeout = timeout
@@ -156,41 +123,18 @@ class LLM:
self.api_version = api_version self.api_version = api_version
self.api_key = api_key self.api_key = api_key
self.callbacks = callbacks self.callbacks = callbacks
self.context_window_size = 0 self.kwargs = kwargs
litellm.drop_params = True litellm.drop_params = True
litellm.set_verbose = False
self.set_callbacks(callbacks) self.set_callbacks(callbacks)
self.set_env_callbacks()
def call( def call(self, messages: List[Dict[str, str]], callbacks: List[Any] = []) -> str:
self,
messages: List[Dict[str, str]],
tools: Optional[List[dict]] = None,
callbacks: Optional[List[Any]] = None,
available_functions: Optional[Dict[str, Any]] = None,
) -> str:
"""
High-level call method that:
1) Calls litellm.completion
2) Checks for function/tool calls
3) If a tool call is found:
a) executes the function
b) returns the result
4) If no tool call, returns the text response
:param messages: The conversation messages
:param tools: Optional list of function schemas for function calling
:param callbacks: Optional list of callbacks
:param available_functions: A dictionary mapping function_name -> actual Python function
:return: Final text response from the LLM or the tool result
"""
with suppress_warnings(): with suppress_warnings():
if callbacks and len(callbacks) > 0: if callbacks and len(callbacks) > 0:
self.set_callbacks(callbacks) self.set_callbacks(callbacks)
try: try:
# --- 1) Make the completion call
params = { params = {
"model": self.model, "model": self.model,
"messages": messages, "messages": messages,
@@ -211,58 +155,21 @@ class LLM:
"api_version": self.api_version, "api_version": self.api_version,
"api_key": self.api_key, "api_key": self.api_key,
"stream": False, "stream": False,
"tools": tools, # pass the tool schema **self.kwargs,
} }
# Remove None values to avoid passing unnecessary parameters
params = {k: v for k, v in params.items() if v is not None} params = {k: v for k, v in params.items() if v is not None}
response = litellm.completion(**params) response = litellm.completion(**params)
response_message = cast(Choices, cast(ModelResponse, response).choices)[ return response["choices"][0]["message"]["content"]
0
].message
text_response = response_message.content or ""
tool_calls = getattr(response_message, "tool_calls", [])
# --- 2) If no tool calls, return the text response
if not tool_calls or not available_functions:
return text_response
# --- 3) Handle the tool call
tool_call = tool_calls[0]
function_name = tool_call.function.name
if function_name in available_functions:
try:
function_args = json.loads(tool_call.function.arguments)
except json.JSONDecodeError as e:
logging.warning(f"Failed to parse function arguments: {e}")
return text_response
fn = available_functions[function_name]
try:
# Call the actual tool function
result = fn(**function_args)
return result
except Exception as e:
logging.error(
f"Error executing function '{function_name}': {e}"
)
return text_response
else:
logging.warning(
f"Tool call requested unknown function '{function_name}'"
)
return text_response
except Exception as e: except Exception as e:
if not LLMContextLengthExceededException( if not LLMContextLengthExceededException(
str(e) str(e)
)._is_context_limit_error(str(e)): )._is_context_limit_error(str(e)):
logging.error(f"LiteLLM call failed: {str(e)}") logging.error(f"LiteLLM call failed: {str(e)}")
raise
raise # Re-raise the exception after logging
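End to end, the expanded call() can now close the loop on a single tool call. A sketch (the weather tool and model name are made up; the schema follows the OpenAI function-calling format that litellm accepts):

def get_weather(city: str) -> str:
    return f"It is sunny in {city}."

weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

llm = LLM(model="gpt-4o-mini")
result = llm.call(
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    tools=[weather_tool],
    available_functions={"get_weather": get_weather},
)
# If the model emits a tool call, result is get_weather("Paris");
# otherwise it is the plain text completion.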
def supports_function_calling(self) -> bool: def supports_function_calling(self) -> bool:
try: try:
@@ -281,71 +188,17 @@ class LLM:
return False return False
def get_context_window_size(self) -> int: def get_context_window_size(self) -> int:
""" # Only using 75% of the context window size to avoid cutting the message in the middle
Returns the context window size, using 75% of the maximum to avoid return int(LLM_CONTEXT_WINDOW_SIZES.get(self.model, 8192) * 0.75)
cutting off messages mid-thread.
"""
if self.context_window_size != 0:
return self.context_window_size
self.context_window_size = int(
DEFAULT_CONTEXT_WINDOW_SIZE * CONTEXT_WINDOW_USAGE_RATIO
)
for key, value in LLM_CONTEXT_WINDOW_SIZES.items():
if self.model.startswith(key):
self.context_window_size = int(value * CONTEXT_WINDOW_USAGE_RATIO)
return self.context_window_size
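Worked example against the table above: LLM(model="deepseek-chat").get_context_window_size() matches the "deepseek-chat" prefix, giving int(128000 * 0.75) = 96000, while an unrecognized model falls back to int(8192 * 0.75) = 6144. The result is cached in self.context_window_size after the first call.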
def set_callbacks(self, callbacks: List[Any]): def set_callbacks(self, callbacks: List[Any]):
""" callback_types = [type(callback) for callback in callbacks]
Attempt to keep a single set of callbacks in litellm by removing old for callback in litellm.success_callback[:]:
duplicates and adding new ones. if type(callback) in callback_types:
""" litellm.success_callback.remove(callback)
with suppress_warnings():
callback_types = [type(callback) for callback in callbacks]
for callback in litellm.success_callback[:]:
if type(callback) in callback_types:
litellm.success_callback.remove(callback)
for callback in litellm._async_success_callback[:]: for callback in litellm._async_success_callback[:]:
if type(callback) in callback_types: if type(callback) in callback_types:
litellm._async_success_callback.remove(callback) litellm._async_success_callback.remove(callback)
litellm.callbacks = callbacks litellm.callbacks = callbacks
def set_env_callbacks(self):
"""
Sets the success and failure callbacks for the LiteLLM library from environment variables.
This method reads the `LITELLM_SUCCESS_CALLBACKS` and `LITELLM_FAILURE_CALLBACKS`
environment variables, which should contain comma-separated lists of callback names.
It then assigns these lists to `litellm.success_callback` and `litellm.failure_callback`,
respectively.
If the environment variables are not set or are empty, the corresponding callback lists
will be set to empty lists.
Example:
LITELLM_SUCCESS_CALLBACKS="langfuse,langsmith"
LITELLM_FAILURE_CALLBACKS="langfuse"
This will set `litellm.success_callback` to ["langfuse", "langsmith"] and
`litellm.failure_callback` to ["langfuse"].
"""
with suppress_warnings():
success_callbacks_str = os.environ.get("LITELLM_SUCCESS_CALLBACKS", "")
success_callbacks = []
if success_callbacks_str:
success_callbacks = [
cb.strip() for cb in success_callbacks_str.split(",") if cb.strip()
]
failure_callbacks_str = os.environ.get("LITELLM_FAILURE_CALLBACKS", "")
failure_callbacks = []
if failure_callbacks_str:
failure_callbacks = [
cb.strip() for cb in failure_callbacks_str.split(",") if cb.strip()
]
litellm.success_callback = success_callbacks
litellm.failure_callback = failure_callbacks

View File

@@ -1,4 +1,4 @@
from typing import Any, Dict, Optional from typing import Optional, Dict, Any
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory, UserMemory from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory, UserMemory

View File

@@ -1,4 +1,4 @@
from typing import Any, Dict, List, Optional from typing import Any, Dict, Optional, List
from crewai.memory.storage.rag_storage import RAGStorage from crewai.memory.storage.rag_storage import RAGStorage

View File

@@ -1,5 +1,4 @@
from typing import Any, Dict, Optional from typing import Any, Dict, Optional
from crewai.memory.memory import Memory from crewai.memory.memory import Memory
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
from crewai.memory.storage.rag_storage import RAGStorage from crewai.memory.storage.rag_storage import RAGStorage
@@ -33,10 +32,7 @@ class ShortTermMemory(Memory):
storage storage
if storage if storage
else RAGStorage( else RAGStorage(
type="short_term", type="short_term", embedder_config=embedder_config, crew=crew, path=path
embedder_config=embedder_config,
crew=crew,
path=path,
) )
) )
super().__init__(storage) super().__init__(storage)

View File

@@ -2,7 +2,6 @@ import os
from typing import Any, Dict, List from typing import Any, Dict, List
from mem0 import MemoryClient from mem0 import MemoryClient
from crewai.memory.storage.interface import Storage from crewai.memory.storage.interface import Storage
@@ -27,18 +26,10 @@ class Mem0Storage(Storage):
raise ValueError("User ID is required for user memory type") raise ValueError("User ID is required for user memory type")
# API key in memory config overrides the environment variable # API key in memory config overrides the environment variable
config = self.memory_config.get("config", {}) mem0_api_key = self.memory_config.get("config", {}).get("api_key") or os.getenv(
mem0_api_key = config.get("api_key") or os.getenv("MEM0_API_KEY") "MEM0_API_KEY"
mem0_org_id = config.get("org_id") )
mem0_project_id = config.get("project_id") self.memory = MemoryClient(api_key=mem0_api_key)
# Initialize MemoryClient with available parameters
if mem0_org_id and mem0_project_id:
self.memory = MemoryClient(
api_key=mem0_api_key, org_id=mem0_org_id, project_id=mem0_project_id
)
else:
self.memory = MemoryClient(api_key=mem0_api_key)
def _sanitize_role(self, role: str) -> str: def _sanitize_role(self, role: str) -> str:
""" """
@@ -65,7 +56,7 @@ class Mem0Storage(Storage):
metadata={"type": "long_term", **metadata}, metadata={"type": "long_term", **metadata},
) )
elif self.memory_type == "entities": elif self.memory_type == "entities":
entity_name = self._get_agent_name() entity_name = None
self.memory.add( self.memory.add(
value, user_id=entity_name, metadata={"type": "entity", **metadata} value, user_id=entity_name, metadata={"type": "entity", **metadata}
) )

View File

@@ -4,14 +4,12 @@ import logging
import os import os
import shutil import shutil
import uuid import uuid
from typing import Any, Dict, List, Optional from typing import Any, Dict, List, Optional
from chromadb.api import ClientAPI from chromadb.api import ClientAPI
from crewai.memory.storage.base_rag_storage import BaseRAGStorage from crewai.memory.storage.base_rag_storage import BaseRAGStorage
from crewai.utilities import EmbeddingConfigurator
from crewai.utilities.constants import MAX_FILE_NAME_LENGTH
from crewai.utilities.paths import db_storage_path from crewai.utilities.paths import db_storage_path
from crewai.utilities import EmbeddingConfigurator
@contextlib.contextmanager @contextlib.contextmanager
@@ -39,15 +37,12 @@ class RAGStorage(BaseRAGStorage):
app: ClientAPI | None = None app: ClientAPI | None = None
def __init__( def __init__(self, type, allow_reset=True, embedder_config=None, crew=None, path=None):
self, type, allow_reset=True, embedder_config=None, crew=None, path=None
):
super().__init__(type, allow_reset, embedder_config, crew) super().__init__(type, allow_reset, embedder_config, crew)
agents = crew.agents if crew else [] agents = crew.agents if crew else []
agents = [self._sanitize_role(agent.role) for agent in agents] agents = [self._sanitize_role(agent.role) for agent in agents]
agents = "_".join(agents) agents = "_".join(agents)
self.agents = agents self.agents = agents
self.storage_file_name = self._build_storage_file_name(type, agents)
self.type = type self.type = type
@@ -65,7 +60,7 @@ class RAGStorage(BaseRAGStorage):
self._set_embedder_config() self._set_embedder_config()
chroma_client = chromadb.PersistentClient( chroma_client = chromadb.PersistentClient(
path=self.path if self.path else self.storage_file_name, path=self.path if self.path else f"{db_storage_path()}/{self.type}/{self.agents}",
settings=Settings(allow_reset=self.allow_reset), settings=Settings(allow_reset=self.allow_reset),
) )
@@ -86,20 +81,6 @@ class RAGStorage(BaseRAGStorage):
""" """
return role.replace("\n", "").replace(" ", "_").replace("/", "_") return role.replace("\n", "").replace(" ", "_").replace("/", "_")
def _build_storage_file_name(self, type: str, file_name: str) -> str:
"""
Ensures file name does not exceed max allowed by OS
"""
base_path = f"{db_storage_path()}/{type}"
if len(file_name) > MAX_FILE_NAME_LENGTH:
logging.warning(
f"Trimming file name from {len(file_name)} to {MAX_FILE_NAME_LENGTH} characters."
)
file_name = file_name[:MAX_FILE_NAME_LENGTH]
return f"{base_path}/{file_name}"
def save(self, value: Any, metadata: Dict[str, Any]) -> None: def save(self, value: Any, metadata: Dict[str, Any]) -> None:
if not hasattr(self, "app") or not hasattr(self, "collection"): if not hasattr(self, "app") or not hasattr(self, "collection"):
self._initialize_app() self._initialize_app()
@@ -150,11 +131,9 @@ class RAGStorage(BaseRAGStorage):
def reset(self) -> None: def reset(self) -> None:
try: try:
shutil.rmtree(f"{db_storage_path()}/{self.type}")
if self.app: if self.app:
self.app.reset() self.app.reset()
shutil.rmtree(f"{db_storage_path()}/{self.type}")
self.app = None
self.collection = None
except Exception as e: except Exception as e:
if "attempt to write a readonly database" in str(e): if "attempt to write a readonly database" in str(e):
# Ignore this specific error # Ignore this specific error

View File

@@ -37,7 +37,7 @@ class UserMemory(Memory):
limit: int = 3, limit: int = 3,
score_threshold: float = 0.35, score_threshold: float = 0.35,
): ):
results = self.storage.search( results = super().search(
query=query, query=query,
limit=limit, limit=limit,
score_threshold=score_threshold, score_threshold=score_threshold,

View File

@@ -4,23 +4,18 @@ from typing import Callable
from crewai import Crew from crewai import Crew
from crewai.project.utils import memoize from crewai.project.utils import memoize
"""Decorators for defining crew components and their behaviors."""
def before_kickoff(func): def before_kickoff(func):
"""Marks a method to execute before crew kickoff."""
func.is_before_kickoff = True func.is_before_kickoff = True
return func return func
def after_kickoff(func): def after_kickoff(func):
"""Marks a method to execute after crew kickoff."""
func.is_after_kickoff = True func.is_after_kickoff = True
return func return func
def task(func): def task(func):
"""Marks a method as a crew task."""
func.is_task = True func.is_task = True
@wraps(func) @wraps(func)
@@ -34,53 +29,43 @@ def task(func):
def agent(func): def agent(func):
"""Marks a method as a crew agent."""
func.is_agent = True func.is_agent = True
func = memoize(func) func = memoize(func)
return func return func
def llm(func): def llm(func):
"""Marks a method as an LLM provider."""
func.is_llm = True func.is_llm = True
func = memoize(func) func = memoize(func)
return func return func
def output_json(cls): def output_json(cls):
"""Marks a class as JSON output format."""
cls.is_output_json = True cls.is_output_json = True
return cls return cls
def output_pydantic(cls): def output_pydantic(cls):
"""Marks a class as Pydantic output format."""
cls.is_output_pydantic = True cls.is_output_pydantic = True
return cls return cls
def tool(func): def tool(func):
"""Marks a method as a crew tool."""
func.is_tool = True func.is_tool = True
return memoize(func) return memoize(func)
def callback(func): def callback(func):
"""Marks a method as a crew callback."""
func.is_callback = True func.is_callback = True
return memoize(func) return memoize(func)
def cache_handler(func): def cache_handler(func):
"""Marks a method as a cache handler."""
func.is_cache_handler = True func.is_cache_handler = True
return memoize(func) return memoize(func)
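Taken together with the @crew decorator defined just below, these markers are consumed by @CrewBase. A minimal sketch of the intended usage (import paths follow the module layout above; agent and task bodies are abbreviated, and returning the mutated inputs from @before_kickoff is an assumption based on common usage):

from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, before_kickoff, crew, task

@CrewBase
class ResearchCrew:
    @before_kickoff
    def prepare_inputs(self, inputs):
        inputs.setdefault("topic", "AI agents")
        return inputs

    @agent
    def researcher(self) -> Agent:
        return Agent(role="Researcher", goal="Research {topic}", backstory="...")

    @task
    def research_task(self) -> Task:
        return Task(
            description="Research {topic}",
            expected_output="A short summary",
            agent=self.researcher(),
        )

    @crew
    def crew(self) -> Crew:
        return Crew(agents=self.agents, tasks=self.tasks, process=Process.sequential)

ResearchCrew().crew().kickoff(inputs={"topic": "memory systems"})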
def crew(func) -> Callable[..., Crew]: def crew(func) -> Callable[..., Crew]:
"""Marks a method as the main crew execution point."""
@wraps(func)
def wrapper(self, *args, **kwargs) -> Crew: def wrapper(self, *args, **kwargs) -> Crew:
instantiated_tasks = [] instantiated_tasks = []
instantiated_agents = [] instantiated_agents = []

View File

@@ -9,10 +9,8 @@ load_dotenv()
T = TypeVar("T", bound=type) T = TypeVar("T", bound=type)
"""Base decorator for creating crew classes with configuration and function management."""
def CrewBase(cls: T) -> T: def CrewBase(cls: T) -> T:
"""Wraps a class with crew functionality and configuration management."""
class WrappedClass(cls): # type: ignore class WrappedClass(cls): # type: ignore
is_crew_class: bool = True # type: ignore is_crew_class: bool = True # type: ignore
@@ -215,8 +213,4 @@ def CrewBase(cls: T) -> T:
callback_functions[callback]() for callback in callbacks callback_functions[callback]() for callback in callbacks
] ]
# Include base class (qual)name in the wrapper class (qual)name.
WrappedClass.__name__ = CrewBase.__name__ + "(" + cls.__name__ + ")"
WrappedClass.__qualname__ = CrewBase.__qualname__ + "(" + cls.__name__ + ")"
return cast(T, WrappedClass) return cast(T, WrappedClass)

View File

@@ -1,14 +1,11 @@
from functools import wraps
def memoize(func): def memoize(func):
cache = {} cache = {}
@wraps(func)
def memoized_func(*args, **kwargs): def memoized_func(*args, **kwargs):
key = (args, tuple(kwargs.items())) key = (args, tuple(kwargs.items()))
if key not in cache: if key not in cache:
cache[key] = func(*args, **kwargs) cache[key] = func(*args, **kwargs)
return cache[key] return cache[key]
memoized_func.__dict__.update(func.__dict__)
return memoized_func return memoized_func
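Behavior check for the updated memoize (the added @wraps plus the __dict__ update keep function metadata and marker attributes such as is_agent visible on the wrapper):

@memoize
def expensive(x, y=1):
    print("computing")
    return x + y

expensive(2, y=3)   # prints "computing", returns 5
expensive(2, y=3)   # cache hit: returns 5 without recomputing
expensive(2, 3)     # recomputes: positional vs. keyword args form different cache keys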

View File

@@ -1,25 +1,12 @@
import datetime import datetime
import inspect
import json import json
import logging import os
import threading import threading
import uuid import uuid
from concurrent.futures import Future from concurrent.futures import Future
from copy import copy from copy import copy
from hashlib import md5 from hashlib import md5
from pathlib import Path from typing import Any, Dict, List, Optional, Set, Tuple, Type, Union
from typing import (
Any,
Callable,
ClassVar,
Dict,
List,
Optional,
Set,
Tuple,
Type,
Union,
)
from opentelemetry.trace import Span from opentelemetry.trace import Span
from pydantic import ( from pydantic import (
@@ -33,7 +20,6 @@ from pydantic import (
from pydantic_core import PydanticCustomError from pydantic_core import PydanticCustomError
from crewai.agents.agent_builder.base_agent import BaseAgent from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tasks.guardrail_result import GuardrailResult
from crewai.tasks.output_format import OutputFormat from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput from crewai.tasks.task_output import TaskOutput
from crewai.telemetry.telemetry import Telemetry from crewai.telemetry.telemetry import Telemetry
@@ -41,7 +27,6 @@ from crewai.tools.base_tool import BaseTool
from crewai.utilities.config import process_config from crewai.utilities.config import process_config
from crewai.utilities.converter import Converter, convert_to_model from crewai.utilities.converter import Converter, convert_to_model
from crewai.utilities.i18n import I18N from crewai.utilities.i18n import I18N
from crewai.utilities.printer import Printer
class Task(BaseModel): class Task(BaseModel):
@@ -64,7 +49,6 @@ class Task(BaseModel):
""" """
__hash__ = object.__hash__ # type: ignore __hash__ = object.__hash__ # type: ignore
logger: ClassVar[logging.Logger] = logging.getLogger(__name__)
used_tools: int = 0 used_tools: int = 0
tools_errors: int = 0 tools_errors: int = 0
delegations: int = 0 delegations: int = 0
@@ -126,69 +110,13 @@ class Task(BaseModel):
default=None, default=None,
) )
processed_by_agents: Set[str] = Field(default_factory=set) processed_by_agents: Set[str] = Field(default_factory=set)
guardrail: Optional[Callable[[TaskOutput], Tuple[bool, Any]]] = Field(
default=None,
description="Function to validate task output before proceeding to next task",
)
max_retries: int = Field(
default=3, description="Maximum number of retries when guardrail fails"
)
retry_count: int = Field(default=0, description="Current number of retries")
start_time: Optional[datetime.datetime] = Field(
default=None, description="Start time of the task execution"
)
end_time: Optional[datetime.datetime] = Field(
default=None, description="End time of the task execution"
)
@field_validator("guardrail")
@classmethod
def validate_guardrail_function(cls, v: Optional[Callable]) -> Optional[Callable]:
"""Validate that the guardrail function has the correct signature and behavior.
While type hints provide static checking, this validator ensures runtime safety by:
1. Verifying the function accepts exactly one parameter (the TaskOutput)
2. Checking return type annotations match Tuple[bool, Any] if present
3. Providing clear, immediate error messages for debugging
This runtime validation is crucial because:
- Type hints are optional and can be ignored at runtime
- Function signatures need immediate validation before task execution
- Clear error messages help users debug guardrail implementation issues
Args:
v: The guardrail function to validate
Returns:
The validated guardrail function
Raises:
ValueError: If the function signature is invalid or return annotation
doesn't match Tuple[bool, Any]
"""
if v is not None:
sig = inspect.signature(v)
if len(sig.parameters) != 1:
raise ValueError("Guardrail function must accept exactly one parameter")
# Check return annotation if present, but don't require it
return_annotation = sig.return_annotation
if return_annotation != inspect.Signature.empty:
if not (
return_annotation == Tuple[bool, Any]
or str(return_annotation) == "Tuple[bool, Any]"
):
raise ValueError(
"If return type is annotated, it must be Tuple[bool, Any]"
)
return v
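
For context, a guardrail satisfying the validated signature might look like the sketch below. require_json is illustrative, not part of CrewAI, and it assumes TaskOutput.raw holds the raw string result, as shown elsewhere in this diff.

import json
from typing import Any, Tuple

def require_json(output) -> Tuple[bool, Any]:
    # Exactly one parameter, returning Tuple[bool, Any], as the
    # validator above enforces.
    try:
        return True, json.loads(output.raw)
    except json.JSONDecodeError as exc:
        return False, f"Output was not valid JSON: {exc}"

# Hypothetical wiring: Task(..., guardrail=require_json, max_retries=2)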
_telemetry: Telemetry = PrivateAttr(default_factory=Telemetry) _telemetry: Telemetry = PrivateAttr(default_factory=Telemetry)
_execution_span: Optional[Span] = PrivateAttr(default=None) _execution_span: Optional[Span] = PrivateAttr(default=None)
_original_description: Optional[str] = PrivateAttr(default=None) _original_description: Optional[str] = PrivateAttr(default=None)
_original_expected_output: Optional[str] = PrivateAttr(default=None) _original_expected_output: Optional[str] = PrivateAttr(default=None)
_original_output_file: Optional[str] = PrivateAttr(default=None)
_thread: Optional[threading.Thread] = PrivateAttr(default=None) _thread: Optional[threading.Thread] = PrivateAttr(default=None)
_execution_time: Optional[float] = PrivateAttr(default=None)
@model_validator(mode="before") @model_validator(mode="before")
@classmethod @classmethod
@@ -213,54 +141,16 @@ class Task(BaseModel):
"may_not_set_field", "This field is not to be set by the user.", {} "may_not_set_field", "This field is not to be set by the user.", {}
) )
def _set_start_execution_time(self) -> float:
return datetime.datetime.now().timestamp()
def _set_end_execution_time(self, start_time: float) -> None:
self._execution_time = datetime.datetime.now().timestamp() - start_time
@field_validator("output_file") @field_validator("output_file")
@classmethod @classmethod
def output_file_validation(cls, value: Optional[str]) -> Optional[str]: def output_file_validation(cls, value: str) -> str:
"""Validate the output file path. """Validate the output file path by removing the / from the beginning of the path."""
Args:
value: The output file path to validate. Can be None or a string.
If the path contains template variables (e.g. {var}), leading slashes are preserved.
For regular paths, leading slashes are stripped.
Returns:
The validated and potentially modified path, or None if no path was provided.
Raises:
ValueError: If the path contains invalid characters, path traversal attempts,
or other security concerns.
"""
if value is None:
return None
# Basic security checks
if ".." in value:
raise ValueError(
"Path traversal attempts are not allowed in output_file paths"
)
# Check for shell expansion first
if value.startswith("~") or value.startswith("$"):
raise ValueError(
"Shell expansion characters are not allowed in output_file paths"
)
# Then check other shell special characters
if any(char in value for char in ["|", ">", "<", "&", ";"]):
raise ValueError(
"Shell special characters are not allowed in output_file paths"
)
# Don't strip leading slash if it's a template path with variables
if "{" in value or "}" in value:
# Validate template variable format
template_vars = [part.split("}")[0] for part in value.split("{")[1:]]
for var in template_vars:
if not var.isidentifier():
raise ValueError(f"Invalid template variable name: {var}")
return value
# Strip leading slash for regular paths
if value.startswith("/"): if value.startswith("/"):
return value[1:] return value[1:]
return value return value
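
A small harness illustrating the validator's accept/reject behavior (paths are illustrative; assumes Task is importable from the crewai package shown in this diff):

from pydantic import ValidationError
from crewai import Task

for path in ["reports/{topic}.md", "/tmp/out.txt", "../escape.txt", "~/out.txt"]:
    try:
        t = Task(description="d", expected_output="o", output_file=path)
        print(path, "->", t.output_file)   # template kept verbatim; leading "/" stripped
    except ValidationError as exc:
        print(path, "-> rejected:", exc.errors()[0]["msg"])  # traversal / shell expansion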
@@ -309,12 +199,6 @@ class Task(BaseModel):
return md5("|".join(source).encode(), usedforsecurity=False).hexdigest() return md5("|".join(source).encode(), usedforsecurity=False).hexdigest()
@property
def execution_duration(self) -> float | None:
if not self.start_time or not self.end_time:
return None
return (self.end_time - self.start_time).total_seconds()
def execute_async( def execute_async(
self, self,
agent: BaseAgent | None = None, agent: BaseAgent | None = None,
@@ -355,7 +239,7 @@ class Task(BaseModel):
f"The task '{self.description}' has no agent assigned, therefore it can't be executed directly and should be executed in a Crew using a specific process that support that, like hierarchical." f"The task '{self.description}' has no agent assigned, therefore it can't be executed directly and should be executed in a Crew using a specific process that support that, like hierarchical."
) )
self.start_time = datetime.datetime.now() start_time = self._set_start_execution_time()
self._execution_span = self._telemetry.task_started(crew=agent.crew, task=self) self._execution_span = self._telemetry.task_started(crew=agent.crew, task=self)
self.prompt_context = context self.prompt_context = context
@@ -370,6 +254,7 @@ class Task(BaseModel):
) )
pydantic_output, json_output = self._export_output(result) pydantic_output, json_output = self._export_output(result)
task_output = TaskOutput( task_output = TaskOutput(
name=self.name, name=self.name,
description=self.description, description=self.description,
@@ -380,46 +265,9 @@ class Task(BaseModel):
agent=agent.role, agent=agent.role,
output_format=self._get_output_format(), output_format=self._get_output_format(),
) )
if self.guardrail:
guardrail_result = GuardrailResult.from_tuple(self.guardrail(task_output))
if not guardrail_result.success:
if self.retry_count >= self.max_retries:
raise Exception(
f"Task failed guardrail validation after {self.max_retries} retries. "
f"Last error: {guardrail_result.error}"
)
self.retry_count += 1
context = self.i18n.errors("validation_error").format(
guardrail_result_error=guardrail_result.error,
task_output=task_output.raw,
)
printer = Printer()
printer.print(
content=f"Guardrail blocked, retrying, due to: {guardrail_result.error}\n",
color="yellow",
)
return self._execute_core(agent, context, tools)
if guardrail_result.result is None:
raise Exception(
"Task guardrail returned None as result. This is not allowed."
)
if isinstance(guardrail_result.result, str):
task_output.raw = guardrail_result.result
pydantic_output, json_output = self._export_output(
guardrail_result.result
)
task_output.pydantic = pydantic_output
task_output.json_dict = json_output
elif isinstance(guardrail_result.result, TaskOutput):
task_output = guardrail_result.result
self.output = task_output self.output = task_output
self.end_time = datetime.datetime.now()
self._set_end_execution_time(start_time)
if self.callback: if self.callback:
self.callback(self.output) self.callback(self.output)
@@ -451,127 +299,16 @@ class Task(BaseModel):
tasks_slices = [self.description, output] tasks_slices = [self.description, output]
return "\n".join(tasks_slices) return "\n".join(tasks_slices)
def interpolate_inputs_and_add_conversation_history( def interpolate_inputs(self, inputs: Dict[str, Any]) -> None:
self, inputs: Dict[str, Union[str, int, float]] """Interpolate inputs into the task description and expected output."""
) -> None:
"""Interpolate inputs into the task description, expected output, and output file path.
Add conversation history if present.
Args:
inputs: Dictionary mapping template variables to their values.
Supported value types are strings, integers, and floats.
Raises:
ValueError: If a required template variable is missing from inputs.
"""
if self._original_description is None: if self._original_description is None:
self._original_description = self.description self._original_description = self.description
if self._original_expected_output is None: if self._original_expected_output is None:
self._original_expected_output = self.expected_output self._original_expected_output = self.expected_output
if self.output_file is not None and self._original_output_file is None:
self._original_output_file = self.output_file
if not inputs: if inputs:
return
try:
self.description = self._original_description.format(**inputs) self.description = self._original_description.format(**inputs)
except KeyError as e: self.expected_output = self._original_expected_output.format(**inputs)
raise ValueError(
f"Missing required template variable '{e.args[0]}' in description"
) from e
except ValueError as e:
raise ValueError(f"Error interpolating description: {str(e)}") from e
try:
self.expected_output = self.interpolate_only(
input_string=self._original_expected_output, inputs=inputs
)
except (KeyError, ValueError) as e:
raise ValueError(f"Error interpolating expected_output: {str(e)}") from e
if self.output_file is not None:
try:
self.output_file = self.interpolate_only(
input_string=self._original_output_file, inputs=inputs
)
except (KeyError, ValueError) as e:
raise ValueError(
f"Error interpolating output_file path: {str(e)}"
) from e
if "crew_chat_messages" in inputs and inputs["crew_chat_messages"]:
conversation_instruction = self.i18n.slice(
"conversation_history_instruction"
)
crew_chat_messages_json = str(inputs["crew_chat_messages"])
try:
crew_chat_messages = json.loads(crew_chat_messages_json)
except json.JSONDecodeError as e:
print("An error occurred while parsing crew chat messages:", e)
raise
conversation_history = "\n".join(
f"{msg['role'].capitalize()}: {msg['content']}"
for msg in crew_chat_messages
if isinstance(msg, dict) and "role" in msg and "content" in msg
)
self.description += (
f"\n\n{conversation_instruction}\n\n{conversation_history}"
)
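
The conversation-history branch above expects inputs["crew_chat_messages"] to be a JSON array of role/content dicts. A hypothetical payload and the text it produces:

import json

crew_chat_messages = json.dumps([
    {"role": "user", "content": "Focus on pricing."},
    {"role": "assistant", "content": "Understood, I'll prioritize pricing."},
])
# Parsed and rendered by the method above as:
#   User: Focus on pricing.
#   Assistant: Understood, I'll prioritize pricing.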
def interpolate_only(
self, input_string: Optional[str], inputs: Dict[str, Union[str, int, float]]
) -> str:
"""Interpolate placeholders (e.g., {key}) in a string while leaving JSON untouched.
Args:
input_string: The string containing template variables to interpolate.
Can be None or empty, in which case an empty string is returned.
inputs: Dictionary mapping template variables to their values.
Supported value types are strings, integers, and floats.
If input_string is empty or has no placeholders, inputs can be empty.
Returns:
The interpolated string with all template variables replaced with their values.
Empty string if input_string is None or empty.
Raises:
ValueError: If a required template variable is missing from inputs.
KeyError: If a template variable is not found in the inputs dictionary.
"""
if input_string is None or not input_string:
return ""
if "{" not in input_string and "}" not in input_string:
return input_string
if not inputs:
raise ValueError(
"Inputs dictionary cannot be empty when interpolating variables"
)
try:
# Validate input types
for key, value in inputs.items():
if not isinstance(value, (str, int, float)):
raise ValueError(
f"Value for key '{key}' must be a string, integer, or float, got {type(value).__name__}"
)
escaped_string = input_string.replace("{", "{{").replace("}", "}}")
for key in inputs.keys():
escaped_string = escaped_string.replace(f"{{{{{key}}}}}", f"{{{key}}}")
return escaped_string.format(**inputs)
except KeyError as e:
raise KeyError(
f"Template variable '{e.args[0]}' not found in inputs dictionary"
) from e
except ValueError as e:
raise ValueError(f"Error during string interpolation: {str(e)}") from e
def increment_tools_errors(self) -> None: def increment_tools_errors(self) -> None:
"""Increment the tools errors counter.""" """Increment the tools errors counter."""
@@ -653,34 +390,21 @@ class Task(BaseModel):
return OutputFormat.RAW return OutputFormat.RAW
def _save_file(self, result: Any) -> None: def _save_file(self, result: Any) -> None:
"""Save task output to a file.
Args:
result: The result to save to the file. Can be a dict or any stringifiable object.
Raises:
ValueError: If output_file is not set
RuntimeError: If there is an error writing to the file
"""
if self.output_file is None: if self.output_file is None:
raise ValueError("output_file is not set.") raise ValueError("output_file is not set.")
try: directory = os.path.dirname(self.output_file) # type: ignore # Value of type variable "AnyOrLiteralStr" of "dirname" cannot be "str | None"
resolved_path = Path(self.output_file).expanduser().resolve()
directory = resolved_path.parent
if not directory.exists(): if directory and not os.path.exists(directory):
directory.mkdir(parents=True, exist_ok=True) os.makedirs(directory)
with resolved_path.open("w", encoding="utf-8") as file: with open(self.output_file, "w", encoding="utf-8") as file:
if isinstance(result, dict): if isinstance(result, dict):
import json import json
json.dump(result, file, ensure_ascii=False, indent=2) json.dump(result, file, ensure_ascii=False, indent=2)
else: else:
file.write(str(result)) file.write(str(result))
except (OSError, IOError) as e:
raise RuntimeError(f"Failed to save output file: {e}")
return None return None
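
The pathlib flow above, condensed into a standalone sketch (the path is illustrative; write_text replaces the explicit open/write for brevity):

from pathlib import Path

target = Path("outputs/report.md").expanduser().resolve()
target.parent.mkdir(parents=True, exist_ok=True)
target.write_text("final answer", encoding="utf-8")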
def __repr__(self): def __repr__(self):

View File

@@ -1,56 +0,0 @@
"""
Module for handling task guardrail validation results.
This module provides the GuardrailResult class which standardizes
the way task guardrails return their validation results.
"""
from typing import Any, Optional, Tuple, Union
from pydantic import BaseModel, field_validator
class GuardrailResult(BaseModel):
"""Result from a task guardrail execution.
This class standardizes the return format of task guardrails,
converting tuple responses into a structured format that can
be easily handled by the task execution system.
Attributes:
success (bool): Whether the guardrail validation passed
result (Any, optional): The validated/transformed result if successful
error (str, optional): Error message if validation failed
"""
success: bool
result: Optional[Any] = None
error: Optional[str] = None
@field_validator("result", "error")
@classmethod
def validate_result_error_exclusivity(cls, v: Any, info) -> Any:
values = info.data
if "success" in values:
if values["success"] and v and "error" in values and values["error"]:
raise ValueError("Cannot have both result and error when success is True")
if not values["success"] and v and "result" in values and values["result"]:
raise ValueError("Cannot have both result and error when success is False")
return v
@classmethod
def from_tuple(cls, result: Tuple[bool, Union[Any, str]]) -> "GuardrailResult":
"""Create a GuardrailResult from a validation tuple.
Args:
result: A tuple of (success, data) where data is either
the validated result or error message.
Returns:
GuardrailResult: A new instance with the tuple data.
"""
success, data = result
return cls(
success=success,
result=data if success else None,
error=data if not success else None
)
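
Usage of the tuple conversion, per the docstring above (import path as added in this diff's task.py):

from crewai.tasks.guardrail_result import GuardrailResult

ok = GuardrailResult.from_tuple((True, "validated output"))
bad = GuardrailResult.from_tuple((False, "missing citations"))
print(ok.success, ok.result)   # True validated output
print(bad.success, bad.error)  # False missing citations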

View File

@@ -6,7 +6,6 @@ import os
import platform import platform
import warnings import warnings
from contextlib import contextmanager from contextlib import contextmanager
from importlib.metadata import version
from typing import TYPE_CHECKING, Any, Optional from typing import TYPE_CHECKING, Any, Optional
@@ -17,10 +16,12 @@ def suppress_warnings():
yield yield
with suppress_warnings():
import pkg_resources
from opentelemetry import trace # noqa: E402 from opentelemetry import trace # noqa: E402
from opentelemetry.exporter.otlp.proto.http.trace_exporter import ( from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter # noqa: E402
OTLPSpanExporter, # noqa: E402
)
from opentelemetry.sdk.resources import SERVICE_NAME, Resource # noqa: E402 from opentelemetry.sdk.resources import SERVICE_NAME, Resource # noqa: E402
from opentelemetry.sdk.trace import TracerProvider # noqa: E402 from opentelemetry.sdk.trace import TracerProvider # noqa: E402
from opentelemetry.sdk.trace.export import BatchSpanProcessor # noqa: E402 from opentelemetry.sdk.trace.export import BatchSpanProcessor # noqa: E402
@@ -103,7 +104,7 @@ class Telemetry:
self._add_attribute( self._add_attribute(
span, span,
"crewai_version", "crewai_version",
version("crewai"), pkg_resources.get_distribution("crewai").version,
) )
self._add_attribute(span, "python_version", platform.python_version()) self._add_attribute(span, "python_version", platform.python_version())
self._add_attribute(span, "crew_key", crew.key) self._add_attribute(span, "crew_key", crew.key)
@@ -305,7 +306,7 @@ class Telemetry:
self._add_attribute( self._add_attribute(
span, span,
"crewai_version", "crewai_version",
version("crewai"), pkg_resources.get_distribution("crewai").version,
) )
self._add_attribute(span, "tool_name", tool_name) self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts) self._add_attribute(span, "attempts", attempts)
@@ -325,7 +326,7 @@ class Telemetry:
self._add_attribute( self._add_attribute(
span, span,
"crewai_version", "crewai_version",
version("crewai"), pkg_resources.get_distribution("crewai").version,
) )
self._add_attribute(span, "tool_name", tool_name) self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts) self._add_attribute(span, "attempts", attempts)
@@ -345,7 +346,7 @@ class Telemetry:
self._add_attribute( self._add_attribute(
span, span,
"crewai_version", "crewai_version",
version("crewai"), pkg_resources.get_distribution("crewai").version,
) )
if llm: if llm:
self._add_attribute(span, "llm", llm.model) self._add_attribute(span, "llm", llm.model)
@@ -364,7 +365,7 @@ class Telemetry:
self._add_attribute( self._add_attribute(
span, span,
"crewai_version", "crewai_version",
version("crewai"), pkg_resources.get_distribution("crewai").version,
) )
self._add_attribute(span, "crew_key", crew.key) self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id)) self._add_attribute(span, "crew_id", str(crew.id))
@@ -390,7 +391,7 @@ class Telemetry:
self._add_attribute( self._add_attribute(
span, span,
"crewai_version", "crewai_version",
version("crewai"), pkg_resources.get_distribution("crewai").version,
) )
self._add_attribute(span, "crew_key", crew.key) self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id)) self._add_attribute(span, "crew_id", str(crew.id))
@@ -471,7 +472,7 @@ class Telemetry:
self._add_attribute( self._add_attribute(
span, span,
"crewai_version", "crewai_version",
version("crewai"), pkg_resources.get_distribution("crewai").version,
) )
self._add_attribute(span, "crew_key", crew.key) self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id)) self._add_attribute(span, "crew_id", str(crew.id))
@@ -540,7 +541,7 @@ class Telemetry:
self._add_attribute( self._add_attribute(
crew._execution_span, crew._execution_span,
"crewai_version", "crewai_version",
version("crewai"), pkg_resources.get_distribution("crewai").version,
) )
self._add_attribute( self._add_attribute(
crew._execution_span, "crew_output", final_string_output crew._execution_span, "crew_output", final_string_output

View File

@@ -1,45 +0,0 @@
from typing import Dict, Optional, Union
from pydantic import BaseModel, Field
from crewai.tools.base_tool import BaseTool
from crewai.utilities import I18N
i18n = I18N()
class AddImageToolSchema(BaseModel):
image_url: str = Field(..., description="The URL or path of the image to add")
action: Optional[str] = Field(
default=None,
description="Optional context or question about the image"
)
class AddImageTool(BaseTool):
"""Tool for adding images to the content"""
name: str = Field(default_factory=lambda: i18n.tools("add_image")["name"]) # type: ignore
description: str = Field(default_factory=lambda: i18n.tools("add_image")["description"]) # type: ignore
args_schema: type[BaseModel] = AddImageToolSchema
def _run(
self,
image_url: str,
action: Optional[str] = None,
**kwargs,
) -> dict:
action = action or i18n.tools("add_image")["default_action"] # type: ignore
content = [
{"type": "text", "text": action},
{
"type": "image_url",
"image_url": {
"url": image_url,
},
}
]
return {
"role": "user",
"content": content
}
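
The dict returned by _run is a standard multimodal chat message; rendered with a hypothetical image URL it looks like this:

message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Please provide a detailed description of this image..."},
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.png"}},
    ],
}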

View File

@@ -1,9 +1,9 @@
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tools.base_tool import BaseTool from crewai.tools.base_tool import BaseTool
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.utilities import I18N from crewai.utilities import I18N
from .ask_question_tool import AskQuestionTool
from .delegate_work_tool import DelegateWorkTool from .delegate_work_tool import DelegateWorkTool
from .ask_question_tool import AskQuestionTool
class AgentTools: class AgentTools:
@@ -20,13 +20,13 @@ class AgentTools:
delegate_tool = DelegateWorkTool( delegate_tool = DelegateWorkTool(
agents=self.agents, agents=self.agents,
i18n=self.i18n, i18n=self.i18n,
description=self.i18n.tools("delegate_work").format(coworkers=coworkers), # type: ignore description=self.i18n.tools("delegate_work").format(coworkers=coworkers),
) )
ask_tool = AskQuestionTool( ask_tool = AskQuestionTool(
agents=self.agents, agents=self.agents,
i18n=self.i18n, i18n=self.i18n,
description=self.i18n.tools("ask_question").format(coworkers=coworkers), # type: ignore description=self.i18n.tools("ask_question").format(coworkers=coworkers),
) )
return [delegate_tool, ask_tool] return [delegate_tool, ask_tool]

View File

@@ -1,8 +1,6 @@
from typing import Optional
from pydantic import BaseModel, Field
from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
from typing import Optional
from pydantic import BaseModel, Field
class AskQuestionToolSchema(BaseModel): class AskQuestionToolSchema(BaseModel):

View File

@@ -1,15 +1,11 @@
import logging from typing import Optional, Union
from typing import Optional
from pydantic import Field from pydantic import Field
from crewai.tools.base_tool import BaseTool
from crewai.agents.agent_builder.base_agent import BaseAgent from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.task import Task from crewai.task import Task
from crewai.tools.base_tool import BaseTool
from crewai.utilities import I18N from crewai.utilities import I18N
logger = logging.getLogger(__name__)
class BaseAgentTool(BaseTool): class BaseAgentTool(BaseTool):
"""Base class for agent-related tools""" """Base class for agent-related tools"""
@@ -19,25 +15,6 @@ class BaseAgentTool(BaseTool):
default_factory=I18N, description="Internationalization settings" default_factory=I18N, description="Internationalization settings"
) )
def sanitize_agent_name(self, name: str) -> str:
"""
Sanitize agent role name by normalizing whitespace and setting to lowercase.
Converts all whitespace (including newlines) to single spaces and removes quotes.
Args:
name (str): The agent role name to sanitize
Returns:
str: The sanitized agent role name, with whitespace normalized,
converted to lowercase, and quotes removed
"""
if not name:
return ""
# Normalize all whitespace (including newlines) to single spaces
normalized = " ".join(name.split())
# Remove quotes and convert to lowercase
return normalized.replace('"', "").casefold()
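
Behavior of the new helper on the kind of noisy role names less-capable LLMs emit (standalone copy of the method body for illustration):

def sanitize_agent_name(name: str) -> str:
    if not name:
        return ""
    return " ".join(name.split()).replace('"', "").casefold()

print(sanitize_agent_name('"Senior\nResearcher "'))  # senior researcher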
def _get_coworker(self, coworker: Optional[str], **kwargs) -> Optional[str]: def _get_coworker(self, coworker: Optional[str], **kwargs) -> Optional[str]:
coworker = coworker or kwargs.get("co_worker") or kwargs.get("coworker") coworker = coworker or kwargs.get("co_worker") or kwargs.get("coworker")
if coworker: if coworker:
@@ -47,27 +24,11 @@ class BaseAgentTool(BaseTool):
return coworker return coworker
def _execute( def _execute(
self, self, agent_name: Union[str, None], task: str, context: Union[str, None]
agent_name: Optional[str],
task: str,
context: Optional[str] = None
) -> str: ) -> str:
"""
Execute delegation to an agent with case-insensitive and whitespace-tolerant matching.
Args:
agent_name: Name/role of the agent to delegate to (case-insensitive)
task: The specific question or task to delegate
context: Optional additional context for the task execution
Returns:
str: The execution result from the delegated agent or an error message
if the agent cannot be found
"""
try: try:
if agent_name is None: if agent_name is None:
agent_name = "" agent_name = ""
logger.debug("No agent name provided, using empty string")
# It is important to remove the quotes from the agent name. # It is important to remove the quotes from the agent name.
# The reason we have to do this is because less-powerful LLM's # The reason we have to do this is because less-powerful LLM's
@@ -76,49 +37,31 @@ class BaseAgentTool(BaseTool):
# {"task": "....", "coworker": ".... # {"task": "....", "coworker": "....
# when it should look like this: # when it should look like this:
# {"task": "....", "coworker": "...."} # {"task": "....", "coworker": "...."}
sanitized_name = self.sanitize_agent_name(agent_name) agent_name = agent_name.casefold().replace('"', "").replace("\n", "")
logger.debug(f"Sanitized agent name from '{agent_name}' to '{sanitized_name}'")
available_agents = [agent.role for agent in self.agents]
logger.debug(f"Available agents: {available_agents}")
agent = [ # type: ignore # Incompatible types in assignment (expression has type "list[BaseAgent]", variable has type "str | None") agent = [ # type: ignore # Incompatible types in assignment (expression has type "list[BaseAgent]", variable has type "str | None")
available_agent available_agent
for available_agent in self.agents for available_agent in self.agents
if self.sanitize_agent_name(available_agent.role) == sanitized_name if available_agent.role.casefold().replace("\n", "") == agent_name
] ]
logger.debug(f"Found {len(agent)} matching agents for role '{sanitized_name}'") except Exception as _:
except (AttributeError, ValueError) as e: return self.i18n.errors("agent_tool_unexsiting_coworker").format(
# Handle specific exceptions that might occur during role name processing
return self.i18n.errors("agent_tool_unexisting_coworker").format(
coworkers="\n".join( coworkers="\n".join(
[f"- {self.sanitize_agent_name(agent.role)}" for agent in self.agents] [f"- {agent.role.casefold()}" for agent in self.agents]
), )
error=str(e)
) )
if not agent: if not agent:
# No matching agent found after sanitization return self.i18n.errors("agent_tool_unexsiting_coworker").format(
return self.i18n.errors("agent_tool_unexisting_coworker").format(
coworkers="\n".join( coworkers="\n".join(
[f"- {self.sanitize_agent_name(agent.role)}" for agent in self.agents] [f"- {agent.role.casefold()}" for agent in self.agents]
), )
error=f"No agent found with role '{sanitized_name}'"
) )
agent = agent[0] agent = agent[0]
try: task_with_assigned_agent = Task( # type: ignore # Incompatible types in assignment (expression has type "Task", variable has type "str")
task_with_assigned_agent = Task( description=task,
description=task, agent=agent,
agent=agent, expected_output=agent.i18n.slice("manager_request"),
expected_output=agent.i18n.slice("manager_request"), i18n=agent.i18n,
i18n=agent.i18n, )
) return agent.execute_task(task_with_assigned_agent, context)
logger.debug(f"Created task for agent '{self.sanitize_agent_name(agent.role)}': {task}")
return agent.execute_task(task_with_assigned_agent, context)
except Exception as e:
# Handle task creation or execution errors
return self.i18n.errors("agent_tool_execution_error").format(
agent_role=self.sanitize_agent_name(agent.role),
error=str(e)
)

View File

@@ -1,9 +1,8 @@
from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
from typing import Optional from typing import Optional
from pydantic import BaseModel, Field from pydantic import BaseModel, Field
from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
class DelegateWorkToolSchema(BaseModel): class DelegateWorkToolSchema(BaseModel):
task: str = Field(..., description="The task to delegate") task: str = Field(..., description="The task to delegate")

View File

@@ -1,23 +1,12 @@
import warnings
from abc import ABC, abstractmethod from abc import ABC, abstractmethod
from inspect import signature from inspect import signature
from typing import Any, Callable, Type, get_args, get_origin from typing import Any, Callable, Type, get_args, get_origin
from pydantic import ( from pydantic import BaseModel, ConfigDict, Field, create_model, validator
BaseModel,
ConfigDict,
Field,
PydanticDeprecatedSince20,
create_model,
validator,
)
from pydantic import BaseModel as PydanticBaseModel from pydantic import BaseModel as PydanticBaseModel
from crewai.tools.structured_tool import CrewStructuredTool from crewai.tools.structured_tool import CrewStructuredTool
# Ignore all "PydanticDeprecatedSince20" warnings globally
warnings.filterwarnings("ignore", category=PydanticDeprecatedSince20)
class BaseTool(BaseModel, ABC): class BaseTool(BaseModel, ABC):
class _ArgsSchemaPlaceholder(PydanticBaseModel): class _ArgsSchemaPlaceholder(PydanticBaseModel):

View File

@@ -1,5 +1,6 @@
import ast import ast
import datetime import datetime
import os
import time import time
from difflib import SequenceMatcher from difflib import SequenceMatcher
from textwrap import dedent from textwrap import dedent
@@ -10,16 +11,18 @@ from crewai.agents.tools_handler import ToolsHandler
from crewai.task import Task from crewai.task import Task
from crewai.telemetry import Telemetry from crewai.telemetry import Telemetry
from crewai.tools import BaseTool from crewai.tools import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
from crewai.tools.tool_usage_events import ToolUsageError, ToolUsageFinished from crewai.tools.tool_usage_events import ToolUsageError, ToolUsageFinished
from crewai.utilities import I18N, Converter, ConverterError, Printer from crewai.utilities import I18N, Converter, ConverterError, Printer
try: agentops = None
import agentops # type: ignore if os.environ.get("AGENTOPS_API_KEY"):
except ImportError: try:
agentops = None import agentops # type: ignore
OPENAI_BIGGER_MODELS = ["gpt-4", "gpt-4o", "o1-preview", "o1-mini", "o1", "o3", "o3-mini"] except ImportError:
pass
OPENAI_BIGGER_MODELS = ["gpt-4", "gpt-4o", "o1-preview", "o1-mini"]
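
The left-hand side replaces the env-gated import with the plainer optional-dependency pattern, reusable for any extra:

try:
    import agentops  # type: ignore
except ImportError:
    agentops = None

if agentops is not None:
    pass  # emit agentops events only when the extra is installed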
class ToolUsageErrorException(Exception): class ToolUsageErrorException(Exception):
@@ -103,19 +106,6 @@ class ToolUsage:
if self.agent.verbose: if self.agent.verbose:
self._printer.print(content=f"\n\n{error}\n", color="red") self._printer.print(content=f"\n\n{error}\n", color="red")
return error return error
if isinstance(tool, CrewStructuredTool) and tool.name == self._i18n.tools("add_image")["name"]: # type: ignore
try:
result = self._use(tool_string=tool_string, tool=tool, calling=calling)
return result
except Exception as e:
error = getattr(e, "message", str(e))
self.task.increment_tools_errors()
if self.agent.verbose:
self._printer.print(content=f"\n\n{error}\n", color="red")
return error
return f"{self._use(tool_string=tool_string, tool=tool, calling=calling)}" # type: ignore # BUG?: "_use" of "ToolUsage" does not return a value (it only ever returns None) return f"{self._use(tool_string=tool_string, tool=tool, calling=calling)}" # type: ignore # BUG?: "_use" of "ToolUsage" does not return a value (it only ever returns None)
def _use( def _use(
@@ -169,7 +159,7 @@ class ToolUsage:
if calling.arguments: if calling.arguments:
try: try:
acceptable_args = tool.args_schema.model_json_schema()["properties"].keys() # type: ignore acceptable_args = tool.args_schema.schema()["properties"].keys() # type: ignore # Item "None" of "type[BaseModel] | None" has no attribute "schema"
arguments = { arguments = {
k: v k: v
for k, v in calling.arguments.items() for k, v in calling.arguments.items()
@@ -432,10 +422,9 @@ class ToolUsage:
elif value.lower() in [ elif value.lower() in [
"true", "true",
"false", "false",
"null",
]: # Check for boolean and null values ]: # Check for boolean and null values
value = value.lower().capitalize() value = value.lower()
elif value.lower() == "null":
value = "None"
else: else:
# Assume the value is a string and needs quotes # Assume the value is a string and needs quotes
value = '"' + value.replace('"', '\\"') + '"' value = '"' + value.replace('"', '\\"') + '"'

View File

@@ -1,7 +1,6 @@
from datetime import datetime
from typing import Any, Dict from typing import Any, Dict
from pydantic import BaseModel from pydantic import BaseModel
from datetime import datetime
class ToolUsageEvent(BaseModel): class ToolUsageEvent(BaseModel):

View File

@@ -12,39 +12,31 @@
"tools": "\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nUse the following format:\n\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple python dictionary, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce all necessary information is gathered:\n\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n", "tools": "\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nUse the following format:\n\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple python dictionary, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce all necessary information is gathered:\n\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n",
"no_tools": "\nTo give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!", "no_tools": "\nTo give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!",
"format": "I MUST either use a tool (use one at time) OR give my best final answer not both at the same time. To Use the following format:\n\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action, dictionary enclosed in curly braces\nObservation: the result of the action\n... (this Thought/Action/Action Input/Result can repeat N times)\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n", "format": "I MUST either use a tool (use one at time) OR give my best final answer not both at the same time. To Use the following format:\n\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action, dictionary enclosed in curly braces\nObservation: the result of the action\n... (this Thought/Action/Action Input/Result can repeat N times)\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n",
"final_answer_format": "If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfies the expected criteria, use the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\n\n", "final_answer_format": "If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfy the expect criteria, use the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\n\n",
"format_without_tools": "\nSorry, I didn't use the right format. I MUST either use a tool (among the available ones), OR give my best final answer.\nI just remembered the expected format I must follow:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Result can repeat N times)\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n", "format_without_tools": "\nSorry, I didn't use the right format. I MUST either use a tool (among the available ones), OR give my best final answer.\nI just remembered the expected format I must follow:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Result can repeat N times)\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n",
"task_with_context": "{task}\n\nThis is the context you're working with:\n{context}", "task_with_context": "{task}\n\nThis is the context you're working with:\n{context}",
"expected_output": "\nThis is the expect criteria for your final answer: {expected_output}\nyou MUST return the actual complete content as the final answer, not a summary.", "expected_output": "\nThis is the expect criteria for your final answer: {expected_output}\nyou MUST return the actual complete content as the final answer, not a summary.",
"human_feedback": "You got human feedback on your work, re-evaluate it and give a new Final Answer when ready.\n {human_feedback}", "human_feedback": "You got human feedback on your work, re-evaluate it and give a new Final Answer when ready.\n {human_feedback}",
"getting_input": "This is the agent's final answer: {final_answer}\n\n", "getting_input": "This is the agent's final answer: {final_answer}\n\n",
"summarizer_system_message": "You are a helpful assistant that summarizes text.", "summarizer_system_message": "You are a helpful assistant that summarizes text.",
"summarize_instruction": "Summarize the following text, make sure to include all the important information: {group}", "sumamrize_instruction": "Summarize the following text, make sure to include all the important information: {group}",
"summary": "This is a summary of our conversation so far:\n{merged_summary}", "summary": "This is a summary of our conversation so far:\n{merged_summary}",
"manager_request": "Your best answer to your coworker asking you this, accounting for the context shared.", "manager_request": "Your best answer to your coworker asking you this, accounting for the context shared.",
"formatted_task_instructions": "Ensure your final answer contains only the content in the following format: {output_format}\n\nEnsure the final output does not include any code block markers like ```json or ```python.", "formatted_task_instructions": "Ensure your final answer contains only the content in the following format: {output_format}\n\nEnsure the final output does not include any code block markers like ```json or ```python.",
"human_feedback_classification": "Determine if the following feedback indicates that the user is satisfied or if further changes are needed. Respond with 'True' if further changes are needed, or 'False' if the user is satisfied. **Important** Do not include any additional commentary outside of your 'True' or 'False' response.\n\nFeedback: \"{feedback}\"", "human_feedback_classification": "Determine if the following feedback indicates that the user is satisfied or if further changes are needed. Respond with 'True' if further changes are needed, or 'False' if the user is satisfied. **Important** Do not include any additional commentary outside of your 'True' or 'False' response.\n\nFeedback: \"{feedback}\""
"conversation_history_instruction": "You are a member of a crew collaborating to achieve a common goal. Your task is a specific action that contributes to this larger objective. For additional context, please review the conversation history between you and the user that led to the initiation of this crew. Use any relevant information or feedback from the conversation to inform your task execution and ensure your response aligns with both the immediate task and the crew's overall goals."
}, },
"errors": { "errors": {
"force_final_answer_error": "You can't keep going, here is the best final answer you generated:\n\n {formatted_answer}", "force_final_answer_error": "You can't keep going, this was the best you could do.\n {formatted_answer.text}",
"force_final_answer": "Now it's time you MUST give your absolute best final answer. You'll ignore all previous instructions, stop using any tools, and just return your absolute BEST Final answer.", "force_final_answer": "Now it's time you MUST give your absolute best final answer. You'll ignore all previous instructions, stop using any tools, and just return your absolute BEST Final answer.",
"agent_tool_unexisting_coworker": "\nError executing tool. coworker mentioned not found, it must be one of the following options:\n{coworkers}\n", "agent_tool_unexsiting_coworker": "\nError executing tool. coworker mentioned not found, it must be one of the following options:\n{coworkers}\n",
"task_repeated_usage": "I tried reusing the same input, I must stop using this action input. I'll try something else instead.\n\n", "task_repeated_usage": "I tried reusing the same input, I must stop using this action input. I'll try something else instead.\n\n",
"tool_usage_error": "I encountered an error: {error}", "tool_usage_error": "I encountered an error: {error}",
"tool_arguments_error": "Error: the Action Input is not a valid key, value dictionary.", "tool_arguments_error": "Error: the Action Input is not a valid key, value dictionary.",
"wrong_tool_name": "You tried to use the tool {tool}, but it doesn't exist. You must use one of the following tools, use one at time: {tools}.", "wrong_tool_name": "You tried to use the tool {tool}, but it doesn't exist. You must use one of the following tools, use one at time: {tools}.",
"tool_usage_exception": "I encountered an error while trying to use the tool. This was the error: {error}.\n Tool {tool} accepts these inputs: {tool_inputs}", "tool_usage_exception": "I encountered an error while trying to use the tool. This was the error: {error}.\n Tool {tool} accepts these inputs: {tool_inputs}"
"agent_tool_execution_error": "Error executing task with agent '{agent_role}'. Error: {error}",
"validation_error": "### Previous attempt failed validation: {guardrail_result_error}\n\n\n### Previous result:\n{task_output}\n\n\nTry again, making sure to address the validation error."
}, },
"tools": { "tools": {
"delegate_work": "Delegate a specific task to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the task you want them to do, and ALL necessary context to execute the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them.", "delegate_work": "Delegate a specific task to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the task you want them to do, and ALL necessary context to execute the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them.",
"ask_question": "Ask a specific question to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the question you have for them, and ALL necessary context to ask the question properly, they know nothing about the question, so share absolute everything you know, don't reference things but instead explain them.", "ask_question": "Ask a specific question to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the question you have for them, and ALL necessary context to ask the question properly, they know nothing about the question, so share absolute everything you know, don't reference things but instead explain them."
"add_image": {
"name": "Add image to content",
"description": "See image to understand it's content, you can optionally ask a question about the image",
"default_action": "Please provide a detailed description of this image, including all visual elements, context, and any notable details you can observe."
}
} }
} }

View File

@@ -1,40 +0,0 @@
from typing import List
from pydantic import BaseModel, Field
class ChatInputField(BaseModel):
"""
Represents a single required input for the crew, with a name and short description.
Example:
{
"name": "topic",
"description": "The topic to focus on for the conversation"
}
"""
name: str = Field(..., description="The name of the input field")
description: str = Field(..., description="A short description of the input field")
class ChatInputs(BaseModel):
"""
Holds a high-level crew_description plus a list of ChatInputFields.
Example:
{
"crew_name": "topic-based-qa",
"crew_description": "Use this crew for topic-based Q&A",
"inputs": [
{"name": "topic", "description": "The topic to focus on"},
{"name": "username", "description": "Name of the user"},
]
}
"""
crew_name: str = Field(..., description="The name of the crew")
crew_description: str = Field(
..., description="A description of the crew's purpose"
)
inputs: List[ChatInputField] = Field(
default_factory=list, description="A list of input fields for the crew"
)
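
Constructing the new schema, with values borrowed from the docstring examples above:

inputs = ChatInputs(
    crew_name="topic-based-qa",
    crew_description="Use this crew for topic-based Q&A",
    inputs=[
        ChatInputField(name="topic", description="The topic to focus on"),
        ChatInputField(name="username", description="Name of the user"),
    ],
)
print(inputs.model_dump())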

View File

@@ -3,4 +3,3 @@ TRAINED_AGENTS_DATA_FILE = "trained_agents_data.pkl"
DEFAULT_SCORE_THRESHOLD = 0.35 DEFAULT_SCORE_THRESHOLD = 0.35
KNOWLEDGE_DIRECTORY = "knowledge" KNOWLEDGE_DIRECTORY = "knowledge"
MAX_LLM_RETRY = 3 MAX_LLM_RETRY = 3
MAX_FILE_NAME_LENGTH = 255

Some files were not shown because too many files have changed in this diff.