Compare commits

...

90 Commits

Author SHA1 Message Date
Rip&Tear
2f09652d85 feat: Enhance crew creation script 2024-10-05 20:31:38 +08:00
Brandon Hancock (bhancock_ai)
0dfe3bcb0a quick fixes (#1385)
* quick fixes

* add generic name

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-10-04 13:23:42 -03:00
Brandon Hancock (bhancock_ai)
3f81383285 Brandon/cre 291 flow improvements (#1390)
* Implement joao feedback

* update colors for crew nodes

* clean up

* more linting clean up

* round legend corners

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-10-04 13:22:46 -03:00
Brandon Hancock (bhancock_ai)
e8a49e7687 Brandon/cre 288 add telemetry to flows (#1391)
* Telemetry for flows

* store node names
2024-10-04 13:21:55 -03:00
Brandon Hancock (bhancock_ai)
ed48efb9aa add plotting to flows documentation (#1394) 2024-10-04 13:19:09 -03:00
Vini Brasil
c3291b967b Add --force option to crewai tool publish (#1383)
This commit adds an option to bypass Git remote validations when
publishing tools.
2024-10-04 11:02:50 -03:00
Eren Küçüker
92e867010c docs: correct miswritten command name (#1365)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-10-03 12:22:07 -04:00
Tony Kipkemboi
5059aef574 Update README (#1376)
* Change all instances of crewAI to CrewAI and fix installation step

* Update the  example to use YAML format

* Update  to come after setup and edits

* Remove double tool instance
2024-10-03 12:09:26 -04:00
Guilherme de Amorim
c50d62b82f fix: JSON encoding date objects (#1374) 2024-10-02 16:06:10 -04:00
Thiago Moretto
f46a12b3b4 Merge pull request #1382 from crewAIInc/tm-basic-event-structure
Add tool usage events
2024-10-02 12:54:51 -03:00
Vini Brasil
dd0b622826 Add Git validations for publishing tools (#1381)
This commit prevents tools from being published if the underlying Git
repository is unsynced with origin.
2024-10-02 11:46:18 -03:00
Thiago Moretto
835eb9fbea Make tests green again 2024-10-02 11:36:27 -03:00
Thiago Moretto
8cb10f9fcc Merge branch 'main' into tm-basic-event-structure 2024-10-02 11:18:46 -03:00
Thiago Moretto
30e26c9e35 Add tool usage events 2024-10-02 11:11:07 -03:00
Vini Brasil
01329a01ab Create crewai tool create <tool> command (#1379)
This commit creates a new CLI command for scaffolding tools.
2024-10-01 18:58:27 -03:00
Vini Brasil
0e11b33f6e Change Tool Repository authentication scope (#1378)
This commit adds a new command for adding custom PyPI index
credentials to the project. This was changed because credentials are now
user-scoped instead of organization-scoped.
2024-10-01 18:44:08 -03:00
João Moura
5113bca025 preparing new version 2024-10-01 14:18:17 -07:00
João Moura
71c5972fc7 handling pydantic object with Optional fields 2024-10-01 14:18:11 -07:00
João Moura
ba55160d6b updating dependencies 2024-10-01 11:37:57 -07:00
João Moura
24e973d792 preparing new version 2024-10-01 11:33:07 -07:00
Brandon Hancock (bhancock_ai)
96427c1dd2 Flow visualizer (#1377)
* Almost working!

* It fully works but not clean enough

* Working but not clean enough

* Everything is working

* WIP. Working on adding and & or to flows. In the middle of setting up template for flow as well

* template working

* Everything is working

* More changes and todos

* Add more support for @start

* Router working now

* minor tweak to

* minor tweak to conditions and event handling

* Update logs

* Too trigger happy with cleanup

* Added in Thiago fix

* Flow passing results again

* Working on docs.

* made more progress updates on docs

* Finished talking about controlling flows

* add flow output

* fixed flow output section

* add crews to flows section is looking good now

* more flow doc changes

* Update docs and add more examples

* drop visualizer

* save visualizer

* pyvis is beginning to work

* pyvis working

* it is working

* regular methods and triggers working. Need to work on router next.

* properly identifying router and router children nodes. Need to fix color

* children router working. Need to support loops

* curving cycles but need to add curve conditionals

* everything is showing up properly, need to fix curves

* all working. needs to be cleaned up

* adjust padding

* drop lib

* clean up prior to PR

* incorporate joao feedback

* final tweaks for joao

* Refactor to make crews easier to understand

* update CLI and templates

* Fix crewai version in flows

* Fix merge conflict
2024-10-01 15:20:26 -03:00
João Moura
f15d5cbb64 remove unnecessary line 2024-09-30 23:21:50 -07:00
João Moura
c8a5a3e32e preparing new version 2024-09-30 23:13:47 -07:00
Brandon Hancock (bhancock_ai)
32fdd11c93 Flow visualizer (#1375)
* Almost working!

* It fully works but not clean enough

* Working but not clean enough

* Everything is working

* WIP. Working on adding and & or to flows. In the middle of setting up template for flow as well

* template working

* Everything is working

* More changes and todos

* Add more support for @start

* Router working now

* minor tweak to

* minor tweak to conditions and event handling

* Update logs

* Too trigger happy with cleanup

* Added in Thiago fix

* Flow passing results again

* Working on docs.

* made more progress updates on docs

* Finished talking about controlling flows

* add flow output

* fixed flow output section

* add crews to flows section is looking good now

* more flow doc changes

* Update docs and add more examples

* drop visualizer

* save visualizer

* pyvis is beginning to work

* pyvis working

* it is working

* regular methods and triggers working. Need to work on router next.

* properly identifying router and router children nodes. Need to fix color

* children router working. Need to support loops

* curving cycles but need to add curve conditionals

* everything is showing up properly, need to fix curves

* all working. needs to be cleaned up

* adjust padding

* drop lib

* clean up prior to PR

* incorporate joao feedback

* final tweaks for joao
2024-09-30 20:52:56 -03:00
João Moura
7f830b4f43 fixing test 2024-09-30 12:01:50 -07:00
Shreyan Sood
d6c57402cf Update Pipeline.md, fixed typo "=inputs" was repeated. (#1363) 2024-09-28 01:27:43 -03:00
Rip&Tear
42bea00184 flow template copy fix (#1364) 2024-09-28 01:27:17 -03:00
João Moura
5a6b0ff398 temporarily dropping examples 2024-09-27 21:15:52 -03:00
João Moura
1b57bc0c75 preparing for new version 2024-09-27 20:28:25 -03:00
João Moura
96544009f5 preparing new version 2024-09-27 20:24:37 -03:00
João Moura
44c8765add fixing tasks order 2024-09-27 20:21:46 -03:00
Brandon Hancock (bhancock_ai)
bc31019b67 Brandon/cre 19 workflows (#1347)
Flows
2024-09-27 12:11:17 -03:00
João Moura
ff16348d4c preparing for version 0.64.0 2024-09-26 21:53:09 -03:00
João Moura
7310f4d85b ordering tasks properly 2024-09-26 21:41:23 -03:00
João Moura
ac331504e9 Fixing summarization logic 2024-09-26 21:41:23 -03:00
João Moura
6823f76ff4 increase default max iter 2024-09-26 21:41:23 -03:00
Vini Brasil
c3ac3219fe CLI for Tool Repository (#1357)
This commit adds two commands to the CLI:

- `crewai tool publish`
    - Builds the project using Poetry
    - Uploads the tarball to CrewAI's tool repository

- `crewai tool install my-tool`
    - Adds my-tool's index to Poetry and its credentials
    - Installs my-tool from the custom index
2024-09-26 17:23:31 -03:00
Thiago Moretto
104ef7a0c2 Merge pull request #1360 from crewAIInc/tm-fix-base-agent-key
Crew's key must remain stable after input interpolation
2024-09-26 15:08:36 -03:00
Thiago Moretto
2bbf8ed8a8 Crew's key must remain stable after input interpolation 2024-09-26 14:55:33 -03:00
João Moura
5dc6644ac7 Fixing training feature 2024-09-26 14:17:23 -03:00
João Moura
9c0f97eaf7 fixing training 2024-09-26 14:17:23 -03:00
Brandon Hancock (bhancock_ai)
164e7895bf Fixed typing issues for new crews (#1358) 2024-09-26 14:12:24 -03:00
Vini Brasil
fb46fb9ca3 Move crewai.cli.deploy.utils to crewai.cli.utils (#1350)
* Prevent double slashes when joining URLs

* Move crewai.cli.deploy.utils to crewai.cli.utils

This commit moves this package so it's reusable across commands.
2024-09-25 14:06:20 -03:00
Vini Brasil
effb7efc37 Create client for Tools API (#1348)
This commit creates a class for the new Tools API. It extracts common
methods from crewai.cli.deploy.api.CrewAPI to a parent class.
2024-09-25 12:37:54 -03:00
DanKing1903
f5098e7e45 docs: fix misspelling of "EXA Search" in mkdocs.yml (#1346) 2024-09-25 12:34:11 -03:00
Lennex Zinyando
b15d632308 Point footer socials to crewAIInc accounts (#1349) 2024-09-25 12:32:18 -03:00
João Moura
e534efa3e9 updating version 2024-09-25 00:26:03 -03:00
João Moura
8001314718 Updating logs and preparing new version 2024-09-24 23:55:12 -03:00
João Moura
e91ac4c5ad updating dependencies 2024-09-24 22:40:24 -03:00
João Moura
e19bdcb97d Bringing support to o1 family back + any models that don't support stop words 2024-09-24 22:18:20 -03:00
João Moura
b8aa46a767 cutting version 0.63.2 2024-09-24 05:31:58 -03:00
João Moura
ab79ee32fd fixing importing 2024-09-24 01:54:02 -03:00
João Moura
8d9c49a281 adding proper LLM import 2024-09-24 01:53:23 -03:00
LogCreative
e659b60d8b docs: update "LLM-Connections" import and "Tasks" formatting (#1345)
* Update Tasks.md

Current formatting of the Tasks page has been broken; fix the markdown formatting.

* Update LLM-Connections.md

LLM class has been moved to llm.py file
2024-09-24 01:52:41 -03:00
João Moura
7987bfee39 adding OPENAI_BASE_URL as fallback 2024-09-23 23:39:04 -03:00
João Moura
b6075f1a97 prepare new version 2024-09-23 22:05:48 -03:00
João Moura
9820a69443 removing logs 2024-09-23 20:56:58 -03:00
João Moura
753118687d removing logs 2024-09-23 19:59:23 -03:00
João Moura
35e234ed6e removing logging 2024-09-23 19:37:23 -03:00
João Moura
2d54b096af updating tests 2024-09-23 17:45:20 -03:00
João Moura
493f046c03 Checking supports_function_calling instead of gpt models 2024-09-23 16:23:38 -03:00
João Moura
3b6d1838b4 preparing new version 2024-09-23 04:28:26 -03:00
João Moura
769ab940ed ignore type checker 2024-09-23 04:25:13 -03:00
João Moura
498a9e6e68 updating colors 2024-09-23 04:06:10 -03:00
Mr. Guo
699be4887c Fix encoding issue when loading i18n json file (#1341)
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-23 04:01:35 -03:00
João Moura
854c58ded7 updating docs 2024-09-23 03:59:16 -03:00
João Moura
a19a4a5556 Adding new LLM class 2024-09-23 03:59:05 -03:00
João Moura
59e51f18fd updating tests 2024-09-23 03:58:41 -03:00
João Moura
7d981ba8ce adding callbacks to llm 2024-09-23 00:54:01 -03:00
João Moura
6dad33f47c suppressing warning 2024-09-23 00:30:14 -03:00
João Moura
18c3925fa3 implementing initial LLM class 2024-09-22 22:37:29 -03:00
João Moura
000e2666fb linter 2024-09-22 17:04:40 -03:00
Ayo Ayibiowu
91ff331fec feat(memory): adds support for customizable memory interface (#1339)
* feat(memory): adds support for customizing crew storage

* chore: allow overwriting the crew memory configuration

* docs: update custom storage usage

* fix(lint): use correct syntax

* fix: type check warning

* fix: type check warnings

* fix(test): address agent default failing test

* fix(lint). address type checker error

* Update crew.py

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-22 17:03:23 -03:00
João Moura
e3c7c0185d fixing linting 2024-09-22 16:51:01 -03:00
Arthur Chien
405650840e Fix encoding issue when loading YAML file (#1316)
related to #1270

Co-authored-by: ccw@cht.com.tw <ccw@cht.com.tw>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-22 16:50:50 -03:00
João Moura
1bd188e0d2 updating dependencies 2024-09-22 16:47:57 -03:00
João Moura
9de7aa6377 fix test 2024-09-22 13:57:52 -03:00
FabioPolito24
d4c0a4248c Refactor: Remove redundant task creation in kickoff_for_each_async (#1326)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-09-22 10:42:05 -04:00
João Moura
c4167a5517 respecting OPENAI_MODEL_NAME 2024-09-22 11:20:54 -03:00
João Moura
c055c35361 bringing back gpt-4o-mini as default 2024-09-22 11:15:17 -03:00
Rip&Tear
a318a226de Merge pull request #1335 from lloydchang/patch-1
docs(Start-a-New-CrewAI-Project-Template-Method.md): fix typo
2024-09-22 18:11:18 +08:00
lloydchang
e88cb2fea6 docs(Start-a-New-CrewAI-Project-Template-Method.md): fix typo
agents → tasks
2024-09-18 03:17:22 -07:00
João Moura
0ab072a95e preparing new version 2024-09-18 04:36:05 -03:00
João Moura
5e8322b272 printing max rpm message in different color 2024-09-18 04:35:18 -03:00
João Moura
5a3b888f43 Updating all cassettes 2024-09-18 04:17:41 -03:00
João Moura
d7473edb41 always ending on a user message 2024-09-18 04:17:20 -03:00
João Moura
d125c85a2b updating dependencies 2024-09-18 03:26:46 -03:00
João Moura
b46e663778 preparing new version 2024-09-18 03:24:20 -03:00
João Moura
2787c9b0ef quick bug fixes 2024-09-18 03:22:56 -03:00
João Moura
e77442cf34 Removing LangChain and Rebuilding Executor (#1322)
* rebuilding executor

* removing langchain

* Making all tests good

* fixing types and adding the ability to not use system prompts

* improving types

* pleasing the types gods

* pleasing the types gods

* fixing parser, tools and executor

* making sure all tests pass

* final pass

* fixing type

* Updating Docs

* preparing to cut new version
2024-09-16 14:14:04 -03:00
257 changed files with 81153 additions and 1620868 deletions

.gitignore (vendored, 3 lines changed)

@@ -2,6 +2,7 @@
.pytest_cache
__pycache__
dist/
lib/
.env
assets/*
.idea
@@ -15,4 +16,4 @@ rc-tests/*
*.pkl
temp/*
.vscode/*
crew_tasks_output.json
crew_tasks_output.json

README.md (290 lines changed)

@@ -1,10 +1,10 @@
<div align="center">
![Logo of crewAI, two people rowing on a boat](./docs/crewai_logo.png)
![Logo of CrewAI, two people rowing on a boat](./docs/crewai_logo.png)
# **crewAI**
# **CrewAI**
🤖 **crewAI**: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
🤖 **CrewAI**: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
<h3>
@@ -44,100 +44,222 @@ To get started with CrewAI, follow these simple steps:
### 1. Installation
Ensure you have Python >=3.10 <=3.13 installed on your system. CrewAI uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install Poetry:
```bash
pip install poetry
```
Then, install CrewAI:
```shell
pip install crewai
```
If you want to install the 'crewai' package along with its optional features that include additional tools for agents, you can do so by using the following command: pip install 'crewai[tools]'. This command installs the basic package and also adds extra components which require more dependencies to function."
If you want to install the 'crewai' package along with its optional features that include additional tools for agents, you can do so by using the following command:
```shell
pip install 'crewai[tools]'
```
The command above installs the basic package and also adds extra components which require more dependencies to function.
### 2. Setting Up Your Crew
### 2. Setting Up Your Crew with the YAML Configuration
To create a new CrewAI project, run the following CLI (Command Line Interface) command:
```shell
crewai create crew <project_name>
```
This command creates a new project folder with the following structure:
```
my_project/
├── .gitignore
├── pyproject.toml
├── README.md
├── .env
└── src/
    └── my_project/
        ├── __init__.py
        ├── main.py
        ├── crew.py
        ├── tools/
        │   ├── custom_tool.py
        │   └── __init__.py
        └── config/
            ├── agents.yaml
            └── tasks.yaml
```
You can now start developing your crew by editing the files in the `src/my_project` folder. The `main.py` file is the entry point of the project, the `crew.py` file is where you define your crew, the `agents.yaml` file is where you define your agents, and the `tasks.yaml` file is where you define your tasks.
#### To customize your project, you can:
- Modify `src/my_project/config/agents.yaml` to define your agents.
- Modify `src/my_project/config/tasks.yaml` to define your tasks.
- Modify `src/my_project/crew.py` to add your own logic, tools, and specific arguments.
- Modify `src/my_project/main.py` to add custom inputs for your agents and tasks.
- Add your environment variables into the `.env` file.
#### Example of a simple crew with a sequential process:
Instantiate your crew:
```shell
crewai create crew latest-ai-development
```
Modify the files as needed to fit your use case:
**agents.yaml**
```yaml
# src/my_project/config/agents.yaml
researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} data analysis and research findings
  backstory: >
    You're a meticulous analyst with a keen eye for detail. You're known for
    your ability to turn complex data into clear and concise reports, making
    it easy for others to understand and act on the information you provide.
```
**tasks.yaml**
```yaml
# src/my_project/config/tasks.yaml
research_task:
  description: >
    Conduct a thorough research about {topic}
    Make sure you find any interesting and relevant information given
    the current year is 2024.
  expected_output: >
    A list with 10 bullet points of the most relevant information about {topic}
  agent: researcher

reporting_task:
  description: >
    Review the context you got and expand each topic into a full section for a report.
    Make sure the report is detailed and contains any and all relevant information.
  expected_output: >
    A fully fledged report with the main topics, each with a full section of information.
    Formatted as markdown without '```'
  agent: reporting_analyst
  output_file: report.md
```
**crew.py**
```python
import os
from crewai import Agent, Task, Crew, Process

# src/my_project/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool

os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key

@CrewBase
class LatestAiDevelopmentCrew():
    """LatestAiDevelopment crew"""

# You can choose to use a local model through Ollama for example. See https://docs.crewai.com/how-to/LLM-Connections/ for more information.

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            tools=[SerperDevTool()]
        )

# os.environ["OPENAI_API_BASE"] = 'http://localhost:11434/v1'
# os.environ["OPENAI_MODEL_NAME"] ='openhermes' # Adjust based on available model
# os.environ["OPENAI_API_KEY"] ='sk-111111111111111111111111111111111111111111111111'

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            verbose=True
        )

# You can pass an optional llm attribute specifying what model you wanna use.
# It can be a local model through Ollama / LM Studio or a remote
# model like OpenAI, Mistral, Antrophic or others (https://docs.crewai.com/how-to/LLM-Connections/)
# If you don't specify a model, the default is OpenAI gpt-4o
#
# import os
# os.environ['OPENAI_MODEL_NAME'] = 'gpt-3.5-turbo'
#
# OR
#
# from langchain_openai import ChatOpenAI

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task'],
        )

search_tool = SerperDevTool()

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config['reporting_task'],
            output_file='report.md'
        )

# Define your agents with roles and goals
researcher = Agent(
    role='Senior Research Analyst',
    goal='Uncover cutting-edge developments in AI and data science',
    backstory="""You work at a leading tech think tank.
    Your expertise lies in identifying emerging trends.
    You have a knack for dissecting complex data and presenting actionable insights.""",
    verbose=True,
    allow_delegation=False,
    # You can pass an optional llm attribute specifying what model you wanna use.
    # llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7),
    tools=[search_tool]
)

writer = Agent(
    role='Tech Content Strategist',
    goal='Craft compelling content on tech advancements',
    backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
    You transform complex concepts into compelling narratives.""",
    verbose=True,
    allow_delegation=True
)

# Create tasks for your agents
task1 = Task(
    description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
    Identify key trends, breakthrough technologies, and potential industry impacts.""",
    expected_output="Full analysis report in bullet points",
    agent=researcher
)

task2 = Task(
    description="""Using the insights provided, develop an engaging blog
    post that highlights the most significant AI advancements.
    Your post should be informative yet accessible, catering to a tech-savvy audience.
    Make it sound cool, avoid complex words so it doesn't sound like AI.""",
    expected_output="Full blog post of at least 4 paragraphs",
    agent=writer
)

# Instantiate your crew with a sequential process
crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    verbose=True,
    process = Process.sequential
)

# Get your crew to work!
result = crew.kickoff()
print("######################")
print(result)

    @crew
    def crew(self) -> Crew:
        """Creates the LatestAiDevelopment crew"""
        return Crew(
            agents=self.agents, # Automatically created by the @agent decorator
            tasks=self.tasks, # Automatically created by the @task decorator
            process=Process.sequential,
            verbose=True,
        )
```
**main.py**
```python
#!/usr/bin/env python
# src/my_project/main.py
import sys

from latest_ai_development.crew import LatestAiDevelopmentCrew


def run():
    """
    Run the crew.
    """
    inputs = {
        'topic': 'AI Agents'
    }
    LatestAiDevelopmentCrew().crew().kickoff(inputs=inputs)
```
### 3. Running Your Crew
Before running your crew, make sure you have the following keys set as environment variables in your `.env` file:
- An [OpenAI API key](https://platform.openai.com/account/api-keys) (or other LLM API key): `OPENAI_API_KEY=sk-...`
- A [Serper.dev](https://serper.dev/) API key: `SERPER_API_KEY=YOUR_KEY_HERE`
Lock the dependencies and install them using the CLI command. But first, navigate to your project directory:
```shell
cd my_project
crewai install
```
To run your crew, execute the following command in the root of your project:
```bash
crewai run
```
or
```bash
python src/my_project/main.py
```
You should see the output in the console and the `report.md` file should be created in the root of your project with the full final report.
In addition to the sequential process, you can use the hierarchical process, which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results. [See more about the processes here](https://docs.crewai.com/core-concepts/Processes/).
## Key Features
@@ -148,13 +270,13 @@ In addition to the sequential process, you can use the hierarchical process, whi
- **Processes Driven**: Currently only supports `sequential` task execution and `hierarchical` processes, but more complex processes like consensual and autonomous are being worked on.
- **Save output as file**: Save the output of individual tasks as a file, so you can use it later.
- **Parse output as Pydantic or Json**: Parse the output of individual tasks as a Pydantic model or as a Json if you want to.
- **Works with Open Source Models**: Run your crew using Open AI or open source models refer to the [Connect crewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models, even ones running locally!
- **Works with Open Source Models**: Run your crew using Open AI or open source models refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models, even ones running locally!
![CrewAI Mind Map](./docs/crewAI-mindmap.png "CrewAI Mind Map")
## Examples
You can test different real life examples of AI crews in the [crewAI-examples repo](https://github.com/crewAIInc/crewAI-examples?tab=readme-ov-file):
You can test different real life examples of AI crews in the [CrewAI-examples repo](https://github.com/crewAIInc/crewAI-examples?tab=readme-ov-file):
- [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/landing_page_generator)
- [Having Human input on the execution](https://docs.crewai.com/how-to/Human-Input-on-Execution)
@@ -185,9 +307,9 @@ You can test different real life examples of AI crews in the [crewAI-examples re
## Connecting Your Crew to a Model
crewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
CrewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
Please refer to the [Connect crewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring you agents' connections to models.
Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring you agents' connections to models.
## How CrewAI Compares
@@ -258,7 +380,7 @@ It's pivotal to understand that **NO data is collected** concerning prompts, tas
Data collected includes:
- Version of crewAI
- Version of CrewAI
- So we can understand how many users are using the latest version
- Version of Python
- So we can decide on what versions to better support
@@ -283,7 +405,7 @@ Users can opt-in to Further Telemetry, sharing the complete telemetry data by se
## License
CrewAI is released under the MIT License.
CrewAI is released under the [MIT License](https://github.com/crewAIInc/crewAI/blob/main/LICENSE).
## Frequently Asked Questions (FAQ)
@@ -316,7 +438,7 @@ A: Yes, CrewAI is open-source and welcomes contributions from the community.
A: CrewAI uses anonymous telemetry to collect usage data for improvement purposes. No sensitive data (like prompts, task descriptions, or API calls) is collected. Users can opt-in to share more detailed data by setting `share_crew=True` on their Crews.
### Q: Where can I find examples of CrewAI in action?
A: You can find various real-life examples in the [crewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.
A: You can find various real-life examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.
### Q: How can I contribute to CrewAI?
A: Contributions are welcome! You can fork the repository, create a new branch for your feature, add your improvement, and send a pull request. Check the Contribution section in the README for more details.


@@ -11,31 +11,33 @@ description: What are crewAI Agents and how to use them.
<li class='leading-3'>Make decisions</li>
<li class='leading-3'>Communicate with other agents</li>
</ul>
<br/>
<br/>
Think of an agent as a member of a team, with specific skills and a particular job to do. Agents can have different roles like 'Researcher', 'Writer', or 'Customer Support', each contributing to the overall goal of the crew.
## Agent Attributes
| Attribute | Parameter | Description |
| :------------------------- | :---- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Role** | `role` | Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for. |
| **Goal** | `goal` | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |
| **Backstory** | `backstory` | Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics. |
| **LLM** *(optional)* | `llm` | Represents the language model that will run the agent. It dynamically fetches the model name from the `OPENAI_MODEL_NAME` environment variable, defaulting to "gpt-4" if not specified. |
| **Tools** *(optional)* | `tools` | Set of capabilities or functions that the agent can use to perform tasks. Expected to be instances of custom classes compatible with the agent's execution environment. Tools are initialized with a default value of an empty list. |
| :------------------------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Role** | `role` | Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for. |
| **Goal** | `goal` | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |
| **Backstory** | `backstory`| Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics. |
| **LLM** *(optional)* | `llm` | Represents the language model that will run the agent. It dynamically fetches the model name from the `OPENAI_MODEL_NAME` environment variable, defaulting to "gpt-4" if not specified. |
| **Tools** *(optional)* | `tools` | Set of capabilities or functions that the agent can use to perform tasks. Expected to be instances of custom classes compatible with the agent's execution environment. Tools are initialized with a default value of an empty list. |
| **Function Calling LLM** *(optional)* | `function_calling_llm` | Specifies the language model that will handle the tool calling for this agent, overriding the crew function calling LLM if passed. Default is `None`. |
| **Max Iter** *(optional)* | `max_iter` | Max Iter is the maximum number of iterations the agent can perform before being forced to give its best answer. Default is `25`. |
| **Max RPM** *(optional)* | `max_rpm` | Max RPM is the maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. |
| **Max Execution Time** *(optional)* | `max_execution_time` | Max Execution Time is the maximum execution time for an agent to execute a task. It's optional and can be left unspecified, with a default value of `None`, meaning no max execution time. |
| **Verbose** *(optional)* | `verbose` | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `True`. |
| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `False`.
| **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
| **Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
| **System Template** *(optional)* | `system_template` | Specifies the system format for the agent. Default is `None`. |
| **Prompt Template** *(optional)* | `prompt_template` | Specifies the prompt format for the agent. Default is `None`. |
| **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. |
| **Allow Code Execution** *(optional)* | `allow_code_execution` | Enable code execution for the agent. Default is `False`. |
| **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`. |
| **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`.
| **Use System Prompt** *(optional)* | `use_system_prompt` | Adds the ability to not use system prompt (to support o1 models). Default is `True`. |
| **Respect Context Window** *(optional)* | `respect_context_window` | Summary strategy to avoid overflowing the context window. Default is `True`. |
## Creating an Agent
@@ -63,7 +65,7 @@ agent = Agent(
    max_rpm=None, # Optional
    max_execution_time=None, # Optional
    verbose=True, # Optional
    allow_delegation=True, # Optional
    allow_delegation=False, # Optional
    step_callback=my_intermediate_step_callback, # Optional
    cache=True, # Optional
    system_template=my_system_template, # Optional
@@ -74,8 +76,10 @@ agent = Agent(
    tools_handler=my_tools_handler, # Optional
    cache_handler=my_cache_handler, # Optional
    callbacks=[callback1, callback2], # Optional
    allow_code_execution=True, # Optiona
    allow_code_execution=True, # Optional
    max_retry_limit=2, # Optional
    use_system_prompt=True, # Optional
    respect_context_window=True, # Optional
)
```
@@ -105,7 +109,7 @@ agent = Agent(
BaseAgent includes attributes and methods required to integrate with your crews to run and delegate tasks to other agents within your own crew.
CrewAI is a universal multi agent framework that allows for all agents to work together to automate tasks and solve problems.
CrewAI is a universal multi-agent framework that allows for all agents to work together to automate tasks and solve problems.
```py


@@ -85,20 +85,20 @@ Example:
crewai replay -t task_123456
```
### 5. log_tasks_outputs
### 5. log-tasks-outputs
Retrieve your latest crew.kickoff() task outputs.
```
crewai log_tasks_outputs
crewai log-tasks-outputs
```
### 6. reset_memories
### 6. reset-memories
Reset the crew memories (long, short, entity, latest_crew_kickoff_outputs).
```
crewai reset_memories [OPTIONS]
crewai reset-memories [OPTIONS]
```
- `-l, --long`: Reset LONG TERM memory
@@ -109,8 +109,8 @@ crewai reset_memories [OPTIONS]
Example:
```
crewai reset_memories --long --short
crewai reset_memories --all
crewai reset-memories --long --short
crewai reset-memories --all
```
### 7. test


@@ -27,7 +27,7 @@ The `Crew` class has been enriched with several attributes to support advanced f
- **Memory Usage (`memory`)**: Indicates whether the crew should use memory to store memories of its execution, enhancing task execution and agent learning.
- **Embedder Configuration (`embedder`)**: Specifies the configuration for the embedder to be used by the crew for understanding and generating language. This attribute supports customization of the language model provider.
- **Cache Management (`cache`)**: Determines whether the crew should use a cache to store the results of tool executions, optimizing performance.
- **Output Logging (`output_log_file`)**: Specifies the file path for logging the output of the crew execution.
- **Output Logging (`output_log_file`)**: Specifies the file path for logging the output of the crew's execution.
- **Planning Mode (`planning`)**: Allows crews to plan their actions before executing tasks by setting `planning=True` when creating the `Crew` instance. This feature enhances coordination and efficiency.
- **Replay Feature**: Introduces a new CLI for listing tasks from the last run and replaying from a specific task, enhancing task management and troubleshooting.


@@ -13,18 +13,18 @@ A crew in crewAI represents a collaborative group of agents working together to
| :------------------------------------ | :--------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Tasks** | `tasks` | A list of tasks assigned to the crew. |
| **Agents** | `agents` | A list of agents that are part of the crew. |
| **Process** _(optional)_ | `process` | The process flow (e.g., sequential, hierarchical) the crew follows. |
| **Verbose** _(optional)_ | `verbose` | The verbosity level for logging during execution. |
| **Process** _(optional)_ | `process` | The process flow (e.g., sequential, hierarchical) the crew follows. Default is `sequential`. |
| **Verbose** _(optional)_ | `verbose` | The verbosity level for logging during execution. Defaults to `False`. |
| **Manager LLM** _(optional)_ | `manager_llm` | The language model used by the manager agent in a hierarchical process. **Required when using a hierarchical process.** |
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** _(optional)_ | `config` | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. |
| **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
| **Language** _(optional)_ | `language` | Language used for the crew, defaults to English. |
| **Language File** _(optional)_ | `language_file` | Path to the language file to be used for the crew. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. |
| **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). Defaults to `False`. |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
| **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. Defaults to `False`. |
| **Step Callback** _(optional)_ | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** _(optional)_ | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
@@ -38,65 +38,6 @@ A crew in crewAI represents a collaborative group of agents working together to
!!! note "Crew Max RPM"
The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
## Creating a Crew
When assembling a crew, you combine agents with complementary roles and tools, assign tasks, and select a process that dictates their execution order and interaction.
### Example: Assembling a Crew
```python
from crewai import Crew, Agent, Task, Process
from langchain_community.tools import DuckDuckGoSearchRun
from crewai_tools import tool

@tool('DuckDuckGoSearch')
def search(search_query: str):
    """Search the web for information on a given topic"""
    return DuckDuckGoSearchRun().run(search_query)

# Define agents with specific roles and tools
researcher = Agent(
    role='Senior Research Analyst',
    goal='Discover innovative AI technologies',
    backstory="""You're a senior research analyst at a large company.
    You're responsible for analyzing data and providing insights
    to the business.
    You're currently working on a project to analyze the
    trends and innovations in the space of artificial intelligence.""",
    tools=[search]
)

writer = Agent(
    role='Content Writer',
    goal='Write engaging articles on AI discoveries',
    backstory="""You're a senior writer at a large company.
    You're responsible for creating content to the business.
    You're currently working on a project to write about trends
    and innovations in the space of AI for your next meeting.""",
    verbose=True
)

# Create tasks for the agents
research_task = Task(
    description='Identify breakthrough AI technologies',
    agent=researcher,
    expected_output='A bullet list summary of the top 5 most important AI news'
)

write_article_task = Task(
    description='Draft an article on the latest AI technologies',
    agent=writer,
    expected_output='3 paragraph blog post on the latest AI technologies'
)

# Assemble the crew with a sequential process
my_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_article_task],
    process=Process.sequential,
    full_output=True,
    verbose=True,
)
```
## Crew Output

docs/core-concepts/Flows.md (new file, 627 lines)

@@ -0,0 +1,627 @@
# CrewAI Flows
## Introduction
CrewAI Flows is a powerful feature designed to streamline the creation and management of AI workflows. Flows allow developers to combine and coordinate coding tasks and Crews efficiently, providing a robust framework for building sophisticated AI automations.
Flows allow you to create structured, event-driven workflows. They provide a seamless way to connect multiple tasks, manage state, and control the flow of execution in your AI applications. With Flows, you can easily design and implement multi-step processes that leverage the full potential of CrewAI's capabilities.
1. **Simplified Workflow Creation**: Easily chain together multiple Crews and tasks to create complex AI workflows.
2. **State Management**: Flows make it super easy to manage and share state between different tasks in your workflow.
3. **Event-Driven Architecture**: Built on an event-driven model, allowing for dynamic and responsive workflows.
4. **Flexible Control Flow**: Implement conditional logic, loops, and branching within your workflows.
## Getting Started
Let's create a simple Flow where you will use OpenAI to generate a random city in one task and then use that city to generate a fun fact in another task.
```python
import asyncio

from crewai.flow.flow import Flow, listen, start
from litellm import completion


class ExampleFlow(Flow):
    model = "gpt-4o-mini"

    @start()
    def generate_city(self):
        print("Starting flow")

        response = completion(
            model=self.model,
            messages=[
                {
                    "role": "user",
                    "content": "Return the name of a random city in the world.",
                },
            ],
        )

        random_city = response["choices"][0]["message"]["content"]
        print(f"Random City: {random_city}")

        return random_city

    @listen(generate_city)
    def generate_fun_fact(self, random_city):
        response = completion(
            model=self.model,
            messages=[
                {
                    "role": "user",
                    "content": f"Tell me a fun fact about {random_city}",
                },
            ],
        )

        fun_fact = response["choices"][0]["message"]["content"]
        return fun_fact


async def main():
    flow = ExampleFlow()
    result = await flow.kickoff()

    print(f"Generated fun fact: {result}")


asyncio.run(main())
```
In the above example, we have created a simple Flow that generates a random city using OpenAI and then generates a fun fact about that city. The Flow consists of two tasks: `generate_city` and `generate_fun_fact`. The `generate_city` task is the starting point of the Flow, and the `generate_fun_fact` task listens for the output of the `generate_city` task.
When you run the Flow, it will generate a random city and then generate a fun fact about that city. The output will be printed to the console.
### @start()
The `@start()` decorator is used to mark a method as the starting point of a Flow. When a Flow is started, all the methods decorated with `@start()` are executed in parallel. You can have multiple start methods in a Flow, and they will all be executed when the Flow is started.
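For example, a flow with two start points might look like the following minimal sketch (the class and method names here are illustrative, not taken from the CrewAI codebase):

```python
from crewai.flow.flow import Flow, listen, start


class MultiStartFlow(Flow):
    # Both start methods run in parallel when the flow is kicked off.
    @start()
    def fetch_weather(self):
        return "sunny"

    @start()
    def fetch_news(self):
        return "AI headlines"

    # Fires once fetch_weather completes; fetch_news runs independently.
    @listen(fetch_weather)
    def report_weather(self, weather):
        print(f"Weather: {weather}")
```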
### @listen()
The `@listen()` decorator is used to mark a method as a listener for the output of another task in the Flow. The method decorated with `@listen()` will be executed when the specified task emits an output. The method can access the output of the task it is listening to as an argument.
#### Usage
The `@listen()` decorator can be used in several ways:
1. **Listening to a Method by Name**: You can pass the name of the method you want to listen to as a string. When that method completes, the listener method will be triggered.
```python
@listen("generate_city")
def generate_fun_fact(self, random_city):
    # Implementation
```
2. **Listening to a Method Directly**: You can pass the method itself. When that method completes, the listener method will be triggered.
```python
@listen(generate_city)
def generate_fun_fact(self, random_city):
    # Implementation
```
### Flow Output
Accessing and handling the output of a Flow is essential for integrating your AI workflows into larger applications or systems. CrewAI Flows provide straightforward mechanisms to retrieve the final output, access intermediate results, and manage the overall state of your Flow.
#### Retrieving the Final Output
When you run a Flow, the final output is determined by the last method that completes. The `kickoff()` method returns the output of this final method.
Here's how you can access the final output:
```python
import asyncio

from crewai.flow.flow import Flow, listen, start


class OutputExampleFlow(Flow):
    @start()
    def first_method(self):
        return "Output from first_method"

    @listen(first_method)
    def second_method(self, first_output):
        return f"Second method received: {first_output}"


async def main():
    flow = OutputExampleFlow()
    final_output = await flow.kickoff()

    print("---- Final Output ----")
    print(final_output)


asyncio.run(main())
```
In this example, the `second_method` is the last method to complete, so its output will be the final output of the Flow. The `kickoff()` method will return this final output, which is then printed to the console.
The output of the Flow will be:
```
---- Final Output ----
Second method received: Output from first_method
```
#### Accessing and Updating State
In addition to retrieving the final output, you can also access and update the state within your Flow. The state can be used to store and share data between different methods in the Flow. After the Flow has run, you can access the state to retrieve any information that was added or updated during the execution.
Here's an example of how to update and access the state:
```python
import asyncio

from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel


class ExampleState(BaseModel):
    counter: int = 0
    message: str = ""


class StateExampleFlow(Flow[ExampleState]):

    @start()
    def first_method(self):
        self.state.message = "Hello from first_method"
        self.state.counter += 1

    @listen(first_method)
    def second_method(self):
        self.state.message += " - updated by second_method"
        self.state.counter += 1
        return self.state.message


async def main():
    flow = StateExampleFlow()
    final_output = await flow.kickoff()
    print(f"Final Output: {final_output}")
    print("Final State:")
    print(flow.state)


asyncio.run(main())
```
In this example, the state is updated by both `first_method` and `second_method`. After the Flow has run, you can access the final state to see the updates made by these methods.
The output of the Flow will be:
```
Final Output: Hello from first_method - updated by second_method
Final State:
counter=2 message='Hello from first_method - updated by second_method'
```
By ensuring that the final method's output is returned and providing access to the state, CrewAI Flows make it easy to integrate the results of your AI workflows into larger applications or systems, while also maintaining and accessing the state throughout the Flow's execution.
## Flow State Management
Managing state effectively is crucial for building reliable and maintainable AI workflows. CrewAI Flows provides robust mechanisms for both unstructured and structured state management, allowing developers to choose the approach that best fits their application's needs.
### Unstructured State Management
In unstructured state management, all state is stored in the `state` attribute of the `Flow` class. This approach offers flexibility, enabling developers to add or modify state attributes on the fly without defining a strict schema.
```python
import asyncio

from crewai.flow.flow import Flow, listen, start


class UnstructuredExampleFlow(Flow):

    @start()
    def first_method(self):
        self.state.message = "Hello from unstructured flow"
        self.state.counter = 0

    @listen(first_method)
    def second_method(self):
        self.state.counter += 1
        self.state.message += " - updated"

    @listen(second_method)
    def third_method(self):
        self.state.counter += 1
        self.state.message += " - updated again"

        print(f"State after third_method: {self.state}")


async def main():
    flow = UnstructuredExampleFlow()
    await flow.kickoff()


asyncio.run(main())
```
**Key Points:**
- **Flexibility:** You can dynamically add attributes to `self.state` without predefined constraints.
- **Simplicity:** Ideal for straightforward workflows where state structure is minimal or varies significantly.
### Structured State Management
Structured state management leverages predefined schemas to ensure consistency and type safety across the workflow. By using models like Pydantic's `BaseModel`, developers can define the exact shape of the state, enabling better validation and auto-completion in development environments.
```python
import asyncio

from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel


class ExampleState(BaseModel):
    counter: int = 0
    message: str = ""


class StructuredExampleFlow(Flow[ExampleState]):

    @start()
    def first_method(self):
        self.state.message = "Hello from structured flow"

    @listen(first_method)
    def second_method(self):
        self.state.counter += 1
        self.state.message += " - updated"

    @listen(second_method)
    def third_method(self):
        self.state.counter += 1
        self.state.message += " - updated again"

        print(f"State after third_method: {self.state}")


async def main():
    flow = StructuredExampleFlow()
    await flow.kickoff()


asyncio.run(main())
```
**Key Points:**
- **Defined Schema:** `ExampleState` clearly outlines the state structure, enhancing code readability and maintainability.
- **Type Safety:** Leveraging Pydantic ensures that state attributes adhere to the specified types, reducing runtime errors.
- **Auto-Completion:** IDEs can provide better auto-completion and error checking based on the defined state model.
### Choosing Between Unstructured and Structured State Management
- **Use Unstructured State Management when:**
- The workflow's state is simple or highly dynamic.
- Flexibility is prioritized over strict state definitions.
- Rapid prototyping is required without the overhead of defining schemas.
- **Use Structured State Management when:**
- The workflow requires a well-defined and consistent state structure.
- Type safety and validation are important for your application's reliability.
- You want to leverage IDE features like auto-completion and type checking for better developer experience.
By providing both unstructured and structured state management options, CrewAI Flows empowers developers to build AI workflows that are both flexible and robust, catering to a wide range of application requirements.
## Flow Control
### Conditional Logic
#### or
The `or_` function in Flows allows you to listen to multiple methods and trigger the listener method when any of the specified methods emit an output.
```python
import asyncio

from crewai.flow.flow import Flow, listen, or_, start


class OrExampleFlow(Flow):

    @start()
    def start_method(self):
        return "Hello from the start method"

    @listen(start_method)
    def second_method(self):
        return "Hello from the second method"

    @listen(or_(start_method, second_method))
    def logger(self, result):
        print(f"Logger: {result}")


async def main():
    flow = OrExampleFlow()
    await flow.kickoff()


asyncio.run(main())
```
When you run this Flow, the `logger` method will be triggered by the output of either the `start_method` or the `second_method`. The `or_` function is used to listen to multiple methods and trigger the listener method when any of the specified methods emit an output.
The output of the Flow will be:
```
Logger: Hello from the start method
Logger: Hello from the second method
```
#### and
The `and_` function in Flows allows you to listen to multiple methods and trigger the listener method only when all the specified methods emit an output.
```python
import asyncio

from crewai.flow.flow import Flow, and_, listen, start


class AndExampleFlow(Flow):

    @start()
    def start_method(self):
        self.state["greeting"] = "Hello from the start method"

    @listen(start_method)
    def second_method(self):
        self.state["joke"] = "What do computers eat? Microchips."

    @listen(and_(start_method, second_method))
    def logger(self):
        print("---- Logger ----")
        print(self.state)


async def main():
    flow = AndExampleFlow()
    await flow.kickoff()


asyncio.run(main())
```
When you run this Flow, the `logger` method will be triggered only when both the `start_method` and the `second_method` emit an output. The `and_` function is used to listen to multiple methods and trigger the listener method only when all the specified methods emit an output.
The output of the Flow will be:
```
---- Logger ----
{'greeting': 'Hello from the start method', 'joke': 'What do computers eat? Microchips.'}
```
### Router
The `@router()` decorator in Flows allows you to define conditional routing logic based on the output of a method. You can specify different routes based on the output of the method, allowing you to control the flow of execution dynamically.
```python
import asyncio
import random

from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel


class ExampleState(BaseModel):
    success_flag: bool = False


class RouterFlow(Flow[ExampleState]):

    @start()
    def start_method(self):
        print("Starting the structured flow")
        random_boolean = random.choice([True, False])
        self.state.success_flag = random_boolean

    @router(start_method)
    def second_method(self):
        if self.state.success_flag:
            return "success"
        else:
            return "failed"

    @listen("success")
    def third_method(self):
        print("Third method running")

    @listen("failed")
    def fourth_method(self):
        print("Fourth method running")


async def main():
    flow = RouterFlow()
    await flow.kickoff()


asyncio.run(main())
```
In the above example, the `start_method` generates a random boolean value and sets it in the state. The `second_method` uses the `@router()` decorator to define conditional routing logic based on the value of the boolean. If the boolean is `True`, the method returns `"success"`, and if it is `False`, the method returns `"failed"`. The `third_method` and `fourth_method` listen to the output of the `second_method` and execute based on the returned value.
When you run this Flow, the output will change based on the random boolean value generated by the `start_method`, but you should see an output similar to the following:
```
Starting the structured flow
Third method running
```
## Adding Crews to Flows
Creating a flow with multiple crews in CrewAI is straightforward. You can generate a new CrewAI project that includes all the scaffolding needed to create a flow with multiple crews by running the following command:
```bash
crewai create flow name_of_flow
```
This command will generate a new CrewAI project with the necessary folder structure. The generated project includes a prebuilt crew called `poem_crew` that is already working. You can use this crew as a template by copying, pasting, and editing it to create other crews.
### Folder Structure
After running the `crewai create flow name_of_flow` command, you will see a folder structure similar to the following:
```
name_of_flow/
├── crews/
│   └── poem_crew/
│       ├── config/
│       │   ├── agents.yaml
│       │   └── tasks.yaml
│       ├── poem_crew.py
├── tools/
│   └── custom_tool.py
├── main.py
├── README.md
├── pyproject.toml
└── .gitignore
```
### Building Your Crews
In the `crews` folder, you can define multiple crews. Each crew will have its own folder containing configuration files and the crew definition file. For example, the `poem_crew` folder contains:
- `config/agents.yaml`: Defines the agents for the crew.
- `config/tasks.yaml`: Defines the tasks for the crew.
- `poem_crew.py`: Contains the crew definition, including agents, tasks, and the crew itself.
You can copy, paste, and edit the `poem_crew` to create other crews.
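To make the pattern concrete, here is a rough sketch of what a second crew, copied from `poem_crew` and renamed, might look like. The `HaikuCrew` class, its agent, and its task are hypothetical placeholders that assume matching entries exist in the copied `agents.yaml` and `tasks.yaml`:

```python
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task


@CrewBase
class HaikuCrew:
    """Hypothetical crew created by copying and editing poem_crew."""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def haiku_writer(self) -> Agent:
        # Assumes a `haiku_writer` entry exists in the copied agents.yaml
        return Agent(config=self.agents_config["haiku_writer"])

    @task
    def write_haiku_task(self) -> Task:
        # Assumes a `write_haiku_task` entry exists in the copied tasks.yaml
        return Task(config=self.tasks_config["write_haiku_task"])

    @crew
    def crew(self) -> Crew:
        # Agents and tasks are collected by the decorators above
        return Crew(agents=self.agents, tasks=self.tasks, process=Process.sequential)
```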
### Connecting Crews in `main.py`
The `main.py` file is where you create your flow and connect the crews together. You can define your flow by using the `Flow` class and the decorators `@start` and `@listen` to specify the flow of execution.
Here's an example of how you can connect the `poem_crew` in the `main.py` file:
```python
#!/usr/bin/env python
import asyncio
from random import randint

from pydantic import BaseModel

from crewai.flow.flow import Flow, listen, start

from .crews.poem_crew.poem_crew import PoemCrew


class PoemState(BaseModel):
    sentence_count: int = 1
    poem: str = ""


class PoemFlow(Flow[PoemState]):

    @start()
    def generate_sentence_count(self):
        print("Generating sentence count")
        # Generate a number between 1 and 5
        self.state.sentence_count = randint(1, 5)

    @listen(generate_sentence_count)
    def generate_poem(self):
        print("Generating poem")
        poem_crew = PoemCrew().crew()
        result = poem_crew.kickoff(inputs={"sentence_count": self.state.sentence_count})
        print("Poem generated", result.raw)
        self.state.poem = result.raw

    @listen(generate_poem)
    def save_poem(self):
        print("Saving poem")
        with open("poem.txt", "w") as f:
            f.write(self.state.poem)


async def run():
    """
    Run the flow.
    """
    poem_flow = PoemFlow()
    await poem_flow.kickoff()


def main():
    asyncio.run(run())


if __name__ == "__main__":
    main()
```
In this example, the `PoemFlow` class defines a flow that generates a sentence count, uses the `PoemCrew` to generate a poem, and then saves the poem to a file. The flow is kicked off by calling the `kickoff()` method.
### Running the Flow
Before running the flow, make sure to install the dependencies by running:
```bash
poetry install
```
Once all of the dependencies are installed, you need to activate the virtual environment by running:
```bash
poetry shell
```
After activating the virtual environment, you can run the flow by executing one of the following commands:
```bash
crewai flow run
```
or
```bash
poetry run run_flow
```
The flow will execute, and you should see the output in the console.
## Plot Flows
Visualizing your AI workflows can provide valuable insights into the structure and execution paths of your flows. CrewAI offers a powerful visualization tool that allows you to generate interactive plots of your flows, making it easier to understand and optimize your AI workflows.
### What are Plots?
Plots in CrewAI are graphical representations of your AI workflows. They display the various tasks, their connections, and the flow of data between them. This visualization helps in understanding the sequence of operations, identifying bottlenecks, and ensuring that the workflow logic aligns with your expectations.
### How to Generate a Plot
CrewAI provides two convenient methods to generate plots of your flows:
#### Option 1: Using the `plot()` Method
If you are working directly with a flow instance, you can generate a plot by calling the `plot()` method on your flow object. This method will create an HTML file containing the interactive plot of your flow.
```python
# Assuming you have a flow instance
flow.plot("my_flow_plot")
```
This will generate a file named `my_flow_plot.html` in your current directory. You can open this file in a web browser to view the interactive plot.
#### Option 2: Using the Command Line
If you are working within a structured CrewAI project, you can generate a plot using the command line. This is particularly useful for larger projects where you want to visualize the entire flow setup.
```bash
crewai flow plot
```
This command will generate an HTML file with the plot of your flow, similar to the `plot()` method. The file will be saved in your project directory, and you can open it in a web browser to explore the flow.
### Understanding the Plot
The generated plot will display nodes representing the tasks in your flow, with directed edges indicating the flow of execution. The plot is interactive, allowing you to zoom in and out, and hover over nodes to see additional details.
By visualizing your flows, you can gain a clearer understanding of the workflow's structure, making it easier to debug, optimize, and communicate your AI processes to others.
### Conclusion
Plotting your flows is a powerful feature of CrewAI that enhances your ability to design and manage complex AI workflows. Whether you choose to use the `plot()` method or the command line, generating plots will provide you with a visual representation of your workflows, aiding in both development and presentation.
## Next Steps
If you're interested in exploring additional examples of flows, we have a variety of recommendations in our examples repository. Here are four specific flow examples, each showcasing unique use cases to help you match your current problem type to a specific example:
1. **Email Auto Responder Flow**: This example demonstrates an infinite loop where a background job continually runs to automate email responses. It's a great use case for tasks that need to be performed repeatedly without manual intervention. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/email_auto_responder_flow)
2. **Lead Score Flow**: This flow showcases adding human-in-the-loop feedback and handling different conditional branches using the router. It's an excellent example of how to incorporate dynamic decision-making and human oversight into your workflows. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/lead-score-flow)
3. **Write a Book Flow**: This example excels at chaining multiple crews together, where the output of one crew is used by another. Specifically, one crew outlines an entire book, and another crew generates chapters based on the outline. Eventually, everything is connected to produce a complete book. This flow is perfect for complex, multi-step processes that require coordination between different tasks. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/write_a_book_with_flows)
4. **Meeting Assistant Flow**: This flow demonstrates how to broadcast one event to trigger multiple follow-up actions. For instance, after a meeting is completed, the flow can update a Trello board, send a Slack message, and save the results. It's a great example of handling multiple outcomes from a single event, making it ideal for comprehensive task management and notification systems. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/meeting_assistant_flow)
By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.

docs/core-concepts/LLMs.md (new file, 155 lines)

@@ -0,0 +1,155 @@
# Large Language Models (LLMs) in crewAI
## Introduction
Large Language Models (LLMs) are the backbone of intelligent agents in the crewAI framework. This guide will help you understand, configure, and optimize LLM usage for your crewAI projects.
## Table of Contents
- [Key Concepts](#key-concepts)
- [Configuring LLMs for Agents](#configuring-llms-for-agents)
- [1. Default Configuration](#1-default-configuration)
- [2. String Identifier](#2-string-identifier)
- [3. LLM Instance](#3-llm-instance)
- [4. Custom LLM Objects](#4-custom-llm-objects)
- [Connecting to OpenAI-Compatible LLMs](#connecting-to-openai-compatible-llms)
- [LLM Configuration Options](#llm-configuration-options)
- [Using Ollama (Local LLMs)](#using-ollama-local-llms)
- [Changing the Base API URL](#changing-the-base-api-url)
- [Best Practices](#best-practices)
- [Troubleshooting](#troubleshooting)
## Key Concepts
- **LLM**: Large Language Model, the AI powering agent intelligence
- **Agent**: A crewAI entity that uses an LLM to perform tasks
- **Provider**: A service that offers LLM capabilities (e.g., OpenAI, Anthropic, Ollama, [more providers](https://docs.litellm.ai/docs/providers))
## Configuring LLMs for Agents
crewAI offers flexible options for setting up LLMs:
### 1. Default Configuration
By default, crewAI uses the `gpt-4o-mini` model. If no LLM is specified, configuration is read from the following environment variables:
- `OPENAI_MODEL_NAME` (defaults to "gpt-4o-mini" if not set)
- `OPENAI_API_BASE`
- `OPENAI_API_KEY`
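If you rely on this default configuration, you can set the environment variables in code before constructing your agents. A minimal sketch (the values are placeholders):

```python
import os

# Placeholder values, replace with your own model name and key
os.environ["OPENAI_MODEL_NAME"] = "gpt-4o-mini"
os.environ["OPENAI_API_KEY"] = "your-api-key"
```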
### 2. String Identifier
```python
agent = Agent(llm="gpt-4o", ...)
```
### 3. LLM Instance
See the list of [more providers](https://docs.litellm.ai/docs/providers).
```python
from crewai import LLM
llm = LLM(model="gpt-4", temperature=0.7)
agent = Agent(llm=llm, ...)
```
### 4. Custom LLM Objects
Pass a custom LLM implementation or object from another library.
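As a sketch of this approach, you might pass a LangChain chat model directly. This assumes the `langchain-openai` package is installed and that the agent accepts such an object; treat it as an illustration rather than a guaranteed integration:

```python
from crewai import Agent
from langchain_openai import ChatOpenAI  # assumed to be installed separately

# Hypothetical example of an LLM object from another library
custom_llm = ChatOpenAI(model="gpt-4o", temperature=0.7)

agent = Agent(
    role="Researcher",                    # hypothetical role
    goal="Summarize recent AI research",  # hypothetical goal
    backstory="An experienced analyst.",  # hypothetical backstory
    llm=custom_llm,
)
```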
## Connecting to OpenAI-Compatible LLMs
You can connect to OpenAI-compatible LLMs using either environment variables or by setting specific attributes on the LLM class:
1. Using environment variables:
```python
import os
os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
```
2. Using LLM class attributes:
```python
llm = LLM(
    model="custom-model-name",
    api_key="your-api-key",
    base_url="https://api.your-provider.com/v1"
)
agent = Agent(llm=llm, ...)
```
## LLM Configuration Options
When configuring an LLM for your agent, you have access to a wide range of parameters:
| Parameter | Type | Description |
|-----------|------|-------------|
| `model` | str | The name of the model to use (e.g., "gpt-4", "gpt-3.5-turbo", "ollama/llama3.1", [more providers](https://docs.litellm.ai/docs/providers)) |
| `timeout` | float, int | Maximum time (in seconds) to wait for a response |
| `temperature` | float | Controls randomness in output (0.0 to 1.0) |
| `top_p` | float | Controls diversity of output (0.0 to 1.0) |
| `n` | int | Number of completions to generate |
| `stop` | str, List[str] | Sequence(s) to stop generation |
| `max_tokens` | int | Maximum number of tokens to generate |
| `presence_penalty` | float | Penalizes new tokens based on their presence in the text so far |
| `frequency_penalty` | float | Penalizes new tokens based on their frequency in the text so far |
| `logit_bias` | Dict[int, float] | Modifies likelihood of specified tokens appearing in the completion |
| `response_format` | Dict[str, Any] | Specifies the format of the response (e.g., {"type": "json_object"}) |
| `seed` | int | Sets a random seed for deterministic results |
| `logprobs` | bool | Whether to return log probabilities of the output tokens |
| `top_logprobs` | int | Number of most likely tokens to return the log probabilities for |
| `base_url` | str | The base URL for the API endpoint |
| `api_version` | str | The version of the API to use |
| `api_key` | str | Your API key for authentication |
Example:
```python
llm = LLM(
    model="gpt-4",
    temperature=0.8,
    max_tokens=150,
    top_p=0.9,
    frequency_penalty=0.1,
    presence_penalty=0.1,
    stop=["END"],
    seed=42,
    base_url="https://api.openai.com/v1",
    api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
## Using Ollama (Local LLMs)
crewAI supports using Ollama for running open-source models locally:
1. Install Ollama: [ollama.ai](https://ollama.ai/)
2. Run a model: `ollama run llama3.1`
3. Configure agent:
```python
agent = Agent(
    llm=LLM(model="ollama/llama3.1", base_url="http://localhost:11434"),
    ...
)
```
## Changing the Base API URL
You can change the base API URL for any LLM provider by setting the `base_url` parameter:
```python
llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key"
)
agent = Agent(llm=llm, ...)
```
This is particularly useful when working with OpenAI-compatible APIs or when you need to specify a different endpoint for your chosen provider.
## Best Practices
1. **Choose the right model**: Balance capability and cost.
2. **Optimize prompts**: Clear, concise instructions improve output.
3. **Manage tokens**: Monitor and limit token usage for efficiency.
4. **Use appropriate temperature**: Lower for factual tasks, higher for creative ones; see the sketch after this list.
5. **Implement error handling**: Gracefully manage API errors and rate limits.
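As a rough illustration of the temperature guideline, you might keep two configurations side by side. This is a sketch using the `LLM` class shown earlier, and the values are arbitrary:

```python
from crewai import LLM

# Lower temperature for factual, deterministic answers
factual_llm = LLM(model="gpt-4", temperature=0.2)

# Higher temperature for more varied, creative output
creative_llm = LLM(model="gpt-4", temperature=0.9)
```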
## Troubleshooting
- **API Errors**: Check your API key, network connection, and rate limits.
- **Unexpected Outputs**: Refine your prompts and adjust temperature or top_p.
- **Performance Issues**: Consider using a more powerful model or optimizing your queries.
- **Timeout Errors**: Increase the `timeout` parameter or optimize your input.
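For timeout errors in particular, one simple first step is to raise the `timeout` parameter; a sketch, where 120 seconds is an arbitrary value:

```python
from crewai import LLM

# Allow up to two minutes per request before timing out
llm = LLM(model="gpt-4", timeout=120)
```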


@@ -4,16 +4,17 @@ description: Leveraging memory systems in the crewAI framework to enhance agent
---
## Introduction to Memory Systems in crewAI
!!! note "Enhancing Agent Intelligence"
The crewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents. This system comprises short-term memory, long-term memory, entity memory, and contextual memory, each serving a unique purpose in aiding agents to remember, reason, and learn from past interactions.
## Memory System Components
| Component | Description |
| :------------------- | :----------------------------------------------------------- |
| **Short-Term Memory**| Temporarily stores recent interactions and outcomes, enabling agents to recall and utilize information relevant to their current context during the current executions. |
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. So Agents can remember what they did right and wrong across multiple executions |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. |
| Component | Description |
| :------------------- | :---------------------------------------------------------------------------------------------------------------------- |
| **Short-Term Memory**| Temporarily stores recent interactions and outcomes using `RAG`, enabling agents to recall and utilize information relevant to their current context during the current executions.|
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
| **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
## How Memory Systems Empower Agents
@@ -27,12 +28,12 @@ description: Leveraging memory systems in the crewAI framework to enhance agent
## Implementing Memory in Your Crew
When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of tasks it will perform.
By default, the memory system is disabled, and you can ensure it is active by setting `memory=True` in the crew configuration. The memory will use OpenAI Embeddings by default, but you can change it by setting `embedder` to a different model.
By default, the memory system is disabled, and you can ensure it is active by setting `memory=True` in the crew configuration. The memory will use OpenAI embeddings by default, but you can change it by setting `embedder` to a different model. It's also possible to provide your own memory instance.
The 'embedder' only applies to **Short-Term Memory** which uses Chroma for RAG using EmbedChain package.
The **Long-Term Memory** uses SQLLite3 to store task results. Currently, there is no way to override these storage implementations.
The data storage files are saved into a platform specific location found using the appdirs package
and the name of the project which can be overridden using the **CREWAI_STORAGE_DIR** environment variable.
The 'embedder' only applies to **Short-Term Memory** which uses Chroma for RAG using the EmbedChain package.
The **Long-Term Memory** uses SQLite3 to store task results. Currently, there is no way to override these storage implementations.
The data storage files are saved into a platform-specific location found using the appdirs package,
and the name of the project can be overridden using the **CREWAI_STORAGE_DIR** environment variable.
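For example, you could point the storage location at a custom directory before kicking off your crew; a sketch, where the path is a placeholder:

```python
import os

# Hypothetical override of the default storage location
os.environ["CREWAI_STORAGE_DIR"] = "/my_data_dir/crew_storage"
```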
### Example: Configuring Memory for a Crew
@@ -49,6 +50,45 @@ my_crew = Crew(
)
```
### Example: Using Custom Memory Instances, e.g., FAISS as the VectorDB
```python
from crewai import Crew, Agent, Task, Process

# EnhanceLongTermMemory, EnhanceShortTermMemory, EnhanceEntityMemory,
# LTMSQLiteStorage and CustomRAGStorage are assumed to be defined or
# imported elsewhere; the Enhance* and CustomRAGStorage classes are
# custom user implementations (not shown here).

# Assemble your crew with memory capabilities
my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    long_term_memory=EnhanceLongTermMemory(
        storage=LTMSQLiteStorage(
            db_path="/my_data_dir/my_crew1/long_term_memory_storage.db"
        )
    ),
    short_term_memory=EnhanceShortTermMemory(
        storage=CustomRAGStorage(
            crew_name="my_crew",
            storage_type="short_term",
            data_dir="//my_data_dir",
            model=embedder["model"],
            dimension=embedder["dimension"],
        ),
    ),
    entity_memory=EnhanceEntityMemory(
        storage=CustomRAGStorage(
            crew_name="my_crew",
            storage_type="entities",
            data_dir="//my_data_dir",
            model=embedder["model"],
            dimension=embedder["dimension"],
        ),
    ),
    verbose=True,
)
```
## Additional Embedding Providers
### Using OpenAI embeddings (already default)
@@ -56,17 +96,17 @@ my_crew = Crew(
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "openai",
"config":{
"model": 'text-embedding-3-small'
}
}
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "openai",
"config": {
"model": 'text-embedding-3-small'
}
}
)
```
@@ -75,19 +115,19 @@ my_crew = Crew(
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "google",
"config":{
"model": 'models/embedding-001',
"task_type": "retrieval_document",
"title": "Embeddings for Embedchain"
}
}
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "google",
"config": {
"model": 'models/embedding-001',
"task_type": "retrieval_document",
"title": "Embeddings for Embedchain"
}
}
)
```
@@ -96,18 +136,18 @@ my_crew = Crew(
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "azure_openai",
"config":{
"model": 'text-embedding-ada-002',
"deployment_name": "your_embedding_model_deployment_name"
}
}
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "azure_openai",
"config": {
"model": 'text-embedding-ada-002',
"deployment_name": "your_embedding_model_deployment_name"
}
}
)
```
@@ -116,14 +156,14 @@ my_crew = Crew(
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "gpt4all"
}
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "gpt4all"
}
)
```
@@ -132,17 +172,17 @@ my_crew = Crew(
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "vertexai",
"config":{
"model": 'textembedding-gecko'
}
}
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "vertexai",
"config": {
"model": 'textembedding-gecko'
}
}
)
```
@@ -151,24 +191,24 @@ my_crew = Crew(
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "cohere",
"config":{
"model": "embed-english-v3.0",
"vector_dimension": 1024
}
}
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "cohere",
"config": {
"model": "embed-english-v3.0",
"vector_dimension": 1024
}
}
)
```
### Resetting Memory
```sh
crewai reset_memories [OPTIONS]
crewai reset-memories [OPTIONS]
```
#### Resetting Memory Options


@@ -12,7 +12,7 @@ A pipeline in crewAI represents a structured workflow that allows for the sequen
Understanding the following terms is crucial for working effectively with pipelines:
- **Stage**: A distinct part of the pipeline, which can be either sequential (a single crew) or parallel (multiple crews executing concurrently).
- **Run**: A specific execution of the pipeline for a given set of inputs, representing a single instance of processing through the pipeline.
- **Kickoff**: A specific execution of the pipeline for a given set of inputs, representing a single instance of processing through the pipeline.
- **Branch**: Parallel executions within a stage (e.g., concurrent crew operations).
- **Trace**: The journey of an individual input through the entire pipeline, capturing the path and transformations it undergoes.
@@ -28,13 +28,13 @@ This represents a pipeline with three stages:
2. A parallel stage with two branches (crew2 and crew3 executing concurrently)
3. Another sequential stage (crew4)
Each input creates its own run, flowing through all stages of the pipeline. Multiple runs can be processed concurrently, each following the defined pipeline structure.
Each input creates its own kickoff, flowing through all stages of the pipeline. Multiple kickoffs can be processed concurrently, each following the defined pipeline structure.
## Pipeline Attributes
| Attribute | Parameters | Description |
| :--------- | :--------- | :---------------------------------------------------------------------------------------------- |
| **Stages** | `stages` | A list of crews, lists of crews, or routers representing the stages to be executed in sequence. |
| Attribute | Parameters | Description |
| :--------- | :---------- | :----------------------------------------------------------------------------------------------------------------- |
| **Stages** | `stages` | A list of `PipelineStage` (crews, lists of crews, or routers) representing the stages to be executed in sequence. |
## Creating a Pipeline
@@ -43,7 +43,7 @@ When creating a pipeline, you define a series of stages, each consisting of eith
### Example: Assembling a Pipeline
```python
from crewai import Crew, Agent, Task, Pipeline
from crewai import Crew, Process, Pipeline
# Define your crews
research_crew = Crew(
@@ -74,7 +74,8 @@ my_pipeline = Pipeline(
| Method | Description |
| :--------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **process_runs** | Executes the pipeline, processing all stages and returning the results. This method initiates one or more runs through the pipeline, handling the flow of data between stages. |
| **kickoff** | Executes the pipeline, processing all stages and returning the results. This method initiates one or more kickoffs through the pipeline, handling the flow of data between stages. |
| **process_runs** | Runs the pipeline for each input provided, handling the flow and transformation of data between stages. |
## Pipeline Output
@@ -99,12 +100,12 @@ The output of a pipeline in the crewAI framework is encapsulated within the `Pip
| Attribute | Parameters | Type | Description |
| :---------------- | :-------------- | :------------------------- | :-------------------------------------------------------------------------------------------- |
| **ID** | `id` | `UUID4` | A unique identifier for the run result. |
| **Raw** | `raw` | `str` | The raw output of the final stage in the pipeline run. |
| **Pydantic** | `pydantic` | `Optional[BaseModel]` | A Pydantic model object representing the structured output of the final stage, if applicable. |
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the final stage, if applicable. |
| **Token Usage** | `token_usage` | `Dict[str, Any]` | A summary of token usage across all stages of the pipeline run. |
| **Trace** | `trace` | `List[Any]` | A trace of the journey of inputs through the pipeline run. |
| **Crews Outputs** | `crews_outputs` | `List[CrewOutput]` | A list of `CrewOutput` objects, representing the outputs from each crew in the pipeline run. |
| **Raw** | `raw` | `str` | The raw output of the final stage in the pipeline kickoff. |
| **Pydantic** | `pydantic` | `Any` | A Pydantic model object representing the structured output of the final stage, if applicable. |
| **JSON Dict** | `json_dict` | `Union[Dict[str, Any], None]` | A dictionary representing the JSON output of the final stage, if applicable. |
| **Token Usage** | `token_usage` | `Dict[str, UsageMetrics]` | A summary of token usage across all stages of the pipeline kickoff. |
| **Trace** | `trace` | `List[Any]` | A trace of the journey of inputs through the pipeline kickoff. |
| **Crews Outputs** | `crews_outputs` | `List[CrewOutput]` | A list of `CrewOutput` objects, representing the outputs from each crew in the pipeline kickoff. |
### Pipeline Run Result Methods and Properties
@@ -112,7 +113,7 @@ The output of a pipeline in the crewAI framework is encapsulated within the `Pip
| :-------------- | :------------------------------------------------------------------------------------------------------- |
| **json** | Returns the JSON string representation of the run result if the output format of the final task is JSON. |
| **to_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
| \***\*str\*\*** | Returns the string representation of the run result, prioritizing Pydantic, then JSON, then raw. |
| **str** | Returns the string representation of the run result, prioritizing Pydantic, then JSON, then raw. |
### Accessing Pipeline Outputs
@@ -261,7 +262,7 @@ In this example, the router decides between an urgent pipeline and a normal pipe
### Error Handling and Validation
The Pipeline class includes validation mechanisms to ensure the robustness of the pipeline structure:
The `Pipeline` class includes validation mechanisms to ensure the robustness of the pipeline structure:
- Validates that stages contain only Crew instances or lists of Crew instances.
- Prevents double nesting of stages to maintain a clear structure.


@@ -43,7 +43,7 @@ my_crew = Crew(
### Example
When running the base case example, you will see something like the following output, which represents the output of the AgentPlanner responsible for creating the step-by-step logic to add to the Agents tasks.
When running the base case example, you will see something like the following output, which represents the output of the AgentPlanner responsible for creating the step-by-step logic to add to the Agents' tasks.
```
[2024-07-15 16:49:11][INFO]: Planning the crew execution
@@ -96,7 +96,7 @@ A list with 10 bullet points of the most relevant information about AI LLMs.
**Agent Goal:** Create detailed reports based on AI LLMs data analysis and research findings
**Task Expected Output:** A fully fledge report with the main topics, each with a full section of information. Formatted as markdown without '```'
**Task Expected Output:** A fully fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'
**Task Tools:** None specified
@@ -130,5 +130,4 @@ A list with 10 bullet points of the most relevant information about AI LLMs.
- Double-check formatting and make any necessary adjustments.
**Expected Output:**
A fully-fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'.
```
A fully fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'.


@@ -12,22 +12,22 @@ Tasks within crewAI can be collaborative, requiring multiple agents to work toge
## Task Attributes
| Attribute | Parameters | Description |
| :------------------------------- | :---------------- | :------------------------------------------------------------------------------------------------------------------- |
| **Description** | `description` | A clear, concise statement of what the task entails. |
| **Agent** | `agent` | The agent responsible for the task, assigned either directly or by the crew's process. |
| **Expected Output** | `expected_output` | A detailed description of what the task's completion looks like. |
| **Tools** _(optional)_ | `tools` | The functions or capabilities the agent can utilize to perform the task. Defaults to an empty list. |
| **Async Execution** _(optional)_ | `async_execution` | If set, the task executes asynchronously, allowing progression without waiting for completion. Defaults to False. |
| **Context** _(optional)_ | `context` | Specifies tasks whose outputs are used as context for this task. |
| **Config** _(optional)_ | `config` | Additional configuration details for the agent executing the task, allowing further customization. Defaults to None. |
| **Output JSON** _(optional)_ | `output_json` | Outputs a JSON object, requiring an OpenAI client. Only one output format can be set. |
| **Output Pydantic** _(optional)_ | `output_pydantic` | Outputs a Pydantic model object, requiring an OpenAI client. Only one output format can be set. |
| **Output File** _(optional)_ | `output_file` | Saves the task output to a file. If used with `Output JSON` or `Output Pydantic`, specifies how the output is saved. |
| **Output** _(optional)_ | `output` | An instance of `TaskOutput`, containing the raw, JSON, and Pydantic output plus additional details. |
| **Callback** _(optional)_ | `callback` | A callable that is executed with the task's output upon completion. |
| **Human Input** _(optional)_ | `human_input` | Indicates if the task requires human feedback at the end, useful for tasks needing human oversight. Defaults to False.|
| **Converter Class** _(optional)_ | `converter_cls` | A converter class used to export structured output. Defaults to None. |
| Attribute | Parameters | Type | Description |
| :------------------------------- | :---------------- | :---------------------------- | :------------------------------------------------------------------------------------------------------------------- |
| **Description** | `description` | `str` | A clear, concise statement of what the task entails. |
| **Agent** | `agent` | `Optional[BaseAgent]` | The agent responsible for the task, assigned either directly or by the crew's process. |
| **Expected Output** | `expected_output` | `str` | A detailed description of what the task's completion looks like. |
| **Tools** _(optional)_ | `tools` | `Optional[List[Any]]` | The functions or capabilities the agent can utilize to perform the task. Defaults to an empty list. |
| **Async Execution** _(optional)_ | `async_execution` | `Optional[bool]` | If set, the task executes asynchronously, allowing progression without waiting for completion. Defaults to False. |
| **Context** _(optional)_ | `context` | `Optional[List["Task"]]` | Specifies tasks whose outputs are used as context for this task. |
| **Config** _(optional)_ | `config` | `Optional[Dict[str, Any]]` | Additional configuration details for the agent executing the task, allowing further customization. Defaults to None. |
| **Output JSON** _(optional)_ | `output_json` | `Optional[Type[BaseModel]]` | Outputs a JSON object, requiring an OpenAI client. Only one output format can be set. |
| **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | Outputs a Pydantic model object, requiring an OpenAI client. Only one output format can be set. |
| **Output File** _(optional)_ | `output_file` | `Optional[str]` | Saves the task output to a file. If used with `Output JSON` or `Output Pydantic`, specifies how the output is saved. |
| **Output** _(optional)_ | `output` | `Optional[TaskOutput]` | An instance of `TaskOutput`, containing the raw, JSON, and Pydantic output plus additional details. |
| **Callback** _(optional)_ | `callback` | `Optional[Any]` | A callable that is executed with the task's output upon completion. |
| **Human Input** _(optional)_ | `human_input` | `Optional[bool]` | Indicates if the task should involve human review at the end, useful for tasks needing human oversight. Defaults to False.|
| **Converter Class** _(optional)_ | `converter_cls` | `Optional[Type[Converter]]` | A converter class used to export structured output. Defaults to None. |
## Creating a Task
@@ -49,28 +49,28 @@ Directly specify an `agent` for assignment or let the `hierarchical` CrewAI's pr
## Task Output
!!! note "Understanding Task Outputs"
The output of a task in the crewAI framework is encapsulated within the `TaskOutput` class. This class provides a structured way to access results of a task, including various formats such as raw strings, JSON, and Pydantic models.
The output of a task in the crewAI framework is encapsulated within the `TaskOutput` class. This class provides a structured way to access results of a task, including various formats such as raw output, JSON, and Pydantic models.
By default, the `TaskOutput` will only include the `raw` output. A `TaskOutput` will only include the `pydantic` or `json_dict` output if the original `Task` object was configured with `output_pydantic` or `output_json`, respectively.
### Task Output Attributes
| Attribute | Parameters | Type | Description |
| :---------------- | :-------------- | :------------------------- | :------------------------------------------------------------------------------------------------- |
| **Description** | `description` | `str` | A brief description of the task. |
| **Summary** | `summary` | `Optional[str]` | A short summary of the task, auto-generated from the first 10 words of the description. |
| **Description** | `description` | `str` | Description of the task. |
| **Summary** | `summary` | `Optional[str]` | Summary of the task, auto-generated from the first 10 words of the description. |
| **Raw** | `raw` | `str` | The raw output of the task. This is the default format for the output. |
| **Pydantic** | `pydantic` | `Optional[BaseModel]` | A Pydantic model object representing the structured output of the task. |
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the task. |
| **Agent** | `agent` | `str` | The agent that executed the task. |
| **Output Format** | `output_format` | `OutputFormat` | The format of the task output, with options including RAW, JSON, and Pydantic. The default is RAW. |
### Task Output Methods and Properties
### Task Methods and Properties
| Method/Property | Description |
| :-------------- | :------------------------------------------------------------------------------------------------ |
| **json** | Returns the JSON string representation of the task output if the output format is JSON. |
| **to_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
| \***\*str\*\*** | Returns the string representation of the task output, prioritizing Pydantic, then JSON, then raw. |
| **str** | Returns the string representation of the task output, prioritizing Pydantic, then JSON, then raw. |
### Accessing Task Outputs
@@ -234,7 +234,7 @@ def callback_function(output: TaskOutput):
print(f"""
Task completed!
Task: {output.description}
Output: {output.raw_output}
Output: {output.raw}
""")
research_task = Task(
@@ -275,7 +275,7 @@ result = crew.kickoff()
print(f"""
Task completed!
Task: {task1.output.description}
Output: {task1.output.raw_output}
Output: {task1.output.raw}
""")
```


@@ -9,7 +9,7 @@ Testing is a crucial part of the development process, and it is essential to ens
### Using the Testing Feature
We added the CLI command `crewai test` to make it easy to test your crew. This command will run your crew for a specified number of iterations and provide detailed performance metrics. The parameters are `n_iterations` and `model` which are optional and default to 2 and `gpt-4o-mini` respectively. For now, the only provider available is OpenAI.
We added the CLI command `crewai test` to make it easy to test your crew. This command will run your crew for a specified number of iterations and provide detailed performance metrics. The parameters are `n_iterations` and `model`, which are optional and default to 2 and `gpt-4o-mini` respectively. For now, the only provider available is OpenAI.
```bash
crewai test
@@ -21,20 +21,36 @@ If you want to run more iterations or use a different model, you can specify the
crewai test --n_iterations 5 --model gpt-4o
```
or using the short forms:
```bash
crewai test -n 5 -m gpt-4o
```
When you run the `crewai test` command, the crew will be executed for the specified number of iterations, and the performance metrics will be displayed at the end of the run.
A table of scores at the end will show the performance of the crew in terms of the following metrics:
```
                Task Scores
          (1-10 Higher is better)
┏━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━┓
┃ Tasks/Crew ┃ Run 1 ┃ Run 2 ┃ Avg. Total ┃
┡━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━┩
│ Task 1     │ 10.0  │ 9.0   │ 9.5        │
│ Task 2     │ 9.0   │ 9.0   │ 9.0        │
│ Crew       │ 9.5   │ 9.0   │ 9.2        │
└────────────┴───────┴───────┴────────────┘

                                   Tasks Scores
                              (1-10 Higher is better)
┏━━━━━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Tasks/Crew/Agents  ┃ Run 1 ┃ Run 2 ┃ Avg. Total ┃ Agents                         ┃
┡━━━━━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ Task 1             │ 9.0   │ 9.5   │ 9.2        │ - Professional Insights        │
│                    │       │       │            │   Researcher                   │
│                    │       │       │            │                                │
│ Task 2             │ 9.0   │ 10.0  │ 9.5        │ - Company Profile Investigator │
│                    │       │       │            │                                │
│ Task 3             │ 9.0   │ 9.0   │ 9.0        │ - Automation Insights          │
│                    │       │       │            │   Specialist                   │
│                    │       │       │            │                                │
│ Task 4             │ 9.0   │ 9.0   │ 9.0        │ - Final Report Compiler        │
│                    │       │       │            │ - Automation Insights          │
│                    │       │       │            │   Specialist                   │
│ Crew               │ 9.00  │ 9.38  │ 9.2        │                                │
│ Execution Time (s) │ 126   │ 145   │ 135        │                                │
└────────────────────┴───────┴───────┴────────────┴────────────────────────────────┘
```
The example above shows the test results for two runs of the crew and its tasks, with the average total score for each task and for the crew as a whole.


@@ -106,7 +106,7 @@ Here is a list of the available tools and their descriptions:
| **CodeInterpreterTool** | A tool for interpreting python code. |
| **ComposioTool** | Enables use of Composio tools. |
| **CSVSearchTool** | A RAG tool designed for searching within CSV files, tailored to handle structured data. |
| **DALL-E Tool** | A tool for generating images using the DALL-E API. |
| **DALL-E Tool** | A tool for generating images using the DALL-E API. |
| **DirectorySearchTool** | A RAG tool for searching within directories, useful for navigating through file systems. |
| **DOCXSearchTool** | A RAG tool aimed at searching within DOCX documents, ideal for processing Word files. |
| **DirectoryReadTool** | Facilitates reading and processing of directory structures and their contents. |
@@ -114,7 +114,7 @@ Here is a list of the available tools and their descriptions:
| **FileReadTool** | Enables reading and extracting data from files, supporting various file formats. |
| **FirecrawlSearchTool** | A tool to search webpages using Firecrawl and return the results. |
| **FirecrawlCrawlWebsiteTool** | A tool for crawling webpages using Firecrawl. |
| **FirecrawlScrapeWebsiteTool** | A tool for scraping webpages url using Firecrawl and returning its contents. |
| **FirecrawlScrapeWebsiteTool** | A tool for scraping a webpage URL using Firecrawl and returning its contents. |
| **GithubSearchTool** | A RAG tool for searching within GitHub repositories, useful for code and documentation search.|
| **SerperDevTool** | A specialized tool for development purposes, with specific functionalities under development. |
| **TXTSearchTool** | A RAG tool focused on searching within text (.txt) files, suitable for unstructured data. |
@@ -123,14 +123,14 @@ Here is a list of the available tools and their descriptions:
| **MDXSearchTool** | A RAG tool tailored for searching within Markdown (MDX) files, useful for documentation. |
| **PDFSearchTool** | A RAG tool aimed at searching within PDF documents, ideal for processing scanned documents. |
| **PGSearchTool** | A RAG tool optimized for searching within PostgreSQL databases, suitable for database queries. |
| **Vision Tool** | A tool for generating images using the DALL-E API. |
| **RagTool** | A general-purpose RAG tool capable of handling various data sources and types. |
| **ScrapeElementFromWebsiteTool** | Enables scraping specific elements from websites, useful for targeted data extraction. |
| **ScrapeWebsiteTool** | Facilitates scraping entire websites, ideal for comprehensive data collection. |
| **WebsiteSearchTool** | A RAG tool for searching website content, optimized for web data extraction. |
| **XMLSearchTool** | A RAG tool designed for searching within XML files, suitable for structured data formats. |
| **YoutubeChannelSearchTool**| A RAG tool for searching within YouTube channels, useful for video content analysis. |
| **YoutubeVideoSearchTool** | A RAG tool aimed at searching within YouTube videos, ideal for video data extraction. |
| **Vision Tool** | A tool for generating images using the DALL-E API. |
| **RagTool** | A general-purpose RAG tool capable of handling various data sources and types. |
| **ScrapeElementFromWebsiteTool** | Enables scraping specific elements from websites, useful for targeted data extraction. |
| **ScrapeWebsiteTool** | Facilitates scraping entire websites, ideal for comprehensive data collection. |
| **WebsiteSearchTool** | A RAG tool for searching website content, optimized for web data extraction. |
| **XMLSearchTool** | A RAG tool designed for searching within XML files, suitable for structured data formats. |
| **YoutubeChannelSearchTool**| A RAG tool for searching within YouTube channels, useful for video content analysis. |
| **YoutubeVideoSearchTool** | A RAG tool aimed at searching within YouTube videos, ideal for video data extraction. |
## Creating your own Tools
@@ -144,6 +144,7 @@ pip install 'crewai[tools]'
```
Once you do that, there are two main ways to create a crewAI tool:
### Subclassing `BaseTool`
```python


@@ -16,7 +16,7 @@ To use the training feature, follow these steps:
3. Run the following command:
```shell
crewai train -n <n_iterations> <filename>
crewai train -n <n_iterations> <filename> (optional)
```
!!! note "Replace `<n_iterations>` with the desired number of training iterations and `<filename>` with the appropriate filename ending with `.pkl`."


@@ -5,9 +5,10 @@ description: Learn how to integrate LangChain tools with CrewAI agents to enhanc
## Using LangChain Tools
!!! info "LangChain Integration"
CrewAI seamlessly integrates with LangChains comprehensive toolkit for search-based queries and more, here are the available built-in tools that are offered by Langchain [LangChain Toolkit](https://python.langchain.com/docs/integrations/tools/)
CrewAI seamlessly integrates with LangChain's comprehensive [list of tools](https://python.langchain.com/docs/integrations/tools/), all of which can be used with crewAI.
```python
import os
from crewai import Agent
from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper


@@ -35,10 +35,10 @@ query_tool = LlamaIndexTool.from_query_engine(
# Create and assign the tools to an agent
agent = Agent(
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[tool, *tools, query_tool]
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[tool, *tools, query_tool]
)
# rest of the code ...
@@ -54,4 +54,4 @@ To effectively use the LlamaIndexTool, follow these steps:
pip install 'crewai[tools]'
```
2. **Install and Use LlamaIndex**: Follow LlamaIndex documentation [LlamaIndex Documentation](https://docs.llamaindex.ai/) to set up a RAG/agent pipeline.
2. **Install and Use LlamaIndex**: Follow the LlamaIndex documentation [LlamaIndex Documentation](https://docs.llamaindex.ai/) to set up a RAG/agent pipeline.

Two binary image files changed (previews not shown): 94 KiB → 14 KiB and 97 KiB → 14 KiB.


@@ -71,25 +71,59 @@ To customize your pipeline project, you can:
3. Modify `src/<project_name>/main.py` to set up and run your pipelines.
4. Add your environment variables into the `.env` file.
### Example: Defining a Pipeline
## Example 1: Defining a Two-Stage Sequential Pipeline
Here's an example of how to define a pipeline in `src/<project_name>/pipelines/normal_pipeline.py`:
Here's an example of how to define a pipeline with sequential stages in `src/<project_name>/pipelines/pipeline.py`:
```python
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.normal_crew import NormalCrew
from ..crews.research_crew.research_crew import ResearchCrew
from ..crews.write_x_crew.write_x_crew import WriteXCrew
@PipelineBase
class NormalPipeline:
class SequentialPipeline:
def __init__(self):
# Initialize crews
self.normal_crew = NormalCrew().crew()
self.research_crew = ResearchCrew().crew()
self.write_x_crew = WriteXCrew().crew()
def create_pipeline(self):
return Pipeline(
stages=[
self.normal_crew
self.research_crew,
self.write_x_crew
]
)
async def kickoff(self, inputs):
pipeline = self.create_pipeline()
results = await pipeline.kickoff(inputs)
return results
```
## Example 2: Defining a Two-Stage Pipeline with Parallel Execution
```python
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.research_crew.research_crew import ResearchCrew
from ..crews.write_x_crew.write_x_crew import WriteXCrew
from ..crews.write_linkedin_crew.write_linkedin_crew import WriteLinkedInCrew
@PipelineBase
class ParallelExecutionPipeline:
def __init__(self):
# Initialize crews
self.research_crew = ResearchCrew().crew()
self.write_x_crew = WriteXCrew().crew()
self.write_linkedin_crew = WriteLinkedInCrew().crew()
def create_pipeline(self):
return Pipeline(
stages=[
self.research_crew,
[self.write_x_crew, self.write_linkedin_crew] # Parallel execution
]
)
@@ -126,4 +160,4 @@ This will initialize your pipeline and begin task execution as defined in your `
Pipelines can be deployed in the same way as regular CrewAI projects. The easiest way is through [CrewAI+](https://www.crewai.com/crewaiplus), where you can deploy your pipeline in a few clicks.
Remember, when working with pipelines, you're orchestrating multiple crews to work together in a sequence or parallel fashion. This allows for more complex workflows and information processing tasks.
Remember, when working with pipelines, you're orchestrating multiple crews to work together in a sequence or parallel fashion. This allows for more complex workflows and information processing tasks.


@@ -1,5 +1,7 @@
---
title: Starting a New CrewAI Project - Using Template
description: A comprehensive guide to starting a new CrewAI project, including the latest updates and project setup methods.
---
@@ -21,6 +23,7 @@ $ pip install 'crewai[tools]'
```
## Creating a New Project
In this example, we will be using poetry as our virtual environment manager.
To create a new CrewAI project, run the following CLI command:
@@ -95,10 +98,13 @@ research_candidates_task:
```
### Referencing Variables:
Your defined functions with the same name will be used. For example, you can reference the agent for specific tasks from task.yaml file. Ensure your annotated agent and function name is the same otherwise your task won't recognize the reference properly.
Your defined functions with the same name will be used. For example, you can reference the agent for specific tasks from the `tasks.yaml` file. Ensure your annotated agent and function name are the same; otherwise, your task won't recognize the reference properly.
#### Example References
agent.yaml
`agents.yaml`
```yaml
email_summarizer:
role: >
@@ -110,7 +116,8 @@ email_summarizer:
llm: mixtal_llm
```
task.yaml
`tasks.yaml`
```yaml
email_summarizer_task:
description: >
@@ -123,37 +130,34 @@ email_summarizer_task:
- research_task
```
Use the annotations to properly reference the agent and task in the crew.py file.
Use the annotations to properly reference the agent and task in the `crew.py` file.
### Annotations include:
* [@agent](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L17)
* [@task](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L4)
* [@crew](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L69)
* [@llm](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L23)
* [@tool](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L39)
* [@callback](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L44)
* [@output_json](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L29)
* [@output_pydantic](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L34)
* [@cache_handler](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L49)
crew.py
```py
* `@agent`
* `@task`
* `@crew`
* `@tool`
* `@callback`
* `@output_json`
* `@output_pydantic`
* `@cache_handler`
`crew.py`
```python
# ...
@llm
def mixtal_llm(self):
return ChatGroq(temperature=0, model_name="mixtral-8x7b-32768")
@agent
def email_summarizer(self) -> Agent:
return Agent(
config=self.agents_config["email_summarizer"],
)
@agent
def email_summarizer(self) -> Agent:
return Agent(
config=self.agents_config["email_summarizer"],
)
## ...other tasks defined
@task
def email_summarizer_task(self) -> Task:
return Task(
config=self.tasks_config["email_summarizer_task"],
)
@task
def email_summarizer_task(self) -> Task:
return Task(
config=self.tasks_config["email_summarizer_task"],
)
# ...
```
@@ -172,7 +176,7 @@ This will install the dependencies specified in the `pyproject.toml` file.
Any variable interpolated in your `agents.yaml` and `tasks.yaml` files like `{variable}` will be replaced by the value of the variable in the `main.py` file.
#### agents.yaml
#### tasks.yaml
```yaml
research_task:
@@ -204,6 +208,7 @@ To run your project, use the following command:
```shell
$ crewai run
```
This will initialize your crew of AI agents and begin task execution as defined in your configuration in the `main.py` file.
### Replay Tasks from Latest Crew Kickoff


@@ -19,7 +19,7 @@ from crewai.task import Task
from crewai_tools import SerperDevTool
# Define a condition function for the conditional task
# if false task will be skipped, true, then execute task
# If false, the task will be skipped; if true, the task will execute.
def is_data_missing(output: TaskOutput) -> bool:
return len(output.pydantic.events) < 10 # this will skip this task
@@ -29,21 +29,21 @@ data_fetcher_agent = Agent(
goal="Fetch data online using Serper tool",
backstory="Backstory 1",
verbose=True,
tools=[SerperDevTool()],
tools=[SerperDevTool()]
)
data_processor_agent = Agent(
role="Data Processor",
goal="Process fetched data",
backstory="Backstory 2",
verbose=True,
verbose=True
)
summary_generator_agent = Agent(
role="Summary Generator",
goal="Generate summary from fetched data",
backstory="Backstory 3",
verbose=True,
verbose=True
)
class EventOutput(BaseModel):
@@ -69,7 +69,7 @@ conditional_task = ConditionalTask(
task3 = Task(
description="Generate summary of events in San Francisco from fetched data",
expected_output="summary_generated",
expected_output="A complete report on the customer and their customers and competitors, including their demographics, preferences, market positioning and audience engagement.",
agent=summary_generator_agent,
)
@@ -78,7 +78,7 @@ crew = Crew(
agents=[data_fetcher_agent, data_processor_agent, summary_generator_agent],
tasks=[task1, conditional_task, task3],
verbose=True,
planning=True # Enable planning feature
planning=True
)
# Run the crew


@@ -91,4 +91,4 @@ Custom prompt files should be structured in JSON format and include all necessar
- **Improved Usability**: Supports multiple languages, making it suitable for global projects.
- **Consistency**: Ensures uniform prompt structures across different agents and tasks.
By incorporating these updates, CrewAI provides users with the ability to fully customize and internationalize their agent prompts, making the platform more versatile and user-friendly.
By incorporating these updates, CrewAI provides users with the ability to fully customize and internationalize their agent prompts, making the platform more versatile and user-friendly.


@@ -14,12 +14,15 @@ Crafting an efficient CrewAI team hinges on the ability to dynamically tailor yo
- **Cache** *(Optional)*: Determines whether the agent should use a cache for tool usage.
- **Max RPM**: Sets the maximum number of requests per minute (`max_rpm`). This attribute is optional and can be set to `None` for no limit, allowing for unlimited queries to external services if needed.
- **Verbose** *(Optional)*: Enables detailed logging of an agent's actions, useful for debugging and optimization. Specifically, it provides insights into agent execution processes, aiding in the optimization of performance.
- **Allow Delegation** *(Optional)*: `allow_delegation` controls whether the agent is allowed to delegate tasks to other agents.
- **Max Iter** *(Optional)*: The `max_iter` attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions. The default value is set to 25, providing a balance between thoroughness and efficiency. Once the agent approaches this number, it will try its best to give a good answer.
- **Allow Delegation** *(Optional)*: `allow_delegation` controls whether the agent is allowed to delegate tasks to other agents. This attribute is now set to `False` by default.
- **Max Iter** *(Optional)*: The `max_iter` attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions. The default value is set to 25, providing a balance between thoroughness and efficiency.
- **Max Execution Time** *(Optional)*: `max_execution_time` Sets the maximum execution time for an agent to complete a task.
- **System Template** *(Optional)*: `system_template` defines the system format for the agent.
- **Prompt Template** *(Optional)*: `prompt_template` defines the prompt format for the agent.
- **Response Template** *(Optional)*: `response_template` defines the response format for the agent.
- **Use System Prompt** *(Optional)*: `use_system_prompt` controls whether the agent will use a system prompt for task execution. Agents can now operate without system prompts.
- **Respect Context Window**: `respect_context_window` renames the sliding context window attribute and enables it by default to maintain context size.
- **Max Retry Limit**: `max_retry_limit` defines the maximum number of retries for an agent to execute a task when an error occurs.
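As a concrete illustration of the newer attributes above, a hedged sketch follows; the role, goal, and backstory strings are placeholders, and the flag values demonstrate the options rather than recommend settings.
```python
from crewai import Agent

analyst = Agent(
    role="Research Analyst",
    goal="Summarize findings accurately",
    backstory="A meticulous analyst.",
    use_system_prompt=False,      # run without a system prompt (e.g., for models that reject one)
    respect_context_window=True,  # summarize history instead of overflowing the window (default)
    max_retry_limit=2,            # retries allowed when task execution errors out
)
```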
## Advanced Customization Options
Beyond the basic attributes, CrewAI allows for deeper customization to enhance an agent's behavior and capabilities significantly.
@@ -67,12 +70,11 @@ agent = Agent(
verbose=True,
max_rpm=None, # No limit on requests per minute
max_iter=25, # Default value for maximum iterations
allow_delegation=False
)
```
## Delegation and Autonomy
Controlling an agent's ability to delegate tasks or ask questions is vital for tailoring its autonomy and collaborative dynamics within the CrewAI framework. By default, the `allow_delegation` attribute is set to `True`, enabling agents to seek assistance or delegate tasks as needed. This default behavior promotes collaborative problem-solving and efficiency within the CrewAI ecosystem. If needed, delegation can be disabled to suit specific operational requirements.
Controlling an agent's ability to delegate tasks or ask questions is vital for tailoring its autonomy and collaborative dynamics within the CrewAI framework. By default, the `allow_delegation` attribute is now set to `False`, preventing agents from seeking assistance or delegating tasks. This default can be changed to promote collaborative problem-solving and efficiency within the CrewAI ecosystem; if needed, delegation can be enabled to suit specific operational requirements.
### Example: Enabling Delegation for an Agent
```python
@@ -80,7 +82,7 @@ agent = Agent(
role='Content Writer',
goal='Write engaging content on market trends',
backstory='A seasoned writer with expertise in market analysis.',
allow_delegation=False # Disabling delegation
allow_delegation=True # Enabling delegation
)
```


@@ -1,27 +1,31 @@
---
title: Forcing Tool Output as Result
description: Learn how to force tool output as the result in of an Agent's task in CrewAI.
description: Learn how to force tool output as the result in an Agent's task in CrewAI.
---
## Introduction
In CrewAI, you can force the output of a tool as the result of an agent's task. This feature is useful when you want to ensure that the tool output is captured and returned as the task result, and avoid the agent modifying the output during the task execution.
In CrewAI, you can force the output of a tool as the result of an agent's task. This feature is useful when you want to ensure that the tool output is captured and returned as the task result, avoiding any agent modification during the task execution.
## Forcing Tool Output as Result
To force the tool output as the result of an agent's task, you can set the `result_as_answer` parameter to `True` when creating the agent. This parameter ensures that the tool output is captured and returned as the task result, without any modifications by the agent.
To force the tool output as the result of an agent's task, you need to set the `result_as_answer` parameter to `True` when adding a tool to the agent. This parameter ensures that the tool output is captured and returned as the task result, without any modifications by the agent.
Here's an example of how to force the tool output as the result of an agent's task:
```python
# ...
from crewai.agent import Agent
from my_tool import MyCustomTool
# Define a custom tool that returns the result as the answer
# Create a coding agent with the custom tool
coding_agent = Agent(
role="Data Scientist",
goal="Produce amazing reports on AI",
backstory="You work with data and AI",
tools=[MyCustomTool(result_as_answer=True)],
)
# Assuming the tool's execution and result population occur within the system
task_result = coding_agent.execute_task(task)
```
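The example above imports `MyCustomTool` without defining it. For completeness, here is a minimal sketch of what such a tool could look like, assuming the `BaseTool` base class from the `crewai_tools` package; the tool name, description, and body are illustrative only.
```python
from crewai_tools import BaseTool

class MyCustomTool(BaseTool):
    name: str = "Report fetcher"
    description: str = "Returns the final report verbatim."

    def _run(self) -> str:
        # Whatever this returns becomes the task result when the tool
        # is attached with result_as_answer=True, bypassing agent edits.
        return "Full report content."
```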
## Workflow in Action


@@ -16,6 +16,13 @@ By default, tasks in CrewAI are managed through a sequential process. However, a
- **Task Delegation**: A manager agent allocates tasks among crew members based on their roles and capabilities.
- **Result Validation**: The manager evaluates outcomes to ensure they meet the required standards.
- **Efficient Workflow**: Emulates corporate structures, providing an organized approach to task management.
- **System Prompt Handling**: Optionally specify whether the system should use predefined prompts.
- **Stop Words Control**: Optionally specify whether stop words should be used, supporting various models including the o1 models.
- **Context Window Respect**: Keep prompts within the model's context window to prioritize important context; this is now the default behavior.
- **Delegation Control**: Delegation is now disabled by default to give users explicit control.
- **Max Requests Per Minute**: Configurable option to set the maximum number of requests per minute.
- **Max Iterations**: Limit the maximum number of iterations for obtaining a final answer.
## Implementing the Hierarchical Process
To utilize the hierarchical process, it's essential to explicitly set the process attribute to `Process.hierarchical`, as the default behavior is `Process.sequential`. Define a crew with a designated manager and establish a clear chain of command.
@@ -38,6 +45,9 @@ researcher = Agent(
cache=True,
verbose=False,
# tools=[] # This can be optionally specified; defaults to an empty list
use_system_prompt=True, # Enable or disable system prompts for this agent
max_rpm=30, # Limit on the number of requests per minute
max_iter=5 # Maximum number of iterations for a final answer
)
writer = Agent(
role='Writer',
@@ -46,6 +56,9 @@ writer = Agent(
cache=True,
verbose=False,
# tools=[] # Optionally specify tools; defaults to an empty list
use_system_prompt=True, # Enable or disable system prompts for this agent
max_rpm=30, # Limit on the number of requests per minute
max_iter=5 # Maximum number of iterations for a final answer
)
# Establishing the crew with a hierarchical process and additional configurations
@@ -54,6 +67,7 @@ project_crew = Crew(
agents=[researcher, writer],
manager_llm=ChatOpenAI(temperature=0, model="gpt-4"), # Mandatory if manager_agent is not set
process=Process.hierarchical, # Specifies the hierarchical management approach
respect_context_window=True, # Enable respect of the context window for tasks
memory=True, # Enable memory usage for enhanced task execution
manager_agent=None, # Optional: explicitly set a specific agent as manager instead of the manager_llm
planning=True, # Enable planning feature for pre-execution strategy


@@ -74,7 +74,8 @@ task2 = Task(
"Aim for a narrative that captures the essence of these breakthroughs and their implications for the future."
),
expected_output='A compelling 3 paragraphs blog post formatted as markdown about the latest AI advancements in 2024',
agent=writer
agent=writer,
human_input=True
)
# Instantiate your crew with a sequential process


@@ -72,7 +72,7 @@ asyncio.run(async_crew_execution())
## Example: Multiple Asynchronous Crew Executions
In this example, we'll show how to kickoff multiple crews asynchronously and wait for all of them to complete using asyncio.gather():
In this example, we'll show how to kick off multiple crews asynchronously and wait for all of them to complete using `asyncio.gather()`:
```python
import asyncio
@@ -114,4 +114,4 @@ async def async_multiple_crews():
# Run the async function
asyncio.run(async_multiple_crews())
```
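A sketch of the pattern just described, using `kickoff_async` together with `asyncio.gather()`; the helper, agents, and tasks below are placeholders invented for illustration.
```python
import asyncio

from crewai import Agent, Crew, Task

def make_crew(topic: str) -> Crew:
    # Minimal single-agent crew; only the structure matters here.
    agent = Agent(role="Researcher", goal=f"Research {topic}", backstory="Curious.")
    task = Task(
        description=f"Write one paragraph about {topic}.",
        expected_output="One paragraph.",
        agent=agent,
    )
    return Crew(agents=[agent], tasks=[task])

async def async_multiple_crews():
    crews = [make_crew("AI"), make_crew("robotics")]
    # kickoff_async starts each crew without blocking; gather awaits them all.
    results = await asyncio.gather(*(crew.kickoff_async() for crew in crews))
    for result in results:
        print(result)

asyncio.run(async_multiple_crews())
```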


@@ -25,13 +25,17 @@ coding_agent = Agent(
# Create a task that requires code execution
data_analysis_task = Task(
description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
agent=coding_agent
agent=coding_agent,
expected_output="The average age calculated from the dataset"
)
# Create a crew and add the task
analysis_crew = Crew(
agents=[coding_agent],
tasks=[data_analysis_task]
tasks=[data_analysis_task],
verbose=True,
memory=False,
respect_context_window=True # enabled by default
)
datasets = [
@@ -42,4 +46,4 @@ datasets = [
# Execute the crew
result = analysis_crew.kickoff_for_each(inputs=datasets)
```
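The `datasets` list is cut off by the hunk above; its shape is a list of dicts whose keys fill the placeholders in the task description (here `{ages}`). A hedged sketch that reuses the `analysis_crew` defined in the example, with invented values:
```python
# Each dict fills the {ages} placeholder of data_analysis_task.
datasets = [
    {"ages": [25, 30, 35, 40, 45]},
    {"ages": [20, 25, 30, 35]},
]

# kickoff_for_each runs the crew once per input and returns one result per run.
results = analysis_crew.kickoff_for_each(inputs=datasets)
for result in results:
    print(result)
```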


@@ -1,196 +1,163 @@
---
title: Connect CrewAI to LLMs
description: Comprehensive guide on integrating CrewAI with various Large Language Models (LLMs), including detailed class attributes, methods, and configuration options.
description: Comprehensive guide on integrating CrewAI with various Large Language Models (LLMs) using LiteLLM, including supported providers and configuration options.
---
## Connect CrewAI to LLMs
CrewAI uses LiteLLM to connect to a wide variety of Language Models (LLMs). This integration provides extensive versatility, allowing you to use models from numerous providers with a simple, unified interface.
!!! note "Default LLM"
By default, CrewAI uses OpenAI's GPT-4o model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4o") for language processing. You can configure your agents to use a different model or API as described in this guide.
By default, CrewAI uses OpenAI's GPT-4 model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4") for language processing. You can configure your agents to use a different model or API as described in this guide.
By default, CrewAI uses the `gpt-4o-mini` model. This is determined by the `OPENAI_MODEL_NAME` environment variable, which defaults to "gpt-4o-mini" if not set. You can easily configure your agents to use a different model or provider as described in this guide.
CrewAI provides extensive versatility in integrating with various Language Models (LLMs), ranging from local options through Ollama, such as Llama and Mixtral, to cloud-based solutions like Azure. Its compatibility extends to all [LangChain LLM components](https://python.langchain.com/v0.2/docs/integrations/llms/), offering a wide range of integration possibilities for customized AI applications.
## Supported Providers
The platform supports connections to an array of Generative AI models, including:
LiteLLM supports a wide range of providers, including but not limited to:
- OpenAI's suite of advanced language models
- Anthropic's cutting-edge AI offerings
- Ollama's diverse range of locally-hosted generative models & embeddings
- LM Studio's diverse range of locally hosted generative models & embeddings
- Groq's Super Fast LLM offerings
- Azure's generative AI offerings
- HuggingFace's generative AI offerings
- OpenAI
- Anthropic
- Google (Vertex AI, Gemini)
- Azure OpenAI
- AWS (Bedrock, SageMaker)
- Cohere
- Hugging Face
- Ollama
- Mistral AI
- Replicate
- Together AI
- AI21
- Cloudflare Workers AI
- DeepInfra
- Groq
- And many more!
This broad spectrum of LLM options enables users to select the most suitable model for their specific needs, whether prioritizing local deployment, specialized capabilities, or cloud-based scalability.
For a complete and up-to-date list of supported providers, please refer to the [LiteLLM Providers documentation](https://docs.litellm.ai/docs/providers).
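Because LiteLLM routes requests by provider-prefixed model strings, switching providers is often just a change to the `llm` value. A sketch; the model identifiers are examples only, and each requires the matching API key (e.g., `ANTHROPIC_API_KEY`, `GROQ_API_KEY`) in your environment.
```python
from crewai import Agent

claude_agent = Agent(
    role="Analyst",
    goal="Analyze data",
    backstory="Careful and precise.",
    llm="anthropic/claude-3-5-sonnet-20240620",  # routed to Anthropic by LiteLLM
)
groq_agent = Agent(
    role="Summarizer",
    goal="Summarize quickly",
    backstory="Fast and terse.",
    llm="groq/llama3-8b-8192",  # routed to Groq by LiteLLM
)
```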
## Changing the LLM
To use a different LLM with your CrewAI agents, you have several options:
### 1. Using a String Identifier
Pass the model name as a string when initializing the agent:
## Changing the default LLM
The default LLM is provided through the `langchain-openai` package, which is installed by default when you install CrewAI. You can change this default LLM to a different model or API by setting the `OPENAI_MODEL_NAME` environment variable. This straightforward process lets you harness different OpenAI models, enhancing the flexibility and capabilities of your CrewAI implementation.
```python
# Required
os.environ["OPENAI_MODEL_NAME"]="gpt-4-0125-preview"
from crewai import Agent
# Agent will automatically use the model defined in the environment variable
example_agent = Agent(
role='Local Expert',
goal='Provide insights about the city',
backstory="A knowledgeable local guide.",
verbose=True
)
# Using OpenAI's GPT-4
openai_agent = Agent(
role='OpenAI Expert',
goal='Provide insights using GPT-4',
backstory="An AI assistant powered by OpenAI's latest model.",
llm='gpt-4'
)
# Using Anthropic's Claude
claude_agent = Agent(
role='Anthropic Expert',
goal='Analyze data using Claude',
backstory="An AI assistant leveraging Anthropic's language model.",
llm='claude-2'
)
```
## Ollama Local Integration
Ollama is preferred for local LLM integration, offering customization and privacy benefits. To integrate Ollama with CrewAI, you will need the `langchain-ollama` package. You can then set the following environment variables to connect to your Ollama instance running locally on port 11434.
### 2. Using the LLM Class
```python
os.environ["OPENAI_API_BASE"] = 'http://localhost:11434'
os.environ["OPENAI_MODEL_NAME"] = 'llama2'  # Adjust based on available model
os.environ["OPENAI_API_KEY"] = ''  # No API key required for Ollama
```
For more detailed configuration, use the LLM class:
## Ollama Integration Step by Step (e.g., using Llama 3.1 8B locally)
1. [Download and install Ollama](https://ollama.com/download).
2. After setting up Ollama, pull the Llama 3.1 8B model by typing the following command into your terminal: `ollama run llama3.1`.
3. Llama 3.1 should now be served locally at `http://localhost:11434`.
```python
from crewai import Agent, Task, Crew
from langchain_ollama import ChatOllama
import os
os.environ["OPENAI_API_KEY"] = "NA"
llm = ChatOllama(
    model="llama3.1",
    base_url="http://localhost:11434"
)
general_agent = Agent(
    role="Math Professor",
    goal="Provide the solution to the students that are asking mathematical questions and give them the answer.",
    backstory="You are an excellent math professor that likes to solve math questions in a way that everyone can understand your solution",
    allow_delegation=False,
    verbose=True,
    llm=llm
)
task = Task(
    description="what is 3 + 5",
    agent=general_agent,
    expected_output="A numerical answer."
)
crew = Crew(
agents=[general_agent],
tasks=[task],
verbose=True
)
result = crew.kickoff()
print(result)
```
## HuggingFace Integration
There are a couple of different ways you can use HuggingFace to host your LLM.
### Your own HuggingFace endpoint
```python
from langchain_huggingface import HuggingFaceEndpoint
llm = HuggingFaceEndpoint(
    repo_id="microsoft/Phi-3-mini-4k-instruct",
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)
agent = Agent(
    role="HuggingFace Agent",
    goal="Generate text using HuggingFace",
    backstory="A diligent explorer of GitHub docs.",
    llm=llm
)
```
```python
from crewai import Agent, LLM
llm = LLM(
    model="gpt-4",
    temperature=0.7,
    base_url="https://api.openai.com/v1",
    api_key="your-api-key-here"
)
agent = Agent(
    role='Customized LLM Expert',
    goal='Provide tailored responses',
    backstory="An AI assistant with custom LLM settings.",
    llm=llm
)
```
## OpenAI Compatible API Endpoints
Switch between APIs and models seamlessly using environment variables, supporting platforms like FastChat, LM Studio, Groq, and Mistral AI.
### Configuration Examples
#### FastChat
```python
os.environ["OPENAI_API_BASE"] = 'http://localhost:8001/v1'
os.environ["OPENAI_MODEL_NAME"] = 'oh-2.5m7b-q51'
os.environ["OPENAI_API_KEY"] = 'NA'
```
#### LM Studio
Launch [LM Studio](https://lmstudio.ai) and go to the Server tab. Then select a model from the dropdown menu and wait for it to load. Once it's loaded, click the green Start Server button and use the URL, port, and API key that's shown (you can modify them). Below is an example of the default settings as of LM Studio 0.2.19:
```python
os.environ["OPENAI_API_BASE"] = 'http://localhost:1234/v1'
os.environ["OPENAI_API_KEY"] = 'lm-studio'
```
#### Groq API
```python
os.environ["OPENAI_API_KEY"] = 'your-groq-api-key'
os.environ["OPENAI_MODEL_NAME"] = 'llama3-8b-8192'
os.environ["OPENAI_API_BASE"] = 'https://api.groq.com/openai/v1'
```
#### Mistral API
```python
os.environ["OPENAI_API_KEY"] = 'your-mistral-api-key'
os.environ["OPENAI_API_BASE"] = 'https://api.mistral.ai/v1'
os.environ["OPENAI_MODEL_NAME"] = 'mistral-small'
```
## Configuration Options
When configuring an LLM for your agent, you have access to a wide range of parameters:
| Parameter | Type | Description |
|-----------|------|-------------|
| `model` | str | The name of the model to use (e.g., "gpt-4", "claude-2") |
| `temperature` | float | Controls randomness in output (0.0 to 1.0) |
| `max_tokens` | int | Maximum number of tokens to generate |
| `top_p` | float | Controls diversity of output (0.0 to 1.0) |
| `frequency_penalty` | float | Penalizes new tokens based on their frequency in the text so far |
| `presence_penalty` | float | Penalizes new tokens based on their presence in the text so far |
| `stop` | str, List[str] | Sequence(s) to stop generation |
| `base_url` | str | The base URL for the API endpoint |
| `api_key` | str | Your API key for authentication |
For a complete list of parameters and their descriptions, refer to the LLM class documentation.
## Connecting to OpenAI-Compatible LLMs
You can connect to OpenAI-compatible LLMs using either environment variables or by setting specific attributes on the LLM class:
### Using Environment Variables
```python
import os
os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
os.environ["OPENAI_MODEL_NAME"] = "your-model-name"
```
### Solar
```python
from langchain_community.chat_models.solar import SolarChat
os.environ["SOLAR_API_BASE"] = "https://api.upstage.ai/v1/solar"
os.environ["SOLAR_API_KEY"] = "your-solar-api-key"
# Free developer API key available here: https://console.upstage.ai/services/solar
# Langchain Example: https://github.com/langchain-ai/langchain/pull/18556
```
### Cohere
```python
from langchain_cohere import ChatCohere
# Initialize language model
os.environ["COHERE_API_KEY"] = 'your-cohere-api-key'
llm = ChatCohere()
# Free developer API key available here: https://cohere.com/
# Langchain Documentation: https://python.langchain.com/docs/integrations/chat/cohere
```
### Azure Open AI Configuration
For Azure OpenAI API integration, set the following environment variables:
```python
os.environ["AZURE_OPENAI_DEPLOYMENT"] = 'Your deployment'
os.environ["OPENAI_API_VERSION"] = '2023-12-01-preview'
os.environ["AZURE_OPENAI_ENDPOINT"] = 'Your Endpoint'
os.environ["AZURE_OPENAI_API_KEY"] = 'Your API Key'
```
### Example Agent with Azure LLM
```python
import os
from dotenv import load_dotenv
from crewai import Agent
from langchain_openai import AzureChatOpenAI
load_dotenv()
azure_llm = AzureChatOpenAI(
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    api_key=os.environ.get("AZURE_OPENAI_API_KEY")
)
azure_agent = Agent(
    role='Example Agent',
    goal='Demonstrate custom LLM configuration',
    backstory='A diligent explorer of GitHub docs.',
    llm=azure_llm
)
```
### Using LLM Class Attributes
```python
llm = LLM(
    model="custom-model-name",
    api_key="your-api-key",
    base_url="https://api.your-provider.com/v1"
)
agent = Agent(llm=llm, ...)
```
## Using Local Models with Ollama
For local models like those provided by Ollama:
1. [Download and install Ollama](https://ollama.com/download)
2. Pull the desired model (e.g., `ollama pull llama2`)
3. Configure your agent:
```python
agent = Agent(
    role='Local AI Expert',
    goal='Process information using a local model',
    backstory="An AI assistant running on local hardware.",
    llm=LLM(model="ollama/llama2", base_url="http://localhost:11434")
)
```
## Changing the Base API URL
You can change the base API URL for any LLM provider by setting the `base_url` parameter:
```python
llm = LLM(
model="custom-model-name",
base_url="https://api.your-provider.com/v1",
api_key="your-api-key"
)
agent = Agent(llm=llm, ...)
```
This is particularly useful when working with OpenAI-compatible APIs or when you need to specify a different endpoint for your chosen provider.
## Conclusion
Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms.
By leveraging LiteLLM, CrewAI offers seamless integration with a vast array of LLMs. This flexibility allows you to choose the most suitable model for your specific needs, whether you prioritize performance, cost-efficiency, or local deployment. Remember to consult the [LiteLLM documentation](https://docs.litellm.ai/docs/) for the most up-to-date information on supported models and configuration options.


@@ -1,6 +1,7 @@
---
title: Replay Tasks from Latest Crew Kickoff
description: Replay tasks from the latest crew.kickoff(...)
---
## Introduction
@@ -16,22 +17,24 @@ To use the replay feature, follow these steps:
1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following command:
3. Run the following commands:
To view the task_ids of the latest kickoff, use:
```shell
crewai log-tasks-outputs
```
Once you have your task_id to replay from use:
Once you have your `task_id` to replay, use:
```shell
crewai replay -t <task_id>
```
**Note:** Ensure `crewai` is installed and configured correctly in your development environment.
### Replaying from a Task Programmatically
To replay from a task programmatically, use the following steps:
1. Specify the task_id and input parameters for the replay process.
1. Specify the `task_id` and input parameters for the replay process.
2. Execute the replay command within a try-except block to handle potential errors.
```python
@@ -49,4 +52,7 @@ To replay from a task programmatically, use the following steps:
except Exception as e:
raise Exception(f"An unexpected error occurred: {e}")
```
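Putting both steps together, a hedged sketch of the programmatic form; `my_project` and `ResearchCrew` are hypothetical names, and the `task_id` must come from `crewai log-tasks-outputs`.
```python
from my_project.crew import ResearchCrew  # hypothetical project module

task_id = "example-task-id"  # obtained from `crewai log-tasks-outputs`
inputs = {"topic": "CrewAI replay"}  # optional inputs for the replayed run

try:
    # replay() reruns the crew starting from the given task.
    result = ResearchCrew().crew().replay(task_id=task_id, inputs=inputs)
    print(result)
except Exception as e:
    raise Exception(f"An unexpected error occurred: {e}")
```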
## Conclusion
With the above enhancements and detailed functionality, replaying specific tasks in CrewAI has been made more efficient and robust. Ensure you follow the commands and steps precisely to make the most of these features.


@@ -52,14 +52,17 @@ report_crew = Crew(
# Execute the crew
result = report_crew.kickoff()
# Accessing the type safe output
# Accessing the type-safe output
task_output: TaskOutput = result.tasks[0].output
crew_output: CrewOutput = result.output
```
### Note:
Each task in a sequential process **must** have an agent assigned. Ensure that every `Task` includes an `agent` parameter.
### Workflow in Action
1. **Initial Task**: In a sequential process, the first agent completes their task and signals completion.
2. **Subsequent Tasks**: Agents pick up their tasks based on the process type, with outcomes of preceding tasks or manager directives guiding their execution.
2. **Subsequent Tasks**: Agents pick up their tasks based on the process type, with outcomes of preceding tasks or directives guiding their execution.
3. **Completion**: The process concludes once the final task is executed, leading to project completion.
## Advanced Features
@@ -87,4 +90,6 @@ CrewAI tracks token usage across all tasks and agents. You can access these metr
1. **Order Matters**: Arrange tasks in a logical sequence where each task builds upon the previous one.
2. **Clear Task Descriptions**: Provide detailed descriptions for each task to guide the agents effectively.
3. **Appropriate Agent Selection**: Match agents' skills and roles to the requirements of each task.
4. **Use Context**: Leverage the context from previous tasks to inform subsequent ones (see the sketch below).
This updated documentation ensures that details accurately reflect the latest changes in the codebase and clearly describes how to leverage new features and configurations. The content is kept simple and direct to ensure easy understanding.
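On point 4, context is passed explicitly between tasks; a minimal sketch with placeholder agents and descriptions:
```python
from crewai import Agent, Task

researcher = Agent(role="Researcher", goal="Gather facts", backstory="Thorough.")
writer = Agent(role="Writer", goal="Write the summary", backstory="Concise.")

research_task = Task(
    description="Collect three recent facts about CrewAI.",
    expected_output="Three bullet points.",
    agent=researcher,
)
summary_task = Task(
    description="Summarize the research into one paragraph.",
    expected_output="One paragraph.",
    agent=writer,
    context=[research_task],  # the output of research_task feeds this task
)
```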


@@ -53,6 +53,16 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
Crews
</a>
</li>
<li>
<a href="./core-concepts/LLMs">
LLMs
</a>
</li>
<!-- <li>
<a href="./core-concepts/Flows">
Flows
</a>
</li> -->
<li>
<a href="./core-concepts/Pipeline">
Pipeline
@@ -80,7 +90,7 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
</li>
</ul>
</div>
<div style="width:30%">
<div style="width:25%">
<h2>How-To Guides</h2>
<ul>
<li>
@@ -155,7 +165,7 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
</li>
</ul>
</div>
<div style="width:30%">
<!-- <div style="width:25%">
<h2>Examples</h2>
<ul>
<li>
@@ -193,6 +203,26 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
Landing Page Generator
</a>
</li>
<li>
<a target='_blank' href="https://github.com/crewAIInc/crewAI-examples/tree/main/email_auto_responder_flow">
Email Auto Responder Flow
</a>
</li>
<li>
<a target='_blank' href="https://github.com/crewAIInc/crewAI-examples/tree/main/lead-score-flow">
Lead Score Flow
</a>
</li>
<li>
<a target='_blank' href="https://github.com/crewAIInc/crewAI-examples/tree/main/write_a_book_with_flows">
Write a Book Flow
</a>
</li>
<li>
<a target='_blank' href="https://github.com/crewAIInc/crewAI-examples/tree/main/meeting_assistant_flow">
Meeting Assistant Flow
</a>
</li>
</ul>
</div>
</div> -->
</div>


@@ -78,14 +78,14 @@ theme:
palette:
- scheme: default
primary: red
accent: red
primary: deep orange
accent: deep orange
toggle:
icon: material/brightness-7
name: Switch to dark mode
- scheme: slate
primary: red
accent: red
primary: deep orange
accent: deep orange
toggle:
icon: material/brightness-4
name: Switch to light mode
@@ -162,7 +162,7 @@ nav:
- Directory RAG Search: 'tools/DirectorySearchTool.md'
- Directory Read: 'tools/DirectoryReadTool.md'
- Docx Rag Search: 'tools/DOCXSearchTool.md'
- EXA Serch Web Loader: 'tools/EXASearchTool.md'
- EXA Search Web Loader: 'tools/EXASearchTool.md'
- File Read: 'tools/FileReadTool.md'
- File Write: 'tools/FileWriteTool.md'
- Firecrawl Crawl Website Tool: 'tools/FirecrawlCrawlWebsiteTool.md'
@@ -210,6 +210,6 @@ extra:
property: G-N3Q505TMQ6
social:
- icon: fontawesome/brands/twitter
link: https://twitter.com/joaomdmoura
link: https://x.com/crewAIInc
- icon: fontawesome/brands/github
link: https://github.com/joaomdmoura/crewAI
link: https://github.com/crewAIInc/crewAI

poetry.lock (generated): diff suppressed because the file is too large.


@@ -1,6 +1,6 @@
[tool.poetry]
name = "crewai"
version = "0.55.2"
version = "0.67.1"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
authors = ["Joao Moura <joao@crewai.com>"]
readme = "README.md"
@@ -14,14 +14,14 @@ Repository = "https://github.com/crewAIInc/crewAI"
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
pydantic = "^2.4.2"
langchain = ">0.2,<=0.3"
langchain = "^0.2.16"
openai = "^1.13.3"
opentelemetry-api = "^1.22.0"
opentelemetry-sdk = "^1.22.0"
opentelemetry-exporter-otlp-proto-http = "^1.22.0"
instructor = "1.3.3"
regex = "^2024.7.24"
crewai-tools = { version = "^0.12.0", optional = true }
regex = "^2024.9.11"
crewai-tools = { version = "^0.12.1", optional = true }
click = "^8.1.7"
python-dotenv = "^1.0.0"
appdirs = "^1.4.4"
@@ -30,6 +30,9 @@ agentops = { version = "^0.3.0", optional = true }
embedchain = "^0.1.114"
json-repair = "^0.25.2"
auth0-python = "^4.7.1"
poetry = "^1.8.3"
litellm = "^1.44.22"
pyvis = "^0.3.2"
[tool.poetry.extras]
tools = ["crewai-tools"]
@@ -47,13 +50,14 @@ mkdocs-material = { extras = ["imaging"], version = "^9.5.7" }
mkdocs-material-extensions = "^1.3.1"
pillow = "^10.2.0"
cairosvg = "^2.7.1"
crewai-tools = "^0.12.0"
crewai-tools = "^0.12.1"
[tool.poetry.group.test.dependencies]
pytest = "^8.0.0"
pytest-vcr = "^1.0.2"
python-dotenv = "1.0.0"
pytest-asyncio = "^0.23.7"
pytest-subprocess = "^1.5.2"
[tool.poetry.scripts]
crewai = "crewai.cli.cli:crewai"


@@ -1,8 +1,19 @@
import warnings
from crewai.agent import Agent
from crewai.crew import Crew
from crewai.flow.flow import Flow
from crewai.llm import LLM
from crewai.pipeline import Pipeline
from crewai.process import Process
from crewai.routers import Router
from crewai.task import Task
__all__ = ["Agent", "Crew", "Process", "Task", "Pipeline", "Router"]
warnings.filterwarnings(
"ignore",
message="Pydantic serializer warnings:",
category=UserWarning,
module="pydantic.main",
)
__all__ = ["Agent", "Crew", "Process", "Task", "Pipeline", "Router", "LLM", "Flow"]


@@ -1,23 +1,18 @@
import os
from inspect import signature
from typing import Any, List, Optional, Tuple
from langchain.agents.agent import RunnableAgent
from langchain.agents.tools import BaseTool
from langchain.agents.tools import tool as LangChainTool
from langchain_core.agents import AgentAction
from langchain_core.callbacks import BaseCallbackHandler
from langchain_openai import ChatOpenAI
from typing import Any, List, Optional, Union
from pydantic import Field, InstanceOf, PrivateAttr, model_validator
from crewai.agents import CacheHandler, CrewAgentExecutor, CrewAgentParser
from crewai.agents import CacheHandler
from crewai.utilities import Converter, Prompts
from crewai.tools.agent_tools import AgentTools
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.tools.agent_tools import AgentTools
from crewai.utilities import Converter, Prompts
from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE
from crewai.utilities.token_counter_callback import TokenCalcHandler
from crewai.utilities.training_handler import CrewTrainingHandler
from crewai.utilities.token_counter_callback import TokenCalcHandler
from crewai.llm import LLM
def mock_agent_ops_provider():
@@ -34,7 +29,6 @@ agentops = None
if os.environ.get("AGENTOPS_API_KEY"):
try:
import agentops # type: ignore # Name "agentops" already defined on line 21
from agentops import track_agent
except ImportError:
track_agent = mock_agent_ops_provider()
@@ -64,7 +58,6 @@ class Agent(BaseAgent):
allow_delegation: Whether the agent is allowed to delegate tasks to other agents.
tools: Tools at the agent's disposal.
step_callback: Callback to be executed after each step of the agent execution.
callbacks: A list of callback functions from the langchain library that are triggered during the agent's execution process
"""
_times_executed: int = PrivateAttr(default=0)
@@ -81,18 +74,16 @@ class Agent(BaseAgent):
default=None,
description="Callback to be executed after each step of the agent execution.",
)
llm: Any = Field(
default_factory=lambda: ChatOpenAI(
model=os.environ.get("OPENAI_MODEL_NAME", "gpt-4o")
),
description="Language model that will run the agent.",
use_system_prompt: Optional[bool] = Field(
default=True,
description="Use system prompt for the agent.",
)
llm: Union[str, InstanceOf[LLM], Any] = Field(
description="Language model that will run the agent.", default=None
)
function_calling_llm: Optional[Any] = Field(
description="Language model that will run the agent.", default=None
)
callbacks: Optional[List[InstanceOf[BaseCallbackHandler]]] = Field(
default=None, description="Callback to be executed"
)
system_template: Optional[str] = Field(
default=None, description="System format for the agent."
)
@@ -108,6 +99,14 @@ class Agent(BaseAgent):
allow_code_execution: Optional[bool] = Field(
default=False, description="Enable code execution for the agent."
)
respect_context_window: bool = Field(
default=True,
description="Keep messages under the context window size by summarizing content.",
)
max_iter: int = Field(
default=20,
description="Maximum number of iterations for an agent to execute a task before giving it's best answer",
)
max_retry_limit: int = Field(
default=2,
description="Maximum number of retries for an agent to execute a task when an error occurs.",
@@ -117,37 +116,64 @@ class Agent(BaseAgent):
def post_init_setup(self):
self.agent_ops_agent_name = self.role
# Different llms store the model name in different attributes
model_name = getattr(self.llm, "model_name", None) or getattr(
self.llm, "deployment_name", None
)
# Handle different cases for self.llm
if isinstance(self.llm, str):
# If it's a string, create an LLM instance
self.llm = LLM(model=self.llm)
elif isinstance(self.llm, LLM):
# If it's already an LLM instance, keep it as is
pass
elif self.llm is None:
# If it's None, use environment variables or default
model_name = os.environ.get("OPENAI_MODEL_NAME", "gpt-4o-mini")
llm_params = {"model": model_name}
if model_name:
self._setup_llm_callbacks(model_name)
api_base = os.environ.get("OPENAI_API_BASE") or os.environ.get(
"OPENAI_BASE_URL"
)
if api_base:
llm_params["base_url"] = api_base
api_key = os.environ.get("OPENAI_API_KEY")
if api_key:
llm_params["api_key"] = api_key
self.llm = LLM(**llm_params)
else:
# For any other type, attempt to extract relevant attributes
llm_params = {
"model": getattr(self.llm, "model_name", None)
or getattr(self.llm, "deployment_name", None)
or str(self.llm),
"temperature": getattr(self.llm, "temperature", None),
"max_tokens": getattr(self.llm, "max_tokens", None),
"logprobs": getattr(self.llm, "logprobs", None),
"timeout": getattr(self.llm, "timeout", None),
"max_retries": getattr(self.llm, "max_retries", None),
"api_key": getattr(self.llm, "api_key", None),
"base_url": getattr(self.llm, "base_url", None),
"organization": getattr(self.llm, "organization", None),
}
# Remove None values to avoid passing unnecessary parameters
llm_params = {k: v for k, v in llm_params.items() if v is not None}
self.llm = LLM(**llm_params)
# Similar handling for function_calling_llm
if self.function_calling_llm:
if isinstance(self.function_calling_llm, str):
self.function_calling_llm = LLM(model=self.function_calling_llm)
elif not isinstance(self.function_calling_llm, LLM):
self.function_calling_llm = LLM(
model=getattr(self.function_calling_llm, "model_name", None)
or getattr(self.function_calling_llm, "deployment_name", None)
or str(self.function_calling_llm)
)
if not self.agent_executor:
self._setup_agent_executor()
return self
def _setup_llm_callbacks(self, model_name: str):
token_handler = TokenCalcHandler(model_name, self._token_process)
if not isinstance(self.llm.callbacks, list):
self.llm.callbacks = []
if not any(
isinstance(handler, TokenCalcHandler) for handler in self.llm.callbacks
):
self.llm.callbacks.append(token_handler)
if agentops and not any(
isinstance(handler, agentops.LangchainCallbackHandler)
for handler in self.llm.callbacks
):
agentops.stop_instrumenting()
self.llm.callbacks.append(agentops.LangchainCallbackHandler())
def _setup_agent_executor(self):
if not self.cache_handler:
self.cache_handler = CacheHandler()
@@ -190,15 +216,7 @@ class Agent(BaseAgent):
task_prompt += self.i18n.slice("memory").format(memory=memory)
tools = tools or self.tools or []
parsed_tools = self._parse_tools(tools)
self.create_agent_executor(tools=tools)
self.agent_executor.tools = parsed_tools
self.agent_executor.task = task
self.agent_executor.tools_description = self._render_text_description_and_args(
parsed_tools
)
self.agent_executor.tools_names = self.__tools_names(parsed_tools)
self.create_agent_executor(tools=tools, task=task)
if self.crew and self.crew._train:
task_prompt = self._training_handler(task_prompt=task_prompt)
@@ -211,6 +229,7 @@ class Agent(BaseAgent):
"input": task_prompt,
"tool_names": self.agent_executor.tools_names,
"tools": self.agent_executor.tools_description,
"ask_for_human_input": task.human_input,
}
)["output"]
except Exception as e:
@@ -231,73 +250,25 @@ class Agent(BaseAgent):
return result
def format_log_to_str(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
observation_prefix: str = "Observation: ",
llm_prefix: str = "",
) -> str:
"""Construct the scratchpad that lets the agent continue its thought process."""
thoughts = ""
for action, observation in intermediate_steps:
thoughts += action.log
thoughts += f"\n{observation_prefix}{observation}\n{llm_prefix}"
return thoughts
def create_agent_executor(self, tools=None) -> None:
def create_agent_executor(self, tools=None, task=None) -> None:
"""Create an agent executor for the agent.
Returns:
An instance of the CrewAgentExecutor class.
"""
tools = tools or self.tools or []
agent_args = {
"input": lambda x: x["input"],
"tools": lambda x: x["tools"],
"tool_names": lambda x: x["tool_names"],
"agent_scratchpad": lambda x: self.format_log_to_str(
x["intermediate_steps"]
),
}
executor_args = {
"llm": self.llm,
"i18n": self.i18n,
"crew": self.crew,
"crew_agent": self,
"tools": self._parse_tools(tools),
"verbose": self.verbose,
"original_tools": tools,
"handle_parsing_errors": True,
"max_iterations": self.max_iter,
"max_execution_time": self.max_execution_time,
"step_callback": self.step_callback,
"tools_handler": self.tools_handler,
"function_calling_llm": self.function_calling_llm,
"callbacks": self.callbacks,
"max_tokens": self.max_tokens,
}
if self._rpm_controller:
executor_args["request_within_rpm_limit"] = (
self._rpm_controller.check_or_wait
)
parsed_tools = self._parse_tools(tools)
prompt = Prompts(
i18n=self.i18n,
agent=self,
tools=tools,
i18n=self.i18n,
use_system_prompt=self.use_system_prompt,
system_template=self.system_template,
prompt_template=self.prompt_template,
response_template=self.response_template,
).task_execution()
execution_prompt = prompt.partial(
goal=self.goal,
role=self.role,
backstory=self.backstory,
)
stop_words = [self.i18n.slice("observation")]
if self.response_template:
@@ -305,11 +276,26 @@ class Agent(BaseAgent):
self.response_template.split("{{ .Response }}")[1].strip()
)
bind = self.llm.bind(stop=stop_words)
inner_agent = agent_args | execution_prompt | bind | CrewAgentParser(agent=self)
self.agent_executor = CrewAgentExecutor(
agent=RunnableAgent(runnable=inner_agent), **executor_args
llm=self.llm,
task=task,
agent=self,
crew=self.crew,
tools=parsed_tools,
prompt=prompt,
original_tools=tools,
stop_words=stop_words,
max_iter=self.max_iter,
tools_handler=self.tools_handler,
tools_names=self.__tools_names(parsed_tools),
tools_description=self._render_text_description_and_args(parsed_tools),
step_callback=self.step_callback,
function_calling_llm=self.function_calling_llm,
respect_context_window=self.respect_context_window,
request_within_rpm_limit=self._rpm_controller.check_or_wait
if self._rpm_controller
else None,
callbacks=[TokenCalcHandler(self._token_process)],
)
def get_delegation_tools(self, agents: List[BaseAgent]):
@@ -330,7 +316,7 @@ class Agent(BaseAgent):
def get_output_converter(self, llm, text, model, instructions):
return Converter(llm=llm, text=text, model=model, instructions=instructions)
def _parse_tools(self, tools: List[Any]) -> List[LangChainTool]: # type: ignore # Function "langchain_core.tools.tool" is not valid as a type
def _parse_tools(self, tools: List[Any]) -> List[Any]: # type: ignore
"""Parse tools to be used for the task."""
tools_list = []
try:
@@ -358,8 +344,9 @@ class Agent(BaseAgent):
human_feedbacks = [
i["human_feedback"] for i in data.get(agent_id, {}).values()
]
task_prompt += "You MUST follow these feedbacks: \n " + "\n - ".join(
human_feedbacks
task_prompt += (
"\n\nYou MUST follow these instructions: \n "
+ "\n - ".join(human_feedbacks)
)
return task_prompt
@@ -368,12 +355,13 @@ class Agent(BaseAgent):
"""Use trained data for the agent task prompt to improve output."""
if data := CrewTrainingHandler(TRAINED_AGENTS_DATA_FILE).load():
if trained_data_output := data.get(self.role):
task_prompt += "You MUST follow these feedbacks: \n " + "\n - ".join(
trained_data_output["suggestions"]
task_prompt += (
"\n\nYou MUST follow these instructions: \n - "
+ "\n - ".join(trained_data_output["suggestions"])
)
return task_prompt
def _render_text_description(self, tools: List[BaseTool]) -> str:
def _render_text_description(self, tools: List[Any]) -> str:
"""Render the tool name and description in plain text.
Output will be in the format of:
@@ -392,7 +380,7 @@ class Agent(BaseAgent):
return description
def _render_text_description_and_args(self, tools: List[BaseTool]) -> str:
def _render_text_description_and_args(self, tools: List[Any]) -> str:
"""Render the tool name, description, and args in plain text.
Output will be in the format of:


@@ -1,6 +1,5 @@
from .cache.cache_handler import CacheHandler
from .executor import CrewAgentExecutor
from .parser import CrewAgentParser
from .tools_handler import ToolsHandler
__all__ = ["CacheHandler", "CrewAgentExecutor", "CrewAgentParser", "ToolsHandler"]
__all__ = ["CacheHandler", "CrewAgentParser", "ToolsHandler"]


@@ -102,7 +102,8 @@ class BaseAgent(ABC, BaseModel):
description="Maximum number of requests per minute for the agent execution to be respected.",
)
allow_delegation: bool = Field(
default=True, description="Allow delegation of tasks to agents"
default=False,
description="Enable agent to delegate and ask questions among each other.",
)
tools: Optional[List[Any]] = Field(
default_factory=list, description="Tools at agents' disposal"
@@ -175,7 +176,11 @@ class BaseAgent(ABC, BaseModel):
@property
def key(self):
source = [self.role, self.goal, self.backstory]
source = [
self._original_role or self.role,
self._original_goal or self.goal,
self._original_backstory or self.backstory,
]
return md5("|".join(source).encode(), usedforsecurity=False).hexdigest()
@abstractmethod
@@ -224,10 +229,8 @@ class BaseAgent(ABC, BaseModel):
# Copy llm and clear callbacks
existing_llm = shallow_copy(self.llm)
existing_llm.callbacks = []
copied_data = self.model_dump(exclude=exclude)
copied_data = {k: v for k, v in copied_data.items() if v is not None}
copied_agent = type(self)(**copied_data, llm=existing_llm, tools=self.tools)
return copied_agent


@@ -6,6 +6,7 @@ from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.utilities.converter import ConverterError
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
from crewai.utilities import I18N
from crewai.utilities.printer import Printer
if TYPE_CHECKING:
@@ -19,15 +20,14 @@ class CrewAgentExecutorMixin:
crew_agent: Optional["BaseAgent"]
task: Optional["Task"]
iterations: int
force_answer_max_iterations: int
have_forced_answer: bool
max_iter: int
_i18n: I18N
_printer: Printer = Printer()
def _should_force_answer(self) -> bool:
"""Determine if a forced answer is required based on iteration count."""
return (
self.iterations == self.force_answer_max_iterations
) and not self.have_forced_answer
return (self.iterations >= self.max_iter) and not self.have_forced_answer
def _create_short_term_memory(self, output) -> None:
"""Create and save a short-term memory item if conditions are met."""
@@ -102,6 +102,12 @@ class CrewAgentExecutorMixin:
def _ask_human_input(self, final_answer: dict) -> str:
"""Prompt human input for final decision making."""
return input(
self._i18n.slice("getting_input").format(final_answer=final_answer)
self._printer.print(
content=f"\033[1m\033[95m ## Final Result:\033[00m \033[92m{final_answer}\033[00m"
)
self._printer.print(
content="\n\n=====\n## Please provide feedback on the Final Result and the Agent's actions:",
color="bold_yellow",
)
return input()


@@ -39,9 +39,3 @@ class OutputConverter(BaseModel, ABC):
def to_json(self, current_attempt=1):
"""Convert text to json."""
pass
@property
@abstractmethod
def is_gpt(self) -> bool:
"""Return if llm provided is of gpt from openai."""
pass


@@ -0,0 +1,357 @@
import json
import re
from typing import Any, Dict, List, Union
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
from crewai.agents.parser import CrewAgentParser
from crewai.agents.tools_handler import ToolsHandler
from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
from crewai.utilities import I18N, Printer
from crewai.utilities.constants import TRAINING_DATA_FILE
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException,
)
from crewai.utilities.logger import Logger
from crewai.utilities.training_handler import CrewTrainingHandler
from crewai.agents.parser import (
AgentAction,
AgentFinish,
OutputParserException,
FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE,
)
class CrewAgentExecutor(CrewAgentExecutorMixin):
_logger: Logger = Logger()
def __init__(
self,
llm: Any,
task: Any,
crew: Any,
agent: Any,
prompt: dict[str, str],
max_iter: int,
tools: List[Any],
tools_names: str,
stop_words: List[str],
tools_description: str,
tools_handler: ToolsHandler,
step_callback: Any = None,
original_tools: List[Any] = [],
function_calling_llm: Any = None,
respect_context_window: bool = False,
request_within_rpm_limit: Any = None,
callbacks: List[Any] = [],
):
self._i18n: I18N = I18N()
self.llm = llm
self.task = task
self.agent = agent
self.crew = crew
self.prompt = prompt
self.tools = tools
self.tools_names = tools_names
self.stop = stop_words
self.max_iter = max_iter
self.callbacks = callbacks
self._printer: Printer = Printer()
self.tools_handler = tools_handler
self.original_tools = original_tools
self.step_callback = step_callback
self.use_stop_words = self.llm.supports_stop_words()
self.tools_description = tools_description
self.function_calling_llm = function_calling_llm
self.respect_context_window = respect_context_window
self.request_within_rpm_limit = request_within_rpm_limit
self.ask_for_human_input = False
self.messages: List[Dict[str, str]] = []
self.iterations = 0
self.log_error_after = 3
self.have_forced_answer = False
self.name_to_tool_map = {tool.name: tool for tool in self.tools}
if self.llm.stop:
self.llm.stop = list(set(self.llm.stop + self.stop))
else:
self.llm.stop = self.stop
def invoke(self, inputs: Dict[str, str]) -> Dict[str, Any]:
if "system" in self.prompt:
system_prompt = self._format_prompt(self.prompt.get("system", ""), inputs)
user_prompt = self._format_prompt(self.prompt.get("user", ""), inputs)
self.messages.append(self._format_msg(system_prompt, role="system"))
self.messages.append(self._format_msg(user_prompt))
else:
user_prompt = self._format_prompt(self.prompt.get("prompt", ""), inputs)
self.messages.append(self._format_msg(user_prompt))
self._show_start_logs()
self.ask_for_human_input = bool(inputs.get("ask_for_human_input", False))
formatted_answer = self._invoke_loop()
if self.ask_for_human_input:
human_feedback = self._ask_human_input(formatted_answer.output)
if self.crew and self.crew._train:
self._handle_crew_training_output(formatted_answer, human_feedback)
# Making sure we only ask for it once, so disabling for the next thought loop
self.ask_for_human_input = False
self.messages.append(self._format_msg(f"Feedback: {human_feedback}"))
formatted_answer = self._invoke_loop()
if self.crew and self.crew._train:
self._handle_crew_training_output(formatted_answer)
return {"output": formatted_answer.output}
def _invoke_loop(self, formatted_answer=None):
try:
while not isinstance(formatted_answer, AgentFinish):
if not self.request_within_rpm_limit or self.request_within_rpm_limit():
answer = self.llm.call(
self.messages,
callbacks=self.callbacks,
)
if not self.use_stop_words:
try:
self._format_answer(answer)
except OutputParserException as e:
if (
FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE
in e.error
):
answer = answer.split("Observation:")[0].strip()
self.iterations += 1
formatted_answer = self._format_answer(answer)
if isinstance(formatted_answer, AgentAction):
action_result = self._use_tool(formatted_answer)
formatted_answer.text += f"\nObservation: {action_result}"
formatted_answer.result = action_result
self._show_logs(formatted_answer)
if self.step_callback:
self.step_callback(formatted_answer)
if self._should_force_answer():
if self.have_forced_answer:
return AgentFinish(
output=self._i18n.errors(
"force_final_answer_error"
).format(formatted_answer.text),
text=formatted_answer.text,
)
else:
formatted_answer.text += (
f'\n{self._i18n.errors("force_final_answer")}'
)
self.have_forced_answer = True
self.messages.append(
self._format_msg(formatted_answer.text, role="user")
)
except OutputParserException as e:
self.messages.append({"role": "user", "content": e.error})
if self.iterations > self.log_error_after:
self._printer.print(
content=f"Error parsing LLM output, agent will retry: {e.error}",
color="red",
)
return self._invoke_loop(formatted_answer)
except Exception as e:
if LLMContextLengthExceededException(str(e))._is_context_limit_error(
str(e)
):
self._handle_context_length()
return self._invoke_loop(formatted_answer)
else:
raise e
self._show_logs(formatted_answer)
return formatted_answer
def _show_start_logs(self):
if self.agent.verbose or (
hasattr(self, "crew") and getattr(self.crew, "verbose", False)
):
agent_role = self.agent.role.split("\n")[0]
self._printer.print(
content=f"\033[1m\033[95m# Agent:\033[00m \033[1m\033[92m{agent_role}\033[00m"
)
self._printer.print(
content=f"\033[95m## Task:\033[00m \033[92m{self.task.description}\033[00m"
)
def _show_logs(self, formatted_answer: Union[AgentAction, AgentFinish]):
if self.agent.verbose or (
hasattr(self, "crew") and getattr(self.crew, "verbose", False)
):
agent_role = self.agent.role.split("\n")[0]
if isinstance(formatted_answer, AgentAction):
thought = re.sub(r"\n+", "\n", formatted_answer.thought)
formatted_json = json.dumps(
formatted_answer.tool_input,
indent=2,
ensure_ascii=False,
)
self._printer.print(
content=f"\n\n\033[1m\033[95m# Agent:\033[00m \033[1m\033[92m{agent_role}\033[00m"
)
if thought and thought != "":
self._printer.print(
content=f"\033[95m## Thought:\033[00m \033[92m{thought}\033[00m"
)
self._printer.print(
content=f"\033[95m## Using tool:\033[00m \033[92m{formatted_answer.tool}\033[00m"
)
self._printer.print(
content=f"\033[95m## Tool Input:\033[00m \033[92m\n{formatted_json}\033[00m"
)
self._printer.print(
content=f"\033[95m## Tool Output:\033[00m \033[92m\n{formatted_answer.result}\033[00m"
)
elif isinstance(formatted_answer, AgentFinish):
self._printer.print(
content=f"\n\n\033[1m\033[95m# Agent:\033[00m \033[1m\033[92m{agent_role}\033[00m"
)
self._printer.print(
content=f"\033[95m## Final Answer:\033[00m \033[92m\n{formatted_answer.output}\033[00m\n\n"
)
def _use_tool(self, agent_action: AgentAction) -> Any:
tool_usage = ToolUsage(
tools_handler=self.tools_handler,
tools=self.tools,
original_tools=self.original_tools,
tools_description=self.tools_description,
tools_names=self.tools_names,
function_calling_llm=self.function_calling_llm,
task=self.task, # type: ignore[arg-type]
agent=self.agent,
action=agent_action,
)
tool_calling = tool_usage.parse(agent_action.text)
if isinstance(tool_calling, ToolUsageErrorException):
tool_result = tool_calling.message
else:
if tool_calling.tool_name.casefold().strip() in [
name.casefold().strip() for name in self.name_to_tool_map
] or tool_calling.tool_name.casefold().replace("_", " ") in [
name.casefold().strip() for name in self.name_to_tool_map
]:
tool_result = tool_usage.use(tool_calling, agent_action.text)
else:
tool_result = self._i18n.errors("wrong_tool_name").format(
tool=tool_calling.tool_name,
tools=", ".join([tool.name.casefold() for tool in self.tools]),
)
return tool_result
def _summarize_messages(self) -> None:
messages_groups = []
for message in self.messages:
content = message["content"]
cut_size = self.llm.get_context_window_size()
for i in range(0, len(content), cut_size):
messages_groups.append(content[i : i + cut_size])
summarized_contents = []
for group in messages_groups:
summary = self.llm.call(
[
self._format_msg(
self._i18n.slice("summarizer_system_message"), role="system"
),
self._format_msg(
self._i18n.slice("sumamrize_instruction").format(group=group),
),
],
callbacks=self.callbacks,
)
summarized_contents.append(summary)
merged_summary = " ".join(str(content) for content in summarized_contents)
self.messages = [
self._format_msg(
self._i18n.slice("summary").format(merged_summary=merged_summary)
)
]
def _handle_context_length(self) -> None:
if self.respect_context_window:
self._logger.log(
"debug",
"Context length exceeded. Summarizing content to fit the model context window.",
color="yellow",
)
self._summarize_messages()
else:
self._logger.log(
"debug",
"Context length exceeded. Consider using smaller text or RAG tools from crewai_tools.",
color="red",
)
raise SystemExit(
"Context length exceeded and user opted not to summarize. Consider using smaller text or RAG tools from crewai_tools."
)
def _handle_crew_training_output(
self, result: AgentFinish, human_feedback: str | None = None
) -> None:
"""Function to handle the process of the training data."""
agent_id = str(self.agent.id)
if (
CrewTrainingHandler(TRAINING_DATA_FILE).load()
and not self.ask_for_human_input
):
training_data = CrewTrainingHandler(TRAINING_DATA_FILE).load()
if training_data.get(agent_id):
training_data[agent_id][self.crew._train_iteration][
"improved_output"
] = result.output
CrewTrainingHandler(TRAINING_DATA_FILE).save(training_data)
if self.ask_for_human_input and human_feedback is not None:
training_data = {
"initial_output": result.output,
"human_feedback": human_feedback,
"agent": agent_id,
"agent_role": self.agent.role,
}
if self.crew is not None and hasattr(self.crew, "_train_iteration"):
train_iteration = self.crew._train_iteration
if isinstance(train_iteration, int):
CrewTrainingHandler(TRAINING_DATA_FILE).append(
train_iteration, agent_id, training_data
)
else:
self._logger.log(
"error",
"Invalid train iteration type. Expected int.",
color="red",
)
else:
self._logger.log(
"error",
"Crew is None or does not have _train_iteration attribute.",
color="red",
)
def _format_prompt(self, prompt: str, inputs: Dict[str, str]) -> str:
prompt = prompt.replace("{input}", inputs["input"])
prompt = prompt.replace("{tool_names}", inputs["tool_names"])
prompt = prompt.replace("{tools}", inputs["tools"])
return prompt
def _format_answer(self, answer: str) -> Union[AgentAction, AgentFinish]:
return CrewAgentParser(agent=self.agent).parse(answer)
def _format_msg(self, prompt: str, role: str = "user") -> Dict[str, str]:
return {"role": role, "content": prompt}


@@ -1,397 +0,0 @@
import threading
import time
from typing import Any, Dict, Iterator, List, Literal, Optional, Tuple, Union
import click
from langchain.agents import AgentExecutor
from langchain.agents.agent import ExceptionTool
from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_core.agents import AgentAction, AgentFinish, AgentStep
from langchain_core.exceptions import OutputParserException
from langchain_core.tools import BaseTool
from langchain_core.utils.input import get_color_mapping
from pydantic import InstanceOf
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
from crewai.agents.tools_handler import ToolsHandler
from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
from crewai.utilities import I18N
from crewai.utilities.constants import TRAINING_DATA_FILE
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException,
)
from crewai.utilities.logger import Logger
from crewai.utilities.training_handler import CrewTrainingHandler
class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
_i18n: I18N = I18N()
should_ask_for_human_input: bool = False
llm: Any = None
iterations: int = 0
task: Any = None
tools_description: str = ""
tools_names: str = ""
original_tools: List[Any] = []
crew_agent: Any = None
crew: Any = None
function_calling_llm: Any = None
request_within_rpm_limit: Any = None
tools_handler: Optional[InstanceOf[ToolsHandler]] = None
max_iterations: Optional[int] = 15
have_forced_answer: bool = False
force_answer_max_iterations: Optional[int] = None # type: ignore # Incompatible types in assignment (expression has type "int | None", base class "CrewAgentExecutorMixin" defined the type as "int")
step_callback: Optional[Any] = None
system_template: Optional[str] = None
prompt_template: Optional[str] = None
response_template: Optional[str] = None
_logger: Logger = Logger()
_fit_context_window_strategy: Optional[Literal["summarize"]] = "summarize"
def _call(
self,
inputs: Dict[str, str],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
"""Run text through and get agent response."""
# Construct a mapping of tool name to tool for easy lookup
name_to_tool_map = {tool.name: tool for tool in self.tools}
# We construct a mapping from each tool to a color, used for logging.
color_mapping = get_color_mapping(
[tool.name.casefold() for tool in self.tools],
excluded_colors=["green", "red"],
)
intermediate_steps: List[Tuple[AgentAction, str]] = []
# Allowing human input given task setting
if self.task and self.task.human_input:
self.should_ask_for_human_input = True
# Let's start tracking the number of iterations and time elapsed
self.iterations = 0
time_elapsed = 0.0
start_time = time.time()
# We now enter the agent loop (until it returns something).
while self._should_continue(self.iterations, time_elapsed):
if not self.request_within_rpm_limit or self.request_within_rpm_limit():
next_step_output = self._take_next_step(
name_to_tool_map,
color_mapping,
inputs,
intermediate_steps,
run_manager=run_manager,
)
if self.step_callback:
self.step_callback(next_step_output)
if isinstance(next_step_output, AgentFinish):
# Creating long term memory
create_long_term_memory = threading.Thread(
target=self._create_long_term_memory, args=(next_step_output,)
)
create_long_term_memory.start()
return self._return(
next_step_output, intermediate_steps, run_manager=run_manager
)
intermediate_steps.extend(next_step_output)
if len(next_step_output) == 1:
next_step_action = next_step_output[0]
# See if tool should return directly
tool_return = self._get_tool_return(next_step_action)
if tool_return is not None:
return self._return(
tool_return, intermediate_steps, run_manager=run_manager
)
self.iterations += 1
time_elapsed = time.time() - start_time
output = self.agent.return_stopped_response(
self.early_stopping_method, intermediate_steps, **inputs
)
return self._return(output, intermediate_steps, run_manager=run_manager)
def _iter_next_step(
self,
name_to_tool_map: Dict[str, BaseTool],
color_mapping: Dict[str, str],
inputs: Dict[str, str],
intermediate_steps: List[Tuple[AgentAction, str]],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Iterator[Union[AgentFinish, AgentAction, AgentStep]]:
"""Take a single step in the thought-action-observation loop.
Override this to take control of how the agent makes and acts on choices.
"""
try:
if self._should_force_answer():
error = self._i18n.errors("force_final_answer")
output = AgentAction("_Exception", error, error)
self.have_forced_answer = True
yield AgentStep(action=output, observation=error)
return
intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
# Call the LLM to see what to do.
output = self.agent.plan(
intermediate_steps,
callbacks=run_manager.get_child() if run_manager else None,
**inputs,
)
except OutputParserException as e:
if isinstance(self.handle_parsing_errors, bool):
raise_error = not self.handle_parsing_errors
else:
raise_error = False
if raise_error:
raise ValueError(
"An output parsing error occurred. "
"In order to pass this error back to the agent and have it try "
"again, pass `handle_parsing_errors=True` to the AgentExecutor. "
f"This is the error: {str(e)}"
)
str(e)
if isinstance(self.handle_parsing_errors, bool):
if e.send_to_llm:
observation = f"\n{str(e.observation)}"
str(e.llm_output)
else:
observation = ""
elif isinstance(self.handle_parsing_errors, str):
observation = f"\n{self.handle_parsing_errors}"
elif callable(self.handle_parsing_errors):
observation = f"\n{self.handle_parsing_errors(e)}"
else:
raise ValueError("Got unexpected type of `handle_parsing_errors`")
output = AgentAction("_Exception", observation, "")
if run_manager:
run_manager.on_agent_action(output, color="green")
tool_run_kwargs = self.agent.tool_run_logging_kwargs()
observation = ExceptionTool().run(
output.tool_input,
verbose=False,
color=None,
callbacks=run_manager.get_child() if run_manager else None,
**tool_run_kwargs,
)
if self._should_force_answer():
error = self._i18n.errors("force_final_answer")
output = AgentAction("_Exception", error, error)
yield AgentStep(action=output, observation=error)
return
yield AgentStep(action=output, observation=observation)
return
except Exception as e:
if LLMContextLengthExceededException(str(e))._is_context_limit_error(
str(e)
):
output = self._handle_context_length_error(
intermediate_steps, run_manager, inputs
)
if isinstance(output, AgentFinish):
yield output
elif isinstance(output, list):
for step in output:
yield step
return
raise e
# If the tool chosen is the finishing tool, then we end and return.
if isinstance(output, AgentFinish):
if self.should_ask_for_human_input:
human_feedback = self._ask_human_input(output.return_values["output"])
if self.crew and self.crew._train:
self._handle_crew_training_output(output, human_feedback)
# Making sure we only ask for it once, so disabling for the next thought loop
self.should_ask_for_human_input = False
action = AgentAction(
tool="Human Input", tool_input=human_feedback, log=output.log
)
yield AgentStep(
action=action,
observation=self._i18n.slice("human_feedback").format(
human_feedback=human_feedback
),
)
return
else:
if self.crew and self.crew._train:
self._handle_crew_training_output(output)
yield output
return
self._create_short_term_memory(output)
actions: List[AgentAction]
actions = [output] if isinstance(output, AgentAction) else output
yield from actions
for agent_action in actions:
if run_manager:
run_manager.on_agent_action(agent_action, color="green")
tool_usage = ToolUsage(
tools_handler=self.tools_handler, # type: ignore # Argument "tools_handler" to "ToolUsage" has incompatible type "ToolsHandler | None"; expected "ToolsHandler"
tools=self.tools, # type: ignore # Argument "tools" to "ToolUsage" has incompatible type "Sequence[BaseTool]"; expected "list[BaseTool]"
original_tools=self.original_tools,
tools_description=self.tools_description,
tools_names=self.tools_names,
function_calling_llm=self.function_calling_llm,
task=self.task,
agent=self.crew_agent,
action=agent_action,
)
tool_calling = tool_usage.parse(agent_action.log)
if isinstance(tool_calling, ToolUsageErrorException):
observation = tool_calling.message
else:
if tool_calling.tool_name.casefold().strip() in [
name.casefold().strip() for name in name_to_tool_map
] or tool_calling.tool_name.casefold().replace("_", " ") in [
name.casefold().strip() for name in name_to_tool_map
]:
observation = tool_usage.use(tool_calling, agent_action.log)
else:
observation = self._i18n.errors("wrong_tool_name").format(
tool=tool_calling.tool_name,
tools=", ".join([tool.name.casefold() for tool in self.tools]),
)
yield AgentStep(action=agent_action, observation=observation)
def _handle_crew_training_output(
self, output: AgentFinish, human_feedback: str | None = None
) -> None:
"""Function to handle the process of the training data."""
agent_id = str(self.crew_agent.id)
if (
CrewTrainingHandler(TRAINING_DATA_FILE).load()
and not self.should_ask_for_human_input
):
training_data = CrewTrainingHandler(TRAINING_DATA_FILE).load()
if training_data.get(agent_id):
training_data[agent_id][self.crew._train_iteration][
"improved_output"
] = output.return_values["output"]
CrewTrainingHandler(TRAINING_DATA_FILE).save(training_data)
if self.should_ask_for_human_input and human_feedback is not None:
training_data = {
"initial_output": output.return_values["output"],
"human_feedback": human_feedback,
"agent": agent_id,
"agent_role": self.crew_agent.role,
}
CrewTrainingHandler(TRAINING_DATA_FILE).append(
self.crew._train_iteration, agent_id, training_data
)
def _handle_context_length(
self, intermediate_steps: List[Tuple[AgentAction, str]]
) -> List[Tuple[AgentAction, str]]:
text = intermediate_steps[0][1]
original_action = intermediate_steps[0][0]
text_splitter = RecursiveCharacterTextSplitter(
separators=["\n\n", "\n"],
chunk_size=8000,
chunk_overlap=500,
)
if self._fit_context_window_strategy == "summarize":
docs = text_splitter.create_documents([text])
self._logger.log(
"debug",
"Summarizing Content, it is recommended to use a RAG tool",
color="bold_blue",
)
summarize_chain = load_summarize_chain(
self.llm, chain_type="map_reduce", verbose=True
)
summarized_docs = []
for doc in docs:
summary = summarize_chain.invoke(
{"input_documents": [doc]}, return_only_outputs=True
)
summarized_docs.append(summary["output_text"])
formatted_results = "\n\n".join(summarized_docs)
summary_step = AgentStep(
action=AgentAction(
tool=original_action.tool,
tool_input=original_action.tool_input,
log=original_action.log,
),
observation=formatted_results,
)
summary_tuple = (summary_step.action, summary_step.observation)
return [summary_tuple]
return intermediate_steps
def _handle_context_length_error(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
run_manager: Optional[CallbackManagerForChainRun],
inputs: Dict[str, str],
) -> Union[AgentFinish, List[AgentStep]]:
self._logger.log(
"debug",
"Context length exceeded. Asking user if they want to use summarize prompt to fit, this will reduce context length.",
color="yellow",
)
user_choice = click.confirm(
"Context length exceeded. Do you want to summarize the text to fit models context window?"
)
if user_choice:
self._logger.log(
"debug",
"Context length exceeded. Using summarize prompt to fit, this will reduce context length.",
color="bold_blue",
)
intermediate_steps = self._handle_context_length(intermediate_steps)
output = self.agent.plan(
intermediate_steps,
callbacks=run_manager.get_child() if run_manager else None,
**inputs,
)
if isinstance(output, AgentFinish):
return output
elif isinstance(output, AgentAction):
return [AgentStep(action=output, observation=None)]
else:
return [AgentStep(action=action, observation=None) for action in output]
else:
self._logger.log(
"debug",
"Context length exceeded. Consider using smaller text or RAG tools from crewai_tools.",
color="red",
)
raise SystemExit(
"Context length exceeded and user opted not to summarize. Consider using smaller text or RAG tools from crewai_tools."
)

View File

@@ -1,10 +1,6 @@
import re
from typing import Any, Union
from json_repair import repair_json
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.exceptions import OutputParserException
from crewai.utilities import I18N
@@ -14,7 +10,39 @@ MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE = "I did it wrong. Invalid Forma
FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE = "I did it wrong. Tried to both perform Action and give a Final Answer at the same time, I must do one or the other"
class CrewAgentParser(ReActSingleInputOutputParser):
class AgentAction:
thought: str
tool: str
tool_input: str
text: str
result: str
def __init__(self, thought: str, tool: str, tool_input: str, text: str):
self.thought = thought
self.tool = tool
self.tool_input = tool_input
self.text = text
class AgentFinish:
thought: str
output: str
text: str
def __init__(self, thought: str, output: str, text: str):
self.thought = thought
self.output = output
self.text = text
class OutputParserException(Exception):
error: str
def __init__(self, error: str):
self.error = error
class CrewAgentParser:
"""Parses ReAct-style LLM calls that have a single tool input.
Expects output to be in one of two formats.
@@ -38,7 +66,11 @@ class CrewAgentParser(ReActSingleInputOutputParser):
_i18n: I18N = I18N()
agent: Any = None
def __init__(self, agent: Any):
self.agent = agent
def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
thought = self._extract_thought(text)
includes_answer = FINAL_ANSWER_ACTION in text
regex = (
r"Action\s*\d*\s*:[\s]*(.*?)[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
@@ -47,7 +79,7 @@ class CrewAgentParser(ReActSingleInputOutputParser):
if action_match:
if includes_answer:
raise OutputParserException(
f"{FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE}: {text}"
f"{FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE}"
)
action = action_match.group(1)
clean_action = self._clean_action(action)
@@ -57,30 +89,23 @@ class CrewAgentParser(ReActSingleInputOutputParser):
tool_input = action_input.strip(" ").strip('"')
safe_tool_input = self._safe_repair_json(tool_input)
return AgentAction(clean_action, safe_tool_input, text)
return AgentAction(thought, clean_action, safe_tool_input, text)
elif includes_answer:
return AgentFinish(
{"output": text.split(FINAL_ANSWER_ACTION)[-1].strip()}, text
)
final_answer = text.split(FINAL_ANSWER_ACTION)[-1].strip()
return AgentFinish(thought, final_answer, text)
if not re.search(r"Action\s*\d*\s*:[\s]*(.*?)", text, re.DOTALL):
self.agent.increment_formatting_errors()
raise OutputParserException(
f"Could not parse LLM output: `{text}`",
observation=f"{MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE}\n{self._i18n.slice('final_answer_format')}",
llm_output=text,
send_to_llm=True,
f"{MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE}\n{self._i18n.slice('final_answer_format')}",
)
elif not re.search(
r"[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)", text, re.DOTALL
):
self.agent.increment_formatting_errors()
raise OutputParserException(
f"Could not parse LLM output: `{text}`",
observation=MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE,
llm_output=text,
send_to_llm=True,
MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE,
)
else:
format = self._i18n.slice("format_without_tools")
@@ -88,11 +113,15 @@ class CrewAgentParser(ReActSingleInputOutputParser):
self.agent.increment_formatting_errors()
raise OutputParserException(
error,
observation=error,
llm_output=text,
send_to_llm=True,
)
def _extract_thought(self, text: str) -> str:
regex = r"(.*?)(?:\n\nAction|\n\nFinal Answer)"
thought_match = re.search(regex, text, re.DOTALL)
if thought_match:
return thought_match.group(1).strip()
return ""
def _clean_action(self, text: str) -> str:
"""Clean action string by removing non-essential formatting characters."""
return re.sub(r"^\s*\*+\s*|\s*\*+\s*$", "", text).strip()
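For orientation, here is a minimal sketch of how the rewritten parser handles the two formats described in its docstring. The import path and the stub agent are assumptions for illustration; the parser only needs an object exposing `increment_formatting_errors`, and `FINAL_ANSWER_ACTION` is assumed to be `"Final Answer:"`.

```python
# A minimal sketch; the import path is hypothetical.
from crewai.agents.parser import CrewAgentParser

class StubAgent:
    def increment_formatting_errors(self) -> None:
        pass  # the real agent counts formatting errors here

parser = CrewAgentParser(agent=StubAgent())

# Format 1: a tool call is parsed into AgentAction(thought, tool, tool_input, text)
action = parser.parse(
    "I should look this up.\n\n"
    "Action: search\n"
    "Action Input: what is CrewAI?"
)
print(action.thought, action.tool, action.tool_input)

# Format 2: a final answer is parsed into AgentFinish(thought, output, text)
finish = parser.parse(
    "I already know this.\n\n"
    "Final Answer: CrewAI orchestrates role-playing AI agents."
)
print(finish.thought, finish.output)
```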

View File

@@ -79,7 +79,7 @@ class TokenManager:
"""
encrypted_data = self.read_secure_file(self.file_path)
decrypted_data = self.fernet.decrypt(encrypted_data)
decrypted_data = self.fernet.decrypt(encrypted_data) # type: ignore
data = json.loads(decrypted_data)
expiration = datetime.fromisoformat(data["expiration"])

View File

@@ -4,6 +4,7 @@ import click
import pkg_resources
from crewai.cli.create_crew import create_crew
from crewai.cli.create_flow import create_flow
from crewai.cli.create_pipeline import create_pipeline
from crewai.memory.storage.kickoff_task_outputs_storage import (
KickoffTaskOutputsSQLiteStorage,
@@ -13,9 +14,12 @@ from .authentication.main import AuthenticationCommand
from .deploy.main import DeployCommand
from .evaluate_crew import evaluate_crew
from .install_crew import install_crew
from .plot_flow import plot_flow
from .replay_from_task import replay_task_command
from .reset_memories_command import reset_memories_command
from .run_crew import run_crew
from .run_flow import run_flow
from .tools.main import ToolCommand
from .train_crew import train_crew
@@ -25,19 +29,20 @@ def crewai():
@crewai.command()
@click.argument("type", type=click.Choice(["crew", "pipeline"]))
@click.argument("type", type=click.Choice(["crew", "pipeline", "flow"]))
@click.argument("name")
@click.option(
"--router", is_flag=True, help="Create a pipeline with router functionality"
)
def create(type, name, router):
"""Create a new crew or pipeline."""
def create(type, name):
"""Create a new crew, pipeline, or flow."""
if type == "crew":
create_crew(name)
elif type == "pipeline":
create_pipeline(name, router)
create_pipeline(name)
elif type == "flow":
create_flow(name)
else:
click.secho("Error: Invalid type. Must be 'crew' or 'pipeline'.", fg="red")
click.secho(
"Error: Invalid type. Must be 'crew', 'pipeline', or 'flow'.", fg="red"
)
@crewai.command()
@@ -202,6 +207,12 @@ def deploy():
pass
@crewai.group()
def tool():
"""Tool Repository related commands."""
pass
@deploy.command(name="create")
@click.option("-y", "--yes", is_flag=True, help="Skip the confirmation prompt")
def deploy_create(yes: bool):
@@ -249,5 +260,50 @@ def deploy_remove(uuid: Optional[str]):
deploy_cmd.remove_crew(uuid=uuid)
@tool.command(name="create")
@click.argument("handle")
def tool_create(handle: str):
tool_cmd = ToolCommand()
tool_cmd.create(handle)
@tool.command(name="install")
@click.argument("handle")
def tool_install(handle: str):
tool_cmd = ToolCommand()
tool_cmd.login()
tool_cmd.install(handle)
@tool.command(name="publish")
@click.option("--force", is_flag=True, show_default=True, default=False, help="Bypasses Git remote validations")
@click.option("--public", "is_public", flag_value=True, default=False)
@click.option("--private", "is_public", flag_value=False)
def tool_publish(is_public: bool, force: bool):
tool_cmd = ToolCommand()
tool_cmd.login()
tool_cmd.publish(is_public, force)
@crewai.group()
def flow():
"""Flow related commands."""
pass
@flow.command(name="run")
def flow_run():
"""Run the Flow."""
click.echo("Running the Flow")
run_flow()
@flow.command(name="plot")
def flow_plot():
"""Plot the Flow."""
click.echo("Plotting the Flow")
plot_flow()
if __name__ == "__main__":
crewai()
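Taken together, the CLI changes above add `flow` as a creation type and a new `flow` command group. A typical session might look like this (project name illustrative):

```bash
crewai create flow poem_flow   # scaffolds the flow project with a sample poem_crew
cd poem_flow
crewai flow run                # wraps `poetry run run_flow`
crewai flow plot               # wraps `poetry run plot_flow`
```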

src/crewai/cli/command.py (new file, 71 lines)
View File

@@ -0,0 +1,71 @@
import requests
from requests.exceptions import JSONDecodeError
from rich.console import Console
from crewai.cli.plus_api import PlusAPI
from crewai.cli.utils import get_auth_token
from crewai.telemetry.telemetry import Telemetry
console = Console()
class BaseCommand:
def __init__(self):
self._telemetry = Telemetry()
self._telemetry.set_tracer()
class PlusAPIMixin:
def __init__(self, telemetry):
try:
telemetry.set_tracer()
self.plus_api_client = PlusAPI(api_key=get_auth_token())
except Exception:
self._deploy_signup_error_span = telemetry.deploy_signup_error_span()
console.print(
"Please sign up/login to CrewAI+ before using the CLI.",
style="bold red",
)
console.print("Run 'crewai signup' to sign up/login.", style="bold green")
raise SystemExit
def _validate_response(self, response: requests.Response) -> None:
"""
Handle and display error messages from API responses.
Args:
response (requests.Response): The response from the Plus API
"""
try:
json_response = response.json()
except (JSONDecodeError, ValueError):
console.print(
"Failed to parse response from Enterprise API failed. Details:",
style="bold red",
)
console.print(f"Status Code: {response.status_code}")
console.print(f"Response:\n{response.content}")
raise SystemExit
if response.status_code == 422:
console.print(
"Failed to complete operation. Please fix the following errors:",
style="bold red",
)
for field, messages in json_response.items():
for message in messages:
console.print(
f"* [bold red]{field.capitalize()}[/bold red] {message}"
)
raise SystemExit
if not response.ok:
console.print(
"Request to Enterprise API failed. Details:", style="bold red"
)
details = (
json_response.get("error")
or json_response.get("message")
or response.content
)
console.print(f"{details}")
raise SystemExit

View File

@@ -21,6 +21,7 @@ def create_crew(name, parent_folder=None):
bold=True,
)
# Create necessary directories
if not folder_path.exists():
folder_path.mkdir(parents=True)
(folder_path / "tests").mkdir(exist_ok=True)
@@ -28,14 +29,47 @@ def create_crew(name, parent_folder=None):
(folder_path / "src" / folder_name).mkdir(parents=True)
(folder_path / "src" / folder_name / "tools").mkdir(parents=True)
(folder_path / "src" / folder_name / "config").mkdir(parents=True)
with open(folder_path / ".env", "w") as file:
file.write("OPENAI_API_KEY=YOUR_API_KEY")
else:
click.secho(
f"\tFolder {folder_name} already exists. Please choose a different name.",
fg="red",
f"\tFolder {folder_name} already exists. Updating .env file...",
fg="yellow",
)
return
# Path to the .env file
env_file_path = folder_path / ".env"
# Load existing environment variables if .env exists
env_vars = {}
if env_file_path.exists():
with open(env_file_path, "r") as file:
for line in file:
key_value = line.strip().split('=', 1)
if len(key_value) == 2:
env_vars[key_value[0]] = key_value[1]
# Prompt for keys/variables/LLM settings only if not already set
if 'OPENAI_API_KEY' not in env_vars:
if click.confirm("Do you want to enter your OPENAI_API_KEY?", default=True):
env_vars['OPENAI_API_KEY'] = click.prompt("Enter your OPENAI_API_KEY", type=str)
if 'ANTHROPIC_API_KEY' not in env_vars:
if click.confirm("Do you want to enter your ANTHROPIC_API_KEY?", default=False):
env_vars['ANTHROPIC_API_KEY'] = click.prompt("Enter your ANTHROPIC_API_KEY", type=str)
if 'GEMINI_API_KEY' not in env_vars:
if click.confirm("Do you want to specify your GEMINI_API_KEY?", default=True):
env_vars['GEMINI_API_KEY'] = click.prompt("Enter your GEMINI_API_KEY", type=str)
# Loop to add other environment variables
while click.confirm("Do you want to specify another environment variable?", default=False):
var_name = click.prompt("Enter the variable name", type=str)
var_value = click.prompt(f"Enter the value for {var_name}", type=str)
env_vars[var_name] = var_value
# Write the environment variables to .env file
with open(env_file_path, "w") as file:
for key, value in env_vars.items():
file.write(f"{key}={value}\n")
package_dir = Path(__file__).parent
templates_dir = package_dir / "templates" / "crew"

View File

@@ -0,0 +1,99 @@
from pathlib import Path
import click
from crewai.telemetry import Telemetry
def create_flow(name):
"""Create a new flow."""
folder_name = name.replace(" ", "_").replace("-", "_").lower()
class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "")
click.secho(f"Creating flow {folder_name}...", fg="green", bold=True)
project_root = Path(folder_name)
if project_root.exists():
click.secho(f"Error: Folder {folder_name} already exists.", fg="red")
return
# Initialize telemetry
telemetry = Telemetry()
telemetry.flow_creation_span(class_name)
# Create directory structure
(project_root / "src" / folder_name).mkdir(parents=True)
(project_root / "src" / folder_name / "crews").mkdir(parents=True)
(project_root / "src" / folder_name / "tools").mkdir(parents=True)
(project_root / "tests").mkdir(exist_ok=True)
# Create .env file
with open(project_root / ".env", "w") as file:
file.write("OPENAI_API_KEY=YOUR_API_KEY")
package_dir = Path(__file__).parent
templates_dir = package_dir / "templates" / "flow"
# List of template files to copy
root_template_files = [".gitignore", "pyproject.toml", "README.md"]
src_template_files = ["__init__.py", "main.py"]
tools_template_files = ["tools/__init__.py", "tools/custom_tool.py"]
crew_folders = [
"poem_crew",
]
def process_file(src_file, dst_file):
if src_file.suffix in [".pyc", ".pyo", ".pyd"]:
return
try:
with open(src_file, "r", encoding="utf-8") as file:
content = file.read()
except Exception as e:
click.secho(f"Error processing file {src_file}: {e}", fg="red")
return
content = content.replace("{{name}}", name)
content = content.replace("{{flow_name}}", class_name)
content = content.replace("{{folder_name}}", folder_name)
with open(dst_file, "w") as file:
file.write(content)
# Copy and process root template files
for file_name in root_template_files:
src_file = templates_dir / file_name
dst_file = project_root / file_name
process_file(src_file, dst_file)
# Copy and process src template files
for file_name in src_template_files:
src_file = templates_dir / file_name
dst_file = project_root / "src" / folder_name / file_name
process_file(src_file, dst_file)
# Copy tools files
for file_name in tools_template_files:
src_file = templates_dir / file_name
dst_file = project_root / "src" / folder_name / file_name
process_file(src_file, dst_file)
# Copy crew folders
for crew_folder in crew_folders:
src_crew_folder = templates_dir / "crews" / crew_folder
dst_crew_folder = project_root / "src" / folder_name / "crews" / crew_folder
if src_crew_folder.exists():
for src_file in src_crew_folder.rglob("*"):
if src_file.is_file():
relative_path = src_file.relative_to(src_crew_folder)
dst_file = dst_crew_folder / relative_path
dst_file.parent.mkdir(parents=True, exist_ok=True)
process_file(src_file, dst_file)
else:
click.secho(
f"Warning: Crew folder {crew_folder} not found in template.",
fg="yellow",
)
click.secho(f"Flow {name} created successfully!", fg="green", bold=True)

View File

@@ -1,66 +0,0 @@
from os import getenv
import requests
from crewai.cli.deploy.utils import get_crewai_version
class CrewAPI:
"""
CrewAPI class to interact with the crewAI+ API.
"""
def __init__(self, api_key: str) -> None:
self.api_key = api_key
self.headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
"User-Agent": f"CrewAI-CLI/{get_crewai_version()}",
}
self.base_url = getenv(
"CREWAI_BASE_URL", "https://crewai.com/crewai_plus/api/v1/crews"
)
def _make_request(self, method: str, endpoint: str, **kwargs) -> requests.Response:
url = f"{self.base_url}/{endpoint}"
return requests.request(method, url, headers=self.headers, **kwargs)
# Deploy
def deploy_by_name(self, project_name: str) -> requests.Response:
return self._make_request("POST", f"by-name/{project_name}/deploy")
def deploy_by_uuid(self, uuid: str) -> requests.Response:
return self._make_request("POST", f"{uuid}/deploy")
# Status
def status_by_name(self, project_name: str) -> requests.Response:
return self._make_request("GET", f"by-name/{project_name}/status")
def status_by_uuid(self, uuid: str) -> requests.Response:
return self._make_request("GET", f"{uuid}/status")
# Logs
def logs_by_name(
self, project_name: str, log_type: str = "deployment"
) -> requests.Response:
return self._make_request("GET", f"by-name/{project_name}/logs/{log_type}")
def logs_by_uuid(
self, uuid: str, log_type: str = "deployment"
) -> requests.Response:
return self._make_request("GET", f"{uuid}/logs/{log_type}")
# Delete
def delete_by_name(self, project_name: str) -> requests.Response:
return self._make_request("DELETE", f"by-name/{project_name}")
def delete_by_uuid(self, uuid: str) -> requests.Response:
return self._make_request("DELETE", f"{uuid}")
# List
def list_crews(self) -> requests.Response:
return self._make_request("GET", "")
# Create
def create_crew(self, payload) -> requests.Response:
return self._make_request("POST", "", json=payload)

View File

@@ -2,19 +2,14 @@ from typing import Any, Dict, List, Optional
from rich.console import Console
from crewai.telemetry import Telemetry
from .api import CrewAPI
from .utils import (
fetch_and_json_env_file,
get_auth_token,
get_git_remote_url,
get_project_name,
)
from crewai.cli import git
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli.utils import fetch_and_json_env_file, get_project_name
console = Console()
class DeployCommand:
class DeployCommand(BaseCommand, PlusAPIMixin):
"""
A class to handle deployment-related operations for CrewAI projects.
"""
@@ -23,40 +18,10 @@ class DeployCommand:
"""
Initialize the DeployCommand with project name and API client.
"""
try:
self._telemetry = Telemetry()
self._telemetry.set_tracer()
access_token = get_auth_token()
except Exception:
self._deploy_signup_error_span = self._telemetry.deploy_signup_error_span()
console.print(
"Please sign up/login to CrewAI+ before using the CLI.",
style="bold red",
)
console.print("Run 'crewai signup' to sign up/login.", style="bold green")
raise SystemExit
self.project_name = get_project_name()
if self.project_name is None:
console.print(
"No project name found. Please ensure your project has a valid pyproject.toml file.",
style="bold red",
)
raise SystemExit
self.client = CrewAPI(api_key=access_token)
def _handle_error(self, json_response: Dict[str, Any]) -> None:
"""
Handle and display error messages from API responses.
Args:
json_response (Dict[str, Any]): The JSON response containing error information.
"""
error = json_response.get("error", "Unknown error")
message = json_response.get("message", "No message provided")
console.print(f"Error: {error}", style="bold red")
console.print(f"Message: {message}", style="bold red")
BaseCommand.__init__(self)
PlusAPIMixin.__init__(self, telemetry=self._telemetry)
self.project_name = get_project_name(require=True)
def _standard_no_param_error_message(self) -> None:
"""
@@ -104,20 +69,17 @@ class DeployCommand:
self._start_deployment_span = self._telemetry.start_deployment_span(uuid)
console.print("Starting deployment...", style="bold blue")
if uuid:
response = self.client.deploy_by_uuid(uuid)
response = self.plus_api_client.deploy_by_uuid(uuid)
elif self.project_name:
response = self.client.deploy_by_name(self.project_name)
response = self.plus_api_client.deploy_by_name(self.project_name)
else:
self._standard_no_param_error_message()
return
json_response = response.json()
if response.status_code == 200:
self._display_deployment_info(json_response)
else:
self._handle_error(json_response)
self._validate_response(response)
self._display_deployment_info(response.json())
def create_crew(self, confirm: bool) -> None:
def create_crew(self, confirm: bool = False) -> None:
"""
Create a new crew deployment.
"""
@@ -126,7 +88,11 @@ class DeployCommand:
)
console.print("Creating deployment...", style="bold blue")
env_vars = fetch_and_json_env_file()
remote_repo_url = get_git_remote_url()
try:
remote_repo_url = git.Repository().origin_url()
except ValueError:
remote_repo_url = None
if remote_repo_url is None:
console.print("No remote repository URL found.", style="bold red")
@@ -138,12 +104,10 @@ class DeployCommand:
self._confirm_input(env_vars, remote_repo_url, confirm)
payload = self._create_payload(env_vars, remote_repo_url)
response = self.plus_api_client.create_crew(payload)
response = self.client.create_crew(payload)
if response.status_code == 201:
self._display_creation_success(response.json())
else:
self._handle_error(response.json())
self._validate_response(response)
self._display_creation_success(response.json())
def _confirm_input(
self, env_vars: Dict[str, str], remote_repo_url: str, confirm: bool
@@ -208,7 +172,7 @@ class DeployCommand:
"""
console.print("Listing all Crews\n", style="bold blue")
response = self.client.list_crews()
response = self.plus_api_client.list_crews()
json_response = response.json()
if response.status_code == 200:
self._display_crews(json_response)
@@ -243,18 +207,15 @@ class DeployCommand:
"""
console.print("Fetching deployment status...", style="bold blue")
if uuid:
response = self.client.status_by_uuid(uuid)
response = self.plus_api_client.crew_status_by_uuid(uuid)
elif self.project_name:
response = self.client.status_by_name(self.project_name)
response = self.plus_api_client.crew_status_by_name(self.project_name)
else:
self._standard_no_param_error_message()
return
json_response = response.json()
if response.status_code == 200:
self._display_crew_status(json_response)
else:
self._handle_error(json_response)
self._validate_response(response)
self._display_crew_status(response.json())
def _display_crew_status(self, status_data: Dict[str, str]) -> None:
"""
@@ -278,17 +239,15 @@ class DeployCommand:
console.print(f"Fetching {log_type} logs...", style="bold blue")
if uuid:
response = self.client.logs_by_uuid(uuid, log_type)
response = self.plus_api_client.crew_by_uuid(uuid, log_type)
elif self.project_name:
response = self.client.logs_by_name(self.project_name, log_type)
response = self.plus_api_client.crew_by_name(self.project_name, log_type)
else:
self._standard_no_param_error_message()
return
if response.status_code == 200:
self._display_logs(response.json())
else:
self._handle_error(response.json())
self._validate_response(response)
self._display_logs(response.json())
def remove_crew(self, uuid: Optional[str]) -> None:
"""
@@ -301,9 +260,9 @@ class DeployCommand:
console.print("Removing deployment...", style="bold blue")
if uuid:
response = self.client.delete_by_uuid(uuid)
response = self.plus_api_client.delete_crew_by_uuid(uuid)
elif self.project_name:
response = self.client.delete_by_name(self.project_name)
response = self.plus_api_client.delete_crew_by_name(self.project_name)
else:
self._standard_no_param_error_message()
return

View File

@@ -1,155 +0,0 @@
import sys
import re
import subprocess
from rich.console import Console
from ..authentication.utils import TokenManager
console = Console()
if sys.version_info >= (3, 11):
import tomllib
# Drop the simple_toml_parser when we move to python3.11
def simple_toml_parser(content):
result = {}
current_section = result
for line in content.split('\n'):
line = line.strip()
if line.startswith('[') and line.endswith(']'):
# New section
section = line[1:-1].split('.')
current_section = result
for key in section:
current_section = current_section.setdefault(key, {})
elif '=' in line:
key, value = line.split('=', 1)
key = key.strip()
value = value.strip().strip('"')
current_section[key] = value
return result
def parse_toml(content):
if sys.version_info >= (3, 11):
return tomllib.loads(content)
else:
return simple_toml_parser(content)
def get_git_remote_url() -> str | None:
"""Get the Git repository's remote URL."""
try:
# Run the git remote -v command
result = subprocess.run(
["git", "remote", "-v"], capture_output=True, text=True, check=True
)
# Get the output
output = result.stdout
# Parse the output to find the origin URL
matches = re.findall(r"origin\s+(.*?)\s+\(fetch\)", output)
if matches:
return matches[0] # Return the first match (origin URL)
else:
console.print("No origin remote found.", style="bold red")
except subprocess.CalledProcessError as e:
console.print(f"Error running trying to fetch the Git Repository: {e}", style="bold red")
except FileNotFoundError:
console.print("Git command not found. Make sure Git is installed and in your PATH.", style="bold red")
return None
def get_project_name(pyproject_path: str = "pyproject.toml") -> str | None:
"""Get the project name from the pyproject.toml file."""
try:
# Read the pyproject.toml file
with open(pyproject_path, "r") as f:
pyproject_content = parse_toml(f.read())
# Extract the project name
project_name = pyproject_content["tool"]["poetry"]["name"]
if "crewai" not in pyproject_content["tool"]["poetry"]["dependencies"]:
raise Exception("crewai is not in the dependencies.")
return project_name
except FileNotFoundError:
print(f"Error: {pyproject_path} not found.")
except KeyError:
print(f"Error: {pyproject_path} is not a valid pyproject.toml file.")
except tomllib.TOMLDecodeError if sys.version_info >= (3, 11) else Exception as e: # type: ignore
print(
f"Error: {pyproject_path} is not a valid TOML file."
if sys.version_info >= (3, 11)
else f"Error reading the pyproject.toml file: {e}"
)
except Exception as e:
print(f"Error reading the pyproject.toml file: {e}")
return None
def get_crewai_version(poetry_lock_path: str = "poetry.lock") -> str:
"""Get the version number of crewai from the poetry.lock file."""
try:
with open(poetry_lock_path, "r") as f:
lock_content = f.read()
match = re.search(
r'\[\[package\]\]\s*name\s*=\s*"crewai"\s*version\s*=\s*"([^"]+)"',
lock_content,
re.DOTALL,
)
if match:
return match.group(1)
else:
print("crewai package not found in poetry.lock")
return "no-version-found"
except FileNotFoundError:
print(f"Error: {poetry_lock_path} not found.")
except Exception as e:
print(f"Error reading the poetry.lock file: {e}")
return "no-version-found"
def fetch_and_json_env_file(env_file_path: str = ".env") -> dict:
"""Fetch the environment variables from a .env file and return them as a dictionary."""
try:
# Read the .env file
with open(env_file_path, "r") as f:
env_content = f.read()
# Parse the .env file content to a dictionary
env_dict = {}
for line in env_content.splitlines():
if line.strip() and not line.strip().startswith("#"):
key, value = line.split("=", 1)
env_dict[key.strip()] = value.strip()
return env_dict
except FileNotFoundError:
print(f"Error: {env_file_path} not found.")
except Exception as e:
print(f"Error reading the .env file: {e}")
return {}
def get_auth_token() -> str:
"""Get the authentication token."""
access_token = TokenManager().get_token()
if not access_token:
raise Exception()
return access_token

src/crewai/cli/git.py (new file, 80 lines)
View File

@@ -0,0 +1,80 @@
import subprocess
class Repository:
def __init__(self, path="."):
self.path = path
if not self.is_git_installed():
raise ValueError("Git is not installed or not found in your PATH.")
if not self.is_git_repo():
raise ValueError(f"{self.path} is not a Git repository.")
self.fetch()
def is_git_installed(self) -> bool:
"""Check if Git is installed and available in the system."""
try:
subprocess.run(
["git", "--version"], capture_output=True, check=True, text=True
)
return True
except (subprocess.CalledProcessError, FileNotFoundError):
return False
def fetch(self) -> None:
"""Fetch latest updates from the remote."""
subprocess.run(["git", "fetch"], cwd=self.path, check=True)
def status(self) -> str:
"""Get the git status in porcelain format."""
return subprocess.check_output(
["git", "status", "--branch", "--porcelain"],
cwd=self.path,
encoding="utf-8",
).strip()
def is_git_repo(self) -> bool:
"""Check if the current directory is a git repository."""
try:
subprocess.check_output(
["git", "rev-parse", "--is-inside-work-tree"],
cwd=self.path,
encoding="utf-8",
)
return True
except subprocess.CalledProcessError:
return False
def has_uncommitted_changes(self) -> bool:
"""Check if the repository has uncommitted changes."""
return len(self.status().splitlines()) > 1
def is_ahead_or_behind(self) -> bool:
"""Check if the repository is ahead or behind the remote."""
for line in self.status().splitlines():
if line.startswith("##") and ("ahead" in line or "behind" in line):
return True
return False
def is_synced(self) -> bool:
"""Return True if the Git repository is fully synced with the remote, False otherwise."""
if self.has_uncommitted_changes() or self.is_ahead_or_behind():
return False
else:
return True
def origin_url(self) -> str | None:
"""Get the Git repository's remote URL."""
try:
result = subprocess.run(
["git", "remote", "get-url", "origin"],
cwd=self.path,
capture_output=True,
text=True,
check=True,
)
return result.stdout.strip()
except subprocess.CalledProcessError:
return None
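A short usage sketch of the new `Repository` helper, mirroring how `ToolCommand.publish` uses it further below:

```python
from crewai.cli.git import Repository

# Raises ValueError if Git is not installed or "." is not a Git repository;
# the constructor also fetches from the remote up front.
repo = Repository(path=".")

if not repo.is_synced():
    # Either uncommitted changes exist, or the branch is ahead of/behind origin.
    print(f"Sync with {repo.origin_url()} before publishing.")
```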

View File

@@ -0,0 +1,23 @@
import subprocess
import click
def plot_flow() -> None:
"""
Plot the flow by running a command in the Poetry environment.
"""
command = ["poetry", "run", "plot_flow"]
try:
result = subprocess.run(command, capture_output=False, text=True, check=True)
if result.stderr:
click.echo(result.stderr, err=True)
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while plotting the flow: {e}", err=True)
click.echo(e.output, err=True)
except Exception as e:
click.echo(f"An unexpected error occurred: {e}", err=True)

View File

@@ -0,0 +1,95 @@
from typing import Optional
import requests
from os import getenv
from crewai.cli.utils import get_crewai_version
from urllib.parse import urljoin
class PlusAPI:
"""
This class exposes methods for working with the CrewAI+ API.
"""
TOOLS_RESOURCE = "/crewai_plus/api/v1/tools"
CREWS_RESOURCE = "/crewai_plus/api/v1/crews"
def __init__(self, api_key: str) -> None:
self.api_key = api_key
self.headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
"User-Agent": f"CrewAI-CLI/{get_crewai_version()}",
"X-Crewai-Version": get_crewai_version(),
}
self.base_url = getenv("CREWAI_BASE_URL", "https://app.crewai.com")
def _make_request(self, method: str, endpoint: str, **kwargs) -> requests.Response:
url = urljoin(self.base_url, endpoint)
return requests.request(method, url, headers=self.headers, **kwargs)
def login_to_tool_repository(self):
return self._make_request("POST", f"{self.TOOLS_RESOURCE}/login")
def get_tool(self, handle: str):
return self._make_request("GET", f"{self.TOOLS_RESOURCE}/{handle}")
def publish_tool(
self,
handle: str,
is_public: bool,
version: str,
description: Optional[str],
encoded_file: str,
):
params = {
"handle": handle,
"public": is_public,
"version": version,
"file": encoded_file,
"description": description,
}
return self._make_request("POST", f"{self.TOOLS_RESOURCE}", json=params)
def deploy_by_name(self, project_name: str) -> requests.Response:
return self._make_request(
"POST", f"{self.CREWS_RESOURCE}/by-name/{project_name}/deploy"
)
def deploy_by_uuid(self, uuid: str) -> requests.Response:
return self._make_request("POST", f"{self.CREWS_RESOURCE}/{uuid}/deploy")
def crew_status_by_name(self, project_name: str) -> requests.Response:
return self._make_request(
"GET", f"{self.CREWS_RESOURCE}/by-name/{project_name}/status"
)
def crew_status_by_uuid(self, uuid: str) -> requests.Response:
return self._make_request("GET", f"{self.CREWS_RESOURCE}/{uuid}/status")
def crew_by_name(
self, project_name: str, log_type: str = "deployment"
) -> requests.Response:
return self._make_request(
"GET", f"{self.CREWS_RESOURCE}/by-name/{project_name}/logs/{log_type}"
)
def crew_by_uuid(
self, uuid: str, log_type: str = "deployment"
) -> requests.Response:
return self._make_request(
"GET", f"{self.CREWS_RESOURCE}/{uuid}/logs/{log_type}"
)
def delete_crew_by_name(self, project_name: str) -> requests.Response:
return self._make_request(
"DELETE", f"{self.CREWS_RESOURCE}/by-name/{project_name}"
)
def delete_crew_by_uuid(self, uuid: str) -> requests.Response:
return self._make_request("DELETE", f"{self.CREWS_RESOURCE}/{uuid}")
def list_crews(self) -> requests.Response:
return self._make_request("GET", self.CREWS_RESOURCE)
def create_crew(self, payload) -> requests.Response:
return self._make_request("POST", self.CREWS_RESOURCE, json=payload)

View File

@@ -0,0 +1,23 @@
import subprocess
import click
def run_flow() -> None:
"""
Run the flow by running a command in the Poetry environment.
"""
command = ["poetry", "run", "run_flow"]
try:
result = subprocess.run(command, capture_output=False, text=True, check=True)
if result.stderr:
click.echo(result.stderr, err=True)
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while running the flow: {e}", err=True)
click.echo(e.output, err=True)
except Exception as e:
click.echo(f"An unexpected error occurred: {e}", err=True)

View File

@@ -10,8 +10,6 @@ from crewai.project import CrewBase, agent, crew, task
@CrewBase
class {{crew_name}}Crew():
"""{{crew_name}} crew"""
agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml'
@agent
def researcher(self) -> Agent:

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.55.2,<1.0.0" }
crewai = { extras = ["tools"], version = ">=0.67.1,<1.0.0" }
[tool.poetry.scripts]

View File

@@ -0,0 +1,3 @@
.env
__pycache__/
lib/

View File

@@ -0,0 +1,57 @@
# {{crew_name}} Crew
Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.com). This template is designed to help you set up a multi-agent AI system with ease, leveraging the powerful and flexible framework provided by crewAI. Our goal is to enable your agents to collaborate effectively on complex tasks, maximizing their collective intelligence and capabilities.
## Installation
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install Poetry:
```bash
pip install poetry
```
Next, navigate to your project directory, then lock and install the dependencies:
```bash
crewai install
```
### Customizing
**Add your `OPENAI_API_KEY` into the `.env` file**
- Modify `src/{{folder_name}}/config/agents.yaml` to define your agents
- Modify `src/{{folder_name}}/config/tasks.yaml` to define your tasks
- Modify `src/{{folder_name}}/crew.py` to add your own logic, tools and specific args
- Modify `src/{{folder_name}}/main.py` to add custom inputs for your agents and tasks
## Running the Project
To kickstart your crew of AI agents and begin task execution, run this from the root folder of your project:
```bash
crewai run
```
This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration.
This example, unmodified, will create a `report.md` file in the root folder with the output of research on LLMs.
## Understanding Your Crew
The {{name}} Crew is composed of multiple AI agents, each with unique roles, goals, and tools. These agents collaborate on a series of tasks, defined in `config/tasks.yaml`, leveraging their collective skills to achieve complex objectives. The `config/agents.yaml` file outlines the capabilities and configurations of each agent in your crew.
## Support
For support, questions, or feedback regarding the {{crew_name}} Crew or crewAI.
- Visit our [documentation](https://docs.crewai.com)
- Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai)
- [Join our Discord](https://discord.com/invite/X4JWnZnxPb)
- [Chat with our docs](https://chatg.pt/DWjSBZn)
Let's create wonders together with the power and simplicity of crewAI.

View File

@@ -0,0 +1,11 @@
poem_writer:
role: >
CrewAI Poem Writer
goal: >
Generate a funny, light-hearted poem about how CrewAI
is awesome with a sentence count of {sentence_count}
backstory: >
You're a creative poet with a talent for capturing the essence of any topic
in a beautiful and engaging way. Known for your ability to craft poems that
resonate with readers, you bring a unique perspective and artistic flair to
every piece you write.

View File

@@ -0,0 +1,7 @@
write_poem:
description: >
Write a poem about how CrewAI is awesome.
Ensure the poem is engaging and adheres to the specified sentence count of {sentence_count}.
expected_output: >
A beautifully crafted poem about CrewAI, with exactly {sentence_count} sentences.
agent: poem_writer

View File

@@ -0,0 +1,31 @@
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
@CrewBase
class PoemCrew():
"""Poem Crew"""
agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml'
@agent
def poem_writer(self) -> Agent:
return Agent(
config=self.agents_config['poem_writer'],
)
@task
def write_poem(self) -> Task:
return Task(
config=self.tasks_config['write_poem'],
)
@crew
def crew(self) -> Crew:
"""Creates the Research Crew"""
return Crew(
agents=self.agents, # Automatically created by the @agent decorator
tasks=self.tasks, # Automatically created by the @task decorator
process=Process.sequential,
verbose=True,
)

View File

@@ -0,0 +1,65 @@
#!/usr/bin/env python
import asyncio
from random import randint
from pydantic import BaseModel
from crewai.flow.flow import Flow, listen, start
from .crews.poem_crew.poem_crew import PoemCrew
class PoemState(BaseModel):
sentence_count: int = 1
poem: str = ""
class PoemFlow(Flow[PoemState]):
@start()
def generate_sentence_count(self):
print("Generating sentence count")
# Generate a number between 1 and 5
self.state.sentence_count = randint(1, 5)
@listen(generate_sentence_count)
def generate_poem(self):
print("Generating poem")
print(f"State before poem: {self.state}")
result = PoemCrew().crew().kickoff(inputs={"sentence_count": self.state.sentence_count})
print("Poem generated", result.raw)
self.state.poem = result.raw
print(f"State after generate_poem: {self.state}")
@listen(generate_poem)
def save_poem(self):
print("Saving poem")
print(f"State before save_poem: {self.state}")
with open("poem.txt", "w") as f:
f.write(self.state.poem)
print(f"State after save_poem: {self.state}")
async def run_flow():
"""
Run the flow.
"""
poem_flow = PoemFlow()
await poem_flow.kickoff()
async def plot_flow():
"""
Plot the flow.
"""
poem_flow = PoemFlow()
poem_flow.plot()
def main():
asyncio.run(run_flow())
def plot():
asyncio.run(plot_flow())
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,19 @@
[tool.poetry]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.67.1,<1.0.0" }
asyncio = "*"
[tool.poetry.scripts]
{{folder_name}} = "{{folder_name}}.main:main"
run_flow = "{{folder_name}}.main:main"
plot_flow = "{{folder_name}}.main:plot"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

View File

@@ -0,0 +1,12 @@
from crewai_tools import BaseTool
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = (
"Clear description for what this tool is useful for, you agent will need this information to use it."
)
def _run(self, argument: str) -> str:
# Implementation goes here
return "this is an example of a tool output, ignore it and move along."

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.55.2,<1.0.0" }
crewai = { extras = ["tools"], version = ">=0.67.1,<1.0.0" }
asyncio = "*"
[tool.poetry.scripts]

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.55.2,<1.0.0" }
crewai = { extras = ["tools"], version = ">=0.67.1,<1.0.0" }
[tool.poetry.scripts]

View File

@@ -0,0 +1,48 @@
# {{folder_name}}
{{folder_name}} is a CrewAI Tool. This template is designed to help you create
custom tools to power up your crews.
## Installing
Ensure you have Python >=3.10 <=3.13 installed on your system. This project
uses [Poetry](https://python-poetry.org/) for dependency management and package
handling, offering a seamless setup and execution experience.
First, if you haven't already, install Poetry:
```bash
pip install poetry
```
Next, navigate to your project directory and install the dependencies with:
```bash
crewai install
```
## Publishing
Collaborate by sharing tools within your organization, or publish them publicly
to contribute to the community.
```bash
crewai tool publish {{tool_name}}
```
Others may install your tool in their crews running:
```bash
crewai tool install {{tool_name}}
```
## Support
For support, questions, or feedback regarding the {{crew_name}} tool or CrewAI:
- Visit our [documentation](https://docs.crewai.com)
- Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai)
- [Join our Discord](https://discord.com/invite/X4JWnZnxPb)
- [Chat with our docs](https://chatg.pt/DWjSBZn)
Let's create wonders together with the power and simplicity of crewAI.

View File

@@ -0,0 +1,14 @@
[tool.poetry]
name = "{{folder_name}}"
version = "0.1.0"
description = "Power up your crews with {{folder_name}}"
authors = ["Your Name <you@example.com>"]
readme = "README.md"
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.64.0,<1.0.0" }
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

View File

@@ -0,0 +1,9 @@
from crewai_tools import BaseTool
class {{class_name}}(BaseTool):
name: str = "Name of my tool"
description: str = "What this tool does. It's vital for effective utilization."
def _run(self, argument: str) -> str:
# Your tool's logic here
return "Tool's result"

View File


@@ -0,0 +1,229 @@
import base64
from pathlib import Path
import click
import os
import subprocess
import tempfile
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli import git
from crewai.cli.utils import (
get_project_name,
get_project_description,
get_project_version,
tree_copy,
tree_find_and_replace,
)
from rich.console import Console
console = Console()
class ToolCommand(BaseCommand, PlusAPIMixin):
"""
A class to handle tool repository related operations for CrewAI projects.
"""
def __init__(self):
BaseCommand.__init__(self)
PlusAPIMixin.__init__(self, telemetry=self._telemetry)
def create(self, handle: str):
self._ensure_not_in_project()
folder_name = handle.replace(" ", "_").replace("-", "_").lower()
class_name = handle.replace("_", " ").replace("-", " ").title().replace(" ", "")
project_root = Path(folder_name)
if project_root.exists():
click.secho(f"Folder {folder_name} already exists.", fg="red")
raise SystemExit
else:
os.makedirs(project_root)
click.secho(f"Creating custom tool {folder_name}...", fg="green", bold=True)
template_dir = Path(__file__).parent.parent / "templates" / "tool"
tree_copy(template_dir, project_root)
tree_find_and_replace(project_root, "{{folder_name}}", folder_name)
tree_find_and_replace(project_root, "{{class_name}}", class_name)
old_directory = os.getcwd()
os.chdir(project_root)
try:
self.login()
subprocess.run(["git", "init"], check=True)
console.print(
f"[green]Created custom tool [bold]{folder_name}[/bold]. Run [bold]cd {project_root}[/bold] to start working.[/green]"
)
finally:
os.chdir(old_directory)
def publish(self, is_public: bool, force: bool = False):
if not git.Repository().is_synced() and not force:
console.print(
"[bold red]Failed to publish tool.[/bold red]\n"
"Local changes need to be resolved before publishing. Please do the following:\n"
"* [bold]Commit[/bold] your changes.\n"
"* [bold]Push[/bold] to sync with the remote.\n"
"* [bold]Pull[/bold] the latest changes from the remote.\n"
"\nOnce your repository is up-to-date, retry publishing the tool."
)
raise SystemExit()
project_name = get_project_name(require=True)
assert isinstance(project_name, str)
project_version = get_project_version(require=True)
assert isinstance(project_version, str)
project_description = get_project_description(require=False)
encoded_tarball = None
with tempfile.TemporaryDirectory() as temp_build_dir:
subprocess.run(
["poetry", "build", "-f", "sdist", "--output", temp_build_dir],
check=True,
capture_output=False,
)
tarball_filename = next(
(f for f in os.listdir(temp_build_dir) if f.endswith(".tar.gz")), None
)
if not tarball_filename:
console.print(
"Project build failed. Please ensure that the command `poetry build -f sdist` completes successfully.",
style="bold red",
)
raise SystemExit
tarball_path = os.path.join(temp_build_dir, tarball_filename)
with open(tarball_path, "rb") as file:
tarball_contents = file.read()
encoded_tarball = base64.b64encode(tarball_contents).decode("utf-8")
publish_response = self.plus_api_client.publish_tool(
handle=project_name,
is_public=is_public,
version=project_version,
description=project_description,
encoded_file=f"data:application/x-gzip;base64,{encoded_tarball}",
)
self._validate_response(publish_response)
published_handle = publish_response.json()["handle"]
console.print(
f"Succesfully published {published_handle} ({project_version}).\nInstall it in other projects with crewai tool install {published_handle}",
style="bold green",
)
def install(self, handle: str):
get_response = self.plus_api_client.get_tool(handle)
if get_response.status_code == 404:
console.print(
"No tool found with this name. Please ensure the tool was published and you have access to it.",
style="bold red",
)
raise SystemExit
elif get_response.status_code != 200:
console.print(
"Failed to get tool details. Please try again later.", style="bold red"
)
raise SystemExit
self._add_package(get_response.json())
console.print(f"Succesfully installed {handle}", style="bold green")
def login(self):
login_response = self.plus_api_client.login_to_tool_repository()
if login_response.status_code != 200:
console.print(
"Failed to authenticate to the tool repository. Make sure you have the access to tools.",
style="bold red",
)
raise SystemExit
login_response_json = login_response.json()
for repository in login_response_json["repositories"]:
self._add_repository_to_poetry(
repository, login_response_json["credential"]
)
console.print(
"Succesfully authenticated to the tool repository.", style="bold green"
)
def _add_repository_to_poetry(self, repository, credentials):
repository_handle = f"crewai-{repository['handle']}"
add_repository_command = [
"poetry",
"source",
"add",
"--priority=explicit",
repository_handle,
repository["url"],
]
add_repository_result = subprocess.run(
add_repository_command, text=True, check=True
)
if add_repository_result.stderr:
click.echo(add_repository_result.stderr, err=True)
raise SystemExit
add_repository_credentials_command = [
"poetry",
"config",
f"http-basic.{repository_handle}",
credentials["username"],
credentials["password"],
]
add_repository_credentials_result = subprocess.run(
add_repository_credentials_command,
capture_output=False,
text=True,
check=True,
)
if add_repository_credentials_result.stderr:
click.echo(add_repository_credentials_result.stderr, err=True)
raise SystemExit
def _add_package(self, tool_details):
tool_handle = tool_details["handle"]
repository_handle = tool_details["repository"]["handle"]
pypi_index_handle = f"crewai-{repository_handle}"
add_package_command = [
"poetry",
"add",
"--source",
pypi_index_handle,
tool_handle,
]
add_package_result = subprocess.run(
add_package_command, capture_output=False, text=True, check=True
)
if add_package_result.stderr:
click.echo(add_package_result.stderr, err=True)
raise SystemExit
def _ensure_not_in_project(self):
if os.path.isfile("./pyproject.toml"):
console.print(
"[bold red]Oops! It looks like you're inside a project.[/bold red]"
)
console.print(
"You can't create a new tool while inside an existing project."
)
console.print(
"[bold yellow]Tip:[/bold yellow] Navigate to a different directory and try again."
)
raise SystemExit
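End to end, the lifecycle these methods implement looks like this from the CLI (handle illustrative):

```bash
crewai tool create my_tool     # scaffold the project, git init, and log in to the tool repository
cd my_tool
# ...implement the tool, commit, and push so the repo is synced with origin...
crewai tool publish --public   # builds an sdist and uploads it; --force skips the Git sync check
crewai tool install my_tool    # in another project: adds the Poetry source and installs the package
```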

View File

@@ -1,4 +1,18 @@
import os
import shutil
import click
import sys
import importlib.metadata
from crewai.cli.authentication.utils import TokenManager
from functools import reduce
from rich.console import Console
from typing import Any, Dict, List
if sys.version_info >= (3, 11):
import tomllib
console = Console()
def copy_template(src, dst, name, class_name, folder_name):
@@ -16,3 +30,176 @@ def copy_template(src, dst, name, class_name, folder_name):
file.write(content)
click.secho(f" - Created {dst}", fg="green")
# Drop the simple_toml_parser when we move to python3.11
def simple_toml_parser(content):
result = {}
current_section = result
for line in content.split("\n"):
line = line.strip()
if line.startswith("[") and line.endswith("]"):
# New section
section = line[1:-1].split(".")
current_section = result
for key in section:
current_section = current_section.setdefault(key, {})
elif "=" in line:
key, value = line.split("=", 1)
key = key.strip()
value = value.strip().strip('"')
current_section[key] = value
return result
def parse_toml(content):
if sys.version_info >= (3, 11):
return tomllib.loads(content)
else:
return simple_toml_parser(content)
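
A small usage sketch: both branches return the same nested dict for the flat key/value subset that the pyproject lookups below need (the fallback parser only handles quoted scalar values, not arrays or inline tables):

sample = '''
[tool.poetry]
name = "my-tool"
version = "0.1.0"
'''
parse_toml(sample)
# -> {"tool": {"poetry": {"name": "my-tool", "version": "0.1.0"}}}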
def get_project_name(
pyproject_path: str = "pyproject.toml", require: bool = False
) -> str | None:
"""Get the project name from the pyproject.toml file."""
return _get_project_attribute(
pyproject_path, ["tool", "poetry", "name"], require=require
)
def get_project_version(
pyproject_path: str = "pyproject.toml", require: bool = False
) -> str | None:
"""Get the project version from the pyproject.toml file."""
return _get_project_attribute(
pyproject_path, ["tool", "poetry", "version"], require=require
)
def get_project_description(
pyproject_path: str = "pyproject.toml", require: bool = False
) -> str | None:
"""Get the project description from the pyproject.toml file."""
return _get_project_attribute(
pyproject_path, ["tool", "poetry", "description"], require=require
)
def _get_project_attribute(
pyproject_path: str, keys: List[str], require: bool
) -> Any | None:
"""Get an attribute from the pyproject.toml file."""
attribute = None
try:
with open(pyproject_path, "r") as f:
pyproject_content = parse_toml(f.read())
dependencies = (
_get_nested_value(pyproject_content, ["tool", "poetry", "dependencies"])
or {}
)
if "crewai" not in dependencies:
raise Exception("crewai is not in the dependencies.")
attribute = _get_nested_value(pyproject_content, keys)
except FileNotFoundError:
print(f"Error: {pyproject_path} not found.")
except KeyError:
print(f"Error: {pyproject_path} is not a valid pyproject.toml file.")
except tomllib.TOMLDecodeError if sys.version_info >= (3, 11) else Exception as e: # type: ignore
print(
f"Error: {pyproject_path} is not a valid TOML file."
if sys.version_info >= (3, 11)
else f"Error reading the pyproject.toml file: {e}"
)
except Exception as e:
print(f"Error reading the pyproject.toml file: {e}")
if require and not attribute:
console.print(
f"Unable to read '{'.'.join(keys)}' in the pyproject.toml file. Please verify that the file exists and contains the specified attribute.",
style="bold red",
)
raise SystemExit
return attribute
def _get_nested_value(data: Dict[str, Any], keys: List[str]) -> Any:
return reduce(dict.__getitem__, keys, data)
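
For example, the reduce-based lookup walks nested dicts and raises KeyError on a missing key, which _get_project_attribute catches above:

data = {"tool": {"poetry": {"name": "my-tool"}}}
_get_nested_value(data, ["tool", "poetry", "name"])  # -> "my-tool"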
def get_crewai_version() -> str:
"""Get the version number of CrewAI running the CLI"""
return importlib.metadata.version("crewai")
def fetch_and_json_env_file(env_file_path: str = ".env") -> dict:
"""Fetch the environment variables from a .env file and return them as a dictionary."""
try:
# Read the .env file
with open(env_file_path, "r") as f:
env_content = f.read()
# Parse the .env file content to a dictionary
env_dict = {}
for line in env_content.splitlines():
if line.strip() and not line.strip().startswith("#"):
key, value = line.split("=", 1)
env_dict[key.strip()] = value.strip()
return env_dict
except FileNotFoundError:
print(f"Error: {env_file_path} not found.")
except Exception as e:
print(f"Error reading the .env file: {e}")
return {}
def get_auth_token() -> str:
"""Get the authentication token."""
access_token = TokenManager().get_token()
if not access_token:
raise Exception()
return access_token
def tree_copy(source, destination):
"""Copies the entire directory structure from the source to the destination."""
for item in os.listdir(source):
source_item = os.path.join(source, item)
destination_item = os.path.join(destination, item)
if os.path.isdir(source_item):
shutil.copytree(source_item, destination_item)
else:
shutil.copy2(source_item, destination_item)
def tree_find_and_replace(directory, find, replace):
"""Recursively searches through a directory, replacing a target string in
both file contents and filenames with a specified replacement string.
"""
for path, dirs, files in os.walk(os.path.abspath(directory), topdown=False):
for filename in files:
filepath = os.path.join(path, filename)
with open(filepath, "r") as file:
contents = file.read()
with open(filepath, "w") as file:
file.write(contents.replace(find, replace))
if find in filename:
new_filename = filename.replace(find, replace)
new_filepath = os.path.join(path, new_filename)
os.rename(filepath, new_filepath)
for dirname in dirs:
if find in dirname:
new_dirname = dirname.replace(find, replace)
new_dirpath = os.path.join(path, new_dirname)
old_dirpath = os.path.join(path, dirname)
os.rename(old_dirpath, new_dirpath)
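
A hypothetical scaffolding sequence using the two helpers (the paths and placeholder tokens are illustrative, not the CLI's actual template values):

tree_copy("templates/tool", "my_tool")
tree_find_and_replace("my_tool", "{{folder_name}}", "my_tool")
tree_find_and_replace("my_tool", "{{class_name}}", "MyTool")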

View File

@@ -6,7 +6,6 @@ from concurrent.futures import Future
from hashlib import md5
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
from langchain_core.callbacks import BaseCallbackHandler
from pydantic import (
UUID4,
BaseModel,
@@ -23,6 +22,7 @@ from crewai.agent import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.cache import CacheHandler
from crewai.crews.crew_output import CrewOutput
from crewai.llm import LLM
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory
@@ -68,7 +68,6 @@ class Crew(BaseModel):
manager_llm: The language model that will run the manager agent.
manager_agent: Custom agent that will be used as manager.
memory: Whether the crew should use memory to store memories of its execution.
manager_callbacks: The callback handlers to be executed by the manager agent when hierarchical process is used
cache: Whether the crew should use a cache to store the results of the tools execution.
function_calling_llm: The language model that will run the tool calling for all the agents.
process: The process flow that the crew will follow (e.g., sequential, hierarchical).
@@ -112,6 +111,18 @@ class Crew(BaseModel):
default=False,
description="Whether the crew should use memory to store memories of it's execution",
)
short_term_memory: Optional[InstanceOf[ShortTermMemory]] = Field(
default=None,
description="An Instance of the ShortTermMemory to be used by the Crew",
)
long_term_memory: Optional[InstanceOf[LongTermMemory]] = Field(
default=None,
description="An Instance of the LongTermMemory to be used by the Crew",
)
entity_memory: Optional[InstanceOf[EntityMemory]] = Field(
default=None,
description="An Instance of the EntityMemory to be used by the Crew",
)
embedder: Optional[dict] = Field(
default={"provider": "openai"},
description="Configuration for the embedder to be used for the crew.",
@@ -126,10 +137,6 @@ class Crew(BaseModel):
manager_agent: Optional[BaseAgent] = Field(
description="Custom agent that will be used as manager.", default=None
)
manager_callbacks: Optional[List[InstanceOf[BaseCallbackHandler]]] = Field(
default=None,
description="A list of callback handlers to be executed by the manager agent when hierarchical process is used",
)
function_calling_llm: Optional[Any] = Field(
description="Language model that will run the agent.", default=None
)
@@ -205,6 +212,15 @@ class Crew(BaseModel):
if self.output_log_file:
self._file_handler = FileHandler(self.output_log_file)
self._rpm_controller = RPMController(max_rpm=self.max_rpm, logger=self._logger)
if self.function_calling_llm:
if isinstance(self.function_calling_llm, str):
self.function_calling_llm = LLM(model=self.function_calling_llm)
elif not isinstance(self.function_calling_llm, LLM):
self.function_calling_llm = LLM(
model=getattr(self.function_calling_llm, "model_name", None)
or getattr(self.function_calling_llm, "deployment_name", None)
or str(self.function_calling_llm)
)
self._telemetry = Telemetry()
self._telemetry.set_tracer()
return self
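
A minimal sketch of the normalization above: a plain model-name string passed as function_calling_llm is wrapped into an LLM instance during validation (my_agents/my_tasks stand in for a minimally valid crew setup; the model name is an example):

crew = Crew(
    agents=my_agents,  # assumed: a list of Agent instances
    tasks=my_tasks,    # assumed: a list of Task instances
    function_calling_llm="gpt-4o-mini",
)
assert isinstance(crew.function_calling_llm, LLM)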
@@ -213,11 +229,19 @@ class Crew(BaseModel):
def create_crew_memory(self) -> "Crew":
"""Set private attributes."""
if self.memory:
self._long_term_memory = (
self.long_term_memory if self.long_term_memory else LongTermMemory()
)
self._short_term_memory = (
self.short_term_memory
if self.short_term_memory
else ShortTermMemory(crew=self, embedder_config=self.embedder)
)
self._entity_memory = (
self.entity_memory
if self.entity_memory
else EntityMemory(crew=self, embedder_config=self.embedder)
)
return self
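
A hedged usage sketch of the new injection points: when memory=True, any instances passed in are used as-is instead of the defaults constructed above (my_agents/my_tasks assumed as before):

crew = Crew(
    agents=my_agents,
    tasks=my_tasks,
    memory=True,
    long_term_memory=LongTermMemory(),
    short_term_memory=ShortTermMemory(),
    entity_memory=EntityMemory(),
)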
@model_validator(mode="after")
@@ -514,10 +538,6 @@ class Crew(BaseModel):
asyncio.create_task(run_crew(crew_copies[i], inputs[i]))
for i in range(len(inputs))
]
results = await asyncio.gather(*tasks)
@@ -588,8 +608,14 @@ class Crew(BaseModel):
"warning", "Manager agent should not have tools", color="orange"
)
manager.tools = []
raise Exception("Manager agent should not have tools")
manager.tools = self.manager_agent.get_delegation_tools(self.agents)
else:
self.manager_llm = (
getattr(self.manager_llm, "model_name", None)
or getattr(self.manager_llm, "deployment_name", None)
or self.manager_llm
)
manager = Agent(
role=i18n.retrieve("hierarchical_manager_agent", "role"),
goal=i18n.retrieve("hierarchical_manager_agent", "goal"),
@@ -599,6 +625,7 @@ class Crew(BaseModel):
verbose=self.verbose,
)
self.manager_agent = manager
manager.crew = self
def _execute_tasks(
self,
@@ -743,9 +770,6 @@ class Crew(BaseModel):
task.tools.append(new_tool)
def _log_task_start(self, task: Task, role: str = "None"):
color = self._logging_color
self._logger.log("debug", f"== Working Agent: {role}", color=color)
self._logger.log("info", f"== Starting Task: {task.description}", color=color)
if self.output_log_file:
self._file_handler.log(agent=role, task=task.description, status="started")
@@ -768,7 +792,6 @@ class Crew(BaseModel):
def _process_task_result(self, task: Task, output: TaskOutput) -> None:
role = task.agent.role if task.agent is not None else "None"
self._logger.log("debug", f"== [{role}] Task output: {output}\n\n")
if self.output_log_file:
self._file_handler.log(agent=role, task=output, status="completed")
@@ -921,29 +944,30 @@ class Crew(BaseModel):
def calculate_usage_metrics(self) -> UsageMetrics:
"""Calculates and returns the usage metrics."""
total_usage_metrics = UsageMetrics()
for agent in self.agents:
if hasattr(agent, "_token_process"):
token_sum = agent._token_process.get_summary()
total_usage_metrics.add_usage_metrics(token_sum)
if self.manager_agent and hasattr(self.manager_agent, "_token_process"):
token_sum = self.manager_agent._token_process.get_summary()
total_usage_metrics.add_usage_metrics(token_sum)
self.usage_metrics = total_usage_metrics
return total_usage_metrics
def test(
self,
n_iterations: int,
openai_model_name: str,
openai_model_name: Optional[str] = None,
inputs: Optional[Dict[str, Any]] = None,
) -> None:
"""Test and evaluate the Crew with the given inputs for n iterations."""
"""Test and evaluate the Crew with the given inputs for n iterations concurrently using concurrent.futures."""
self._test_execution_span = self._telemetry.test_execution_span(
self,
n_iterations,
inputs,
openai_model_name, # type: ignore[arg-type]
) # type: ignore[arg-type]
evaluator = CrewEvaluator(self, openai_model_name) # type: ignore[arg-type]
for i in range(1, n_iterations + 1):
evaluator.set_iteration(i)

View File

@@ -41,6 +41,14 @@ class CrewOutput(BaseModel):
output_dict.update(self.pydantic.model_dump())
return output_dict
def __getitem__(self, key):
if self.pydantic and hasattr(self.pydantic, key):
return getattr(self.pydantic, key)
elif self.json_dict and key in self.json_dict:
return self.json_dict[key]
else:
raise KeyError(f"Key '{key}' not found in CrewOutput.")
def __str__(self):
if self.pydantic:
return str(self.pydantic)
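
A small sketch of the new dict-style access: __getitem__ checks the pydantic output first, then json_dict, and raises KeyError otherwise ("title" is a hypothetical field, and a configured crew is assumed):

result = crew.kickoff()
title = result["title"]  # falls back to result.json_dict["title"] if no pydantic output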

View File

@@ -0,0 +1,3 @@
from crewai.flow.flow import Flow
__all__ = ["Flow"]

View File

@@ -0,0 +1,93 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>{{ title }}</title>
<script
src="https://cdnjs.cloudflare.com/ajax/libs/vis-network/9.1.2/dist/vis-network.min.js"
integrity="sha512-LnvoEWDFrqGHlHmDD2101OrLcbsfkrzoSpvtSQtxK3RMnRV0eOkhhBN2dXHKRrUU8p2DGRTk35n4O8nWSVe1mQ=="
crossorigin="anonymous"
referrerpolicy="no-referrer"
></script>
<link
rel="stylesheet"
href="https://cdnjs.cloudflare.com/ajax/libs/vis-network/9.1.2/dist/dist/vis-network.min.css"
integrity="sha512-WgxfT5LWjfszlPHXRmBWHkV2eceiWTOBvrKCNbdgDYTHrT2AeLCGbF4sZlZw3UMN3WtL0tGUoIAKsu8mllg/XA=="
crossorigin="anonymous"
referrerpolicy="no-referrer"
/>
<style type="text/css">
body {
font-family: verdana;
margin: 0;
padding: 0;
}
.container {
display: flex;
flex-direction: column;
height: 100vh;
}
#mynetwork {
flex-grow: 1;
width: 100%;
height: 750px;
background-color: #ffffff;
}
.card {
border: none;
}
.legend-container {
display: flex;
align-items: center;
justify-content: center;
padding: 10px;
background-color: #f8f9fa;
position: fixed; /* Make the legend fixed */
bottom: 0; /* Position it at the bottom */
width: 100%; /* Make it span the full width */
}
.legend-item {
display: flex;
align-items: center;
margin-right: 20px;
}
.legend-color-box {
width: 20px;
height: 20px;
margin-right: 5px;
}
.logo {
height: 50px;
margin-right: 20px;
}
.legend-dashed {
border-bottom: 2px dashed #666666;
width: 20px;
height: 0;
margin-right: 5px;
}
.legend-solid {
border-bottom: 2px solid #666666;
width: 20px;
height: 0;
margin-right: 5px;
}
</style>
</head>
<body>
<div class="container">
<div class="card" style="width: 100%">
<div id="mynetwork" class="card-body"></div>
</div>
<div class="legend-container">
<img
src="data:image/svg+xml;base64,{{ logo_svg_base64 }}"
alt="CrewAI logo"
class="logo"
/>
<!-- LEGEND_ITEMS_PLACEHOLDER -->
</div>
</div>
{{ network_content }}
</body>
</html>

File diff suppressed because one or more lines are too long

[Image diff: new image file added, 27 KiB]

59
src/crewai/flow/config.py Normal file
View File

@@ -0,0 +1,59 @@
DARK_GRAY = "#333333"
CREWAI_ORANGE = "#FF5A50"
GRAY = "#666666"
WHITE = "#FFFFFF"
BLACK = "#000000"
COLORS = {
"bg": WHITE,
"start": CREWAI_ORANGE,
"method": DARK_GRAY,
"router": DARK_GRAY,
"router_border": CREWAI_ORANGE,
"edge": GRAY,
"router_edge": CREWAI_ORANGE,
"text": WHITE,
}
NODE_STYLES = {
"start": {
"color": CREWAI_ORANGE,
"shape": "box",
"font": {"color": WHITE},
"margin": {"top": 10, "bottom": 8, "left": 10, "right": 10},
},
"method": {
"color": DARK_GRAY,
"shape": "box",
"font": {"color": WHITE},
"margin": {"top": 10, "bottom": 8, "left": 10, "right": 10},
},
"router": {
"color": {
"background": DARK_GRAY,
"border": CREWAI_ORANGE,
"highlight": {
"border": CREWAI_ORANGE,
"background": DARK_GRAY,
},
},
"shape": "box",
"font": {"color": WHITE},
"borderWidth": 3,
"borderWidthSelected": 4,
"shapeProperties": {"borderDashes": [5, 5]},
"margin": {"top": 10, "bottom": 8, "left": 10, "right": 10},
},
"crew": {
"color": {
"background": WHITE,
"border": CREWAI_ORANGE,
},
"shape": "box",
"font": {"color": BLACK},
"borderWidth": 3,
"borderWidthSelected": 4,
"shapeProperties": {"borderDashes": False},
"margin": {"top": 10, "bottom": 8, "left": 10, "right": 10},
},
}

286
src/crewai/flow/flow.py Normal file
View File

@@ -0,0 +1,286 @@
# flow.py
import asyncio
import inspect
from typing import Any, Callable, Dict, Generic, List, Set, Type, TypeVar, Union
from pydantic import BaseModel
from crewai.flow.flow_visualizer import plot_flow
from crewai.flow.utils import get_possible_return_constants
from crewai.telemetry import Telemetry
T = TypeVar("T", bound=Union[BaseModel, Dict[str, Any]])
def start(condition=None):
def decorator(func):
func.__is_start_method__ = True
if condition is not None:
if isinstance(condition, str):
func.__trigger_methods__ = [condition]
func.__condition_type__ = "OR"
elif (
isinstance(condition, dict)
and "type" in condition
and "methods" in condition
):
func.__trigger_methods__ = condition["methods"]
func.__condition_type__ = condition["type"]
elif callable(condition) and hasattr(condition, "__name__"):
func.__trigger_methods__ = [condition.__name__]
func.__condition_type__ = "OR"
else:
raise ValueError(
"Condition must be a method, string, or a result of or_() or and_()"
)
return func
return decorator
def listen(condition):
def decorator(func):
if isinstance(condition, str):
func.__trigger_methods__ = [condition]
func.__condition_type__ = "OR"
elif (
isinstance(condition, dict)
and "type" in condition
and "methods" in condition
):
func.__trigger_methods__ = condition["methods"]
func.__condition_type__ = condition["type"]
elif callable(condition) and hasattr(condition, "__name__"):
func.__trigger_methods__ = [condition.__name__]
func.__condition_type__ = "OR"
else:
raise ValueError(
"Condition must be a method, string, or a result of or_() or and_()"
)
return func
return decorator
def router(method):
def decorator(func):
func.__is_router__ = True
func.__router_for__ = method.__name__
return func
return decorator
def or_(*conditions):
methods = []
for condition in conditions:
if isinstance(condition, dict) and "methods" in condition:
methods.extend(condition["methods"])
elif isinstance(condition, str):
methods.append(condition)
elif callable(condition):
methods.append(getattr(condition, "__name__", repr(condition)))
else:
raise ValueError("Invalid condition in or_()")
return {"type": "OR", "methods": methods}
def and_(*conditions):
methods = []
for condition in conditions:
if isinstance(condition, dict) and "methods" in condition:
methods.extend(condition["methods"])
elif isinstance(condition, str):
methods.append(condition)
elif callable(condition):
methods.append(getattr(condition, "__name__", repr(condition)))
else:
raise ValueError("Invalid condition in and_()")
return {"type": "AND", "methods": methods}
class FlowMeta(type):
def __new__(mcs, name, bases, dct):
cls = super().__new__(mcs, name, bases, dct)
start_methods = []
listeners = {}
routers = {}
router_paths = {}
for attr_name, attr_value in dct.items():
if hasattr(attr_value, "__is_start_method__"):
start_methods.append(attr_name)
if hasattr(attr_value, "__trigger_methods__"):
methods = attr_value.__trigger_methods__
condition_type = getattr(attr_value, "__condition_type__", "OR")
listeners[attr_name] = (condition_type, methods)
elif hasattr(attr_value, "__trigger_methods__"):
methods = attr_value.__trigger_methods__
condition_type = getattr(attr_value, "__condition_type__", "OR")
listeners[attr_name] = (condition_type, methods)
elif hasattr(attr_value, "__is_router__"):
routers[attr_value.__router_for__] = attr_name
possible_returns = get_possible_return_constants(attr_value)
if possible_returns:
router_paths[attr_name] = possible_returns
# Register router as a listener to its triggering method
trigger_method_name = attr_value.__router_for__
methods = [trigger_method_name]
condition_type = "OR"
listeners[attr_name] = (condition_type, methods)
setattr(cls, "_start_methods", start_methods)
setattr(cls, "_listeners", listeners)
setattr(cls, "_routers", routers)
setattr(cls, "_router_paths", router_paths)
return cls
class Flow(Generic[T], metaclass=FlowMeta):
_telemetry = Telemetry()
_start_methods: List[str] = []
_listeners: Dict[str, tuple[str, List[str]]] = {}
_routers: Dict[str, str] = {}
_router_paths: Dict[str, List[str]] = {}
initial_state: Union[Type[T], T, None] = None
def __class_getitem__(cls, item: Type[T]) -> Type["Flow"]:
class _FlowGeneric(cls):
_initial_state_T: Type[T] = item
_FlowGeneric.__name__ = f"{cls.__name__}[{item.__name__}]"
return _FlowGeneric
def __init__(self) -> None:
self._methods: Dict[str, Callable] = {}
self._state: T = self._create_initial_state()
self._completed_methods: Set[str] = set()
self._pending_and_listeners: Dict[str, Set[str]] = {}
self._method_outputs: List[Any] = [] # List to store all method outputs
self._telemetry.flow_creation_span(self.__class__.__name__)
for method_name in dir(self):
if callable(getattr(self, method_name)) and not method_name.startswith(
"__"
):
self._methods[method_name] = getattr(self, method_name)
def _create_initial_state(self) -> T:
if self.initial_state is None and hasattr(self, "_initial_state_T"):
return self._initial_state_T() # type: ignore
if self.initial_state is None:
return {} # type: ignore
elif isinstance(self.initial_state, type):
return self.initial_state()
else:
return self.initial_state
@property
def state(self) -> T:
return self._state
@property
def method_outputs(self) -> List[Any]:
"""Returns the list of all outputs from executed methods."""
return self._method_outputs
async def kickoff(self) -> Any:
if not self._start_methods:
raise ValueError("No start method defined")
self._telemetry.flow_execution_span(
self.__class__.__name__, list(self._methods.keys())
)
# Create tasks for all start methods
tasks = [
self._execute_start_method(start_method)
for start_method in self._start_methods
]
# Run all start methods concurrently
await asyncio.gather(*tasks)
# Return the final output (from the last executed method)
if self._method_outputs:
return self._method_outputs[-1]
else:
return None # Or raise an exception if no methods were executed
async def _execute_start_method(self, start_method: str) -> None:
result = await self._execute_method(self._methods[start_method])
await self._execute_listeners(start_method, result)
async def _execute_method(self, method: Callable, *args: Any, **kwargs: Any) -> Any:
result = (
await method(*args, **kwargs)
if asyncio.iscoroutinefunction(method)
else method(*args, **kwargs)
)
self._method_outputs.append(result) # Store the output
return result
async def _execute_listeners(self, trigger_method: str, result: Any) -> None:
listener_tasks = []
if trigger_method in self._routers:
router_method = self._methods[self._routers[trigger_method]]
path = await self._execute_method(router_method)
# Use the path as the new trigger method
trigger_method = path
for listener, (condition_type, methods) in self._listeners.items():
if condition_type == "OR":
if trigger_method in methods:
listener_tasks.append(
self._execute_single_listener(listener, result)
)
elif condition_type == "AND":
if listener not in self._pending_and_listeners:
self._pending_and_listeners[listener] = set()
self._pending_and_listeners[listener].add(trigger_method)
if set(methods) == self._pending_and_listeners[listener]:
listener_tasks.append(
self._execute_single_listener(listener, result)
)
del self._pending_and_listeners[listener]
# Run all listener tasks concurrently and wait for them to complete
await asyncio.gather(*listener_tasks)
async def _execute_single_listener(self, listener: str, result: Any) -> None:
try:
method = self._methods[listener]
sig = inspect.signature(method)
params = list(sig.parameters.values())
# Exclude 'self' parameter
method_params = [p for p in params if p.name != "self"]
if method_params:
# If listener expects parameters, pass the result
listener_result = await self._execute_method(method, result)
else:
# If listener does not expect parameters, call without arguments
listener_result = await self._execute_method(method)
# Execute listeners of this listener
await self._execute_listeners(listener, listener_result)
except Exception as e:
print(f"[Flow._execute_single_listener] Error in method {listener}: {e}")
import traceback
traceback.print_exc()
def plot(self, filename: str = "crewai_flow") -> None:
self._telemetry.flow_plotting_span(
self.__class__.__name__, list(self._methods.keys())
)
plot_flow(self, filename)
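
A minimal end-to-end sketch of the API above, with illustrative method names:

import asyncio

class ExampleFlow(Flow):
    @start()
    def begin(self):
        return "payload"

    @listen(begin)
    def after_begin(self, result):
        return f"got {result}"

async def main():
    flow = ExampleFlow()
    final = await flow.kickoff()
    print(final)  # "got payload" -- kickoff returns the last method output

asyncio.run(main())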

View File

@@ -0,0 +1,104 @@
# flow_visualizer.py
import os
from pyvis.network import Network
from crewai.flow.config import COLORS, NODE_STYLES
from crewai.flow.html_template_handler import HTMLTemplateHandler
from crewai.flow.legend_generator import generate_legend_items_html, get_legend_items
from crewai.flow.utils import calculate_node_levels
from crewai.flow.visualization_utils import (
add_edges,
add_nodes_to_network,
compute_positions,
)
class FlowPlot:
def __init__(self, flow):
self.flow = flow
self.colors = COLORS
self.node_styles = NODE_STYLES
def plot(self, filename):
net = Network(
directed=True,
height="750px",
width="100%",
bgcolor=self.colors["bg"],
layout=None,
)
# Set options to disable physics
net.set_options(
"""
var options = {
"nodes": {
"font": {
"multi": "html"
}
},
"physics": {
"enabled": false
}
}
"""
)
# Calculate levels for nodes
node_levels = calculate_node_levels(self.flow)
# Compute positions
node_positions = compute_positions(self.flow, node_levels)
# Add nodes to the network
add_nodes_to_network(net, self.flow, node_positions, self.node_styles)
# Add edges to the network
add_edges(net, self.flow, node_positions, self.colors)
network_html = net.generate_html()
final_html_content = self._generate_final_html(network_html)
# Save the final HTML content to the file
with open(f"{filename}.html", "w", encoding="utf-8") as f:
f.write(final_html_content)
print(f"Plot saved as {filename}.html")
self._cleanup_pyvis_lib()
def _generate_final_html(self, network_html):
# Extract just the body content from the generated HTML
current_dir = os.path.dirname(__file__)
template_path = os.path.join(
current_dir, "assets", "crewai_flow_visual_template.html"
)
logo_path = os.path.join(current_dir, "assets", "crewai_logo.svg")
html_handler = HTMLTemplateHandler(template_path, logo_path)
network_body = html_handler.extract_body_content(network_html)
# Generate the legend items HTML
legend_items = get_legend_items(self.colors)
legend_items_html = generate_legend_items_html(legend_items)
final_html_content = html_handler.generate_final_html(
network_body, legend_items_html
)
return final_html_content
def _cleanup_pyvis_lib(self):
# Clean up the generated lib folder
lib_folder = os.path.join(os.getcwd(), "lib")
try:
if os.path.exists(lib_folder) and os.path.isdir(lib_folder):
import shutil
shutil.rmtree(lib_folder)
except Exception as e:
print(f"Error cleaning up {lib_folder}: {e}")
def plot_flow(flow, filename="flow_plot"):
visualizer = FlowPlot(flow)
visualizer.plot(filename)
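
Usage sketch: either entry point writes <filename>.html into the current working directory (ExampleFlow is the sketch from flow.py above):

flow = ExampleFlow()
flow.plot("example_flow")        # via Flow.plot, which records telemetry first
plot_flow(flow, "example_flow")  # the direct module-level helper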

View File

@@ -0,0 +1,65 @@
import base64
import re
class HTMLTemplateHandler:
def __init__(self, template_path, logo_path):
self.template_path = template_path
self.logo_path = logo_path
def read_template(self):
with open(self.template_path, "r", encoding="utf-8") as f:
return f.read()
def encode_logo(self):
with open(self.logo_path, "rb") as logo_file:
logo_svg_data = logo_file.read()
return base64.b64encode(logo_svg_data).decode("utf-8")
def extract_body_content(self, html):
match = re.search("<body.*?>(.*?)</body>", html, re.DOTALL)
return match.group(1) if match else ""
def generate_legend_items_html(self, legend_items):
legend_items_html = ""
for item in legend_items:
if "border" in item:
legend_items_html += f"""
<div class="legend-item">
<div class="legend-color-box" style="background-color: {item['color']}; border: 2px dashed {item['border']};"></div>
<div>{item['label']}</div>
</div>
"""
elif item.get("dashed") is not None:
style = "dashed" if item["dashed"] else "solid"
legend_items_html += f"""
<div class="legend-item">
<div class="legend-{style}" style="border-bottom: 2px {style} {item['color']};"></div>
<div>{item['label']}</div>
</div>
"""
else:
legend_items_html += f"""
<div class="legend-item">
<div class="legend-color-box" style="background-color: {item['color']};"></div>
<div>{item['label']}</div>
</div>
"""
return legend_items_html
def generate_final_html(self, network_body, legend_items_html, title="Flow Plot"):
html_template = self.read_template()
logo_svg_base64 = self.encode_logo()
final_html_content = html_template.replace("{{ title }}", title)
final_html_content = final_html_content.replace(
"{{ network_content }}", network_body
)
final_html_content = final_html_content.replace(
"{{ logo_svg_base64 }}", logo_svg_base64
)
final_html_content = final_html_content.replace(
"<!-- LEGEND_ITEMS_PLACEHOLDER -->", legend_items_html
)
return final_html_content

View File

@@ -0,0 +1,53 @@
def get_legend_items(colors):
return [
{"label": "Start Method", "color": colors["start"]},
{"label": "Method", "color": colors["method"]},
{
"label": "Crew Method",
"color": colors["bg"],
"border": colors["start"],
"dashed": False,
},
{
"label": "Router",
"color": colors["router"],
"border": colors["router_border"],
"dashed": True,
},
{"label": "Trigger", "color": colors["edge"], "dashed": False},
{"label": "AND Trigger", "color": colors["edge"], "dashed": True},
{
"label": "Router Trigger",
"color": colors["router_edge"],
"dashed": True,
},
]
def generate_legend_items_html(legend_items):
legend_items_html = ""
for item in legend_items:
if "border" in item:
style = "dashed" if item["dashed"] else "solid"
legend_items_html += f"""
<div class="legend-item">
<div class="legend-color-box" style="background-color: {item['color']}; border: 2px {style} {item['border']}; border-radius: 5px;"></div>
<div>{item['label']}</div>
</div>
"""
elif item.get("dashed") is not None:
style = "dashed" if item["dashed"] else "solid"
legend_items_html += f"""
<div class="legend-item">
<div class="legend-{style}" style="border-bottom: 2px {style} {item['color']}; border-radius: 5px;"></div>
<div>{item['label']}</div>
</div>
"""
else:
legend_items_html += f"""
<div class="legend-item">
<div class="legend-color-box" style="background-color: {item['color']}; border-radius: 5px;"></div>
<div>{item['label']}</div>
</div>
"""
return legend_items_html

188
src/crewai/flow/utils.py Normal file
View File

@@ -0,0 +1,188 @@
import ast
import inspect
import textwrap
def get_possible_return_constants(function):
try:
source = inspect.getsource(function)
except OSError:
# Can't get source code
return None
except Exception as e:
print(f"Error retrieving source code for function {function.__name__}: {e}")
return None
try:
# Remove leading indentation
source = textwrap.dedent(source)
# Parse the source code into an AST
code_ast = ast.parse(source)
except IndentationError as e:
print(f"IndentationError while parsing source code of {function.__name__}: {e}")
print(f"Source code:\n{source}")
return None
except SyntaxError as e:
print(f"SyntaxError while parsing source code of {function.__name__}: {e}")
print(f"Source code:\n{source}")
return None
except Exception as e:
print(f"Unexpected error while parsing source code of {function.__name__}: {e}")
print(f"Source code:\n{source}")
return None
return_values = []
class ReturnVisitor(ast.NodeVisitor):
def visit_Return(self, node):
# Check if the return value is a constant (Python 3.8+)
if isinstance(node.value, ast.Constant):
return_values.append(node.value.value)
ReturnVisitor().visit(code_ast)
return return_values
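
A quick sketch of what the AST walk extracts: the string constants a router method can return, which become its labeled paths in the plot:

def route(state):
    if state.get("ok"):
        return "success"
    return "failure"

get_possible_return_constants(route)  # -> ["success", "failure"]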
def calculate_node_levels(flow):
levels = {}
queue = []
visited = set()
pending_and_listeners = {}
# Make all start methods at level 0
for method_name, method in flow._methods.items():
if hasattr(method, "__is_start_method__"):
levels[method_name] = 0
queue.append(method_name)
# Breadth-first traversal to assign levels
while queue:
current = queue.pop(0)
current_level = levels[current]
visited.add(current)
for listener_name, (
condition_type,
trigger_methods,
) in flow._listeners.items():
if condition_type == "OR":
if current in trigger_methods:
if (
listener_name not in levels
or levels[listener_name] > current_level + 1
):
levels[listener_name] = current_level + 1
if listener_name not in visited:
queue.append(listener_name)
elif condition_type == "AND":
if listener_name not in pending_and_listeners:
pending_and_listeners[listener_name] = set()
if current in trigger_methods:
pending_and_listeners[listener_name].add(current)
if set(trigger_methods) == pending_and_listeners[listener_name]:
if (
listener_name not in levels
or levels[listener_name] > current_level + 1
):
levels[listener_name] = current_level + 1
if listener_name not in visited:
queue.append(listener_name)
# Handle router connections
if current in flow._routers.values():
router_method_name = current
paths = flow._router_paths.get(router_method_name, [])
for path in paths:
for listener_name, (
condition_type,
trigger_methods,
) in flow._listeners.items():
if path in trigger_methods:
if (
listener_name not in levels
or levels[listener_name] > current_level + 1
):
levels[listener_name] = current_level + 1
if listener_name not in visited:
queue.append(listener_name)
return levels
def count_outgoing_edges(flow):
counts = {}
for method_name in flow._methods:
counts[method_name] = 0
for method_name in flow._listeners:
_, trigger_methods = flow._listeners[method_name]
for trigger in trigger_methods:
if trigger in flow._methods:
counts[trigger] += 1
return counts
def build_ancestor_dict(flow):
ancestors = {node: set() for node in flow._methods}
visited = set()
for node in flow._methods:
if node not in visited:
dfs_ancestors(node, ancestors, visited, flow)
return ancestors
def dfs_ancestors(node, ancestors, visited, flow):
if node in visited:
return
visited.add(node)
# Handle regular listeners
for listener_name, (_, trigger_methods) in flow._listeners.items():
if node in trigger_methods:
ancestors[listener_name].add(node)
ancestors[listener_name].update(ancestors[node])
dfs_ancestors(listener_name, ancestors, visited, flow)
# Handle router methods separately
if node in flow._routers.values():
router_method_name = node
paths = flow._router_paths.get(router_method_name, [])
for path in paths:
for listener_name, (_, trigger_methods) in flow._listeners.items():
if path in trigger_methods:
# Only propagate the ancestors of the router method, not the router method itself
ancestors[listener_name].update(ancestors[node])
dfs_ancestors(listener_name, ancestors, visited, flow)
def is_ancestor(node, ancestor_candidate, ancestors):
return ancestor_candidate in ancestors.get(node, set())
def build_parent_children_dict(flow):
parent_children = {}
# Map listeners to their trigger methods
for listener_name, (_, trigger_methods) in flow._listeners.items():
for trigger in trigger_methods:
if trigger not in parent_children:
parent_children[trigger] = []
if listener_name not in parent_children[trigger]:
parent_children[trigger].append(listener_name)
# Map router methods to their paths and to listeners
for router_method_name, paths in flow._router_paths.items():
for path in paths:
# Map router method to listeners of each path
for listener_name, (_, trigger_methods) in flow._listeners.items():
if path in trigger_methods:
if router_method_name not in parent_children:
parent_children[router_method_name] = []
if listener_name not in parent_children[router_method_name]:
parent_children[router_method_name].append(listener_name)
return parent_children
def get_child_index(parent, child, parent_children):
children = parent_children.get(parent, [])
children.sort()
return children.index(child)

View File

@@ -0,0 +1,178 @@
import ast
import inspect
from .utils import (
build_ancestor_dict,
build_parent_children_dict,
get_child_index,
is_ancestor,
)
def method_calls_crew(method):
"""Check if the method calls `.crew()`."""
try:
source = inspect.getsource(method)
source = inspect.cleandoc(source)
tree = ast.parse(source)
except Exception as e:
print(f"Could not parse method {method.__name__}: {e}")
return False
class CrewCallVisitor(ast.NodeVisitor):
def __init__(self):
self.found = False
def visit_Call(self, node):
if isinstance(node.func, ast.Attribute):
if node.func.attr == "crew":
self.found = True
self.generic_visit(node)
visitor = CrewCallVisitor()
visitor.visit(tree)
return visitor.found
def add_nodes_to_network(net, flow, node_positions, node_styles):
def human_friendly_label(method_name):
return method_name.replace("_", " ").title()
for method_name, (x, y) in node_positions.items():
method = flow._methods.get(method_name)
if hasattr(method, "__is_start_method__"):
node_style = node_styles["start"]
elif hasattr(method, "__is_router__"):
node_style = node_styles["router"]
elif method_calls_crew(method):
node_style = node_styles["crew"]
else:
node_style = node_styles["method"]
node_style = node_style.copy()
label = human_friendly_label(method_name)
node_style.update(
{
"label": label,
"shape": "box",
"font": {
"multi": "html",
"color": node_style.get("font", {}).get("color", "#FFFFFF"),
},
}
)
net.add_node(
method_name,
x=x,
y=y,
fixed=True,
physics=False,
**node_style,
)
def compute_positions(flow, node_levels, y_spacing=150, x_spacing=150):
level_nodes = {}
node_positions = {}
for method_name, level in node_levels.items():
level_nodes.setdefault(level, []).append(method_name)
for level, nodes in level_nodes.items():
x_offset = -(len(nodes) - 1) * x_spacing / 2 # Center nodes horizontally
for i, method_name in enumerate(nodes):
x = x_offset + i * x_spacing
y = level * y_spacing
node_positions[method_name] = (x, y)
return node_positions
def add_edges(net, flow, node_positions, colors):
ancestors = build_ancestor_dict(flow)
parent_children = build_parent_children_dict(flow)
for method_name in flow._listeners:
condition_type, trigger_methods = flow._listeners[method_name]
is_and_condition = condition_type == "AND"
for trigger in trigger_methods:
if trigger in flow._methods or trigger in flow._routers.values():
is_router_edge = any(
trigger in paths for paths in flow._router_paths.values()
)
edge_color = colors["router_edge"] if is_router_edge else colors["edge"]
is_cycle_edge = is_ancestor(trigger, method_name, ancestors)
parent_has_multiple_children = len(parent_children.get(trigger, [])) > 1
needs_curvature = is_cycle_edge or parent_has_multiple_children
if needs_curvature:
source_pos = node_positions.get(trigger)
target_pos = node_positions.get(method_name)
if source_pos and target_pos:
dx = target_pos[0] - source_pos[0]
smooth_type = "curvedCCW" if dx <= 0 else "curvedCW"
index = get_child_index(trigger, method_name, parent_children)
edge_smooth = {
"type": smooth_type,
"roundness": 0.2 + (0.1 * index),
}
else:
edge_smooth = {"type": "cubicBezier"}
else:
edge_smooth = False
edge_style = {
"color": edge_color,
"width": 2,
"arrows": "to",
"dashes": True if is_router_edge or is_and_condition else False,
"smooth": edge_smooth,
}
net.add_edge(trigger, method_name, **edge_style)
for router_method_name, paths in flow._router_paths.items():
for path in paths:
for listener_name, (
condition_type,
trigger_methods,
) in flow._listeners.items():
if path in trigger_methods:
is_cycle_edge = is_ancestor(router_method_name, listener_name, ancestors)
parent_has_multiple_children = (
len(parent_children.get(router_method_name, [])) > 1
)
needs_curvature = is_cycle_edge or parent_has_multiple_children
if needs_curvature:
source_pos = node_positions.get(router_method_name)
target_pos = node_positions.get(listener_name)
if source_pos and target_pos:
dx = target_pos[0] - source_pos[0]
smooth_type = "curvedCCW" if dx <= 0 else "curvedCW"
index = get_child_index(
router_method_name, listener_name, parent_children
)
edge_smooth = {
"type": smooth_type,
"roundness": 0.2 + (0.1 * index),
}
else:
edge_smooth = {"type": "cubicBezier"}
else:
edge_smooth = False
edge_style = {
"color": colors["router_edge"],
"width": 2,
"arrows": "to",
"dashes": True,
"smooth": edge_smooth,
}
net.add_edge(router_method_name, listener_name, **edge_style)

183
src/crewai/llm.py Normal file
View File

@@ -0,0 +1,183 @@
from contextlib import contextmanager
from typing import Any, Dict, List, Optional, Union
import logging
import warnings
import litellm
from litellm import get_supported_openai_params
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException,
)
import sys
import io
class FilteredStream(io.StringIO):
def write(self, s):
if (
"Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new"
in s
or "LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`"
in s
):
return
super().write(s)
LLM_CONTEXT_WINDOW_SIZES = {
# openai
"gpt-4": 8192,
"gpt-4o": 128000,
"gpt-4o-mini": 128000,
"gpt-4-turbo": 128000,
"o1-preview": 128000,
"o1-mini": 128000,
# deepseek
"deepseek-chat": 128000,
# groq
"gemma2-9b-it": 8192,
"gemma-7b-it": 8192,
"llama3-groq-70b-8192-tool-use-preview": 8192,
"llama3-groq-8b-8192-tool-use-preview": 8192,
"llama-3.1-70b-versatile": 131072,
"llama-3.1-8b-instant": 131072,
"llama-3.2-1b-preview": 8192,
"llama-3.2-3b-preview": 8192,
"llama-3.2-11b-text-preview": 8192,
"llama-3.2-90b-text-preview": 8192,
"llama3-70b-8192": 8192,
"llama3-8b-8192": 8192,
"mixtral-8x7b-32768": 32768,
}
@contextmanager
def suppress_warnings():
with warnings.catch_warnings():
warnings.filterwarnings("ignore")
# Redirect stdout and stderr
old_stdout = sys.stdout
old_stderr = sys.stderr
sys.stdout = FilteredStream()
sys.stderr = FilteredStream()
try:
yield
finally:
# Restore stdout and stderr
sys.stdout = old_stdout
sys.stderr = old_stderr
class LLM:
def __init__(
self,
model: str,
timeout: Optional[Union[float, int]] = None,
temperature: Optional[float] = None,
top_p: Optional[float] = None,
n: Optional[int] = None,
stop: Optional[Union[str, List[str]]] = None,
max_completion_tokens: Optional[int] = None,
max_tokens: Optional[int] = None,
presence_penalty: Optional[float] = None,
frequency_penalty: Optional[float] = None,
logit_bias: Optional[Dict[int, float]] = None,
response_format: Optional[Dict[str, Any]] = None,
seed: Optional[int] = None,
logprobs: Optional[bool] = None,
top_logprobs: Optional[int] = None,
base_url: Optional[str] = None,
api_version: Optional[str] = None,
api_key: Optional[str] = None,
callbacks: List[Any] = [],
**kwargs,
):
self.model = model
self.timeout = timeout
self.temperature = temperature
self.top_p = top_p
self.n = n
self.stop = stop
self.max_completion_tokens = max_completion_tokens
self.max_tokens = max_tokens
self.presence_penalty = presence_penalty
self.frequency_penalty = frequency_penalty
self.logit_bias = logit_bias
self.response_format = response_format
self.seed = seed
self.logprobs = logprobs
self.top_logprobs = top_logprobs
self.base_url = base_url
self.api_version = api_version
self.api_key = api_key
self.callbacks = callbacks
self.kwargs = kwargs
litellm.drop_params = True
litellm.set_verbose = False
litellm.callbacks = callbacks
def call(self, messages: List[Dict[str, str]], callbacks: List[Any] = []) -> str:
with suppress_warnings():
if callbacks and len(callbacks) > 0:
litellm.callbacks = callbacks
try:
params = {
"model": self.model,
"messages": messages,
"timeout": self.timeout,
"temperature": self.temperature,
"top_p": self.top_p,
"n": self.n,
"stop": self.stop,
"max_tokens": self.max_tokens or self.max_completion_tokens,
"presence_penalty": self.presence_penalty,
"frequency_penalty": self.frequency_penalty,
"logit_bias": self.logit_bias,
"response_format": self.response_format,
"seed": self.seed,
"logprobs": self.logprobs,
"top_logprobs": self.top_logprobs,
"api_base": self.base_url,
"api_version": self.api_version,
"api_key": self.api_key,
"stream": False,
**self.kwargs,
}
# Remove None values to avoid passing unnecessary parameters
params = {k: v for k, v in params.items() if v is not None}
response = litellm.completion(**params)
return response["choices"][0]["message"]["content"]
except Exception as e:
if not LLMContextLengthExceededException(
str(e)
)._is_context_limit_error(str(e)):
logging.error(f"LiteLLM call failed: {str(e)}")
raise # Re-raise the exception after logging
def supports_function_calling(self) -> bool:
try:
params = get_supported_openai_params(model=self.model)
return "response_format" in params
except Exception as e:
logging.error(f"Failed to get supported params: {str(e)}")
return False
def supports_stop_words(self) -> bool:
try:
params = get_supported_openai_params(model=self.model)
return "stop" in params
except Exception as e:
logging.error(f"Failed to get supported params: {str(e)}")
return False
def get_context_window_size(self) -> int:
# Only using 75% of the context window size to avoid cutting the message in the middle
return int(LLM_CONTEXT_WINDOW_SIZES.get(self.model, 8192) * 0.75)
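
A hedged usage sketch of the new LLM wrapper (the model name and message are illustrative, and an API key is expected in the environment):

llm = LLM(model="gpt-4o-mini", temperature=0.2)
reply = llm.call([{"role": "user", "content": "Say hello."}])
print(reply)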

View File

@@ -10,12 +10,13 @@ class EntityMemory(Memory):
Inherits from the Memory class.
"""
def __init__(self, crew=None, embedder_config=None, storage=None):
storage = (
storage
if storage
else RAGStorage(
type="entities", allow_reset=False, embedder_config=embedder_config, crew=crew
)
)
super().__init__(storage)

View File

@@ -14,8 +14,8 @@ class LongTermMemory(Memory):
LongTermMemoryItem instances.
"""
def __init__(self, storage=None):
storage = storage if storage else LTMSQLiteStorage()
super().__init__(storage)
def save(self, item: LongTermMemoryItem) -> None: # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"

View File

@@ -13,9 +13,13 @@ class ShortTermMemory(Memory):
MemoryItem instances.
"""
def __init__(self, crew=None, embedder_config=None, storage=None):
storage = (
storage
if storage
else RAGStorage(
type="short_term", embedder_config=embedder_config, crew=crew
)
)
super().__init__(storage)

View File

@@ -1,6 +1,7 @@
from functools import wraps
from crewai.project.utils import memoize
from crewai import Crew
def task(func):
@@ -72,7 +73,7 @@ def pipeline(func):
return memoize(func)
def crew(func) -> "Crew":
def wrapper(self, *args, **kwargs):
instantiated_tasks = []
instantiated_agents = []

View File

@@ -1,14 +1,16 @@
import inspect
from pathlib import Path
from typing import Any, Callable, Dict
from typing import Any, Callable, Dict, Type, TypeVar
import yaml
from dotenv import load_dotenv
load_dotenv()
T = TypeVar("T", bound=Type[Any])
def CrewBase(cls: T) -> T:
class WrappedClass(cls):
is_crew_class: bool = True # type: ignore
@@ -35,7 +37,7 @@ def CrewBase(cls):
@staticmethod
def load_yaml(config_path: Path):
try:
with open(config_path, "r") as file:
with open(config_path, "r", encoding="utf-8") as file:
return yaml.safe_load(file)
except FileNotFoundError:
print(f"File not found: {config_path}")
@@ -89,7 +91,10 @@ def CrewBase(cls):
callbacks: Dict[str, Callable],
) -> None:
if llm := agent_info.get("llm"):
self.agents_config[agent_name]["llm"] = llms[llm]()
try:
self.agents_config[agent_name]["llm"] = llms[llm]()
except KeyError:
self.agents_config[agent_name]["llm"] = llm
if tools := agent_info.get("tools"):
self.agents_config[agent_name]["tools"] = [

View File

@@ -276,7 +276,9 @@ class Task(BaseModel):
content = (
json_output
if json_output
else pydantic_output.model_dump_json()
if pydantic_output
else result
)
self._save_file(content)

View File

@@ -4,15 +4,30 @@ import asyncio
import json
import os
import platform
import warnings
from contextlib import contextmanager
from typing import TYPE_CHECKING, Any, Optional
@contextmanager
def suppress_warnings():
with warnings.catch_warnings():
warnings.filterwarnings("ignore")
yield
with suppress_warnings():
import pkg_resources
from opentelemetry import trace # noqa: E402
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
OTLPSpanExporter, # noqa: E402
)
from opentelemetry.sdk.resources import SERVICE_NAME, Resource # noqa: E402
from opentelemetry.sdk.trace import TracerProvider # noqa: E402
from opentelemetry.sdk.trace.export import BatchSpanProcessor # noqa: E402
from opentelemetry.trace import Span, Status, StatusCode # noqa: E402
if TYPE_CHECKING:
from crewai.crew import Crew
@@ -40,7 +55,8 @@ class Telemetry:
self.resource = Resource(
attributes={SERVICE_NAME: "crewAI-telemetry"},
)
with suppress_warnings():
self.provider = TracerProvider(resource=self.resource)
processor = BatchSpanProcessor(
OTLPSpanExporter(
@@ -62,8 +78,9 @@ class Telemetry:
def set_tracer(self):
if self.ready and not self.trace_set:
try:
with suppress_warnings():
trace.set_tracer_provider(self.provider)
self.trace_set = True
except Exception:
self.ready = False
self.trace_set = False
@@ -102,14 +119,12 @@ class Telemetry:
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"i18n": agent.i18n.prompt_file,
"function_calling_llm": json.dumps(
self._safe_llm_attributes(
agent.function_calling_llm
)
),
"llm": json.dumps(
self._safe_llm_attributes(agent.llm)
"function_calling_llm": (
agent.function_calling_llm.model
if agent.function_calling_llm
else ""
),
"llm": agent.llm.model,
"delegation_enabled?": agent.allow_delegation,
"allow_code_execution?": agent.allow_code_execution,
"max_retry_limit": agent.max_retry_limit,
@@ -134,9 +149,9 @@ class Telemetry:
"expected_output": task.expected_output,
"async_execution?": task.async_execution,
"human_input?": task.human_input,
"agent_role": task.agent.role
if task.agent
else "None",
"agent_role": (
task.agent.role if task.agent else "None"
),
"agent_key": task.agent.key if task.agent else None,
"context": (
[task.description for task in task.context]
@@ -173,14 +188,12 @@ class Telemetry:
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"function_calling_llm": json.dumps(
self._safe_llm_attributes(
agent.function_calling_llm
)
),
"llm": json.dumps(
self._safe_llm_attributes(agent.llm)
"function_calling_llm": (
agent.function_calling_llm.model
if agent.function_calling_llm
else ""
),
"llm": agent.llm.model,
"delegation_enabled?": agent.allow_delegation,
"allow_code_execution?": agent.allow_code_execution,
"max_retry_limit": agent.max_retry_limit,
@@ -203,9 +216,9 @@ class Telemetry:
"id": str(task.id),
"async_execution?": task.async_execution,
"human_input?": task.human_input,
"agent_role": task.agent.role
if task.agent
else "None",
"agent_role": (
task.agent.role if task.agent else "None"
),
"agent_key": task.agent.key if task.agent else None,
"tools_names": [
tool.name.casefold()
@@ -294,9 +307,7 @@ class Telemetry:
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
if llm:
self._add_attribute(span, "llm", llm.model)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
@@ -316,9 +327,7 @@ class Telemetry:
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
if llm:
self._add_attribute(span, "llm", llm.model)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
@@ -336,9 +345,7 @@ class Telemetry:
pkg_resources.get_distribution("crewai").version,
)
if llm:
self._add_attribute(span, "llm", llm.model)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
@@ -491,7 +498,7 @@ class Telemetry:
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"i18n": agent.i18n.prompt_file,
"llm": json.dumps(self._safe_llm_attributes(agent.llm)),
"llm": agent.llm.model,
"delegation_enabled?": agent.allow_delegation,
"tools_names": [
tool.name.casefold() for tool in agent.tools or []
@@ -568,10 +575,37 @@ class Telemetry:
except Exception:
pass
def _safe_llm_attributes(self, llm):
attributes = ["name", "model_name", "model", "top_k", "temperature"]
if llm:
safe_attributes = {k: v for k, v in vars(llm).items() if k in attributes}
safe_attributes["class"] = llm.__class__.__name__
return safe_attributes
return {}
def flow_creation_span(self, flow_name: str):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Flow Creation")
self._add_attribute(span, "flow_name", flow_name)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def flow_plotting_span(self, flow_name: str, node_names: list[str]):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Flow Plotting")
self._add_attribute(span, "flow_name", flow_name)
self._add_attribute(span, "node_names", json.dumps(node_names))
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def flow_execution_span(self, flow_name: str, node_names: list[str]):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Flow Execution")
self._add_attribute(span, "flow_name", flow_name)
self._add_attribute(span, "node_names", json.dumps(node_names))
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass

View File

@@ -1,5 +1,4 @@
from langchain.tools import StructuredTool
from crewai.agents.agent_builder.utilities.base_agent_tool import BaseAgentTools

View File

@@ -1,39 +0,0 @@
import json
from typing import Any, List
import regex
from langchain.output_parsers import PydanticOutputParser
from langchain_core.exceptions import OutputParserException
from langchain_core.outputs import Generation
from pydantic import ValidationError
class ToolOutputParser(PydanticOutputParser):
"""Parses the function calling of a tool usage and it's arguments."""
def parse_result(self, result: List[Generation], *, partial: bool = False) -> Any:
result[0].text = self._transform_in_valid_json(result[0].text)
json_object = super().parse_result(result)
try:
return self.pydantic_object.parse_obj(json_object)
except ValidationError as e:
name = self.pydantic_object.__name__
msg = f"Failed to parse {name} from completion {json_object}. Got: {e}"
raise OutputParserException(msg, llm_output=json_object)
def _transform_in_valid_json(self, text) -> str:
text = text.replace("```", "").replace("json", "")
json_pattern = r"\{(?:[^{}]|(?R))*\}"
matches = regex.finditer(json_pattern, text)
for match in matches:
try:
# Attempt to parse the matched string as JSON
json_obj = json.loads(match.group())
# Return the first successfully parsed JSON object
json_obj = json.dumps(json_obj)
return str(json_obj)
except json.JSONDecodeError:
# If parsing fails, skip to the next match
continue
return text

Some files were not shown because too many files have changed in this diff.