Compare commits

...

88 Commits

Author SHA1 Message Date
João Moura
1a1eb4e7aa preparing new version 0.5.3 2024-02-07 02:14:58 -08:00
João Moura
723fdc6245 adding fix to hierarchical process 2024-02-07 02:13:19 -08:00
João Moura
43a47b8bdf preparing v0.5.2 2024-02-06 00:04:53 -08:00
João Moura
ab5647145f updating RPM and max_inter logic 2024-02-05 23:14:22 -08:00
João Moura
856981e0ed updating docs and readme 2024-02-05 23:13:10 -08:00
João Moura
09bec0e28b adding manager_llm 2024-02-05 20:46:47 -08:00
João Moura
2f0bf3b325 updating readme 2024-02-04 13:13:42 -08:00
João Moura
51278424c1 moving dependencies 2024-02-04 12:11:11 -08:00
João Moura
bfe26de026 updating readme 2024-02-04 12:07:40 -08:00
João Moura
db100439cb preparing new version 0.5.0 2024-02-04 12:01:05 -08:00
João Moura
c37f54c86f installing mkdocs dependencies 2024-02-04 11:58:21 -08:00
João Moura
e0262d9712 fixing dependencies for mkdocs 2024-02-04 11:51:44 -08:00
João Moura
63fb5a22be adding new docs and smaller fixes 2024-02-04 11:47:49 -08:00
João Moura
05dda59cf6 Adding multi thread execution 2024-02-03 23:24:41 -08:00
João Moura
5628bcca78 updating docs 2024-02-03 23:23:47 -08:00
João Moura
6042d9a7d8 Update README.md 2024-02-03 05:48:54 -03:00
João Moura
144239394d simplifying README 2024-02-03 00:04:33 -08:00
João Moura
d712ee8451 adding ability to pass context to tasks 2024-02-02 23:17:02 -08:00
João Moura
a8c1348235 Update README.md 2024-02-03 02:26:10 -03:00
João Moura
148d9202bf Update README.md 2024-02-03 01:33:59 -03:00
Ilya Sudakov
44442e6407 Update README.md: new header, text clean up, fix broken links (#210)
* Update README.MD

* Update examples section in README.md
2024-02-03 01:29:04 -03:00
Gui Vieira
c78237cb86 Hierarchical process (#206)
* Hierarchical process +  Docs
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-02-02 13:56:35 -03:00
João Moura
8fc0f33dd5 adding task callback 2024-01-30 22:46:20 -03:00
João Moura
2010702880 Update README.md 2024-01-29 22:49:17 -03:00
Guilherme Vieira
29c31a2404 Fix static typing errors (#187)
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-01-29 19:52:14 -03:00
João Moura
cd77981102 Adding support for expected output 2024-01-29 00:11:30 -03:00
IT Lackey
4f78d1e29c Feature: Documentation Site (#188) 2024-01-28 23:43:23 -03:00
Eliad Cohen
5be79454c3 Addresses typo and clarifiction in comments (#191)
Minor changes include a typo fixed and enhancing
an example for using OpenAI as an agent model with Ollama
via langchain

Resolves #189 #190
2024-01-28 23:42:31 -03:00
João Moura
d8c14ff31e updating website for crewai 2024-01-28 23:36:39 -03:00
João Moura
9e1be4ecd2 Update README.md 2024-01-22 11:05:01 -03:00
scott------
327d5c3a53 Update agent.py (#161)
adding tools to the list of attribute descriptions
2024-01-21 16:56:19 -03:00
Greyson LaLonde
852ca21e38 Update some docstrings / typehints (#144) 2024-01-21 16:55:17 -03:00
Prabha Arivalagan
23a549ac65 Fixed the small typo (#168) 2024-01-21 16:54:19 -03:00
João Moura
3e9630afe8 cutting new version 2024-01-14 11:25:09 -03:00
João Moura
2bf924b732 Add RPM control to both agents and crews (#133)
* moving file into utilities
* creating Logger and RPMController
* Adding support for RPM to agents and crew
2024-01-14 00:22:11 -03:00
João Moura
3686804f7e Update tests.yml 2024-01-14 00:11:53 -03:00
João Moura
4b8f99d7a3 slightly improving prompts 2024-01-13 11:32:32 -03:00
Jimmy Kounelis
4d996044e6 Adding Greek translation (#122)
* Adding Greek translation
Co-authored-by: JimJim12 <loljk@Madness>
2024-01-13 11:22:23 -03:00
João Moura
53a32153a5 Adding support for Crew throttling using RPM (#124)
* Add translations
* fixing translations
* Adding support for Crew throttling with RPM
2024-01-13 11:20:30 -03:00
Greyson LaLonde
cbe688adbc Add github action for black (#116) 2024-01-12 22:06:13 -03:00
João Moura
8e7772c9c3 Adding support for translations (#120)
Add translations support
2024-01-12 14:49:36 -03:00
João Moura
ea7759b322 Revamp max iteration Logic (#111)
This now will allow to add a max_inter option to agents while also making sure to force the agent to give it's best final answer before running out of it's max_inter.
2024-01-11 12:32:54 -03:00
Greyson LaLonde
8cc51d5e9e Bump to langchain0.1.0 (#108)
* Bump `langchain`, `openai`; add `langchain-openai`

* Update imports to fix warnings
2024-01-11 09:33:43 -03:00
João Moura
fdd36b0766 Update README.md 2024-01-11 09:31:45 -03:00
João Moura
4f22bbf4d4 Update README.md 2024-01-10 21:00:37 -03:00
João Moura
34c1c0d76a starting to revamp docs 2024-01-10 13:12:31 -03:00
João Moura
feafa586ae fixing github action 2024-01-10 12:24:37 -03:00
João Moura
786691e97e replacing circleci with github actions 2024-01-10 12:05:42 -03:00
Greyson LaLonde
155368be3b Move to src dir usage (#99) 2024-01-10 11:39:36 -03:00
João Moura
a944cfc8d0 installing mkdocs as part of the github workflow 2024-01-10 00:46:56 -03:00
João Moura
bc7366b862 TYPO 2024-01-10 00:42:12 -03:00
João Moura
bb080c47f6 starting github actions for docs 2024-01-10 00:40:56 -03:00
João Moura
402137711c starting to setup new documentation 2024-01-10 00:30:18 -03:00
Greyson LaLonde
002da5a6f5 Add imports (#98) 2024-01-10 00:13:06 -03:00
João Moura
376fee952d updating logo 2024-01-10 00:08:39 -03:00
SashaXser
761f682d44 Refractoring (#88)
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-01-10 00:04:13 -03:00
João Moura
40aea44470 bringing output log 2024-01-09 23:57:35 -03:00
yanzz
8eba7aab89 improved readability (#90) 2024-01-09 23:29:50 -03:00
Chris
bc54d310f2 update example usage in README (#97) 2024-01-09 23:22:42 -03:00
João Moura
f102c2e7dd cutting new version v0.1.24 2024-01-07 21:36:14 -03:00
João Moura
1ce9a8540b removing reference for pydantic v1 2024-01-07 21:35:30 -03:00
João Moura
f101dc5592 Improving agent delegation prompt 2024-01-07 21:35:27 -03:00
Ikko Eltociear Ashimine
55de63f6fa Update README.md (#81)
bellow -> below
2024-01-07 13:37:30 -03:00
João Moura
7954f6b51c Reliability improvements (#77)
* fixing identation for AgentTools
* updating gitignore to exclude quick test script
* startingprompt translation
* supporting individual task output
* adding agent to task output
* cutting new version
* Updating README example
2024-01-07 12:43:23 -03:00
João Moura
234a2c72b0 Tools cache and delegation improvements (#68)
* Fixing repeated tool usage treatment
* Improving agent delegation prompt
2024-01-06 11:46:34 -03:00
João Moura
7a22b03713 Update README.md 2024-01-06 01:36:00 -03:00
Chris Bruner
52d404a267 Updated the main example in README.md (#61)
Update Example to mention local LLMs
2024-01-06 00:34:28 -03:00
João Moura
6e086fe574 Update README.md 2024-01-06 00:03:03 -03:00
João Moura
8206eb8915 Update README.md 2024-01-06 00:01:39 -03:00
João Moura
8288f38281 Update README.md 2024-01-06 00:01:07 -03:00
João Moura
99efb33b3f Update README.md 2024-01-05 16:06:48 -03:00
João Moura
57c870e15d Update README.md 2024-01-05 13:50:48 -03:00
João Moura
3f9c4df32d Better agent execution error handling (#54)
A few quality of life improvements around cache handling and repeated tool usage
2024-01-05 11:04:59 -03:00
João Moura
6b054651a7 Refactoring task cache to be a tool (#50)
* Refactoring task cache to be a tool

The previous implementation of the task caching system was early exiting
the agent executor due to the fact it was returning an AgentFinish object.

This now refactors it to use a cache specific tool that is dynamically
added and forced into the agent in case of a task execution that was
already executed with the same input.
2024-01-04 21:29:42 -03:00
João Moura
fe6bef0af1 Update README.md 2024-01-04 10:06:08 -03:00
João Moura
358e5fa534 Update README.md 2024-01-04 10:04:56 -03:00
João Moura
b5e9173cbb Update README.md 2024-01-04 10:04:31 -03:00
João Moura
14a081b814 Proper README example (#48) 2024-01-04 10:03:23 -03:00
João Moura
9a9319eea9 Update README.md 2024-01-03 20:21:59 -03:00
João Moura
05984093f0 bumping langchain version and cutting new version 2024-01-03 18:58:45 -03:00
João Moura
2c4851bd2e Updating README example 2024-01-03 18:58:45 -03:00
Scott Stoltzman
c2f403f0eb Change "agent" to "openhermes" in Ollama example (#33) 2024-01-03 10:38:14 -03:00
SuperMalinge
00e584312c Update output_parser.py (#42) 2024-01-02 20:52:12 -03:00
João Moura
f6c042e58e Update README.md 2024-01-02 18:51:44 -03:00
João Moura
fddeb0e672 Update README.md 2023-12-31 17:41:50 -03:00
Greyson LaLonde
f311afaab3 Remove model inheritance (#30) 2023-12-31 10:52:08 -03:00
Greyson LaLonde
0323191436 Implement CrewAIBaseModel and Update to ConfigDict (#29)
New CrewAIBaseModel:

Base for Agent, Crew, Task.
Includes generated, frozen UUID.
Adds hashing capability
Migrate to ConfigDict:

Replaces class Config with model_config, see this deprecation note .
Benefits:
Adds auditing capability with frozen UUIDs.
2023-12-30 21:52:04 -03:00
Ikko Eltociear Ashimine
fd4c850df7 Update README.md (#27)
Documention -> Documentation
2023-12-30 21:49:20 -03:00
138 changed files with 12092 additions and 1627 deletions

(Binary image files added; previews not shown.)

View File

@@ -1,27 +0,0 @@
version: 2.1
jobs:
  build-and-test:
    docker:
      - image: python:3.9.18
    steps:
      - checkout
      - run:
          name: Install poetry
          command: pip install poetry
      - run:
          name: Install dependencies
          command: poetry install
      - run:
          name: Update PATH and Define Environment Variable at Runtime
          command: |
            echo 'export OPENAI_API_KEY=fake-api-key' >> "$BASH_ENV"
            source "$BASH_ENV"
      - run:
          name: Run tests
          command: poetry run pytest
workflows:
  build-and-test:
    jobs:
      - build-and-test

10
.github/workflows/black.yml vendored Normal file
View File

@@ -0,0 +1,10 @@
name: Lint
on: [pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: psf/black@stable

47
.github/workflows/mkdocs.yml vendored Normal file
View File

@@ -0,0 +1,47 @@
name: Deploy MkDocs
on:
  workflow_dispatch:
  push:
    branches:
      - main
permissions:
  contents: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      - name: Calculate requirements hash
        id: req-hash
        run: echo "::set-output name=hash::$(sha256sum requirements-doc.txt | awk '{print $1}')"
      - name: Setup cache
        uses: actions/cache@v3
        with:
          key: mkdocs-material-${{ steps.req-hash.outputs.hash }}
          path: .cache
          restore-keys: |
            mkdocs-material-
      - name: Install Requirements
        run: |
          sudo apt-get update &&
          sudo apt-get install pngquant &&
          pip install mkdocs-material mkdocs-material-extensions pillow cairosvg
        env:
          GH_TOKEN: ${{ secrets.GH_TOKEN }}
      - name: Build and deploy MkDocs
        run: mkdocs gh-deploy --force

32
.github/workflows/tests.yml vendored Normal file
View File

@@ -0,0 +1,32 @@
name: Run Tests
on: [pull_request]
permissions:
  contents: write
env:
  OPENAI_API_KEY: fake-api-key
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      - name: Install Requirements
        run: |
          sudo apt-get update &&
          pip install poetry &&
          poetry lock &&
          poetry install
      - name: Run tests
        run: poetry run pytest

30
.github/workflows/type-checker.yml vendored Normal file
View File

@@ -0,0 +1,30 @@
name: Run Type Checks
on: [pull_request]
permissions:
  contents: write
jobs:
  type-checker:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      - name: Install Requirements
        run: |
          sudo apt-get update &&
          pip install poetry &&
          poetry lock &&
          poetry install
      - name: Run type checks
        run: poetry run pyright

3
.gitignore vendored
View File

@@ -4,4 +4,5 @@ __pycache__
dist/
.env
assets/*
.idea
.idea
test.py

187
README.md
View File

@@ -1,16 +1,35 @@
# crewAI
<div align="center">
![Logo of crewAI, tow people rowing on a boat](./crewai_logo.png)
![Logo of crewAI, two people rowing on a boat](./docs/crewai_logo.png)
🤖 Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
# **crewAI**
- [Why CrewAI](#why-crewai)
🤖 **crewAI**: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
<h3>
[Homepage](https://www.crewai.io/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Examples](https://github.com/joaomdmoura/crewai-examples) | [Discord](https://discord.com/invite/X4JWnZnxPb)
</h3>
[![GitHub Repo stars](https://img.shields.io/github/stars/joaomdmoura/crewAI)](https://github.com/joaomdmoura/crewAI)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
</div>
## Table of contents
- [Why CrewAI?](#why-crewai)
- [Getting Started](#getting-started)
- [Key Features](#key-features)
- [Examples](#examples)
- [Local Open Source Models](#local-open-source-models)
- [CrewAI x AutoGen x ChatDev](#how-crewai-compares)
- [Quick Tutorial](#quick-tutorial)
- [Trip Planner](#trip-planner)
- [Stock Analysis](#stock-analysis)
- [Connecting Your Crew to a Model](#connecting-your-crew-to-a-model)
- [How CrewAI Compares](#how-crewai-compares)
- [Contribution](#contribution)
- [Hire CrewAI](#hire-crewai)
- [License](#license)
## Why CrewAI?
@@ -18,102 +37,145 @@
The power of AI collaboration has too much to offer.
CrewAI is designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
- 🤖 [Talk with the Docs](https://chat.openai.com/g/g-qqTuUWsBY-crewai-assistant)
- 📄 [Documention Wiki](https://github.com/joaomdmoura/CrewAI/wiki)
## Getting Started
To get started with CrewAI, follow these simple steps:
1. **Installation**:
### 1. Installation
```shell
pip install crewai
```
2. **Setting Up Your Crew**:
The example below also uses DuckDuckGo's Search. You can install it with `pip` too:
```shell
pip install duckduckgo-search
```
### 2. Setting Up Your Crew
```python
import os
from crewai import Agent, Task, Crew, Process
os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
# You can choose to use a local model through Ollama for example. See ./docs/how-to/llm-connections.md for more information.
# from langchain_community.llms import Ollama
# ollama_llm = Ollama(model="openhermes")
# Install duckduckgo-search for this example:
# !pip install -U duckduckgo-search
from langchain_community.tools import DuckDuckGoSearchRun
search_tool = DuckDuckGoSearchRun()
# Define your agents with roles and goals
researcher = Agent(
role='Researcher',
goal='Discover new insights',
backstory="You're a world class researcher working on a major data science company",
role='Senior Research Analyst',
goal='Uncover cutting-edge developments in AI and data science',
backstory="""You work at a leading tech think tank.
Your expertise lies in identifying emerging trends.
You have a knack for dissecting complex data and presenting actionable insights.""",
verbose=True,
allow_delegation=False
# llm=OpenAI(temperature=0.7, model_name="gpt-4"). It uses langchain.chat_models, default is GPT4
allow_delegation=False,
tools=[search_tool]
# You can pass an optional llm attribute specifying which model you want to use.
# It can be a local model through Ollama / LM Studio or a remote
# model like OpenAI, Mistral, Anthropic or others (https://python.langchain.com/docs/integrations/llms/)
#
# Examples:
#
# from langchain_community.llms import Ollama
# llm=ollama_llm # was defined above in the file
#
# from langchain_openai import ChatOpenAI
# llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7)
)
writer = Agent(
role='Writer',
goal='Create engaging content',
backstory="You're a famous technical writer, specialized on writing data related content",
role='Tech Content Strategist',
goal='Craft compelling content on tech advancements',
backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
You transform complex concepts into compelling narratives.""",
verbose=True,
allow_delegation=False
allow_delegation=True,
# (optional) llm=ollama_llm
)
# Create tasks for your agents
task1 = Task(description='Investigate the latest AI trends', agent=researcher)
task2 = Task(description='Write a blog post on AI advancements', agent=writer)
task1 = Task(
description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
Identify key trends, breakthrough technologies, and potential industry impacts.
Your final answer MUST be a full analysis report""",
agent=researcher
)
task2 = Task(
description="""Using the insights provided, develop an engaging blog
post that highlights the most significant AI advancements.
Your post should be informative yet accessible, catering to a tech-savvy audience.
Make it sound cool, avoid complex words so it doesn't sound like AI.
Your final answer MUST be the full blog post of at least 4 paragraphs.""",
agent=writer
)
# Instantiate your crew with a sequential process
crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
verbose=2, # Crew verbose more will let you know what tasks are being worked on, you can set it to 1 or 2 to different logging levels
process=Process.sequential # Sequential process will have tasks executed one after the other and the outcome of the previous one is passed as extra content into this next.
verbose=2, # You can set it to 1 or 2 for different logging levels
)
# Get your crew to work!
result = crew.kickoff()
print("######################")
print(result)
```
Currently the only supported process is `Process.sequential`, where one task is executed after the other and the outcome of one is passed as extra content into this next.
In addition to the sequential process, you can use the hierarchical process, which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results. [See more about the processes here](https://docs.crewai.com/core-concepts/Processes/).
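For illustration, a minimal sketch of switching the crew above to the hierarchical process; it reuses `researcher`, `writer`, `task1` and `task2` from the example, and assumes the `manager_llm` argument referenced in the "adding manager_llm" commit above accepts a LangChain chat model:

```python
# Illustrative continuation of the example above (reuses researcher, writer, task1, task2).
# `manager_llm` is assumed to take the model the auto-assigned manager uses for
# planning, delegation and result validation.
from langchain_openai import ChatOpenAI
from crewai import Crew, Process

crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    process=Process.hierarchical,
    manager_llm=ChatOpenAI(model_name="gpt-4", temperature=0.7),
    verbose=2
)

result = crew.kickoff()
```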
## Key Features
- **Role-Based Agent Design**: Customize agents with specific roles, goals, and tools.
- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enhancing problem-solving efficiency.
- **Flexible Task Management**: Define tasks with customizable tools and assign them to agents dynamically.
- **Processes Driven**: Currently only supports `sequential` task execution but more complex processes like consensual and hierarchical being worked on.
- **Processes Driven**: Currently only supports `sequential` task execution and `hierarchical` processes, but more complex processes like consensual and autonomous are being worked on.
- **Works with Open Source Models**: Run your crew using OpenAI or open-source models; refer to the [Connect crewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models, even ones running locally!
![CrewAI Mind Map](/crewAI-mindmap.png "CrewAI Mind Map")
![CrewAI Mind Map](./docs/crewAI-mindmap.png "CrewAI Mind Map")
## Examples
You can test different real life examples of AI crews [in the examples repo](https://github.com/joaomdmoura/crewAI-examples?tab=readme-ov-file)
## Local Open Source Models
crewAI supports integration with local models, thorugh tools such as [Ollama](https://ollama.ai/), for enhanced flexibility and customization. This allows you to utilize your own models, which can be particularly useful for specialized tasks or data privacy concerns.
You can test different real life examples of AI crews in the [crewAI-examples repo](https://github.com/joaomdmoura/crewAI-examples?tab=readme-ov-file):
### Setting Up Ollama
- **Install Ollama**: Ensure that Ollama is properly installed in your environment. Follow the installation guide provided by Ollama for detailed instructions.
- **Configure Ollama**: Set up Ollama to work with your local model. You will probably need to [tweak the model using a Modelfile](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md). I'd recommend adding `Observation` as a stop word and playing with `top_p` and `temperature`.
- [Landing Page Generator](https://github.com/joaomdmoura/crewAI-examples/tree/main/landing_page_generator)
- [Having Human input on the execution](https://docs.crewai.com/how-to/Human-Input-on-Execution)
- [Trip Planner](https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner)
- [Stock Analysis](https://github.com/joaomdmoura/crewAI-examples/tree/main/stock_analysis)
### Integrating Ollama with CrewAI
- Instantiate Ollama Model: Create an instance of the Ollama model. You can specify the model and the base URL during instantiation. For example:
### Quick Tutorial
```python
from langchain.llms import Ollama
ollama_openhermes = Ollama(model="agent")
# Pass Ollama Model to Agents: When creating your agents within the CrewAI framework, you can pass the Ollama model as an argument to the Agent constructor. For instance:
[![CrewAI Tutorial](https://img.youtube.com/vi/tnejrr-0a94/maxresdefault.jpg)](https://www.youtube.com/watch?v=tnejrr-0a94 "CrewAI Tutorial")
local_expert = Agent(
role='Local Expert at this city',
goal='Provide the BEST insights about the selected city',
backstory="""A knowledgeable local guide with extensive information
about the city, it's attractions and customs""",
tools=[
SearchTools.search_internet,
BrowserTools.scrape_and_summarize_website,
],
llm=ollama_openhermes, # Ollama model passed here
verbose=True
)
```
### Trip Planner
[Check out code for this example](https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner) or watch a video below:
[![Trip Planner](https://img.youtube.com/vi/xis7rWp-hjs/maxresdefault.jpg)](https://www.youtube.com/watch?v=xis7rWp-hjs "Trip Planner")
### Stock Analysis
[Check out code for this example](https://github.com/joaomdmoura/crewAI-examples/tree/main/stock_analysis) or watch a video below:
[![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/maxresdefault.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")
## Connecting Your Crew to a Model
crewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
Please refer to the [Connect crewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models.
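As a rough sketch, following the commented-out Ollama hints in the example above (and assuming a local Ollama server with the `openhermes` model pulled), passing a local model to an agent might look like this:

```python
# Sketch: pointing an agent at a local Ollama model instead of the default OpenAI API.
# Assumes Ollama is running locally and the `openhermes` model has been pulled.
from langchain_community.llms import Ollama
from crewai import Agent

ollama_llm = Ollama(model="openhermes")

researcher = Agent(
    role='Senior Research Analyst',
    goal='Uncover cutting-edge developments in AI and data science',
    backstory='You work at a leading tech think tank.',
    llm=ollama_llm,  # local model passed in place of the default
    verbose=True
)
```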
## How CrewAI Compares
@@ -134,12 +196,14 @@ CrewAI is open-source and we welcome contributions. If you're looking to contrib
- We appreciate your input!
### Installing Dependencies
```bash
poetry lock
poetry install
```
### Virtual Env
```bash
poetry shell
```
@@ -151,21 +215,34 @@ pre-commit install
```
### Running Tests
```bash
poetry run pytest
```
### Running static type checks
```bash
poetry run pyright
```
### Packaging
```bash
poetry build
```
### Installing Locally
```bash
pip install dist/*.tar.gz
```
## Hire CrewAI
We're the company developing crewAI and crewAI Enterprise. For a limited time we are offering consulting to selected customers to give them early access to our enterprise solution.
If you are interested in getting access to it and hiring weekly hours with our team, feel free to email us at [joao@crewai.com](mailto:joao@crewai.com).
## License
CrewAI is released under the MIT License
CrewAI is released under the MIT License.

(Binary image file removed; 431 KiB, preview not shown.)

View File

@@ -1,155 +0,0 @@
from typing import Any, List, Optional
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryMemory
from langchain.tools.render import render_text_description
from langchain_core.runnables.config import RunnableConfig
from pydantic import BaseModel, Field, InstanceOf, model_validator
from crewai.agents import CacheHandler, CrewAgentOutputParser, ToolsHandler
from crewai.prompts import Prompts
class Agent(BaseModel):
"""Represents an agent in a system.
Each agent has a role, a goal, a backstory, and an optional language model (llm).
The agent can also have memory, can operate in verbose mode, and can delegate tasks to other agents.
Attributes:
agent_executor: An instance of the AgentExecutor class.
role: The role of the agent.
goal: The objective of the agent.
backstory: The backstory of the agent.
llm: The language model that will run the agent.
memory: Whether the agent should have memory or not.
verbose: Whether the agent execution should be in verbose mode.
allow_delegation: Whether the agent is allowed to delegate tasks to other agents.
"""
class Config:
arbitrary_types_allowed = True
role: str = Field(description="Role of the agent")
goal: str = Field(description="Objective of the agent")
backstory: str = Field(description="Backstory of the agent")
llm: Optional[Any] = Field(
default_factory=lambda: ChatOpenAI(
temperature=0.7,
model_name="gpt-4",
),
description="Language model that will run the agent.",
)
memory: bool = Field(
default=True, description="Whether the agent should have memory or not"
)
verbose: bool = Field(
default=False, description="Verbose mode for the Agent Execution"
)
allow_delegation: bool = Field(
default=True, description="Allow delegation of tasks to agents"
)
tools: List[Any] = Field(
default_factory=list, description="Tools at agents disposal"
)
agent_executor: Optional[InstanceOf[AgentExecutor]] = Field(
default=None, description="An instance of the AgentExecutor class."
)
tools_handler: Optional[InstanceOf[ToolsHandler]] = Field(
default=None, description="An instance of the ToolsHandler class."
)
cache_handler: Optional[InstanceOf[CacheHandler]] = Field(
default=CacheHandler(), description="An instance of the CacheHandler class."
)
@model_validator(mode="after")
def check_agent_executor(self) -> "Agent":
if not self.agent_executor:
self.set_cache_handler(self.cache_handler)
return self
def execute_task(
self, task: str, context: str = None, tools: List[Any] = None
) -> str:
"""Execute a task with the agent.
Args:
task: Task to execute.
context: Context to execute the task in.
tools: Tools to use for the task.
Returns:
Output of the agent
"""
if context:
task = "\n".join(
[task, "\nThis is the context you are working with:", context]
)
tools = tools or self.tools
self.agent_executor.tools = tools
return self.agent_executor.invoke(
{
"input": task,
"tool_names": self.__tools_names(tools),
"tools": render_text_description(tools),
},
RunnableConfig(callbacks=[self.tools_handler]),
)["output"]
def set_cache_handler(self, cache_handler) -> None:
self.cache_handler = cache_handler
self.tools_handler = ToolsHandler(cache=self.cache_handler)
self.__create_agent_executor()
def __create_agent_executor(self) -> AgentExecutor:
"""Create an agent executor for the agent.
Returns:
An instance of the AgentExecutor class.
"""
agent_args = {
"input": lambda x: x["input"],
"tools": lambda x: x["tools"],
"tool_names": lambda x: x["tool_names"],
"agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
}
executor_args = {
"tools": self.tools,
"verbose": self.verbose,
"handle_parsing_errors": True,
}
if self.memory:
summary_memory = ConversationSummaryMemory(
llm=self.llm, memory_key="chat_history", input_key="input"
)
executor_args["memory"] = summary_memory
agent_args["chat_history"] = lambda x: x["chat_history"]
prompt = Prompts.TASK_EXECUTION_WITH_MEMORY_PROMPT
else:
prompt = Prompts.TASK_EXECUTION_PROMPT
execution_prompt = prompt.partial(
goal=self.goal,
role=self.role,
backstory=self.backstory,
)
bind = self.llm.bind(stop=["\nObservation"])
inner_agent = (
agent_args
| execution_prompt
| bind
| CrewAgentOutputParser(
tools_handler=self.tools_handler, cache=self.cache_handler
)
)
self.agent_executor = AgentExecutor(agent=inner_agent, **executor_args)
@staticmethod
def __tools_names(tools) -> str:
return ", ".join([t.name for t in tools])

View File

@@ -1,122 +0,0 @@
import json
from typing import Any, Dict, List, Optional, Union
from pydantic import (
BaseModel,
Field,
InstanceOf,
Json,
field_validator,
model_validator,
)
from pydantic_core import PydanticCustomError
from crewai.agent import Agent
from crewai.agents import CacheHandler
from crewai.process import Process
from crewai.task import Task
from crewai.tools.agent_tools import AgentTools
class Crew(BaseModel):
"""Class that represents a group of agents, how they should work together and their tasks."""
class Config:
arbitrary_types_allowed = True
tasks: List[Task] = Field(description="List of tasks", default_factory=list)
agents: List[Agent] = Field(
description="List of agents in this crew.", default_factory=list
)
process: Process = Field(
description="Process that the crew will follow.", default=Process.sequential
)
verbose: Union[int, bool] = Field(
description="Verbose mode for the Agent Execution", default=0
)
config: Optional[Union[Json, Dict[str, Any]]] = Field(
description="Configuration of the crew.", default=None
)
cache_handler: Optional[InstanceOf[CacheHandler]] = Field(
default=CacheHandler(), description="An instance of the CacheHandler class."
)
@classmethod
@field_validator("config", mode="before")
def check_config_type(cls, v: Union[Json, Dict[str, Any]]):
if isinstance(v, Json):
return json.loads(v)
return v
@model_validator(mode="after")
def check_config(self):
if not self.config and not self.tasks and not self.agents:
raise PydanticCustomError(
"missing_keys", "Either agents and task need to be set or config.", {}
)
if self.config:
if not self.config.get("agents") or not self.config.get("tasks"):
raise PydanticCustomError(
"missing_keys_in_config", "Config should have agents and tasks", {}
)
self.agents = [Agent(**agent) for agent in self.config["agents"]]
tasks = []
for task in self.config["tasks"]:
task_agent = [agt for agt in self.agents if agt.role == task["agent"]][
0
]
del task["agent"]
tasks.append(Task(**task, agent=task_agent))
self.tasks = tasks
if self.agents:
for agent in self.agents:
agent.set_cache_handler(self.cache_handler)
return self
def kickoff(self) -> str:
"""Kickoff the crew to work on its tasks.
Returns:
Output of the crew for each task.
"""
for agent in self.agents:
agent.cache_handler = self.cache_handler
if self.process == Process.sequential:
return self.__sequential_loop()
def __sequential_loop(self) -> str:
"""Loop that executes the sequential process.
Returns:
Output of the crew.
"""
task_outcome = None
for task in self.tasks:
# Add delegation tools to the task if the agent allows it
if task.agent.allow_delegation:
tools = AgentTools(agents=self.agents).tools()
task.tools += tools
self.__log("debug", f"Working Agent: {task.agent.role}")
self.__log("info", f"Starting Task: {task.description} ...")
task_outcome = task.execute(task_outcome)
self.__log("debug", f"Task output: {task_outcome}")
return task_outcome
def __log(self, level, message):
"""Log a message"""
level_map = {"debug": 1, "info": 2}
verbose_level = (
2 if isinstance(self.verbose, bool) and self.verbose else self.verbose
)
if verbose_level and level_map[level] <= verbose_level:
print(message)

View File

@@ -1,84 +0,0 @@
"""Prompts for generic agent."""
from textwrap import dedent
from typing import ClassVar
from langchain.prompts import PromptTemplate
from pydantic import BaseModel
class Prompts(BaseModel):
"""Prompts for generic agent."""
TASK_SLICE: ClassVar[str] = dedent(
"""\
Begin! This is VERY important to you, your job depends on it!
Current Task: {input}"""
)
SCRATCHPAD_SLICE: ClassVar[str] = "\n{agent_scratchpad}"
MEMORY_SLICE: ClassVar[str] = dedent(
"""\
This is the summary of your work so far:
{chat_history}"""
)
ROLE_PLAYING_SLICE: ClassVar[str] = dedent(
"""\
You are {role}.
{backstory}
Your personal goal is: {goal}"""
)
TOOLS_SLICE: ClassVar[str] = dedent(
"""\
TOOLS:
------
You have access to the following tools:
{tools}
To use a tool, please use the exact following format:
```
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}], just the name.
Action Input: the input to the action
Observation: the result of the action
```
When you have a response for your task, or if you do not need to use a tool, you MUST use the format:
```
Thought: Do I need to use a tool? No
Final Answer: [your response here]
```"""
)
VOTING_SLICE: ClassVar[str] = dedent(
"""\
You are working on a crew with your co-workers and need to decide who will execute the task.
These are your format instructions:
{format_instructions}
These are your co-workers and their roles:
{coworkers}"""
)
TASK_EXECUTION_WITH_MEMORY_PROMPT: ClassVar[str] = PromptTemplate.from_template(
ROLE_PLAYING_SLICE + TOOLS_SLICE + MEMORY_SLICE + TASK_SLICE + SCRATCHPAD_SLICE
)
TASK_EXECUTION_PROMPT: ClassVar[str] = PromptTemplate.from_template(
ROLE_PLAYING_SLICE + TOOLS_SLICE + TASK_SLICE + SCRATCHPAD_SLICE
)
CONSENSUNS_VOTING_PROMPT: ClassVar[str] = PromptTemplate.from_template(
ROLE_PLAYING_SLICE + VOTING_SLICE + TASK_SLICE + SCRATCHPAD_SLICE
)

View File

@@ -1,39 +0,0 @@
from typing import Any, List, Optional
from pydantic import BaseModel, Field, model_validator
from crewai.agent import Agent
class Task(BaseModel):
"""Class that represent a task to be executed."""
description: str = Field(description="Description of the actual task.")
agent: Optional[Agent] = Field(
description="Agent responsible for the task.", default=None
)
tools: List[Any] = Field(
default_factory=list,
description="Tools the agent are limited to use for this task.",
)
@model_validator(mode="after")
def check_tools(self):
if not self.tools and (self.agent and self.agent.tools):
self.tools.extend(self.agent.tools)
return self
def execute(self, context: str = None) -> str:
"""Execute the task.
Returns:
Output of the task.
"""
if self.agent:
return self.agent.execute_task(
task=self.description, context=context, tools=self.tools
)
else:
raise Exception(
f"The task '{self.description}' has no agent assigned, therefore it can't be executed directly and should be executed in a Crew using a specific process that support that, either consensual or hierarchical."
)

View File

@@ -1,72 +0,0 @@
from textwrap import dedent
from typing import List
from langchain.tools import Tool
from pydantic import BaseModel, Field
from crewai.agent import Agent
class AgentTools(BaseModel):
"""Tools for generic agent."""
agents: List[Agent] = Field(description="List of agents in this crew.")
def tools(self):
return [
Tool.from_function(
func=self.delegate_work,
name="Delegate work to co-worker",
description=dedent(
f"""Useful to delegate a specific task to one of the
following co-workers: [{', '.join([agent.role for agent in self.agents])}].
The input to this tool should be a pipe (|) separated text of length
three, representing the role you want to delegate it to, the task and
information necessary. For example, `coworker|task|information`.
"""
),
),
Tool.from_function(
func=self.ask_question,
name="Ask question to co-worker",
description=dedent(
f"""Useful to ask a question, opinion or take from on
of the following co-workers: [{', '.join([agent.role for agent in self.agents])}].
The input to this tool should be a pipe (|) separated text of length
three, representing the role you want to ask it to, the question and
information necessary. For example, `coworker|question|information`.
"""
),
),
]
def delegate_work(self, command):
"""Useful to delegate a specific task to a coworker."""
return self.__execute(command)
def ask_question(self, command):
"""Useful to ask a question, opinion or take from a coworker."""
return self.__execute(command)
def __execute(self, command):
"""Execute the command."""
try:
agent, task, information = command.split("|")
except ValueError:
return "\nError executing tool. Missing exact 3 pipe (|) separated values. For example, `coworker|task|information`."
if not agent or not task or not information:
return "\nError executing tool. Missing exact 3 pipe (|) separated values. For example, `coworker|question|information`."
agent = [
available_agent
for available_agent in self.agents
if available_agent.role == agent
]
if len(agent) == 0:
return f"\nError executing tool. Co-worker mentioned on the Action Input not found, it must to be one of the following options: {', '.join([agent.role for agent in self.agents])}."
agent = agent[0]
result = agent.execute_task(task, information)
return result

(Binary image file removed; 94 KiB, preview not shown.)

View File

@@ -0,0 +1,58 @@
---
title: crewAI Agents
description: What are crewAI Agents and how to use them.
---
## What is an Agent?
!!! note "What is an Agent?"
An agent is an **autonomous unit** programmed to:
<ul>
<li class='leading-3'>Perform tasks</li>
<li class='leading-3'>Make decisions</li>
<li class='leading-3'>Communicate with other agents</li>
<br/>
Think of an agent as a member of a team, with specific skills and a particular job to do. Agents can have different roles like 'Researcher', 'Writer', or 'Customer Support', each contributing to the overall goal of the crew.
## Agent Attributes
| Attribute | Description |
| :---------- | :----------------------------------- |
| **Role** | Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for. |
| **Goal** | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |
| **Backstory** | Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics. |
| **Tools** | Set of capabilities or functions that the agent can use to perform tasks. Tools can be shared or exclusive to specific agents. |
| **Max Iter** | The maximum number of iterations the agent can perform before being forced to give its best answer. |
| **Max RPM** | The maximum number of requests per minute the agent can make, to avoid hitting rate limits. |
| **Verbose** | Allows you to see in detail what is going on during the crew's execution. |
| **Allow Delegation** | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. |
## Creating an Agent
!!! note "Agent Interaction"
Agents can interact with each other using CrewAI's built-in delegation and communication mechanisms.<br/>This allows for dynamic task management and problem-solving within the crew.
To create an agent, you would typically initialize an instance of the `Agent` class with the desired properties. Here's a conceptual example:
```python
# Example: Creating an agent with all attributes
from crewai import Agent
agent = Agent(
role='Data Analyst',
goal='Extract actionable insights',
backstory="""You're a data analyst at a large company.
You're responsible for analyzing data and providing insights
to the business.
You're currently working on a project to analyze the
performance of our marketing campaigns.""",
tools=[my_tool1, my_tool2],
max_iter=10,
max_rpm=10,
verbose=True,
allow_delegation=True
)
```
## Conclusion
Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents, you can create sophisticated AI systems that leverage the power of collaborative intelligence.

View File

@@ -0,0 +1,24 @@
---
title: How Agents Collaborate in CrewAI
description: Exploring the dynamics of agent collaboration within the CrewAI framework.
---
## Collaboration Fundamentals
!!! note "Core of Agent Interaction"
Collaboration in CrewAI is fundamental, enabling agents to combine their skills, share information, and assist each other in task execution, embodying a truly cooperative ecosystem.
- **Information Sharing**: Ensures all agents are well-informed and can contribute effectively by sharing data and findings.
- **Task Assistance**: Allows agents to seek help from peers with the required expertise for specific tasks.
- **Resource Allocation**: Optimizes task execution through the efficient distribution and sharing of resources among agents.
## Delegation: Dividing to Conquer
Delegation enhances functionality by allowing agents to intelligently assign tasks or seek help, thereby amplifying the crew's overall capability.
## Implementing Collaboration and Delegation
Setting up a crew involves defining the roles and capabilities of each agent. CrewAI seamlessly manages their interactions, ensuring efficient collaboration and delegation.
## Example Scenario
Imagine a crew with a researcher agent tasked with data gathering and a writer agent responsible for compiling reports. The writer can delegate research tasks or ask questions to the researcher, facilitating a seamless workflow.
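A minimal sketch of that scenario follows; the roles, goals and task text are illustrative. Enabling `allow_delegation=True` on the writer is what exposes the built-in delegation and question-asking tools:

```python
# Sketch of the researcher/writer scenario above; roles, goals and the task text
# are illustrative. Enabling delegation on the writer exposes the built-in
# "Delegate work to co-worker" and "Ask question to co-worker" tools.
import os
from crewai import Agent, Task, Crew, Process

os.environ["OPENAI_API_KEY"] = "Your Key"

researcher = Agent(
    role='Researcher',
    goal='Gather data on the assigned topic',
    backstory='A meticulous analyst who digs up reliable sources.',
    allow_delegation=False
)
writer = Agent(
    role='Writer',
    goal='Compile clear, well-sourced reports',
    backstory='A writer who turns raw findings into readable documents.',
    allow_delegation=True  # lets the writer delegate to or question the researcher
)

report_task = Task(
    description='Compile a short report on recent AI developments, asking the Researcher for any data you are missing.',
    agent=writer
)

crew = Crew(agents=[researcher, writer], tasks=[report_task], process=Process.sequential)
result = crew.kickoff()
```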
## Conclusion
Collaboration and delegation are pivotal, transforming individual AI agents into a coherent, intelligent crew capable of tackling complex tasks. CrewAI's framework not only simplifies these interactions but enhances their effectiveness, paving the way for sophisticated AI-driven solutions.

View File

@@ -0,0 +1,48 @@
---
title: Managing Processes in CrewAI
description: An overview of workflow management through processes in CrewAI.
---
## Understanding Processes
!!! note "Core Concept"
Processes in CrewAI orchestrate how tasks are executed by agents, akin to project management in human teams. They ensure tasks are distributed and completed efficiently, according to a predefined game plan.
## Process Implementations
- **Sequential**: Executes tasks one after another, ensuring a linear and orderly progression.
- **Hierarchical**: Implements a chain of command, where tasks are delegated and executed based on a managerial structure.
- **Consensual (WIP)**: Future process type aiming for collaborative decision-making among agents on task execution.
## The Role of Processes in Teamwork
Processes transform individual agents into a unified team, coordinating their efforts to achieve common goals with efficiency and harmony.
## Assigning Processes to a Crew
Specify the process during crew creation to determine the execution strategy:
```python
from crewai import Crew
from crewai.process import Process
# Example: Creating a crew with a sequential process
crew = Crew(agents=my_agents, tasks=my_tasks, process=Process.sequential)
# Example: Creating a crew with a hierarchical process
crew = Crew(agents=my_agents, tasks=my_tasks, process=Process.hierarchical)
```
## Sequential Process
Ensures a natural flow of work, mirroring human team dynamics by progressing through tasks thoughtfully and systematically.
Tasks need to be pre-assigned to agents, and the order of execution is determined by the order of the tasks in the list.
Tasks are executed one after another, ensuring a linear and orderly progression, and the output of one task is automatically used as context for the next task.
You can also define which specific tasks' outputs should be used as context for another task by using the `context` parameter of the `Task` class.
## Hierarchical Process
Mimics a corporate hierarchy, where a manager oversees task execution, planning, delegation, and validation, enhancing task coordination.
In this process, tasks don't need to be pre-assigned to agents; the manager decides which agent will perform each task, reviews the output, and decides whether the task is complete.
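As a brief sketch of that idea, the tasks below carry no `agent` assignment and a manager model coordinates who executes what; the `manager_llm` argument is an assumption based on the "adding manager_llm" commit in the history above:

```python
# Illustrative sketch only. The tasks are deliberately created without `agent=`,
# so the auto-created manager (driven by `manager_llm`, assumed here) delegates them.
import os
from langchain_openai import ChatOpenAI
from crewai import Agent, Task, Crew, Process

os.environ["OPENAI_API_KEY"] = "Your Key"

analyst = Agent(
    role='Analyst',
    goal='Analyze the provided data',
    backstory='A detail-oriented data analyst.'
)
reporter = Agent(
    role='Reporter',
    goal='Summarize findings for a general audience',
    backstory='A clear and concise writer.'
)

tasks = [
    Task(description='Analyze last quarter usage data and list the three main trends.'),
    Task(description='Write a one-page summary of the findings.'),
]

crew = Crew(
    agents=[analyst, reporter],
    tasks=tasks,
    process=Process.hierarchical,
    manager_llm=ChatOpenAI(model_name="gpt-4")  # assumed manager model argument
)
result = crew.kickoff()
```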
## Conclusion
Processes are vital for structured collaboration within CrewAI, enabling agents to work together systematically. Future updates will introduce new processes, further mimicking the adaptability and complexity of human teamwork.

212
docs/core-concepts/Tasks.md Normal file
View File

@@ -0,0 +1,212 @@
---
title: crewAI Tasks
description: Overview and management of tasks within the crewAI framework.
---
## Overview of a Task
!!! note "What is a Task?"
In the CrewAI framework, tasks are individual assignments that agents complete. They encapsulate necessary information for execution, including a description, assigned agent, and required tools, offering flexibility for various action complexities.
Tasks in CrewAI can be designed to require collaboration between agents. For example, one agent might gather data while another analyzes it. This collaborative approach can be defined within the task properties and managed by the Crew's process.
## Task Attributes
| Attribute | Description |
| :---------- | :----------------------------------- |
| **Description** | A clear, concise statement of what the task entails. |
| **Agent** | Optionally, you can specify which agent is responsible for the task. If not, the crew's process will determine who takes it on. |
| **Expected Output** *(optional)* | Clear and detailed definition of expected output for the task. |
| **Tools** *(optional)* | These are the functions or capabilities the agent can utilize to perform the task. They can be anything from simple actions like 'search' to more complex interactions with other agents or APIs. |
| **Async Execution** *(optional)* | If the task should be executed asynchronously. |
| **Context** *(optional)* | Other tasks whose output will be used as context for this task; if any of them is asynchronous, this task will wait for it to finish. |
| **Callback** *(optional)* | A function to be executed after the task is completed. |
## Creating a Task
This is the simplest example of creating a task; it involves defining its scope and agent, but there are optional attributes that can provide a lot of flexibility:
```python
from crewai import Task
task = Task(
description='Find and summarize the latest and most relevant news on AI',
agent=sales_agent
)
```
!!! note "Task Assignment"
Tasks can be assigned directly by specifying an `agent` for them, or they can be assigned at run time when using the `hierarchical` process, which considers roles, availability, and other criteria.
## Integrating Tools with Tasks
Tools from the [crewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools) enhance task performance, allowing agents to interact more effectively with their environment. Assigning specific tools to tasks can tailor agent capabilities to particular needs.
## Creating a Task with Tools
```python
import os
os.environ["OPENAI_API_KEY"] = "Your Key"
from crewai import Agent, Task, Crew
from langchain.agents import Tool
from langchain_community.tools import DuckDuckGoSearchRun
research_agent = Agent(
role='Researcher',
goal='Find and summarize the latest AI news',
backstory="""You're a researcher at a large company.
You're responsible for analyzing data and providing insights
to the business.""",
verbose=True
)
# Install duckduckgo-search for this example:
# !pip install -U duckduckgo-search
search_tool = DuckDuckGoSearchRun()
task = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool]
)
crew = Crew(
agents=[research_agent],
tasks=[task],
verbose=2
)
result = crew.kickoff()
print(result)
```
This demonstrates how tasks with specific tools can override an agent's default set for tailored task execution.
## Referring to other Tasks
In crewAI, the output of one task is automatically relayed into the next one, but you can specifically define which tasks' output should be used as context for another task.
This is useful when you have a task that depends on the output of another task that is not performed immediately after it. This is done through the `context` attribute of the task:
```python
# ...
research_task = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool]
)
write_blog_task = Task(
description="Write a full blog post about the importante of AI and it's latest news",
expected_output='Full blog post that is 4 paragraphs long',
agent=writer_agent,
context=[research_task]
)
#...
```
## Asynchronous Execution
You can define a task to be executed asynchronously. This means the crew will not wait for it to complete before continuing with the next task. This is useful for tasks that take a long time to complete, or that are not crucial for the next tasks to be performed.
You can then use the `context` attribute to define in a future task that it should wait for the output of the asynchronous task to be completed.
```python
#...
list_ideas = Task(
description="List of 5 interesting ideas to explore for na article about AI.",
expected_output="Bullet point list of 5 ideas for an article.",
agent=researcher,
async_execution=True # Will be executed asynchronously
)
list_important_history = Task(
description="Research the history of AI and give me the 5 most important events.",
expected_output="Bullet point list of 5 important events.",
agent=researcher,
async_execution=True # Will be executed asynchronously
)
write_article = Task(
description="Write an article about AI, it's history and interesting ideas.",
expected_output="A 4 paragraph article about AI.",
agent=writer,
context=[list_ideas, list_important_history] # Will wait for the output of the two tasks to be completed
)
#...
```
## Callback Mechanism
You can define a callback function that will be executed after the task is completed. This is useful for tasks that need to trigger some side effect after they are completed, while the crew is still running.
```python
# ...
def callback_function(output: TaskOutput):
# Do something after the task is completed
# Example: Send an email to the manager
print(f"""
Task completed!
Task: {output.description}
Output: {output.result}
""")
research_task = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool],
callback=callback_function
)
#...
```
## Accessing a specific Task Output
Once a crew finishes running, you can access the output of a specific task by using the `output` attribute of the task object:
```python
# ...
task1 = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool]
)
#...
crew = Crew(
agents=[research_agent],
tasks=[task1, task2, task3],
verbose=2
)
result = crew.kickoff()
# Returns a TaskOutput object with the description and results of the task
print(f"""
Task completed!
Task: {task1.output.description}
Output: {task1.output.result}
""")
```
## Tool Override Mechanism
Specifying tools in a task allows for dynamic adaptation of agent capabilities, emphasizing CrewAI's flexibility.
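A compact sketch of that override follows; the `read_notes` tool is a made-up placeholder for the agent's default toolset, while `DuckDuckGoSearchRun` is the search tool used in the example earlier:

```python
# Sketch of the tool override: `read_notes` is a placeholder default tool for the
# agent, and the task-level `tools` list replaces it for this particular task.
from crewai import Agent, Task
from langchain.tools import tool
from langchain_community.tools import DuckDuckGoSearchRun

@tool("Read internal notes")
def read_notes(query: str) -> str:
    """Return internal notes matching the query (placeholder implementation)."""
    return f"No notes found for: {query}"

search_tool = DuckDuckGoSearchRun()

writer = Agent(
    role='Writer',
    goal='Produce well-researched articles',
    backstory='An experienced tech writer.',
    tools=[read_notes]        # default toolset, used when a task specifies no tools
)

focused_task = Task(
    description='Draft a short article using only web search results.',
    agent=writer,
    tools=[search_tool]       # overrides the agent's default tools for this task
)
```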
## Conclusion
Tasks are the driving force behind the actions of agents in crewAI. By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit.
Equipping tasks with appropriate tools is crucial for maximizing CrewAI's potential, ensuring agents are effectively prepared for their assignments.

View File

@@ -0,0 +1,83 @@
---
title: crewAI Tools
description: Understanding and leveraging tools within the crewAI framework.
---
## What is a Tool?
!!! note "Definition"
A tool in CrewAI is a skill, something agents can use to perform tasks. Right now these can be tools from the [crewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools); they are essentially functions that an agent can utilize for various actions, from simple searches to complex interactions with external systems.
## Key Characteristics of Tools
- **Utility**: Designed for specific tasks such as web searching, data analysis, or content generation.
- **Integration**: Enhance agent capabilities by integrating tools directly into their workflow.
- **Customizability**: Offers the flexibility to develop custom tools or use existing ones from LangChain's ecosystem.
## Creating your own Tools
!!! example "Custom Tool Creation"
Developers can craft custom tools tailored to their agents' needs or utilize pre-built options. Here's how to create one:
```python
import json
import requests
from crewai import Agent
from langchain.tools import tool
from unstructured.partition.html import partition_html
class BrowserTools():
# Annotate the function with the tool decorator from LangChain
@tool("Scrape website content")
def scrape_website(website):
# Write logic for the tool.
# In this case a function to scrape website content
url = f"https://chrome.browserless.io/content?token={config('BROWSERLESS_API_KEY')}"
payload = json.dumps({"url": website})
headers = {'cache-control': 'no-cache', 'content-type': 'application/json'}
response = requests.request("POST", url, headers=headers, data=payload)
elements = partition_html(text=response.text)
content = "\n\n".join([str(el) for el in elements])
return content[:5000]
# Assign the scraping tool to an agent
agent = Agent(
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[BrowserTools().scrape_website]
)
```
## Using LangChain Tools
!!! info "LangChain Integration"
CrewAI seamlessly integrates with LangChain's comprehensive toolkit. Assigning an existing tool to an agent is straightforward:
```python
from crewai import Agent
from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper
import os
# Setup API keys
os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["SERPER_API_KEY"] = "Your Key"
search = GoogleSerperAPIWrapper()
# Create and assign the search tool to an agent
serper_tool = Tool(
name="Intermediate Answer",
func=search.run,
description="Useful for search-based queries",
)
agent = Agent(
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[serper_tool]
)
```
## Conclusion
Tools are crucial for extending the capabilities of CrewAI agents, allowing them to undertake a diverse array of tasks and collaborate efficiently. When building your AI solutions with CrewAI, consider both custom and existing tools to empower your agents and foster a dynamic AI ecosystem.

BIN
docs/crewAI-mindmap.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 427 KiB

BIN
docs/crew_only_logo.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 94 KiB

BIN
docs/crewai_logo.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 97 KiB

View File

@@ -0,0 +1,112 @@
---
title: Assembling and Activating Your CrewAI Team
description: A step-by-step guide to creating a cohesive CrewAI team for your projects.
---
## Introduction
Embarking on your CrewAI journey involves a few straightforward steps to set up your environment and initiate your AI crew. This guide ensures a seamless start.
## Step 0: Installation
Begin by installing CrewAI and any additional packages required for your project. For instance, the `duckduckgo-search` package is used in this example for enhanced search capabilities.
```shell
pip install crewai
pip install duckduckgo-search
```
## Step 1: Assemble Your Agents
Begin by defining your agents with distinct roles and backstories. These elements not only add depth but also guide their task execution and interaction within the crew.
```python
import os
os.environ["OPENAI_API_KEY"] = "Your Key"
from crewai import Agent
# Topic that will be used in the crew run
topic = 'AI in healthcare'
# Creating a senior researcher agent
researcher = Agent(
role='Senior Researcher',
goal=f'Uncover groundbreaking technologies around {topic}',
verbose=True,
backstory="""Driven by curiosity, you're at the forefront of
innovation, eager to explore and share knowledge that could change
the world."""
)
# Creating a writer agent
writer = Agent(
role='Writer',
goal=f'Narrate compelling tech stories around {topic}',
verbose=True,
backstory="""With a flair for simplifying complex topics, you craft
engaging narratives that captivate and educate, bringing new
discoveries to light in an accessible manner."""
)
```
## Step 2: Define the Tasks
Detail the specific objectives for your agents. These tasks guide their focus and ensure a targeted approach to their roles.
```python
from crewai import Task
# Install duckduckgo-search for this example:
# !pip install -U duckduckgo-search
from langchain_community.tools import DuckDuckGoSearchRun
search_tool = DuckDuckGoSearchRun()
# Research task for identifying AI trends
research_task = Task(
description=f"""Identify the next big trend in {topic}.
Focus on identifying pros and cons and the overall narrative.
Your final report should clearly articulate the key points,
its market opportunities, and potential risks.
""",
expected_output='A comprehensive 3 paragraphs long report on the latest AI trends.',
max_inter=3,
tools=[search_tool],
agent=researcher
)
# Writing task based on research findings
write_task = Task(
description=f"""Compose an insightful article on {topic}.
Focus on the latest trends and how it's impacting the industry.
This article should be easy to understand, engaging and positive.
""",
expected_output=f'A 4 paragraph article on {topic} advancements.',
tools=[search_tool],
agent=writer
)
```
## Step 3: Form the Crew
Combine your agents into a crew, setting the workflow process they'll follow to accomplish the tasks.
```python
from crewai import Crew, Process
# Forming the tech-focused crew
crew = Crew(
agents=[researcher, writer],
tasks=[research_task, write_task],
process=Process.sequential # Sequential task execution
)
```
## Step 4: Kick It Off
With your crew ready and the stage set, initiate the process. Watch as your agents collaborate, each contributing their expertise to achieve the collective goal.
```python
# Starting the task execution process
result = crew.kickoff()
print(result)
```
## Conclusion
Building and activating a crew in CrewAI is a seamless process. By carefully assigning roles, tasks, and a clear process, your AI team is equipped to tackle challenges efficiently. The depth of agent backstories and the precision of their objectives enrich the collaboration, leading to successful project outcomes.

View File

@@ -0,0 +1,65 @@
---
title: Customizing Agents in CrewAI
description: A guide to tailoring agents for specific roles and tasks within the CrewAI framework.
---
## Customizable Attributes
Tailoring your AI agents is pivotal in crafting an efficient CrewAI team. Customization allows agents to be dynamically adapted to the unique requirements of any project.
### Key Attributes for Customization
- **Role**: Defines the agent's job within the crew, such as 'Analyst' or 'Customer Service Rep'.
- **Goal**: The agent's objective, aligned with its role and the crew's overall goals.
- **Backstory**: Adds depth to the agent's character, enhancing its role and motivations within the crew.
- **Tools**: The capabilities or methods the agent employs to accomplish tasks, ranging from simple functions to complex integrations.
## Understanding Tools in CrewAI
Tools empower agents with functionalities to interact and manipulate their environment, from generic utilities to specialized functions. Integrating with LangChain offers access to a broad range of tools for diverse tasks.
## Customizing Agents and Tools
Agents are customized by defining their attributes during initialization, with tools being a critical aspect of their functionality.
### Example: Assigning Tools to an Agent
```python
from crewai import Agent
from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper
import os
# Set API keys for tool initialization
os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["SERPER_API_KEY"] = "Your Key"
# Initialize a search tool
search_tool = GoogleSerperAPIWrapper()
# Define and assign the tool to an agent
serper_tool = Tool(
name="Intermediate Answer",
func=search_tool.run,
description="Useful for search-based queries"
)
# Initialize the agent with the tool
agent = Agent(
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[serper_tool]
)
```
## Delegation and Autonomy
Agents in CrewAI can delegate tasks or ask questions, enhancing the crew's collaborative dynamics. This feature can be disabled to ensure straightforward task execution.
### Example: Disabling Delegation for an Agent
```python
agent = Agent(
role='Content Writer',
goal='Write engaging content on market trends',
backstory='A seasoned writer with expertise in market analysis.',
allow_delegation=False
)
```
## Conclusion
Customizing agents is key to leveraging the full potential of CrewAI. By thoughtfully setting agents' roles, goals, backstories, and tools, you craft a nuanced and capable AI team ready to tackle complex challenges.

View File

@@ -0,0 +1,60 @@
---
title: Implementing the Hierarchical Process in CrewAI
description: Understanding and applying the hierarchical process within your CrewAI projects.
---
## Introduction
The hierarchical process in CrewAI introduces a structured approach to task management, mimicking traditional organizational hierarchies for efficient task delegation and execution.
!!! note "Complexity"
The current implementation of the hierarchical process relies on tool usage, which usually requires more capable models like GPT-4 and typically implies higher token usage.
## Hierarchical Process Overview
In this process, tasks are assigned and executed based on a defined hierarchy, where a 'manager' agent coordinates the workflow, delegating tasks to other agents and validating their outcomes before proceeding.
### Key Features
- **Task Delegation**: A manager agent oversees task distribution among crew members.
- **Result Validation**: The manager reviews outcomes before passing tasks along, ensuring quality and relevance.
- **Efficient Workflow**: Mimics corporate structures for a familiar and organized task management approach.
## Implementing the Hierarchical Process
To utilize the hierarchical process, you must define a crew with a designated manager and a clear chain of command for task execution.
!!! note "Tools on the hierarchical process"
When using the hierarchical process, make sure to assign tools to the agents rather than to the tasks, as the manager is the one delegating the tasks and the agents are the ones executing them.
!!! note "Manager LLM"
A manager is automatically set for the crew, so you don't need to define one. You do, however, need to set the `manager_llm` parameter on the crew.
```python
from langchain_openai import ChatOpenAI
from crewai import Crew, Process, Agent
# Define your agents, no need to define a manager
researcher = Agent(
role='Researcher',
goal='Conduct in-depth analysis',
# tools = [...]
)
writer = Agent(
role='Writer',
goal='Create engaging content',
# tools = [...]
)
# Form the crew with a hierarchical process
project_crew = Crew(
tasks=[...], # Tasks that the manager will figure out how to complete
agents=[researcher, writer],
manager_llm=ChatOpenAI(temperature=0, model="gpt-4"), # The manager's LLM that will be used internally
process=Process.hierarchical # Designating the hierarchical approach
)
```
### Workflow in Action
1. **Task Assignment**: The manager assigns tasks based on agent roles and capabilities.
2. **Execution and Review**: Agents perform their tasks, with the manager reviewing outcomes for approval.
3. **Sequential Task Progression**: Tasks are completed in a sequence dictated by the manager, ensuring orderly progression.
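With `project_crew` assembled as shown earlier, kicking off the run works the same as for any other process; a minimal sketch:
```python
# The manager LLM coordinates delegation and validation internally
result = project_crew.kickoff()
print(result)
```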
## Conclusion
The hierarchical process in CrewAI offers a familiar, structured way to manage tasks within a project. By leveraging a chain of command, it enhances efficiency and quality control, making it ideal for complex projects requiring meticulous oversight.

View File

@@ -0,0 +1,76 @@
# Human Input on Execution
Human input is important in many agent execution use cases; humans are AGI, so they can be prompted to step in and provide extra details when necessary.
Using it with crewAI is pretty straightforward and you can do it through a LangChain Tool.
Check [LangChain Integration](https://python.langchain.com/docs/integrations/tools/human_tools) for more details:
Example:
```python
import os
from crewai import Agent, Task, Crew, Process
from langchain_community.tools import DuckDuckGoSearchRun
from langchain.agents import load_tools
search_tool = DuckDuckGoSearchRun()
# Loading Human Tools
human_tools = load_tools(["human"])
# Define your agents with roles and goals
researcher = Agent(
role='Senior Research Analyst',
goal='Uncover cutting-edge developments in AI and data science',
backstory="""You are a Senior Research Analyst at a leading tech think tank.
Your expertise lies in identifying emerging trends and technologies in AI and
data science. You have a knack for dissecting complex data and presenting
actionable insights.""",
verbose=True,
allow_delegation=False,
# Passing human tools to the agent
tools=[search_tool]+human_tools
)
writer = Agent(
role='Tech Content Strategist',
goal='Craft compelling content on tech advancements',
backstory="""You are a renowned Tech Content Strategist, known for your insightful
and engaging articles on technology and innovation. With a deep understanding of
the tech industry, you transform complex concepts into compelling narratives.""",
verbose=True,
allow_delegation=True
)
# Create tasks for your agents
# Be explicit in the task description about asking for human feedback.
task1 = Task(
description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
Identify key trends, breakthrough technologies, and potential industry impacts.
Compile your findings in a detailed report.
Make sure to check with the human if the draft is good before returning your Final Answer.
Your final answer MUST be a full analysis report""",
agent=researcher
)
task2 = Task(
description="""Using the insights from the researcher's report, develop an engaging blog
post that highlights the most significant AI advancements.
Your post should be informative yet accessible, catering to a tech-savvy audience.
Aim for a narrative that captures the essence of these breakthroughs and their
implications for the future.
Your final answer MUST be the full blog post of at least 3 paragraphs.""",
agent=writer
)
# Instantiate your crew with a sequential process
crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
verbose=2
)
# Get your crew to work!
result = crew.kickoff()
print("######################")
print(result)
```

View File

@@ -0,0 +1,105 @@
---
title: Connect CrewAI to LLMs
description: Guide on integrating CrewAI with various Large Language Models (LLMs).
---
## Connect CrewAI to LLMs
!!! note "Default LLM"
By default, crewAI uses OpenAI's GPT-4 model for language processing. However, you can configure your agents to use a different model or API. This guide will show you how to connect your agents to different LLMs.
CrewAI offers flexibility in connecting to various LLMs, including local models via [Ollama](https://ollama.ai) and different APIs like Azure. It's compatible with all [LangChain LLM](https://python.langchain.com/docs/integrations/llms/) components, enabling diverse integrations for tailored AI solutions.
## Ollama Integration
Ollama is preferred for local LLM integration, offering customization and privacy benefits. It requires installation and configuration, including model adjustments via a Modelfile to optimize performance.
### Setting Up Ollama
- **Installation**: Follow Ollama's guide for setup.
- **Configuration**: [Adjust your local model with a Modelfile](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md), considering adding `Observation` as a stop word and playing with parameters like `top_p` and `temperature`.
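A minimal Modelfile sketch along those lines, assuming the `openhermes` base model (the model name and parameter values are illustrative, not recommendations):
```
FROM openhermes

# Stop generating when the model starts writing an Observation line
PARAMETER stop "Observation"

# Sampling parameters worth experimenting with
PARAMETER top_p 0.9
PARAMETER temperature 0.7
```
From there you would build the adjusted model with `ollama create agent-openhermes -f ./Modelfile` and reference that name when instantiating `Ollama` below.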
### Integrating Ollama with CrewAI
Instantiate Ollama and pass it to your agents within CrewAI, enhancing them with the local model's capabilities.
```python
from crewai import Agent
from langchain_community.llms import Ollama
# Assuming you have Ollama installed and downloaded the openhermes model
ollama_openhermes = Ollama(model="openhermes")
local_expert = Agent(
role='Local Expert',
goal='Provide insights about the city',
backstory="A knowledgeable local guide.",
tools=[SearchTools.search_internet, BrowserTools.scrape_and_summarize_website],  # example custom tools, assumed to be defined elsewhere in your project
llm=ollama_openhermes,
verbose=True
)
```
## OpenAI Compatible API Endpoints
You can use environment variables to switch easily between APIs and models, supporting diverse platforms like FastChat, LM Studio, and Mistral AI.
### Configuration Examples
### FastChat
```sh
# Required
OPENAI_API_BASE="http://localhost:8001/v1"
OPENAI_API_KEY=NA
MODEL_NAME='oh-2.5m7b-q51' # Depending on the model you have available
```
### LM Studio
```sh
# Required
OPENAI_API_BASE="http://localhost:8000/v1"
OPENAI_API_KEY=NA
MODEL_NAME=NA
```
### Mistral API
```sh
OPENAI_API_KEY=your-mistral-api-key
OPENAI_API_BASE=https://api.mistral.ai/v1
MODEL_NAME="mistral-small" # Check documentation for available models
```
### text-gen-web-ui
```sh
# Required
API_BASE_URL=http://localhost:5000
OPENAI_API_KEY=NA
MODEL_NAME=NA
```
### Azure Open AI
Azure's OpenAI API needs a distinct setup, utilizing the `langchain_openai` component for Azure-specific configurations.
Configuration settings:
```sh
AZURE_OPENAI_VERSION="2022-12-01"
AZURE_OPENAI_DEPLOYMENT=""
AZURE_OPENAI_ENDPOINT=""
AZURE_OPENAI_KEY=""
```
```python
import os
from dotenv import load_dotenv
from langchain_openai import AzureChatOpenAI
from crewai import Agent
load_dotenv()
default_llm = AzureChatOpenAI(
azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
api_key=os.environ.get("AZURE_OPENAI_KEY")
)
example_agent = Agent(
role='Example Agent',
goal='Demonstrate custom LLM configuration',
backstory='A diligent explorer of GitHub docs.',
llm=default_llm
)
```
## Conclusion
Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms.

50
docs/how-to/Sequential.md Normal file
View File

@@ -0,0 +1,50 @@
---
title: Implementing the Sequential Process in CrewAI
description: A guide to utilizing the sequential process for task execution in CrewAI projects.
---
## Introduction
The sequential process in CrewAI ensures tasks are executed one after the other, following a linear progression. This approach is akin to a relay race, where each agent completes their task before passing the baton to the next.
## Sequential Process Overview
This process is straightforward and effective, particularly for projects where tasks must be completed in a specific order to achieve the desired outcome.
### Key Features
- **Linear Task Flow**: Tasks are handled in a predetermined sequence, ensuring orderly progression.
- **Simplicity**: Ideal for projects with clearly defined, step-by-step tasks.
- **Easy Monitoring**: Task completion can be easily tracked, offering clear insights into project progress.
## Implementing the Sequential Process
To apply the sequential process, assemble your crew and define the tasks in the order they need to be executed.
!!! note "Task assignment"
In the sequential process, you need to make sure every task is assigned to an agent, as the agents are the ones executing the tasks.
```python
from crewai import Crew, Process, Agent, Task
# Define your agents
researcher = Agent(role='Researcher', goal='Conduct foundational research')
analyst = Agent(role='Data Analyst', goal='Analyze research findings')
writer = Agent(role='Writer', goal='Draft the final report')
# Define the tasks in sequence
research_task = Task(description='Gather relevant data', agent=researcher)
analysis_task = Task(description='Analyze the data', agent=analyst)
writing_task = Task(description='Compose the report', agent=writer)
# Form the crew with a sequential process
report_crew = Crew(
agents=[researcher, analyst, writer],
tasks=[research_task, analysis_task, writing_task],
process=Process.sequential
)
```
### Workflow in Action
1. **Initial Task**: The first agent completes their task and signals completion.
2. **Subsequent Tasks**: Following agents pick up their tasks in the order defined, using the outcomes of preceding tasks as inputs.
3. **Completion**: The process concludes once the final task is executed, culminating in the project's completion.
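With the `report_crew` defined above, the entire chain runs from a single call; a minimal sketch:
```python
# Tasks execute in the order listed, each receiving the preceding
# task's outcome as input for its own execution
result = report_crew.kickoff()
print(result)
```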
## Conclusion
The sequential process in CrewAI provides a clear, straightforward path for task execution. It's particularly suited for projects requiring a logical progression of tasks, ensuring each step is completed before the next begins, thereby facilitating a cohesive final product.

108
docs/index.md Normal file
View File

@@ -0,0 +1,108 @@
<img src='./crew_only_logo.png' width='250' class='mb-10'/>
# crewAI Documentation
Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
<div style="display:flex; margin:0 auto; justify-content: center;">
<div style="width:25%">
<h2>Core Concepts</h2>
<ul>
<li>
<a href="./core-concepts/Agents">
Agents
</a>
</li>
<li>
<a href="./core-concepts/Tasks">
Tasks
</a>
</li>
<li>
<a href="./core-concepts/Tools">
Tools
</a>
</li>
<li>
<a href="./core-concepts/Processes">
Processes
</a>
</li>
</ul>
</div>
<div style="width:30%">
<h2>How-To Guides</h2>
<ul>
<li>
<a href="./how-to/Creating-a-Crew-and-kick-it-off">
Getting Started
</a>
</li>
<li>
<a href="./how-to/Sequential">
Using Sequential Process
</a>
</li>
<li>
<a href="./how-to/Hierarchical">
Using Hierarchical Process
</a>
</li>
<li>
<a href="./how-to/LLM-Connections">
Connecting to LLMs
</a>
</li>
<li>
<a href="./how-to/Customizing-Agents">
Customizing Agents
</a>
</li>
<li>
<a href="./how-to/Human-Input-on-Execution">
Human Input on Execution
</a>
</li>
</ul>
</div>
<div style="width:30%">
<h2>Examples</h2>
<ul>
<li>
<a target='_blank' href="https://github.com/joaomdmoura/crewAI-examples/tree/main/prep-for-a-meeting">
Prepare for meetings
</a>
</li>
<li>
<a target='_blank' href="https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner">
Trip Planner Crew
</a>
</li>
<li>
<a target='_blank' href="https://github.com/joaomdmoura/crewAI-examples/tree/main/instagram_post">
Create Instagram Post
</a>
</li>
<li>
<a target='_blank' href="https://github.com/joaomdmoura/crewAI-examples/tree/main/stock_analysis">
Stock Analysis
</a>
</li>
<li>
<a target='_blank' href="https://github.com/joaomdmoura/crewAI-examples/tree/main/game-builder-crew">
Game Generator
</a>
</li>
<li>
<a target='_blank' href="https://github.com/joaomdmoura/crewAI-examples/tree/main/CrewAI-LangGraph">
Drafting emails with LangGraph
</a>
</li>
<li>
<a target='_blank' href="https://github.com/joaomdmoura/crewAI-examples/tree/main/landing_page_generator">
Landing Page Generator
</a>
</li>
</ul>
</div>
</div>

3
docs/postcss.config.js Normal file
View File

@@ -0,0 +1,3 @@
module.exports = {
plugins: [require('tailwindcss'), require('autoprefixer')]
}

View File

@@ -0,0 +1,3 @@
.md-typeset .admonition-title {
margin-bottom: 10px;
}

565
docs/stylesheets/output.css Normal file
View File

@@ -0,0 +1,565 @@
/*
! tailwindcss v3.4.1 | MIT License | https://tailwindcss.com
*/
/*
1. Prevent padding and border from affecting element width. (https://github.com/mozdevs/cssremedy/issues/4)
2. Allow adding a border to an element by just adding a border-width. (https://github.com/tailwindcss/tailwindcss/pull/116)
*/
*,
::before,
::after {
box-sizing: border-box;
/* 1 */
border-width: 0;
/* 2 */
border-style: solid;
/* 2 */
border-color: #e5e7eb;
/* 2 */
}
::before,
::after {
--tw-content: '';
}
/*
1. Use a consistent sensible line-height in all browsers.
2. Prevent adjustments of font size after orientation changes in iOS.
3. Use a more readable tab size.
4. Use the user's configured `sans` font-family by default.
5. Use the user's configured `sans` font-feature-settings by default.
6. Use the user's configured `sans` font-variation-settings by default.
7. Disable tap highlights on iOS
*/
html,
:host {
line-height: 1.5;
/* 1 */
-webkit-text-size-adjust: 100%;
/* 2 */
-moz-tab-size: 4;
/* 3 */
-o-tab-size: 4;
tab-size: 4;
/* 3 */
font-family: ui-sans-serif, system-ui, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji";
/* 4 */
font-feature-settings: normal;
/* 5 */
font-variation-settings: normal;
/* 6 */
-webkit-tap-highlight-color: transparent;
/* 7 */
}
/*
1. Remove the margin in all browsers.
2. Inherit line-height from `html` so users can set them as a class directly on the `html` element.
*/
body {
margin: 0;
/* 1 */
line-height: inherit;
/* 2 */
}
/*
1. Add the correct height in Firefox.
2. Correct the inheritance of border color in Firefox. (https://bugzilla.mozilla.org/show_bug.cgi?id=190655)
3. Ensure horizontal rules are visible by default.
*/
hr {
height: 0;
/* 1 */
color: inherit;
/* 2 */
border-top-width: 1px;
/* 3 */
}
/*
Add the correct text decoration in Chrome, Edge, and Safari.
*/
abbr:where([title]) {
-webkit-text-decoration: underline dotted;
text-decoration: underline dotted;
}
/*
Remove the default font size and weight for headings.
*/
h1,
h2,
h3,
h4,
h5,
h6 {
font-size: inherit;
font-weight: inherit;
}
/*
Reset links to optimize for opt-in styling instead of opt-out.
*/
a {
color: inherit;
text-decoration: inherit;
}
/*
Add the correct font weight in Edge and Safari.
*/
b,
strong {
font-weight: bolder;
}
/*
1. Use the user's configured `mono` font-family by default.
2. Use the user's configured `mono` font-feature-settings by default.
3. Use the user's configured `mono` font-variation-settings by default.
4. Correct the odd `em` font sizing in all browsers.
*/
code,
kbd,
samp,
pre {
font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace;
/* 1 */
font-feature-settings: normal;
/* 2 */
font-variation-settings: normal;
/* 3 */
font-size: 1em;
/* 4 */
}
/*
Add the correct font size in all browsers.
*/
small {
font-size: 80%;
}
/*
Prevent `sub` and `sup` elements from affecting the line height in all browsers.
*/
sub,
sup {
font-size: 75%;
line-height: 0;
position: relative;
vertical-align: baseline;
}
sub {
bottom: -0.25em;
}
sup {
top: -0.5em;
}
/*
1. Remove text indentation from table contents in Chrome and Safari. (https://bugs.chromium.org/p/chromium/issues/detail?id=999088, https://bugs.webkit.org/show_bug.cgi?id=201297)
2. Correct table border color inheritance in all Chrome and Safari. (https://bugs.chromium.org/p/chromium/issues/detail?id=935729, https://bugs.webkit.org/show_bug.cgi?id=195016)
3. Remove gaps between table borders by default.
*/
table {
text-indent: 0;
/* 1 */
border-color: inherit;
/* 2 */
border-collapse: collapse;
/* 3 */
}
/*
1. Change the font styles in all browsers.
2. Remove the margin in Firefox and Safari.
3. Remove default padding in all browsers.
*/
button,
input,
optgroup,
select,
textarea {
font-family: inherit;
/* 1 */
font-feature-settings: inherit;
/* 1 */
font-variation-settings: inherit;
/* 1 */
font-size: 100%;
/* 1 */
font-weight: inherit;
/* 1 */
line-height: inherit;
/* 1 */
color: inherit;
/* 1 */
margin: 0;
/* 2 */
padding: 0;
/* 3 */
}
/*
Remove the inheritance of text transform in Edge and Firefox.
*/
button,
select {
text-transform: none;
}
/*
1. Correct the inability to style clickable types in iOS and Safari.
2. Remove default button styles.
*/
button,
[type='button'],
[type='reset'],
[type='submit'] {
-webkit-appearance: button;
/* 1 */
background-color: transparent;
/* 2 */
background-image: none;
/* 2 */
}
/*
Use the modern Firefox focus style for all focusable elements.
*/
:-moz-focusring {
outline: auto;
}
/*
Remove the additional `:invalid` styles in Firefox. (https://github.com/mozilla/gecko-dev/blob/2f9eacd9d3d995c937b4251a5557d95d494c9be1/layout/style/res/forms.css#L728-L737)
*/
:-moz-ui-invalid {
box-shadow: none;
}
/*
Add the correct vertical alignment in Chrome and Firefox.
*/
progress {
vertical-align: baseline;
}
/*
Correct the cursor style of increment and decrement buttons in Safari.
*/
::-webkit-inner-spin-button,
::-webkit-outer-spin-button {
height: auto;
}
/*
1. Correct the odd appearance in Chrome and Safari.
2. Correct the outline style in Safari.
*/
[type='search'] {
-webkit-appearance: textfield;
/* 1 */
outline-offset: -2px;
/* 2 */
}
/*
Remove the inner padding in Chrome and Safari on macOS.
*/
::-webkit-search-decoration {
-webkit-appearance: none;
}
/*
1. Correct the inability to style clickable types in iOS and Safari.
2. Change font properties to `inherit` in Safari.
*/
::-webkit-file-upload-button {
-webkit-appearance: button;
/* 1 */
font: inherit;
/* 2 */
}
/*
Add the correct display in Chrome and Safari.
*/
summary {
display: list-item;
}
/*
Removes the default spacing and border for appropriate elements.
*/
blockquote,
dl,
dd,
h1,
h2,
h3,
h4,
h5,
h6,
hr,
figure,
p,
pre {
margin: 0;
}
fieldset {
margin: 0;
padding: 0;
}
legend {
padding: 0;
}
ol,
ul,
menu {
list-style: none;
margin: 0;
padding: 0;
}
/*
Reset default styling for dialogs.
*/
dialog {
padding: 0;
}
/*
Prevent resizing textareas horizontally by default.
*/
textarea {
resize: vertical;
}
/*
1. Reset the default placeholder opacity in Firefox. (https://github.com/tailwindlabs/tailwindcss/issues/3300)
2. Set the default placeholder color to the user's configured gray 400 color.
*/
input::-moz-placeholder, textarea::-moz-placeholder {
opacity: 1;
/* 1 */
color: #9ca3af;
/* 2 */
}
input::placeholder,
textarea::placeholder {
opacity: 1;
/* 1 */
color: #9ca3af;
/* 2 */
}
/*
Set the default cursor for buttons.
*/
button,
[role="button"] {
cursor: pointer;
}
/*
Make sure disabled buttons don't get the pointer cursor.
*/
:disabled {
cursor: default;
}
/*
1. Make replaced elements `display: block` by default. (https://github.com/mozdevs/cssremedy/issues/14)
2. Add `vertical-align: middle` to align replaced elements more sensibly by default. (https://github.com/jensimmons/cssremedy/issues/14#issuecomment-634934210)
This can trigger a poorly considered lint error in some tools but is included by design.
*/
img,
svg,
video,
canvas,
audio,
iframe,
embed,
object {
display: block;
/* 1 */
vertical-align: middle;
/* 2 */
}
/*
Constrain images and videos to the parent width and preserve their intrinsic aspect ratio. (https://github.com/mozdevs/cssremedy/issues/14)
*/
img,
video {
max-width: 100%;
height: auto;
}
/* Make elements with the HTML hidden attribute stay hidden by default */
[hidden] {
display: none;
}
*, ::before, ::after {
--tw-border-spacing-x: 0;
--tw-border-spacing-y: 0;
--tw-translate-x: 0;
--tw-translate-y: 0;
--tw-rotate: 0;
--tw-skew-x: 0;
--tw-skew-y: 0;
--tw-scale-x: 1;
--tw-scale-y: 1;
--tw-pan-x: ;
--tw-pan-y: ;
--tw-pinch-zoom: ;
--tw-scroll-snap-strictness: proximity;
--tw-gradient-from-position: ;
--tw-gradient-via-position: ;
--tw-gradient-to-position: ;
--tw-ordinal: ;
--tw-slashed-zero: ;
--tw-numeric-figure: ;
--tw-numeric-spacing: ;
--tw-numeric-fraction: ;
--tw-ring-inset: ;
--tw-ring-offset-width: 0px;
--tw-ring-offset-color: #fff;
--tw-ring-color: rgb(59 130 246 / 0.5);
--tw-ring-offset-shadow: 0 0 #0000;
--tw-ring-shadow: 0 0 #0000;
--tw-shadow: 0 0 #0000;
--tw-shadow-colored: 0 0 #0000;
--tw-blur: ;
--tw-brightness: ;
--tw-contrast: ;
--tw-grayscale: ;
--tw-hue-rotate: ;
--tw-invert: ;
--tw-saturate: ;
--tw-sepia: ;
--tw-drop-shadow: ;
--tw-backdrop-blur: ;
--tw-backdrop-brightness: ;
--tw-backdrop-contrast: ;
--tw-backdrop-grayscale: ;
--tw-backdrop-hue-rotate: ;
--tw-backdrop-invert: ;
--tw-backdrop-opacity: ;
--tw-backdrop-saturate: ;
--tw-backdrop-sepia: ;
}
::backdrop {
--tw-border-spacing-x: 0;
--tw-border-spacing-y: 0;
--tw-translate-x: 0;
--tw-translate-y: 0;
--tw-rotate: 0;
--tw-skew-x: 0;
--tw-skew-y: 0;
--tw-scale-x: 1;
--tw-scale-y: 1;
--tw-pan-x: ;
--tw-pan-y: ;
--tw-pinch-zoom: ;
--tw-scroll-snap-strictness: proximity;
--tw-gradient-from-position: ;
--tw-gradient-via-position: ;
--tw-gradient-to-position: ;
--tw-ordinal: ;
--tw-slashed-zero: ;
--tw-numeric-figure: ;
--tw-numeric-spacing: ;
--tw-numeric-fraction: ;
--tw-ring-inset: ;
--tw-ring-offset-width: 0px;
--tw-ring-offset-color: #fff;
--tw-ring-color: rgb(59 130 246 / 0.5);
--tw-ring-offset-shadow: 0 0 #0000;
--tw-ring-shadow: 0 0 #0000;
--tw-shadow: 0 0 #0000;
--tw-shadow-colored: 0 0 #0000;
--tw-blur: ;
--tw-brightness: ;
--tw-contrast: ;
--tw-grayscale: ;
--tw-hue-rotate: ;
--tw-invert: ;
--tw-saturate: ;
--tw-sepia: ;
--tw-drop-shadow: ;
--tw-backdrop-blur: ;
--tw-backdrop-brightness: ;
--tw-backdrop-contrast: ;
--tw-backdrop-grayscale: ;
--tw-backdrop-hue-rotate: ;
--tw-backdrop-invert: ;
--tw-backdrop-opacity: ;
--tw-backdrop-saturate: ;
--tw-backdrop-sepia: ;
}
.mb-10 {
margin-bottom: 2.5rem;
}
.transform {
transform: translate(var(--tw-translate-x), var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y));
}
.leading-3 {
line-height: .75rem;
}
.transition {
transition-property: color, background-color, border-color, text-decoration-color, fill, stroke, opacity, box-shadow, transform, filter, -webkit-backdrop-filter;
transition-property: color, background-color, border-color, text-decoration-color, fill, stroke, opacity, box-shadow, transform, filter, backdrop-filter;
transition-property: color, background-color, border-color, text-decoration-color, fill, stroke, opacity, box-shadow, transform, filter, backdrop-filter, -webkit-backdrop-filter;
transition-timing-function: cubic-bezier(0.4, 0, 0.2, 1);
transition-duration: 150ms;
}

View File

@@ -0,0 +1,3 @@
@import 'tailwindcss/base';
@import 'tailwindcss/components';
@import 'tailwindcss/utilities';

9
docs/tailwind.config.js Normal file
View File

@@ -0,0 +1,9 @@
/** @type {import('tailwindcss').Config} */
module.exports = {
content: ["./**/*.md"],
theme: {
extend: {},
},
plugins: [],
}

158
mkdocs.yml Normal file
View File

@@ -0,0 +1,158 @@
site_name: crewAI
site_author: crewAI, Inc
site_description: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
repo_name: crewAI
repo_url: https://github.com/joaomdmoura/crewai/
site_url: https://crewai.com
edit_uri: edit/main/docs/
copyright: Copyright &copy; 2024 crewAI, Inc
markdown_extensions:
- abbr
- admonition
- pymdownx.details
- attr_list
- def_list
- footnotes
- md_in_html
- toc:
permalink: true
- pymdownx.arithmatex:
generic: true
- pymdownx.betterem:
smart_enable: all
- pymdownx.caret
- pymdownx.emoji:
emoji_generator: !!python/name:material.extensions.emoji.to_svg
emoji_index: !!python/name:material.extensions.emoji.twemoji
- pymdownx.highlight:
anchor_linenums: true
line_spans: __span
pygments_lang_class: true
- pymdownx.inlinehilite
- pymdownx.keys
- pymdownx.magiclink:
normalize_issue_symbols: true
repo_url_shorthand: true
user: joaomdmoura
repo: crewAI
- pymdownx.mark
- pymdownx.smartsymbols
- pymdownx.snippets:
auto_append:
- includes/mkdocs.md
- pymdownx.superfences:
custom_fences:
- name: mermaid
class: mermaid
format: !!python/name:pymdownx.superfences.fence_code_format
- pymdownx.tabbed:
alternate_style: true
combine_header_slug: true
slugify: !!python/object/apply:pymdownx.slugs.slugify
kwds:
case: lower
- pymdownx.tasklist:
custom_checkbox: true
- pymdownx.tilde
theme:
name: material
language: en
icon:
repo: fontawesome/brands/github
edit: material/pencil
view: material/eye
admonition:
note: octicons/light-bulb-16
abstract: octicons/checklist-16
info: octicons/info-16
tip: octicons/squirrel-16
success: octicons/check-16
question: octicons/question-16
warning: octicons/alert-16
failure: octicons/x-circle-16
danger: octicons/zap-16
bug: octicons/bug-16
example: octicons/beaker-16
quote: octicons/quote-16
palette:
- scheme: default
primary: red
accent: red
toggle:
icon: material/brightness-7
name: Switch to dark mode
- scheme: slate
primary: red
accent: red
toggle:
icon: material/brightness-4
name: Switch to light mode
features:
- announce.dismiss
- content.action.edit
- content.action.view
- content.code.annotate
- content.code.copy
- content.code.select
- content.tabs.link
- content.tooltips
- header.autohide
- navigation.footer
- navigation.indexes
# - navigation.prune
# - navigation.sections
# - navigation.tabs
- search.suggest
- navigation.instant
- navigation.instant.progress
- navigation.instant.prefetch
- navigation.tracking
# - navigation.expand
- navigation.path
- navigation.top
- toc.follow
- toc.integrate
- search.highlight
- search.share
nav:
- Home: '/'
- Core Concepts:
- Agents: 'core-concepts/Agents.md'
- Tasks: 'core-concepts/Tasks.md'
- Tools: 'core-concepts/Tools.md'
- Processes: 'core-concepts/Processes.md'
- Collaboration: 'core-concepts/Collaboration.md'
- How to Guides:
- Getting Started: 'how-to/Creating-a-Crew-and-kick-it-off.md'
- Using Sequential Process: 'how-to/Sequential.md'
- Using Hierarchical Process: 'how-to/Hierarchical.md'
- Connecting to any LLM: 'how-to/LLM-Connections.md'
- Customizing Agents: 'how-to/Customizing-Agents.md'
- Human Input on Execution: 'how-to/Human-Input-on-Execution.md'
- Examples:
- Trip Planner Crew: https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner
- Create Instagram Post: https://github.com/joaomdmoura/crewAI-examples/tree/main/instagram_post
- Stock Analysis: https://github.com/joaomdmoura/crewAI-examples/tree/main/stock_analysis
- Game Generator: https://github.com/joaomdmoura/crewAI-examples/tree/main/game-builder-crew
- Drafting emails with LangGraph: https://github.com/joaomdmoura/crewAI-examples/tree/main/CrewAI-LangGraph
- Landing Page Generator: https://github.com/joaomdmoura/crewAI-examples/tree/main/landing_page_generator
- Prepare for meetings: https://github.com/joaomdmoura/crewAI-examples/tree/main/prep-for-a-meeting
extra_css:
- stylesheets/output.css
- stylesheets/extra.css
plugins:
- social
extra:
analytics:
provider: google
property: G-N3Q505TMQ6
social:
- icon: fontawesome/brands/twitter
link: https://twitter.com/joaomdmoura
- icon: fontawesome/brands/github
link: https://github.com/joaomdmoura/crewAI

1710
poetry.lock generated

File diff suppressed because it is too large

View File

@@ -1,32 +1,46 @@
[tool.poetry]
name = "crewai"
version = "0.1.14"
version = "0.5.3"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
authors = ["Joao Moura <joaomdmoura@gmail.com>"]
authors = ["Joao Moura <joao@crewai.com>"]
readme = "README.md"
packages = [
{ include = "crewai", from = "src" },
]
[tool.poetry.urls]
Homepage = "https://github.com/joaomdmoura/crewai"
Homepage = "https://crewai.io"
Documentation = "https://github.com/joaomdmoura/CrewAI/wiki/Index"
Repository = "https://github.com/joaomdmoura/crewai"
[tool.poetry.dependencies]
python = ">=3.9,<4.0"
python = ">=3.10,<4.0"
pydantic = "^2.4.2"
langchain = "^0.0.351"
openai = "^1.5.0"
langchain = "0.1.0"
openai = "^1.7.1"
langchain-openai = "^0.0.2"
[tool.poetry.group.dev.dependencies]
isort = "^5.13.2"
black = "^23.12.1"
pyright = "1.1.333"
black = {git = "https://github.com/psf/black.git", rev = "stable"}
autoflake = "^2.2.1"
pre-commit = "^3.6.0"
mkdocs = "^1.4.3"
mkdocstrings = "^0.22.0"
mkdocstrings-python = "^1.1.2"
mkdocs-material = {extras = ["imaging"], version = "^9.5.7"}
mkdocs-material-extensions = "^1.3.1"
pillow = "^10.2.0"
cairosvg = "^2.7.1"
[tool.isort]
profile = "black"
known_first_party = ["crewai"]
[tool.poetry.group.test.dependencies]
pytest = "^7.4"
pytest-vcr = "^1.0.2"

Some files were not shown because too many files have changed in this diff