Compare commits


39 Commits

Author SHA1 Message Date
Brandon Hancock
d95d7146f8 add support for langfuse with litellm 2024-12-06 13:50:18 -05:00
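For context, LiteLLM ships a built-in Langfuse callback, so wiring this up is typically a matter of setting credentials and registering the callback. A minimal sketch (the key values are placeholders, and the exact setup in this commit may differ):

```python
import os
import litellm

# Placeholder credentials; the Langfuse callback reads these from the environment.
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."

# Send successful LLM call traces to Langfuse.
litellm.success_callback = ["langfuse"]
```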
Brandon Hancock (bhancock_ai)
c7c0647dd2 drop metadata requirement (#1712)
* drop metadata requirement

* fix linting

* Update docs for new knowledge

* more linting

* more linting

* make save_documents private

* update docs to the new way we use knowledge and include clearing memory
2024-12-05 14:59:52 -05:00
Brandon Hancock (bhancock_ai)
7b276e6797 Incorporate Stale PRs that have feedback (#1693)
* incorporate #1683

* add in --version flag to cli. closes #1679.

* Fix env issue

* Add in suggestions from @caike to make sure RAGStorage doesn't exceed the OS file limit. Also included additional checks to support Windows.

* remove poetry.lock as pointed out by @sanders41 in #1574.

* Incorporate feedback from crewai reviewer

* Incorporate @lorenzejay feedback
2024-12-05 12:17:23 -05:00
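The `--version` flag added for #1679 can be checked from a shell:

```shell
crewai --version
```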
João Moura
3daba0c79e cutting new version 2024-12-05 13:53:10 -03:00
João Moura
2c85e8e23a updating tools 2024-12-05 13:51:20 -03:00
Brandon Hancock (bhancock_ai)
b0f1d1fcf0 New docs about yaml crew with decorators. Simplify template crew with… (#1701)
* New docs about yaml crew with decorators. Simplify template crew with links

* Fix spelling issues.
2024-12-05 11:23:20 -05:00
Brandon Hancock (bhancock_ai)
611526596a Brandon/cre 509 hitl multiple rounds of followup (#1702)
* v1 of HITL working

* Drop print statements

* HITL code more robust. Still needs to be refactored.

* refactor and more clear messages

* Fix type issue

* fix tests

* Fix test again

* Drop extra print
2024-12-05 10:14:04 -05:00
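For context, human-in-the-loop (HITL) execution builds on the task-level `human_input` flag, which pauses the run so a person can review and respond; this PR extends that review to multiple rounds of follow-up. A minimal sketch (agent and task details are illustrative):

```python
from crewai import Agent, Task

reviewer = Agent(
    role="Research Reviewer",
    goal="Produce summaries that pass human review",
    backstory="A careful editor who incorporates reviewer feedback",
)

# human_input=True pauses execution for human feedback before the
# task result is accepted.
summary_task = Task(
    description="Summarize the research notes into one paragraph",
    expected_output="A reviewed one-paragraph summary",
    agent=reviewer,
    human_input=True,
)
```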
Tony Kipkemboi
fa373f9660 add knowledge demo + improve knowledge docs (#1706) 2024-12-05 09:49:44 -05:00
Rashmi Pawar
48bb8ef775 docs: add nvidia as provider (#1632)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-04 15:38:46 -05:00
Brandon Hancock (bhancock_ai)
bbea797b0c remove all references to pipeline and pipeline router (#1661)
* remove all references to pipeline and router

* fix linting

* drop poetry.lock
2024-12-04 12:39:34 -05:00
Tony Kipkemboi
066ad73423 Merge pull request #1698 from crewAIInc/brandon/cre-510-update-docs-to-talk-about-pydantic-and-json-outputs
Talk about getting structured consistent outputs with tasks.
2024-12-04 11:07:52 -05:00
Tony Kipkemboi
0695c26703 Merge branch 'main' into brandon/cre-510-update-docs-to-talk-about-pydantic-and-json-outputs 2024-12-04 11:05:47 -05:00
Brandon Hancock
4fb3331c6a Talk about getting structured consistent outputs with tasks. 2024-12-04 10:46:39 -05:00
Stephen
b6c6eea6f5 Update README.md (#1694)
Corrected the statement that said users cannot disable telemetry; users can now disable it by setting the environment variable OTEL_SDK_DISABLED to true.

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-03 16:08:19 -05:00
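As the updated README notes, telemetry can now be switched off with a single environment variable, for example:

```shell
export OTEL_SDK_DISABLED=true
```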
Lorenze Jay
1af95f5146 Knowledge project directory standard (#1691)
* Knowledge project directory standard

* fixed types

* comment fix

* made base file knowledge source an abstract class

* cleaner validator on model_post_init

* fix type checker

* cleaner refactor

* better template
2024-12-03 12:27:48 -08:00
Feynman Liang
ed3487aa22 Fix indentation in llm-connections.mdx code block (#1573)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-03 12:52:23 -05:00
Patcher
77af733e44 [Doc]: Add documentation for openlit observability (#1612)
* Create openlit-observability.mdx

* Update doc with images and steps

* Update mkdocs.yml and add OpenLIT guide link

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-03 12:38:49 -05:00
Tom Mahler, PhD
aaf80d1d43 [FEATURE] Support for custom path in RAGStorage (#1659)
* added path to RAGStorage

* added path to short term and entity memory

* add path for long_term_storage for completeness

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-03 12:22:29 -05:00
Ola Hungerford
9e9b945a46 Update using langchain tools docs (#1664)
* Update example of how to use LangChain tools with correct syntax

* Use .env

* Add Code back

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-03 11:13:06 -05:00
Javier Saldaña
308a8dc925 Update reset memories command based on the SDK (#1688)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-03 10:09:30 -05:00
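The command being documented here is the memory-reset CLI; a typical invocation (flags may vary by version) looks like:

```shell
# Reset all stored memories for the current crew project
crewai reset-memories --all
```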
Tony Kipkemboi
7d9d0ff6f7 fix missing code in flows docs (#1690)
* docs: improve tasks documentation clarity and structure

- Add Task Execution Flow section
- Add variable interpolation explanation
- Add Task Dependencies section with examples
- Improve overall document structure and readability
- Update code examples with proper syntax highlighting

* docs: update agent documentation with improved examples and formatting

- Replace DuckDuckGoSearchRun with SerperDevTool
- Update code block formatting to be consistent
- Improve template examples with actual syntax
- Update LLM examples to use current models
- Clean up formatting and remove redundant comments

* docs: enhance LLM documentation with Cerebras provider and formatting improvements

* docs: simplify LLMs documentation title

* docs: improve installation guide clarity and structure

- Add clear Python version requirements with check command
- Simplify installation options to recommended method
- Improve upgrade section clarity for existing users
- Add better visual structure with Notes and Tips
- Update description and formatting

* docs: improve introduction page organization and clarity

- Update organizational analogy in Note section
- Improve table formatting and alignment
- Remove emojis from component table for cleaner look
- Add 'helps you' to make the note more action-oriented

* docs: add enterprise and community cards

- Add Enterprise deployment card in quickstart
- Add community card focused on open source discussions
- Remove deployment reference from community description
- Clean up introduction page cards
- Remove link from Enterprise description text

* docs: add code snippet to Getting Started section in flows.mdx

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-03 10:02:06 -05:00
João Moura
f8a8e7b2a5 preparing new version 2024-12-02 18:28:58 -03:00
Brandon Hancock (bhancock_ai)
3285c1b196 Fixes issues with result as answer not properly exiting LLM loop (#1689)
* v1 of fix implemented. Need to confirm with tokens.

* remove print statements
2024-12-02 13:38:17 -05:00
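For context, `result_as_answer` marks a tool so that its raw output becomes the agent's final answer instead of being fed back into another reasoning step; this fix ensures the LLM loop actually exits in that case. A minimal sketch (tool name and import path are illustrative):

```python
from crewai.tools import BaseTool


class FetchReportTool(BaseTool):
    name: str = "Fetch report"
    description: str = "Fetches the final report contents"
    # The tool's output is returned directly as the agent's answer,
    # which is the loop-exit path this fix addresses.
    result_as_answer: bool = True

    def _run(self) -> str:
        return "report contents"
```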
Tony Kipkemboi
4bc23affe0 Documentation Improvements: LLM Configuration and Usage (#1684)
* docs: improve tasks documentation clarity and structure

- Add Task Execution Flow section
- Add variable interpolation explanation
- Add Task Dependencies section with examples
- Improve overall document structure and readability
- Update code examples with proper syntax highlighting

* docs: update agent documentation with improved examples and formatting

- Replace DuckDuckGoSearchRun with SerperDevTool
- Update code block formatting to be consistent
- Improve template examples with actual syntax
- Update LLM examples to use current models
- Clean up formatting and remove redundant comments

* docs: enhance LLM documentation with Cerebras provider and formatting improvements

* docs: simplify LLMs documentation title

* docs: improve installation guide clarity and structure

- Add clear Python version requirements with check command
- Simplify installation options to recommended method
- Improve upgrade section clarity for existing users
- Add better visual structure with Notes and Tips
- Update description and formatting

* docs: improve introduction page organization and clarity

- Update organizational analogy in Note section
- Improve table formatting and alignment
- Remove emojis from component table for cleaner look
- Add 'helps you' to make the note more action-oriented

* docs: add enterprise and community cards

- Add Enterprise deployment card in quickstart
- Add community card focused on open source discussions
- Remove deployment reference from community description
- Clean up introduction page cards
- Remove link from Enterprise description text
2024-12-02 09:50:12 -05:00
Tony Kipkemboi
bca56eea48 Merge pull request #1675 from rokbenko/rok
[DOCS] Update Agents docs to include two approaches for creating an agent
2024-11-30 11:26:10 -05:00
Rok Benko
588ad3c4a4 Update Agents docs to include two approaches for creating an agent: with and without YAML configuration 2024-11-28 17:20:53 +01:00
Lorenze Jay
c6a6c918e0 added knowledge to agent level (#1655)
* added knowledge to agent level

* linted

* added doc

* added from suggestions

* added test

* fixes from discussion

* fix docs

* fix test

* rm cassette for knowledge_sources test as its a mock and update agent doc string

* fix test

* rm unused

* linted
2024-11-27 11:33:07 -08:00
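Agent-level knowledge lets a single agent carry its own reference material via the `knowledge_sources` parameter that also appears in the docs diff below. A minimal sketch (import path and content are illustrative):

```python
from crewai import Agent
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

product_facts = StringKnowledgeSource(
    content="Our product ships with a 2-year warranty and free returns."
)

support_agent = Agent(
    role="Customer Support",
    goal="Answer product questions accurately",
    backstory="Knows the product catalog inside out",
    knowledge_sources=[product_facts],  # knowledge scoped to this agent
)
```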
Brandon Hancock (bhancock_ai)
366bbbbea3 Feat/remove langchain (#1668)
* feat: add initial changes from langchain

* feat: remove kwargs of being processed

* feat: remove langchain, update uv.lock and fix type_hint

* feat: change docs

* feat: remove forced requirements for parameter

* feat add tests for new structure tool

* feat: fix tests and adapt code for args

* fix tool calling for langchain tools

* doc strings

---------

Co-authored-by: Eduardo Chiarotti <dudumelgaco@hotmail.com>
2024-11-27 11:22:49 -05:00
Eduardo Chiarotti
293305790d Feat/remove langchain (#1654)
* feat: add initial changes from langchain

* feat: remove kwargs of being processed

* feat: remove langchain, update uv.lock and fix type_hint

* feat: change docs

* feat: remove forced requirements for parameter

* feat add tests for new structure tool

* feat: fix tests and adapt code for args
2024-11-26 16:59:52 -03:00
Ivan Peevski
8bc09eb054 Update readme for running mypy (#1614)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-11-26 12:45:08 -05:00
Brandon Hancock (bhancock_ai)
db1b678c3a fix spelling issue found by @Jacques-Murray (#1660) 2024-11-26 11:36:29 -05:00
Bowen Liang
6f32bf52cc update (#1638)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-11-26 11:24:21 -05:00
Bowen Liang
49d173a02d Update Github actions (#1639)
* actions/checkout@v4

* actions/cache@v4

* actions/setup-python@v5

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-11-26 11:08:50 -05:00
Brandon Hancock (bhancock_ai)
4069b621d5 Improve typed task outputs (#1651)
* V1 working

* clean up imports and prints

* more clean up and add tests

* fixing tests

* fix test

* fix linting

* Fix tests

* Fix linting

* add doc string as requested by eduardo
2024-11-26 09:41:14 -05:00
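Typed task outputs let a task parse its result into a Pydantic model via `output_pydantic`. A minimal sketch (model fields are illustrative):

```python
from pydantic import BaseModel
from crewai import Task


class Report(BaseModel):
    title: str
    summary: str


# The task result is validated against Report and exposed as
# structured data instead of raw text.
report_task = Task(
    description="Write a short report on the findings",
    expected_output="A structured report with title and summary",
    output_pydantic=Report,
)
```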
Tony Kipkemboi
a7147c99c6 Merge pull request #1652 from tonykipkemboi/main
add knowledge to mint.json
2024-11-25 16:51:48 -05:00
Tony Kipkemboi
6fe308202e add knowledge to mint.json 2024-11-25 20:37:27 +00:00
Vini Brasil
63ecb7395d Log in to Tool Repository on crewai login (#1650)
This commit adds an extra step to `crewai login` to ensure users also
log in to Tool Repository, that is, exchanging their Auth0 tokens for a
Tool Repository username and password to be used by UV downloads and API
tool uploads.
2024-11-25 15:57:47 -03:00
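From the user's side the token exchange is transparent; authenticating once covers both services:

```shell
crewai login
```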
João Moura
8cf1cd5a62 preparing new version 2024-11-25 10:05:15 -03:00
Gui Vieira
93c0467bba Merge pull request #1640 from crewAIInc/gui/fix-threading
Fix threading
2024-11-21 15:50:46 -03:00
132 changed files with 4211 additions and 11310 deletions

View File

@@ -6,7 +6,7 @@ jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
-     - uses: actions/checkout@v3
+     - uses: actions/checkout@v4
      - name: Install Requirements
        run: |

View File

@@ -13,10 +13,10 @@ jobs:
    steps:
      - name: Checkout code
-       uses: actions/checkout@v2
+       uses: actions/checkout@v4
      - name: Setup Python
-       uses: actions/setup-python@v4
+       uses: actions/setup-python@v5
        with:
          python-version: '3.10'
@@ -25,7 +25,7 @@ jobs:
        run: echo "::set-output name=hash::$(sha256sum requirements-doc.txt | awk '{print $1}')"
      - name: Setup cache
-       uses: actions/cache@v3
+       uses: actions/cache@v4
        with:
          key: mkdocs-material-${{ steps.req-hash.outputs.hash }}
          path: .cache
@@ -42,4 +42,4 @@ jobs:
      GH_TOKEN: ${{ secrets.GH_TOKEN }}
      - name: Build and deploy MkDocs
        run: mkdocs gh-deploy --force

View File

@@ -11,7 +11,7 @@ jobs:
        uses: actions/checkout@v4
      - name: Set up Python
-       uses: actions/setup-python@v4
+       uses: actions/setup-python@v5
        with:
          python-version: "3.11.9"

View File

@@ -14,7 +14,7 @@ jobs:
        uses: actions/checkout@v4
      - name: Setup Python
-       uses: actions/setup-python@v4
+       uses: actions/setup-python@v5
        with:
          python-version: "3.11.9"

View File

@@ -121,7 +121,7 @@ researcher:
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.
reporting_analyst:
  role: >
    {topic} Reporting Analyst
@@ -205,7 +205,7 @@ class LatestAiDevelopmentCrew():
      tasks=self.tasks, # Automatically created by the @task decorator
      process=Process.sequential,
      verbose=True,
    )
```
**main.py**
@@ -357,7 +357,7 @@ uv run pytest .
### Running static type checks
```bash
-uvx mypy
+uvx mypy src
```
### Packaging
@@ -376,7 +376,7 @@ pip install dist/*.tar.gz
CrewAI uses anonymous telemetry to collect usage data with the main purpose of helping us improve the library by focusing our efforts on the most used features, integrations and tools.
-It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, with the exception of the conditions mentioned. When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights while respecting user privacy. We don't offer a way to disable it now, but we will in the future.
+It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, with the exception of the conditions mentioned. When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights while respecting user privacy. Users can disable telemetry by setting the environment variable OTEL_SDK_DISABLED to true.
Data collected includes:

View File

@@ -1,161 +1,343 @@
---
title: Agents
-description: What are CrewAI Agents and how to use them.
+description: Detailed guide on creating and managing agents within the CrewAI framework.
icon: robot
---
-## What is an agent?
-An agent is an **autonomous unit** programmed to:
-<ul>
-  <li class='leading-3'>Perform tasks</li>
-  <li class='leading-3'>Make decisions</li>
-  <li class='leading-3'>Communicate with other agents</li>
-</ul>
+## Overview of an Agent
+In the CrewAI framework, an `Agent` is an autonomous unit that can:
+- Perform specific tasks
+- Make decisions based on its role and goal
+- Use tools to accomplish objectives
+- Communicate and collaborate with other agents
+- Maintain memory of interactions
+- Delegate tasks when allowed
<Tip>
-Think of an agent as a member of a team, with specific skills and a particular job to do. Agents can have different roles like `Researcher`, `Writer`, or `Customer Support`, each contributing to the overall goal of the crew.
+Think of an agent as a specialized team member with specific skills, expertise, and responsibilities. For example, a `Researcher` agent might excel at gathering and analyzing information, while a `Writer` agent might be better at creating content.
</Tip>
-## Agent attributes
-| Attribute | Parameter | Description |
-| :--- | :--- | :--- |
-| **Role** | `role` | Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for. |
-| **Goal** | `goal` | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |
-| **Backstory** | `backstory` | Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics. |
-| **LLM** *(optional)* | `llm` | Represents the language model that will run the agent. It dynamically fetches the model name from the `OPENAI_MODEL_NAME` environment variable, defaulting to "gpt-4" if not specified. |
-| **Tools** *(optional)* | `tools` | Set of capabilities or functions that the agent can use to perform tasks. Expected to be instances of custom classes compatible with the agent's execution environment. Tools are initialized with a default value of an empty list. |
-| **Function Calling LLM** *(optional)* | `function_calling_llm` | Specifies the language model that will handle the tool calling for this agent, overriding the crew function calling LLM if passed. Default is `None`. |
-| **Max Iter** *(optional)* | `max_iter` | Max Iter is the maximum number of iterations the agent can perform before being forced to give its best answer. Default is `25`. |
-| **Max RPM** *(optional)* | `max_rpm` | Max RPM is the maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. |
-| **Max Execution Time** *(optional)* | `max_execution_time` | Max Execution Time is the maximum execution time for an agent to execute a task. It's optional and can be left unspecified, with a default value of `None`, meaning no max execution time. |
-| **Verbose** *(optional)* | `verbose` | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
-| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `False`. |
-| **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
-| **Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
-| **System Template** *(optional)* | `system_template` | Specifies the system format for the agent. Default is `None`. |
-| **Prompt Template** *(optional)* | `prompt_template` | Specifies the prompt format for the agent. Default is `None`. |
-| **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. |
-| **Allow Code Execution** *(optional)* | `allow_code_execution` | Enable code execution for the agent. Default is `False`. |
-| **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`. |
-| **Use System Prompt** *(optional)* | `use_system_prompt` | Adds the ability to not use system prompt (to support o1 models). Default is `True`. |
-| **Respect Context Window** *(optional)* | `respect_context_window` | Summary strategy to avoid overflowing the context window. Default is `True`. |
-| **Code Execution Mode** *(optional)* | `code_execution_mode` | Determines the mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution on the host machine). Default is `safe`. |
+## Agent Attributes
+| Attribute | Parameter | Type | Description |
+| :--- | :--- | :--- | :--- |
+| **Role** | `role` | `str` | Defines the agent's function and expertise within the crew. |
+| **Goal** | `goal` | `str` | The individual objective that guides the agent's decision-making. |
+| **Backstory** | `backstory` | `str` | Provides context and personality to the agent, enriching interactions. |
+| **LLM** _(optional)_ | `llm` | `Union[str, LLM, Any]` | Language model that powers the agent. Defaults to the model specified in `OPENAI_MODEL_NAME` or "gpt-4". |
+| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | Capabilities or functions available to the agent. Defaults to an empty list. |
+| **Function Calling LLM** _(optional)_ | `function_calling_llm` | `Optional[Any]` | Language model for tool calling, overrides crew's LLM if specified. |
+| **Max Iterations** _(optional)_ | `max_iter` | `int` | Maximum iterations before the agent must provide its best answer. Default is 20. |
+| **Max RPM** _(optional)_ | `max_rpm` | `Optional[int]` | Maximum requests per minute to avoid rate limits. |
+| **Max Execution Time** _(optional)_ | `max_execution_time` | `Optional[int]` | Maximum time (in seconds) for task execution. |
+| **Memory** _(optional)_ | `memory` | `bool` | Whether the agent should maintain memory of interactions. Default is True. |
+| **Verbose** _(optional)_ | `verbose` | `bool` | Enable detailed execution logs for debugging. Default is False. |
+| **Allow Delegation** _(optional)_ | `allow_delegation` | `bool` | Allow the agent to delegate tasks to other agents. Default is False. |
+| **Step Callback** _(optional)_ | `step_callback` | `Optional[Any]` | Function called after each agent step, overrides crew callback. |
+| **Cache** _(optional)_ | `cache` | `bool` | Enable caching for tool usage. Default is True. |
+| **System Template** _(optional)_ | `system_template` | `Optional[str]` | Custom system prompt template for the agent. |
+| **Prompt Template** _(optional)_ | `prompt_template` | `Optional[str]` | Custom prompt template for the agent. |
+| **Response Template** _(optional)_ | `response_template` | `Optional[str]` | Custom response template for the agent. |
+| **Allow Code Execution** _(optional)_ | `allow_code_execution` | `Optional[bool]` | Enable code execution for the agent. Default is False. |
+| **Max Retry Limit** _(optional)_ | `max_retry_limit` | `int` | Maximum number of retries when an error occurs. Default is 2. |
+| **Respect Context Window** _(optional)_ | `respect_context_window` | `bool` | Keep messages under context window size by summarizing. Default is True. |
+| **Code Execution Mode** _(optional)_ | `code_execution_mode` | `Literal["safe", "unsafe"]` | Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct). Default is 'safe'. |
+| **Embedder Config** _(optional)_ | `embedder_config` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. |
+| **Knowledge Sources** _(optional)_ | `knowledge_sources` | `Optional[List[BaseKnowledgeSource]]` | Knowledge sources available to the agent. |
+| **Use System Prompt** _(optional)_ | `use_system_prompt` | `Optional[bool]` | Whether to use system prompt (for o1 model support). Default is True. |
-## Creating an agent
+## Creating Agents
There are two ways to create agents in CrewAI: using **YAML configuration (recommended)** or defining them **directly in code**.
### YAML Configuration (Recommended)
Using YAML configuration provides a cleaner, more maintainable way to define agents. We strongly recommend using this approach in your CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/installation) section, navigate to the `src/latest_ai_development/config/agents.yaml` file and modify the template to match your requirements.
<Note>
-**Agent interaction**: Agents can interact with each other using CrewAI's built-in delegation and communication mechanisms. This allows for dynamic task management and problem-solving within the crew.
+Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew:
```python Code
crew.kickoff(inputs={'topic': 'AI Agents'})
```
</Note>
-To create an agent, you would typically initialize an instance of the `Agent` class with the desired properties. Here's a conceptual example including all attributes:
+Here's an example of how to configure agents using YAML:
-```python Code example
+```yaml agents.yaml
# src/latest_ai_development/config/agents.yaml
researcher:
role: >
{topic} Senior Data Researcher
goal: >
Uncover cutting-edge developments in {topic}
backstory: >
You're a seasoned researcher with a knack for uncovering the latest
developments in {topic}. Known for your ability to find the most relevant
information and present it in a clear and concise manner.
reporting_analyst:
role: >
{topic} Reporting Analyst
goal: >
Create detailed reports based on {topic} data analysis and research findings
backstory: >
You're a meticulous analyst with a keen eye for detail. You're known for
your ability to turn complex data into clear and concise reports, making
it easy for others to understand and act on the information you provide.
```
To use this YAML configuration in your code, create a crew class that inherits from `CrewBase`:
```python Code
# src/latest_ai_development/crew.py
from crewai import Agent, Crew, Process
from crewai.project import CrewBase, agent, crew
from crewai_tools import SerperDevTool
@CrewBase
class LatestAiDevelopmentCrew():
"""LatestAiDevelopment crew"""
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'],
verbose=True,
tools=[SerperDevTool()]
)
@agent
def reporting_analyst(self) -> Agent:
return Agent(
config=self.agents_config['reporting_analyst'],
verbose=True
)
```
<Note>
The names you use in your YAML files (`agents.yaml`) should match the method names in your Python code.
</Note>
### Direct Code Definition
You can create agents directly in code by instantiating the `Agent` class. Here's a comprehensive example showing all available parameters:
```python Code
from crewai import Agent
from crewai_tools import SerperDevTool
# Create an agent with all available parameters
agent = Agent(
-    role='Data Analyst',
-    goal='Extract actionable insights',
-    backstory="""You're a data analyst at a large company.
-    You're responsible for analyzing data and providing insights
-    to the business.
-    You're currently working on a project to analyze the
-    performance of our marketing campaigns.""",
-    tools=[my_tool1, my_tool2],  # Optional, defaults to an empty list
-    llm=my_llm,  # Optional
-    function_calling_llm=my_llm,  # Optional
-    max_iter=15,  # Optional
-    max_rpm=None, # Optional
-    max_execution_time=None, # Optional
-    verbose=True,  # Optional
-    allow_delegation=False,  # Optional
-    step_callback=my_intermediate_step_callback,  # Optional
-    cache=True,  # Optional
-    system_template=my_system_template,  # Optional
-    prompt_template=my_prompt_template,  # Optional
-    response_template=my_response_template,  # Optional
-    config=my_config,  # Optional
-    crew=my_crew,  # Optional
-    tools_handler=my_tools_handler,  # Optional
-    cache_handler=my_cache_handler,  # Optional
-    callbacks=[callback1, callback2],  # Optional
-    allow_code_execution=True,  # Optional
-    max_retry_limit=2,  # Optional
-    use_system_prompt=True,  # Optional
-    respect_context_window=True,  # Optional
-    code_execution_mode='safe',  # Optional, defaults to 'safe'
+    role="Senior Data Scientist",
+    goal="Analyze and interpret complex datasets to provide actionable insights",
+    backstory="With over 10 years of experience in data science and machine learning, "
+              "you excel at finding patterns in complex datasets.",
+    llm="gpt-4",  # Default: OPENAI_MODEL_NAME or "gpt-4"
+    function_calling_llm=None,  # Optional: Separate LLM for tool calling
+    memory=True,  # Default: True
+    verbose=False,  # Default: False
+    allow_delegation=False,  # Default: False
+    max_iter=20,  # Default: 20 iterations
+    max_rpm=None,  # Optional: Rate limit for API calls
+    max_execution_time=None,  # Optional: Maximum execution time in seconds
+    max_retry_limit=2,  # Default: 2 retries on error
+    allow_code_execution=False,  # Default: False
+    code_execution_mode="safe",  # Default: "safe" (options: "safe", "unsafe")
+    respect_context_window=True,  # Default: True
+    use_system_prompt=True,  # Default: True
+    tools=[SerperDevTool()],  # Optional: List of tools
+    knowledge_sources=None,  # Optional: List of knowledge sources
+    embedder_config=None,  # Optional: Custom embedder configuration
+    system_template=None,  # Optional: Custom system prompt template
+    prompt_template=None,  # Optional: Custom prompt template
+    response_template=None,  # Optional: Custom response template
+    step_callback=None,  # Optional: Callback function for monitoring
)
```
-## Setting prompt templates
-Prompt templates are used to format the prompt for the agent. You can use them to update the system, regular and response templates for the agent. Here's an example of how to set prompt templates:
+Let's break down some key parameter combinations for common use cases:
+#### Basic Research Agent
```python Code
research_agent = Agent(
role="Research Analyst",
goal="Find and summarize information about specific topics",
backstory="You are an experienced researcher with attention to detail",
tools=[SerperDevTool()],
verbose=True # Enable logging for debugging
)
```
-```python Code example
-agent = Agent(
-    role="{topic} specialist",
-    goal="Figure {goal} out",
-    backstory="I am the master of {role}",
-    system_template="""<|start_header_id|>system<|end_header_id|>
+#### Code Development Agent
+```python Code
+dev_agent = Agent(
+    role="Senior Python Developer",
+    goal="Write and debug Python code",
+    backstory="Expert Python developer with 10 years of experience",
allow_code_execution=True,
code_execution_mode="safe", # Uses Docker for safety
max_execution_time=300, # 5-minute timeout
max_retry_limit=3 # More retries for complex code tasks
)
```
#### Long-Running Analysis Agent
```python Code
analysis_agent = Agent(
role="Data Analyst",
goal="Perform deep analysis of large datasets",
backstory="Specialized in big data analysis and pattern recognition",
memory=True,
respect_context_window=True,
max_rpm=10, # Limit API calls
function_calling_llm="gpt-4o-mini" # Cheaper model for tool calls
)
```
#### Custom Template Agent
```python Code
custom_agent = Agent(
role="Customer Service Representative",
goal="Assist customers with their inquiries",
backstory="Experienced in customer support with a focus on satisfaction",
system_template="""<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>""",
    prompt_template="""<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>""",
    response_template="""<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>""",
)
```
-## Bring your third-party agents
-Extend your third-party agents like LlamaIndex, Langchain, Autogen or fully custom agents using the CrewAI `BaseAgent` class.
+### Parameter Details
+#### Critical Parameters
- `role`, `goal`, and `backstory` are required and shape the agent's behavior
- `llm` determines the language model used (default: OpenAI's GPT-4)
-<Note>
-**BaseAgent** includes attributes and methods required to integrate with your crews to run and delegate tasks to other agents within your own crew.
+#### Memory and Context
+- `memory`: Enable to maintain conversation history
- `respect_context_window`: Prevents token limit issues
- `knowledge_sources`: Add domain-specific knowledge bases
#### Execution Control
- `max_iter`: Maximum attempts before giving best answer
- `max_execution_time`: Timeout in seconds
- `max_rpm`: Rate limiting for API calls
- `max_retry_limit`: Retries on error
#### Code Execution
- `allow_code_execution`: Must be True to run code
- `code_execution_mode`:
- `"safe"`: Uses Docker (recommended for production)
- `"unsafe"`: Direct execution (use only in trusted environments)
#### Templates
- `system_template`: Defines agent's core behavior
- `prompt_template`: Structures input format
- `response_template`: Formats agent responses
<Note>
When using custom templates, you can use variables like `{role}`, `{goal}`, and `{input}` in your templates. These will be automatically populated during execution.
</Note>
-CrewAI is a universal multi-agent framework that allows for all agents to work together to automate tasks and solve problems.
-```python Code example
-from crewai import Agent, Task, Crew
-from custom_agent import CustomAgent # You need to build and extend your own agent logic with the CrewAI BaseAgent class then import it here.
-from langchain.agents import load_tools
-langchain_tools = load_tools(["google-serper"], llm=llm)
+## Agent Tools
+Agents can be equipped with various tools to enhance their capabilities. CrewAI supports tools from:
+- [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools)
+- [LangChain Tools](https://python.langchain.com/docs/integrations/tools)
+Here's how to add tools to an agent:
+```python Code
from crewai import Agent
from crewai_tools import SerperDevTool, WikipediaTools
-agent1 = CustomAgent(
-    role="agent role",
-    goal="who is {input}?",
-    backstory="agent backstory",
-    verbose=True,
+# Create tools
+search_tool = SerperDevTool()
+wiki_tool = WikipediaTools()
+# Add tools to agent
researcher = Agent(
role="AI Technology Researcher",
goal="Research the latest AI developments",
tools=[search_tool, wiki_tool],
verbose=True
)
-task1 = Task(
-    expected_output="a short biography of {input}",
-    description="a short biography of {input}",
-    agent=agent1,
-)
-agent2 = Agent(
-    role="agent role",
-    goal="summarize the short bio for {input} and if needed do more research",
-    backstory="agent backstory",
-    verbose=True,
-)
-task2 = Task(
-    description="a tldr summary of the short biography",
-    expected_output="5 bullet point summary of the biography",
-    agent=agent2,
-    context=[task1],
-)
-my_crew = Crew(agents=[agent1, agent2], tasks=[task1, task2])
-crew = my_crew.kickoff(inputs={"input": "Mark Twain"})
```
-## Conclusion
-Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents,
-you can create sophisticated AI systems that leverage the power of collaborative intelligence.
+## Agent Memory and Context
+Agents can maintain memory of their interactions and use context from previous tasks. This is particularly useful for complex workflows where information needs to be retained across multiple tasks.
+The `code_execution_mode` attribute provides flexibility in how agents execute code, allowing for both secure and direct execution options.
```python Code
from crewai import Agent
analyst = Agent(
role="Data Analyst",
goal="Analyze and remember complex data patterns",
memory=True, # Enable memory
verbose=True
)
```
<Note>
When `memory` is enabled, the agent will maintain context across multiple interactions, improving its ability to handle complex, multi-step tasks.
</Note>
## Important Considerations and Best Practices
### Security and Code Execution
- When using `allow_code_execution`, be cautious with user input and always validate it
- Use `code_execution_mode: "safe"` (Docker) in production environments
- Consider setting appropriate `max_execution_time` limits to prevent infinite loops
### Performance Optimization
- Use `respect_context_window: true` to prevent token limit issues
- Set appropriate `max_rpm` to avoid rate limiting
- Enable `cache: true` to improve performance for repetitive tasks
- Adjust `max_iter` and `max_retry_limit` based on task complexity
### Memory and Context Management
- Use `memory: true` for tasks requiring historical context
- Leverage `knowledge_sources` for domain-specific information
- Configure `embedder_config` when using custom embedding models
- Use custom templates (`system_template`, `prompt_template`, `response_template`) for fine-grained control over agent behavior
### Agent Collaboration
- Enable `allow_delegation: true` when agents need to work together
- Use `step_callback` to monitor and log agent interactions
- Consider using different LLMs for different purposes:
- Main `llm` for complex reasoning
- `function_calling_llm` for efficient tool usage
### Model Compatibility
- Set `use_system_prompt: false` for older models that don't support system messages
- Ensure your chosen `llm` supports the features you need (like function calling)
## Troubleshooting Common Issues
1. **Rate Limiting**: If you're hitting API rate limits:
- Implement appropriate `max_rpm`
- Use caching for repetitive operations
- Consider batching requests
2. **Context Window Errors**: If you're exceeding context limits:
- Enable `respect_context_window`
- Use more efficient prompts
- Clear agent memory periodically
3. **Code Execution Issues**: If code execution fails:
- Verify Docker is installed for safe mode
- Check execution permissions
- Review code sandbox settings
4. **Memory Issues**: If agent responses seem inconsistent:
- Verify memory is enabled
- Check knowledge source configuration
- Review conversation history management
Remember that agents are most effective when configured according to their specific use case. Take time to understand your requirements and adjust these parameters accordingly.

View File

@@ -28,20 +28,19 @@ crewai [COMMAND] [OPTIONS] [ARGUMENTS]
### 1. Create
-Create a new crew or pipeline.
+Create a new crew or flow.
```shell
crewai create [OPTIONS] TYPE NAME
```
-- `TYPE`: Choose between "crew" or "pipeline"
-- `NAME`: Name of the crew or pipeline
-- `--router`: (Optional) Create a pipeline with router functionality
+- `TYPE`: Choose between "crew" or "flow"
+- `NAME`: Name of the crew or flow
Example:
```shell
crewai create crew my_new_crew
-crewai create pipeline my_new_pipeline --router
+crewai create flow my_new_flow
```
### 2. Version ### 2. Version

View File

@@ -41,6 +41,155 @@ A crew in crewAI represents a collaborative group of agents working together to
**Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
</Tip>
## Creating Crews
There are two ways to create crews in CrewAI: using **YAML configuration (recommended)** or defining them **directly in code**.
### YAML Configuration (Recommended)
Using YAML configuration provides a cleaner, more maintainable way to define crews and is consistent with how agents and tasks are defined in CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/installation) section, you can define your crew in a class that inherits from `CrewBase` and uses decorators to define agents, tasks, and the crew itself.
#### Example Crew Class with Decorators
```python code
from crewai import Agent, Crew, Task, Process
from crewai.project import CrewBase, agent, task, crew, before_kickoff, after_kickoff
@CrewBase
class YourCrewName:
"""Description of your crew"""
# Paths to your YAML configuration files
# To see an example agent and task defined in YAML, checkout the following:
# - Task: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
# - Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml'
@before_kickoff
def prepare_inputs(self, inputs):
# Modify inputs before the crew starts
inputs['additional_data'] = "Some extra information"
return inputs
@after_kickoff
def process_output(self, output):
# Modify output after the crew finishes
output.raw += "\nProcessed after kickoff."
return output
@agent
def agent_one(self) -> Agent:
return Agent(
config=self.agents_config['agent_one'],
verbose=True
)
@agent
def agent_two(self) -> Agent:
return Agent(
config=self.agents_config['agent_two'],
verbose=True
)
@task
def task_one(self) -> Task:
return Task(
config=self.tasks_config['task_one']
)
@task
def task_two(self) -> Task:
return Task(
config=self.tasks_config['task_two']
)
@crew
def crew(self) -> Crew:
return Crew(
agents=self.agents, # Automatically collected by the @agent decorator
tasks=self.tasks, # Automatically collected by the @task decorator.
process=Process.sequential,
verbose=True,
)
```
<Note>
Tasks will be executed in the order they are defined.
</Note>
The `CrewBase` class, along with these decorators, automates the collection of agents and tasks, reducing the need for manual management.
#### Decorators overview from `annotations.py`
CrewAI provides several decorators in the `annotations.py` file that are used to mark methods within your crew class for special handling:
- `@CrewBase`: Marks the class as a crew base class.
- `@agent`: Denotes a method that returns an `Agent` object.
- `@task`: Denotes a method that returns a `Task` object.
- `@crew`: Denotes the method that returns the `Crew` object.
- `@before_kickoff`: (Optional) Marks a method to be executed before the crew starts.
- `@after_kickoff`: (Optional) Marks a method to be executed after the crew finishes.
These decorators help in organizing your crew's structure and automatically collecting agents and tasks without manually listing them.
### Direct Code Definition (Alternative)
Alternatively, you can define the crew directly in code without using YAML configuration files.
```python code
from crewai import Agent, Crew, Task, Process
from crewai_tools import YourCustomTool
class YourCrewName:
def agent_one(self) -> Agent:
return Agent(
role="Data Analyst",
goal="Analyze data trends in the market",
backstory="An experienced data analyst with a background in economics",
verbose=True,
tools=[YourCustomTool()]
)
def agent_two(self) -> Agent:
return Agent(
role="Market Researcher",
goal="Gather information on market dynamics",
backstory="A diligent researcher with a keen eye for detail",
verbose=True
)
def task_one(self) -> Task:
return Task(
description="Collect recent market data and identify trends.",
expected_output="A report summarizing key trends in the market.",
agent=self.agent_one()
)
def task_two(self) -> Task:
return Task(
description="Research factors affecting market dynamics.",
expected_output="An analysis of factors influencing the market.",
agent=self.agent_two()
)
def crew(self) -> Crew:
return Crew(
agents=[self.agent_one(), self.agent_two()],
tasks=[self.task_one(), self.task_two()],
process=Process.sequential,
verbose=True
)
```
In this example:
- Agents and tasks are defined directly within the class without decorators.
- We manually create and manage the list of agents and tasks.
- This approach provides more control but can be less maintainable for larger projects.
## Crew Output
@@ -188,4 +337,4 @@ Then, to replay from a specific task, use:
crewai replay -t <task_id>
```
These commands let you replay from your latest kickoff tasks, still retaining context from previously executed tasks.

View File

@@ -18,63 +18,60 @@ Flows allow you to create structured, event-driven workflows. They provide a sea
4. **Flexible Control Flow**: Implement conditional logic, loops, and branching within your workflows.
5. **Input Flexibility**: Flows can accept inputs to initialize or update their state, with different handling for structured and unstructured state management.
## Getting Started
Let's create a simple Flow where you will use OpenAI to generate a random city in one task and then use that city to generate a fun fact in another task.
-### Passing Inputs to Flows
-Flows can accept inputs to initialize or update their state before execution. The way inputs are handled depends on whether the flow uses structured or unstructured state management.
-#### Structured State Management
-In structured state management, the flow's state is defined using a Pydantic `BaseModel`. Inputs must match the model's schema, and any updates will overwrite the default values.
-```python
-from crewai.flow.flow import Flow, listen, start
-from pydantic import BaseModel
-class ExampleState(BaseModel):
-    counter: int = 0
-    message: str = ""
-class StructuredExampleFlow(Flow[ExampleState]):
-    @start()
-    def first_method(self):
-        # Implementation
-flow = StructuredExampleFlow()
-flow.kickoff(inputs={"counter": 10})
-```
-In this example, the `counter` is initialized to `10`, while `message` retains its default value.
-#### Unstructured State Management
-In unstructured state management, the flow's state is a dictionary. You can pass any dictionary to update the state.
-```python
-from crewai.flow.flow import Flow, listen, start
-class UnstructuredExampleFlow(Flow):
-    @start()
-    def first_method(self):
-        # Implementation
-flow = UnstructuredExampleFlow()
-flow.kickoff(inputs={"counter": 5, "message": "Initial message"})
-```
-Here, both `counter` and `message` are updated based on the provided inputs.
-**Note:** Ensure that inputs for structured state management adhere to the defined schema to avoid validation errors.
-### Example Flow
-```python
-# Existing example code
+```python Code
+from crewai.flow.flow import Flow, listen, start
+from dotenv import load_dotenv
+from litellm import completion
+class ExampleFlow(Flow):
+    model = "gpt-4o-mini"
+    @start()
+    def generate_city(self):
+        print("Starting flow")
+        response = completion(
+            model=self.model,
+            messages=[
+                {
+                    "role": "user",
+                    "content": "Return the name of a random city in the world.",
+                },
+            ],
+        )
+        random_city = response["choices"][0]["message"]["content"]
+        print(f"Random City: {random_city}")
+        return random_city
+    @listen(generate_city)
+    def generate_fun_fact(self, random_city):
+        response = completion(
+            model=self.model,
+            messages=[
+                {
+                    "role": "user",
+                    "content": f"Tell me a fun fact about {random_city}",
+                },
+            ],
+        )
+        fun_fact = response["choices"][0]["message"]["content"]
+        return fun_fact
+flow = ExampleFlow()
+result = flow.kickoff()
+print(f"Generated fun fact: {result}")
```
In the above example, we have created a simple Flow that generates a random city using OpenAI and then generates a fun fact about that city. The Flow consists of two tasks: `generate_city` and `generate_fun_fact`. The `generate_city` task is the starting point of the Flow, and the `generate_fun_fact` task listens for the output of the `generate_city` task.
@@ -97,14 +94,14 @@ The `@listen()` decorator can be used in several ways:
1. **Listening to a Method by Name**: You can pass the name of the method you want to listen to as a string. When that method completes, the listener method will be triggered.
-```python
+```python Code
@listen("generate_city")
def generate_fun_fact(self, random_city):
    # Implementation
```
2. **Listening to a Method Directly**: You can pass the method itself. When that method completes, the listener method will be triggered.
-```python
+```python Code
@listen(generate_city)
def generate_fun_fact(self, random_city):
    # Implementation
@@ -121,7 +118,7 @@ When you run a Flow, the final output is determined by the last method that comp
Here's how you can access the final output:
<CodeGroup>
-```python
+```python Code
from crewai.flow.flow import Flow, listen, start
class OutputExampleFlow(Flow):
@@ -133,17 +130,18 @@ class OutputExampleFlow(Flow):
    def second_method(self, first_output):
        return f"Second method received: {first_output}"
flow = OutputExampleFlow()
final_output = flow.kickoff()
print("---- Final Output ----")
print(final_output)
```
-```text
+```text Output
---- Final Output ----
Second method received: Output from first_method
```
</CodeGroup>
@@ -158,7 +156,7 @@ Here's an example of how to update and access the state:
<CodeGroup>
-```python
+```python Code
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
@@ -186,7 +184,7 @@ print("Final State:")
print(flow.state)
```
-```text
+```text Output
Final Output: Hello from first_method - updated by second_method
Final State:
counter=2 message='Hello from first_method - updated by second_method'
@@ -210,10 +208,10 @@ allowing developers to choose the approach that best fits their application's ne
In unstructured state management, all state is stored in the `state` attribute of the `Flow` class.
This approach offers flexibility, enabling developers to add or modify state attributes on the fly without defining a strict schema.
-```python
+```python Code
from crewai.flow.flow import Flow, listen, start
-class UnstructuredExampleFlow(Flow):
+class UntructuredExampleFlow(Flow):
    @start()
    def first_method(self):
@@ -232,7 +230,8 @@ class UnstructuredExampleFlow(Flow):
print(f"State after third_method: {self.state}") print(f"State after third_method: {self.state}")
flow = UnstructuredExampleFlow()
flow = UntructuredExampleFlow()
flow.kickoff() flow.kickoff()
``` ```
@@ -246,14 +245,16 @@ flow.kickoff()
Structured state management leverages predefined schemas to ensure consistency and type safety across the workflow.
By using models like Pydantic's `BaseModel`, developers can define the exact shape of the state, enabling better validation and auto-completion in development environments.
-```python
+```python Code
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
class ExampleState(BaseModel):
    counter: int = 0
    message: str = ""
class StructuredExampleFlow(Flow[ExampleState]):
    @start()
@@ -272,6 +273,7 @@ class StructuredExampleFlow(Flow[ExampleState]):
print(f"State after third_method: {self.state}") print(f"State after third_method: {self.state}")
flow = StructuredExampleFlow() flow = StructuredExampleFlow()
flow.kickoff() flow.kickoff()
``` ```
@@ -305,7 +307,7 @@ The `or_` function in Flows allows you to listen to multiple methods and trigger
<CodeGroup>
-```python
+```python Code
from crewai.flow.flow import Flow, listen, or_, start
class OrExampleFlow(Flow):
@@ -322,11 +324,13 @@ class OrExampleFlow(Flow):
    def logger(self, result):
        print(f"Logger: {result}")
flow = OrExampleFlow()
flow.kickoff()
```
-```text
+```text Output
Logger: Hello from the start method
Logger: Hello from the second method
```
@@ -342,7 +346,7 @@ The `and_` function in Flows allows you to listen to multiple methods and trigge
<CodeGroup>
-```python
+```python Code
from crewai.flow.flow import Flow, and_, listen, start
class AndExampleFlow(Flow):
@@ -364,7 +368,7 @@ flow = AndExampleFlow()
flow.kickoff()
```
-```text
+```text Output
---- Logger ----
{'greeting': 'Hello from the start method', 'joke': 'What do computers eat? Microchips.'}
```
@@ -381,7 +385,7 @@ You can specify different routes based on the output of the method, allowing you
<CodeGroup>
-```python
+```python Code
import random
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel
@@ -412,11 +416,12 @@ class RouterFlow(Flow[ExampleState]):
    def fourth_method(self):
        print("Fourth method running")
flow = RouterFlow()
flow.kickoff()
```
-```text
+```text Output
Starting the structured flow
Third method running
Fourth method running
@@ -479,7 +484,7 @@ The `main.py` file is where you create your flow and connect the crews together.
Here's an example of how you can connect the `poem_crew` in the `main.py` file:
-```python
+```python Code
#!/usr/bin/env python
from random import randint
@@ -555,42 +560,6 @@ uv run kickoff
The flow will execute, and you should see the output in the console.
### Adding Additional Crews Using the CLI
Once you have created your initial flow, you can easily add additional crews to your project using the CLI. This allows you to expand your flow's capabilities by integrating new crews without starting from scratch.
To add a new crew to your existing flow, use the following command:
```bash
crewai flow add-crew <crew_name>
```
This command will create a new directory for your crew within the `crews` folder of your flow project. It will include the necessary configuration files and a crew definition file, similar to the initial setup.
#### Folder Structure
After adding a new crew, your folder structure will look like this:
| Directory/File | Description |
| :--------------------- | :----------------------------------------------------------------- |
| `name_of_flow/` | Root directory for the flow. |
| ├── `crews/` | Contains directories for specific crews. |
| │ ├── `poem_crew/` | Directory for the "poem_crew" with its configurations and scripts. |
| │ │ ├── `config/` | Configuration files directory for the "poem_crew". |
| │ │ │ ├── `agents.yaml` | YAML file defining the agents for "poem_crew". |
| │ │ │ └── `tasks.yaml` | YAML file defining the tasks for "poem_crew". |
| │ │ └── `poem_crew.py` | Script for "poem_crew" functionality. |
| └── `name_of_crew/` | Directory for the new crew. |
| ├── `config/` | Configuration files directory for the new crew. |
| │ ├── `agents.yaml` | YAML file defining the agents for the new crew. |
| │ └── `tasks.yaml` | YAML file defining the tasks for the new crew. |
| └── `name_of_crew.py` | Script for the new crew functionality. |
You can then customize the `agents.yaml` and `tasks.yaml` files to define the agents and tasks for your new crew. The `name_of_crew.py` file will contain the crew's logic, which you can modify to suit your needs.
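The new crew's Python file typically follows the same decorator-based layout as the template crew. Here's a minimal sketch; the class and method names are placeholders, and the `crewai.project` decorator imports reflect the standard project template:

```python
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task

@CrewBase
class NameOfCrew:
    """Crew scaffolded by `crewai flow add-crew`."""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def researcher(self) -> Agent:
        # "researcher" must match a key in agents.yaml
        return Agent(config=self.agents_config["researcher"])

    @task
    def research_task(self) -> Task:
        # "research_task" must match a key in tasks.yaml
        return Task(config=self.tasks_config["research_task"])

    @crew
    def crew(self) -> Crew:
        # Agents and tasks are collected from the decorated methods above
        return Crew(agents=self.agents, tasks=self.tasks, process=Process.sequential)
```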
By using the CLI to add additional crews, you can efficiently build complex AI workflows that leverage multiple crews working together.
## Plot Flows

Visualizing your AI workflows can provide valuable insights into the structure and execution paths of your flows. CrewAI offers a powerful visualization tool that allows you to generate interactive plots of your flows, making it easier to understand and optimize your AI workflows.
@@ -607,7 +576,7 @@ CrewAI provides two convenient methods to generate plots of your flows:
If you are working directly with a flow instance, you can generate a plot by calling the `plot()` method on your flow object. This method will create an HTML file containing the interactive plot of your flow.

```python Code
# Assuming you have a flow instance
flow.plot("my_flow_plot")
```
@@ -630,114 +599,13 @@ The generated plot will display nodes representing the tasks in your flow, with
By visualizing your flows, you can gain a clearer understanding of the workflow's structure, making it easier to debug, optimize, and communicate your AI processes to others.
### Conclusion

Plotting your flows is a powerful feature of CrewAI that enhances your ability to design and manage complex AI workflows. Whether you choose to use the `plot()` method or the command line, generating plots will provide you with a visual representation of your workflows, aiding in both development and presentation.

## Advanced
In this section, we explore more complex use cases of CrewAI Flows, starting with a self-evaluation loop. This pattern is crucial for developing AI systems that can iteratively improve their outputs through feedback.
### 1) Self-Evaluation Loop
The self-evaluation loop is a powerful pattern that allows AI workflows to automatically assess and refine their outputs. This example demonstrates how to set up a flow that generates content, evaluates it, and iterates based on feedback until the desired quality is achieved.
#### Overview
The self-evaluation loop involves two main Crews:
1. **ShakespeareanXPostCrew**: Generates a Shakespearean-style post on a given topic.
2. **XPostReviewCrew**: Evaluates the generated post, providing feedback on its validity and quality.
The process iterates until the post meets the criteria or a maximum retry limit is reached. This approach ensures high-quality outputs through iterative refinement.
#### Importance
This pattern is essential for building robust AI systems that can adapt and improve over time. By automating the evaluation and feedback loop, developers can ensure that their AI workflows produce reliable and high-quality results.
#### Main Code Highlights
Below is the `main.py` file for the self-evaluation loop flow:
```python
from typing import Optional

from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel

from self_evaluation_loop_flow.crews.shakespeare_crew.shakespeare_crew import (
    ShakespeareanXPostCrew,
)
from self_evaluation_loop_flow.crews.x_post_review_crew.x_post_review_crew import (
    XPostReviewCrew,
)


class ShakespeareXPostFlowState(BaseModel):
    x_post: str = ""
    feedback: Optional[str] = None
    valid: bool = False
    retry_count: int = 0


class ShakespeareXPostFlow(Flow[ShakespeareXPostFlowState]):

    @start("retry")
    def generate_shakespeare_x_post(self):
        print("Generating Shakespearean X post")
        topic = "Flying cars"
        result = (
            ShakespeareanXPostCrew()
            .crew()
            .kickoff(inputs={"topic": topic, "feedback": self.state.feedback})
        )
        print("X post generated", result.raw)
        self.state.x_post = result.raw

    @router(generate_shakespeare_x_post)
    def evaluate_x_post(self):
        if self.state.retry_count > 3:
            return "max_retry_exceeded"

        result = XPostReviewCrew().crew().kickoff(inputs={"x_post": self.state.x_post})
        self.state.valid = result["valid"]
        self.state.feedback = result["feedback"]

        print("valid", self.state.valid)
        print("feedback", self.state.feedback)
        self.state.retry_count += 1

        if self.state.valid:
            return "complete"

        return "retry"

    @listen("complete")
    def save_result(self):
        print("X post is valid")
        print("X post:", self.state.x_post)

        with open("x_post.txt", "w") as file:
            file.write(self.state.x_post)

    @listen("max_retry_exceeded")
    def max_retry_exceeded_exit(self):
        print("Max retry count exceeded")
        print("X post:", self.state.x_post)
        print("Feedback:", self.state.feedback)


def kickoff():
    shakespeare_flow = ShakespeareXPostFlow()
    shakespeare_flow.kickoff()


def plot():
    shakespeare_flow = ShakespeareXPostFlow()
    shakespeare_flow.plot()


if __name__ == "__main__":
    kickoff()
```
#### Code Highlights
- **Retry Mechanism**: The flow uses a retry mechanism to regenerate the post if it doesn't meet the criteria, up to a maximum of three retries.
- **Feedback Loop**: Feedback from the `XPostReviewCrew` is used to refine the post iteratively.
- **State Management**: The flow maintains state using a Pydantic model, ensuring type safety and clarity.
For a complete example and further details, please refer to the [Self Evaluation Loop Flow repository](https://github.com/crewAIInc/crewAI-examples/tree/main/self_evaluation_loop_flow).
## Next Steps

If you're interested in exploring additional examples of flows, we have a variety of recommendations in our examples repository. Here are five specific flow examples, each showcasing unique use cases to help you match your current problem type to a specific example:

1. **Email Auto Responder Flow**: This example demonstrates an infinite loop where a background job continually runs to automate email responses. It's a great use case for tasks that need to be performed repeatedly without manual intervention. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/email_auto_responder_flow)
@@ -747,8 +615,6 @@ If you're interested in exploring additional examples of flows, we have a variet
4. **Meeting Assistant Flow**: This flow demonstrates how to broadcast one event to trigger multiple follow-up actions. For instance, after a meeting is completed, the flow can update a Trello board, send a Slack message, and save the results. It's a great example of handling multiple outcomes from a single event, making it ideal for comprehensive task management and notification systems. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/meeting_assistant_flow)
5. **Self Evaluation Loop Flow**: This flow demonstrates a self-evaluation loop where AI workflows automatically assess and refine their outputs through feedback. It involves generating content, evaluating it, and iterating until the desired quality is achieved. This pattern is crucial for developing robust AI systems that can adapt and improve over time. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/self_evaluation_loop_flow)
By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.

Also, check out our YouTube video on how to use flows in CrewAI below!
@@ -762,4 +628,4 @@ Also, check out our YouTube video on how to use flows in CrewAI below!
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
  referrerpolicy="strict-origin-when-cross-origin"
  allowfullscreen
></iframe>
@@ -6,28 +6,54 @@ icon: book
# Using Knowledge in CrewAI

## What is Knowledge?

Knowledge in CrewAI is a powerful system that allows AI agents to access and utilize external information sources during their tasks.
Think of it as giving your agents a reference library they can consult while working.

<Info>
Key benefits of using Knowledge:
- Enhance agents with domain-specific information
- Support decisions with real-world data
- Maintain context across conversations
- Ground responses in factual information
</Info>
## Supported Knowledge Sources

CrewAI supports various types of knowledge sources out of the box:
<CardGroup cols={2}>
<Card title="Text Sources" icon="text">
- Raw strings
- Text files (.txt)
- PDF documents
</Card>
<Card title="Structured Data" icon="table">
- CSV files
- Excel spreadsheets
- JSON documents
</Card>
</CardGroup>
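For file-backed sources, construction mirrors the string example in the Quick Start below. A small sketch; the module paths and the `file_paths` parameter follow the `string_knowledge_source` import convention and are assumptions to verify against your installed version:

```python Code
# Assumed module paths, mirroring string_knowledge_source
from crewai.knowledge.source.text_file_knowledge_source import TextFileKnowledgeSource
from crewai.knowledge.source.pdf_knowledge_source import PDFKnowledgeSource

# Paths are typically resolved relative to the project's knowledge directory
text_source = TextFileKnowledgeSource(file_paths=["notes.txt"])
pdf_source = PDFKnowledgeSource(file_paths=["manual.pdf"])
```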
## Quick Start
Here's an example using string-based knowledge:
```python Code
from crewai import Agent, Task, Crew, Process, LLM
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# Create a knowledge source
content = "Users name is John. He is 30 years old and lives in San Francisco."
string_source = StringKnowledgeSource(
    content=content,
)

# Create an LLM with a temperature of 0 to ensure deterministic outputs
llm = LLM(model="gpt-4o-mini", temperature=0)

# Create an agent with the knowledge store
agent = Agent(
    role="About User",
    goal="You know everything about the user.",
@@ -47,29 +73,240 @@ crew = Crew(
    tasks=[task],
    verbose=True,
    process=Process.sequential,
    knowledge_sources=[string_source], # Enable knowledge by adding the sources here. You can also add more sources to the sources list.
)

result = crew.kickoff(inputs={"question": "What city does John live in and how old is he?"})
```
## Knowledge Configuration
### Chunking Configuration
Control how content is split for processing by setting the chunk size and overlap.
```python Code
knowledge_source = StringKnowledgeSource(
content="Long content...",
chunk_size=4000, # Characters per chunk (default)
chunk_overlap=200 # Overlap between chunks (default)
)
```
## Embedder Configuration

You can also configure the embedder for the knowledge store. This is useful if you want to use a different embedder for the knowledge store than the one used for the agents.

```python Code
...
string_source = StringKnowledgeSource(
    content="Users name is John. He is 30 years old and lives in San Francisco.",
)
crew = Crew(
    ...
    knowledge_sources=[string_source],
    embedder={
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
)
```
## Clearing Knowledge
If you need to clear the knowledge stored in CrewAI, you can use the `crewai reset-memories` command with the `--knowledge` option.
```bash Command
crewai reset-memories --knowledge
```
This is useful when you've updated your knowledge sources and want to ensure that the agents are using the most recent information.
## Custom Knowledge Sources
CrewAI allows you to create custom knowledge sources for any type of data by extending the `BaseKnowledgeSource` class. Let's create a practical example that fetches and processes space news articles.
#### Space News Knowledge Source Example
<CodeGroup>
```python Code
from crewai import Agent, Task, Crew, Process, LLM
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
import requests
from datetime import datetime
from typing import Dict, Any
from pydantic import BaseModel, Field

class SpaceNewsKnowledgeSource(BaseKnowledgeSource):
    """Knowledge source that fetches data from Space News API."""

    api_endpoint: str = Field(description="API endpoint URL")
    limit: int = Field(default=10, description="Number of articles to fetch")

    def load_content(self) -> Dict[Any, str]:
        """Fetch and format space news articles."""
        try:
            response = requests.get(
                f"{self.api_endpoint}?limit={self.limit}"
            )
            response.raise_for_status()

            data = response.json()
            articles = data.get('results', [])

            formatted_data = self._format_articles(articles)
            return {self.api_endpoint: formatted_data}
        except Exception as e:
            raise ValueError(f"Failed to fetch space news: {str(e)}")

    def _format_articles(self, articles: list) -> str:
        """Format articles into readable text."""
        formatted = "Space News Articles:\n\n"
        for article in articles:
            formatted += f"""
                Title: {article['title']}
                Published: {article['published_at']}
                Summary: {article['summary']}
                News Site: {article['news_site']}
                URL: {article['url']}
                -------------------"""
        return formatted

    def add(self) -> None:
        """Process and store the articles."""
        content = self.load_content()
        for _, text in content.items():
            chunks = self._chunk_text(text)
            self.chunks.extend(chunks)

        self._save_documents()

# Create knowledge source
recent_news = SpaceNewsKnowledgeSource(
    api_endpoint="https://api.spaceflightnewsapi.net/v4/articles",
    limit=10,
)

# Create specialized agent
space_analyst = Agent(
    role="Space News Analyst",
    goal="Answer questions about space news accurately and comprehensively",
    backstory="""You are a space industry analyst with expertise in space exploration,
    satellite technology, and space industry trends. You excel at answering questions
    about space news and providing detailed, accurate information.""",
    knowledge_sources=[recent_news],
    llm=LLM(model="gpt-4", temperature=0.0)
)

# Create task that handles user questions
analysis_task = Task(
    description="Answer this question about space news: {user_question}",
    expected_output="A detailed answer based on the recent space news articles",
    agent=space_analyst
)

# Create and run the crew
crew = Crew(
    agents=[space_analyst],
    tasks=[analysis_task],
    verbose=True,
    process=Process.sequential
)

# Example usage
result = crew.kickoff(
    inputs={"user_question": "What are the latest developments in space exploration?"}
)
```
```output Output
# Agent: Space News Analyst
## Task: Answer this question about space news: What are the latest developments in space exploration?
# Agent: Space News Analyst
## Final Answer:
The latest developments in space exploration, based on recent space news articles, include the following:
1. SpaceX has received the final regulatory approvals to proceed with the second integrated Starship/Super Heavy launch, scheduled for as soon as the morning of Nov. 17, 2023. This is a significant step in SpaceX's ambitious plans for space exploration and colonization. [Source: SpaceNews](https://spacenews.com/starship-cleared-for-nov-17-launch/)
2. SpaceX has also informed the US Federal Communications Commission (FCC) that it plans to begin launching its first next-generation Starlink Gen2 satellites. This represents a major upgrade to the Starlink satellite internet service, which aims to provide high-speed internet access worldwide. [Source: Teslarati](https://www.teslarati.com/spacex-first-starlink-gen2-satellite-launch-2022/)
3. AI startup Synthetaic has raised $15 million in Series B funding. The company uses artificial intelligence to analyze data from space and air sensors, which could have significant applications in space exploration and satellite technology. [Source: SpaceNews](https://spacenews.com/ai-startup-synthetaic-raises-15-million-in-series-b-funding/)
4. The Space Force has formally established a unit within the U.S. Indo-Pacific Command, marking a permanent presence in the Indo-Pacific region. This could have significant implications for space security and geopolitics. [Source: SpaceNews](https://spacenews.com/space-force-establishes-permanent-presence-in-indo-pacific-region/)
5. Slingshot Aerospace, a space tracking and data analytics company, is expanding its network of ground-based optical telescopes to increase coverage of low Earth orbit. This could improve our ability to track and analyze objects in low Earth orbit, including satellites and space debris. [Source: SpaceNews](https://spacenews.com/slingshots-space-tracking-network-to-extend-coverage-of-low-earth-orbit/)
6. The National Natural Science Foundation of China has outlined a five-year project for researchers to study the assembly of ultra-large spacecraft. This could lead to significant advancements in spacecraft technology and space exploration capabilities. [Source: SpaceNews](https://spacenews.com/china-researching-challenges-of-kilometer-scale-ultra-large-spacecraft/)
7. The Center for AEroSpace Autonomy Research (CAESAR) at Stanford University is focusing on spacecraft autonomy. The center held a kickoff event on May 22, 2024, to highlight the industry, academia, and government collaboration it seeks to foster. This could lead to significant advancements in autonomous spacecraft technology. [Source: SpaceNews](https://spacenews.com/stanford-center-focuses-on-spacecraft-autonomy/)
```
</CodeGroup>
#### Key Components Explained
1. **Custom Knowledge Source (`SpaceNewsKnowledgeSource`)**:
- Extends `BaseKnowledgeSource` for integration with CrewAI
- Configurable API endpoint and article limit
- Implements three key methods:
- `load_content()`: Fetches articles from the API
- `_format_articles()`: Structures the articles into readable text
- `add()`: Processes and stores the content
2. **Agent Configuration**:
- Specialized role as a Space News Analyst
- Uses the knowledge source to access space news
3. **Task Setup**:
- Takes a user question as input through `{user_question}`
- Designed to provide detailed answers based on the knowledge source
4. **Crew Orchestration**:
- Manages the workflow between agent and task
- Handles input/output through the kickoff method
This example demonstrates how to:
- Create a custom knowledge source that fetches real-time data
- Process and format external data for AI consumption
- Use the knowledge source to answer specific user questions
- Integrate everything seamlessly with CrewAI's agent system
#### About the Spaceflight News API
The example uses the [Spaceflight News API](https://api.spaceflightnewsapi.net/v4/documentation), which:
- Provides free access to space-related news articles
- Requires no authentication
- Returns structured data about space news
- Supports pagination and filtering
You can customize the API query by modifying the endpoint URL:
```python
# Fetch more articles
recent_news = SpaceNewsKnowledgeSource(
    api_endpoint="https://api.spaceflightnewsapi.net/v4/articles",
    limit=20,  # Increase the number of articles
)

# Add search parameters
recent_news = SpaceNewsKnowledgeSource(
    api_endpoint="https://api.spaceflightnewsapi.net/v4/articles?search=NASA",  # Search for NASA news
    limit=10,
)
```
## Best Practices
<AccordionGroup>
<Accordion title="Content Organization">
- Keep chunk sizes appropriate for your content type
- Consider content overlap for context preservation
- Organize related information into separate knowledge sources
</Accordion>
<Accordion title="Performance Tips">
- Adjust chunk sizes based on content complexity
- Configure appropriate embedding models
- Consider using local embedding providers for faster processing
</Accordion>
</AccordionGroup>
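Putting a few of these tips together, here's a sketch of a source with tuned chunking and a locally hosted embedder. The `ollama` provider name and embedding model are assumptions; substitute whichever embedding provider your setup supports:

```python Code
from crewai import Agent, Crew, Task
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

source = StringKnowledgeSource(
    content="Long domain-specific content...",
    chunk_size=2000,    # smaller chunks for dense technical text
    chunk_overlap=100,  # preserve context across chunk boundaries
)

agent = Agent(
    role="Domain Expert",
    goal="Answer questions from the knowledge base.",
    backstory="A specialist grounded in the provided documents.",
)
task = Task(
    description="Answer: {question}",
    expected_output="A grounded answer.",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    knowledge_sources=[source],
    embedder={
        "provider": "ollama",  # assumption: a local embedding provider
        "config": {"model": "nomic-embed-text"},
    },
)
```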
@@ -7,32 +7,45 @@ icon: link
## Using LangChain Tools

<Info>
CrewAI seamlessly integrates with LangChain's comprehensive [list of tools](https://python.langchain.com/docs/integrations/tools/), all of which can be used with CrewAI.
</Info>

```python Code
import os
from dotenv import load_dotenv
from crewai import Agent, Task, Crew
from crewai.tools import BaseTool
from pydantic import Field
from langchain_community.utilities import GoogleSerperAPIWrapper

# Set up your SERPER_API_KEY key in an .env file, eg:
# SERPER_API_KEY=<your api key>
load_dotenv()

search = GoogleSerperAPIWrapper()

class SearchTool(BaseTool):
    name: str = "Search"
    description: str = "Useful for search-based queries. Use this to find current information about markets, companies, and trends."
    search: GoogleSerperAPIWrapper = Field(default_factory=GoogleSerperAPIWrapper)

    def _run(self, query: str) -> str:
        """Execute the search query and return results"""
        try:
            return self.search.run(query)
        except Exception as e:
            return f"Error performing search: {str(e)}"

# Create Agents
researcher = Agent(
    role='Research Analyst',
    goal='Gather current market data and trends',
    backstory="""You are an expert research analyst with years of experience in
    gathering market intelligence. You're known for your ability to find
    relevant and up-to-date market information and present it in a clear,
    actionable format.""",
    tools=[SearchTool()],
    verbose=True
)

# rest of the code ...
```
@@ -40,6 +53,6 @@ agent = Agent(
## Conclusion

Tools are pivotal in extending the capabilities of CrewAI agents, enabling them to undertake a broad spectrum of tasks and collaborate effectively.
When building solutions with CrewAI, leverage both custom and existing tools to empower your agents and enhance the AI ecosystem. Consider utilizing error handling, caching mechanisms,
and the flexibility of tool arguments to optimize your agents' performance and capabilities.
@@ -1,205 +1,323 @@
---
title: 'LLMs'
description: 'A comprehensive guide to configuring and using Large Language Models (LLMs) in your CrewAI projects'
icon: 'microchip-ai'
---

<Note>
CrewAI integrates with multiple LLM providers through LiteLLM, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
</Note>

## What are LLMs?

Large Language Models (LLMs) are the core intelligence behind CrewAI agents. They enable agents to understand context, make decisions, and generate human-like responses. Here's what you need to know:

<CardGroup cols={2}>
<Card title="LLM Basics" icon="brain">
Large Language Models are AI systems trained on vast amounts of text data. They power the intelligence of your CrewAI agents, enabling them to understand and generate human-like text.
</Card>
<Card title="Context Window" icon="window">
The context window determines how much text an LLM can process at once. Larger windows (e.g., 128K tokens) allow for more context but may be more expensive and slower.
</Card>
<Card title="Temperature" icon="temperature-three-quarters">
Temperature (0.0 to 1.0) controls response randomness. Lower values (e.g., 0.2) produce more focused, deterministic outputs, while higher values (e.g., 0.8) increase creativity and variability.
</Card>
<Card title="Provider Selection" icon="server">
Each LLM provider (e.g., OpenAI, Anthropic, Google) offers different models with varying capabilities, pricing, and features. Choose based on your needs for accuracy, speed, and cost.
</Card>
</CardGroup>
## Configuring LLMs for Agents

CrewAI offers flexible options for setting up LLMs:
### 1. Default Configuration
By default, CrewAI uses the `gpt-4o-mini` model. It uses environment variables if no LLM is specified:
- `OPENAI_MODEL_NAME` (defaults to "gpt-4o-mini" if not set)
- `OPENAI_API_BASE`
- `OPENAI_API_KEY`
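For example, the same defaults can be set from Python before the crew is created; a short sketch (a `.env` file is the more common approach in practice):

```python Code
import os

# Equivalent to exporting the variables in your shell; CrewAI reads
# these when no LLM is specified explicitly.
os.environ["OPENAI_API_KEY"] = "<your-api-key>"
os.environ["OPENAI_MODEL_NAME"] = "gpt-4o-mini"
# os.environ["OPENAI_API_BASE"] = "<optional-custom-base-url>"
```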
### 2. Updating YAML files
You can update the `agents.yml` file to refer to the LLM you want to use:
```yaml Code
researcher:
role: Research Specialist
goal: Conduct comprehensive research and analysis to gather relevant information,
synthesize findings, and produce well-documented insights.
backstory: A dedicated research professional with years of experience in academic
investigation, literature review, and data analysis, known for thorough and
methodical approaches to complex research questions.
verbose: true
llm: openai/gpt-4o
# llm: azure/gpt-4o-mini
# llm: gemini/gemini-pro
# llm: anthropic/claude-3-5-sonnet-20240620
# llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
# llm: mistral/mistral-large-latest
# llm: ollama/llama3:70b
# llm: groq/llama-3.2-90b-vision-preview
# llm: watsonx/meta-llama/llama-3-1-70b-instruct
# llm: nvidia_nim/meta/llama3-70b-instruct
# llm: sambanova/Meta-Llama-3.1-8B-Instruct
# ...
```
Keep in mind that you will need to set certain environment variables, depending on the model you use, to provide credentials, or set a custom LLM object as described below.
Here are some of the required ENV vars for some of the LLM integrations:
<AccordionGroup>
<Accordion title="OpenAI">
```python Code
OPENAI_API_KEY=<your-api-key>
OPENAI_MODEL_NAME=<openai-model-name>
OPENAI_API_BASE=<optional-custom-base-url> # OPTIONAL
OPENAI_ORGANIZATION=<your-org-id> # OPTIONAL
```
</Accordion>
<Accordion title="Anthropic">
```python Code
ANTHROPIC_API_KEY=<your-api-key>
```
</Accordion>
<Accordion title="Google">
```python Code
GEMINI_API_KEY=<your-api-key>
```
</Accordion>
<Accordion title="Azure">
```python Code
AZURE_API_KEY=<your-api-key> # "my-azure-api-key"
AZURE_API_BASE=<your-resource-url> # "https://example-endpoint.openai.azure.com"
AZURE_API_VERSION=<api-version> # "2023-05-15"
AZURE_AD_TOKEN=<your-azure-ad-token> # Optional
AZURE_API_TYPE=<your-azure-api-type> # Optional
```
</Accordion>
<Accordion title="AWS Bedrock">
```python Code
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=<your-region>
```
</Accordion>
<Accordion title="Mistral">
```python Code
MISTRAL_API_KEY=<your-api-key>
```
</Accordion>
<Accordion title="Groq">
```python Code
GROQ_API_KEY=<your-api-key>
```
</Accordion>
<Accordion title="IBM watsonx.ai">
```python Code
WATSONX_URL=<your-url> # (required) Base URL of your WatsonX instance
WATSONX_APIKEY=<your-apikey> # (required) IBM cloud API key
WATSONX_TOKEN=<your-token> # (required) IAM auth token (alternative to APIKEY)
WATSONX_PROJECT_ID=<your-project-id> # (optional) Project ID of your WatsonX instance
WATSONX_DEPLOYMENT_SPACE_ID=<your-space-id> # (optional) ID of deployment space for deployed models
```
</Accordion>
</AccordionGroup>
### 3. Custom LLM Objects
Pass a custom LLM implementation or object from another library.
See below for examples.
<Tabs>
<Tab title="String Identifier">
```python Code
agent = Agent(llm="gpt-4o", ...)
```
</Tab>

<Tab title="LLM Instance">
```python Code
from crewai import LLM

llm = LLM(model="gpt-4", temperature=0.7)
agent = Agent(llm=llm, ...)
```
</Tab>
</Tabs>

## Available Models and Their Capabilities

Here's a detailed breakdown of supported models and their capabilities:

<Tabs>
<Tab title="OpenAI">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| GPT-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
| GPT-4 Turbo | 128,000 tokens | Long-form content, document analysis |
| GPT-4o & GPT-4o-mini | 128,000 tokens | Cost-effective large context processing |

<Note>
1 token ≈ 4 characters in English. For example, 8,192 tokens ≈ 32,768 characters or about 6,000 words.
</Note>
</Tab>

<Tab title="Groq">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| Llama 3.1 70B/8B | 131,072 tokens | High-performance, large context tasks |
| Llama 3.2 Series | 8,192 tokens | General-purpose tasks |
| Mixtral 8x7B | 32,768 tokens | Balanced performance and context |
| Gemma Series | 8,192 tokens | Efficient, smaller-scale tasks |

<Tip>
Groq is known for its fast inference speeds, making it suitable for real-time applications.
</Tip>
</Tab>

<Tab title="Others">
| Provider | Context Window | Key Features |
|----------|---------------|--------------|
| Deepseek Chat | 128,000 tokens | Specialized in technical discussions |
| Claude 3 | Up to 200K tokens | Strong reasoning, code understanding |
| Gemini | Varies by model | Multimodal capabilities |

<Info>
Provider selection should consider factors like:
- API availability in your region
- Pricing structure
- Required features (e.g., streaming, function calling)
- Performance requirements
</Info>
</Tab>
</Tabs>
## Setting Up Your LLM

There are three ways to configure LLMs in CrewAI. Choose the method that best fits your workflow:

<Tabs>
<Tab title="1. Environment Variables">
The simplest way to get started. Set these variables in your environment:

```bash
# Required: Your API key for authentication
OPENAI_API_KEY=<your-api-key>

# Optional: Default model selection
OPENAI_MODEL_NAME=gpt-4o-mini  # Default if not set

# Optional: Organization ID (if applicable)
OPENAI_ORGANIZATION_ID=<your-org-id>
```

<Warning>
Never commit API keys to version control. Use environment files (.env) or your system's secret management.
</Warning>
</Tab>

<Tab title="2. YAML Configuration">
Create a YAML file to define your agent configurations. This method is great for version control and team collaboration:
```yaml
researcher:
# Agent Definition
role: Research Specialist
goal: Conduct comprehensive research and analysis
backstory: A dedicated research professional with years of experience
verbose: true
# Model Selection (uncomment your choice)
# OpenAI Models - Known for reliability and performance
llm: openai/gpt-4o-mini
# llm: openai/gpt-4 # More accurate but expensive
# llm: openai/gpt-4-turbo # Fast with large context
# llm: openai/gpt-4o # Optimized for longer texts
# llm: openai/o1-preview # Latest features
# llm: openai/o1-mini # Cost-effective
# Azure Models - For enterprise deployments
# llm: azure/gpt-4o-mini
# llm: azure/gpt-4
# llm: azure/gpt-35-turbo
# Anthropic Models - Strong reasoning capabilities
# llm: anthropic/claude-3-opus-20240229-v1:0
# llm: anthropic/claude-3-sonnet-20240229-v1:0
# llm: anthropic/claude-3-haiku-20240307-v1:0
# llm: anthropic/claude-2.1
# llm: anthropic/claude-2.0
# Google Models - Good for general tasks
# llm: gemini/gemini-pro
# llm: gemini/gemini-1.5-pro-latest
# llm: gemini/gemini-1.0-pro-latest
# AWS Bedrock Models - Enterprise-grade
# llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
# llm: bedrock/anthropic.claude-v2:1
# llm: bedrock/amazon.titan-text-express-v1
# llm: bedrock/meta.llama2-70b-chat-v1
# Mistral Models - Open source alternative
# llm: mistral/mistral-large-latest
# llm: mistral/mistral-medium-latest
# llm: mistral/mistral-small-latest
# Groq Models - Fast inference
# llm: groq/mixtral-8x7b-32768
# llm: groq/llama-3.1-70b-versatile
# llm: groq/llama-3.2-90b-text-preview
# llm: groq/gemma2-9b-it
# llm: groq/gemma-7b-it
# IBM watsonx.ai Models - Enterprise features
# llm: watsonx/ibm/granite-13b-chat-v2
# llm: watsonx/meta-llama/llama-3-1-70b-instruct
# llm: watsonx/bigcode/starcoder2-15b
# Ollama Models - Local deployment
# llm: ollama/llama3:70b
# llm: ollama/codellama
# llm: ollama/mistral
# llm: ollama/mixtral
# llm: ollama/phi
# Fireworks AI Models - Specialized tasks
# llm: fireworks_ai/accounts/fireworks/models/llama-v3-70b-instruct
# llm: fireworks_ai/accounts/fireworks/models/mixtral-8x7b
# llm: fireworks_ai/accounts/fireworks/models/zephyr-7b-beta
# Perplexity AI Models - Research focused
# llm: pplx/llama-3.1-sonar-large-128k-online
# llm: pplx/mistral-7b-instruct
# llm: pplx/codellama-34b-instruct
# llm: pplx/mixtral-8x7b-instruct
# Hugging Face Models - Community models
# llm: huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct
# llm: huggingface/mistralai/Mixtral-8x7B-Instruct-v0.1
# llm: huggingface/tiiuae/falcon-180B-chat
# llm: huggingface/google/gemma-7b-it
# Nvidia NIM Models - GPU-optimized
# llm: nvidia_nim/meta/llama3-70b-instruct
# llm: nvidia_nim/mistral/mixtral-8x7b
# llm: nvidia_nim/google/gemma-7b
# SambaNova Models - Enterprise AI
# llm: sambanova/Meta-Llama-3.1-8B-Instruct
# llm: sambanova/BioMistral-7B
# llm: sambanova/Falcon-180B
```
<Info>
The YAML configuration allows you to:
- Version control your agent settings
- Easily switch between different models
- Share configurations across team members
- Document model choices and their purposes
</Info>
</Tab>
<Tab title="3. Direct Code">
For maximum flexibility, configure LLMs directly in your Python code:
```python
from crewai import LLM from crewai import LLM
# Basic configuration
llm = LLM(model="gpt-4")
# Advanced configuration with detailed parameters
llm = LLM(
    model="gpt-4o-mini",
    temperature=0.7,       # Higher for more creative outputs
    timeout=120,           # Seconds to wait for response
    max_tokens=4000,       # Maximum length of response
    top_p=0.9,             # Nucleus sampling parameter
    frequency_penalty=0.1, # Reduce repetition
    presence_penalty=0.1,  # Encourage topic diversity
    response_format={"type": "json"},  # For structured outputs
    seed=42                # For reproducible results
)
```
<Info>
Parameter explanations:
- `temperature`: Controls randomness (0.0-1.0)
- `timeout`: Maximum wait time for response
- `max_tokens`: Limits response length
- `top_p`: Alternative to temperature for sampling
- `frequency_penalty`: Reduces word repetition
- `presence_penalty`: Encourages new topics
- `response_format`: Specifies output structure
- `seed`: Ensures consistent outputs
</Info>
</Tab>
</Tabs>
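However you configure it, the resulting `LLM` object is attached to an agent the same way. A minimal sketch, following the `Agent(llm=...)` pattern used elsewhere in these docs:

```python
from crewai import Agent, LLM

llm = LLM(model="gpt-4o-mini", temperature=0.2)

# The same llm object can be shared across agents, or each agent
# can receive its own configuration.
agent = Agent(
    role="Research Specialist",
    goal="Conduct comprehensive research and analysis",
    backstory="A dedicated research professional.",
    llm=llm,
    verbose=True,
)
```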
## Advanced Features and Optimization
Learn how to get the most out of your LLM configuration:
<AccordionGroup>
<Accordion title="Context Window Management">
CrewAI includes smart context management features:
```python
from crewai import LLM
# CrewAI automatically handles:
# 1. Token counting and tracking
# 2. Content summarization when needed
# 3. Task splitting for large contexts
llm = LLM(
    model="gpt-4",
    max_tokens=4000  # Limit response length
)
```
<Info>
Best practices for context management:
1. Choose models with appropriate context windows
2. Pre-process long inputs when possible
3. Use chunking for large documents
4. Monitor token usage to optimize costs
</Info>
</Accordion>
<Accordion title="Performance Optimization">
<Steps>
<Step title="Token Usage Optimization">
Choose the right context window for your task:
- Small tasks (up to 4K tokens): Standard models
- Medium tasks (between 4K-32K): Enhanced models
- Large tasks (over 32K): Large context models
```python
# Configure model with appropriate settings
llm = LLM(
model="openai/gpt-4-turbo-preview",
temperature=0.7, # Adjust based on task
max_tokens=4096, # Set based on output needs
timeout=300 # Longer timeout for complex tasks
)
```
<Tip>
- Lower temperature (0.1 to 0.3) for factual responses
- Higher temperature (0.7 to 0.9) for creative tasks
</Tip>
</Step>
<Step title="Best Practices">
1. Monitor token usage
2. Implement rate limiting
3. Use caching when possible
4. Set appropriate max_tokens limits
</Step>
</Steps>
<Info>
Remember to regularly monitor your token usage and adjust your configuration as needed to optimize costs and performance.
</Info>
</Accordion>
</AccordionGroup>

## LLM Configuration Options

When configuring an LLM for your agent, you have access to a wide range of parameters:

| Parameter | Type | Description |
|:------------------|:---------------:|:-------------------------------------------------------------------------------------------------|
| **model** | `str` | Name of the model to use (e.g., "gpt-4", "gpt-3.5-turbo", "ollama/llama3.1"). For more options, visit the provider's documentation. |
| **timeout** | `float, int` | Maximum time (in seconds) to wait for a response. |
| **temperature** | `float` | Controls randomness in output (0.0 to 1.0). |
| **top_p** | `float` | Controls diversity of output (0.0 to 1.0). |
| **n** | `int` | Number of completions to generate. |
| **stop** | `str, List[str]` | Sequence(s) where generation should stop. |
| **max_tokens** | `int` | Maximum number of tokens to generate. |
| **presence_penalty** | `float` | Penalizes new tokens based on their presence in prior text. |
| **frequency_penalty** | `float` | Penalizes new tokens based on their frequency in prior text. |
| **logit_bias** | `Dict[int, float]` | Modifies likelihood of specified tokens appearing. |
| **response_format** | `Dict[str, Any]` | Specifies the format of the response (e.g., JSON object). |
| **seed** | `int` | Sets a random seed for deterministic results. |
| **logprobs** | `bool` | Returns log probabilities of output tokens if enabled. |
| **top_logprobs** | `int` | Number of most likely tokens for which to return log probabilities. |
| **base_url** | `str` | The base URL for the API endpoint. |
| **api_version** | `str` | Version of the API to use. |
| **api_key** | `str` | Your API key for authentication. |
## Provider Configuration Examples
<AccordionGroup>
<Accordion title="OpenAI">
```python Code
# Required
OPENAI_API_KEY=sk-...
# Optional
OPENAI_API_BASE=<custom-base-url>
OPENAI_ORGANIZATION=<your-org-id>
```
Example usage:
```python Code
from crewai import LLM
@@ -211,193 +329,306 @@ These are examples of how to configure LLMs for your agent.
    frequency_penalty=0.1,
    presence_penalty=0.1,
    stop=["END"],
    seed=42
)
```
</Accordion>

<Accordion title="Anthropic">
```python Code
ANTHROPIC_API_KEY=sk-ant-...
```
Example usage:
```python Code
llm = LLM(
    model="anthropic/claude-3-sonnet-20240229-v1:0",
    temperature=0.7
)
```
</Accordion>

<Accordion title="Google">
```python Code
GEMINI_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
    model="gemini/gemini-pro",
    temperature=0.7
)
```
</Accordion>

<Accordion title="Azure">
```python Code
# Required
AZURE_API_KEY=<your-api-key>
AZURE_API_BASE=<your-resource-url>
AZURE_API_VERSION=<api-version>

# Optional
AZURE_AD_TOKEN=<your-azure-ad-token>
AZURE_API_TYPE=<your-azure-api-type>
```
Example usage:
```python Code
llm = LLM(
    model="azure/gpt-4",
    api_version="2023-05-15"
)
```
</Accordion>

<Accordion title="AWS Bedrock">
```python Code
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=<your-region>
```
Example usage:
```python Code
llm = LLM(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
)
```
</Accordion>

<Accordion title="Mistral">
```python Code
MISTRAL_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
    model="mistral/mistral-large-latest",
    temperature=0.7
)
```
</Accordion>

<Accordion title="Groq">
```python Code
GROQ_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
    model="groq/llama-3.2-90b-text-preview",
    temperature=0.7
)
```
</Accordion>

<Accordion title="IBM watsonx.ai">
```python Code
# Required
WATSONX_URL=<your-url>
WATSONX_APIKEY=<your-apikey>
WATSONX_PROJECT_ID=<your-project-id>

# Optional
WATSONX_TOKEN=<your-token>
WATSONX_DEPLOYMENT_SPACE_ID=<your-space-id>
```
Example usage:
```python Code
llm = LLM(
    model="watsonx/meta-llama/llama-3-1-70b-instruct",
    base_url="https://api.watsonx.ai/v1"
)
```
</Accordion>

<Accordion title="Ollama (Local LLMs)">
1. Install Ollama: [ollama.ai](https://ollama.ai/)
2. Run a model: `ollama run llama2`
3. Configure:

```python Code
llm = LLM(
    model="ollama/llama3:70b",
    base_url="http://localhost:11434"
)
```
</Accordion>

<Accordion title="Fireworks AI">
```python Code
FIREWORKS_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
    model="fireworks_ai/accounts/fireworks/models/llama-v3-70b-instruct",
    temperature=0.7
)
```
</Accordion>

<Accordion title="Perplexity AI">
```python Code
PERPLEXITY_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
    model="llama-3.1-sonar-large-128k-online",
    base_url="https://api.perplexity.ai/"
)
```
</Accordion>

<Accordion title="Hugging Face">
```python Code
HUGGINGFACE_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
    model="huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct",
    base_url="your_api_endpoint"
)
```
</Accordion>

<Accordion title="Nvidia NIM">
```python Code
NVIDIA_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
    model="nvidia_nim/meta/llama3-70b-instruct",
    temperature=0.7
)
```
</Accordion>

<Accordion title="SambaNova">
```python Code
SAMBANOVA_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
    model="sambanova/Meta-Llama-3.1-8B-Instruct",
    temperature=0.7
)
```
</Accordion>

<Accordion title="Cerebras">
```python Code
# Required
CEREBRAS_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
    model="cerebras/llama3.1-70b",
    temperature=0.7,
    max_tokens=8192
)
```
<Info>
Cerebras features:
- Fast inference speeds
- Competitive pricing
- Good balance of speed and quality
- Support for long context windows
</Info>
</Accordion>
</AccordionGroup>
## Changing the Base API URL

You can change the base API URL for any LLM provider by setting the `base_url` parameter:

```python Code
from crewai import LLM

llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key"
)
agent = Agent(llm=llm, ...)
```

This is particularly useful when working with OpenAI-compatible APIs or when you need to specify a different endpoint for your chosen provider.

## Common Issues and Solutions

<Tabs>
<Tab title="Authentication">
<Warning>
Most authentication issues can be resolved by checking API key format and environment variable names.
</Warning>

```bash
# OpenAI
OPENAI_API_KEY=sk-...

# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
```
</Tab>

<Tab title="Model Names">
<Check>
Always include the provider prefix in model names
</Check>

```python
# Correct
llm = LLM(model="openai/gpt-4")

# Incorrect
llm = LLM(model="gpt-4")
```
</Tab>

<Tab title="Context Length">
<Tip>
Use larger context models for extensive tasks
</Tip>

```python
# Large context model
llm = LLM(model="openai/gpt-4o")  # 128K tokens
```
</Tab>
</Tabs>

## Getting Help

If you need assistance, these resources are available:

<CardGroup cols={3}>
<Card
  title="LiteLLM Documentation"
  href="https://docs.litellm.ai/docs/"
  icon="book"
>
Comprehensive documentation for LiteLLM integration and troubleshooting common issues.
</Card>
<Card
  title="GitHub Issues"
  href="https://github.com/joaomdmoura/crewAI/issues"
  icon="bug"
>
Report bugs, request features, or browse existing issues for solutions.
</Card>
<Card
  title="Community Forum"
  href="https://community.crewai.com"
  icon="comment-question"
>
Connect with other CrewAI users, share experiences, and get help from the community.
</Card>
</CardGroup>

<Note>
Best Practices for API Key Security:
- Use environment variables or secure vaults
- Never commit keys to version control
- Rotate keys regularly
- Use separate keys for development and production
- Monitor key usage for unusual patterns
</Note>

## Best Practices

1. **Choose the right model**: Balance capability and cost.
2. **Optimize prompts**: Clear, concise instructions improve output.
3. **Manage tokens**: Monitor and limit token usage for efficiency.
4. **Use appropriate temperature**: Lower for factual tasks, higher for creative ones.
5. **Implement error handling**: Gracefully manage API errors and rate limits.

## Troubleshooting

- **API Errors**: Check your API key, network connection, and rate limits.
- **Unexpected Outputs**: Refine your prompts and adjust temperature or top_p.
- **Performance Issues**: Consider using a more powerful model or optimizing your queries.
- **Timeout Errors**: Increase the `timeout` parameter or optimize your input.
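For the error-handling advice above, one common pattern is to wrap the crew run in a simple retry loop. A sketch under assumptions: the backoff policy is illustrative, and the exception types raised for rate limits vary by provider:

```python Code
import time

from crewai import Agent, Crew, Task, LLM

llm = LLM(model="gpt-4o-mini", timeout=120)

agent = Agent(
    role="Researcher",
    goal="Summarize a topic",
    backstory="A careful analyst.",
    llm=llm,
)
task = Task(
    description="Summarize the topic: {topic}",
    expected_output="A short summary.",
    agent=agent,
)
crew = Crew(agents=[agent], tasks=[task])

# Simple exponential backoff; provider errors surface as exceptions.
for attempt in range(3):
    try:
        result = crew.kickoff(inputs={"topic": "LLM providers"})
        break
    except Exception as exc:
        wait = 2 ** attempt
        print(f"Attempt {attempt + 1} failed ({exc}); retrying in {wait}s")
        time.sleep(wait)
```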
@@ -1,6 +1,6 @@
---
title: Tasks
description: Detailed guide on managing and creating tasks within the CrewAI framework.
icon: list-check
---
@@ -8,41 +8,171 @@ icon: list-check
In the CrewAI framework, a `Task` is a specific assignment completed by an `Agent`.

Tasks provide all necessary details for execution, such as a description, the agent responsible, required tools, and more, facilitating a wide range of action complexities.

Tasks within CrewAI can be collaborative, requiring multiple agents to work together. This is managed through the task properties and orchestrated by the Crew's process, enhancing teamwork and efficiency.
### Task Execution Flow
Tasks can be executed in two ways:
- **Sequential**: Tasks are executed in the order they are defined
- **Hierarchical**: Tasks are assigned to agents based on their roles and expertise
The execution flow is defined when creating the crew:
```python Code
crew = Crew(
    agents=[agent1, agent2],
    tasks=[task1, task2],
    process=Process.sequential  # or Process.hierarchical
)
```
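If you choose the hierarchical process, a manager model is typically supplied to coordinate the agents; a minimal sketch, assuming the `manager_llm` parameter:

```python Code
from crewai import Agent, Crew, Process, Task

analyst = Agent(role="Analyst", goal="Analyze data", backstory="A data analyst.")
writer = Agent(role="Writer", goal="Write reports", backstory="A report writer.")

task1 = Task(description="Analyze the dataset.", expected_output="Key findings.", agent=analyst)
task2 = Task(description="Write the report.", expected_output="A short report.", agent=writer)

crew = Crew(
    agents=[analyst, writer],
    tasks=[task1, task2],
    process=Process.hierarchical,
    manager_llm="gpt-4o",  # assumption: hierarchical runs require a manager model
)
```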
## Task Attributes
| Attribute | Parameters | Type | Description |
| :------------------------------- | :---------------- | :---------------------------- | :------------------------------------------------------------------------------------------------------------------- |
| **Description** | `description` | `str` | A clear, concise statement of what the task entails. |
| **Expected Output** | `expected_output` | `str` | A detailed description of what the task's completion looks like. |
| **Name** _(optional)_ | `name` | `Optional[str]` | A name identifier for the task. |
| **Agent** _(optional)_ | `agent` | `Optional[BaseAgent]` | The agent responsible for executing the task. |
| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | The tools/resources the agent is limited to use for this task. |
| **Context** _(optional)_ | `context` | `Optional[List["Task"]]` | Other tasks whose outputs will be used as context for this task. |
| **Async Execution** _(optional)_ | `async_execution` | `Optional[bool]` | Whether the task should be executed asynchronously. Defaults to False. |
| **Config** _(optional)_ | `config` | `Optional[Dict[str, Any]]` | Task-specific configuration parameters. |
| **Output File** _(optional)_ | `output_file` | `Optional[str]` | File path for storing the task output. |
| **Output JSON** _(optional)_ | `output_json` | `Optional[Type[BaseModel]]` | A Pydantic model to structure the JSON output. |
| **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | A Pydantic model for task output. |
| **Callback** _(optional)_ | `callback` | `Optional[Any]` | Function/object to be executed after task completion. |
-## Creating a Task
-Creating a task involves defining its scope, responsible agent, and any additional attributes for flexibility:
+## Creating Tasks
+There are two ways to create tasks in CrewAI: using **YAML configuration (recommended)** or defining them **directly in code**.
### YAML Configuration (Recommended)
Using YAML configuration provides a cleaner, more maintainable way to define tasks. We strongly recommend using this approach to define tasks in your CrewAI projects.
After creating your CrewAI project as outlined in the [Installation](/installation) section, navigate to the `src/latest_ai_development/config/tasks.yaml` file and modify the template to match your specific task requirements.
<Note>
Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew:
```python Code
crew.kickoff(inputs={'topic': 'AI Agents'})
```
</Note>
Here's an example of how to configure tasks using YAML:
```yaml tasks.yaml
research_task:
  description: >
    Conduct a thorough research about {topic}
    Make sure you find any interesting and relevant information given
    the current year is 2024.
  expected_output: >
    A list with 10 bullet points of the most relevant information about {topic}
  agent: researcher

reporting_task:
  description: >
    Review the context you got and expand each topic into a full section for a report.
    Make sure the report is detailed and contains any and all relevant information.
  expected_output: >
    A fully fledged report with the main topics, each with a full section of information.
    Formatted as markdown without '```'
  agent: reporting_analyst
  output_file: report.md
```
To use this YAML configuration in your code, create a crew class that inherits from `CrewBase`:
```python crew.py
# src/latest_ai_development/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool

@CrewBase
class LatestAiDevelopmentCrew():
    """LatestAiDevelopment crew"""

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            tools=[SerperDevTool()]
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            verbose=True
        )

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task']
        )

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config['reporting_task']
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=[
                self.researcher(),
                self.reporting_analyst()
            ],
            tasks=[
                self.research_task(),
                self.reporting_task()
            ],
            process=Process.sequential
        )
```
<Note>
The names you use in your YAML files (`agents.yaml` and `tasks.yaml`) should match the method names in your Python code.
</Note>
### Direct Code Definition (Alternative)
Alternatively, you can define tasks directly in your code without using YAML configuration:
```python task.py
from crewai import Task

-task = Task(
-    description='Find and summarize the latest and most relevant news on AI',
-    agent=sales_agent,
-    expected_output='A bullet list summary of the top 5 most important AI news',
-)
+research_task = Task(
+    description="""
+    Conduct a thorough research about AI Agents.
+    Make sure you find any interesting and relevant information given
+    the current year is 2024.
+    """,
+    expected_output="""
+    A list with 10 bullet points of the most relevant information about AI Agents
+    """,
+    agent=researcher
+)
+
+reporting_task = Task(
+    description="""
+    Review the context you got and expand each topic into a full section for a report.
+    Make sure the report is detailed and contains any and all relevant information.
+    """,
+    expected_output="""
+    A fully fledged report with the main topics, each with a full section of information.
+    Formatted as markdown without '```'
+    """,
+    agent=reporting_analyst,
+    output_file="report.md"
+)
```
@@ -52,6 +182,8 @@ task = Task(
## Task Output
Understanding task outputs is crucial for building effective AI workflows. CrewAI provides a structured way to handle task results through the `TaskOutput` class, which supports multiple output formats and can be easily passed between tasks.
The output of a task in the CrewAI framework is encapsulated within the `TaskOutput` class. This class provides a structured way to access results of a task, including various formats such as raw output, JSON, and Pydantic models.
By default, the `TaskOutput` will only include the `raw` output. A `TaskOutput` will only include the `pydantic` or `json_dict` output if the original `Task` object was configured with `output_pydantic` or `output_json`, respectively.
@@ -112,6 +244,186 @@ if task_output.pydantic:
print(f"Pydantic Output: {task_output.pydantic}") print(f"Pydantic Output: {task_output.pydantic}")
``` ```
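For reference, a fuller sketch of inspecting a `TaskOutput` after a run (assuming `task` and `crew` are defined as in the earlier examples):
```python Code
result = crew.kickoff()
task_output = task.output

print(f"Raw Output: {task_output.raw}")  # always populated
if task_output.json_dict:                # set when the task used output_json
    print(f"JSON Output: {task_output.json_dict}")
if task_output.pydantic:                 # set when the task used output_pydantic
    print(f"Pydantic Output: {task_output.pydantic}")
```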
## Task Dependencies and Context
Tasks can depend on the output of other tasks using the `context` attribute. For example:
```python Code
research_task = Task(
    description="Research the latest developments in AI",
    expected_output="A list of recent AI developments",
    agent=researcher
)

analysis_task = Task(
    description="Analyze the research findings and identify key trends",
    expected_output="Analysis report of AI trends",
    agent=analyst,
    context=[research_task]  # This task will wait for research_task to complete
)
```
## Getting Structured Consistent Outputs from Tasks
When you need to ensure that a task outputs a structured and consistent format, you can use the `output_pydantic` or `output_json` properties on a task. These properties allow you to define the expected output structure, making it easier to parse and utilize the results in your application.
<Note>
It's also important to note that the output of the final task of a crew becomes the final output of the actual crew itself.
</Note>
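For instance, a minimal sketch (assuming `crew` and its final task `reporting_task` from the examples above):
```python Code
result = crew.kickoff()

# The crew-level result mirrors the final task's output
print(result.raw)
print(reporting_task.output.raw)  # same content as result.raw
```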
### Using `output_pydantic`
The `output_pydantic` property allows you to define a Pydantic model that the task output should conform to. This ensures that the output is not only structured but also validated according to the Pydantic model.
Here's an example demonstrating how to use `output_pydantic`:
```python Code
import json

from crewai import Agent, Crew, Process, Task
from pydantic import BaseModel


class Blog(BaseModel):
    title: str
    content: str


blog_agent = Agent(
    role="Blog Content Generator Agent",
    goal="Generate a blog title and content",
    backstory="""You are an expert content creator, skilled in crafting engaging and informative blog posts.""",
    verbose=False,
    allow_delegation=False,
    llm="gpt-4o",
)

task1 = Task(
    description="""Create a blog title and content on a given topic. Make sure the content is under 200 words.""",
    expected_output="A compelling blog title and well-written content.",
    agent=blog_agent,
    output_pydantic=Blog,
)

# Instantiate your crew with a sequential process
crew = Crew(
    agents=[blog_agent],
    tasks=[task1],
    verbose=True,
    process=Process.sequential,
)

result = crew.kickoff()

# Option 1: Accessing Properties Using Dictionary-Style Indexing
print("Accessing Properties - Option 1")
title = result["title"]
content = result["content"]
print("Title:", title)
print("Content:", content)

# Option 2: Accessing Properties Directly from the Pydantic Model
print("Accessing Properties - Option 2")
title = result.pydantic.title
content = result.pydantic.content
print("Title:", title)
print("Content:", content)

# Option 3: Accessing Properties Using the to_dict() Method
print("Accessing Properties - Option 3")
output_dict = result.to_dict()
title = output_dict["title"]
content = output_dict["content"]
print("Title:", title)
print("Content:", content)

# Option 4: Printing the Entire Blog Object
print("Accessing Properties - Option 4")
print("Blog:", result)
```
In this example:
* A Pydantic model `Blog` is defined with `title` and `content` fields.
* The task `task1` uses the `output_pydantic` property to specify that its output should conform to the `Blog` model.
* After executing the crew, you can access the structured output in multiple ways as shown.
#### Explanation of Accessing the Output
1. Dictionary-Style Indexing: You can directly access the fields using `result["field_name"]`. This works because the `CrewOutput` class implements the `__getitem__` method.
2. Directly from the Pydantic Model: Access the attributes directly from the `result.pydantic` object.
3. Using the `to_dict()` Method: Convert the output to a dictionary and access the fields.
4. Printing the Entire Object: Simply print the `result` object to see the structured output.
### Using `output_json`
The `output_json` property allows you to define the expected output in JSON format. This ensures that the task's output is a valid JSON structure that can be easily parsed and used in your application.
Here's an example demonstrating how to use `output_json`:
```python Code
import json

from crewai import Agent, Crew, Process, Task
from pydantic import BaseModel


# Define the Pydantic model for the blog
class Blog(BaseModel):
    title: str
    content: str


# Define the agent
blog_agent = Agent(
    role="Blog Content Generator Agent",
    goal="Generate a blog title and content",
    backstory="""You are an expert content creator, skilled in crafting engaging and informative blog posts.""",
    verbose=False,
    allow_delegation=False,
    llm="gpt-4o",
)

# Define the task with output_json set to the Blog model
task1 = Task(
    description="""Create a blog title and content on a given topic. Make sure the content is under 200 words.""",
    expected_output="A JSON object with 'title' and 'content' fields.",
    agent=blog_agent,
    output_json=Blog,
)

# Instantiate the crew with a sequential process
crew = Crew(
    agents=[blog_agent],
    tasks=[task1],
    verbose=True,
    process=Process.sequential,
)

# Kickoff the crew to execute the task
result = crew.kickoff()

# Option 1: Accessing Properties Using Dictionary-Style Indexing
print("Accessing Properties - Option 1")
title = result["title"]
content = result["content"]
print("Title:", title)
print("Content:", content)

# Option 2: Printing the Entire Blog Object
print("Accessing Properties - Option 2")
print("Blog:", result)
```
In this example:
* A Pydantic model `Blog` is defined with `title` and `content` fields, which is used to specify the structure of the JSON output.
* The task `task1` uses the `output_json` property to indicate that it expects a JSON output conforming to the `Blog` model.
* After executing the crew, you can access the structured JSON output in two ways as shown.
#### Explanation of Accessing the Output
1. Accessing Properties Using Dictionary-Style Indexing: You can access the fields directly using `result["field_name"]`. This is possible because the `CrewOutput` class implements the `__getitem__` method, allowing you to treat the output like a dictionary. In this option, we're retrieving the title and content from the result.
2. Printing the Entire Blog Object: By printing `result`, you get the string representation of the `CrewOutput` object. Since the `__str__` method is implemented to return the JSON output, this will display the entire output as a formatted string representing the `Blog` object.
---
By using `output_pydantic` or `output_json`, you ensure that your tasks produce outputs in a consistent and structured format, making it easier to process and utilize the data within your application or across multiple tasks.
## Integrating Tools with Tasks
Leverage tools from the [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools) for enhanced task performance and agent interaction.
@@ -167,16 +479,16 @@ This is useful when you have a task that depends on the output of another task t
# ...

research_ai_task = Task(
-    description='Find and summarize the latest AI news',
-    expected_output='A bullet list summary of the top 5 most important AI news',
+    description="Research the latest developments in AI",
+    expected_output="A list of recent AI developments",
    async_execution=True,
    agent=research_agent,
    tools=[search_tool]
)

research_ops_task = Task(
-    description='Find and summarize the latest AI Ops news',
-    expected_output='A bullet list summary of the top 5 most important AI Ops news',
+    description="Research the latest developments in AI Ops",
+    expected_output="A list of recent AI Ops developments",
    async_execution=True,
    agent=research_agent,
    tools=[search_tool]
@@ -184,7 +496,7 @@ research_ops_task = Task(
write_blog_task = Task(
    description="Write a full blog post about the importance of AI and its latest news",
-    expected_output='Full blog post that is 4 paragraphs long',
+    expected_output="Full blog post that is 4 paragraphs long",
    agent=writer_agent,
    context=[research_ai_task, research_ops_task]
)
@@ -320,4 +632,4 @@ save_output_task = Task(
Tasks are the driving force behind the actions of agents in CrewAI.
By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit.
Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential,
ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.

@@ -0,0 +1,59 @@
---
title: Before and After Kickoff Hooks
description: Learn how to use before and after kickoff hooks in CrewAI
---
CrewAI provides hooks that allow you to execute code before and after a crew's kickoff. These hooks are useful for preprocessing inputs or post-processing results.
## Before Kickoff Hook
The before kickoff hook is executed before the crew starts its tasks. It receives the input dictionary and can modify it before passing it to the crew. You can use this hook to set up your environment, load necessary data, or preprocess your inputs. This is useful in scenarios where the input data might need enrichment or validation before being processed by the crew.
Here's an example of defining a before kickoff function in your `crew.py`:
```python
from crewai.project import CrewBase, before_kickoff

@CrewBase
class MyCrew:
    @before_kickoff
    def prepare_data(self, inputs):
        # Preprocess or modify inputs
        inputs['processed'] = True
        return inputs

    # ...
```
In this example, the `prepare_data` function modifies the inputs by adding a new key-value pair indicating that the inputs have been processed.
## After Kickoff Hook
The after kickoff hook is executed after the crew has completed its tasks. It receives the result object, which contains the outputs of the crew's execution. This hook is ideal for post-processing results, such as logging, data transformation, or further analysis.
Here's how you can define an after kickoff function in your `crew.py`:
```python
from crewai.project import CrewBase, after_kickoff

@CrewBase
class MyCrew:
    @after_kickoff
    def log_results(self, result):
        # Log or modify the results
        print("Crew execution completed with result:", result)
        return result

    # ...
```
In the `log_results` function, the results of the crew execution are simply printed out. You can extend this to perform more complex operations such as sending notifications or integrating with other services.
## Utilizing Both Hooks
Both hooks can be used together to provide a comprehensive setup and teardown process for your crew's execution. They are particularly useful in maintaining clean code architecture by separating concerns and enhancing the modularity of your CrewAI implementations.
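For example, a minimal sketch combining both hooks in a single crew class (imports follow the `crewai.project` convention used above; the hook bodies are illustrative):
```python
from crewai.project import CrewBase, before_kickoff, after_kickoff

@CrewBase
class MyCrew:
    @before_kickoff
    def prepare_data(self, inputs):
        # Validate or enrich inputs before the crew runs
        inputs.setdefault('verified', True)
        return inputs

    @after_kickoff
    def log_results(self, result):
        # Post-process the result before returning it to the caller
        print("Crew finished with result:", result)
        return result

    # ... agents, tasks, and crew definitions as usual
```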
## Conclusion
Before and after kickoff hooks in CrewAI offer powerful ways to interact with the lifecycle of a crew's execution. By understanding and utilizing these hooks, you can greatly enhance the robustness and flexibility of your AI agents.

@@ -32,6 +32,7 @@ LiteLLM supports a wide range of providers, including but not limited to:
- Cloudflare Workers AI
- DeepInfra
- Groq
- [NVIDIA NIMs](https://docs.api.nvidia.com/nim/reference/models-1)
- And many more!
For a complete and up-to-date list of supported providers, please refer to the [LiteLLM Providers documentation](https://docs.litellm.ai/docs/providers).
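In practice, a provider is typically selected with a LiteLLM-style `<provider>/<model>` identifier. A minimal sketch (the specific model names below are assumptions; check each provider's documentation for current identifiers):
```python Code
from crewai import Agent, LLM

# The "<provider>/" prefix routes the request through LiteLLM
groq_llm = LLM(model="groq/llama-3.1-8b-instant")
nim_llm = LLM(model="nvidia_nim/meta/llama3-70b-instruct")

agent = Agent(
    role="Researcher",
    goal="Summarize the latest AI developments",
    backstory="An analyst who tracks AI news.",
    llm=groq_llm,
)
```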
@@ -125,10 +126,10 @@ You can connect to OpenAI-compatible LLMs using either environment variables or
</Tab>
<Tab title="Using LLM Class Attributes">
    <CodeGroup>
    ```python Code
    llm = LLM(
        model="custom-model-name",
        api_key="your-api-key",
        base_url="https://api.your-provider.com/v1"
    )
    agent = Agent(llm=llm, ...)
@@ -179,4 +180,4 @@ This is particularly useful when working with OpenAI-compatible APIs or when you
## Conclusion
By leveraging LiteLLM, CrewAI offers seamless integration with a vast array of LLMs. This flexibility allows you to choose the most suitable model for your specific needs, whether you prioritize performance, cost-efficiency, or local deployment. Remember to consult the [LiteLLM documentation](https://docs.litellm.ai/docs/) for the most up-to-date information on supported models and configuration options.

@@ -0,0 +1,181 @@
---
title: Agent Monitoring with OpenLIT
description: Quickly start monitoring your Agents in just a single line of code with OpenTelemetry.
icon: magnifying-glass-chart
---
# OpenLIT Overview
[OpenLIT](https://github.com/openlit/openlit?src=crewai-docs) is an open-source tool that makes it simple to monitor the performance of AI agents, LLMs, VectorDBs, and GPUs with just **one** line of code.
It provides OpenTelemetry-native tracing and metrics to track important parameters like cost, latency, interactions and task sequences.
This setup enables you to track hyperparameters and monitor for performance issues, helping you find ways to enhance and fine-tune your agents over time.
<Frame caption="OpenLIT Dashboard">
<img src="/images/openlit1.png" alt="Overview Agent usage including cost and tokens" />
<img src="/images/openlit2.png" alt="Overview of agent otel traces and metrics" />
<img src="/images/openlit3.png" alt="Overview of agent traces in details" />
</Frame>
### Features
- **Analytics Dashboard**: Monitor your agents' health and performance with detailed dashboards that track metrics, costs, and user interactions.
- **OpenTelemetry-native Observability SDK**: Vendor-neutral SDKs to send traces and metrics to your existing observability tools like Grafana, DataDog and more.
- **Cost Tracking for Custom and Fine-Tuned Models**: Tailor cost estimations for specific models using custom pricing files for precise budgeting.
- **Exceptions Monitoring Dashboard**: Quickly spot and resolve issues by tracking common exceptions and errors with a monitoring dashboard.
- **Compliance and Security**: Detect potential threats such as profanity and PII leaks.
- **Prompt Injection Detection**: Identify potential code injection and secret leaks.
- **API Keys and Secrets Management**: Securely handle your LLM API keys and secrets centrally, avoiding insecure practices.
- **Prompt Management**: Manage and version Agent prompts using PromptHub for consistent and easy access across Agents.
- **Model Playground**: Test and compare different models for your CrewAI agents before deployment.
## Setup Instructions
<Steps>
<Step title="Deploy OpenLIT">
<Steps>
<Step title="Git Clone OpenLIT Repository">
```shell
git clone git@github.com:openlit/openlit.git
```
</Step>
<Step title="Start Docker Compose">
From the root directory of the [OpenLIT Repo](https://github.com/openlit/openlit), run the following command:
```shell
docker compose up -d
```
</Step>
</Steps>
</Step>
<Step title="Install OpenLIT SDK">
```shell
pip install openlit
```
</Step>
<Step title="Initialize OpenLIT in Your Application">
Add the following two lines to your application code:
<Tabs>
<Tab title="Setup using function arguments">
```python
import openlit
openlit.init(otlp_endpoint="http://127.0.0.1:4318")
```
Example Usage for monitoring a CrewAI Agent:
```python
from crewai import Agent, Task, Crew, Process
import openlit

openlit.init(disable_metrics=True)

# Define your agents
researcher = Agent(
    role="Researcher",
    goal="Conduct thorough research and analysis on AI and AI agents",
    backstory="You're an expert researcher, specialized in technology, software engineering, AI, and startups. You work as a freelancer and are currently researching for a new client.",
    allow_delegation=False,
    llm='command-r'
)

# Define your task
task = Task(
    description="Generate a list of 5 interesting ideas for an article, then write one captivating paragraph for each idea that showcases the potential of a full article on this topic. Return the list of ideas with their paragraphs and your notes.",
    expected_output="5 bullet points, each with a paragraph and accompanying notes.",
)

# Define the manager agent
manager = Agent(
    role="Project Manager",
    goal="Efficiently manage the crew and ensure high-quality task completion",
    backstory="You're an experienced project manager, skilled in overseeing complex projects and guiding teams to success. Your role is to coordinate the efforts of the crew members, ensuring that each task is completed on time and to the highest standard.",
    allow_delegation=True,
    llm='command-r'
)

# Instantiate your crew with a custom manager
crew = Crew(
    agents=[researcher],
    tasks=[task],
    manager_agent=manager,
    process=Process.hierarchical,
)

# Start the crew's work
result = crew.kickoff()
print(result)
```
</Tab>
<Tab title="Setup using Environment Variables">
Add the following two lines to your application code:
```python
import openlit
openlit.init()
```
Run the following command to configure the OTEL export endpoint:
```shell
export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"
```
Example Usage for monitoring a CrewAI Async Agent:
```python
import asyncio

from crewai import Crew, Agent, Task
import openlit

openlit.init(otlp_endpoint="http://127.0.0.1:4318")

# Create an agent with code execution enabled
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True,
    llm="command-r"
)

# Create a task that requires code execution
data_analysis_task = Task(
    description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants.",
)

# Create a crew and add the task
analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task]
)

# Async function to kickoff the crew asynchronously
async def async_crew_execution():
    result = await analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
    print("Crew Result:", result)

# Run the async function
asyncio.run(async_crew_execution())
```
</Tab>
</Tabs>
Refer to OpenLIT [Python SDK repository](https://github.com/openlit/openlit/tree/main/sdk/python) for more advanced configurations and use cases.
</Step>
<Step title="Visualize and Analyze">
With the Agent Observability data now being collected and sent to OpenLIT, the next step is to visualize and analyze this data to get insights into your Agent's performance, behavior, and identify areas of improvement.
Just head over to OpenLIT at `127.0.0.1:3000` in your browser to start exploring. You can log in using the default credentials:
- **Email**: `user@openlit.io`
- **Password**: `openlituser`
<Frame caption="OpenLIT Dashboard">
<img src="/images/openlit1.png" alt="Overview Agent usage including cost and tokens" />
<img src="/images/openlit2.png" alt="Overview of agent otel traces and metrics" />
</Frame>
</Step>
</Steps>

New binary files (not shown): docs/images/openlit1.png (390 KiB), docs/images/openlit2.png (422 KiB), docs/images/openlit3.png (799 KiB).
@@ -1,128 +1,145 @@
---
title: Installation
-description:
+description: Get started with CrewAI - Install, configure, and build your first AI crew
icon: wrench
---
-This guide will walk you through the installation process for CrewAI and its dependencies.
-CrewAI is a flexible and powerful AI framework that enables you to create and manage AI agents, tools, and tasks efficiently.
-Let's get started! 🚀
-<Tip>
-Make sure you have `Python >=3.10 <=3.13` installed on your system before you proceed.
-</Tip>
+<Note>
+**Python Version Requirements**
+CrewAI requires `Python >=3.10 and <=3.13`. Here's how to check your version:
+```bash
+python3 --version
+```
+If you need to update Python, visit [python.org/downloads](https://python.org/downloads)
+</Note>
+# Installing CrewAI
+CrewAI is a flexible and powerful AI framework that enables you to create and manage AI agents, tools, and tasks efficiently.
+Let's get you set up! 🚀
<Steps>
<Step title="Install CrewAI">
-Install the main CrewAI package with the following command:
-<CodeGroup>
-```shell Terminal
-pip install crewai
-```
-</CodeGroup>
-You can also install the main CrewAI package and the tools package that include a series of helpful tools for your agents:
-<CodeGroup>
-```shell Terminal
-pip install 'crewai[tools]'
-```
-</CodeGroup>
-Alternatively, you can also use:
-<CodeGroup>
-```shell Terminal
-pip install crewai crewai-tools
-```
-</CodeGroup>
+Install CrewAI with all recommended tools using either method:
+```shell Terminal
+pip install 'crewai[tools]'
+```
+or
+```shell Terminal
+pip install crewai crewai-tools
+```
+<Note>
+Both methods install the core package and additional tools needed for most use cases.
+</Note>
</Step>
-<Step title="Upgrade CrewAI">
-To upgrade CrewAI and CrewAI Tools to the latest version, run the following command:
-<CodeGroup>
-```shell Terminal
-pip install --upgrade crewai crewai-tools
-```
-</CodeGroup>
-<Note>
-1. If you're using an older version of CrewAI, you may receive a warning about using `Poetry` for dependency management.
-![Error from older versions](./images/crewai-run-poetry-error.png)
-2. In this case, you'll need to run the command below to update your project.
-This command will migrate your project to use [UV](https://github.com/astral-sh/uv) and update the necessary files.
-```shell Terminal
-crewai update
-```
-3. After running the command above, you should see the following output:
-![Successfully migrated to UV](./images/crewai-update.png)
-4. You're all set! You can now proceed to the next step! 🎉
-</Note>
+<Step title="Upgrade CrewAI (Existing Installations Only)">
+If you have an older version of CrewAI installed, you can upgrade it:
+```shell Terminal
+pip install --upgrade crewai crewai-tools
+```
+<Warning>
+If you see a Poetry-related warning, you'll need to migrate to our new dependency manager:
+```shell Terminal
+crewai update
+```
+This will update your project to use [UV](https://github.com/astral-sh/uv), our new faster dependency manager.
+</Warning>
+<Note>
+Skip this step if you're doing a fresh installation.
+</Note>
</Step>
-<Step title="Verify the installation">
-To verify that `crewai` and `crewai-tools` are installed correctly, run the following command:
-<CodeGroup>
-```shell Terminal
-pip freeze | grep crewai
-```
-</CodeGroup>
-You should see the version number of `crewai` and `crewai-tools`.
-<CodeGroup>
-```markdown Version
-crewai==X.X.X
-crewai-tools==X.X.X
-```
-</CodeGroup>
-If you see the version number, then the installation was successful! 🎉
+<Step title="Verify Installation">
+Check your installed versions:
+```shell Terminal
+pip freeze | grep crewai
+```
+You should see something like:
+```markdown Output
+crewai==X.X.X
+crewai-tools==X.X.X
+```
+<Check>Installation successful! You're ready to create your first crew.</Check>
</Step>
</Steps>
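You can also check the installed version from Python itself (a quick sanity check; `__version__` is exported by the package):
```python Code
import crewai

print(crewai.__version__)  # e.g. 0.86.0
```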
-## Create a new CrewAI project
-The next step is to create a new CrewAI project.
-We recommend using the YAML Template scaffolding to get started as it provides a structured approach to defining agents and tasks.
+# Creating a New Project
+<Info>
+We recommend using the YAML Template scaffolding for a structured approach to defining agents and tasks.
+</Info>
<Steps>
-<Step title="Create a new CrewAI project using the YAML Template Configuration">
-To create a new CrewAI project, run the following CLI (Command Line Interface) command:
-<CodeGroup>
-```shell Terminal
-crewai create crew <project_name>
-```
-</CodeGroup>
-This command creates a new project folder with the following structure:
-| File/Directory | Description |
-|:---|:---|
-| `my_project/` | Root directory of the project |
-| `.gitignore` | Specifies files and directories to ignore in Git |
-| `pyproject.toml` | Project configuration and dependencies |
-| `README.md` | Project documentation |
-| `.env` | Environment variables |
-| `src/` | Source code directory |
-| `src/my_project/` | Main application package |
-| `__init__.py` | Marks the directory as a Python package |
-| `main.py` | Main application script |
-| `crew.py` | Crew-related functionalities |
-| `tools/` | Custom tools directory (`custom_tool.py`, `__init__.py`) |
-| `config/` | Configuration files directory (`agents.yaml`, `tasks.yaml`) |
-You can now start developing your crew by editing the files in the `src/my_project` folder.
-The `main.py` file is the entry point of the project, the `crew.py` file is where you define your crew, the `agents.yaml` file is where you define your agents,
-and the `tasks.yaml` file is where you define your tasks.
+<Step title="Generate Project Structure">
+Run the CrewAI CLI command:
+```shell Terminal
+crewai create crew <project_name>
+```
+This creates a new project with the following structure:
+<Frame>
+```
+my_project/
+├── .gitignore
+├── pyproject.toml
+├── README.md
+├── .env
+└── src/
+    └── my_project/
+        ├── __init__.py
+        ├── main.py
+        ├── crew.py
+        ├── tools/
+        │   ├── custom_tool.py
+        │   └── __init__.py
+        └── config/
+            ├── agents.yaml
+            └── tasks.yaml
+```
+</Frame>
</Step>
-<Step title="Customize your project">
-To customize your project, you can:
-- Modify `src/my_project/config/agents.yaml` to define your agents.
-- Modify `src/my_project/config/tasks.yaml` to define your tasks.
-- Modify `src/my_project/crew.py` to add your own logic, tools, and specific arguments.
-- Modify `src/my_project/main.py` to add custom inputs for your agents and tasks.
-- Add your environment variables into the `.env` file.
+<Step title="Customize Your Project">
+Your project will contain these essential files:
+| File | Purpose |
+| --- | --- |
+| `agents.yaml` | Define your AI agents and their roles |
+| `tasks.yaml` | Set up agent tasks and workflows |
+| `.env` | Store API keys and environment variables |
+| `main.py` | Project entry point and execution flow |
+| `crew.py` | Crew orchestration and coordination |
+| `tools/` | Directory for custom agent tools |
+<Tip>
+Start by editing `agents.yaml` and `tasks.yaml` to define your crew's behavior.
+Keep sensitive information like API keys in `.env`.
+</Tip>
</Step>
</Steps>
-## Next steps
-Now that you have installed `crewai` and `crewai-tools`, you're ready to spin up your first crew!
-- 👨‍💻 Build your first agent with CrewAI by following the [Quickstart](/quickstart) guide.
-- 💬 Join the [Community](https://community.crewai.com) to get help and share your feedback.
+## Next Steps
+<CardGroup cols={2}>
+  <Card
+    title="Build Your First Agent"
+    icon="code"
+    href="/quickstart"
+  >
+    Follow our quickstart guide to create your first CrewAI agent and get hands-on experience.
+  </Card>
+  <Card
+    title="Join the Community"
+    icon="comments"
+    href="https://community.crewai.com"
+  >
+    Connect with other developers, get help, and share your CrewAI experiences.
+  </Card>
+</CardGroup>

@@ -1,49 +1,85 @@
---
title: Introduction
-description: Welcome to CrewAI docs!
+description: Build AI agent teams that work together to tackle complex tasks
icon: handshake
---
# What is CrewAI?
-**CrewAI is a cutting-edge Python framework for orchestrating role-playing, autonomous AI agents.**
-By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
-<Frame caption="CrewAI Mindmap">
-  <img src="crewAI-mindmap.png" alt="CrewAI Mindmap" />
-</Frame>
-## Why CrewAI?
-- 🤼‍♀️ **Role-Playing Agents**: Agents can take on different roles and personas to better understand and interact with complex systems.
-- 🤖 **Autonomous Decision Making**: Agents can make decisions autonomously based on the given context and available tools.
-- 🤝 **Seamless Collaboration**: Agents can work together seamlessly, sharing information and resources to achieve common goals.
-- 🧠 **Complex Task Tackling**: CrewAI is designed to tackle complex tasks, such as multi-step workflows, decision making, and problem solving.
-# Get Started with CrewAI
+**CrewAI is a cutting-edge framework for orchestrating autonomous AI agents.**
+CrewAI enables you to create AI teams where each agent has specific roles, tools, and goals, working together to accomplish complex tasks.
+Think of it as assembling your dream team - each member (agent) brings unique skills and expertise, collaborating seamlessly to achieve your objectives.
+## How CrewAI Works
+<Note>
+Just like a company has departments (Sales, Engineering, Marketing) working together under leadership to achieve business goals, CrewAI helps you create an organization of AI agents with specialized roles collaborating to accomplish complex tasks.
+</Note>
+<Frame caption="CrewAI Framework Overview">
+  <img src="crewAI-mindmap.png" alt="CrewAI Framework Overview" />
+</Frame>
| Component | Description | Key Features |
|:----------|:-----------:|:------------|
| **Crew** | The top-level organization | • Manages AI agent teams<br/>• Oversees workflows<br/>• Ensures collaboration<br/>• Delivers outcomes |
| **AI Agents** | Specialized team members | • Have specific roles (researcher, writer)<br/>• Use designated tools<br/>• Can delegate tasks<br/>• Make autonomous decisions |
| **Process** | Workflow management system | • Defines collaboration patterns<br/>• Controls task assignments<br/>• Manages interactions<br/>• Ensures efficient execution |
| **Tasks** | Individual assignments | • Have clear objectives<br/>• Use specific tools<br/>• Feed into larger process<br/>• Produce actionable results |
### How It All Works Together
1. The **Crew** organizes the overall operation
2. **AI Agents** work on their specialized tasks
3. The **Process** ensures smooth collaboration
4. **Tasks** get completed to achieve the goal
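As a minimal sketch of these pieces fitting together (the role, goal, and task text below are illustrative):
```python Code
from crewai import Agent, Crew, Process, Task

# An AI Agent with a specialized role
researcher = Agent(
    role="Researcher",
    goal="Summarize the latest AI developments",
    backstory="An expert analyst who tracks AI news.",
)

# A Task assigned to that agent
summary_task = Task(
    description="Summarize this week's most important AI developments.",
    expected_output="A short bullet-point summary.",
    agent=researcher,
)

# The Crew ties agents, tasks, and the Process together
crew = Crew(agents=[researcher], tasks=[summary_task], process=Process.sequential)
result = crew.kickoff()
```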
## Key Features
<CardGroup cols={2}>
<Card title="Role-Based Agents" icon="users">
Create specialized agents with defined roles, expertise, and goals - from researchers to analysts to writers
</Card>
<Card title="Flexible Tools" icon="screwdriver-wrench">
Equip agents with custom tools and APIs to interact with external services and data sources
</Card>
<Card title="Intelligent Collaboration" icon="people-arrows">
Agents work together, sharing insights and coordinating tasks to achieve complex objectives
</Card>
<Card title="Task Management" icon="list-check">
Define sequential or parallel workflows, with agents automatically handling task dependencies
</Card>
</CardGroup>
## Why Choose CrewAI?
- 🧠 **Autonomous Operation**: Agents make intelligent decisions based on their roles and available tools
- 📝 **Natural Interaction**: Agents communicate and collaborate like human team members
- 🛠️ **Extensible Design**: Easy to add new tools, roles, and capabilities
- 🚀 **Production Ready**: Built for reliability and scalability in real-world applications
<CardGroup cols={3}>
-  <Card
-    title="Quickstart"
-    color="#F3A78B"
-    href="quickstart"
-    icon="terminal"
-    iconType="solid"
-  >
-    Getting started with CrewAI
-  </Card>
-  <Card
-    title="Join the Community"
-    color="#F3A78B"
-    href="https://community.crewai.com"
-    icon="comment-question"
-    iconType="duotone"
-  >
-    Join the CrewAI community and get help with your project!
-  </Card>
-</CardGroup>
-## Next Step
-- [Install CrewAI](/installation) to get started with your first agent.
+  <Card
+    title="Install CrewAI"
+    icon="wrench"
+    href="/installation"
+  >
+    Get started with CrewAI in your development environment.
+  </Card>
+  <Card
+    title="Quick Start"
+    icon="bolt"
+    href="/quickstart"
+  >
+    Follow our quickstart guide to create your first CrewAI agent and get hands-on experience.
+  </Card>
+  <Card
+    title="Join the Community"
+    icon="comments"
+    href="https://community.crewai.com"
+  >
+    Connect with other developers, get help, and share your CrewAI experiences.
+  </Card>
+</CardGroup>

@@ -68,6 +68,7 @@
"concepts/tasks", "concepts/tasks",
"concepts/crews", "concepts/crews",
"concepts/flows", "concepts/flows",
"concepts/knowledge",
"concepts/llms", "concepts/llms",
"concepts/processes", "concepts/processes",
"concepts/collaboration", "concepts/collaboration",
@@ -98,7 +99,8 @@
"how-to/replay-tasks-from-latest-crew-kickoff", "how-to/replay-tasks-from-latest-crew-kickoff",
"how-to/conditional-tasks", "how-to/conditional-tasks",
"how-to/agentops-observability", "how-to/agentops-observability",
"how-to/langtrace-observability" "how-to/langtrace-observability",
"how-to/openlit-observability"
] ]
}, },
{ {

@@ -8,7 +8,7 @@ icon: rocket
Let's create a simple crew that will help us `research` and `report` on the `latest AI developments` for a given topic or subject.
Before we proceed, make sure you have `crewai` and `crewai-tools` installed.
If you haven't installed them yet, you can do so by following the [installation guide](/installation).
Follow the steps below to get crewing! 🚣‍♂️
@@ -23,7 +23,7 @@ Follow the steps below to get crewing! 🚣‍♂️
```
</CodeGroup>
</Step>
<Step title="Modify your `agents.yaml` file">
<Tip>
You can also modify the agents as needed to fit your use case or copy and paste as is to your project.
Any variable interpolated in your `agents.yaml` and `tasks.yaml` files like `{topic}` will be replaced by the value of the variable in the `main.py` file.
@@ -39,7 +39,7 @@ Follow the steps below to get crewing! 🚣‍♂️
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.
reporting_analyst:
  role: >
    {topic} Reporting Analyst
@@ -51,7 +51,7 @@ Follow the steps below to get crewing! 🚣‍♂️
    it easy for others to understand and act on the information you provide.
```
</Step>
<Step title="Modify your `tasks.yaml` file">
```yaml tasks.yaml
# src/latest_ai_development/config/tasks.yaml
research_task:
@@ -73,8 +73,8 @@ Follow the steps below to get crewing! 🚣‍♂️
  agent: reporting_analyst
  output_file: report.md
```
</Step>
<Step title="Modify your `crew.py` file">
```python crew.py
# src/latest_ai_development/crew.py
from crewai import Agent, Crew, Process, Task
@@ -121,10 +121,34 @@ Follow the steps below to get crewing! 🚣‍♂️
            tasks=self.tasks,  # Automatically created by the @task decorator
            process=Process.sequential,
            verbose=True,
        )
```
</Step>
<Step title="Feel free to pass custom inputs to your crew"> <Step title="[Optional] Add before and after crew functions">
```python crew.py
# src/latest_ai_development/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task, before_kickoff, after_kickoff
from crewai_tools import SerperDevTool

@CrewBase
class LatestAiDevelopmentCrew():
    """LatestAiDevelopment crew"""

    @before_kickoff
    def before_kickoff_function(self, inputs):
        print(f"Before kickoff function with inputs: {inputs}")
        return inputs  # You can return the inputs or modify them as needed

    @after_kickoff
    def after_kickoff_function(self, result):
        print(f"After kickoff function with result: {result}")
        return result  # You can return the result or modify it as needed

    # ... remaining code
</Step>
<Step title="Feel free to pass custom inputs to your crew">
For example, you can pass the `topic` input to your crew to customize the research and reporting.
```python main.py
#!/usr/bin/env python
@@ -237,14 +261,14 @@ Follow the steps below to get crewing! 🚣‍♂️
### Note on Consistency in Naming
The names you use in your YAML files (`agents.yaml` and `tasks.yaml`) should match the method names in your Python code.
For example, you can reference the agent for specific tasks from the `tasks.yaml` file.
This naming consistency allows CrewAI to automatically link your configurations with your code; otherwise, your task won't recognize the reference properly.
#### Example References
<Tip>
Note how we use the same name for the agent in the `agents.yaml` (`email_summarizer`) file as the method name in the `crew.py` (`email_summarizer`) file.
</Tip>
```yaml agents.yaml
email_summarizer:
@@ -281,6 +305,8 @@ Use the annotations to properly reference the agent and task in the `crew.py` fi
* `@task`
* `@crew`
* `@tool`
* `@before_kickoff`
* `@after_kickoff`
* `@callback`
* `@output_json`
* `@output_pydantic`
@@ -304,7 +330,7 @@ def email_summarizer_task(self) -> Task:
<Tip>
In addition to the [sequential process](../how-to/sequential-process), you can use the [hierarchical process](../how-to/hierarchical-process),
which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results.
You can learn more about the core concepts [here](/concepts).
</Tip>
@@ -323,11 +349,28 @@ Replace `<task_id>` with the ID of the task you want to replay.
If you need to reset the memory of your crew before running it again, you can do so by calling the reset memory feature:
```shell
-crewai reset-memory
+crewai reset-memories --all
```
This will clear the crew's memory, allowing for a fresh start.
## Deploying Your Project
-The easiest way to deploy your crew is through [CrewAI Enterprise](http://app.crewai.com/), where you can deploy your crew in a few clicks.
+The easiest way to deploy your crew is through CrewAI Enterprise, where you can deploy your crew in a few clicks.
<CardGroup cols={2}>
<Card
title="Deploy on Enterprise"
icon="rocket"
href="http://app.crewai.com"
>
Get started with CrewAI Enterprise and deploy your crew in a production environment with just a few clicks.
</Card>
<Card
title="Join the Community"
icon="comments"
href="https://community.crewai.com"
>
Join our open source community to discuss ideas, share your projects, and connect with other CrewAI developers.
</Card>
</CardGroup>

@@ -129,7 +129,6 @@ nav:
- Processes: 'core-concepts/Processes.md'
- Crews: 'core-concepts/Crews.md'
- Collaboration: 'core-concepts/Collaboration.md'
- Pipeline: 'core-concepts/Pipeline.md'
- Training: 'core-concepts/Training-Crew.md'
- Memory: 'core-concepts/Memory.md'
- Planning: 'core-concepts/Planning.md'
@@ -152,6 +151,7 @@ nav:
- Conditional Tasks: 'how-to/Conditional-Tasks.md'
- Agent Monitoring with AgentOps: 'how-to/AgentOps-Observability.md'
- Agent Monitoring with LangTrace: 'how-to/Langtrace-Observability.md'
- Agent Monitoring with OpenLIT: 'how-to/openlit-Observability.md'
- Tools Docs:
- Browserbase Web Loader: 'tools/BrowserbaseLoadTool.md'
- Code Docs RAG Search: 'tools/CodeDocsSearchTool.md'

poetry.lock (generated): diff of 7,507 changed lines suppressed because it is too large.

@@ -1,6 +1,6 @@
[project]
name = "crewai"
-version = "0.80.0"
+version = "0.86.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<=3.13"
@@ -9,14 +9,13 @@ authors = [
]
dependencies = [
    "pydantic>=2.4.2",
    "langchain>=0.2.16",
    "openai>=1.13.3",
    "opentelemetry-api>=1.22.0",
    "opentelemetry-sdk>=1.22.0",
    "opentelemetry-exporter-otlp-proto-http>=1.22.0",
    "instructor>=1.3.3",
    "regex>=2024.9.11",
-    "crewai-tools>=0.14.0",
+    "crewai-tools>=0.17.0",
    "click>=8.1.7",
    "python-dotenv>=1.0.0",
    "appdirs>=1.4.4",
@@ -29,6 +28,8 @@ dependencies = [
"tomli-w>=1.1.0", "tomli-w>=1.1.0",
"tomli>=2.0.2", "tomli>=2.0.2",
"chromadb>=0.5.18", "chromadb>=0.5.18",
"pdfplumber>=0.11.4",
"openpyxl>=3.1.5",
] ]
[project.urls] [project.urls]

@@ -5,9 +5,7 @@ from crewai.crew import Crew
from crewai.flow.flow import Flow
from crewai.knowledge.knowledge import Knowledge
from crewai.llm import LLM
from crewai.pipeline import Pipeline
from crewai.process import Process
from crewai.routers import Router
from crewai.task import Task

warnings.filterwarnings(
@@ -16,14 +14,12 @@ warnings.filterwarnings(
    category=UserWarning,
    module="pydantic.main",
)
-__version__ = "0.80.0"
+__version__ = "0.86.0"
__all__ = [
    "Agent",
    "Crew",
    "Process",
    "Task",
    "Pipeline",
    "Router",
    "LLM",
    "Flow",
    "Knowledge",

@@ -1,20 +1,25 @@
import os
import shutil
import subprocess
-from typing import Any, List, Literal, Optional, Union
+from typing import Any, Dict, List, Literal, Optional, Union
from pydantic import Field, InstanceOf, PrivateAttr, model_validator
from crewai.agents import CacheHandler
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.crew_agent_executor import CrewAgentExecutor
-from crewai.cli.constants import ENV_VARS
+from crewai.cli.constants import ENV_VARS, LITELLM_PARAMS
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.knowledge.utils.knowledge_utils import extract_knowledge_context
from crewai.llm import LLM
from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.task import Task
from crewai.tools import BaseTool
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.utilities import Converter, Prompts
from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE
from crewai.utilities.converter import generate_model_description
from crewai.utilities.token_counter_callback import TokenCalcHandler
from crewai.utilities.training_handler import CrewTrainingHandler
@@ -63,6 +68,7 @@ class Agent(BaseAgent):
allow_delegation: Whether the agent is allowed to delegate tasks to other agents. allow_delegation: Whether the agent is allowed to delegate tasks to other agents.
tools: Tools at agents disposal tools: Tools at agents disposal
step_callback: Callback to be executed after each step of the agent execution. step_callback: Callback to be executed after each step of the agent execution.
knowledge_sources: Knowledge sources for the agent.
""" """
_times_executed: int = PrivateAttr(default=0) _times_executed: int = PrivateAttr(default=0)
@@ -120,11 +126,23 @@ class Agent(BaseAgent):
default="safe", default="safe",
description="Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution).", description="Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution).",
) )
embedder_config: Optional[Dict[str, Any]] = Field(
default=None,
description="Embedder configuration for the agent.",
)
knowledge_sources: Optional[List[BaseKnowledgeSource]] = Field(
default=None,
description="Knowledge sources for the agent.",
)
_knowledge: Optional[Knowledge] = PrivateAttr(
default=None,
)
@model_validator(mode="after") @model_validator(mode="after")
def post_init_setup(self): def post_init_setup(self):
self._set_knowledge()
self.agent_ops_agent_name = self.role self.agent_ops_agent_name = self.role
unnacepted_attributes = [ unaccepted_attributes = [
"AWS_ACCESS_KEY_ID", "AWS_ACCESS_KEY_ID",
"AWS_SECRET_ACCESS_KEY", "AWS_SECRET_ACCESS_KEY",
"AWS_REGION_NAME", "AWS_REGION_NAME",
@@ -158,30 +176,16 @@ class Agent(BaseAgent):
for provider, env_vars in ENV_VARS.items(): for provider, env_vars in ENV_VARS.items():
if provider == set_provider: if provider == set_provider:
for env_var in env_vars: for env_var in env_vars:
if env_var["key_name"] in unnacepted_attributes:
continue
# Check if the environment variable is set # Check if the environment variable is set
if "key_name" in env_var: key_name = env_var.get("key_name")
env_value = os.environ.get(env_var["key_name"]) if key_name and key_name not in unaccepted_attributes:
env_value = os.environ.get(key_name)
if env_value: if env_value:
# Map key names containing "API_KEY" to "api_key" key_name = key_name.lower()
key_name = ( for pattern in LITELLM_PARAMS:
"api_key" if pattern in key_name:
if "API_KEY" in env_var["key_name"] key_name = pattern
else env_var["key_name"] break
)
# Map key names containing "API_BASE" to "api_base"
key_name = (
"api_base"
if "API_BASE" in env_var["key_name"]
else key_name
)
# Map key names containing "API_VERSION" to "api_version"
key_name = (
"api_version"
if "API_VERSION" in env_var["key_name"]
else key_name
)
llm_params[key_name] = env_value llm_params[key_name] = env_value
# Check for default values if the environment variable is not set # Check for default values if the environment variable is not set
elif env_var.get("default", False): elif env_var.get("default", False):
@@ -235,9 +239,24 @@ class Agent(BaseAgent):
self.cache_handler = CacheHandler() self.cache_handler = CacheHandler()
self.set_cache_handler(self.cache_handler) self.set_cache_handler(self.cache_handler)
def _set_knowledge(self):
try:
if self.knowledge_sources:
knowledge_agent_name = f"{self.role.replace(' ', '_')}"
if isinstance(self.knowledge_sources, list) and all(
isinstance(k, BaseKnowledgeSource) for k in self.knowledge_sources
):
self._knowledge = Knowledge(
sources=self.knowledge_sources,
embedder_config=self.embedder_config,
collection_name=knowledge_agent_name,
)
except (TypeError, ValueError) as e:
raise ValueError(f"Invalid Knowledge Configuration: {str(e)}")
def execute_task( def execute_task(
self, self,
task: Any, task: Task,
context: Optional[str] = None, context: Optional[str] = None,
tools: Optional[List[BaseTool]] = None, tools: Optional[List[BaseTool]] = None,
) -> str: ) -> str:
@@ -256,6 +275,22 @@ class Agent(BaseAgent):
task_prompt = task.prompt() task_prompt = task.prompt()
# If the task requires output in JSON or Pydantic format,
# append specific instructions to the task prompt to ensure
# that the final answer does not include any code block markers
if task.output_json or task.output_pydantic:
# Generate the schema based on the output format
if task.output_json:
# schema = json.dumps(task.output_json, indent=2)
schema = generate_model_description(task.output_json)
elif task.output_pydantic:
schema = generate_model_description(task.output_pydantic)
task_prompt += "\n" + self.i18n.slice("formatted_task_instructions").format(
output_format=schema
)
if context: if context:
task_prompt = self.i18n.slice("task_with_context").format( task_prompt = self.i18n.slice("task_with_context").format(
task=task_prompt, context=context task=task_prompt, context=context
@@ -273,17 +308,21 @@ class Agent(BaseAgent):
if memory.strip() != "": if memory.strip() != "":
task_prompt += self.i18n.slice("memory").format(memory=memory) task_prompt += self.i18n.slice("memory").format(memory=memory)
# Integrate the knowledge base if self._knowledge:
if self.crew and self.crew.knowledge: agent_knowledge_snippets = self._knowledge.query([task.prompt()])
knowledge_snippets = self.crew.knowledge.query([task.prompt()]) if agent_knowledge_snippets:
valid_snippets = [ agent_knowledge_context = extract_knowledge_context(
result["context"] agent_knowledge_snippets
for result in knowledge_snippets )
if result and result.get("context") if agent_knowledge_context:
] task_prompt += agent_knowledge_context
if valid_snippets:
formatted_knowledge = "\n".join(valid_snippets) if self.crew:
task_prompt += f"\n\nAdditional Information:\n{formatted_knowledge}" knowledge_snippets = self.crew.query_knowledge([task.prompt()])
if knowledge_snippets:
crew_knowledge_context = extract_knowledge_context(knowledge_snippets)
if crew_knowledge_context:
task_prompt += crew_knowledge_context
tools = tools or self.tools or [] tools = tools or self.tools or []
self.create_agent_executor(tools=tools, task=task) self.create_agent_executor(tools=tools, task=task)
@@ -399,7 +438,7 @@ class Agent(BaseAgent):
for tool in tools: for tool in tools:
if isinstance(tool, CrewAITool): if isinstance(tool, CrewAITool):
tools_list.append(tool.to_langchain()) tools_list.append(tool.to_structured_tool())
else: else:
tools_list.append(tool) tools_list.append(tool)
except ModuleNotFoundError: except ModuleNotFoundError:
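
The new `knowledge_sources` and `embedder_config` fields make knowledge a per-agent concern rather than a crew-only one. A hedged sketch of the wiring (the `StringKnowledgeSource` import path and the embedder dict shape follow the 0.86 knowledge docs and should be treated as assumptions; embeddings also need a provider API key in the environment):

```python
# Sketch only: per-agent knowledge as introduced by this hunk (crewai>=0.86.0).
from crewai import Agent, Crew, Process, Task
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# Each source must subclass BaseKnowledgeSource, or _set_knowledge() skips it.
policy = StringKnowledgeSource(
    content="Refunds are processed within 5 business days of approval."
)

support_agent = Agent(
    role="Support Analyst",  # also becomes the knowledge collection name
    goal="Answer customer questions accurately",
    backstory="You handle support escalations.",
    knowledge_sources=[policy],  # stored per-agent, not per-crew
    embedder_config={"provider": "openai"},  # optional; assumed config shape
)

task = Task(
    description="How long do refunds take?",
    expected_output="A one-sentence answer.",
    agent=support_agent,
)

crew = Crew(agents=[support_agent], tasks=[task], process=Process.sequential)
# result = crew.kickoff()
```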

View File

@@ -19,6 +19,7 @@ from crewai.agents.agent_builder.utilities.base_token_process import TokenProces
 from crewai.agents.cache.cache_handler import CacheHandler
 from crewai.agents.tools_handler import ToolsHandler
 from crewai.tools import BaseTool
+from crewai.tools.base_tool import Tool
 from crewai.utilities import I18N, Logger, RPMController
 from crewai.utilities.config import process_config
@@ -106,7 +107,7 @@ class BaseAgent(ABC, BaseModel):
         default=False,
         description="Enable agent to delegate and ask questions among each other.",
     )
-    tools: Optional[List[BaseTool]] = Field(
+    tools: Optional[List[Any]] = Field(
         default_factory=list, description="Tools at agents' disposal"
     )
     max_iter: Optional[int] = Field(
@@ -135,6 +136,35 @@ class BaseAgent(ABC, BaseModel):
     def process_model_config(cls, values):
         return process_config(values, cls)

+    @field_validator("tools")
+    @classmethod
+    def validate_tools(cls, tools: List[Any]) -> List[BaseTool]:
+        """Validate and process the tools provided to the agent.
+
+        This method ensures that each tool is either an instance of BaseTool
+        or an object with 'name', 'func', and 'description' attributes. If the
+        tool meets these criteria, it is processed and added to the list of
+        tools. Otherwise, a ValueError is raised.
+        """
+        processed_tools = []
+        for tool in tools:
+            if isinstance(tool, BaseTool):
+                processed_tools.append(tool)
+            elif (
+                hasattr(tool, "name")
+                and hasattr(tool, "func")
+                and hasattr(tool, "description")
+            ):
+                # Tool has the required attributes, create a Tool instance
+                processed_tools.append(Tool.from_langchain(tool))
+            else:
+                raise ValueError(
+                    f"Invalid tool type: {type(tool)}. "
+                    "Tool must be an instance of BaseTool or "
+                    "an object with 'name', 'func', and 'description' attributes."
+                )
+        return processed_tools
+
     @model_validator(mode="after")
     def validate_and_set_attributes(self):
         # Validate required fields
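
To make the acceptance rule concrete, here is a dependency-free mirror of `validate_tools`, for illustration only: the real validator returns `BaseTool` instances and wraps duck-typed matches with `Tool.from_langchain` rather than passing them through.

```python
# Standalone mirror of the validator's acceptance rule (illustrative only).
from types import SimpleNamespace
from typing import Any, List


def validate_tools(tools: List[Any]) -> List[Any]:
    processed = []
    for tool in tools:
        if hasattr(tool, "_run"):  # stand-in for isinstance(tool, BaseTool)
            processed.append(tool)
        elif all(hasattr(tool, a) for a in ("name", "func", "description")):
            processed.append(tool)  # real code wraps with Tool.from_langchain(tool)
        else:
            raise ValueError(
                f"Invalid tool type: {type(tool)}. Tool must be a BaseTool or "
                "expose 'name', 'func', and 'description' attributes."
            )
    return processed


duck_tool = SimpleNamespace(
    name="adder",
    description="Adds two integers.",
    func=lambda a, b: a + b,
)

print(validate_tools([duck_tool]))  # accepted: exposes name/func/description
# validate_tools(["not a tool"])    # would raise ValueError
```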

View File

@@ -3,16 +3,15 @@ from typing import TYPE_CHECKING, Optional

 from crewai.memory.entity.entity_memory_item import EntityMemoryItem
 from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
+from crewai.utilities import I18N
 from crewai.utilities.converter import ConverterError
 from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
-from crewai.utilities import I18N
 from crewai.utilities.printer import Printer

 if TYPE_CHECKING:
+    from crewai.agents.agent_builder.base_agent import BaseAgent
     from crewai.crew import Crew
     from crewai.task import Task
-    from crewai.agents.agent_builder.base_agent import BaseAgent


 class CrewAgentExecutorMixin:
@@ -100,14 +99,19 @@ class CrewAgentExecutorMixin:
             print(f"Failed to add to long term memory: {e}")
             pass

-    def _ask_human_input(self, final_answer: dict) -> str:
+    def _ask_human_input(self, final_answer: str) -> str:
        """Prompt human input for final decision making."""
        self._printer.print(
            content=f"\033[1m\033[95m ## Final Result:\033[00m \033[92m{final_answer}\033[00m"
        )
        self._printer.print(
-            content="\n\n=====\n## Please provide feedback on the Final Result and the Agent's actions:",
+            content=(
+                "\n\n=====\n"
+                "## Please provide feedback on the Final Result and the Agent's actions. "
+                "Respond with 'looks good' or a similar phrase when you're satisfied.\n"
+                "=====\n"
+            ),
            color="bold_yellow",
        )
        return input()
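
For context, this prompt only surfaces when a task opts into human review. A hedged sketch of triggering it (assumes a configured LLM, e.g. `OPENAI_API_KEY` in the environment):

```python
# Sketch only: a task with human_input=True pauses after the final answer
# and reads feedback from stdin via _ask_human_input.
from crewai import Agent, Crew, Task

writer = Agent(
    role="Writer",
    goal="Draft short copy",
    backstory="You write crisp one-liners.",
)

task = Task(
    description="Write a one-line tagline for a coffee shop.",
    expected_output="A single tagline.",
    agent=writer,
    human_input=True,  # triggers the feedback prompt shown above
)

crew = Crew(agents=[writer], tasks=[task])
# crew.kickoff()  # reply "looks good" at the prompt to accept the result
```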

View File

@@ -1,5 +1,6 @@
 import json
 import re
+from dataclasses import dataclass
 from typing import Any, Dict, List, Union

 from crewai.agents.agent_builder.base_agent import BaseAgent
@@ -12,9 +13,10 @@ from crewai.agents.parser import (
     OutputParserException,
 )
 from crewai.agents.tools_handler import ToolsHandler
+from crewai.tools.base_tool import BaseTool
 from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
 from crewai.utilities import I18N, Printer
-from crewai.utilities.constants import TRAINING_DATA_FILE
+from crewai.utilities.constants import MAX_LLM_RETRY, TRAINING_DATA_FILE
 from crewai.utilities.exceptions.context_window_exceeding_exception import (
     LLMContextLengthExceededException,
 )
@@ -22,6 +24,12 @@ from crewai.utilities.logger import Logger
 from crewai.utilities.training_handler import CrewTrainingHandler


+@dataclass
+class ToolResult:
+    result: Any
+    result_as_answer: bool
+
+
 class CrewAgentExecutor(CrewAgentExecutorMixin):
     _logger: Logger = Logger()
@@ -33,7 +41,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         agent: BaseAgent,
         prompt: dict[str, str],
         max_iter: int,
-        tools: List[Any],
+        tools: List[BaseTool],
         tools_names: str,
         stop_words: List[str],
         tools_description: str,
@@ -70,7 +78,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         self.iterations = 0
         self.log_error_after = 3
         self.have_forced_answer = False
-        self.name_to_tool_map = {tool.name: tool for tool in self.tools}
+        self.tool_name_to_tool_map: Dict[str, BaseTool] = {
+            tool.name: tool for tool in self.tools
+        }
         if self.llm.stop:
             self.llm.stop = list(set(self.llm.stop + self.stop))
         else:
@@ -80,7 +90,6 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         if "system" in self.prompt:
             system_prompt = self._format_prompt(self.prompt.get("system", ""), inputs)
             user_prompt = self._format_prompt(self.prompt.get("user", ""), inputs)
-
             self.messages.append(self._format_msg(system_prompt, role="system"))
             self.messages.append(self._format_msg(user_prompt))
         else:
@@ -93,17 +102,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         formatted_answer = self._invoke_loop()

         if self.ask_for_human_input:
-            human_feedback = self._ask_human_input(formatted_answer.output)
-            if self.crew and self.crew._train:
-                self._handle_crew_training_output(formatted_answer, human_feedback)
-
-            # Making sure we only ask for it once, so disabling for the next thought loop
-            self.ask_for_human_input = False
-            self.messages.append(self._format_msg(f"Feedback: {human_feedback}"))
-            formatted_answer = self._invoke_loop()
-
-        if self.crew and self.crew._train:
-            self._handle_crew_training_output(formatted_answer)
+            formatted_answer = self._handle_human_feedback(formatted_answer)

         self._create_short_term_memory(formatted_answer)
         self._create_long_term_memory(formatted_answer)
         return {"output": formatted_answer.output}
@@ -140,9 +140,17 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
                 formatted_answer = self._format_answer(answer)

                 if isinstance(formatted_answer, AgentAction):
-                    action_result = self._use_tool(formatted_answer)
-                    formatted_answer.text += f"\nObservation: {action_result}"
-                    formatted_answer.result = action_result
+                    tool_result = self._execute_tool_and_check_finality(
+                        formatted_answer
+                    )
+                    formatted_answer.text += f"\nObservation: {tool_result.result}"
+                    formatted_answer.result = tool_result.result
+                    if tool_result.result_as_answer:
+                        return AgentFinish(
+                            thought="",
+                            output=tool_result.result,
+                            text=formatted_answer.text,
+                        )
                 self._show_logs(formatted_answer)

                 if self.step_callback:
@@ -239,7 +247,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
             content=f"\033[95m## Final Answer:\033[00m \033[92m\n{formatted_answer.output}\033[00m\n\n"
         )

-    def _use_tool(self, agent_action: AgentAction) -> Any:
+    def _execute_tool_and_check_finality(self, agent_action: AgentAction) -> ToolResult:
         tool_usage = ToolUsage(
             tools_handler=self.tools_handler,
             tools=self.tools,
@@ -255,19 +263,25 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):

         if isinstance(tool_calling, ToolUsageErrorException):
             tool_result = tool_calling.message
+            return ToolResult(result=tool_result, result_as_answer=False)
         else:
             if tool_calling.tool_name.casefold().strip() in [
-                name.casefold().strip() for name in self.name_to_tool_map
+                name.casefold().strip() for name in self.tool_name_to_tool_map
             ] or tool_calling.tool_name.casefold().replace("_", " ") in [
-                name.casefold().strip() for name in self.name_to_tool_map
+                name.casefold().strip() for name in self.tool_name_to_tool_map
             ]:
                 tool_result = tool_usage.use(tool_calling, agent_action.text)
+                tool = self.tool_name_to_tool_map.get(tool_calling.tool_name)
+                if tool:
+                    return ToolResult(
+                        result=tool_result, result_as_answer=tool.result_as_answer
+                    )
             else:
                 tool_result = self._i18n.errors("wrong_tool_name").format(
                     tool=tool_calling.tool_name,
                     tools=", ".join([tool.name.casefold() for tool in self.tools]),
                 )
-        return tool_result
+        return ToolResult(result=tool_result, result_as_answer=False)

     def _summarize_messages(self) -> None:
         messages_groups = []
@@ -302,16 +316,14 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):

     def _handle_context_length(self) -> None:
         if self.respect_context_window:
-            self._logger.log(
-                "debug",
-                "Context length exceeded. Summarizing content to fit the model context window.",
+            self._printer.print(
+                content="Context length exceeded. Summarizing content to fit the model context window.",
                 color="yellow",
             )
             self._summarize_messages()
         else:
-            self._logger.log(
-                "debug",
-                "Context length exceeded. Consider using smaller text or RAG tools from crewai_tools.",
+            self._printer.print(
+                content="Context length exceeded. Consider using smaller text or RAG tools from crewai_tools.",
                 color="red",
             )
             raise SystemExit(
@@ -333,20 +345,18 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         if self.crew is not None and hasattr(self.crew, "_train_iteration"):
             train_iteration = self.crew._train_iteration
             if agent_id in training_data and isinstance(train_iteration, int):
-                training_data[agent_id][train_iteration]["improved_output"] = (
-                    result.output
-                )
+                training_data[agent_id][train_iteration][
+                    "improved_output"
+                ] = result.output
                 training_handler.save(training_data)
             else:
-                self._logger.log(
-                    "error",
-                    "Invalid train iteration type or agent_id not in training data.",
+                self._printer.print(
+                    content="Invalid train iteration type or agent_id not in training data.",
                     color="red",
                 )
         else:
-            self._logger.log(
-                "error",
-                "Crew is None or does not have _train_iteration attribute.",
+            self._printer.print(
+                content="Crew is None or does not have _train_iteration attribute.",
                 color="red",
             )
@@ -364,15 +374,13 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
                     train_iteration, agent_id, training_data
                 )
             else:
-                self._logger.log(
-                    "error",
-                    "Invalid train iteration type. Expected int.",
+                self._printer.print(
+                    content="Invalid train iteration type. Expected int.",
                     color="red",
                 )
         else:
-            self._logger.log(
-                "error",
-                "Crew is None or does not have _train_iteration attribute.",
+            self._printer.print(
+                content="Crew is None or does not have _train_iteration attribute.",
                 color="red",
            )
@@ -388,3 +396,82 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
     def _format_msg(self, prompt: str, role: str = "user") -> Dict[str, str]:
         prompt = prompt.rstrip()
         return {"role": role, "content": prompt}
+
+    def _handle_human_feedback(self, formatted_answer: AgentFinish) -> AgentFinish:
+        """
+        Handles the human feedback loop, allowing the user to provide feedback
+        on the agent's output and determining if additional iterations are needed.
+
+        Parameters:
+            formatted_answer (AgentFinish): The initial output from the agent.
+
+        Returns:
+            AgentFinish: The final output after incorporating human feedback.
+        """
+        while self.ask_for_human_input:
+            human_feedback = self._ask_human_input(formatted_answer.output)
+            print("Human feedback: ", human_feedback)
+            if self.crew and self.crew._train:
+                self._handle_crew_training_output(formatted_answer, human_feedback)
+
+            # Make an LLM call to verify if additional changes are requested based on human feedback
+            additional_changes_prompt = self._i18n.slice(
+                "human_feedback_classification"
+            ).format(feedback=human_feedback)
+
+            retry_count = 0
+            llm_call_successful = False
+            additional_changes_response = None
+
+            while retry_count < MAX_LLM_RETRY and not llm_call_successful:
+                try:
+                    additional_changes_response = (
+                        self.llm.call(
+                            [
+                                self._format_msg(
+                                    additional_changes_prompt, role="system"
+                                )
+                            ],
+                            callbacks=self.callbacks,
+                        )
+                        .strip()
+                        .lower()
+                    )
+                    llm_call_successful = True
+                except Exception as e:
+                    retry_count += 1
+                    self._printer.print(
+                        content=f"Error during LLM call to classify human feedback: {e}. Retrying... ({retry_count}/{MAX_LLM_RETRY})",
+                        color="red",
+                    )
+
+            if not llm_call_successful:
+                self._printer.print(
+                    content="Error processing feedback after multiple attempts.",
+                    color="red",
+                )
+                self.ask_for_human_input = False
+                break
+
+            if additional_changes_response == "false":
+                self.ask_for_human_input = False
+            elif additional_changes_response == "true":
+                self.ask_for_human_input = True
+                # Add human feedback to messages
+                self.messages.append(self._format_msg(f"Feedback: {human_feedback}"))
+                # Invoke the loop again with updated messages
+                formatted_answer = self._invoke_loop()
+
+                if self.crew and self.crew._train:
+                    self._handle_crew_training_output(formatted_answer)
+            else:
+                # Unexpected response
+                self._printer.print(
+                    content=f"Unexpected response from LLM: '{additional_changes_response}'. Assuming no additional changes requested.",
+                    color="red",
+                )
+                self.ask_for_human_input = False
+
+        return formatted_answer
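
The `ToolResult` dataclass exists to carry a tool's `result_as_answer` flag back to the reasoning loop: when it is set, the loop returns an `AgentFinish` immediately and the raw tool output becomes the final answer. A hedged sketch of the behavior it enables (the flag is read straight off the tool via `tool_name_to_tool_map`, as the hunk above shows; the tool and crew here are illustrative):

```python
# Sketch only: a tool whose output short-circuits the agent loop.
from crewai import Agent, Crew, Task
from crewai.tools import BaseTool


class InventoryReport(BaseTool):
    name: str = "fetch_inventory"
    description: str = "Returns the raw inventory report."
    result_as_answer: bool = True  # the flag _execute_tool_and_check_finality reads

    def _run(self) -> str:
        return "widgets: 42\ngadgets: 7"


clerk = Agent(
    role="Inventory Clerk",
    goal="Report current stock levels",
    backstory="You read inventory systems.",
    tools=[InventoryReport()],
)

task = Task(
    description="Fetch the current inventory.",
    expected_output="The raw inventory report.",
    agent=clerk,
)

# When the agent calls fetch_inventory, the loop returns AgentFinish at once,
# so the tool output is used verbatim instead of being rephrased by the LLM.
crew = Crew(agents=[clerk], tasks=[task])
# crew.kickoff()
```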

View File

@@ -7,6 +7,7 @@ from rich.console import Console

 from .constants import AUTH0_AUDIENCE, AUTH0_CLIENT_ID, AUTH0_DOMAIN
 from .utils import TokenManager, validate_token
+from crewai.cli.tools.main import ToolCommand

 console = Console()
@@ -63,7 +64,22 @@ class AuthenticationCommand:
                 validate_token(token_data["id_token"])
                 expires_in = 360000  # Token expiration time in seconds
                 self.token_manager.save_tokens(token_data["access_token"], expires_in)
-                console.print("\nWelcome to CrewAI+ !!", style="green")
+
+                try:
+                    ToolCommand().login()
+                except Exception:
+                    console.print(
+                        "\n[bold yellow]Warning:[/bold yellow] Authentication with the Tool Repository failed.",
+                        style="yellow",
+                    )
+                    console.print(
+                        "Other features will work normally, but you may experience limitations "
+                        "with downloading and publishing tools."
+                        "\nRun [bold]crewai login[/bold] to try logging in again.\n",
+                        style="yellow",
+                    )
+
+                console.print("\n[bold green]Welcome to CrewAI Enterprise![/bold green]\n")
                 return

             if token_data["error"] not in ("authorization_pending", "slow_down"):

View File

@@ -0,0 +1,10 @@
from .utils import TokenManager


def get_auth_token() -> str:
    """Get the authentication token."""
    access_token = TokenManager().get_token()
    if not access_token:
        raise Exception()
    return access_token

View File

@@ -6,7 +6,6 @@ import pkg_resources
 from crewai.cli.add_crew_to_flow import add_crew_to_flow
 from crewai.cli.create_crew import create_crew
 from crewai.cli.create_flow import create_flow
-from crewai.cli.create_pipeline import create_pipeline
 from crewai.memory.storage.kickoff_task_outputs_storage import (
     KickoffTaskOutputsSQLiteStorage,
 )
@@ -26,27 +25,24 @@ from .update_crew import update_crew


 @click.group()
+@click.version_option(pkg_resources.get_distribution("crewai").version)
 def crewai():
     """Top-level command group for crewai."""


 @crewai.command()
-@click.argument("type", type=click.Choice(["crew", "pipeline", "flow"]))
+@click.argument("type", type=click.Choice(["crew", "flow"]))
 @click.argument("name")
 @click.option("--provider", type=str, help="The provider to use for the crew")
 @click.option("--skip_provider", is_flag=True, help="Skip provider validation")
 def create(type, name, provider, skip_provider=False):
-    """Create a new crew, pipeline, or flow."""
+    """Create a new crew, or flow."""
     if type == "crew":
         create_crew(name, provider, skip_provider)
-    elif type == "pipeline":
-        create_pipeline(name)
     elif type == "flow":
         create_flow(name)
     else:
-        click.secho(
-            "Error: Invalid type. Must be 'crew', 'pipeline', or 'flow'.", fg="red"
-        )
+        click.secho("Error: Invalid type. Must be 'crew' or 'flow'.", fg="red")


 @crewai.command()
@@ -55,7 +51,10 @@ def create(type, name, provider, skip_provider=False):
 )
 def version(tools):
     """Show the installed version of crewai."""
-    crewai_version = pkg_resources.get_distribution("crewai").version
+    try:
+        crewai_version = pkg_resources.get_distribution("crewai").version
+    except Exception:
+        crewai_version = "unknown version"
     click.echo(f"crewai version: {crewai_version}")

     if tools:
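
The `@click.version_option` line adds a top-level `--version` flag (closing #1679), and the `version` subcommand now degrades gracefully when distribution metadata is missing. A hedged sketch of exercising the flag in-process (the `crewai.cli.cli` module path for the click group is an assumption):

```python
# Sketch only: drive the CLI through click's test runner instead of a shell.
from click.testing import CliRunner

from crewai.cli.cli import crewai  # assumed module path for the click group

runner = CliRunner()
result = runner.invoke(crewai, ["--version"])
print(result.output)  # e.g. "crewai, version 0.86.0"
```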

View File

@@ -2,7 +2,7 @@ import requests
 from requests.exceptions import JSONDecodeError
 from rich.console import Console

 from crewai.cli.plus_api import PlusAPI
-from crewai.cli.utils import get_auth_token
+from crewai.cli.authentication.token import get_auth_token
 from crewai.telemetry.telemetry import Telemetry

 console = Console()

View File

@@ -159,3 +159,6 @@ MODELS = {
 }

 JSON_URL = "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json"
+
+LITELLM_PARAMS = ["api_key", "api_base", "api_version"]
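
These three strings are the normalization targets used by the `Agent.post_init_setup` hunk earlier in this diff: provider-specific environment variable names are collapsed onto litellm's generic parameter names. A small standalone illustration:

```python
# Standalone illustration of the env-var key normalization shown above.
LITELLM_PARAMS = ["api_key", "api_base", "api_version"]


def normalize(key_name: str) -> str:
    key_name = key_name.lower()
    for pattern in LITELLM_PARAMS:
        if pattern in key_name:
            return pattern
    return key_name


print(normalize("AZURE_API_KEY"))      # -> "api_key"
print(normalize("OPENAI_API_BASE"))    # -> "api_base"
print(normalize("AZURE_API_VERSION"))  # -> "api_version"
```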

View File

@@ -39,6 +39,7 @@ def create_folder_structure(name, parent_folder=None):
     folder_path.mkdir(parents=True)

     (folder_path / "tests").mkdir(exist_ok=True)
+    (folder_path / "knowledge").mkdir(exist_ok=True)
     if not parent_folder:
         (folder_path / "src" / folder_name).mkdir(parents=True)
         (folder_path / "src" / folder_name / "tools").mkdir(parents=True)
@@ -52,7 +53,14 @@ def copy_template_files(folder_path, name, class_name, parent_folder):
     templates_dir = package_dir / "templates" / "crew"

     root_template_files = (
-        [".gitignore", "pyproject.toml", "README.md"] if not parent_folder else []
+        [
+            ".gitignore",
+            "pyproject.toml",
+            "README.md",
+            "knowledge/user_preference.txt",
+        ]
+        if not parent_folder
+        else []
     )
     tools_template_files = ["tools/custom_tool.py", "tools/__init__.py"]
     config_template_files = ["config/agents.yaml", "config/tasks.yaml"]
@@ -168,7 +176,9 @@ def create_crew(name, provider=None, skip_provider=False, parent_folder=None):
     templates_dir = package_dir / "templates" / "crew"

     root_template_files = (
-        [".gitignore", "pyproject.toml", "README.md"] if not parent_folder else []
+        [".gitignore", "pyproject.toml", "README.md", "knowledge/user_preference.txt"]
+        if not parent_folder
+        else []
     )
     tools_template_files = ["tools/custom_tool.py", "tools/__init__.py"]
     config_template_files = ["config/agents.yaml", "config/tasks.yaml"]

View File

@@ -1,107 +0,0 @@
import shutil
from pathlib import Path

import click


def create_pipeline(name, router=False):
    """Create a new pipeline project."""
    folder_name = name.replace(" ", "_").replace("-", "_").lower()
    class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "")

    click.secho(f"Creating pipeline {folder_name}...", fg="green", bold=True)

    project_root = Path(folder_name)
    if project_root.exists():
        click.secho(f"Error: Folder {folder_name} already exists.", fg="red")
        return

    # Create directory structure
    (project_root / "src" / folder_name).mkdir(parents=True)
    (project_root / "src" / folder_name / "pipelines").mkdir(parents=True)
    (project_root / "src" / folder_name / "crews").mkdir(parents=True)
    (project_root / "src" / folder_name / "tools").mkdir(parents=True)
    (project_root / "tests").mkdir(exist_ok=True)

    # Create .env file
    with open(project_root / ".env", "w") as file:
        file.write("OPENAI_API_KEY=YOUR_API_KEY")

    package_dir = Path(__file__).parent
    template_folder = "pipeline_router" if router else "pipeline"
    templates_dir = package_dir / "templates" / template_folder

    # List of template files to copy
    root_template_files = [".gitignore", "pyproject.toml", "README.md"]
    src_template_files = ["__init__.py", "main.py"]
    tools_template_files = ["tools/__init__.py", "tools/custom_tool.py"]

    if router:
        crew_folders = [
            "classifier_crew",
            "normal_crew",
            "urgent_crew",
        ]
        pipelines_folders = [
            "pipelines/__init__.py",
            "pipelines/pipeline_classifier.py",
            "pipelines/pipeline_normal.py",
            "pipelines/pipeline_urgent.py",
        ]
    else:
        crew_folders = [
            "research_crew",
            "write_linkedin_crew",
            "write_x_crew",
        ]
        pipelines_folders = ["pipelines/__init__.py", "pipelines/pipeline.py"]

    def process_file(src_file, dst_file):
        with open(src_file, "r") as file:
            content = file.read()

        content = content.replace("{{name}}", name)
        content = content.replace("{{crew_name}}", class_name)
        content = content.replace("{{folder_name}}", folder_name)
        content = content.replace("{{pipeline_name}}", class_name)

        with open(dst_file, "w") as file:
            file.write(content)

    # Copy and process root template files
    for file_name in root_template_files:
        src_file = templates_dir / file_name
        dst_file = project_root / file_name
        process_file(src_file, dst_file)

    # Copy and process src template files
    for file_name in src_template_files:
        src_file = templates_dir / file_name
        dst_file = project_root / "src" / folder_name / file_name
        process_file(src_file, dst_file)

    # Copy tools files
    for file_name in tools_template_files:
        src_file = templates_dir / file_name
        dst_file = project_root / "src" / folder_name / file_name
        shutil.copy(src_file, dst_file)

    # Copy pipelines folders
    for file_name in pipelines_folders:
        src_file = templates_dir / file_name
        dst_file = project_root / "src" / folder_name / file_name
        process_file(src_file, dst_file)

    # Copy crew folders
    for crew_folder in crew_folders:
        src_crew_folder = templates_dir / "crews" / crew_folder
        dst_crew_folder = project_root / "src" / folder_name / "crews" / crew_folder
        if src_crew_folder.exists():
            shutil.copytree(src_crew_folder, dst_crew_folder)
        else:
            click.secho(
                f"Warning: Crew folder {crew_folder} not found in template.",
                fg="yellow",
            )

    click.secho(f"Pipeline {name} created successfully!", fg="green", bold=True)
View File

@@ -1,7 +1,7 @@
 from typing import Optional

 import requests
 from os import getenv
-from crewai.cli.utils import get_crewai_version
+from crewai.cli.version import get_crewai_version
 from urllib.parse import urljoin

View File

@@ -3,7 +3,8 @@ import subprocess

 import click
 from packaging import version

-from crewai.cli.utils import get_crewai_version, read_toml
+from crewai.cli.utils import read_toml
+from crewai.cli.version import get_crewai_version


 def run_crew() -> None:

View File

@@ -1,24 +1,26 @@
 from crewai import Agent, Crew, Process, Task
 from crewai.project import CrewBase, agent, crew, task

-# Uncomment the following line to use an example of a custom tool
-# from {{folder_name}}.tools.custom_tool import MyCustomTool
+# If you want to run a snippet of code before or after the crew starts,
+# you can use the @before_kickoff and @after_kickoff decorators
+# https://docs.crewai.com/concepts/crews#example-crew-class-with-decorators

-# Check our tools documentations for more information on how to use them
-# from crewai_tools import SerperDevTool

 @CrewBase
 class {{crew_name}}():
     """{{crew_name}} crew"""

+    # Learn more about YAML configuration files here:
+    # Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
+    # Tasks: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
     agents_config = 'config/agents.yaml'
     tasks_config = 'config/tasks.yaml'

+    # If you would like to add tools to your agents, you can learn more about it here:
+    # https://docs.crewai.com/concepts/agents#agent-tools
     @agent
     def researcher(self) -> Agent:
         return Agent(
             config=self.agents_config['researcher'],
-            # tools=[MyCustomTool()], # Example of custom tool, loaded on the beginning of file
             verbose=True
         )
@@ -29,6 +31,9 @@ class {{crew_name}}():
             verbose=True
         )

+    # To learn more about structured task outputs,
+    # task dependencies, and task callbacks, check out the documentation:
+    # https://docs.crewai.com/concepts/tasks#overview-of-a-task
     @task
     def research_task(self) -> Task:
         return Task(
@@ -45,6 +50,9 @@ class {{crew_name}}():
     @crew
     def crew(self) -> Crew:
         """Creates the {{crew_name}} crew"""
+        # To learn how to add knowledge sources to your crew, check out the documentation:
+        # https://docs.crewai.com/concepts/knowledge#what-is-knowledge
         return Crew(
             agents=self.agents,  # Automatically created by the @agent decorator
             tasks=self.tasks,  # Automatically created by the @task decorator
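
The template now points at the `@before_kickoff` and `@after_kickoff` hooks. A hedged fragment showing just those hooks (agents and tasks omitted; assumes `crewai.project` exports both decorators in 0.86):

```python
# Fragment only: a CrewBase class would still need @agent/@task/@crew members.
from crewai.project import CrewBase, after_kickoff, before_kickoff


@CrewBase
class HookedCrew:
    """Crew skeleton showing the lifecycle hooks."""

    @before_kickoff
    def prepare_inputs(self, inputs):
        # Runs once before kickoff; may enrich or rewrite the inputs dict.
        inputs.setdefault("topic", "AI Agents")
        return inputs

    @after_kickoff
    def log_result(self, result):
        # Runs once after kickoff; may inspect or post-process the crew output.
        print(f"Crew finished, raw output starts with: {result.raw[:60]}")
        return result
```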

View File

@@ -0,0 +1,4 @@
User name is John Doe.
User is an AI Engineer.
User is interested in AI Agents.
User is based in San Francisco, California.
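
This file is scaffolded into the new `knowledge/` folder created above. A hedged sketch of consuming it (the `TextFileKnowledgeSource` class, its `file_paths` parameter, and knowledge-dir-relative path resolution follow the 0.86 knowledge docs; treat all three as assumptions):

```python
# Sketch only: wires the scaffolded user_preference.txt into a crew.
from crewai import Agent, Crew, Task
from crewai.knowledge.source.text_file_knowledge_source import (
    TextFileKnowledgeSource,
)

# Path is assumed to resolve against the project's knowledge/ directory.
prefs = TextFileKnowledgeSource(file_paths=["user_preference.txt"])

assistant = Agent(
    role="Personal Assistant",
    goal="Answer questions about the user",
    backstory="You know the user's stored profile.",
)

task = Task(
    description="Where is the user based?",
    expected_output="A city and state, sourced from the knowledge file.",
    agent=assistant,
)

crew = Crew(agents=[assistant], tasks=[task], knowledge_sources=[prefs])
# crew.kickoff()  # expected to answer "San Francisco, California"
```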

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.80.0,<1.0.0"
+    "crewai[tools]>=0.86.0,<1.0.0"
 ]

 [project.scripts]

View File

@@ -1,31 +1,47 @@
 from crewai import Agent, Crew, Process, Task
 from crewai.project import CrewBase, agent, crew, task

+# If you want to run a snippet of code before or after the crew starts,
+# you can use the @before_kickoff and @after_kickoff decorators
+# https://docs.crewai.com/concepts/crews#example-crew-class-with-decorators

 @CrewBase
-class PoemCrew():
+class PoemCrew:
     """Poem Crew"""

-    agents_config = 'config/agents.yaml'
-    tasks_config = 'config/tasks.yaml'
+    # Learn more about YAML configuration files here:
+    # Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
+    # Tasks: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
+    agents_config = "config/agents.yaml"
+    tasks_config = "config/tasks.yaml"

+    # If you would like to add tools to your crew, you can learn more about it here:
+    # https://docs.crewai.com/concepts/agents#agent-tools
     @agent
     def poem_writer(self) -> Agent:
         return Agent(
-            config=self.agents_config['poem_writer'],
+            config=self.agents_config["poem_writer"],
         )

+    # To learn more about structured task outputs,
+    # task dependencies, and task callbacks, check out the documentation:
+    # https://docs.crewai.com/concepts/tasks#overview-of-a-task
     @task
     def write_poem(self) -> Task:
         return Task(
-            config=self.tasks_config['write_poem'],
+            config=self.tasks_config["write_poem"],
         )

     @crew
     def crew(self) -> Crew:
         """Creates the Research Crew"""
+        # To learn how to add knowledge sources to your crew, check out the documentation:
+        # https://docs.crewai.com/concepts/knowledge#what-is-knowledge

         return Crew(
             agents=self.agents,  # Automatically created by the @agent decorator
             tasks=self.tasks,  # Automatically created by the @task decorator
             process=Process.sequential,
             verbose=True,
         )

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.80.0,<1.0.0",
+    "crewai[tools]>=0.86.0,<1.0.0",
 ]

 [project.scripts]

View File

@@ -1,2 +0,0 @@
.env
__pycache__/

View File

@@ -1,57 +0,0 @@
# {{crew_name}} Crew
Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.com). This template is designed to help you set up a multi-agent AI system with ease, leveraging the powerful and flexible framework provided by crewAI. Our goal is to enable your agents to collaborate effectively on complex tasks, maximizing their collective intelligence and capabilities.
## Installation
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install Poetry:
```bash
pip install poetry
```
Next, navigate to your project directory and install the dependencies:
1. First lock the dependencies and then install them:
```bash
crewai install
```
### Customizing
**Add your `OPENAI_API_KEY` into the `.env` file**
- Modify `src/{{folder_name}}/config/agents.yaml` to define your agents
- Modify `src/{{folder_name}}/config/tasks.yaml` to define your tasks
- Modify `src/{{folder_name}}/crew.py` to add your own logic, tools and specific args
- Modify `src/{{folder_name}}/main.py` to add custom inputs for your agents and tasks
## Running the Project
To kickstart your crew of AI agents and begin task execution, run this from the root folder of your project:
```bash
crewai run
```
This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration.
This example, unmodified, will run the create a `report.md` file with the output of a research on LLMs in the root folder.
## Understanding Your Crew
The {{name}} Crew is composed of multiple AI agents, each with unique roles, goals, and tools. These agents collaborate on a series of tasks, defined in `config/tasks.yaml`, leveraging their collective skills to achieve complex objectives. The `config/agents.yaml` file outlines the capabilities and configurations of each agent in your crew.
## Support
For support, questions, or feedback regarding the {{crew_name}} Crew or crewAI.
- Visit our [documentation](https://docs.crewai.com)
- Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai)
- [Join our Discord](https://discord.com/invite/X4JWnZnxPb)
- [Chat with our docs](https://chatg.pt/DWjSBZn)
Let's create wonders together with the power and simplicity of crewAI.

View File

@@ -1,19 +0,0 @@
researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} data analysis and research findings
  backstory: >
    You're a meticulous analyst with a keen eye for detail. You're known for
    your ability to turn complex data into clear and concise reports, making
    it easy for others to understand and act on the information you provide.

View File

@@ -1,16 +0,0 @@
research_task:
  description: >
    Conduct a thorough research about {topic}
    Make sure you find any interesting and relevant information given
    the current year is 2024.
  expected_output: >
    A list with 10 bullet points of the most relevant information about {topic}
  agent: researcher

reporting_task:
  description: >
    Review the context you got and expand each topic into a full section for a report.
    Make sure the report is detailed and contains any and all relevant information.
  expected_output: >
    A fully fledge reports with a title, mains topics, each with a full section of information.
  agent: reporting_analyst
agent: reporting_analyst

View File

@@ -1,58 +0,0 @@
from pydantic import BaseModel

from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task

# Uncomment the following line to use an example of a custom tool
# from demo_pipeline.tools.custom_tool import MyCustomTool

# Check our tools documentations for more information on how to use them
# from crewai_tools import SerperDevTool


class ResearchReport(BaseModel):
    """Research Report"""

    title: str
    body: str


@CrewBase
class ResearchCrew():
    """Research Crew"""

    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            verbose=True
        )

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task'],
        )

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config['reporting_task'],
            output_pydantic=ResearchReport
        )

    @crew
    def crew(self) -> Crew:
        """Creates the Research Crew"""
        return Crew(
            agents=self.agents,  # Automatically created by the @agent decorator
            tasks=self.tasks,  # Automatically created by the @task decorator
            process=Process.sequential,
            verbose=True,
        )

View File

@@ -1,51 +0,0 @@
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task

# Uncomment the following line to use an example of a custom tool
# from {{folder_name}}.tools.custom_tool import MyCustomTool

# Check our tools documentations for more information on how to use them
# from crewai_tools import SerperDevTool


@CrewBase
class WriteLinkedInCrew():
    """Research Crew"""

    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            verbose=True
        )

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task'],
        )

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config['reporting_task'],
            output_file='report.md'
        )

    @crew
    def crew(self) -> Crew:
        """Creates the {{crew_name}} crew"""
        return Crew(
            agents=self.agents,  # Automatically created by the @agent decorator
            tasks=self.tasks,  # Automatically created by the @task decorator
            process=Process.sequential,
            verbose=True,
        )

View File

@@ -1,14 +0,0 @@
x_writer_agent:
  role: >
    Expert Social Media Content Creator specializing in short form written content
  goal: >
    Create viral-worthy, engaging short form posts that distill complex {topic} information
    into compelling 280-character messages
  backstory: >
    You're a social media virtuoso with a particular talent for short form content. Your posts
    consistently go viral due to your ability to craft hooks that stop users mid-scroll.
    You've studied the techniques of social media masters like Justin Welsh, Dickie Bush,
    Nicolas Cole, and Shaan Puri, incorporating their best practices into your own unique style.
    Your superpower is taking intricate {topic} concepts and transforming them into
    bite-sized, shareable content that resonates with a wide audience. You know exactly
    how to structure a post for maximum impact and engagement.

View File

@@ -1,22 +0,0 @@
write_x_task:
  description: >
    Using the research report provided, create an engaging short form post about {topic}.
    Your post should have a great hook, summarize key points, and be structured for easy
    consumption on a digital platform. The post must be under 280 characters.

    Follow these guidelines:
    1. Start with an attention-grabbing hook
    2. Condense the main insights from the research
    3. Use clear, concise language
    4. Include a call-to-action or thought-provoking question if space allows
    5. Ensure the post flows well and is easy to read quickly

    Here is the title of the research report you will be using

    Title: {title}

    Research:
    {body}
  expected_output: >
    A compelling X post under 280 characters that effectively summarizes the key findings
    about {topic}, starts with a strong hook, and is optimized for engagement on the platform.
  agent: x_writer_agent

View File

@@ -1,36 +0,0 @@
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task

# Uncomment the following line to use an example of a custom tool
# from demo_pipeline.tools.custom_tool import MyCustomTool

# Check our tools documentations for more information on how to use them
# from crewai_tools import SerperDevTool


@CrewBase
class WriteXCrew:
    """Research Crew"""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def x_writer_agent(self) -> Agent:
        return Agent(config=self.agents_config["x_writer_agent"], verbose=True)

    @task
    def write_x_task(self) -> Task:
        return Task(
            config=self.tasks_config["write_x_task"],
        )

    @crew
    def crew(self) -> Crew:
        """Creates the Write X Crew"""
        return Crew(
            agents=self.agents,  # Automatically created by the @agent decorator
            tasks=self.tasks,  # Automatically created by the @task decorator
            process=Process.sequential,
            verbose=True,
        )

View File

@@ -1,26 +0,0 @@
#!/usr/bin/env python
import asyncio

from {{folder_name}}.pipelines.pipeline import {{pipeline_name}}Pipeline


async def run():
    """
    Run the pipeline.
    """
    inputs = [
        {"topic": "AI wearables"},
    ]
    pipeline = {{pipeline_name}}Pipeline()
    results = await pipeline.kickoff(inputs)

    # Process and print results
    for result in results:
        print(f"Raw output: {result.raw}")
        if result.json_dict:
            print(f"JSON output: {result.json_dict}")
        print("\n")


def main():
    asyncio.run(run())


if __name__ == "__main__":
    main()

View File

@@ -1,87 +0,0 @@
"""
This pipeline file includes two different examples to demonstrate the flexibility of crewAI pipelines.

Example 1: Two-Stage Pipeline
-----------------------------
This pipeline consists of two crews:
1. ResearchCrew: Performs research on a given topic.
2. WriteXCrew: Generates an X (Twitter) post based on the research findings.

Key features:
- The ResearchCrew's final task uses output_json to store all research findings in a JSON object.
- This JSON object is then passed to the WriteXCrew, where tasks can access the research findings.

Example 2: Two-Stage Pipeline with Parallel Execution
-------------------------------------------------------
This pipeline consists of three crews:
1. ResearchCrew: Performs research on a given topic.
2. WriteXCrew and WriteLinkedInCrew: Run in parallel, using the research findings to generate posts for X and LinkedIn, respectively.

Key features:
- Demonstrates the ability to run multiple crews in parallel.
- Shows how to structure a pipeline with both sequential and parallel stages.

Usage:
- To switch between examples, comment/uncomment the respective code blocks below.
- Ensure that you have implemented all necessary crew classes (ResearchCrew, WriteXCrew, WriteLinkedInCrew) before running.
"""

# Common imports for both examples
from crewai import Pipeline

# Uncomment the crews you need for your chosen example
from ..crews.research_crew.research_crew import ResearchCrew
from ..crews.write_x_crew.write_x_crew import WriteXCrew

# from .crews.write_linkedin_crew.write_linkedin_crew import WriteLinkedInCrew  # Uncomment for Example 2


# EXAMPLE 1: Two-Stage Pipeline
# -----------------------------
# Uncomment the following code block to use Example 1

class {{pipeline_name}}Pipeline:
    def __init__(self):
        # Initialize crews
        self.research_crew = ResearchCrew().crew()
        self.write_x_crew = WriteXCrew().crew()

    def create_pipeline(self):
        return Pipeline(
            stages=[
                self.research_crew,
                self.write_x_crew
            ]
        )

    async def kickoff(self, inputs):
        pipeline = self.create_pipeline()
        results = await pipeline.kickoff(inputs)
        return results


# EXAMPLE 2: Two-Stage Pipeline with Parallel Execution
# -------------------------------------------------------
# Uncomment the following code block to use Example 2

# @PipelineBase
# class {{pipeline_name}}Pipeline:
#     def __init__(self):
#         # Initialize crews
#         self.research_crew = ResearchCrew().crew()
#         self.write_x_crew = WriteXCrew().crew()
#         self.write_linkedin_crew = WriteLinkedInCrew().crew()

#     @pipeline
#     def create_pipeline(self):
#         return Pipeline(
#             stages=[
#                 self.research_crew,
#                 [self.write_x_crew, self.write_linkedin_crew]  # Parallel execution
#             ]
#         )

#     async def run(self, inputs):
#         pipeline = self.create_pipeline()
#         results = await pipeline.kickoff(inputs)
#         return results

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]

 [tool.poetry.dependencies]
 python = ">=3.10,<=3.13"
-crewai = { extras = ["tools"], version = ">=0.80.0,<1.0.0" }
+crewai = { extras = ["tools"], version = ">=0.86.0,<1.0.0" }
 asyncio = "*"

 [tool.poetry.scripts]

View File

@@ -1,19 +0,0 @@
from typing import Type

from crewai.tools import BaseTool
from pydantic import BaseModel, Field


class MyCustomToolInput(BaseModel):
    """Input schema for MyCustomTool."""

    argument: str = Field(..., description="Description of the argument.")


class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = (
        "Clear description for what this tool is useful for, you agent will need this information to use it."
    )
    args_schema: Type[BaseModel] = MyCustomToolInput

    def _run(self, argument: str) -> str:
        # Implementation goes here
        return "this is an example of a tool output, ignore it and move along."

View File

@@ -1,2 +0,0 @@
.env
__pycache__/

View File

@@ -1,54 +0,0 @@
# {{crew_name}} Crew
Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.com). This template is designed to help you set up a multi-agent AI system with ease, leveraging the powerful and flexible framework provided by crewAI. Our goal is to enable your agents to collaborate effectively on complex tasks, maximizing their collective intelligence and capabilities.
## Installation
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install Poetry:
```bash
pip install poetry
```
Next, navigate to your project directory and install the dependencies:
1. First lock the dependencies and then install them:
```bash
crewai install
```
### Customizing
**Add your `OPENAI_API_KEY` into the `.env` file**
- Modify `src/{{folder_name}}/config/agents.yaml` to define your agents
- Modify `src/{{folder_name}}/config/tasks.yaml` to define your tasks
- Modify `src/{{folder_name}}/crew.py` to add your own logic, tools and specific args
- Modify `src/{{folder_name}}/main.py` to add custom inputs for your agents and tasks
## Running the Project
To kickstart your crew of AI agents and begin task execution, run this from the root folder of your project:
```bash
crewai run
```
This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration.
This example, unmodified, will run the create a `report.md` file with the output of a research on LLMs in the root folder.
## Understanding Your Crew
The {{name}} Crew is composed of multiple AI agents, each with unique roles, goals, and tools. These agents collaborate on a series of tasks, defined in `config/tasks.yaml`, leveraging their collective skills to achieve complex objectives. The `config/agents.yaml` file outlines the capabilities and configurations of each agent in your crew.
## Support
For support, questions, or feedback regarding the {{crew_name}} Crew or crewAI.
- Visit our [documentation](https://docs.crewai.com)
- Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai)
- [Join our Discord](https://discord.com/invite/X4JWnZnxPb)
- [Chat with our docs](https://chatg.pt/DWjSBZn)
Let's create wonders together with the power and simplicity of crewAI.

View File

@@ -1,19 +0,0 @@
researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} data analysis and research findings
  backstory: >
    You're a meticulous analyst with a keen eye for detail. You're known for
    your ability to turn complex data into clear and concise reports, making
    it easy for others to understand and act on the information you provide.

View File

@@ -1,17 +0,0 @@
research_task:
  description: >
    Conduct a thorough research about {topic}
    Make sure you find any interesting and relevant information given
    the current year is 2024.
  expected_output: >
    A list with 10 bullet points of the most relevant information about {topic}
  agent: researcher

reporting_task:
  description: >
    Review the context you got and expand each topic into a full section for a report.
    Make sure the report is detailed and contains any and all relevant information.
  expected_output: >
    A fully fledged report with the main topics, each with a full section of information.
    Formatted as markdown without '```'
  agent: reporting_analyst
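These `{topic}` placeholders are filled at kickoff time from the `inputs` dict passed in `main.py`. A minimal sketch using the template's own placeholders (the module and class names are the ones generated for your project, so treat them as hypothetical here):

```python
from {{folder_name}}.crew import {{crew_name}}Crew  # hypothetical generated names


def run():
    inputs = {"topic": "AI LLMs"}  # substituted into {topic} above
    {{crew_name}}Crew().crew().kickoff(inputs=inputs)
```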

View File

@@ -1,40 +0,0 @@
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from pydantic import BaseModel

# Uncomment the following line to use an example of a custom tool
# from demo_pipeline.tools.custom_tool import MyCustomTool

# Check our tools documentations for more information on how to use them
# from crewai_tools import SerperDevTool


class UrgencyScore(BaseModel):
    urgency_score: int


@CrewBase
class ClassifierCrew:
    """Email Classifier Crew"""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def classifier(self) -> Agent:
        return Agent(config=self.agents_config["classifier"], verbose=True)

    @task
    def classify_email(self) -> Task:
        return Task(
            config=self.tasks_config["classify_email"],
            output_pydantic=UrgencyScore,
        )

    @crew
    def crew(self) -> Crew:
        """Creates the Email Classifier Crew"""
        return Crew(
            agents=self.agents,  # Automatically created by the @agent decorator
            tasks=self.tasks,  # Automatically created by the @task decorator
            process=Process.sequential,
            verbose=True,
        )
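Because the task sets `output_pydantic=UrgencyScore`, the crew's result can be read back as a validated model. A minimal usage sketch (the email text is illustrative, and this assumes the standard `CrewOutput` interface):

```python
crew = ClassifierCrew().crew()
result = crew.kickoff(inputs={"email": "Server is down and customers are affected!"})

print(result.raw)  # raw LLM output
if result.pydantic:
    # Validated UrgencyScore instance parsed from the task output
    print(result.pydantic.urgency_score)
```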

View File

@@ -1,7 +0,0 @@
classifier:
  role: >
    Email Classifier
  goal: >
    Classify the email: {email} as urgent or normal from a score of 1 to 10, where 1 is not urgent and 10 is urgent. Return the urgency score only.
  backstory: >
    You are a highly efficient and experienced email classifier, trained to quickly assess and classify emails. Your ability to remain calm under pressure and provide concise, actionable responses has made you an invaluable asset in managing normal situations and maintaining smooth operations.

View File

@@ -1,7 +0,0 @@
classify_email:
  description: >
    Classify the email: {email}
    as urgent or normal.
  expected_output: >
    The urgency of the email on a scale of 1 to 10, where 1 is not urgent and 10 is urgent. Return the urgency score only.
  agent: classifier

View File

@@ -1,7 +0,0 @@
normal_handler:
  role: >
    Normal Email Processor
  goal: >
    Process normal emails and create an email to respond to the sender.
  backstory: >
    You are a highly efficient and experienced normal email handler, trained to quickly assess and respond to normal communications. Your ability to remain calm under pressure and provide concise, actionable responses has made you an invaluable asset in managing normal situations and maintaining smooth operations.

View File

@@ -1,6 +0,0 @@
normal_task:
  description: >
    Process and respond to normal email quickly.
  expected_output: >
    An email response to the normal email.
  agent: normal_handler

View File

@@ -1,36 +0,0 @@
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task

# Uncomment the following line to use an example of a custom tool
# from demo_pipeline.tools.custom_tool import MyCustomTool

# Check our tools documentations for more information on how to use them
# from crewai_tools import SerperDevTool


@CrewBase
class NormalCrew:
    """Normal Email Crew"""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def normal_handler(self) -> Agent:
        return Agent(config=self.agents_config["normal_handler"], verbose=True)

    @task
    def normal_task(self) -> Task:
        return Task(
            config=self.tasks_config["normal_task"],
        )

    @crew
    def crew(self) -> Crew:
        """Creates the Normal Email Crew"""
        return Crew(
            agents=self.agents,  # Automatically created by the @agent decorator
            tasks=self.tasks,  # Automatically created by the @task decorator
            process=Process.sequential,
            verbose=True,
        )

View File

@@ -1,7 +0,0 @@
urgent_handler:
  role: >
    Urgent Email Processor
  goal: >
    Process urgent emails and create an email to respond to the sender.
  backstory: >
    You are a highly efficient and experienced urgent email handler, trained to quickly assess and respond to time-sensitive communications. Your ability to remain calm under pressure and provide concise, actionable responses has made you an invaluable asset in managing critical situations and maintaining smooth operations.

View File

@@ -1,6 +0,0 @@
urgent_task:
  description: >
    Process and respond to urgent email quickly.
  expected_output: >
    An email response to the urgent email.
  agent: urgent_handler

View File

@@ -1,36 +0,0 @@
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task

# Uncomment the following line to use an example of a custom tool
# from demo_pipeline.tools.custom_tool import MyCustomTool

# Check our tools documentations for more information on how to use them
# from crewai_tools import SerperDevTool


@CrewBase
class UrgentCrew:
    """Urgent Email Crew"""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def urgent_handler(self) -> Agent:
        return Agent(config=self.agents_config["urgent_handler"], verbose=True)

    @task
    def urgent_task(self) -> Task:
        return Task(
            config=self.tasks_config["urgent_task"],
        )

    @crew
    def crew(self) -> Crew:
        """Creates the Urgent Email Crew"""
        return Crew(
            agents=self.agents,  # Automatically created by the @agent decorator
            tasks=self.tasks,  # Automatically created by the @task decorator
            process=Process.sequential,
            verbose=True,
        )

View File

@@ -1,75 +0,0 @@
#!/usr/bin/env python
import asyncio

from crewai.routers.router import Route
from crewai.routers.router import Router

from {{folder_name}}.pipelines.pipeline_classifier import EmailClassifierPipeline
from {{folder_name}}.pipelines.pipeline_normal import NormalPipeline
from {{folder_name}}.pipelines.pipeline_urgent import UrgentPipeline


async def run():
    """
    Run the pipeline.
    """
    inputs = [
        {
            "email": """
            Subject: URGENT: Marketing Campaign Launch - Immediate Action Required

            Dear Team,

            I'm reaching out regarding our upcoming marketing campaign that requires your immediate attention and swift action. We're facing a critical deadline, and our success hinges on our ability to mobilize quickly.

            Key points:
            Campaign launch: 48 hours from now
            Target audience: 250,000 potential customers
            Expected ROI: 35% increase in Q3 sales

            What we need from you NOW:
            Final approval on creative assets (due in 3 hours)
            Confirmation of media placements (due by end of day)
            Last-minute budget allocation for paid social media push

            Our competitors are poised to launch similar campaigns, and we must act fast to maintain our market advantage. Delays could result in significant lost opportunities and potential revenue.

            Please prioritize this campaign above all other tasks. I'll be available for the next 24 hours to address any concerns or roadblocks.

            Let's make this happen!
            [Your Name]
            Marketing Director

            P.S. I'll be scheduling an emergency team meeting in 1 hour to discuss our action plan. Attendance is mandatory.
            """
        }
    ]

    pipeline_classifier = EmailClassifierPipeline().create_pipeline()
    pipeline_urgent = UrgentPipeline().create_pipeline()
    pipeline_normal = NormalPipeline().create_pipeline()

    router = Router(
        routes={
            "high_urgency": Route(
                condition=lambda x: x.get("urgency_score", 0) > 7,
                pipeline=pipeline_urgent,
            ),
            "low_urgency": Route(
                condition=lambda x: x.get("urgency_score", 0) <= 7,
                pipeline=pipeline_normal,
            ),
        },
        default=pipeline_normal,
    )

    pipeline = pipeline_classifier >> router
    results = await pipeline.kickoff(inputs)

    # Process and print results
    for result in results:
        print(f"Raw output: {result.raw}")
        if result.json_dict:
            print(f"JSON output: {result.json_dict}")
        print("\n")


def main():
    asyncio.run(run())


if __name__ == "__main__":
    main()

View File

@@ -1,24 +0,0 @@
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.classifier_crew.classifier_crew import ClassifierCrew


@PipelineBase
class EmailClassifierPipeline:
    def __init__(self):
        # Initialize crews
        self.classifier_crew = ClassifierCrew().crew()

    def create_pipeline(self):
        return Pipeline(
            stages=[
                self.classifier_crew
            ]
        )

    async def kickoff(self, inputs):
        pipeline = self.create_pipeline()
        results = await pipeline.kickoff(inputs)
        return results

View File

@@ -1,24 +0,0 @@
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.normal_crew.normal_crew import NormalCrew


@PipelineBase
class NormalPipeline:
    def __init__(self):
        # Initialize crews
        self.normal_crew = NormalCrew().crew()

    def create_pipeline(self):
        return Pipeline(
            stages=[
                self.normal_crew
            ]
        )

    async def kickoff(self, inputs):
        pipeline = self.create_pipeline()
        results = await pipeline.kickoff(inputs)
        return results

View File

@@ -1,23 +0,0 @@
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.urgent_crew.urgent_crew import UrgentCrew


@PipelineBase
class UrgentPipeline:
    def __init__(self):
        # Initialize crews
        self.urgent_crew = UrgentCrew().crew()

    def create_pipeline(self):
        return Pipeline(
            stages=[
                self.urgent_crew
            ]
        )

    async def kickoff(self, inputs):
        pipeline = self.create_pipeline()
        results = await pipeline.kickoff(inputs)
        return results

View File

@@ -1,21 +0,0 @@
[project]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.13"
dependencies = [
    "crewai[tools]>=0.80.0,<1.0.0"
]

[project.scripts]
{{folder_name}} = "{{folder_name}}.main:main"
run_crew = "{{folder_name}}.main:main"
train = "{{folder_name}}.main:train"
replay = "{{folder_name}}.main:replay"
test = "{{folder_name}}.main:test"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

View File

@@ -1,19 +0,0 @@
from typing import Type

from crewai.tools import BaseTool
from pydantic import BaseModel, Field


class MyCustomToolInput(BaseModel):
    """Input schema for MyCustomTool."""

    argument: str = Field(..., description="Description of the argument.")


class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = (
        "Clear description for what this tool is useful for; your agent will need this information to use it."
    )
    args_schema: Type[BaseModel] = MyCustomToolInput

    def _run(self, argument: str) -> str:
        # Implementation goes here
        return "this is an example of a tool output, ignore it and move along."

View File

@@ -5,6 +5,6 @@ description = "Power up your crews with {{folder_name}}"
 readme = "README.md"
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.80.0"
+    "crewai[tools]>=0.86.0"
 ]

View File

@@ -1,4 +1,3 @@
-import importlib.metadata
 import os
 import shutil
 import sys
@@ -9,7 +8,6 @@ import click
 import tomli
 from rich.console import Console

-from crewai.cli.authentication.utils import TokenManager
 from crewai.cli.constants import ENV_VARS

 if sys.version_info >= (3, 11):
@@ -137,11 +135,6 @@ def _get_nested_value(data: Dict[str, Any], keys: List[str]) -> Any:
     return reduce(dict.__getitem__, keys, data)

-def get_crewai_version() -> str:
-    """Get the version number of CrewAI running the CLI"""
-    return importlib.metadata.version("crewai")
-
 def fetch_and_json_env_file(env_file_path: str = ".env") -> dict:
     """Fetch the environment variables from a .env file and return them as a dictionary."""
     try:
@@ -166,14 +159,6 @@ def fetch_and_json_env_file(env_file_path: str = ".env") -> dict:
     return {}

-def get_auth_token() -> str:
-    """Get the authentication token."""
-    access_token = TokenManager().get_token()
-    if not access_token:
-        raise Exception()
-    return access_token
-
 def tree_copy(source, destination):
     """Copies the entire directory structure from the source to the destination."""
     for item in os.listdir(source):

View File

@@ -0,0 +1,6 @@
import importlib.metadata


def get_crewai_version() -> str:
    """Get the version number of CrewAI running the CLI"""
    return importlib.metadata.version("crewai")
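If this helper is wired into the CLI as the changelog's `--version` flag suggests, usage would look roughly like this (the import path is an assumption; the diff shows the new file's body but not its location):

```python
# Hypothetical import path -- adjust to wherever the new module lives.
from crewai.cli.version import get_crewai_version

print(get_crewai_version())  # prints the installed crewai package version, e.g. "0.86.0"
```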

View File

@@ -5,7 +5,7 @@
 import uuid
 import warnings
 from concurrent.futures import Future
 from hashlib import md5
-from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Tuple, Union
+from typing import Any, Callable, Dict, List, Optional, Tuple, Union

 from pydantic import (
     UUID4,
@@ -23,11 +23,12 @@ from crewai.agent import Agent
 from crewai.agents.agent_builder.base_agent import BaseAgent
 from crewai.agents.cache import CacheHandler
 from crewai.crews.crew_output import CrewOutput
+from crewai.knowledge.knowledge import Knowledge
+from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
 from crewai.llm import LLM
 from crewai.memory.entity.entity_memory import EntityMemory
 from crewai.memory.long_term.long_term_memory import LongTermMemory
 from crewai.memory.short_term.short_term_memory import ShortTermMemory
-from crewai.knowledge.knowledge import Knowledge
 from crewai.memory.user.user_memory import UserMemory
 from crewai.process import Process
 from crewai.task import Task
@@ -55,8 +56,6 @@ if os.environ.get("AGENTOPS_API_KEY"):
 except ImportError:
     pass

-if TYPE_CHECKING:
-    from crewai.pipeline.pipeline import Pipeline
-
 warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd")
@@ -202,10 +201,13 @@ class Crew(BaseModel):
         default=[],
         description="List of execution logs for tasks",
     )
-    knowledge: Optional[Dict[str, Any]] = Field(
-        default=None, description="Knowledge for the crew. Add knowledge sources to the knowledge object."
+    knowledge_sources: Optional[List[BaseKnowledgeSource]] = Field(
+        default=None,
+        description="Knowledge sources for the crew. Add knowledge sources to the knowledge object.",
+    )
+    _knowledge: Optional[Knowledge] = PrivateAttr(
+        default=None,
     )

     @field_validator("id", mode="before")
     @classmethod
@@ -282,11 +284,22 @@
     @model_validator(mode="after")
     def create_crew_knowledge(self) -> "Crew":
-        if self.knowledge:
+        """Create the knowledge for the crew."""
+        if self.knowledge_sources:
             try:
-                self.knowledge = Knowledge(**self.knowledge) if isinstance(self.knowledge, dict) else self.knowledge
-            except (TypeError, ValueError) as e:
-                raise ValueError(f"Invalid knowledge configuration: {str(e)}")
+                if isinstance(self.knowledge_sources, list) and all(
+                    isinstance(k, BaseKnowledgeSource) for k in self.knowledge_sources
+                ):
+                    self._knowledge = Knowledge(
+                        sources=self.knowledge_sources,
+                        embedder_config=self.embedder,
+                        collection_name="crew",
+                    )
+            except Exception as e:
+                self._logger.log(
+                    "warning", f"Failed to init knowledge: {e}", color="yellow"
+                )
         return self

     @model_validator(mode="after")
@@ -942,6 +955,11 @@
         result = self._execute_tasks(self.tasks, start_index, True)
         return result

+    def query_knowledge(self, query: List[str]) -> Union[List[Dict[str, Any]], None]:
+        if self._knowledge:
+            return self._knowledge.query(query)
+        return None
+
     def copy(self):
         """Create a deep copy of the Crew."""
@@ -1053,17 +1071,5 @@
         evaluator.print_crew_evaluation_result()

-    def __rshift__(self, other: "Crew") -> "Pipeline":
-        """
-        Implements the >> operator to add another Crew to an existing Pipeline.
-        """
-        from crewai.pipeline.pipeline import Pipeline
-
-        if not isinstance(other, Crew):
-            raise TypeError(
-                f"Unsupported operand type for >>: '{type(self).__name__}' and '{type(other).__name__}'"
-            )
-        return Pipeline(stages=[self, other])
-
     def __repr__(self):
         return f"Crew(id={self.id}, process={self.process}, number_of_agents={len(self.agents)}, number_of_tasks={len(self.tasks)})"

View File

@@ -1,12 +1,11 @@
 import os
-from typing import List, Optional, Dict, Any
+from typing import Any, Dict, List, Optional

 from pydantic import BaseModel, ConfigDict, Field

 from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
 from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage
-from crewai.utilities.logger import Logger
+from crewai.utilities.constants import DEFAULT_SCORE_THRESHOLD

 os.environ["TOKENIZERS_PARALLELISM"] = "false"  # removes logging from fastembed
@@ -18,28 +17,35 @@ class Knowledge(BaseModel):
     storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
     embedder_config: Optional[Dict[str, Any]] = None
     """

     sources: List[BaseKnowledgeSource] = Field(default_factory=list)
     model_config = ConfigDict(arbitrary_types_allowed=True)
     storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
     embedder_config: Optional[Dict[str, Any]] = None
+    collection_name: Optional[str] = None

-    def __init__(self, embedder_config: Optional[Dict[str, Any]] = None, **data):
+    def __init__(
+        self,
+        collection_name: str,
+        sources: List[BaseKnowledgeSource],
+        embedder_config: Optional[Dict[str, Any]] = None,
+        storage: Optional[KnowledgeStorage] = None,
+        **data,
+    ):
         super().__init__(**data)
-        self.storage = KnowledgeStorage(embedder_config=embedder_config or None)
-        try:
-            for source in self.sources:
-                source.add()
-        except Exception as e:
-            Logger(verbose=True).log(
-                "warning",
-                f"Failed to init knowledge: {e}",
-                color="yellow",
-            )
+        if storage:
+            self.storage = storage
+        else:
+            self.storage = KnowledgeStorage(
+                embedder_config=embedder_config, collection_name=collection_name
+            )
+        self.sources = sources
+        self.storage.initialize_knowledge_storage()
+        for source in sources:
+            source.storage = self.storage
+            source.add()

-    def query(
-        self, query: List[str], limit: int = 3, preference: Optional[str] = None
-    ) -> List[Dict[str, Any]]:
+    def query(self, query: List[str], limit: int = 3) -> List[Dict[str, Any]]:
         """
         Query across all knowledge sources to find the most relevant information.
         Returns the top_k most relevant chunks.
@@ -48,7 +54,10 @@ class Knowledge(BaseModel):
         results = self.storage.search(
             query,
             limit,
-            filter={"preference": preference} if preference else None,
+            score_threshold=DEFAULT_SCORE_THRESHOLD,
         )
         return results
+
+    def _add_sources(self):
+        for source in self.sources:
+            source.storage = self.storage
+            source.add()
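Under the new signature, `Knowledge` takes a `collection_name` and its sources up front, initializes storage, and ingests immediately. A sketch (embedding happens at construction time, so provider credentials are needed; the content strings are illustrative):

```python
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

knowledge = Knowledge(
    collection_name="support",
    sources=[StringKnowledgeSource(content="Our refund window is 30 days.")],
)

# Returns the top chunks scoring above DEFAULT_SCORE_THRESHOLD.
results = knowledge.query(["refund window"])
```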

View File

@@ -1,36 +1,71 @@
+from abc import ABC, abstractmethod
 from pathlib import Path
-from typing import Union, List
+from typing import Dict, List, Union

 from pydantic import Field

 from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
-from typing import Dict, Any
 from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage
+from crewai.utilities.constants import KNOWLEDGE_DIRECTORY
+from crewai.utilities.logger import Logger

-class BaseFileKnowledgeSource(BaseKnowledgeSource):
+class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
     """Base class for knowledge sources that load content from files."""

-    file_path: Union[Path, List[Path]] = Field(...)
+    _logger: Logger = Logger(verbose=True)
+    file_path: Union[Path, List[Path], str, List[str]] = Field(
+        ..., description="The path to the file"
+    )
     content: Dict[Path, str] = Field(init=False, default_factory=dict)
     storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
+    safe_file_paths: List[Path] = Field(default_factory=list)

     def model_post_init(self, _):
         """Post-initialization method to load content."""
+        self.safe_file_paths = self._process_file_paths()
+        self.validate_paths()
         self.content = self.load_content()

+    @abstractmethod
     def load_content(self) -> Dict[Path, str]:
-        """Load and preprocess file content. Should be overridden by subclasses."""
-        paths = [self.file_path] if isinstance(self.file_path, Path) else self.file_path
-        for path in paths:
+        """Load and preprocess file content. Should be overridden by subclasses. Assume that the file path is relative to the project root in the knowledge directory."""
+        pass
+
+    def validate_paths(self):
+        """Validate the paths."""
+        for path in self.safe_file_paths:
             if not path.exists():
+                self._logger.log(
+                    "error",
+                    f"File not found: {path}. Try adding sources to the knowledge directory. If its inside the knowledge directory, use the relative path.",
+                    color="red",
+                )
                 raise FileNotFoundError(f"File not found: {path}")
             if not path.is_file():
-                raise ValueError(f"Path is not a file: {path}")
-        return {}
+                self._logger.log(
+                    "error",
+                    f"Path is not a file: {path}",
+                    color="red",
+                )

-    def save_documents(self, metadata: Dict[str, Any]):
+    def _save_documents(self):
         """Save the documents to the storage."""
-        chunk_metadatas = [metadata.copy() for _ in self.chunks]
-        self.storage.save(self.chunks, chunk_metadatas)
+        self.storage.save(self.chunks)
+
+    def convert_to_path(self, path: Union[Path, str]) -> Path:
+        """Convert a path to a Path object."""
+        return Path(KNOWLEDGE_DIRECTORY + "/" + path) if isinstance(path, str) else path
+
+    def _process_file_paths(self) -> List[Path]:
+        """Convert file_path to a list of Path objects."""
+        paths = (
+            [self.file_path]
+            if isinstance(self.file_path, (str, Path))
+            else self.file_path
+        )
+        if not isinstance(paths, list):
+            raise ValueError("file_path must be a Path, str, or a list of these types")
+        return [self.convert_to_path(path) for path in paths]
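With `convert_to_path`, plain string paths now resolve relative to the project's `knowledge/` directory while `Path` objects pass through untouched. A sketch using the text file source from this changeset (the file names are illustrative and must exist, or `validate_paths` raises):

```python
from pathlib import Path
from crewai.knowledge.source.text_file_knowledge_source import TextFileKnowledgeSource

# Resolves to knowledge/policies.txt relative to the project root.
relative_source = TextFileKnowledgeSource(file_path="policies.txt")

# A Path object is used exactly as given.
absolute_source = TextFileKnowledgeSource(file_path=Path("/data/policies.txt"))
```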

View File

@@ -1,5 +1,5 @@
 from abc import ABC, abstractmethod
-from typing import List, Dict, Any
+from typing import Any, Dict, List, Optional

 import numpy as np
 from pydantic import BaseModel, ConfigDict, Field
@@ -17,7 +17,8 @@ class BaseKnowledgeSource(BaseModel, ABC):
     model_config = ConfigDict(arbitrary_types_allowed=True)
     storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
-    metadata: Dict[str, Any] = Field(default_factory=dict)
+    metadata: Dict[str, Any] = Field(default_factory=dict)  # Currently unused
+    collection_name: Optional[str] = Field(default=None)

     @abstractmethod
     def load_content(self) -> Dict[Any, str]:
@@ -40,9 +41,9 @@
             for i in range(0, len(text), self.chunk_size - self.chunk_overlap)
         ]

-    def save_documents(self, metadata: Dict[str, Any]):
+    def _save_documents(self):
         """
         Save the documents to the storage.
         This method should be called after the chunks and embeddings are generated.
         """
-        self.storage.save(self.chunks, metadata)
+        self.storage.save(self.chunks)

View File

@@ -1,6 +1,6 @@
 import csv
-from typing import Dict, List
 from pathlib import Path
+from typing import Dict, List

 from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource
@@ -10,19 +10,15 @@ class CSVKnowledgeSource(BaseFileKnowledgeSource):
     def load_content(self) -> Dict[Path, str]:
         """Load and preprocess CSV file content."""
-        super().load_content()  # Validate the file path
-
-        file_path = (
-            self.file_path[0] if isinstance(self.file_path, list) else self.file_path
-        )
-        file_path = Path(file_path) if isinstance(file_path, str) else file_path
-
-        with open(file_path, "r", encoding="utf-8") as csvfile:
-            reader = csv.reader(csvfile)
-            content = ""
-            for row in reader:
-                content += " ".join(row) + "\n"
-        return {file_path: content}
+        content_dict = {}
+        for file_path in self.safe_file_paths:
+            with open(file_path, "r", encoding="utf-8") as csvfile:
+                reader = csv.reader(csvfile)
+                content = ""
+                for row in reader:
+                    content += " ".join(row) + "\n"
+            content_dict[file_path] = content
+        return content_dict

     def add(self) -> None:
         """
@@ -34,7 +30,7 @@ class CSVKnowledgeSource(BaseFileKnowledgeSource):
         )
         new_chunks = self._chunk_text(content_str)
         self.chunks.extend(new_chunks)
-        self.save_documents(metadata=self.metadata)
+        self._save_documents()

     def _chunk_text(self, text: str) -> List[str]:
         """Utility method to split text into chunks."""

View File

@@ -1,5 +1,6 @@
-from typing import Dict, List
 from pathlib import Path
+from typing import Dict, List

 from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource
@@ -8,17 +9,15 @@ class ExcelKnowledgeSource(BaseFileKnowledgeSource):
     def load_content(self) -> Dict[Path, str]:
         """Load and preprocess Excel file content."""
-        super().load_content()  # Validate the file path
         pd = self._import_dependencies()

-        if isinstance(self.file_path, list):
-            file_path = self.file_path[0]
-        else:
-            file_path = self.file_path
-
-        df = pd.read_excel(file_path)
-        content = df.to_csv(index=False)
-        return {file_path: content}
+        content_dict = {}
+        for file_path in self.safe_file_paths:
+            file_path = self.convert_to_path(file_path)
+            df = pd.read_excel(file_path)
+            content = df.to_csv(index=False)
+            content_dict[file_path] = content
+        return content_dict

     def _import_dependencies(self):
         """Dynamically import dependencies."""
@@ -46,7 +45,7 @@ class ExcelKnowledgeSource(BaseFileKnowledgeSource):
         new_chunks = self._chunk_text(content_str)
         self.chunks.extend(new_chunks)
-        self.save_documents(metadata=self.metadata)
+        self._save_documents()

     def _chunk_text(self, text: str) -> List[str]:
         """Utility method to split text into chunks."""

View File

@@ -1,6 +1,6 @@
 import json
-from typing import Any, Dict, List
 from pathlib import Path
+from typing import Any, Dict, List

 from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource
@@ -10,11 +10,9 @@ class JSONKnowledgeSource(BaseFileKnowledgeSource):
     def load_content(self) -> Dict[Path, str]:
         """Load and preprocess JSON file content."""
-        super().load_content()  # Validate the file path
-
-        paths = [self.file_path] if isinstance(self.file_path, Path) else self.file_path
         content: Dict[Path, str] = {}
-        for path in paths:
+        for path in self.safe_file_paths:
+            path = self.convert_to_path(path)
             with open(path, "r", encoding="utf-8") as json_file:
                 data = json.load(json_file)
             content[path] = self._json_to_text(data)
@@ -44,7 +42,7 @@ class JSONKnowledgeSource(BaseFileKnowledgeSource):
         )
         new_chunks = self._chunk_text(content_str)
         self.chunks.extend(new_chunks)
-        self.save_documents(metadata=self.metadata)
+        self._save_documents()

     def _chunk_text(self, text: str) -> List[str]:
         """Utility method to split text into chunks."""

View File

@@ -1,5 +1,5 @@
-from typing import List, Dict
 from pathlib import Path
+from typing import Dict, List

 from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource
@@ -9,14 +9,13 @@ class PDFKnowledgeSource(BaseFileKnowledgeSource):
     def load_content(self) -> Dict[Path, str]:
         """Load and preprocess PDF file content."""
-        super().load_content()  # Validate the file paths
         pdfplumber = self._import_pdfplumber()
-        paths = [self.file_path] if isinstance(self.file_path, Path) else self.file_path
         content = {}
-        for path in paths:
+        for path in self.safe_file_paths:
             text = ""
+            path = self.convert_to_path(path)
             with pdfplumber.open(path) as pdf:
                 for page in pdf.pages:
                     page_text = page.extract_text()
@@ -44,7 +43,7 @@ class PDFKnowledgeSource(BaseFileKnowledgeSource):
         for _, text in self.content.items():
             new_chunks = self._chunk_text(text)
             self.chunks.extend(new_chunks)
-        self.save_documents(metadata=self.metadata)
+        self._save_documents()

     def _chunk_text(self, text: str) -> List[str]:
         """Utility method to split text into chunks."""

View File

@@ -1,4 +1,4 @@
-from typing import List
+from typing import List, Optional

 from pydantic import Field
@@ -9,6 +9,7 @@ class StringKnowledgeSource(BaseKnowledgeSource):
     """A knowledge source that stores and queries plain text content using embeddings."""

     content: str = Field(...)
+    collection_name: Optional[str] = Field(default=None)

     def model_post_init(self, _):
         """Post-initialization method to validate content."""
@@ -23,7 +24,7 @@ class StringKnowledgeSource(BaseKnowledgeSource):
         """Add string content to the knowledge source, chunk it, compute embeddings, and save them."""
         new_chunks = self._chunk_text(self.content)
         self.chunks.extend(new_chunks)
-        self.save_documents(metadata=self.metadata)
+        self._save_documents()

     def _chunk_text(self, text: str) -> List[str]:
         """Utility method to split text into chunks."""

View File

@@ -1,5 +1,5 @@
-from typing import Dict, List
 from pathlib import Path
+from typing import Dict, List

 from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource
@@ -9,12 +9,11 @@ class TextFileKnowledgeSource(BaseFileKnowledgeSource):
     def load_content(self) -> Dict[Path, str]:
         """Load and preprocess text file content."""
-        super().load_content()
-        paths = [self.file_path] if isinstance(self.file_path, Path) else self.file_path
         content = {}
-        for path in paths:
-            with path.open("r", encoding="utf-8") as f:
-                content[path] = f.read()  # type: ignore
+        for path in self.safe_file_paths:
+            path = self.convert_to_path(path)
+            with open(path, "r", encoding="utf-8") as f:
+                content[path] = f.read()
         return content

     def add(self) -> None:
@@ -25,7 +24,7 @@ class TextFileKnowledgeSource(BaseFileKnowledgeSource):
         for _, text in self.content.items():
             new_chunks = self._chunk_text(text)
             self.chunks.extend(new_chunks)
-        self.save_documents(metadata=self.metadata)
+        self._save_documents()

     def _chunk_text(self, text: str) -> List[str]:
         """Utility method to split text into chunks."""

View File

@@ -1,14 +1,20 @@
 import contextlib
+import hashlib
 import io
 import logging
-import chromadb
 import os
-from crewai.utilities.paths import db_storage_path
-from typing import Optional, List
-from typing import Dict, Any
-from crewai.utilities import EmbeddingConfigurator
+from typing import Any, Dict, List, Optional, Union, cast
+
+import chromadb
+import chromadb.errors
+from chromadb.api import ClientAPI
+from chromadb.api.types import OneOrMany
+from chromadb.config import Settings
+
 from crewai.knowledge.storage.base_knowledge_storage import BaseKnowledgeStorage
-import hashlib
+from crewai.utilities import EmbeddingConfigurator
+from crewai.utilities.logger import Logger
+from crewai.utilities.paths import db_storage_path

 @contextlib.contextmanager
@@ -35,9 +41,16 @@ class KnowledgeStorage(BaseKnowledgeStorage):
     """

     collection: Optional[chromadb.Collection] = None
+    collection_name: Optional[str] = "knowledge"
+    app: Optional[ClientAPI] = None

-    def __init__(self, embedder_config: Optional[Dict[str, Any]] = None):
-        self._initialize_app(embedder_config or {})
+    def __init__(
+        self,
+        embedder_config: Optional[Dict[str, Any]] = None,
+        collection_name: Optional[str] = None,
+    ):
+        self.collection_name = collection_name
+        self._set_embedder_config(embedder_config)

     def search(
         self,
@@ -67,43 +80,80 @@ class KnowledgeStorage(BaseKnowledgeStorage):
         else:
             raise Exception("Collection not initialized")

-    def _initialize_app(self, embedder_config: Optional[Dict[str, Any]] = None):
-        import chromadb
-        from chromadb.config import Settings
-
-        self._set_embedder_config(embedder_config)
+    def initialize_knowledge_storage(self):
+        base_path = os.path.join(db_storage_path(), "knowledge")
         chroma_client = chromadb.PersistentClient(
-            path=f"{db_storage_path()}/knowledge",
+            path=base_path,
             settings=Settings(allow_reset=True),
         )
         self.app = chroma_client

         try:
-            self.collection = self.app.get_or_create_collection(name="knowledge")
+            collection_name = (
+                f"knowledge_{self.collection_name}"
+                if self.collection_name
+                else "knowledge"
+            )
+            if self.app:
+                self.collection = self.app.get_or_create_collection(
+                    name=collection_name, embedding_function=self.embedder_config
+                )
+            else:
+                raise Exception("Vector Database Client not initialized")
         except Exception:
             raise Exception("Failed to create or get collection")

     def reset(self):
         if self.app:
             self.app.reset()
+        else:
+            base_path = os.path.join(db_storage_path(), "knowledge")
+            self.app = chromadb.PersistentClient(
+                path=base_path,
+                settings=Settings(allow_reset=True),
+            )
+            self.app.reset()

     def save(
-        self, documents: List[str], metadata: Dict[str, Any] | List[Dict[str, Any]]
+        self,
+        documents: List[str],
+        metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None,
     ):
         if self.collection:
-            metadatas = [metadata] if isinstance(metadata, dict) else metadata
-            ids = [
-                hashlib.sha256(doc.encode("utf-8")).hexdigest() for doc in documents
-            ]
-            self.collection.upsert(
-                documents=documents,
-                metadatas=metadatas,
-                ids=ids,
-            )
+            try:
+                if metadata is None:
+                    metadatas: Optional[OneOrMany[chromadb.Metadata]] = None
+                elif isinstance(metadata, list):
+                    metadatas = [cast(chromadb.Metadata, m) for m in metadata]
+                else:
+                    metadatas = cast(chromadb.Metadata, metadata)
+
+                ids = [
+                    hashlib.sha256(doc.encode("utf-8")).hexdigest() for doc in documents
+                ]
+                self.collection.upsert(
+                    documents=documents,
+                    metadatas=metadatas,
+                    ids=ids,
+                )
+            except chromadb.errors.InvalidDimensionException as e:
+                Logger(verbose=True).log(
+                    "error",
+                    "Embedding dimension mismatch. This usually happens when mixing different embedding models. Try resetting the collection using `crewai reset-memories -a`",
+                    "red",
+                )
+                raise ValueError(
+                    "Embedding dimension mismatch. Make sure you're using the same embedding model "
+                    "across all operations with this collection. "
+                    "Try resetting the collection using `crewai reset-memories -a`"
+                ) from e
+            except Exception as e:
+                Logger(verbose=True).log(
+                    "error", f"Failed to upsert documents: {e}", "red"
+                )
+                raise
         else:
             raise Exception("Collection not initialized")
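Putting the new storage API together: construction no longer touches the database, `initialize_knowledge_storage()` opens the Chroma collection, and `save()` works without metadata. A sketch (the collection name and document contents are illustrative):

```python
from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage

storage = KnowledgeStorage(collection_name="demo")
storage.initialize_knowledge_storage()  # opens/creates the "knowledge_demo" collection

storage.save(["Our refund window is 30 days."])  # metadata is now optional
hits = storage.search(["refund"], 3)  # (query, limit), matching the call in Knowledge.query
```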

View File

@@ -0,0 +1,12 @@
from typing import Any, Dict, List


def extract_knowledge_context(knowledge_snippets: List[Dict[str, Any]]) -> str:
    """Extract knowledge from the task prompt."""
    valid_snippets = [
        result["context"]
        for result in knowledge_snippets
        if result and result.get("context")
    ]
    snippet = "\n".join(valid_snippets)
    return f"Additional Information: {snippet}" if valid_snippets else ""

Some files were not shown because too many files have changed in this diff.