Compare commits

...

44 Commits

Author SHA1 Message Date
Lorenze Jay
6faa0317b2 added docs 2024-12-18 10:02:50 -08:00
Lorenze Jay
bc230b4edf Merge branch 'main' of github.com:crewAIInc/crewAI into feat/docling 2024-12-18 09:56:43 -08:00
Lorenze Jay
f380f8ee23 linted 2024-12-18 09:56:32 -08:00
Lorenze Jay
c3d31deff6 renamed to CrewDoclingSource 2024-12-18 07:22:14 -08:00
Vini Brasil
627b9f1abb Remove relative import in flow main.py template (#1782) 2024-12-18 10:47:44 -03:00
Lorenze Jay
aedaf01d19 Merge branch 'main' of github.com:crewAIInc/crewAI into feat/docling 2024-12-17 13:44:33 -08:00
alan blount
1b8001bf98 Gemini 2.0 (#1773)
* Update llms.mdx (Gemini 2.0)

- Add Gemini 2.0 flash to Gemini table.
- Add link to 2 hosting paths for Gemini in Tip.
- Change to lowercase model slugs instead of display names, for user convenience.
- Add https://artificialanalysis.ai/ as alternate leaderboard.
- Move Gemma to "other" tab.

* Update llm.py (gemini 2.0)

Add setting for Gemini 2.0 context window to llm.py

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-17 16:44:10 -05:00
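For reference, a quick sketch (not part of the commit) of picking up the newly documented slug through crewai's `LLM` class, following the same pattern the docs use for other Gemini models:

```python
from crewai import LLM

# gemini-2.0-flash-exp is the slug this commit adds to the Gemini table
llm = LLM(model="gemini/gemini-2.0-flash-exp", temperature=0.7)
```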
Tony Kipkemboi
e59e07e4f7 Merge pull request #1777 from crewAIInc/fix/python-max-version
Fix/python max version
2024-12-17 16:09:44 -05:00
Brandon Hancock
ee239b1c06 change to <13 instead of <=12 2024-12-17 16:00:15 -05:00
Brandon Hancock
bf459bf983 include 12 but not 13 2024-12-17 15:29:11 -05:00
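The bound matters because PEP 440 compares `<=3.12` against the bare version `3.12`, so patch releases such as `3.12.1` fall outside it, while `<3.13` admits every 3.12.x. A quick check with the `packaging` library (the same machinery pip uses; assumed installed):

```python
from packaging.specifiers import SpecifierSet

old, new = SpecifierSet(">=3.10,<=3.12"), SpecifierSet(">=3.10,<3.13")
print("3.12.1" in old)  # False -- 3.12.1 > 3.12, so the old bound rejects patch releases
print("3.12.1" in new)  # True  -- any 3.12.x satisfies <3.13
```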
Lorenze Jay
436a458072 fix types 2024-12-17 10:18:48 -08:00
Lorenze Jay
7885c5f906 fixed run types 2024-12-17 10:05:36 -08:00
Lorenze Jay
ef7a101631 Merge branch 'main' of github.com:crewAIInc/crewAI into feat/docling 2024-12-16 21:55:33 -08:00
Lorenze Jay
e14a49f82c fix test and types 2024-12-16 21:52:36 -08:00
Lorenze Jay
0921f71fd2 linted 2024-12-16 20:23:42 -08:00
Lorenze Jay
10c04d54a9 enabling local files to work and type cleanup 2024-12-16 20:21:47 -08:00
Lorenze Jay
356eb07d5f fix run-types 2024-12-16 20:09:54 -08:00
Lorenze Jay
c2ed1f2355 added test for multiple sources for file_paths 2024-12-16 19:44:29 -08:00
Lorenze Jay
76c640b985 use file_paths instead of file_path 2 2024-12-16 19:41:10 -08:00
Lorenze Jay
054bc266b9 logged but file_path is backwards compatible 2024-12-16 16:30:47 -08:00
Karan Vaidya
94eaa6740e Fix bool and null handling (#1771) 2024-12-16 16:23:53 -05:00
Lorenze Jay
f1c9caa8ec fixed logic 2024-12-15 22:48:45 -08:00
Lorenze Jay
610ea40c2d needs to be list 2024-12-15 22:34:41 -08:00
Lorenze Jay
b14f6ffa59 run_type docs 2024-12-15 22:32:08 -08:00
Lorenze Jay
56172ecf1d organized imports 2024-12-15 22:25:40 -08:00
Lorenze Jay
ee74ad0d6d fix import 2024-12-15 22:23:17 -08:00
Lorenze Jay
a67ec7e37a use file_paths instead of file_path 2024-12-15 22:21:19 -08:00
Lorenze Jay
625c21da5b docling support installation 2024-12-15 22:16:07 -08:00
Lorenze Jay
04cb9afae5 added tool for docling support 2024-12-15 22:15:49 -08:00
Shahar Yair
6d7c1b0743 Fix: CrewJSONEncoder now accepts enums (#1752)
* bugfix: CrewJSONEncoder now accepts enums

* sort imports

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-12 15:13:10 -05:00
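A minimal sketch of the encoder pattern behind this fix (a hypothetical stand-in, not CrewJSONEncoder's actual source): a `json.JSONEncoder` subclass whose `default()` unwraps `Enum` members before deferring to the base class.

```python
import json
from enum import Enum

class EnumAwareEncoder(json.JSONEncoder):  # hypothetical stand-in for CrewJSONEncoder
    def default(self, obj):
        if isinstance(obj, Enum):
            return obj.value  # serialize the member's underlying value
        return super().default(obj)

class Status(Enum):
    DONE = "done"

print(json.dumps({"status": Status.DONE}, cls=EnumAwareEncoder))  # {"status": "done"}
```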
Brandon Hancock (bhancock_ai)
6b864ee21d drop print (#1755) 2024-12-12 15:08:37 -05:00
Brandon Hancock (bhancock_ai)
1ffa8904db apply agent ops changes and resolve merge conflicts (#1748)
* apply agent ops changes and resolve merge conflicts

* Trying to fix tests

* add back in vcr

* update tools

* remove pkg_resources which was causing issues

* Fix tests

* experimenting to see if unique content is an issue with knowledge

* experimenting to see if unique content is an issue with knowledge

* update chromadb which seems to have issues with upsert

* generate new yaml for failing test

* Investigating upsert

* Drop patch

* Update cassettes

* Fix duplicate document issue

* more fixes

* add back in vcr

* new cassette for test

---------

Co-authored-by: Lorenze Jay <lorenzejaytech@gmail.com>
2024-12-12 15:04:32 -05:00
Brandon Hancock (bhancock_ai)
ad916abd76 remove pkg_resources which was causing issues (#1751) 2024-12-12 12:41:13 -05:00
Rip&Tear
9702711094 Feature/add workflow permissions (#1749)
* fix: Call ChromaDB reset before removing storage directory to fix disk I/O errors

* feat: add workflow permissions to stale.yml

* revert rag_storage.py changes

* revert rag_storage.py changes

---------

Co-authored-by: Matt B <mattb@Matts-MacBook-Pro.local>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-12 12:31:43 -05:00
André Lago
8094754239 Fix small typo in sample tool (#1747)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-12 10:11:47 -05:00
Rashmi Pawar
bc5e303d5f NVIDIA Provider : UI changes (#1746)
* docs: add nvidia as provider

* nvidia ui docs changes

* add note for updated list

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-12 10:01:53 -05:00
Anmol Deep
ec89e003c8 Added is_auto_end flag in agentops.end session in crew.py (#1320)
When using agentops, we have the option to pass the `skip_auto_end_session` parameter, which is supposed to keep the session open when the `end_session` function is called by Crew.

The way it works is that `agentops.end_session` accepts an `is_auto_end` flag, and crewai should have passed it as `True` (it's `False` by default).

I have changed the code to pass `is_auto_end=True`.

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-11 11:34:17 -05:00
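The resulting call in `crew.py` appears in a hunk further down this diff; as a standalone sketch (assuming agentops is installed and a session was started earlier):

```python
import agentops

# is_auto_end=True lets users who set skip_auto_end_session keep their session open
agentops.end_session(
    end_state="Success",
    end_state_reason="Finished Execution",
    is_auto_end=True,
)
```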
Bowen Liang
0b0f2d30ab sort imports with isort rules by ruff linter (#1730)
* sort imports

* update

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
Co-authored-by: Eduardo Chiarotti <dudumelgaco@hotmail.com>
2024-12-11 10:46:53 -05:00
Brandon Hancock (bhancock_ai)
1df61aba4c include event emitter in flows (#1740)
* include event emitter in flows

* Clean up

* Fix linter
2024-12-11 10:16:05 -05:00
Paul Cowgill
da9220fa81 Remove manager_callbacks reference (#1741) 2024-12-11 10:13:57 -05:00
Archkon
da4f356fab fix: typo error (#1738)
* Update base_agent_tools.py

typo error

* Update main.py

typo error

* Update base_file_knowledge_source.py

typo error

* Update test_main.py

typo error

* Update en.json

* Update prompts.json

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-10 11:18:45 -05:00
Brandon Hancock (bhancock_ai)
d932b20c6e copy googles changes. Fix tests. Improve LLM file (#1737)
* copy googles changes. Fix tests. Improve LLM file

* Fix type issue
2024-12-10 11:14:37 -05:00
Brandon Hancock (bhancock_ai)
2f9a2afd9e Update pyproject.toml and uv.lock to drop crewai-tools as a default requirement (#1711) 2024-12-09 14:17:46 -05:00
Brandon Hancock (bhancock_ai)
c1df7c410e Bugfix/restrict python version compatibility (#1736)
* drop 3.13

* revert

* Drop test cassette that was causing error

* trying to fix failing test

* adding thiago changes

* resolve final tests

* Drop skip

* drop pipeline
2024-12-09 14:07:57 -05:00
89 changed files with 2187 additions and 11377 deletions

View File

@@ -13,4 +13,4 @@ jobs:
pip install ruff
- name: Run Ruff Linter
run: ruff check --exclude "templates","__init__.py"
run: ruff check

View File

@@ -1,5 +1,10 @@
name: Mark stale issues and pull requests
permissions:
contents: write
issues: write
pull-requests: write
on:
schedule:
- cron: '10 12 * * *'
@@ -8,9 +13,6 @@ on:
jobs:
stale:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v9
with:

View File

@@ -1,9 +1,7 @@
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.4.4
rev: v0.8.2
hooks:
- id: ruff
args: ["--fix"]
exclude: "templates"
- id: ruff-format
exclude: "templates"

.ruff.toml Normal file
View File

@@ -0,0 +1,9 @@
exclude = [
"templates",
"__init__.py",
]
[lint]
select = [
"I", # isort rules
]
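A small illustration of the ordering the `I` (isort) rules enforce once `ruff check --fix` runs with this config: standard-library imports first, then third-party, alphabetized within each group — the same reshuffle visible in the settings.py and flow.py hunks below (pydantic assumed installed, as throughout this repo):

```python
# After `ruff check --fix` with the isort rules enabled:
import json
from pathlib import Path

from pydantic import BaseModel
```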

View File

@@ -44,7 +44,7 @@ To get started with CrewAI, follow these simple steps:
### 1. Installation
Ensure you have Python >=3.10 <=3.12 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <3.13 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, install CrewAI:

View File

@@ -32,7 +32,6 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
| **Output Log File** _(optional)_ | `output_log_file` | Whether you want to have a file with the complete crew output and execution. You can set it using True and it will default to the folder you are currently in and it will be called logs.txt or passing a string with the full path and name of the file. |
| **Manager Agent** _(optional)_ | `manager_agent` | `manager` sets a custom agent that will be used as a manager. |
| **Manager Callbacks** _(optional)_ | `manager_callbacks` | `manager_callbacks` takes a list of callback handlers to be executed by the manager agent when a hierarchical process is used. |
| **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |

View File

@@ -79,6 +79,55 @@ crew = Crew(
result = crew.kickoff(inputs={"question": "What city does John live in and how old is he?"})
```
Here's another example with the `CrewDoclingSource`
```python Code
from crewai import LLM, Agent, Crew, Process, Task
from crewai.knowledge.source.crew_docling_source import CrewDoclingSource
# Create a knowledge source
content_source = CrewDoclingSource(
file_paths=[
"https://lilianweng.github.io/posts/2024-11-28-reward-hacking",
"https://lilianweng.github.io/posts/2024-07-07-hallucination",
],
)
# Create an LLM with a temperature of 0 to ensure deterministic outputs
llm = LLM(model="gpt-4o-mini", temperature=0)
# Create an agent with the knowledge store
agent = Agent(
role="About papers",
goal="You know everything about the papers.",
backstory="""You are a master at understanding papers and their content.""",
verbose=True,
allow_delegation=False,
llm=llm,
)
task = Task(
description="Answer the following questions about the papers: {question}",
expected_output="An answer to the question.",
agent=agent,
)
crew = Crew(
agents=[agent],
tasks=[task],
verbose=True,
process=Process.sequential,
knowledge_sources=[
content_source
], # Enable knowledge by adding the sources here. You can also add more sources to the sources list.
)
result = crew.kickoff(
inputs={
"question": "What is the reward hacking paper about? Be sure to provide sources."
}
)
```
## Knowledge Configuration
### Chunking Configuration

View File

@@ -29,7 +29,7 @@ Large Language Models (LLMs) are the core intelligence behind CrewAI agents. The
## Available Models and Their Capabilities
Here's a detailed breakdown of supported models and their capabilities:
Here's a detailed breakdown of supported models and their capabilities, you can compare performance at [lmarena.ai](https://lmarena.ai/?leaderboard) and [artificialanalysis.ai](https://artificialanalysis.ai/):
<Tabs>
<Tab title="OpenAI">
@@ -43,13 +43,104 @@ Here's a detailed breakdown of supported models and their capabilities:
1 token ≈ 4 characters in English. For example, 8,192 tokens ≈ 32,768 characters or about 6,000 words.
</Note>
</Tab>
<Tab title="Nvidia NIM">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| nvidia/mistral-nemo-minitron-8b-8k-instruct | 8,192 tokens | State-of-the-art small language model delivering superior accuracy for chatbot, virtual assistants, and content generation. |
| nvidia/nemotron-4-mini-hindi-4b-instruct | 4,096 tokens | A bilingual Hindi-English SLM for on-device inference, tailored specifically for Hindi Language. |
| nvidia/llama-3.1-nemotron-70b-instruct | 128k tokens | Llama-3.1-Nemotron-70B-Instruct is a large language model customized by NVIDIA in order to improve the helpfulness of LLM generated responses. |
| nvidia/llama3-chatqa-1.5-8b | 128k tokens | Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines. |
| nvidia/llama3-chatqa-1.5-70b | 128k tokens | Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines. |
| nvidia/vila | 128k tokens | Multi-modal vision-language model that understands text/img/video and creates informative responses |
| nvidia/neva-22 | 4,096 tokens | Multi-modal vision-language model that understands text/images and generates informative responses |
| nvidia/nemotron-mini-4b-instruct | 8,192 tokens | General-purpose tasks |
| nvidia/usdcode-llama3-70b-instruct | 128k tokens | State-of-the-art LLM that answers OpenUSD knowledge queries and generates USD-Python code. |
| nvidia/nemotron-4-340b-instruct | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| meta/codellama-70b | 100k tokens | LLM capable of generating code from natural language and vice versa. |
| meta/llama2-70b | 4,096 tokens | Cutting-edge large language AI model capable of generating text and code in response to prompts. |
| meta/llama3-8b-instruct | 8,192 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| meta/llama3-70b-instruct | 8,192 tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| meta/llama-3.1-8b-instruct | 128k tokens | Advanced state-of-the-art model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.1-70b-instruct | 128k tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| meta/llama-3.1-405b-instruct | 128k tokens | Advanced LLM for synthetic data generation, distillation, and inference for chatbots, coding, and domain-specific tasks. |
| meta/llama-3.2-1b-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.2-3b-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.2-11b-vision-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.2-90b-vision-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| google/gemma-7b | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/gemma-2b | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/codegemma-7b | 8,192 tokens | Cutting-edge model built on Google's Gemma-7B specialized for code generation and code completion. |
| google/codegemma-1.1-7b | 8,192 tokens | Advanced programming model for code generation, completion, reasoning, and instruction following. |
| google/recurrentgemma-2b | 8,192 tokens | Novel recurrent architecture based language model for faster inference when generating long sequences. |
| google/gemma-2-9b-it | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/gemma-2-27b-it | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/gemma-2-2b-it | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/deplot | 512 tokens | One-shot visual language understanding model that translates images of plots into tables. |
| google/paligemma | 8,192 tokens | Vision language model adept at comprehending text and visual inputs to produce informative responses. |
| mistralai/mistral-7b-instruct-v0.2 | 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| mistralai/mixtral-8x7b-instruct-v0.1 | 8,192 tokens | An MOE LLM that follows instructions, completes requests, and generates creative text. |
| mistralai/mistral-large | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| mistralai/mixtral-8x22b-instruct-v0.1 | 8,192 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| mistralai/mistral-7b-instruct-v0.3 | 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| nv-mistralai/mistral-nemo-12b-instruct | 128k tokens | Most advanced language model for reasoning, code, multilingual tasks; runs on a single GPU. |
| mistralai/mamba-codestral-7b-v0.1 | 256k tokens | Model for writing and interacting with code across a wide range of programming languages and tasks. |
| microsoft/phi-3-mini-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-mini-4k-instruct | 4,096 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-small-8k-instruct | 8,192 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-small-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-medium-4k-instruct | 4,096 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-medium-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3.5-mini-instruct | 128K tokens | Lightweight multilingual LLM powering AI applications in latency bound, memory/compute constrained environments |
| microsoft/phi-3.5-moe-instruct | 128K tokens | Advanced LLM based on Mixture of Experts architecture to deliver compute efficient content generation |
| microsoft/kosmos-2 | 1,024 tokens | Groundbreaking multimodal model designed to understand and reason about visual elements in images. |
| microsoft/phi-3-vision-128k-instruct | 128k tokens | Cutting-edge open multimodal model excelling in high-quality reasoning from images. |
| microsoft/phi-3.5-vision-instruct | 128k tokens | Cutting-edge open multimodal model excelling in high-quality reasoning from images. |
| databricks/dbrx-instruct | 12k tokens | A general-purpose LLM with state-of-the-art performance in language understanding, coding, and RAG. |
| snowflake/arctic | 1,024 tokens | Delivers high efficiency inference for enterprise applications focused on SQL generation and coding. |
| aisingapore/sea-lion-7b-instruct | 4,096 tokens | LLM to represent and serve the linguistic and cultural diversity of Southeast Asia |
| ibm/granite-8b-code-instruct | 4,096 tokens | Software programming LLM for code generation, completion, explanation, and multi-turn conversion. |
| ibm/granite-34b-code-instruct | 8,192 tokens | Software programming LLM for code generation, completion, explanation, and multi-turn conversion. |
| ibm/granite-3.0-8b-instruct | 4,096 tokens | Advanced Small Language Model supporting RAG, summarization, classification, code, and agentic AI |
| ibm/granite-3.0-3b-a800m-instruct | 4,096 tokens | Highly efficient Mixture of Experts model for RAG, summarization, entity extraction, and classification |
| mediatek/breeze-7b-instruct | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| upstage/solar-10.7b-instruct | 4,096 tokens | Excels in NLP tasks, particularly in instruction-following, reasoning, and mathematics. |
| writer/palmyra-med-70b-32k | 32k tokens | Leading LLM for accurate, contextually relevant responses in the medical domain. |
| writer/palmyra-med-70b | 32k tokens | Leading LLM for accurate, contextually relevant responses in the medical domain. |
| writer/palmyra-fin-70b-32k | 32k tokens | Specialized LLM for financial analysis, reporting, and data processing |
| 01-ai/yi-large | 32k tokens | Powerful model trained on English and Chinese for diverse tasks including chatbot and creative writing. |
| deepseek-ai/deepseek-coder-6.7b-instruct | 2k tokens | Powerful coding model offering advanced capabilities in code generation, completion, and infilling |
| rakuten/rakutenai-7b-instruct | 1,024 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| rakuten/rakutenai-7b-chat | 1,024 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| baichuan-inc/baichuan2-13b-chat | 4,096 tokens | Support Chinese and English chat, coding, math, instruction following, solving quizzes |
<Note>
NVIDIA's NIM support for models is expanding continuously! For the most up-to-date list of available models, please visit build.nvidia.com.
</Note>
</Tab>
<Tab title="Gemini">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| gemini-2.0-flash-exp | 1M tokens | Higher quality at faster speed, multimodal model, good for most tasks |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
<Tip>
Google's Gemini models are all multimodal, supporting audio, images, video, and text, with support for context caching, JSON schema, function calling, etc.
These models are available via API_KEY from
[The Gemini API](https://ai.google.dev/gemini-api/docs) and also from
[Google Cloud Vertex](https://cloud.google.com/vertex-ai/generative-ai/docs/migrate/migrate-google-ai) as part of the
[Model Garden](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models).
</Tip>
</Tab>
<Tab title="Groq">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| Llama 3.1 70B/8B | 131,072 tokens | High-performance, large context tasks |
| Llama 3.2 Series | 8,192 tokens | General-purpose tasks |
| Mixtral 8x7B | 32,768 tokens | Balanced performance and context |
| Gemma Series | 8,192 tokens | Efficient, smaller-scale tasks |
<Tip>
Groq is known for its fast inference speeds, making it suitable for real-time applications.
@@ -60,7 +151,7 @@ Here's a detailed breakdown of supported models and their capabilities:
|----------|---------------|--------------|
| Deepseek Chat | 128,000 tokens | Specialized in technical discussions |
| Claude 3 | Up to 200K tokens | Strong reasoning, code understanding |
| Gemini | Varies by model | Multimodal capabilities |
| Gemma Series | 8,192 tokens | Efficient, smaller-scale tasks |
<Info>
Provider selection should consider factors like:
@@ -128,10 +219,10 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
# llm: anthropic/claude-2.1
# llm: anthropic/claude-2.0
# Google Models - Good for general tasks
# llm: gemini/gemini-pro
# Google Models - Strong reasoning, large cachable context window, multimodal
# llm: gemini/gemini-1.5-pro-latest
# llm: gemini/gemini-1.0-pro-latest
# llm: gemini/gemini-1.5-flash-latest
# llm: gemini/gemini-1.5-flash-8b-latest
# AWS Bedrock Models - Enterprise-grade
# llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
@@ -350,13 +441,18 @@ Learn how to get the most out of your LLM configuration:
<Accordion title="Google">
```python Code
# Option 1. Gemini accessed with an API key.
# https://ai.google.dev/gemini-api/docs/api-key
GEMINI_API_KEY=<your-api-key>
# Option 2. Vertex AI IAM credentials for Gemini, Anthropic, and anything in the Model Garden.
# https://cloud.google.com/vertex-ai/generative-ai/docs/overview
```
Example usage:
```python Code
llm = LLM(
model="gemini/gemini-pro",
model="gemini/gemini-1.5-pro-latest",
temperature=0.7
)
```
@@ -412,6 +508,20 @@ Learn how to get the most out of your LLM configuration:
```
</Accordion>
<Accordion title="Nvidia NIM">
```python Code
NVIDIA_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="nvidia_nim/meta/llama3-70b-instruct",
temperature=0.7
)
```
</Accordion>
<Accordion title="Groq">
```python Code
GROQ_API_KEY=<your-api-key>
@@ -502,20 +612,6 @@ Learn how to get the most out of your LLM configuration:
```
</Accordion>
<Accordion title="Nvidia NIM">
```python Code
NVIDIA_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="nvidia_nim/meta/llama3-70b-instruct",
temperature=0.7
)
```
</Accordion>
<Accordion title="SambaNova">
```python Code
SAMBANOVA_API_KEY=<your-api-key>

View File

@@ -7,7 +7,7 @@ icon: wrench
<Note>
**Python Version Requirements**
CrewAI requires `Python >=3.10 and <=3.12`. Here's how to check your version:
CrewAI requires `Python >=3.10 and <3.13`. Here's how to check your version:
```bash
python3 --version
```

View File

@@ -3,7 +3,7 @@ name = "crewai"
version = "0.86.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<=3.12"
requires-python = ">=3.10,<3.13"
authors = [
{ name = "Joao Moura", email = "joao@crewai.com" }
]
@@ -15,7 +15,6 @@ dependencies = [
"opentelemetry-exporter-otlp-proto-http>=1.22.0",
"instructor>=1.3.3",
"regex>=2024.9.11",
"crewai-tools>=0.17.0",
"click>=8.1.7",
"python-dotenv>=1.0.0",
"appdirs>=1.4.4",
@@ -27,9 +26,10 @@ dependencies = [
"uv>=0.4.25",
"tomli-w>=1.1.0",
"tomli>=2.0.2",
"chromadb>=0.5.18",
"chromadb>=0.5.23",
"pdfplumber>=0.11.4",
"openpyxl>=3.1.5",
"blinker>=1.9.0",
]
[project.urls]
@@ -38,7 +38,7 @@ Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = ["crewai-tools>=0.14.0"]
tools = ["crewai-tools>=0.17.0"]
agentops = ["agentops>=0.3.0"]
fastembed = ["fastembed>=0.4.1"]
pdfplumber = [
@@ -51,10 +51,13 @@ openpyxl = [
"openpyxl>=3.1.5",
]
mem0 = ["mem0ai>=0.1.29"]
docling = [
"docling>=2.12.0",
]
[tool.uv]
dev-dependencies = [
"ruff>=0.4.10",
"ruff>=0.8.2",
"mypy>=1.10.0",
"pre-commit>=3.6.0",
"mkdocs>=1.4.3",
@@ -64,7 +67,7 @@ dev-dependencies = [
"mkdocs-material-extensions>=1.3.1",
"pillow>=10.2.0",
"cairosvg>=2.7.1",
"crewai-tools>=0.14.0",
"crewai-tools>=0.17.0",
"pytest>=8.0.0",
"pytest-vcr>=1.0.2",
"python-dotenv>=1.0.0",

View File

@@ -23,27 +23,19 @@ from crewai.utilities.converter import generate_model_description
from crewai.utilities.token_counter_callback import TokenCalcHandler
from crewai.utilities.training_handler import CrewTrainingHandler
agentops = None
def mock_agent_ops_provider():
def track_agent(*args, **kwargs):
try:
import agentops # type: ignore # Name "agentops" is already defined
from agentops import track_agent # type: ignore
except ImportError:
def track_agent():
def noop(f):
return f
return noop
return track_agent
agentops = None
if os.environ.get("AGENTOPS_API_KEY"):
try:
from agentops import track_agent
except ImportError:
track_agent = mock_agent_ops_provider()
else:
track_agent = mock_agent_ops_provider()
@track_agent()
class Agent(BaseAgent):

View File

@@ -144,7 +144,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
formatted_answer
)
if self.step_callback:
self.step_callback(tool_result)
self.step_callback(tool_result)
formatted_answer.text += f"\nObservation: {tool_result.result}"
formatted_answer.result = tool_result.result
@@ -413,7 +413,6 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
"""
while self.ask_for_human_input:
human_feedback = self._ask_human_input(formatted_answer.output)
print("Human feedback: ", human_feedback)
if self.crew and self.crew._train:
self._handle_crew_training_output(formatted_answer, human_feedback)

View File

@@ -1,5 +1,6 @@
import re
from typing import Any, Union
from json_repair import repair_json
from crewai.utilities import I18N

View File

@@ -5,9 +5,10 @@ from typing import Any, Dict
import requests
from rich.console import Console
from crewai.cli.tools.main import ToolCommand
from .constants import AUTH0_AUDIENCE, AUTH0_CLIENT_ID, AUTH0_DOMAIN
from .utils import TokenManager, validate_token
from crewai.cli.tools.main import ToolCommand
console = Console()
@@ -79,7 +80,9 @@ class AuthenticationCommand:
style="yellow",
)
console.print("\n[bold green]Welcome to CrewAI Enterprise![/bold green]\n")
console.print(
"\n[bold green]Welcome to CrewAI Enterprise![/bold green]\n"
)
return
if token_data["error"] not in ("authorization_pending", "slow_down"):

View File

@@ -1,10 +1,9 @@
from .utils import TokenManager
def get_auth_token() -> str:
"""Get the authentication token."""
access_token = TokenManager().get_token()
if not access_token:
raise Exception()
return access_token

View File

@@ -1,7 +1,7 @@
from importlib.metadata import version as get_version
from typing import Optional
import click
import pkg_resources
from crewai.cli.add_crew_to_flow import add_crew_to_flow
from crewai.cli.create_crew import create_crew
@@ -25,7 +25,7 @@ from .update_crew import update_crew
@click.group()
@click.version_option(pkg_resources.get_distribution("crewai").version)
@click.version_option(get_version("crewai"))
def crewai():
"""Top-level command group for crewai."""
@@ -52,16 +52,16 @@ def create(type, name, provider, skip_provider=False):
def version(tools):
"""Show the installed version of crewai."""
try:
crewai_version = pkg_resources.get_distribution("crewai").version
crewai_version = get_version("crewai")
except Exception:
crewai_version = "unknown version"
click.echo(f"crewai version: {crewai_version}")
if tools:
try:
tools_version = pkg_resources.get_distribution("crewai-tools").version
tools_version = get_version("crewai")
click.echo(f"crewai tools version: {tools_version}")
except pkg_resources.DistributionNotFound:
except Exception:
click.echo("crewai tools not installed")
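Both call sites now use the stdlib API (available since Python 3.8); a minimal sketch:

```python
from importlib.metadata import version

# Replaces pkg_resources.get_distribution("crewai").version
print(version("crewai"))  # e.g. "0.86.0"
```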

View File

@@ -1,8 +1,9 @@
import requests
from requests.exceptions import JSONDecodeError
from rich.console import Console
from crewai.cli.plus_api import PlusAPI
from crewai.cli.authentication.token import get_auth_token
from crewai.cli.plus_api import PlusAPI
from crewai.telemetry.telemetry import Telemetry
console = Console()

View File

@@ -1,13 +1,19 @@
import json
from pathlib import Path
from pydantic import BaseModel, Field
from typing import Optional
from pydantic import BaseModel, Field
DEFAULT_CONFIG_PATH = Path.home() / ".config" / "crewai" / "settings.json"
class Settings(BaseModel):
tool_repository_username: Optional[str] = Field(None, description="Username for interacting with the Tool Repository")
tool_repository_password: Optional[str] = Field(None, description="Password for interacting with the Tool Repository")
tool_repository_username: Optional[str] = Field(
None, description="Username for interacting with the Tool Repository"
)
tool_repository_password: Optional[str] = Field(
None, description="Password for interacting with the Tool Repository"
)
config_path: Path = Field(default=DEFAULT_CONFIG_PATH, exclude=True)
def __init__(self, config_path: Path = DEFAULT_CONFIG_PATH, **data):

View File

@@ -1,9 +1,11 @@
from typing import Optional
import requests
from os import getenv
from crewai.cli.version import get_crewai_version
from typing import Optional
from urllib.parse import urljoin
import requests
from crewai.cli.version import get_crewai_version
class PlusAPI:
"""

View File

@@ -1,11 +1,12 @@
import subprocess
import click
from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage
def reset_memories_command(

View File

@@ -4,7 +4,7 @@ Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.co
## Installation
Ensure you have Python >=3.10 <=3.12 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install uv:

View File

@@ -3,7 +3,7 @@ name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.12"
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.86.0,<1.0.0"
]

View File

@@ -10,7 +10,7 @@ class MyCustomToolInput(BaseModel):
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = (
"Clear description for what this tool is useful for, you agent will need this information to use it."
"Clear description for what this tool is useful for, your agent will need this information to use it."
)
args_schema: Type[BaseModel] = MyCustomToolInput

View File

@@ -4,7 +4,7 @@ Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.co
## Installation
Ensure you have Python >=3.10 <=3.12 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install uv:

View File

@@ -5,7 +5,7 @@ from pydantic import BaseModel
from crewai.flow.flow import Flow, listen, start
from .crews.poem_crew.poem_crew import PoemCrew
from {{folder_name}}.crews.poem_crew.poem_crew import PoemCrew
class PoemState(BaseModel):

View File

@@ -3,7 +3,7 @@ name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.12"
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.86.0,<1.0.0",
]

View File

@@ -13,7 +13,7 @@ class MyCustomToolInput(BaseModel):
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = (
"Clear description for what this tool is useful for, you agent will need this information to use it."
"Clear description for what this tool is useful for, your agent will need this information to use it."
)
args_schema: Type[BaseModel] = MyCustomToolInput

View File

@@ -1,17 +0,0 @@
[tool.poetry]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.12"
crewai = { extras = ["tools"], version = ">=0.86.0,<1.0.0" }
asyncio = "*"
[tool.poetry.scripts]
{{folder_name}} = "{{folder_name}}.main:main"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

View File

@@ -5,7 +5,7 @@ custom tools to power up your crews.
## Installing
Ensure you have Python >=3.10 <=3.12 installed on your system. This project
Ensure you have Python >=3.10 <3.13 installed on your system. This project
uses [UV](https://docs.astral.sh/uv/) for dependency management and package
handling, offering a seamless setup and execution experience.

View File

@@ -3,7 +3,7 @@ name = "{{folder_name}}"
version = "0.1.0"
description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<=3.12"
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.86.0"
]

View File

@@ -117,7 +117,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
published_handle = publish_response.json()["handle"]
console.print(
f"Succesfully published {published_handle} ({project_version}).\nInstall it in other projects with crewai tool install {published_handle}",
f"Successfully published {published_handle} ({project_version}).\nInstall it in other projects with crewai tool install {published_handle}",
style="bold green",
)
@@ -138,7 +138,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
self._add_package(get_response.json())
console.print(f"Succesfully installed {handle}", style="bold green")
console.print(f"Successfully installed {handle}", style="bold green")
def login(self):
login_response = self.plus_api_client.login_to_tool_repository()

View File

@@ -1,6 +1,6 @@
import importlib.metadata
def get_crewai_version() -> str:
"""Get the version number of CrewAI running the CLI"""
return importlib.metadata.version("crewai")

View File

@@ -1,6 +1,5 @@
import asyncio
import json
import os
import uuid
import warnings
from concurrent.futures import Future
@@ -49,12 +48,10 @@ from crewai.utilities.planning_handler import CrewPlanner
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
from crewai.utilities.training_handler import CrewTrainingHandler
agentops = None
if os.environ.get("AGENTOPS_API_KEY"):
try:
import agentops # type: ignore
except ImportError:
pass
try:
import agentops # type: ignore
except ImportError:
agentops = None
warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd")
@@ -1032,6 +1029,7 @@ class Crew(BaseModel):
agentops.end_session(
end_state="Success",
end_state_reason="Finished Execution",
is_auto_end=True,
)
self._telemetry.end_crew(self, final_string_output)

View File

@@ -14,8 +14,15 @@ from typing import (
cast,
)
from blinker import Signal
from pydantic import BaseModel, ValidationError
from crewai.flow.flow_events import (
FlowFinishedEvent,
FlowStartedEvent,
MethodExecutionFinishedEvent,
MethodExecutionStartedEvent,
)
from crewai.flow.flow_visualizer import plot_flow
from crewai.flow.utils import get_possible_return_constants
from crewai.telemetry import Telemetry
@@ -159,6 +166,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
_routers: Dict[str, str] = {}
_router_paths: Dict[str, List[str]] = {}
initial_state: Union[Type[T], T, None] = None
event_emitter = Signal("event_emitter")
def __class_getitem__(cls: Type["Flow"], item: Type[T]) -> Type["Flow"]:
class _FlowGeneric(cls): # type: ignore
@@ -253,6 +261,14 @@ class Flow(Generic[T], metaclass=FlowMeta):
Returns:
The final output from the flow execution.
"""
self.event_emitter.send(
self,
event=FlowStartedEvent(
type="flow_started",
flow_name=self.__class__.__name__,
),
)
if inputs is not None:
self._initialize_state(inputs)
return asyncio.run(self.kickoff_async())
@@ -267,8 +283,6 @@ class Flow(Generic[T], metaclass=FlowMeta):
Returns:
The final output from the flow execution.
"""
if inputs is not None:
self._initialize_state(inputs)
if not self._start_methods:
raise ValueError("No start method defined")
@@ -285,11 +299,19 @@ class Flow(Generic[T], metaclass=FlowMeta):
# Run all start methods concurrently
await asyncio.gather(*tasks)
# Return the final output (from the last executed method)
if self._method_outputs:
return self._method_outputs[-1]
else:
return None # Or raise an exception if no methods were executed
# Determine the final output (from the last executed method)
final_output = self._method_outputs[-1] if self._method_outputs else None
self.event_emitter.send(
self,
event=FlowFinishedEvent(
type="flow_finished",
flow_name=self.__class__.__name__,
result=final_output,
),
)
return final_output
async def _execute_start_method(self, start_method_name: str) -> None:
result = await self._execute_method(
@@ -352,6 +374,16 @@ class Flow(Generic[T], metaclass=FlowMeta):
async def _execute_single_listener(self, listener_name: str, result: Any) -> None:
try:
method = self._methods[listener_name]
self.event_emitter.send(
self,
event=MethodExecutionStartedEvent(
type="method_execution_started",
method_name=listener_name,
flow_name=self.__class__.__name__,
),
)
sig = inspect.signature(method)
params = list(sig.parameters.values())
@@ -367,6 +399,15 @@ class Flow(Generic[T], metaclass=FlowMeta):
# If listener does not expect parameters, call without arguments
listener_result = await self._execute_method(listener_name, method)
self.event_emitter.send(
self,
event=MethodExecutionFinishedEvent(
type="method_execution_finished",
method_name=listener_name,
flow_name=self.__class__.__name__,
),
)
# Execute listeners of this listener
await self._execute_listeners(listener_name, listener_result)
except Exception as e:

View File

@@ -0,0 +1,33 @@
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Optional
@dataclass
class Event:
type: str
flow_name: str
timestamp: datetime = field(init=False)
def __post_init__(self):
self.timestamp = datetime.now()
@dataclass
class FlowStartedEvent(Event):
pass
@dataclass
class MethodExecutionStartedEvent(Event):
method_name: str
@dataclass
class MethodExecutionFinishedEvent(Event):
method_name: str
@dataclass
class FlowFinishedEvent(Event):
result: Optional[Any] = None
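These events travel over the `Flow.event_emitter` blinker signal wired up in the flow.py hunk above; a hedged subscription sketch (the subclass here is illustrative, not from the codebase):

```python
from crewai.flow.flow import Flow, start

class GreetFlow(Flow):  # illustrative flow with a single start method
    @start()
    def begin(self):
        return "hello"

def on_event(sender, event):
    # event is one of the dataclasses above (FlowStartedEvent, etc.)
    print(f"{event.type}: {event.flow_name} @ {event.timestamp}")

flow = GreetFlow()
flow.event_emitter.connect(on_event)  # blinker receivers are called as (sender, **kwargs)
flow.kickoff()
```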

View File

@@ -1,8 +1,8 @@
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Dict, List, Union
from typing import Dict, List, Optional, Union
from pydantic import Field
from pydantic import Field, field_validator
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage
@@ -14,17 +14,28 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
"""Base class for knowledge sources that load content from files."""
_logger: Logger = Logger(verbose=True)
file_path: Union[Path, List[Path], str, List[str]] = Field(
..., description="The path to the file"
file_path: Optional[Union[Path, List[Path], str, List[str]]] = Field(
default=None,
description="[Deprecated] The path to the file. Use file_paths instead.",
)
file_paths: Optional[Union[Path, List[Path], str, List[str]]] = Field(
default_factory=list, description="The path to the file"
)
content: Dict[Path, str] = Field(init=False, default_factory=dict)
storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
safe_file_paths: List[Path] = Field(default_factory=list)
@field_validator("file_path", "file_paths", mode="before")
def validate_file_path(cls, v, values):
"""Validate that at least one of file_path or file_paths is provided."""
if v is None and ("file_path" not in values or values.get("file_path") is None):
raise ValueError("Either file_path or file_paths must be provided")
return v
def model_post_init(self, _):
"""Post-initialization method to load content."""
self.safe_file_paths = self._process_file_paths()
self.validate_paths()
self.validate_content()
self.content = self.load_content()
@abstractmethod
@@ -32,13 +43,13 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
"""Load and preprocess file content. Should be overridden by subclasses. Assume that the file path is relative to the project root in the knowledge directory."""
pass
def validate_paths(self):
def validate_content(self):
"""Validate the paths."""
for path in self.safe_file_paths:
if not path.exists():
self._logger.log(
"error",
f"File not found: {path}. Try adding sources to the knowledge directory. If its inside the knowledge directory, use the relative path.",
f"File not found: {path}. Try adding sources to the knowledge directory. If it's inside the knowledge directory, use the relative path.",
color="red",
)
raise FileNotFoundError(f"File not found: {path}")
@@ -59,13 +70,31 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
def _process_file_paths(self) -> List[Path]:
"""Convert file_path to a list of Path objects."""
paths = (
[self.file_path]
if isinstance(self.file_path, (str, Path))
else self.file_path
)
# Check if old file_path is being used
if hasattr(self, "file_path") and self.file_path is not None:
self._logger.log(
"warning",
"The 'file_path' attribute is deprecated and will be removed in a future version. Please use 'file_paths' instead.",
color="yellow",
)
paths = (
[self.file_path]
if isinstance(self.file_path, (str, Path))
else self.file_path
)
else:
if self.file_paths is None:
raise ValueError("Your source must be provided with a file_paths: []")
paths = (
[self.file_paths]
if isinstance(self.file_paths, (str, Path))
else self.file_paths
)
if not isinstance(paths, list):
raise ValueError("file_path must be a Path, str, or a list of these types")
raise ValueError(
"file_path/file_paths must be a Path, str, or a list of these types"
)
return [self.convert_to_path(path) for path in paths]
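A hedged usage sketch of the migration, assuming a concrete subclass such as `PDFKnowledgeSource` and a `report.pdf` present under the knowledge directory:

```python
from crewai.knowledge.source.pdf_knowledge_source import PDFKnowledgeSource

new_style = PDFKnowledgeSource(file_paths=["report.pdf"])  # preferred going forward
old_style = PDFKnowledgeSource(file_path="report.pdf")     # still works, logs a deprecation warning
```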

View File

@@ -21,7 +21,7 @@ class BaseKnowledgeSource(BaseModel, ABC):
collection_name: Optional[str] = Field(default=None)
@abstractmethod
def load_content(self) -> Dict[Any, str]:
def validate_content(self) -> Any:
"""Load and preprocess content from the source."""
pass

View File

@@ -0,0 +1,120 @@
from pathlib import Path
from typing import Iterator, List, Optional, Union
from urllib.parse import urlparse
from docling.datamodel.base_models import InputFormat
from docling.document_converter import DocumentConverter
from docling.exceptions import ConversionError
from docling_core.transforms.chunker.hierarchical_chunker import HierarchicalChunker
from docling_core.types.doc.document import DoclingDocument
from pydantic import Field
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.utilities.constants import KNOWLEDGE_DIRECTORY
from crewai.utilities.logger import Logger
class CrewDoclingSource(BaseKnowledgeSource):
"""Default Source class for converting documents to markdown or json
This will auto support PDF, DOCX, and TXT, XLSX, Images, and HTML files without any additional dependencies and follows the docling package as the source of truth.
"""
_logger: Logger = Logger(verbose=True)
file_path: Optional[List[Union[Path, str]]] = Field(default=None)
file_paths: List[Union[Path, str]] = Field(default_factory=list)
chunks: List[str] = Field(default_factory=list)
safe_file_paths: List[Union[Path, str]] = Field(default_factory=list)
content: List[DoclingDocument] = Field(default_factory=list)
document_converter: DocumentConverter = Field(
default_factory=lambda: DocumentConverter(
allowed_formats=[
InputFormat.MD,
InputFormat.ASCIIDOC,
InputFormat.PDF,
InputFormat.DOCX,
InputFormat.HTML,
InputFormat.IMAGE,
InputFormat.XLSX,
InputFormat.PPTX,
]
)
)
def model_post_init(self, _) -> None:
if self.file_path:
self._logger.log(
"warning",
"The 'file_path' attribute is deprecated and will be removed in a future version. Please use 'file_paths' instead.",
color="yellow",
)
self.file_paths = self.file_path
self.safe_file_paths = self.validate_content()
self.content = self.load_content()
def load_content(self) -> List[DoclingDocument]:
try:
return self.convert_source_to_docling_documents()
except ConversionError as e:
self._logger.log(
"error",
f"Error loading content: {e}. Supported formats: {self.document_converter.allowed_formats}",
"red",
)
raise e
except Exception as e:
self._logger.log("error", f"Error loading content: {e}")
raise e
def add(self) -> None:
if self.content is None:
return
for doc in self.content:
new_chunks_iterable = self._chunk_doc(doc)
self.chunks.extend(list(new_chunks_iterable))
self._save_documents()
def convert_source_to_docling_documents(self) -> List[DoclingDocument]:
conv_results_iter = self.document_converter.convert_all(self.safe_file_paths)
return [result.document for result in conv_results_iter]
def _chunk_doc(self, doc: DoclingDocument) -> Iterator[str]:
chunker = HierarchicalChunker()
for chunk in chunker.chunk(doc):
yield chunk.text
def validate_content(self) -> List[Union[Path, str]]:
processed_paths: List[Union[Path, str]] = []
for path in self.file_paths:
if isinstance(path, str):
if path.startswith(("http://", "https://")):
try:
if self._validate_url(path):
processed_paths.append(path)
else:
raise ValueError(f"Invalid URL format: {path}")
except Exception as e:
raise ValueError(f"Invalid URL: {path}. Error: {str(e)}")
else:
local_path = Path(KNOWLEDGE_DIRECTORY + "/" + path)
if local_path.exists():
processed_paths.append(local_path)
else:
raise FileNotFoundError(f"File not found: {local_path}")
else:
# this is an instance of Path
processed_paths.append(path)
return processed_paths
def _validate_url(self, url: str) -> bool:
try:
result = urlparse(url)
return all(
[
result.scheme in ("http", "https"),
result.netloc,
len(result.netloc.split(".")) >= 2, # Ensure domain has TLD
]
)
except Exception:
return False

View File

@@ -13,9 +13,9 @@ class StringKnowledgeSource(BaseKnowledgeSource):
def model_post_init(self, _):
"""Post-initialization method to validate content."""
self.load_content()
self.validate_content()
def load_content(self):
def validate_content(self):
"""Validate string content."""
if not isinstance(self.content, str):
raise ValueError("StringKnowledgeSource only accepts string content")
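Usage is unchanged by the rename; a short sketch:

```python
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

source = StringKnowledgeSource(content="John lives in Lisbon.")
# Non-string content now fails in validate_content() during model_post_init:
# StringKnowledgeSource(content=123)  -> ValueError
```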

View File

@@ -1,5 +1,5 @@
from abc import ABC, abstractmethod
from typing import Dict, Any, List, Optional
from typing import Any, Dict, List, Optional
class BaseKnowledgeStorage(ABC):

View File

@@ -14,9 +14,9 @@ from chromadb.config import Settings
from crewai.knowledge.storage.base_knowledge_storage import BaseKnowledgeStorage
from crewai.utilities import EmbeddingConfigurator
from crewai.utilities.constants import KNOWLEDGE_DIRECTORY
from crewai.utilities.logger import Logger
from crewai.utilities.paths import db_storage_path
from crewai.utilities.constants import KNOWLEDGE_DIRECTORY
@contextlib.contextmanager
@@ -124,43 +124,60 @@ class KnowledgeStorage(BaseKnowledgeStorage):
documents: List[str],
metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None,
):
if self.collection:
try:
if metadata is None:
metadatas: Optional[OneOrMany[chromadb.Metadata]] = None
elif isinstance(metadata, list):
metadatas = [cast(chromadb.Metadata, m) for m in metadata]
else:
metadatas = cast(chromadb.Metadata, metadata)
ids = [
hashlib.sha256(doc.encode("utf-8")).hexdigest() for doc in documents
]
self.collection.upsert(
documents=documents,
metadatas=metadatas,
ids=ids,
)
except chromadb.errors.InvalidDimensionException as e:
Logger(verbose=True).log(
"error",
"Embedding dimension mismatch. This usually happens when mixing different embedding models. Try resetting the collection using `crewai reset-memories -a`",
"red",
)
raise ValueError(
"Embedding dimension mismatch. Make sure you're using the same embedding model "
"across all operations with this collection."
"Try resetting the collection using `crewai reset-memories -a`"
) from e
except Exception as e:
Logger(verbose=True).log(
"error", f"Failed to upsert documents: {e}", "red"
)
raise
else:
if not self.collection:
raise Exception("Collection not initialized")
try:
# Create a dictionary to store unique documents
unique_docs = {}
# Generate IDs and create a mapping of id -> (document, metadata)
for idx, doc in enumerate(documents):
doc_id = hashlib.sha256(doc.encode("utf-8")).hexdigest()
doc_metadata = None
if metadata is not None:
if isinstance(metadata, list):
doc_metadata = metadata[idx]
else:
doc_metadata = metadata
unique_docs[doc_id] = (doc, doc_metadata)
# Prepare filtered lists for ChromaDB
filtered_docs = []
filtered_metadata = []
filtered_ids = []
# Build the filtered lists
for doc_id, (doc, meta) in unique_docs.items():
filtered_docs.append(doc)
filtered_metadata.append(meta)
filtered_ids.append(doc_id)
# If we have no metadata at all, set it to None
final_metadata: Optional[OneOrMany[chromadb.Metadata]] = (
None if all(m is None for m in filtered_metadata) else filtered_metadata
)
self.collection.upsert(
documents=filtered_docs,
metadatas=final_metadata,
ids=filtered_ids,
)
except chromadb.errors.InvalidDimensionException as e:
Logger(verbose=True).log(
"error",
"Embedding dimension mismatch. This usually happens when mixing different embedding models. Try resetting the collection using `crewai reset-memories -a`",
"red",
)
raise ValueError(
"Embedding dimension mismatch. Make sure you're using the same embedding model "
"across all operations with this collection."
"Try resetting the collection using `crewai reset-memories -a`"
) from e
except Exception as e:
Logger(verbose=True).log("error", f"Failed to upsert documents: {e}", "red")
raise
def _create_default_embedding_function(self):
from chromadb.utils.embedding_functions.openai_embedding_function import (
OpenAIEmbeddingFunction,
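The dedup step keys each document by a SHA-256 hash of its content, so identical documents collapse to a single ID before reaching ChromaDB; a minimal sketch of that idea:

```python
import hashlib

documents = ["alpha", "beta", "alpha"]  # duplicate content
unique_docs = {hashlib.sha256(d.encode("utf-8")).hexdigest(): d for d in documents}
print(len(unique_docs))  # 2 -- duplicate IDs no longer collide within one upsert
```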

View File

@@ -43,6 +43,11 @@ LLM_CONTEXT_WINDOW_SIZES = {
"gpt-4-turbo": 128000,
"o1-preview": 128000,
"o1-mini": 128000,
# gemini
"gemini-2.0-flash": 1048576,
"gemini-1.5-pro": 2097152,
"gemini-1.5-flash": 1048576,
"gemini-1.5-flash-8b": 1048576,
# deepseek
"deepseek-chat": 128000,
# groq
@@ -61,6 +66,9 @@ LLM_CONTEXT_WINDOW_SIZES = {
"mixtral-8x7b-32768": 32768,
}
DEFAULT_CONTEXT_WINDOW_SIZE = 8192
CONTEXT_WINDOW_USAGE_RATIO = 0.75
@contextmanager
def suppress_warnings():
@@ -124,6 +132,7 @@ class LLM:
self.api_version = api_version
self.api_key = api_key
self.callbacks = callbacks
self.context_window_size = 0
self.kwargs = kwargs
litellm.drop_params = True
@@ -191,7 +200,16 @@ class LLM:
def get_context_window_size(self) -> int:
# Only using 75% of the context window size to avoid cutting the message in the middle
return int(LLM_CONTEXT_WINDOW_SIZES.get(self.model, 8192) * 0.75)
if self.context_window_size != 0:
return self.context_window_size
self.context_window_size = int(
DEFAULT_CONTEXT_WINDOW_SIZE * CONTEXT_WINDOW_USAGE_RATIO
)
for key, value in LLM_CONTEXT_WINDOW_SIZES.items():
if self.model.startswith(key):
self.context_window_size = int(value * CONTEXT_WINDOW_USAGE_RATIO)
return self.context_window_size
def set_callbacks(self, callbacks: List[Any]):
callback_types = [type(callback) for callback in callbacks]
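Worked example of the change above: the window is now resolved by prefix-matching the model name against the table and applying the 75% usage ratio; a self-contained sketch of the same logic (table excerpted):

```python
LLM_CONTEXT_WINDOW_SIZES = {"gemini-1.5-pro": 2097152, "gpt-4": 8192}  # excerpt
DEFAULT_CONTEXT_WINDOW_SIZE = 8192
CONTEXT_WINDOW_USAGE_RATIO = 0.75

def usable_window(model: str) -> int:
    size = DEFAULT_CONTEXT_WINDOW_SIZE
    for key, value in LLM_CONTEXT_WINDOW_SIZES.items():
        if model.startswith(key):
            size = value
    return int(size * CONTEXT_WINDOW_USAGE_RATIO)

print(usable_window("gemini-1.5-pro-latest"))  # 2097152 * 0.75 = 1572864
```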

View File

@@ -1,4 +1,4 @@
from typing import Optional, Dict, Any
from typing import Any, Dict, Optional
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory, UserMemory

View File

@@ -1,4 +1,4 @@
from typing import Any, Dict, Optional, List
from typing import Any, Dict, List, Optional
from crewai.memory.storage.rag_storage import RAGStorage

View File

@@ -1,4 +1,5 @@
from typing import Any, Dict, Optional
from crewai.memory.memory import Memory
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
from crewai.memory.storage.rag_storage import RAGStorage
@@ -32,7 +33,10 @@ class ShortTermMemory(Memory):
storage
if storage
else RAGStorage(
type="short_term", embedder_config=embedder_config, crew=crew, path=path
type="short_term",
embedder_config=embedder_config,
crew=crew,
path=path,
)
)
super().__init__(storage)

View File

@@ -2,6 +2,7 @@ import os
from typing import Any, Dict, List
from mem0 import MemoryClient
from crewai.memory.storage.interface import Storage

View File

@@ -1,5 +1,6 @@
from functools import wraps
def memoize(func):
cache = {}

View File

@@ -1,11 +1,11 @@
import datetime
import json
from pathlib import Path
import threading
import uuid
from concurrent.futures import Future
from copy import copy
from hashlib import md5
from pathlib import Path
from typing import Any, Dict, List, Optional, Set, Tuple, Type, Union
from opentelemetry.trace import Span

View File

@@ -6,6 +6,7 @@ import os
import platform
import warnings
from contextlib import contextmanager
from importlib.metadata import version
from typing import TYPE_CHECKING, Any, Optional
@@ -16,12 +17,10 @@ def suppress_warnings():
yield
with suppress_warnings():
import pkg_resources
from opentelemetry import trace # noqa: E402
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter # noqa: E402
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
OTLPSpanExporter, # noqa: E402
)
from opentelemetry.sdk.resources import SERVICE_NAME, Resource # noqa: E402
from opentelemetry.sdk.trace import TracerProvider # noqa: E402
from opentelemetry.sdk.trace.export import BatchSpanProcessor # noqa: E402
@@ -104,7 +103,7 @@ class Telemetry:
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
version("crewai"),
)
self._add_attribute(span, "python_version", platform.python_version())
self._add_attribute(span, "crew_key", crew.key)
@@ -306,7 +305,7 @@ class Telemetry:
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
version("crewai"),
)
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
@@ -326,7 +325,7 @@ class Telemetry:
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
version("crewai"),
)
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
@@ -346,7 +345,7 @@ class Telemetry:
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
version("crewai"),
)
if llm:
self._add_attribute(span, "llm", llm.model)
@@ -365,7 +364,7 @@ class Telemetry:
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
version("crewai"),
)
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
@@ -391,7 +390,7 @@ class Telemetry:
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
version("crewai"),
)
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
@@ -472,7 +471,7 @@ class Telemetry:
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
version("crewai"),
)
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
@@ -541,7 +540,7 @@ class Telemetry:
self._add_attribute(
crew._execution_span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
version("crewai"),
)
self._add_attribute(
crew._execution_span, "crew_output", final_string_output
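Every call site in this file swaps pkg_resources.get_distribution("crewai").version for the standard-library lookup, removing the deprecated setuptools import that had to be wrapped in suppress_warnings. The replacement is a one-liner:

from importlib.metadata import version

crewai_version = version("crewai")  # raises PackageNotFoundError if crewai is not installed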


@@ -1,9 +1,9 @@
from crewai.tools.base_tool import BaseTool
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tools.base_tool import BaseTool
from crewai.utilities import I18N
from .delegate_work_tool import DelegateWorkTool
from .ask_question_tool import AskQuestionTool
from .delegate_work_tool import DelegateWorkTool
class AgentTools:


@@ -1,7 +1,9 @@
from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
from typing import Optional
from pydantic import BaseModel, Field
from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
class AskQuestionToolSchema(BaseModel):
question: str = Field(..., description="The question to ask")


@@ -1,9 +1,10 @@
from typing import Optional, Union
from pydantic import Field
from crewai.tools.base_tool import BaseTool
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.task import Task
from crewai.tools.base_tool import BaseTool
from crewai.utilities import I18N
@@ -44,14 +45,14 @@ class BaseAgentTool(BaseTool):
if available_agent.role.casefold().replace("\n", "") == agent_name
]
except Exception as _:
return self.i18n.errors("agent_tool_unexsiting_coworker").format(
return self.i18n.errors("agent_tool_unexisting_coworker").format(
coworkers="\n".join(
[f"- {agent.role.casefold()}" for agent in self.agents]
)
)
if not agent:
return self.i18n.errors("agent_tool_unexsiting_coworker").format(
return self.i18n.errors("agent_tool_unexisting_coworker").format(
coworkers="\n".join(
[f"- {agent.role.casefold()}" for agent in self.agents]
)


@@ -1,8 +1,9 @@
from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
from typing import Optional
from pydantic import BaseModel, Field
from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
class DelegateWorkToolSchema(BaseModel):
task: str = Field(..., description="The task to delegate")


@@ -1,6 +1,5 @@
import ast
import datetime
import os
import time
from difflib import SequenceMatcher
from textwrap import dedent
@@ -15,12 +14,10 @@ from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
from crewai.tools.tool_usage_events import ToolUsageError, ToolUsageFinished
from crewai.utilities import I18N, Converter, ConverterError, Printer
agentops = None
if os.environ.get("AGENTOPS_API_KEY"):
try:
import agentops # type: ignore
except ImportError:
pass
try:
import agentops # type: ignore
except ImportError:
agentops = None
OPENAI_BIGGER_MODELS = ["gpt-4", "gpt-4o", "o1-preview", "o1-mini"]
@@ -422,9 +419,10 @@ class ToolUsage:
elif value.lower() in [
"true",
"false",
"null",
]: # Check for boolean and null values
value = value.lower()
value = value.lower().capitalize()
elif value.lower() == "null":
value = "None"
else:
# Assume the value is a string and needs quotes
value = '"' + value.replace('"', '\\"') + '"'
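Previously both booleans and "null" were lowercased together, producing strings Python's literal parser cannot evaluate; the fix capitalizes booleans to "True"/"False" and maps "null" to "None" so the later ast-based evaluation succeeds (ast is imported at the top of this file). A minimal reproduction of the new branch, with the helper name assumed since the source keeps this logic inline in ToolUsage:

import ast

def normalize_scalar(value: str) -> str:
    if value.lower() in ["true", "false"]:
        return value.lower().capitalize()  # "true" -> "True"
    elif value.lower() == "null":
        return "None"                      # JSON null -> Python None
    # Otherwise assume a plain string and quote it.
    return '"' + value.replace('"', '\\"') + '"'

assert ast.literal_eval(normalize_scalar("true")) is True
assert ast.literal_eval(normalize_scalar("null")) is None
assert ast.literal_eval(normalize_scalar("hello")) == "hello"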


@@ -1,6 +1,7 @@
from typing import Any, Dict
from pydantic import BaseModel
from datetime import datetime
from typing import Any, Dict
from pydantic import BaseModel
class ToolUsageEvent(BaseModel):


@@ -28,7 +28,7 @@
"errors": {
"force_final_answer_error": "You can't keep going, this was the best you could do.\n {formatted_answer.text}",
"force_final_answer": "Now it's time you MUST give your absolute best final answer. You'll ignore all previous instructions, stop using any tools, and just return your absolute BEST Final answer.",
"agent_tool_unexsiting_coworker": "\nError executing tool. coworker mentioned not found, it must be one of the following options:\n{coworkers}\n",
"agent_tool_unexisting_coworker": "\nError executing tool. coworker mentioned not found, it must be one of the following options:\n{coworkers}\n",
"task_repeated_usage": "I tried reusing the same input, I must stop using this action input. I'll try something else instead.\n\n",
"tool_usage_error": "I encountered an error: {error}",
"tool_arguments_error": "Error: the Action Input is not a valid key, value dictionary.",


@@ -1,15 +1,17 @@
from datetime import datetime, date
import json
from uuid import UUID
from pydantic import BaseModel
from datetime import date, datetime
from decimal import Decimal
from enum import Enum
from uuid import UUID
from pydantic import BaseModel
class CrewJSONEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, BaseModel):
return self._handle_pydantic_model(obj)
elif isinstance(obj, UUID) or isinstance(obj, Decimal):
elif isinstance(obj, UUID) or isinstance(obj, Decimal) or isinstance(obj, Enum):
return str(obj)
elif isinstance(obj, datetime) or isinstance(obj, date):
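With Enum added to the stringifying branch, dumping objects that carry enum fields no longer raises TypeError. A self-contained sketch mirroring just the changed branch:

import json
from decimal import Decimal
from enum import Enum
from uuid import UUID

class MiniEncoder(json.JSONEncoder):
    def default(self, obj):
        # Mirrors the updated branch: UUIDs, Decimals, and Enums become strings.
        if isinstance(obj, (UUID, Decimal, Enum)):
            return str(obj)
        return super().default(obj)

class Status(Enum):
    DONE = "done"

print(json.dumps({"status": Status.DONE}, cls=MiniEncoder))  # {"status": "Status.DONE"}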


@@ -1,10 +1,11 @@
import json
import regex
from typing import Any, Type
from crewai.agents.parser import OutputParserException
import regex
from pydantic import BaseModel, ValidationError
from crewai.agents.parser import OutputParserException
class CrewPydanticOutputParser:
"""Parses the text into pydantic models"""


@@ -1,6 +1,7 @@
import os
from typing import Any, Dict, cast
from chromadb import EmbeddingFunction, Documents, Embeddings
from chromadb import Documents, EmbeddingFunction, Embeddings
from chromadb.api.types import validate_embedding_function


@@ -1,13 +1,14 @@
from collections import defaultdict
from pydantic import BaseModel, Field
from rich.box import HEAVY_EDGE
from rich.console import Console
from rich.table import Table
from crewai.agent import Agent
from crewai.task import Task
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry import Telemetry
from pydantic import BaseModel, Field
from rich.box import HEAVY_EDGE
from rich.console import Console
from rich.table import Table
class TaskEvaluationPydanticOutput(BaseModel):


@@ -1,4 +1,3 @@
import os
from typing import List
from pydantic import BaseModel, Field
@@ -6,27 +5,17 @@ from pydantic import BaseModel, Field
from crewai.utilities import Converter
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
agentops = None
try:
from agentops import track_agent # type: ignore
except ImportError:
def mock_agent_ops_provider():
def track_agent(*args, **kwargs):
def track_agent(name):
def noop(f):
return f
return noop
return track_agent
agentops = None
if os.environ.get("AGENTOPS_API_KEY"):
try:
from agentops import track_agent
except ImportError:
track_agent = mock_agent_ops_provider()
else:
track_agent = mock_agent_ops_provider()
class Entity(BaseModel):
name: str = Field(description="The name of the entity.")
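Restated on its own, the new fallback drops the AGENTOPS_API_KEY gate: the import is always attempted, and when agentops is absent, track_agent degrades to a decorator factory that returns functions unchanged (the decorated function below is a hypothetical example):

def mock_agent_ops_provider():
    def track_agent(name):
        def noop(f):
            return f
        return noop
    return track_agent

try:
    from agentops import track_agent  # type: ignore
except ImportError:
    track_agent = mock_agent_ops_provider()

@track_agent(name="entity-extractor")
def extract_entities(text: str) -> str:
    return text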


@@ -1,7 +1,7 @@
from typing import Any, Callable, Generic, List, Dict, Type, TypeVar
from functools import wraps
from pydantic import BaseModel
from typing import Any, Callable, Dict, Generic, List, Type, TypeVar
from pydantic import BaseModel
T = TypeVar("T")
EVT = TypeVar("EVT", bound=BaseModel)


@@ -1,4 +1,5 @@
from typing import Any, List, Optional
from pydantic import BaseModel, Field
from crewai.agent import Agent


@@ -1,5 +1,7 @@
from pydantic import BaseModel, Field
from typing import Any, Optional
from pydantic import BaseModel, Field
from crewai.utilities import I18N


@@ -1,4 +1,4 @@
from typing import Type, get_args, get_origin, Union
from typing import Type, Union, get_args, get_origin
from pydantic import BaseModel


@@ -1,6 +1,8 @@
from pydantic import BaseModel, Field
from datetime import datetime
from typing import Dict, Any, Optional, List
from typing import Any, Dict, List, Optional
from pydantic import BaseModel, Field
from crewai.memory.storage.kickoff_task_outputs_storage import (
KickoffTaskOutputsSQLiteStorage,
)


@@ -1,5 +1,6 @@
from litellm.integrations.custom_logger import CustomLogger
from litellm.types.utils import Usage
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
@@ -11,7 +12,7 @@ class TokenCalcHandler(CustomLogger):
if self.token_cost_process is None:
return
usage : Usage = response_obj["usage"]
usage: Usage = response_obj["usage"]
self.token_cost_process.sum_successful_requests(1)
self.token_cost_process.sum_prompt_tokens(usage.prompt_tokens)
self.token_cost_process.sum_completion_tokens(usage.completion_tokens)


@@ -1595,19 +1595,15 @@ def test_agent_execute_task_with_ollama():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_with_knowledge_sources():
# Create a knowledge source with some content
content = "Brandon's favorite color is blue and he likes Mexican food."
string_source = StringKnowledgeSource(
content=content, metadata={"preference": "personal"}
)
content = "Brandon's favorite color is red and he likes Mexican food."
string_source = StringKnowledgeSource(content=content)
with patch(
"crewai.knowledge.storage.knowledge_storage.KnowledgeStorage"
) as MockKnowledge:
mock_knowledge_instance = MockKnowledge.return_value
mock_knowledge_instance.sources = [string_source]
mock_knowledge_instance.query.return_value = [
{"content": content, "metadata": {"preference": "personal"}}
]
mock_knowledge_instance.query.return_value = [{"content": content}]
agent = Agent(
role="Information Agent",
@@ -1628,4 +1624,4 @@ def test_agent_with_knowledge_sources():
result = crew.kickoff()
# Assert that the agent provides the correct information
assert "blue" in result.raw.lower()
assert "red" in result.raw.lower()


@@ -1,9 +1,10 @@
import hashlib
from typing import Any, List, Optional
from pydantic import BaseModel
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tools.base_tool import BaseTool
from pydantic import BaseModel
class TestAgent(BaseAgent):


@@ -1,10 +1,11 @@
import pytest
from crewai.agents.parser import CrewAgentParser
from crewai.agents.crew_agent_executor import (
AgentAction,
AgentFinish,
OutputParserException,
)
from crewai.agents.parser import CrewAgentParser
@pytest.fixture

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large


@@ -2,6 +2,7 @@ import unittest
from unittest.mock import MagicMock, patch
import requests
from crewai.cli.authentication.main import AuthenticationCommand
@@ -47,7 +48,9 @@ class TestAuthenticationCommand(unittest.TestCase):
@patch("crewai.cli.authentication.main.requests.post")
@patch("crewai.cli.authentication.main.validate_token")
@patch("crewai.cli.authentication.main.console.print")
def test_poll_for_token_success(self, mock_print, mock_validate_token, mock_post, mock_tool):
def test_poll_for_token_success(
self, mock_print, mock_validate_token, mock_post, mock_tool
):
mock_response = MagicMock()
mock_response.status_code = 200
mock_response.json.return_value = {
@@ -62,7 +65,9 @@ class TestAuthenticationCommand(unittest.TestCase):
self.auth_command._poll_for_token({"device_code": "123456"})
mock_validate_token.assert_called_once_with("TOKEN")
mock_print.assert_called_once_with("\n[bold green]Welcome to CrewAI Enterprise![/bold green]\n")
mock_print.assert_called_once_with(
"\n[bold green]Welcome to CrewAI Enterprise![/bold green]\n"
)
@patch("crewai.cli.authentication.main.requests.post")
@patch("crewai.cli.authentication.main.console.print")


@@ -3,9 +3,10 @@ import unittest
from datetime import datetime, timedelta
from unittest.mock import MagicMock, patch
from crewai.cli.authentication.utils import TokenManager, validate_token
from cryptography.fernet import Fernet
from crewai.cli.authentication.utils import TokenManager, validate_token
class TestValidateToken(unittest.TestCase):
@patch("crewai.cli.authentication.utils.AsymmetricSignatureVerifier")


@@ -1,10 +1,12 @@
import unittest
import json
import tempfile
import shutil
import tempfile
import unittest
from pathlib import Path
from crewai.cli.config import Settings
class TestSettings(unittest.TestCase):
def setUp(self):
self.test_dir = Path(tempfile.mkdtemp())
@@ -20,8 +22,7 @@ class TestSettings(unittest.TestCase):
def test_initialization_with_data(self):
settings = Settings(
config_path=self.config_path,
tool_repository_username="user1"
config_path=self.config_path, tool_repository_username="user1"
)
self.assertEqual(settings.tool_repository_username, "user1")
self.assertIsNone(settings.tool_repository_password)
@@ -37,22 +38,23 @@ class TestSettings(unittest.TestCase):
def test_merge_file_and_input_data(self):
self.config_path.parent.mkdir(parents=True, exist_ok=True)
with self.config_path.open("w") as f:
json.dump({
"tool_repository_username": "file_user",
"tool_repository_password": "file_pass"
}, f)
json.dump(
{
"tool_repository_username": "file_user",
"tool_repository_password": "file_pass",
},
f,
)
settings = Settings(
config_path=self.config_path,
tool_repository_username="new_user"
config_path=self.config_path, tool_repository_username="new_user"
)
self.assertEqual(settings.tool_repository_username, "new_user")
self.assertEqual(settings.tool_repository_password, "file_pass")
def test_dump_new_settings(self):
settings = Settings(
config_path=self.config_path,
tool_repository_username="user1"
config_path=self.config_path, tool_repository_username="user1"
)
settings.dump()
@@ -67,8 +69,7 @@ class TestSettings(unittest.TestCase):
json.dump({"existing_setting": "value"}, f)
settings = Settings(
config_path=self.config_path,
tool_repository_username="user1"
config_path=self.config_path, tool_repository_username="user1"
)
settings.dump()
@@ -79,10 +80,7 @@ class TestSettings(unittest.TestCase):
self.assertEqual(saved_data["tool_repository_username"], "user1")
def test_none_values(self):
settings = Settings(
config_path=self.config_path,
tool_repository_username=None
)
settings = Settings(config_path=self.config_path, tool_repository_username=None)
settings.dump()
with self.config_path.open("r") as f:
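Beyond the Black-style rewrapping, the tests above document the merge semantics of Settings: values already on disk are kept, and constructor keywords win on conflict. A condensed restatement of test_merge_file_and_input_data:

import json
import tempfile
from pathlib import Path

from crewai.cli.config import Settings

config_path = Path(tempfile.mkdtemp()) / "settings.json"
config_path.write_text(
    json.dumps(
        {
            "tool_repository_username": "file_user",
            "tool_repository_password": "file_pass",
        }
    )
)

settings = Settings(config_path=config_path, tool_repository_username="new_user")
assert settings.tool_repository_username == "new_user"  # input overrides file
assert settings.tool_repository_password == "file_pass"  # file value is preserved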


@@ -231,7 +231,7 @@ class TestDeployCommand(unittest.TestCase):
[project]
name = "test_project"
version = "0.1.0"
requires-python = ">=3.10,<=3.12"
requires-python = ">=3.10,<3.13"
dependencies = ["crewai"]
""",
)
@@ -250,7 +250,7 @@ class TestDeployCommand(unittest.TestCase):
[project]
name = "test_project"
version = "0.1.0"
requires-python = ">=3.10,<=3.12"
requires-python = ">=3.10,<3.13"
dependencies = ["crewai"]
""",
)


@@ -1,6 +1,7 @@
from crewai.cli.git import Repository
import pytest
from crewai.cli.git import Repository
@pytest.fixture()
def repository(fp):


@@ -1,6 +1,7 @@
import os
import unittest
from unittest.mock import MagicMock, patch
from crewai.cli.plus_api import PlusAPI


@@ -1,7 +1,9 @@
import pytest
import os
import shutil
import tempfile
import os
import pytest
from crewai.cli import utils


@@ -85,7 +85,7 @@ def test_install_success(mock_get, mock_subprocess_run):
env=unittest.mock.ANY
)
assert "Succesfully installed sample-tool" in output
assert "Successfully installed sample-tool" in output
@patch("crewai.cli.plus_api.PlusAPI.get_tool")


@@ -3,6 +3,7 @@
import asyncio
import pytest
from crewai.flow.flow import Flow, and_, listen, or_, router, start


@@ -1,8 +1,12 @@
"""Test Knowledge creation and querying functionality."""
from pathlib import Path
from typing import List, Union
from unittest.mock import patch
import pytest
from crewai.knowledge.source.crew_docling_source import CrewDoclingSource
from crewai.knowledge.source.csv_knowledge_source import CSVKnowledgeSource
from crewai.knowledge.source.excel_knowledge_source import ExcelKnowledgeSource
from crewai.knowledge.source.json_knowledge_source import JSONKnowledgeSource
@@ -10,8 +14,6 @@ from crewai.knowledge.source.pdf_knowledge_source import PDFKnowledgeSource
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
from crewai.knowledge.source.text_file_knowledge_source import TextFileKnowledgeSource
import pytest
@pytest.fixture(autouse=True)
def mock_vector_db():
@@ -200,7 +202,7 @@ def test_single_short_file(mock_vector_db, tmpdir):
f.write(content)
file_source = TextFileKnowledgeSource(
file_path=file_path, metadata={"preference": "personal"}
file_paths=[file_path], metadata={"preference": "personal"}
)
mock_vector_db.sources = [file_source]
mock_vector_db.query.return_value = [{"context": content, "score": 0.9}]
@@ -242,7 +244,7 @@ def test_single_2k_character_file(mock_vector_db, tmpdir):
f.write(content)
file_source = TextFileKnowledgeSource(
file_path=file_path, metadata={"preference": "personal"}
file_paths=[file_path], metadata={"preference": "personal"}
)
mock_vector_db.sources = [file_source]
mock_vector_db.query.return_value = [{"context": content, "score": 0.9}]
@@ -279,7 +281,7 @@ def test_multiple_short_files(mock_vector_db, tmpdir):
file_paths.append((file_path, item["metadata"]))
file_sources = [
TextFileKnowledgeSource(file_path=path, metadata=metadata)
TextFileKnowledgeSource(file_paths=[path], metadata=metadata)
for path, metadata in file_paths
]
mock_vector_db.sources = file_sources
@@ -352,7 +354,7 @@ def test_multiple_2k_character_files(mock_vector_db, tmpdir):
file_paths.append(file_path)
file_sources = [
TextFileKnowledgeSource(file_path=path, metadata={"preference": "personal"})
TextFileKnowledgeSource(file_paths=[path], metadata={"preference": "personal"})
for path in file_paths
]
mock_vector_db.sources = file_sources
@@ -399,7 +401,7 @@ def test_hybrid_string_and_files(mock_vector_db, tmpdir):
file_paths.append(file_path)
file_sources = [
TextFileKnowledgeSource(file_path=path, metadata={"preference": "personal"})
TextFileKnowledgeSource(file_paths=[path], metadata={"preference": "personal"})
for path in file_paths
]
@@ -424,7 +426,7 @@ def test_pdf_knowledge_source(mock_vector_db):
# Create a PDFKnowledgeSource
pdf_source = PDFKnowledgeSource(
file_path=pdf_path, metadata={"preference": "personal"}
file_paths=[pdf_path], metadata={"preference": "personal"}
)
mock_vector_db.sources = [pdf_source]
mock_vector_db.query.return_value = [
@@ -461,7 +463,7 @@ def test_csv_knowledge_source(mock_vector_db, tmpdir):
# Create a CSVKnowledgeSource
csv_source = CSVKnowledgeSource(
file_path=csv_path, metadata={"preference": "personal"}
file_paths=[csv_path], metadata={"preference": "personal"}
)
mock_vector_db.sources = [csv_source]
mock_vector_db.query.return_value = [
@@ -496,7 +498,7 @@ def test_json_knowledge_source(mock_vector_db, tmpdir):
# Create a JSONKnowledgeSource
json_source = JSONKnowledgeSource(
file_path=json_path, metadata={"preference": "personal"}
file_paths=[json_path], metadata={"preference": "personal"}
)
mock_vector_db.sources = [json_source]
mock_vector_db.query.return_value = [
@@ -529,7 +531,7 @@ def test_excel_knowledge_source(mock_vector_db, tmpdir):
# Create an ExcelKnowledgeSource
excel_source = ExcelKnowledgeSource(
file_path=excel_path, metadata={"preference": "personal"}
file_paths=[excel_path], metadata={"preference": "personal"}
)
mock_vector_db.sources = [excel_source]
mock_vector_db.query.return_value = [
@@ -543,3 +545,42 @@ def test_excel_knowledge_source(mock_vector_db, tmpdir):
# Assert that the correct information is retrieved
assert any("30" in result["context"] for result in results)
mock_vector_db.query.assert_called_once()
def test_docling_source(mock_vector_db):
docling_source = CrewDoclingSource(
file_paths=[
"https://lilianweng.github.io/posts/2024-11-28-reward-hacking/",
],
)
mock_vector_db.sources = [docling_source]
mock_vector_db.query.return_value = [
{
"context": "Reward hacking is a technique used to improve the performance of reinforcement learning agents.",
"score": 0.9,
}
]
# Perform a query
query = "What is reward hacking?"
results = mock_vector_db.query(query)
assert any("reward hacking" in result["context"].lower() for result in results)
mock_vector_db.query.assert_called_once()
def test_multiple_docling_sources():
urls: List[Union[Path, str]] = [
"https://lilianweng.github.io/posts/2024-11-28-reward-hacking/",
"https://lilianweng.github.io/posts/2024-07-07-hallucination/",
]
docling_source = CrewDoclingSource(file_paths=urls)
assert docling_source.file_paths == urls
assert docling_source.content is not None
def test_docling_source_with_local_file():
current_dir = Path(__file__).parent
pdf_path = current_dir / "crewai_quickstart.pdf"
docling_source = CrewDoclingSource(file_paths=[pdf_path])
assert docling_source.file_paths == [pdf_path]
assert docling_source.content is not None
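Taken together, these tests pin down the renamed constructor argument: every knowledge source now accepts a list of paths or URLs via file_paths. A short usage sketch built only from names that appear in the diff:

from crewai.knowledge.source.crew_docling_source import CrewDoclingSource
from crewai.knowledge.source.pdf_knowledge_source import PDFKnowledgeSource

pdf_source = PDFKnowledgeSource(
    file_paths=["crewai_quickstart.pdf"],  # now a list, replacing the singular file_path
    metadata={"preference": "personal"},
)
docling_source = CrewDoclingSource(
    file_paths=["https://lilianweng.github.io/posts/2024-11-28-reward-hacking/"],
)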


@@ -1,5 +1,7 @@
import pytest
from unittest.mock import patch
import pytest
from crewai.agent import Agent
from crewai.crew import Crew
from crewai.memory.short_term.short_term_memory import ShortTermMemory


@@ -6,12 +6,13 @@ import os
from unittest.mock import MagicMock, patch
import pytest
from pydantic import BaseModel
from pydantic_core import ValidationError
from crewai import Agent, Crew, Process, Task
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
from crewai.utilities.converter import Converter
from pydantic import BaseModel
from pydantic_core import ValidationError
def test_task_tool_reflect_agent_tools():
@@ -685,7 +686,7 @@ def test_increment_tool_errors():
with patch.object(Task, "increment_tools_errors") as increment_tools_errors:
increment_tools_errors.return_value = None
crew.kickoff()
assert len(increment_tools_errors.mock_calls) == 12
assert len(increment_tools_errors.mock_calls) > 0
def test_task_definition_based_on_dict():


@@ -6,14 +6,14 @@ from crewai.tools import BaseTool, tool
def test_creating_a_tool_using_annotation():
@tool("Name of my tool")
def my_tool(question: str) -> str:
"""Clear description for what this tool is useful for, you agent will need this information to use it."""
"""Clear description for what this tool is useful for, your agent will need this information to use it."""
return question
# Assert all the right attributes were defined
assert my_tool.name == "Name of my tool"
assert (
my_tool.description
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, you agent will need this information to use it."
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, your agent will need this information to use it."
)
assert my_tool.args_schema.schema()["properties"] == {
"question": {"title": "Question", "type": "string"}
@@ -27,7 +27,7 @@ def test_creating_a_tool_using_annotation():
assert (
converted_tool.description
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, you agent will need this information to use it."
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, your agent will need this information to use it."
)
assert converted_tool.args_schema.schema()["properties"] == {
"question": {"title": "Question", "type": "string"}
@@ -41,7 +41,7 @@ def test_creating_a_tool_using_annotation():
def test_creating_a_tool_using_baseclass():
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = "Clear description for what this tool is useful for, you agent will need this information to use it."
description: str = "Clear description for what this tool is useful for, your agent will need this information to use it."
def _run(self, question: str) -> str:
return question
@@ -52,7 +52,7 @@ def test_creating_a_tool_using_baseclass():
assert (
my_tool.description
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, you agent will need this information to use it."
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, your agent will need this information to use it."
)
assert my_tool.args_schema.schema()["properties"] == {
"question": {"title": "Question", "type": "string"}
@@ -64,7 +64,7 @@ def test_creating_a_tool_using_baseclass():
assert (
converted_tool.description
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, you agent will need this information to use it."
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, your agent will need this information to use it."
)
assert converted_tool.args_schema.schema()["properties"] == {
"question": {"title": "Question", "type": "string"}
@@ -78,7 +78,7 @@ def test_creating_a_tool_using_baseclass():
def test_setting_cache_function():
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = "Clear description for what this tool is useful for, you agent will need this information to use it."
description: str = "Clear description for what this tool is useful for, your agent will need this information to use it."
cache_function: Callable = lambda: False
def _run(self, question: str) -> str:
@@ -92,7 +92,7 @@ def test_setting_cache_function():
def test_default_cache_function_is_true():
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = "Clear description for what this tool is useful for, you agent will need this information to use it."
description: str = "Clear description for what this tool is useful for, your agent will need this information to use it."
def _run(self, question: str) -> str:
return question


@@ -6,8 +6,8 @@ import pytest
from pydantic import BaseModel, Field
from crewai import Agent, Task
from crewai.tools.tool_usage import ToolUsage
from crewai.tools import BaseTool
from crewai.tools.tool_usage import ToolUsage
class RandomNumberToolInput(BaseModel):


@@ -1,6 +1,7 @@
from unittest import mock
import pytest
from crewai.agent import Agent
from crewai.crew import Crew
from crewai.task import Task


@@ -26,7 +26,7 @@
},
"errors": {
"force_final_answer": "Lorem ipsum dolor sit amet",
"agent_tool_unexsiting_coworker": "Lorem ipsum dolor sit amet",
"agent_tool_unexisting_coworker": "Lorem ipsum dolor sit amet",
"task_repeated_usage": "Lorem ipsum dolor sit amet",
"tool_usage_error": "Lorem ipsum dolor sit amet",
"tool_arguments_error": "Lorem ipsum dolor sit amet",

uv.lock (generated, 983 changes): file diff suppressed because it is too large