Compare commits

...

34 Commits

Author SHA1 Message Date
Devin AI
2744af4825 fix: improve error handling, logging, and test coverage
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-02-09 21:33:03 +00:00
Devin AI
81f84cab58 fix: enable any llm to run test functionality
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-02-09 21:24:58 +00:00
siddharth Sambharia
409892d65f Portkey Integration with CrewAI (#1233)
* Create Portkey-Observability-and-Guardrails.md

* crewAI update with new changes

* small change

---------

Co-authored-by: siddharthsambharia-portkey <siddhath.s@portkey.ai>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-12-27 18:16:47 -03:00
devin-ai-integration[bot]
62f3df7ed5 docs: add guide for multimodal agents (#1807)
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
2024-12-27 18:16:02 -03:00
João Igor
4cf8913d31 chore: removing crewai-tools from dev-dependencies (#1760)
As mentioned in issue #1759, listing crewai-tools under dev-dependencies makes pip install it as a required dependency rather than an optional one.
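With this change, `pip install crewai` no longer pulls in crewai-tools by default; it remains available through the `tools` extra (`pip install "crewai[tools]"`), as the pyproject.toml diff further down shows.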

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-12-27 17:45:06 -03:00
João Moura
82647358b2 Adding Multimodal Abilities to Crew (#1805)
* initial fix on delegation tools

* fixing tests for delegations and coding

* Refactor prepare tool and adding initial add images logic

* supporting image tool

* fixing linter

* fix linter

* Making sure multimodal feature support i18n

* fix linter and types

* mixxing translations

* fix types and linter

* Revert "fixing linter"

This reverts commit 2eda5fdeed.

* fix linters

* test

* fix

* fix

* fix linter

* fix

* ignore

* type improvements
2024-12-27 17:03:35 -03:00
Brandon Hancock (bhancock_ai)
6cc2f510bf Feat/joao flow improvement requests (#1795)
* Add in or and and in router

* In the middle of improving plotting

* final plot changes

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-12-24 18:55:44 -03:00
Lorenze Jay
9a65abf6b8 removed some redundancies (#1796)
* removed some redundancies

* cleanup
2024-12-23 13:54:16 -05:00
Lorenze Jay
b3185ad90c Feat/docling-support (#1763)
* added tool for docling support

* docling support installation

* use file_paths instead of file_path

* fix import

* organized imports

* run_type docs

* needs to be list

* fixed logic

* logged but file_path is backwards compatible

* use file_paths instead of file_path 2

* added test for multiple sources for file_paths

* fix run-types

* enabling local files to work and type cleanup

* linted

* fix test and types

* fixed run types

* fix types

* renamed to CrewDoclingSource

* linted

* added docs

* resolve conflicts

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
2024-12-23 13:19:58 -05:00
devin-ai-integration[bot]
c887ff1f47 feat: Add interpolate_only method and improve error handling (#1791)
* Fixed output_file not respecting system path

* Fixed yaml config is not escaped properly for output requirements

* feat: Add interpolate_only method and improve error handling

- Add interpolate_only method for string interpolation while preserving JSON structure
- Add comprehensive test coverage for interpolate_only
- Add proper type annotation for logger using ClassVar
- Improve error handling and documentation for _save_file method
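
For illustration only, the call shape implied by these notes might look like the sketch below (hedged: the exact signature and behavior live in PR #1791; the method is assumed here to take a template string plus an inputs mapping):

```python
from crewai import Task

task = Task(
    description="Report on {topic}",
    expected_output="A short report",
)

# Placeholders such as {topic} get interpolated, while the surrounding
# JSON braces are preserved rather than being treated as placeholders.
text = task.interpolate_only(
    input_string='{"summary": "Findings on {topic}"}',
    inputs={"topic": "AI"},
)
```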

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: Sort imports to fix lint issues

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: Reorganize imports using ruff --fix

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: Consolidate imports and fix formatting

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: Apply ruff automatic import sorting

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: Sort imports using ruff --fix

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Frieda (Jingying) Huang <jingyingfhuang@gmail.com>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
Co-authored-by: Frieda Huang <124417784+frieda-huang@users.noreply.github.com>
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
2024-12-23 13:05:29 -05:00
devin-ai-integration[bot]
22e5d39884 feat: Add task guardrails feature (#1742)
* feat: Add task guardrails feature

Add support for custom code guardrails in tasks that validate outputs
before proceeding to the next task. Features include:

- Optional task-level guardrail function
- Pre-next-task execution timing
- Tuple return format (success, data)
- Automatic result/error routing
- Configurable retry mechanism
- Comprehensive documentation and tests

Link to Devin run: https://app.devin.ai/sessions/39f6cfd6c5a24d25a7bd70ce070ed29a

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: Add type check for guardrail result and remove unused import

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: Remove unnecessary f-string prefix

Co-Authored-By: Joe Moura <joao@crewai.com>

* feat: Add guardrail validation improvements

- Add result/error exclusivity validation in GuardrailResult
- Make return type annotations optional in Task guardrail validator
- Improve error messages for validation failures

Co-Authored-By: Joe Moura <joao@crewai.com>

* docs: Add comprehensive guardrails documentation

- Add type hints and examples
- Add error handling best practices
- Add structured error response patterns
- Document retry mechanisms
- Improve documentation organization

Co-Authored-By: Joe Moura <joao@crewai.com>

* refactor: Update guardrail functions to handle TaskOutput objects

Co-Authored-By: Joe Moura <joao@crewai.com>

* style: Fix import sorting in task guardrails files

Co-Authored-By: Joe Moura <joao@crewai.com>

* fixing docs

* Fixing guardarils implementation

* docs: Enhance guardrail validator docstring with runtime validation rationale

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-12-22 00:52:02 -03:00
PJ
9ee6824ccd Correcting a small grammatical issue that was bugging me: from _satisfy the expect criteria_ to _satisfies the expected criteria_ (#1783)
Signed-off-by: PJ Hagerty <pjhagerty@gmail.com>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-20 10:17:34 -05:00
Vini Brasil
da73865f25 Add tool.crewai.type pyproject attribute in templates (#1789) 2024-12-20 10:36:18 -03:00
Vini Brasil
627b9f1abb Remove relative import in flow main.py template (#1782) 2024-12-18 10:47:44 -03:00
alan blount
1b8001bf98 Gemini 2.0 (#1773)
* Update llms.mdx (Gemini 2.0)

- Add Gemini 2.0 flash to Gemini table.
- Add link to 2 hosting paths for Gemini in Tip.
- Change to lower case model slugs vs names, user convenience.
- Add https://artificialanalysis.ai/ as alternate leaderboard.
- Move Gemma to "other" tab.

* Update llm.py (gemini 2.0)

Add setting for Gemini 2.0 context window to llm.py

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-17 16:44:10 -05:00
Tony Kipkemboi
e59e07e4f7 Merge pull request #1777 from crewAIInc/fix/python-max-version
Fix/python max version
2024-12-17 16:09:44 -05:00
Brandon Hancock
ee239b1c06 change to <13 instead of <=12 2024-12-17 16:00:15 -05:00
Brandon Hancock
bf459bf983 include 12 but not 13 2024-12-17 15:29:11 -05:00
Karan Vaidya
94eaa6740e Fix bool and null handling (#1771) 2024-12-16 16:23:53 -05:00
Shahar Yair
6d7c1b0743 Fix: CrewJSONEncoder now accepts enums (#1752)
* bugfix: CrewJSONEncoder now accepts enums

* sort imports

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-12 15:13:10 -05:00
Brandon Hancock (bhancock_ai)
6b864ee21d drop print (#1755) 2024-12-12 15:08:37 -05:00
Brandon Hancock (bhancock_ai)
1ffa8904db apply agent ops changes and resolve merge conflicts (#1748)
* apply agent ops changes and resolve merge conflicts

* Trying to fix tests

* add back in vcr

* update tools

* remove pkg_resources which was causing issues

* Fix tests

* experimenting to see if unique content is an issue with knowledge

* experimenting to see if unique content is an issue with knowledge

* update chromadb which seems to have issues with upsert

* generate new yaml for failing test

* Investigating upsert

* Drop patch

* Update casettes

* Fix duplicate document issue

* more fixes

* add back in vcr

* new cassette for test

---------

Co-authored-by: Lorenze Jay <lorenzejaytech@gmail.com>
2024-12-12 15:04:32 -05:00
Brandon Hancock (bhancock_ai)
ad916abd76 remove pkg_resources which was causing issues (#1751) 2024-12-12 12:41:13 -05:00
Rip&Tear
9702711094 Feature/add workflow permissions (#1749)
* fix: Call ChromaDB reset before removing storage directory to fix disk I/O errors

* feat: add workflow permissions to stale.yml

* revert rag_storage.py changes

* revert rag_storage.py changes

---------

Co-authored-by: Matt B <mattb@Matts-MacBook-Pro.local>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-12 12:31:43 -05:00
André Lago
8094754239 Fix small typo in sample tool (#1747)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-12 10:11:47 -05:00
Rashmi Pawar
bc5e303d5f NVIDIA Provider : UI changes (#1746)
* docs: add nvidia as provider

* nvidia ui docs changes

* add note for updated list

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-12 10:01:53 -05:00
Anmol Deep
ec89e003c8 Added is_auto_end flag in agentops.end session in crew.py (#1320)
When using agentops, we have the option to pass the `skip_auto_end_session` parameter, which is supposed to keep the session open when the `end_session` function is called by Crew.

The way it works is that `agentops.end_session` accepts an `is_auto_end` flag, and crewai should have passed it as `True` (it is `False` by default).

I have changed the code to pass `is_auto_end=True`.
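
A sketch of the resulting call, based only on the description above (the exact call site in `crew.py` may differ):

```python
import agentops

# Mark the session end as automatic so agentops can honor a
# user-supplied skip_auto_end_session instead of force-closing.
agentops.end_session(
    end_state="Success",
    is_auto_end=True,  # previously left at its default of False
)
```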

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-11 11:34:17 -05:00
Bowen Liang
0b0f2d30ab sort imports with isort rules by ruff linter (#1730)
* sort imports

* update

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
Co-authored-by: Eduardo Chiarotti <dudumelgaco@hotmail.com>
2024-12-11 10:46:53 -05:00
Brandon Hancock (bhancock_ai)
1df61aba4c include event emitter in flows (#1740)
* include event emitter in flows

* Clean up

* Fix linter
2024-12-11 10:16:05 -05:00
Paul Cowgill
da9220fa81 Remove manager_callbacks reference (#1741) 2024-12-11 10:13:57 -05:00
Archkon
da4f356fab fix:typo error (#1738)
* Update base_agent_tools.py

typo error

* Update main.py

typo error

* Update base_file_knowledge_source.py

typo error

* Update test_main.py

typo error

* Update en.json

* Update prompts.json

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-12-10 11:18:45 -05:00
Brandon Hancock (bhancock_ai)
d932b20c6e copy googles changes. Fix tests. Improve LLM file (#1737)
* copy googles changes. Fix tests. Improve LLM file

* Fix type issue
2024-12-10 11:14:37 -05:00
Brandon Hancock (bhancock_ai)
2f9a2afd9e Update pyproject.toml and uv.lock to drop crewai-tools as a default requirement (#1711) 2024-12-09 14:17:46 -05:00
Brandon Hancock (bhancock_ai)
c1df7c410e Bugfix/restrict python version compatibility (#1736)
* drop 3.13

* revert

* Drop test cassette that was causing error

* trying to fix failing test

* adding thiago changes

* resolve final tests

* Drop skip

* drop pipeline
2024-12-09 14:07:57 -05:00
106 changed files with 6517 additions and 12096 deletions


@@ -13,4 +13,4 @@ jobs:
          pip install ruff
      - name: Run Ruff Linter
-        run: ruff check --exclude "templates","__init__.py"
+        run: ruff check


@@ -1,5 +1,10 @@
name: Mark stale issues and pull requests
+permissions:
+  contents: write
+  issues: write
+  pull-requests: write
on:
  schedule:
    - cron: '10 12 * * *'
@@ -8,9 +13,6 @@ on:
jobs:
  stale:
    runs-on: ubuntu-latest
-    permissions:
-      issues: write
-      pull-requests: write
    steps:
      - uses: actions/stale@v9
        with:


@@ -1,9 +1,7 @@
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
-  rev: v0.4.4
+  rev: v0.8.2
  hooks:
  - id: ruff
    args: ["--fix"]
-    exclude: "templates"
  - id: ruff-format
-    exclude: "templates"

.ruff.toml (new file, 9 lines)

@@ -0,0 +1,9 @@
exclude = [
"templates",
"__init__.py",
]
[lint]
select = [
"I", # isort rules
]
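
With this file in place, the plain `ruff check` in the workflow above picks up the isort ("I") rules and the `templates`/`__init__.py` exclusions repo-wide, so they no longer need repeating on the command line or in the pre-commit hooks.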


@@ -44,7 +44,7 @@ To get started with CrewAI, follow these simple steps:
### 1. Installation
-Ensure you have Python >=3.10 <=3.12 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
+Ensure you have Python >=3.10 <3.13 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, install CrewAI:


@@ -32,7 +32,6 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
| **Output Log File** _(optional)_ | `output_log_file` | Whether you want to have a file with the complete crew output and execution. You can set it using True and it will default to the folder you are currently in and it will be called logs.txt or passing a string with the full path and name of the file. |
| **Manager Agent** _(optional)_ | `manager_agent` | `manager` sets a custom agent that will be used as a manager. |
-| **Manager Callbacks** _(optional)_ | `manager_callbacks` | `manager_callbacks` takes a list of callback handlers to be executed by the manager agent when a hierarchical process is used. |
| **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |


@@ -79,6 +79,55 @@ crew = Crew(
result = crew.kickoff(inputs={"question": "What city does John live in and how old is he?"})
```
Here's another example with the `CrewDoclingSource`.
```python Code
from crewai import LLM, Agent, Crew, Process, Task
from crewai.knowledge.source.crew_docling_source import CrewDoclingSource

# Create a knowledge source
content_source = CrewDoclingSource(
    file_paths=[
        "https://lilianweng.github.io/posts/2024-11-28-reward-hacking",
        "https://lilianweng.github.io/posts/2024-07-07-hallucination",
    ],
)

# Create an LLM with a temperature of 0 to ensure deterministic outputs
llm = LLM(model="gpt-4o-mini", temperature=0)

# Create an agent with the knowledge store
agent = Agent(
    role="About papers",
    goal="You know everything about the papers.",
    backstory="""You are a master at understanding papers and their content.""",
    verbose=True,
    allow_delegation=False,
    llm=llm,
)

task = Task(
    description="Answer the following questions about the papers: {question}",
    expected_output="An answer to the question.",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
    process=Process.sequential,
    knowledge_sources=[
        content_source
    ],  # Enable knowledge by adding the sources here. You can also add more sources to the sources list.
)

result = crew.kickoff(
    inputs={
        "question": "What is the reward hacking paper about? Be sure to provide sources."
    }
)
```
## Knowledge Configuration
### Chunking Configuration


@@ -29,7 +29,7 @@ Large Language Models (LLMs) are the core intelligence behind CrewAI agents. The
## Available Models and Their Capabilities
-Here's a detailed breakdown of supported models and their capabilities:
+Here's a detailed breakdown of supported models and their capabilities; you can compare performance at [lmarena.ai](https://lmarena.ai/?leaderboard) and [artificialanalysis.ai](https://artificialanalysis.ai/):
<Tabs>
<Tab title="OpenAI">
@@ -43,13 +43,104 @@ Here's a detailed breakdown of supported models and their capabilities:
1 token ≈ 4 characters in English. For example, 8,192 tokens ≈ 32,768 characters or about 6,000 words.
</Note>
</Tab>
<Tab title="Nvidia NIM">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| nvidia/mistral-nemo-minitron-8b-8k-instruct | 8,192 tokens | State-of-the-art small language model delivering superior accuracy for chatbots, virtual assistants, and content generation. |
| nvidia/nemotron-4-mini-hindi-4b-instruct | 4,096 tokens | A bilingual Hindi-English SLM for on-device inference, tailored specifically for the Hindi language. |
| nvidia/llama-3.1-nemotron-70b-instruct | 128k tokens | Llama-3.1-Nemotron-70B-Instruct is a large language model customized by NVIDIA in order to improve the helpfulness of LLM generated responses. |
| nvidia/llama3-chatqa-1.5-8b | 128k tokens | Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines. |
| nvidia/llama3-chatqa-1.5-70b | 128k tokens | Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines. |
| nvidia/vila | 128k tokens | Multi-modal vision-language model that understands text/img/video and creates informative responses |
| nvidia/neva-22 | 4,096 tokens | Multi-modal vision-language model that understands text/images and generates informative responses |
| nvidia/nemotron-mini-4b-instruct | 8,192 tokens | General-purpose tasks |
| nvidia/usdcode-llama3-70b-instruct | 128k tokens | State-of-the-art LLM that answers OpenUSD knowledge queries and generates USD-Python code. |
| nvidia/nemotron-4-340b-instruct | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| meta/codellama-70b | 100k tokens | LLM capable of generating code from natural language and vice versa. |
| meta/llama2-70b | 4,096 tokens | Cutting-edge large language AI model capable of generating text and code in response to prompts. |
| meta/llama3-8b-instruct | 8,192 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| meta/llama3-70b-instruct | 8,192 tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| meta/llama-3.1-8b-instruct | 128k tokens | Advanced state-of-the-art model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.1-70b-instruct | 128k tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| meta/llama-3.1-405b-instruct | 128k tokens | Advanced LLM for synthetic data generation, distillation, and inference for chatbots, coding, and domain-specific tasks. |
| meta/llama-3.2-1b-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.2-3b-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.2-11b-vision-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.2-90b-vision-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| google/gemma-7b | 8,192 tokens | Cutting-edge text generation model text understanding, transformation, and code generation. |
| google/gemma-2b | 8,192 tokens | Cutting-edge text generation model text understanding, transformation, and code generation. |
| google/codegemma-7b | 8,192 tokens | Cutting-edge model built on Google's Gemma-7B specialized for code generation and code completion. |
| google/codegemma-1.1-7b | 8,192 tokens | Advanced programming model for code generation, completion, reasoning, and instruction following. |
| google/recurrentgemma-2b | 8,192 tokens | Novel recurrent architecture based language model for faster inference when generating long sequences. |
| google/gemma-2-9b-it | 8,192 tokens | Cutting-edge text generation model text understanding, transformation, and code generation. |
| google/gemma-2-27b-it | 8,192 tokens | Cutting-edge text generation model text understanding, transformation, and code generation. |
| google/gemma-2-2b-it | 8,192 tokens | Cutting-edge text generation model text understanding, transformation, and code generation. |
| google/deplot | 512 tokens | One-shot visual language understanding model that translates images of plots into tables. |
| google/paligemma | 8,192 tokens | Vision language model adept at comprehending text and visual inputs to produce informative responses. |
| mistralai/mistral-7b-instruct-v0.2 | 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| mistralai/mixtral-8x7b-instruct-v0.1 | 8,192 tokens | An MOE LLM that follows instructions, completes requests, and generates creative text. |
| mistralai/mistral-large | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| mistralai/mixtral-8x22b-instruct-v0.1 | 8,192 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| mistralai/mistral-7b-instruct-v0.3 | 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| nv-mistralai/mistral-nemo-12b-instruct | 128k tokens | Most advanced language model for reasoning, code, multilingual tasks; runs on a single GPU. |
| mistralai/mamba-codestral-7b-v0.1 | 256k tokens | Model for writing and interacting with code across a wide range of programming languages and tasks. |
| microsoft/phi-3-mini-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-mini-4k-instruct | 4,096 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-small-8k-instruct | 8,192 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-small-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-medium-4k-instruct | 4,096 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-medium-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3.5-mini-instruct | 128K tokens | Lightweight multilingual LLM powering AI applications in latency bound, memory/compute constrained environments |
| microsoft/phi-3.5-moe-instruct | 128K tokens | Advanced LLM based on Mixture of Experts architecture to deliver compute efficient content generation |
| microsoft/kosmos-2 | 1,024 tokens | Groundbreaking multimodal model designed to understand and reason about visual elements in images. |
| microsoft/phi-3-vision-128k-instruct | 128k tokens | Cutting-edge open multimodal model excelling in high-quality reasoning from images. |
| microsoft/phi-3.5-vision-instruct | 128k tokens | Cutting-edge open multimodal model excelling in high-quality reasoning from images. |
| databricks/dbrx-instruct | 12k tokens | A general-purpose LLM with state-of-the-art performance in language understanding, coding, and RAG. |
| snowflake/arctic | 1,024 tokens | Delivers high efficiency inference for enterprise applications focused on SQL generation and coding. |
| aisingapore/sea-lion-7b-instruct | 4,096 tokens | LLM to represent and serve the linguistic and cultural diversity of Southeast Asia |
| ibm/granite-8b-code-instruct | 4,096 tokens | Software programming LLM for code generation, completion, explanation, and multi-turn conversion. |
| ibm/granite-34b-code-instruct | 8,192 tokens | Software programming LLM for code generation, completion, explanation, and multi-turn conversion. |
| ibm/granite-3.0-8b-instruct | 4,096 tokens | Advanced Small Language Model supporting RAG, summarization, classification, code, and agentic AI |
| ibm/granite-3.0-3b-a800m-instruct | 4,096 tokens | Highly efficient Mixture of Experts model for RAG, summarization, entity extraction, and classification |
| mediatek/breeze-7b-instruct | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| upstage/solar-10.7b-instruct | 4,096 tokens | Excels in NLP tasks, particularly in instruction-following, reasoning, and mathematics. |
| writer/palmyra-med-70b-32k | 32k tokens | Leading LLM for accurate, contextually relevant responses in the medical domain. |
| writer/palmyra-med-70b | 32k tokens | Leading LLM for accurate, contextually relevant responses in the medical domain. |
| writer/palmyra-fin-70b-32k | 32k tokens | Specialized LLM for financial analysis, reporting, and data processing |
| 01-ai/yi-large | 32k tokens | Powerful model trained on English and Chinese for diverse tasks including chatbot and creative writing. |
| deepseek-ai/deepseek-coder-6.7b-instruct | 2k tokens | Powerful coding model offering advanced capabilities in code generation, completion, and infilling |
| rakuten/rakutenai-7b-instruct | 1,024 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| rakuten/rakutenai-7b-chat | 1,024 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| baichuan-inc/baichuan2-13b-chat | 4,096 tokens | Support Chinese and English chat, coding, math, instruction following, solving quizzes |
<Note>
NVIDIA's NIM support for models is expanding continuously! For the most up-to-date list of available models, please visit build.nvidia.com.
</Note>
</Tab>
<Tab title="Gemini">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| gemini-2.0-flash-exp | 1M tokens | Higher quality at faster speed, multimodal model, good for most tasks |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
<Tip>
Google's Gemini models are all multimodal, supporting audio, images, video and text, with support for context caching, JSON schema, function calling, etc.
These models are available via API_KEY from
[The Gemini API](https://ai.google.dev/gemini-api/docs) and also from
[Google Cloud Vertex](https://cloud.google.com/vertex-ai/generative-ai/docs/migrate/migrate-google-ai) as part of the
[Model Garden](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models).
</Tip>
</Tab>
<Tab title="Groq">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| Llama 3.1 70B/8B | 131,072 tokens | High-performance, large context tasks |
| Llama 3.2 Series | 8,192 tokens | General-purpose tasks |
| Mixtral 8x7B | 32,768 tokens | Balanced performance and context |
| Gemma Series | 8,192 tokens | Efficient, smaller-scale tasks |
<Tip>
Groq is known for its fast inference speeds, making it suitable for real-time applications.
@@ -60,7 +151,7 @@ Here's a detailed breakdown of supported models and their capabilities:
|----------|---------------|--------------|
| Deepseek Chat | 128,000 tokens | Specialized in technical discussions |
| Claude 3 | Up to 200K tokens | Strong reasoning, code understanding |
-| Gemini | Varies by model | Multimodal capabilities |
+| Gemma Series | 8,192 tokens | Efficient, smaller-scale tasks |
<Info>
Provider selection should consider factors like:
@@ -128,10 +219,10 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fits your workflow:
# llm: anthropic/claude-2.1
# llm: anthropic/claude-2.0
-# Google Models - Good for general tasks
-# llm: gemini/gemini-pro
+# Google Models - Strong reasoning, large cachable context window, multimodal
+# llm: gemini/gemini-1.5-pro-latest
+# llm: gemini/gemini-1.0-pro-latest
+# llm: gemini/gemini-1.5-flash-latest
+# llm: gemini/gemini-1.5-flash-8b-latest
# AWS Bedrock Models - Enterprise-grade
# llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
@@ -350,13 +441,18 @@ Learn how to get the most out of your LLM configuration:
    <Accordion title="Google">
        ```python Code
+        # Option 1. Gemini accessed with an API key.
+        # https://ai.google.dev/gemini-api/docs/api-key
        GEMINI_API_KEY=<your-api-key>
+
+        # Option 2. Vertex AI IAM credentials for Gemini, Anthropic, and anything in the Model Garden.
+        # https://cloud.google.com/vertex-ai/generative-ai/docs/overview
        ```

        Example usage:

        ```python Code
        llm = LLM(
-            model="gemini/gemini-pro",
+            model="gemini/gemini-1.5-pro-latest",
            temperature=0.7
        )
        ```
@@ -412,6 +508,20 @@ Learn how to get the most out of your LLM configuration:
        ```
    </Accordion>

+    <Accordion title="Nvidia NIM">
+        ```python Code
+        NVIDIA_API_KEY=<your-api-key>
+        ```
+
+        Example usage:
+
+        ```python Code
+        llm = LLM(
+            model="nvidia_nim/meta/llama3-70b-instruct",
+            temperature=0.7
+        )
+        ```
+    </Accordion>
+
    <Accordion title="Groq">
        ```python Code
        GROQ_API_KEY=<your-api-key>
@@ -502,20 +612,6 @@ Learn how to get the most out of your LLM configuration:
        ```
    </Accordion>

-    <Accordion title="Nvidia NIM">
-        ```python Code
-        NVIDIA_API_KEY=<your-api-key>
-        ```
-
-        Example usage:
-
-        ```python Code
-        llm = LLM(
-            model="nvidia_nim/meta/llama3-70b-instruct",
-            temperature=0.7
-        )
-        ```
-    </Accordion>
-
    <Accordion title="SambaNova">
        ```python Code
        SAMBANOVA_API_KEY=<your-api-key>


@@ -6,7 +6,7 @@ icon: list-check
## Overview of a Task
-In the CrewAI framework, a `Task` is a specific assignment completed by an `Agent`. 
+In the CrewAI framework, a `Task` is a specific assignment completed by an `Agent`.
Tasks provide all necessary details for execution, such as a description, the agent responsible, required tools, and more, facilitating a wide range of action complexities.
@@ -263,8 +263,148 @@ analysis_task = Task(
)
```
## Task Guardrails
Task guardrails provide a way to validate and transform task outputs before they
are passed to the next task. This feature helps ensure data quality and provides
feedback to agents when their output doesn't meet specific criteria.
### Using Task Guardrails
To add a guardrail to a task, provide a validation function through the `guardrail` parameter:
```python Code
from typing import Any, Dict, Tuple, Union

def validate_blog_content(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
    """Validate blog content meets requirements."""
    try:
        # Check word count
        word_count = len(result.split())
        if word_count > 200:
            return (False, {
                "error": "Blog content exceeds 200 words",
                "code": "WORD_COUNT_ERROR",
                "context": {"word_count": word_count}
            })

        # Additional validation logic here
        return (True, result.strip())
    except Exception:
        return (False, {
            "error": "Unexpected error during validation",
            "code": "SYSTEM_ERROR"
        })

blog_task = Task(
    description="Write a blog post about AI",
    expected_output="A blog post under 200 words",
    agent=blog_agent,
    guardrail=validate_blog_content  # Add the guardrail function
)
```
### Guardrail Function Requirements
1. **Function Signature**:
- Must accept exactly one parameter (the task output)
- Should return a tuple of `(bool, Any)`
- Type hints are recommended but optional
2. **Return Values**:
- Success: Return `(True, validated_result)`
- Failure: Return `(False, error_details)`
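
For reference, a minimal function that satisfies these requirements might look like this (the function name is illustrative, not part of the CrewAI API):

```python Code
from typing import Any, Tuple

def ensure_non_empty(result: str) -> Tuple[bool, Any]:
    """Smallest useful guardrail: accept any non-empty output."""
    if result.strip():
        return (True, result.strip())  # success: pass the cleaned result on
    return (False, "Output was empty")  # failure: error goes back to the agent
```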
### Error Handling Best Practices
1. **Structured Error Responses**:
```python Code
def validate_with_context(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
    try:
        # Main validation logic
        validated_data = perform_validation(result)
        return (True, validated_data)
    except ValidationError as e:
        return (False, {
            "error": str(e),
            "code": "VALIDATION_ERROR",
            "context": {"input": result}
        })
    except Exception:
        return (False, {
            "error": "Unexpected error",
            "code": "SYSTEM_ERROR"
        })
```
2. **Error Categories**:
- Use specific error codes
- Include relevant context
- Provide actionable feedback
3. **Validation Chain**:
```python Code
from typing import Any, Dict, List, Tuple, Union

def complex_validation(result: str) -> Tuple[bool, Union[str, Dict[str, Any]]]:
    """Chain multiple validation steps."""
    # Step 1: Basic validation
    if not result:
        return (False, {"error": "Empty result", "code": "EMPTY_INPUT"})

    # Step 2: Content validation
    try:
        validated = validate_content(result)
        if not validated:
            return (False, {"error": "Invalid content", "code": "CONTENT_ERROR"})

        # Step 3: Format validation
        formatted = format_output(validated)
        return (True, formatted)
    except Exception as e:
        return (False, {
            "error": str(e),
            "code": "VALIDATION_ERROR",
            "context": {"step": "content_validation"}
        })
```
### Handling Guardrail Results

When a guardrail returns `(False, error)`:

1. The error is sent back to the agent
2. The agent attempts to fix the issue
3. The process repeats until:
   - The guardrail returns `(True, result)`
   - Maximum retries are reached
Example with retry handling:
```python Code
import json
from typing import Any, Dict, Tuple, Union

def validate_json_output(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
    """Validate and parse JSON output."""
    try:
        # Try to parse as JSON
        data = json.loads(result)
        return (True, data)
    except json.JSONDecodeError as e:
        return (False, {
            "error": "Invalid JSON format",
            "code": "JSON_ERROR",
            "context": {"line": e.lineno, "column": e.colno}
        })

task = Task(
    description="Generate a JSON report",
    expected_output="A valid JSON object",
    agent=analyst,
    guardrail=validate_json_output,
    max_retries=3  # Limit retry attempts
)
```
## Getting Structured Consistent Outputs from Tasks
When you need to ensure that a task outputs a structured and consistent format, you can use the `output_pydantic` or `output_json` properties on a task. These properties allow you to define the expected output structure, making it easier to parse and utilize the results in your application.
<Note>
It's also important to note that the output of the final task of a crew becomes the final output of the actual crew itself.
@@ -608,6 +748,114 @@ While creating and executing tasks, certain validation mechanisms are in place t
These validations help in maintaining the consistency and reliability of task executions within the crewAI framework.
## Task Guardrails
Task guardrails provide a powerful way to validate, transform, or filter task outputs before they are passed to the next task. Guardrails are optional functions that execute before the next task starts, allowing you to ensure that task outputs meet specific requirements or formats.
### Basic Usage
```python Code
import json
from typing import Tuple, Union

from crewai import Task

def validate_json_output(result: str) -> Tuple[bool, Union[dict, str]]:
    """Validate that the output is valid JSON."""
    try:
        json_data = json.loads(result)
        return (True, json_data)
    except json.JSONDecodeError:
        return (False, "Output must be valid JSON")

task = Task(
    description="Generate JSON data",
    expected_output="Valid JSON object",
    guardrail=validate_json_output
)
```
### How Guardrails Work
1. **Optional Attribute**: Guardrails are an optional attribute at the task level, allowing you to add validation only where needed.
2. **Execution Timing**: The guardrail function is executed before the next task starts, ensuring valid data flow between tasks.
3. **Return Format**: Guardrails must return a tuple of `(success, data)`:
- If `success` is `True`, `data` is the validated/transformed result
- If `success` is `False`, `data` is the error message
4. **Result Routing**:
- On success (`True`), the result is automatically passed to the next task
- On failure (`False`), the error is sent back to the agent to generate a new answer
### Common Use Cases
#### Data Format Validation
```python Code
import re
from typing import Tuple

def validate_email_format(result: str) -> Tuple[bool, str]:
    """Ensure the output contains a valid email address."""
    email_pattern = r'^[\w\.-]+@[\w\.-]+\.\w+$'
    if re.match(email_pattern, result.strip()):
        return (True, result.strip())
    return (False, "Output must be a valid email address")
```
#### Content Filtering
```python Code
from typing import Tuple

def filter_sensitive_info(result: str) -> Tuple[bool, str]:
    """Remove or validate sensitive information."""
    sensitive_patterns = ['SSN:', 'password:', 'secret:']
    for pattern in sensitive_patterns:
        if pattern.lower() in result.lower():
            return (False, f"Output contains sensitive information ({pattern})")
    return (True, result)
```
#### Data Transformation
```python Code
import re
from typing import Tuple

def normalize_phone_number(result: str) -> Tuple[bool, str]:
    """Ensure phone numbers are in a consistent format."""
    digits = re.sub(r'\D', '', result)
    if len(digits) == 10:
        formatted = f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"
        return (True, formatted)
    return (False, "Output must be a 10-digit phone number")
```
### Advanced Features
#### Chaining Multiple Validations
```python Code
def chain_validations(*validators):
    """Chain multiple validators together."""
    def combined_validator(result):
        for validator in validators:
            success, data = validator(result)
            if not success:
                return (False, data)
            result = data
        return (True, result)
    return combined_validator

# Usage
task = Task(
    description="Get user contact info",
    expected_output="Email and phone",
    guardrail=chain_validations(
        validate_email_format,
        filter_sensitive_info
    )
)
```
#### Custom Retry Logic
```python Code
task = Task(
    description="Generate data",
    expected_output="Valid data",
    guardrail=validate_data,
    max_retries=5  # Override default retry limit
)
```
## Creating Directories when Saving Files
You can now specify if a task should create directories when saving its output to a file. This is particularly useful for organizing outputs and ensuring that file paths are correctly structured.
@@ -629,7 +877,7 @@ save_output_task = Task(
## Conclusion
-Tasks are the driving force behind the actions of agents in CrewAI. 
-By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit. 
-Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential, 
+Tasks are the driving force behind the actions of agents in CrewAI.
+By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit.
+Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential,
ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.


@@ -0,0 +1,211 @@
# Portkey Integration with CrewAI
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-CrewAI.png" alt="Portkey CrewAI Header Image" width="70%" />
[Portkey](https://portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) is a 2-line upgrade to make your CrewAI agents reliable, cost-efficient, and fast.
Portkey adds 4 core production capabilities to any CrewAI agent:
1. Routing to **200+ LLMs**
2. Making each LLM call more robust
3. Full-stack tracing & cost, performance analytics
4. Real-time guardrails to enforce behavior
## Getting Started
1. **Install Required Packages:**
```bash
pip install -qU crewai portkey-ai
```
2. **Configure the LLM Client:**
To build CrewAI Agents with Portkey, you'll need two keys:
- **Portkey API Key**: Sign up on the [Portkey app](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) and copy your API key
- **Virtual Key**: Virtual Keys securely manage your LLM API keys in one place. Store your LLM provider API keys securely in Portkey's vault
```python
from crewai import LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

gpt_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",  # We are using Virtual key
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY",  # Enter your Virtual key from Portkey
    )
)
```
3. **Create and Run Your First Agent:**
```python
from crewai import Agent, Task, Crew

# Define your agents with roles and goals
coder = Agent(
    role='Software developer',
    goal='Write clear, concise code on demand',
    backstory='An expert coder with a keen eye for software trends.',
    llm=gpt_llm
)

# Create tasks for your agents
task1 = Task(
    description="Define the HTML for making a simple website with heading- Hello World! Portkey is working!",
    expected_output="A clear and concise HTML code",
    agent=coder
)

# Instantiate your crew
crew = Crew(
    agents=[coder],
    tasks=[task1],
)

result = crew.kickoff()
print(result)
```
## Key Features
| Feature | Description |
|---------|-------------|
| 🌐 Multi-LLM Support | Access OpenAI, Anthropic, Gemini, Azure, and 250+ providers through a unified interface |
| 🛡️ Production Reliability | Implement retries, timeouts, load balancing, and fallbacks |
| 📊 Advanced Observability | Track 40+ metrics including costs, tokens, latency, and custom metadata |
| 🔍 Comprehensive Logging | Debug with detailed execution traces and function call logs |
| 🚧 Security Controls | Set budget limits and implement role-based access control |
| 🔄 Performance Analytics | Capture and analyze feedback for continuous improvement |
| 💾 Intelligent Caching | Reduce costs and latency with semantic or simple caching |
## Production Features with Portkey Configs
All features mentioned below are configured through Portkey's Config system, which allows you to define routing strategies using simple JSON objects in your LLM API calls. You can create and manage Configs directly in your code or through the Portkey Dashboard. Each Config has a unique ID for easy reference.
<Frame>
<img src="https://raw.githubusercontent.com/Portkey-AI/docs-core/refs/heads/main/images/libraries/libraries-3.avif"/>
</Frame>
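
For instance, a Config that falls back from one provider to another could look like the following sketch (field names follow Portkey's documented config schema; the Virtual Keys are placeholders):

```python
# Hypothetical fallback routing: try the first target, and fall back
# to the second target if the call fails.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "YOUR_OPENAI_VIRTUAL_KEY"},
        {"virtual_key": "YOUR_ANTHROPIC_VIRTUAL_KEY"},
    ],
}
```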
### 1. Use 250+ LLMs
Access various LLMs like Anthropic, Gemini, Mistral, Azure OpenAI, and more with minimal code changes. Switch between providers or use them together seamlessly. [Learn more about Universal API](https://portkey.ai/docs/product/ai-gateway/universal-api)
Easily switch between different LLM providers:
```python
# Anthropic Configuration
anthropic_llm = LLM(
    model="claude-3-5-sonnet-latest",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY",  # You don't need provider when using Virtual keys
        trace_id="anthropic_agent"
    )
)

# Azure OpenAI Configuration
azure_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_AZURE_VIRTUAL_KEY",  # You don't need provider when using Virtual keys
        trace_id="azure_agent"
    )
)
```
### 2. Caching
Improve response times and reduce costs with two powerful caching modes:
- **Simple Cache**: Perfect for exact matches
- **Semantic Cache**: Matches responses for requests that are semantically similar
[Learn more about Caching](https://portkey.ai/docs/product/ai-gateway/cache-simple-and-semantic)
```py
config = {
    "cache": {
        "mode": "semantic",  # or "simple" for exact matching
    }
}
```
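
To attach a config like this to the CrewAI LLM client from the Getting Started section, it can be passed through `createHeaders` (a sketch; assumes the `config` parameter accepts an inline config object or a saved Config ID):

```python
cached_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY",
        config=config,  # the caching config defined above
    )
)
```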
### 3. Production Reliability
Portkey provides comprehensive reliability features:
- **Automatic Retries**: Handle temporary failures gracefully
- **Request Timeouts**: Prevent hanging operations
- **Conditional Routing**: Route requests based on specific conditions
- **Fallbacks**: Set up automatic provider failovers
- **Load Balancing**: Distribute requests efficiently
[Learn more about Reliability Features](https://portkey.ai/docs/product/ai-gateway/)
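
As a sketch, several of these behaviors are enabled with a few keys in the same Config object (key names follow Portkey's config schema; the values are illustrative):

```python
reliability_config = {
    "retry": {"attempts": 3},  # automatic retries on transient failures
    "request_timeout": 10000,  # per-request timeout in milliseconds
}
```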
### 4. Metrics
Agent runs are complex. Portkey automatically logs **40+ comprehensive metrics** for your AI agents, including cost, tokens used, latency, etc. Whether you need a broad overview or granular insights into your agent runs, Portkey's customizable filters provide the metrics you need.
- Cost per agent interaction
- Response times and latency
- Token usage and efficiency
- Success/failure rates
- Cache hit rates
<img src="https://github.com/siddharthsambharia-portkey/Portkey-Product-Images/blob/main/Portkey-Dashboard.png?raw=true" width="70%" alt="Portkey Dashboard" />
### 5. Detailed Logging
Logs are essential for understanding agent behavior, diagnosing issues, and improving performance. They provide a detailed record of agent activities and tool use, which is crucial for debugging and optimizing processes.
Access a dedicated section to view records of agent executions, including parameters, outcomes, function calls, and errors. Filter logs based on multiple parameters such as trace ID, model, tokens used, and metadata.
<details>
<summary><b>Traces</b></summary>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Traces.png" alt="Portkey Traces" width="70%" />
</details>
<details>
<summary><b>Logs</b></summary>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Logs.png" alt="Portkey Logs" width="70%" />
</details>
### 6. Enterprise Security Features
- Set budget limits and rate limits per Virtual Key (disposable API keys)
- Implement role-based access control
- Track system changes with audit logs
- Configure data retention policies
For detailed information on creating and managing Configs, visit the [Portkey documentation](https://docs.portkey.ai/product/ai-gateway/configs).
## Resources
- [📘 Portkey Documentation](https://docs.portkey.ai)
- [📊 Portkey Dashboard](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai)
- [🐦 Twitter](https://twitter.com/portkeyai)
- [💬 Discord Community](https://discord.gg/DD7vgKK299)


@@ -0,0 +1,138 @@
---
title: Using Multimodal Agents
description: Learn how to enable and use multimodal capabilities in your agents for processing images and other non-text content within the CrewAI framework.
icon: image
---
# Using Multimodal Agents
CrewAI supports multimodal agents that can process both text and non-text content like images. This guide will show you how to enable and use multimodal capabilities in your agents.
## Enabling Multimodal Capabilities
To create a multimodal agent, simply set the `multimodal` parameter to `True` when initializing your agent:
```python
from crewai import Agent

agent = Agent(
    role="Image Analyst",
    goal="Analyze and extract insights from images",
    backstory="An expert in visual content interpretation with years of experience in image analysis",
    multimodal=True  # This enables multimodal capabilities
)
```
When you set `multimodal=True`, the agent is automatically configured with the necessary tools for handling non-text content, including the `AddImageTool`.
## Working with Images
The multimodal agent comes pre-configured with the `AddImageTool`, which allows it to process images. You don't need to manually add this tool - it's automatically included when you enable multimodal capabilities.
Here's a complete example showing how to use a multimodal agent to analyze an image:
```python
from crewai import Agent, Task, Crew

# Create a multimodal agent
image_analyst = Agent(
    role="Product Analyst",
    goal="Analyze product images and provide detailed descriptions",
    backstory="Expert in visual product analysis with deep knowledge of design and features",
    multimodal=True
)

# Create a task for image analysis
task = Task(
    description="Analyze the product image at https://example.com/product.jpg and provide a detailed description",
    agent=image_analyst
)

# Create and run the crew
crew = Crew(
    agents=[image_analyst],
    tasks=[task]
)

result = crew.kickoff()
```
### Advanced Usage with Context
You can provide additional context or specific questions about the image when creating tasks for multimodal agents. The task description can include specific aspects you want the agent to focus on:
```python
from crewai import Agent, Task, Crew

# Create a multimodal agent for detailed analysis
expert_analyst = Agent(
    role="Visual Quality Inspector",
    goal="Perform detailed quality analysis of product images",
    backstory="Senior quality control expert with expertise in visual inspection",
    multimodal=True  # AddImageTool is automatically included
)

# Create a task with specific analysis requirements
inspection_task = Task(
    description="""
    Analyze the product image at https://example.com/product.jpg with focus on:
    1. Quality of materials
    2. Manufacturing defects
    3. Compliance with standards
    Provide a detailed report highlighting any issues found.
    """,
    agent=expert_analyst
)

# Create and run the crew
crew = Crew(
    agents=[expert_analyst],
    tasks=[inspection_task]
)

result = crew.kickoff()
```
### Tool Details
When working with multimodal agents, the `AddImageTool` is automatically configured with the following schema:
```python
class AddImageToolSchema:
    image_url: str  # Required: The URL or path of the image to process
    action: Optional[str] = None  # Optional: Additional context or specific questions about the image
```
The multimodal agent will automatically handle the image processing through its built-in tools, allowing it to:
- Access images via URLs or local file paths
- Process image content with optional context or specific questions
- Provide analysis and insights based on the visual information and task requirements
## Best Practices
When working with multimodal agents, keep these best practices in mind:
1. **Image Access**
- Ensure your images are accessible via URLs that the agent can reach
- For local images, consider hosting them temporarily or using absolute file paths
- Verify that image URLs are valid and accessible before running tasks
2. **Task Description**
- Be specific about what aspects of the image you want the agent to analyze
- Include clear questions or requirements in the task description
- Consider using the optional `action` parameter for focused analysis
3. **Resource Management**
- Image processing may require more computational resources than text-only tasks
- Some language models may require base64 encoding for image data
- Consider batch processing for multiple images to optimize performance
4. **Environment Setup**
- Verify that your environment has the necessary dependencies for image processing
- Ensure your language model supports multimodal capabilities
- Test with small images first to validate your setup
5. **Error Handling**
- Implement proper error handling for image loading failures
- Have fallback strategies for when image processing fails
- Monitor and log image processing operations for debugging
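
As a generic illustration of point 5 above (plain Python, not a CrewAI-specific API), a kickoff can be wrapped so that image-processing failures are logged instead of crashing the run:

```python
import logging

logger = logging.getLogger(__name__)

try:
    result = crew.kickoff()
except Exception as exc:
    # Log the failure and fall back to a text-only pipeline if needed
    logger.warning("Multimodal run failed: %s", exc)
    result = None
```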


@@ -7,7 +7,7 @@ icon: wrench
<Note>
    **Python Version Requirements**

-    CrewAI requires `Python >=3.10 and <=3.12`. Here's how to check your version:
+    CrewAI requires `Python >=3.10 and <3.13`. Here's how to check your version:
    ```bash
    python3 --version
    ```


@@ -3,7 +3,7 @@ name = "crewai"
version = "0.86.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
-requires-python = ">=3.10,<=3.12"
+requires-python = ">=3.10,<3.13"
authors = [
    { name = "Joao Moura", email = "joao@crewai.com" }
]
@@ -15,7 +15,6 @@ dependencies = [
    "opentelemetry-exporter-otlp-proto-http>=1.22.0",
    "instructor>=1.3.3",
    "regex>=2024.9.11",
-    "crewai-tools>=0.17.0",
    "click>=8.1.7",
    "python-dotenv>=1.0.0",
    "appdirs>=1.4.4",
@@ -27,9 +26,10 @@ dependencies = [
    "uv>=0.4.25",
    "tomli-w>=1.1.0",
    "tomli>=2.0.2",
-    "chromadb>=0.5.18",
+    "chromadb>=0.5.23",
    "pdfplumber>=0.11.4",
    "openpyxl>=3.1.5",
+    "blinker>=1.9.0",
]

[project.urls]
@@ -38,7 +38,7 @@ Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"

[project.optional-dependencies]
-tools = ["crewai-tools>=0.14.0"]
+tools = ["crewai-tools>=0.17.0"]
agentops = ["agentops>=0.3.0"]
fastembed = ["fastembed>=0.4.1"]
pdfplumber = [
@@ -51,10 +51,13 @@ openpyxl = [
    "openpyxl>=3.1.5",
]
mem0 = ["mem0ai>=0.1.29"]
+docling = [
+    "docling>=2.12.0",
+]

[tool.uv]
dev-dependencies = [
-    "ruff>=0.4.10",
+    "ruff>=0.8.2",
    "mypy>=1.10.0",
    "pre-commit>=3.6.0",
    "mkdocs>=1.4.3",
@@ -64,7 +67,6 @@ dev-dependencies = [
    "mkdocs-material-extensions>=1.3.1",
    "pillow>=10.2.0",
    "cairosvg>=2.7.1",
-    "crewai-tools>=0.14.0",
    "pytest>=8.0.0",
    "pytest-vcr>=1.0.2",
    "python-dotenv>=1.0.0",

@@ -17,33 +17,26 @@ from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.task import Task
from crewai.tools import BaseTool
from crewai.tools.agent_tools.agent_tools import AgentTools
+from crewai.tools.base_tool import Tool
from crewai.utilities import Converter, Prompts
from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE
from crewai.utilities.converter import generate_model_description
from crewai.utilities.token_counter_callback import TokenCalcHandler
from crewai.utilities.training_handler import CrewTrainingHandler

+agentops = None
-def mock_agent_ops_provider():
-    def track_agent(*args, **kwargs):
+try:
+    import agentops  # type: ignore # Name "agentops" is already defined
+    from agentops import track_agent  # type: ignore
+except ImportError:
+    def track_agent():
        def noop(f):
            return f
        return noop
-    return track_agent
-
-agentops = None
-
-if os.environ.get("AGENTOPS_API_KEY"):
-    try:
-        from agentops import track_agent
-    except ImportError:
-        track_agent = mock_agent_ops_provider()
-else:
-    track_agent = mock_agent_ops_provider()
@track_agent()
class Agent(BaseAgent):
@@ -122,6 +115,10 @@ class Agent(BaseAgent):
        default=2,
        description="Maximum number of retries for an agent to execute a task when an error occurs.",
    )
+    multimodal: bool = Field(
+        default=False,
+        description="Whether the agent is multimodal.",
+    )
    code_execution_mode: Literal["safe", "unsafe"] = Field(
        default="safe",
        description="Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution).",
@@ -414,6 +411,10 @@ class Agent(BaseAgent):
            tools = agent_tools.tools()
        return tools

+    def get_multimodal_tools(self) -> List[Tool]:
+        from crewai.tools.agent_tools.add_image_tool import AddImageTool
+
+        return [AddImageTool()]

    def get_code_execution_tools(self):
        try:
            from crewai_tools import CodeInterpreterTool

View File
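The agent hunk above adds a `multimodal` flag and a `get_multimodal_tools()` helper that returns `AddImageTool`. A hedged sketch of how the flag is meant to be used (the role/goal/backstory strings are illustrative, not taken from the diff):

```python
# A hedged sketch of the new multimodal flag; field values are illustrative.
from crewai import Agent

analyst = Agent(
    role="Image analyst",
    goal="Describe the contents of product photos",
    backstory="An agent that can reason over attached images",
    multimodal=True,  # new field; defaults to False
)

# With multimodal=True, Crew._prepare_tools() (see the crew.py hunks below)
# merges the AddImageTool from agent.get_multimodal_tools() into the task's
# tool list at kickoff.
```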

@@ -143,10 +143,20 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
tool_result = self._execute_tool_and_check_finality(
formatted_answer
)
if self.step_callback:
self.step_callback(tool_result)
formatted_answer.text += f"\nObservation: {tool_result.result}"
# Directly append the result to the messages if the
# tool is "Add image to content" in case of multimodal
# agents
if formatted_answer.tool == self._i18n.tools("add_image")["name"]:
self.messages.append(tool_result.result)
continue
else:
if self.step_callback:
self.step_callback(tool_result)
formatted_answer.text += f"\nObservation: {tool_result.result}"
formatted_answer.result = tool_result.result
if tool_result.result_as_answer:
return AgentFinish(
@@ -413,7 +423,6 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
"""
while self.ask_for_human_input:
human_feedback = self._ask_human_input(formatted_answer.output)
print("Human feedback: ", human_feedback)
if self.crew and self.crew._train:
self._handle_crew_training_output(formatted_answer, human_feedback)

View File

@@ -1,5 +1,6 @@
import re
from typing import Any, Union
from json_repair import repair_json
from crewai.utilities import I18N

View File

@@ -5,9 +5,10 @@ from typing import Any, Dict
import requests
from rich.console import Console
from crewai.cli.tools.main import ToolCommand
from .constants import AUTH0_AUDIENCE, AUTH0_CLIENT_ID, AUTH0_DOMAIN
from .utils import TokenManager, validate_token
from crewai.cli.tools.main import ToolCommand
console = Console()
@@ -79,7 +80,9 @@ class AuthenticationCommand:
style="yellow",
)
console.print("\n[bold green]Welcome to CrewAI Enterprise![/bold green]\n")
console.print(
"\n[bold green]Welcome to CrewAI Enterprise![/bold green]\n"
)
return
if token_data["error"] not in ("authorization_pending", "slow_down"):

View File

@@ -1,10 +1,9 @@
from .utils import TokenManager
def get_auth_token() -> str:
"""Get the authentication token."""
access_token = TokenManager().get_token()
if not access_token:
raise Exception()
return access_token

View File

@@ -1,7 +1,10 @@
from importlib.metadata import version as get_version
from typing import Optional
from typing import Union
from crewai.llm import LLM
import click
import pkg_resources
from crewai.cli.add_crew_to_flow import add_crew_to_flow
from crewai.cli.create_crew import create_crew
@@ -25,7 +28,7 @@ from .update_crew import update_crew
@click.group()
@click.version_option(pkg_resources.get_distribution("crewai").version)
@click.version_option(get_version("crewai"))
def crewai():
"""Top-level command group for crewai."""
@@ -52,16 +55,16 @@ def create(type, name, provider, skip_provider=False):
def version(tools):
"""Show the installed version of crewai."""
try:
crewai_version = pkg_resources.get_distribution("crewai").version
crewai_version = get_version("crewai")
except Exception:
crewai_version = "unknown version"
click.echo(f"crewai version: {crewai_version}")
if tools:
try:
tools_version = pkg_resources.get_distribution("crewai-tools").version
tools_version = get_version("crewai-tools")
click.echo(f"crewai tools version: {tools_version}")
except pkg_resources.DistributionNotFound:
except Exception:
click.echo("crewai tools not installed")
@@ -180,8 +183,15 @@ def reset_memories(
default="gpt-4o-mini",
help="LLM Model to run the tests on the Crew. For now only accepting only OpenAI models.",
)
def test(n_iterations: int, model: str):
"""Test the crew and evaluate the results."""
def test(n_iterations: int, model: Union[str, LLM]):
"""Test the crew and evaluate the results using either a model name or LLM instance.
Args:
n_iterations: The number of iterations to run the test.
model: Either a model name string or an LLM instance to use for evaluating
the performance of the agents. If a string is provided, it will be used
to create an LLM instance.
"""
click.echo(f"Testing the crew for {n_iterations} iterations with model {model}")
evaluate_crew(n_iterations, model)

View File
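The CLI hunk above swaps the deprecated `pkg_resources` lookups for `importlib.metadata` and widens `test` so the `model` argument may be either a model-name string or an `LLM` instance. A small sketch of the version-lookup pattern now in use (the fallback message is illustrative):

```python
# A minimal sketch of the importlib.metadata pattern adopted above.
from importlib.metadata import PackageNotFoundError, version as get_version

for package in ("crewai", "crewai-tools"):
    try:
        print(f"{package} version: {get_version(package)}")
    except PackageNotFoundError:
        # Mirrors the CLI's "not installed" branch for optional packages.
        print(f"{package} not installed")
```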

@@ -1,8 +1,9 @@
import requests
from requests.exceptions import JSONDecodeError
from rich.console import Console
from crewai.cli.plus_api import PlusAPI
from crewai.cli.authentication.token import get_auth_token
from crewai.cli.plus_api import PlusAPI
from crewai.telemetry.telemetry import Telemetry
console = Console()

View File

@@ -1,13 +1,19 @@
import json
from pathlib import Path
from pydantic import BaseModel, Field
from typing import Optional
from pydantic import BaseModel, Field
DEFAULT_CONFIG_PATH = Path.home() / ".config" / "crewai" / "settings.json"
class Settings(BaseModel):
tool_repository_username: Optional[str] = Field(None, description="Username for interacting with the Tool Repository")
tool_repository_password: Optional[str] = Field(None, description="Password for interacting with the Tool Repository")
tool_repository_username: Optional[str] = Field(
None, description="Username for interacting with the Tool Repository"
)
tool_repository_password: Optional[str] = Field(
None, description="Password for interacting with the Tool Repository"
)
config_path: Path = Field(default=DEFAULT_CONFIG_PATH, exclude=True)
def __init__(self, config_path: Path = DEFAULT_CONFIG_PATH, **data):

View File

@@ -1,9 +1,11 @@
from typing import Optional
import requests
from os import getenv
from crewai.cli.version import get_crewai_version
from typing import Optional
from urllib.parse import urljoin
import requests
from crewai.cli.version import get_crewai_version
class PlusAPI:
"""

View File

@@ -1,11 +1,12 @@
import subprocess
import click
from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage
def reset_memories_command(

View File

@@ -4,7 +4,7 @@ Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.co
## Installation
Ensure you have Python >=3.10 <=3.12 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install uv:

View File

@@ -3,7 +3,7 @@ name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.12"
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.86.0,<1.0.0"
]
@@ -18,3 +18,6 @@ test = "{{folder_name}}.main:test"
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.crewai]
type = "crew"

View File

@@ -10,7 +10,7 @@ class MyCustomToolInput(BaseModel):
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = (
"Clear description for what this tool is useful for, you agent will need this information to use it."
"Clear description for what this tool is useful for, your agent will need this information to use it."
)
args_schema: Type[BaseModel] = MyCustomToolInput

View File

@@ -4,7 +4,7 @@ Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.co
## Installation
Ensure you have Python >=3.10 <=3.12 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install uv:

View File

@@ -5,7 +5,7 @@ from pydantic import BaseModel
from crewai.flow.flow import Flow, listen, start
from .crews.poem_crew.poem_crew import PoemCrew
from {{folder_name}}.crews.poem_crew.poem_crew import PoemCrew
class PoemState(BaseModel):

View File

@@ -3,7 +3,7 @@ name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.12"
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.86.0,<1.0.0",
]
@@ -15,3 +15,6 @@ plot = "{{folder_name}}.main:plot"
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.crewai]
type = "flow"

View File

@@ -13,7 +13,7 @@ class MyCustomToolInput(BaseModel):
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = (
"Clear description for what this tool is useful for, you agent will need this information to use it."
"Clear description for what this tool is useful for, your agent will need this information to use it."
)
args_schema: Type[BaseModel] = MyCustomToolInput

View File

@@ -1,17 +0,0 @@
[tool.poetry]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.12"
crewai = { extras = ["tools"], version = ">=0.86.0,<1.0.0" }
asyncio = "*"
[tool.poetry.scripts]
{{folder_name}} = "{{folder_name}}.main:main"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

View File

@@ -5,7 +5,7 @@ custom tools to power up your crews.
## Installing
Ensure you have Python >=3.10 <=3.12 installed on your system. This project
Ensure you have Python >=3.10 <3.13 installed on your system. This project
uses [UV](https://docs.astral.sh/uv/) for dependency management and package
handling, offering a seamless setup and execution experience.

View File

@@ -3,8 +3,10 @@ name = "{{folder_name}}"
version = "0.1.0"
description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<=3.12"
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.86.0"
]
[tool.crewai]
type = "tool"

View File

@@ -117,7 +117,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
published_handle = publish_response.json()["handle"]
console.print(
f"Succesfully published {published_handle} ({project_version}).\nInstall it in other projects with crewai tool install {published_handle}",
f"Successfully published {published_handle} ({project_version}).\nInstall it in other projects with crewai tool install {published_handle}",
style="bold green",
)
@@ -138,7 +138,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
self._add_package(get_response.json())
console.print(f"Succesfully installed {handle}", style="bold green")
console.print(f"Successfully installed {handle}", style="bold green")
def login(self):
login_response = self.plus_api_client.login_to_tool_repository()

View File

@@ -1,6 +1,6 @@
import importlib.metadata
def get_crewai_version() -> str:
"""Get the version number of CrewAI running the CLI"""
return importlib.metadata.version("crewai")

View File

@@ -1,6 +1,5 @@
import asyncio
import json
import os
import uuid
import warnings
from concurrent.futures import Future
@@ -19,6 +18,9 @@ from pydantic import (
)
from pydantic_core import PydanticCustomError
from typing import Union
from crewai.llm import LLM
from crewai.agent import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.cache import CacheHandler
@@ -36,6 +38,7 @@ from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry import Telemetry
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.tools.base_tool import Tool
from crewai.types.usage_metrics import UsageMetrics
from crewai.utilities import I18N, FileHandler, Logger, RPMController
from crewai.utilities.constants import TRAINING_DATA_FILE
@@ -49,12 +52,10 @@ from crewai.utilities.planning_handler import CrewPlanner
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
from crewai.utilities.training_handler import CrewTrainingHandler
agentops = None
if os.environ.get("AGENTOPS_API_KEY"):
try:
import agentops # type: ignore
except ImportError:
pass
try:
import agentops # type: ignore
except ImportError:
agentops = None
warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd")
@@ -536,9 +537,6 @@ class Crew(BaseModel):
if not agent.function_calling_llm: # type: ignore # "BaseAgent" has no attribute "function_calling_llm"
agent.function_calling_llm = self.function_calling_llm # type: ignore # "BaseAgent" has no attribute "function_calling_llm"
if agent.allow_code_execution: # type: ignore # BaseAgent" has no attribute "allow_code_execution"
agent.tools += agent.get_code_execution_tools() # type: ignore # "BaseAgent" has no attribute "get_code_execution_tools"; maybe "get_delegation_tools"?
if not agent.step_callback: # type: ignore # "BaseAgent" has no attribute "step_callback"
agent.step_callback = self.step_callback # type: ignore # "BaseAgent" has no attribute "step_callback"
@@ -675,7 +673,6 @@ class Crew(BaseModel):
)
manager.tools = []
raise Exception("Manager agent should not have tools")
manager.tools = self.manager_agent.get_delegation_tools(self.agents)
else:
self.manager_llm = (
getattr(self.manager_llm, "model_name", None)
@@ -687,6 +684,7 @@ class Crew(BaseModel):
goal=i18n.retrieve("hierarchical_manager_agent", "goal"),
backstory=i18n.retrieve("hierarchical_manager_agent", "backstory"),
tools=AgentTools(agents=self.agents).tools(),
allow_delegation=True,
llm=self.manager_llm,
verbose=self.verbose,
)
@@ -729,7 +727,14 @@ class Crew(BaseModel):
f"No agent available for task: {task.description}. Ensure that either the task has an assigned agent or a manager agent is provided."
)
self._prepare_agent_tools(task)
# Determine which tools to use - task tools take precedence over agent tools
tools_for_task = task.tools or agent_to_use.tools or []
tools_for_task = self._prepare_tools(
agent_to_use,
task,
tools_for_task
)
self._log_task_start(task, agent_to_use.role)
if isinstance(task, ConditionalTask):
@@ -746,7 +751,7 @@ class Crew(BaseModel):
future = task.execute_async(
agent=agent_to_use,
context=context,
tools=agent_to_use.tools,
tools=tools_for_task,
)
futures.append((task, future, task_index))
else:
@@ -758,7 +763,7 @@ class Crew(BaseModel):
task_output = task.execute_sync(
agent=agent_to_use,
context=context,
tools=agent_to_use.tools,
tools=tools_for_task,
)
task_outputs = [task_output]
self._process_task_result(task, task_output)
@@ -795,45 +800,67 @@ class Crew(BaseModel):
return skipped_task_output
return None
def _prepare_agent_tools(self, task: Task):
if self.process == Process.hierarchical:
if self.manager_agent:
self._update_manager_tools(task)
else:
raise ValueError("Manager agent is required for hierarchical process.")
elif task.agent and task.agent.allow_delegation:
self._add_delegation_tools(task)
def _prepare_tools(self, agent: BaseAgent, task: Task, tools: List[Tool]) -> List[Tool]:
# Add delegation tools if agent allows delegation
if agent.allow_delegation:
if self.process == Process.hierarchical:
if self.manager_agent:
tools = self._update_manager_tools(task, tools)
else:
raise ValueError("Manager agent is required for hierarchical process.")
elif agent and agent.allow_delegation:
tools = self._add_delegation_tools(task, tools)
# Add code execution tools if agent allows code execution
if agent.allow_code_execution:
tools = self._add_code_execution_tools(agent, tools)
if agent and agent.multimodal:
tools = self._add_multimodal_tools(agent, tools)
return tools
def _get_agent_to_use(self, task: Task) -> Optional[BaseAgent]:
if self.process == Process.hierarchical:
return self.manager_agent
return task.agent
def _add_delegation_tools(self, task: Task):
def _merge_tools(self, existing_tools: List[Tool], new_tools: List[Tool]) -> List[Tool]:
"""Merge new tools into existing tools list, avoiding duplicates by tool name."""
if not new_tools:
return existing_tools
# Create mapping of tool names to new tools
new_tool_map = {tool.name: tool for tool in new_tools}
# Remove any existing tools that will be replaced
tools = [tool for tool in existing_tools if tool.name not in new_tool_map]
# Add all new tools
tools.extend(new_tools)
return tools
def _inject_delegation_tools(self, tools: List[Tool], task_agent: BaseAgent, agents: List[BaseAgent]):
delegation_tools = task_agent.get_delegation_tools(agents)
return self._merge_tools(tools, delegation_tools)
def _add_multimodal_tools(self, agent: BaseAgent, tools: List[Tool]):
multimodal_tools = agent.get_multimodal_tools()
return self._merge_tools(tools, multimodal_tools)
def _add_code_execution_tools(self, agent: BaseAgent, tools: List[Tool]):
code_tools = agent.get_code_execution_tools()
return self._merge_tools(tools, code_tools)
def _add_delegation_tools(self, task: Task, tools: List[Tool]):
agents_for_delegation = [agent for agent in self.agents if agent != task.agent]
if len(self.agents) > 1 and len(agents_for_delegation) > 0 and task.agent:
delegation_tools = task.agent.get_delegation_tools(agents_for_delegation)
# Add tools if they are not already in task.tools
for new_tool in delegation_tools:
# Find the index of the tool with the same name
existing_tool_index = next(
(
index
for index, tool in enumerate(task.tools or [])
if tool.name == new_tool.name
),
None,
)
if not task.tools:
task.tools = []
if existing_tool_index is not None:
# Replace the existing tool
task.tools[existing_tool_index] = new_tool
else:
# Add the new tool
task.tools.append(new_tool)
if not tools:
tools = []
tools = self._inject_delegation_tools(tools, task.agent, agents_for_delegation)
return tools
def _log_task_start(self, task: Task, role: str = "None"):
if self.output_log_file:
@@ -841,14 +868,13 @@ class Crew(BaseModel):
task_name=task.name, task=task.description, agent=role, status="started"
)
def _update_manager_tools(self, task: Task):
def _update_manager_tools(self, task: Task, tools: List[Tool]):
if self.manager_agent:
if task.agent:
self.manager_agent.tools = task.agent.get_delegation_tools([task.agent])
tools = self._inject_delegation_tools(tools, task.agent, [task.agent])
else:
self.manager_agent.tools = self.manager_agent.get_delegation_tools(
self.agents
)
tools = self._inject_delegation_tools(tools, self.manager_agent, self.agents)
return tools
def _get_context(self, task: Task, task_outputs: List[TaskOutput]):
context = (
@@ -1032,6 +1058,7 @@ class Crew(BaseModel):
agentops.end_session(
end_state="Success",
end_state_reason="Finished Execution",
is_auto_end=True,
)
self._telemetry.end_crew(self, final_string_output)
@@ -1051,19 +1078,30 @@ class Crew(BaseModel):
def test(
self,
n_iterations: int,
openai_model_name: Optional[str] = None,
openai_model_name: Optional[Union[str, LLM]] = None,
inputs: Optional[Dict[str, Any]] = None,
) -> None:
"""Test and evaluate the Crew with the given inputs for n iterations concurrently using concurrent.futures."""
"""Test and evaluate the Crew with the given inputs for n iterations.
Args:
n_iterations: The number of iterations to run the test.
openai_model_name: Either a model name string or an LLM instance to use for evaluating
the performance of the agents. If a string is provided, it will be used to create
an LLM instance.
inputs: The inputs to use for the test.
Raises:
ValueError: If openai_model_name is not a string or LLM instance.
"""
test_crew = self.copy()
self._test_execution_span = test_crew._telemetry.test_execution_span(
test_crew,
n_iterations,
inputs,
openai_model_name, # type: ignore[arg-type]
) # type: ignore[arg-type]
evaluator = CrewEvaluator(test_crew, openai_model_name) # type: ignore[arg-type]
openai_model_name,
)
evaluator = CrewEvaluator(test_crew, openai_model_name)
for i in range(1, n_iterations + 1):
evaluator.set_iteration(i)

View File
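The `_merge_tools` helper above deduplicates by tool name, so freshly injected delegation, code-execution, or multimodal tools replace any same-named tools already attached to the task while everything else is kept. A self-contained sketch of that merge rule (`SimpleTool` is a stand-in for illustration, not crewai's `Tool`):

```python
# A self-contained sketch of the merge rule in Crew._merge_tools:
# new tools replace existing tools that share a name; the rest are kept.
from dataclasses import dataclass
from typing import List


@dataclass
class SimpleTool:
    name: str
    version: int = 1


def merge_tools(existing: List[SimpleTool], new: List[SimpleTool]) -> List[SimpleTool]:
    if not new:
        return existing
    new_by_name = {tool.name: tool for tool in new}
    kept = [tool for tool in existing if tool.name not in new_by_name]
    return kept + list(new)


base = [SimpleTool("search"), SimpleTool("delegate", version=1)]
extra = [SimpleTool("delegate", version=2), SimpleTool("add_image")]
print([(t.name, t.version) for t in merge_tools(base, extra)])
# [('search', 1), ('delegate', 2), ('add_image', 1)]
```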

@@ -14,8 +14,15 @@ from typing import (
cast,
)
from blinker import Signal
from pydantic import BaseModel, ValidationError
from crewai.flow.flow_events import (
FlowFinishedEvent,
FlowStartedEvent,
MethodExecutionFinishedEvent,
MethodExecutionStartedEvent,
)
from crewai.flow.flow_visualizer import plot_flow
from crewai.flow.utils import get_possible_return_constants
from crewai.telemetry import Telemetry
@@ -73,10 +80,27 @@ def listen(condition):
return decorator
def router(method):
def router(condition):
def decorator(func):
func.__is_router__ = True
func.__router_for__ = method.__name__
# Handle conditions like listen/start
if isinstance(condition, str):
func.__trigger_methods__ = [condition]
func.__condition_type__ = "OR"
elif (
isinstance(condition, dict)
and "type" in condition
and "methods" in condition
):
func.__trigger_methods__ = condition["methods"]
func.__condition_type__ = condition["type"]
elif callable(condition) and hasattr(condition, "__name__"):
func.__trigger_methods__ = [condition.__name__]
func.__condition_type__ = "OR"
else:
raise ValueError(
"Condition must be a method, string, or a result of or_() or and_()"
)
return func
return decorator
@@ -116,8 +140,8 @@ class FlowMeta(type):
start_methods = []
listeners = {}
routers = {}
router_paths = {}
routers = set()
for attr_name, attr_value in dct.items():
if hasattr(attr_value, "__is_start_method__"):
@@ -130,18 +154,11 @@ class FlowMeta(type):
methods = attr_value.__trigger_methods__
condition_type = getattr(attr_value, "__condition_type__", "OR")
listeners[attr_name] = (condition_type, methods)
elif hasattr(attr_value, "__is_router__"):
routers[attr_value.__router_for__] = attr_name
possible_returns = get_possible_return_constants(attr_value)
if possible_returns:
router_paths[attr_name] = possible_returns
# Register router as a listener to its triggering method
trigger_method_name = attr_value.__router_for__
methods = [trigger_method_name]
condition_type = "OR"
listeners[attr_name] = (condition_type, methods)
if hasattr(attr_value, "__is_router__") and attr_value.__is_router__:
routers.add(attr_name)
possible_returns = get_possible_return_constants(attr_value)
if possible_returns:
router_paths[attr_name] = possible_returns
setattr(cls, "_start_methods", start_methods)
setattr(cls, "_listeners", listeners)
@@ -156,9 +173,10 @@ class Flow(Generic[T], metaclass=FlowMeta):
_start_methods: List[str] = []
_listeners: Dict[str, tuple[str, List[str]]] = {}
_routers: Dict[str, str] = {}
_routers: Set[str] = set()
_router_paths: Dict[str, List[str]] = {}
initial_state: Union[Type[T], T, None] = None
event_emitter = Signal("event_emitter")
def __class_getitem__(cls: Type["Flow"], item: Type[T]) -> Type["Flow"]:
class _FlowGeneric(cls): # type: ignore
@@ -202,20 +220,10 @@ class Flow(Generic[T], metaclass=FlowMeta):
return self._method_outputs
def _initialize_state(self, inputs: Dict[str, Any]) -> None:
"""
Initializes or updates the state with the provided inputs.
Args:
inputs: Dictionary of inputs to initialize or update the state.
Raises:
ValueError: If inputs do not match the structured state model.
TypeError: If state is neither a BaseModel instance nor a dictionary.
"""
if isinstance(self._state, BaseModel):
# Structured state management
# Structured state
try:
# Define a function to create the dynamic class
def create_model_with_extra_forbid(
base_model: Type[BaseModel],
) -> Type[BaseModel]:
@@ -225,50 +233,33 @@ class Flow(Generic[T], metaclass=FlowMeta):
return ModelWithExtraForbid
# Create the dynamic class
ModelWithExtraForbid = create_model_with_extra_forbid(
self._state.__class__
)
# Create a new instance using the combined state and inputs
self._state = cast(
T, ModelWithExtraForbid(**{**self._state.model_dump(), **inputs})
)
except ValidationError as e:
raise ValueError(f"Invalid inputs for structured state: {e}") from e
elif isinstance(self._state, dict):
# Unstructured state management
self._state.update(inputs)
else:
raise TypeError("State must be a BaseModel instance or a dictionary.")
def kickoff(self, inputs: Optional[Dict[str, Any]] = None) -> Any:
"""
Starts the execution of the flow synchronously.
self.event_emitter.send(
self,
event=FlowStartedEvent(
type="flow_started",
flow_name=self.__class__.__name__,
),
)
Args:
inputs: Optional dictionary of inputs to initialize or update the state.
Returns:
The final output from the flow execution.
"""
if inputs is not None:
self._initialize_state(inputs)
return asyncio.run(self.kickoff_async())
async def kickoff_async(self, inputs: Optional[Dict[str, Any]] = None) -> Any:
"""
Starts the execution of the flow asynchronously.
Args:
inputs: Optional dictionary of inputs to initialize or update the state.
Returns:
The final output from the flow execution.
"""
if inputs is not None:
self._initialize_state(inputs)
if not self._start_methods:
raise ValueError("No start method defined")
@@ -276,20 +267,23 @@ class Flow(Generic[T], metaclass=FlowMeta):
self.__class__.__name__, list(self._methods.keys())
)
# Create tasks for all start methods
tasks = [
self._execute_start_method(start_method)
for start_method in self._start_methods
]
# Run all start methods concurrently
await asyncio.gather(*tasks)
# Return the final output (from the last executed method)
if self._method_outputs:
return self._method_outputs[-1]
else:
return None # Or raise an exception if no methods were executed
final_output = self._method_outputs[-1] if self._method_outputs else None
self.event_emitter.send(
self,
event=FlowFinishedEvent(
type="flow_finished",
flow_name=self.__class__.__name__,
result=final_output,
),
)
return final_output
async def _execute_start_method(self, start_method_name: str) -> None:
result = await self._execute_method(
@@ -305,70 +299,105 @@ class Flow(Generic[T], metaclass=FlowMeta):
if asyncio.iscoroutinefunction(method)
else method(*args, **kwargs)
)
self._method_outputs.append(result) # Store the output
# Track method execution counts
self._method_outputs.append(result)
self._method_execution_counts[method_name] = (
self._method_execution_counts.get(method_name, 0) + 1
)
return result
async def _execute_listeners(self, trigger_method: str, result: Any) -> None:
listener_tasks = []
if trigger_method in self._routers:
router_method = self._methods[self._routers[trigger_method]]
path = await self._execute_method(
self._routers[trigger_method], router_method
# First, handle routers repeatedly until no router triggers anymore
while True:
routers_triggered = self._find_triggered_methods(
trigger_method, router_only=True
)
trigger_method = path
if not routers_triggered:
break
for router_name in routers_triggered:
await self._execute_single_listener(router_name, result)
# After executing router, the router's result is the path
# The last router executed sets the trigger_method
# The router result is the last element in self._method_outputs
trigger_method = self._method_outputs[-1]
# Now that no more routers are triggered by current trigger_method,
# execute normal listeners
listeners_triggered = self._find_triggered_methods(
trigger_method, router_only=False
)
if listeners_triggered:
tasks = [
self._execute_single_listener(listener_name, result)
for listener_name in listeners_triggered
]
await asyncio.gather(*tasks)
def _find_triggered_methods(
self, trigger_method: str, router_only: bool
) -> List[str]:
triggered = []
for listener_name, (condition_type, methods) in self._listeners.items():
is_router = listener_name in self._routers
if router_only != is_router:
continue
if condition_type == "OR":
# If the trigger_method matches any in methods, run this
if trigger_method in methods:
# Schedule the listener without preventing re-execution
listener_tasks.append(
self._execute_single_listener(listener_name, result)
)
triggered.append(listener_name)
elif condition_type == "AND":
# Initialize pending methods for this listener if not already done
if listener_name not in self._pending_and_listeners:
self._pending_and_listeners[listener_name] = set(methods)
# Remove the trigger method from pending methods
self._pending_and_listeners[listener_name].discard(trigger_method)
if trigger_method in self._pending_and_listeners[listener_name]:
self._pending_and_listeners[listener_name].discard(trigger_method)
if not self._pending_and_listeners[listener_name]:
# All required methods have been executed
listener_tasks.append(
self._execute_single_listener(listener_name, result)
)
triggered.append(listener_name)
# Reset pending methods for this listener
self._pending_and_listeners.pop(listener_name, None)
# Run all listener tasks concurrently and wait for them to complete
if listener_tasks:
await asyncio.gather(*listener_tasks)
return triggered
async def _execute_single_listener(self, listener_name: str, result: Any) -> None:
try:
method = self._methods[listener_name]
self.event_emitter.send(
self,
event=MethodExecutionStartedEvent(
type="method_execution_started",
method_name=listener_name,
flow_name=self.__class__.__name__,
),
)
sig = inspect.signature(method)
params = list(sig.parameters.values())
# Exclude 'self' parameter
method_params = [p for p in params if p.name != "self"]
if method_params:
# If listener expects parameters, pass the result
listener_result = await self._execute_method(
listener_name, method, result
)
else:
# If listener does not expect parameters, call without arguments
listener_result = await self._execute_method(listener_name, method)
# Execute listeners of this listener
self.event_emitter.send(
self,
event=MethodExecutionFinishedEvent(
type="method_execution_finished",
method_name=listener_name,
flow_name=self.__class__.__name__,
),
)
# Execute listeners (and possibly routers) of this listener
await self._execute_listeners(listener_name, listener_result)
except Exception as e:
print(
f"[Flow._execute_single_listener] Error in method {listener_name}: {e}"
@@ -381,5 +410,4 @@ class Flow(Generic[T], metaclass=FlowMeta):
self._telemetry.flow_plotting_span(
self.__class__.__name__, list(self._methods.keys())
)
plot_flow(self, filename)

View File
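With the flow hunk above, `@router(...)` takes a condition exactly like `@listen` (a method, a string, or an `or_()`/`and_()` result), routers are tracked as a set and re-evaluated until none fire, and flow/method lifecycle events are emitted over `event_emitter`. A hedged sketch of the decorator usage (class and method names are illustrative):

```python
# A hedged sketch of the updated router API; names are illustrative.
from crewai.flow.flow import Flow, listen, router, start


class ReviewFlow(Flow):
    @start()
    def draft(self):
        return "draft ready"

    @router(draft)  # condition may be a method, a string, or or_()/and_()
    def triage(self, draft_text):
        # Constant string returns are what get_possible_return_constants
        # collects as this router's possible paths.
        if "urgent" in draft_text:
            return "escalated"
        return "approved"

    @listen("approved")
    def publish(self):
        return "published"


# Expected to return the last method output ("published" on this path).
result = ReviewFlow().kickoff()
```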

@@ -0,0 +1,33 @@
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Optional
@dataclass
class Event:
type: str
flow_name: str
timestamp: datetime = field(init=False)
def __post_init__(self):
self.timestamp = datetime.now()
@dataclass
class FlowStartedEvent(Event):
pass
@dataclass
class MethodExecutionStartedEvent(Event):
method_name: str
@dataclass
class MethodExecutionFinishedEvent(Event):
method_name: str
@dataclass
class FlowFinishedEvent(Event):
result: Optional[Any] = None

View File
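These dataclasses are what the new `event_emitter` sends at flow start/finish and around each method execution. A hedged sketch of observing them, assuming the class-level blinker `Signal` shown in the flow hunk (the handler name is illustrative):

```python
# A hedged sketch of subscribing to the new flow events.
from crewai.flow.flow import Flow


def log_flow_event(sender, *, event, **kwargs):
    # event is one of the dataclasses above (FlowStartedEvent,
    # MethodExecutionStartedEvent, MethodExecutionFinishedEvent,
    # FlowFinishedEvent).
    print(f"[{event.timestamp:%H:%M:%S}] {event.type} in {event.flow_name}")


# event_emitter is a class-level blinker Signal, so connecting once
# observes events from every Flow instance.
Flow.event_emitter.connect(log_flow_event)
```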

@@ -31,16 +31,50 @@ def get_possible_return_constants(function):
print(f"Source code:\n{source}")
return None
return_values = []
return_values = set()
dict_definitions = {}
class DictionaryAssignmentVisitor(ast.NodeVisitor):
def visit_Assign(self, node):
# Check if this assignment is assigning a dictionary literal to a variable
if isinstance(node.value, ast.Dict) and len(node.targets) == 1:
target = node.targets[0]
if isinstance(target, ast.Name):
var_name = target.id
dict_values = []
# Extract string values from the dictionary
for val in node.value.values:
if isinstance(val, ast.Constant) and isinstance(val.value, str):
dict_values.append(val.value)
# If non-string, skip or just ignore
if dict_values:
dict_definitions[var_name] = dict_values
self.generic_visit(node)
class ReturnVisitor(ast.NodeVisitor):
def visit_Return(self, node):
# Check if the return value is a constant (Python 3.8+)
if isinstance(node.value, ast.Constant):
return_values.append(node.value.value)
# Direct string return
if isinstance(node.value, ast.Constant) and isinstance(
node.value.value, str
):
return_values.add(node.value.value)
# Dictionary-based return, like return paths[result]
elif isinstance(node.value, ast.Subscript):
# Check if we're subscripting a known dictionary variable
if isinstance(node.value.value, ast.Name):
var_name = node.value.value.id
if var_name in dict_definitions:
# Add all possible dictionary values
for v in dict_definitions[var_name]:
return_values.add(v)
self.generic_visit(node)
# First pass: identify dictionary assignments
DictionaryAssignmentVisitor().visit(code_ast)
# Second pass: identify returns
ReturnVisitor().visit(code_ast)
return return_values
return list(return_values) if return_values else None
def calculate_node_levels(flow):
@@ -61,10 +95,7 @@ def calculate_node_levels(flow):
current_level = levels[current]
visited.add(current)
for listener_name, (
condition_type,
trigger_methods,
) in flow._listeners.items():
for listener_name, (condition_type, trigger_methods) in flow._listeners.items():
if condition_type == "OR":
if current in trigger_methods:
if (
@@ -89,7 +120,7 @@ def calculate_node_levels(flow):
queue.append(listener_name)
# Handle router connections
if current in flow._routers.values():
if current in flow._routers:
router_method_name = current
paths = flow._router_paths.get(router_method_name, [])
for path in paths:
@@ -105,6 +136,7 @@ def calculate_node_levels(flow):
levels[listener_name] = current_level + 1
if listener_name not in visited:
queue.append(listener_name)
return levels
@@ -142,7 +174,7 @@ def dfs_ancestors(node, ancestors, visited, flow):
dfs_ancestors(listener_name, ancestors, visited, flow)
# Handle router methods separately
if node in flow._routers.values():
if node in flow._routers:
router_method_name = node
paths = flow._router_paths.get(router_method_name, [])
for path in paths:

View File
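The two-pass AST visitor above now recognizes dictionary-based router returns in addition to literal strings: a first pass records dict literals assigned in the function, a second pass collects direct constant returns and the values of any `return some_dict[key]`. A sketch of a router body it can analyze (function and variable names are illustrative):

```python
# A hedged sketch of what get_possible_return_constants can now recover:
# literal string returns and "return paths[key]" where paths is a dict
# literal assigned in the same function.
def pick_route(outcome: str) -> str:
    paths = {"ok": "success", "err": "failure"}  # dict literal: values collected
    if outcome == "skip":
        return "skipped"                         # direct constant: collected
    return paths[outcome]                        # subscript of known dict: values collected


# Expected possible paths: "success", "failure", "skipped" (order not
# guaranteed, since the helper builds its list from a set).
```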

@@ -94,12 +94,14 @@ def add_edges(net, flow, node_positions, colors):
ancestors = build_ancestor_dict(flow)
parent_children = build_parent_children_dict(flow)
# Edges for normal listeners
for method_name in flow._listeners:
condition_type, trigger_methods = flow._listeners[method_name]
is_and_condition = condition_type == "AND"
for trigger in trigger_methods:
if trigger in flow._methods or trigger in flow._routers.values():
# Check if nodes exist before adding edges
if trigger in node_positions and method_name in node_positions:
is_router_edge = any(
trigger in paths for paths in flow._router_paths.values()
)
@@ -135,7 +137,22 @@ def add_edges(net, flow, node_positions, colors):
}
net.add_edge(trigger, method_name, **edge_style)
else:
# Nodes not found in node_positions. Check if it's a known router outcome and a known method.
is_router_edge = any(
trigger in paths for paths in flow._router_paths.values()
)
# Check if method_name is a known method
method_known = method_name in flow._methods
# If it's a known router edge and the method is known, don't warn.
# This means the path is legitimate, just not reflected as nodes here.
if not (is_router_edge and method_known):
print(
f"Warning: No node found for '{trigger}' or '{method_name}'. Skipping edge."
)
# Edges for router return paths
for router_method_name, paths in flow._router_paths.items():
for path in paths:
for listener_name, (
@@ -143,36 +160,49 @@ def add_edges(net, flow, node_positions, colors):
trigger_methods,
) in flow._listeners.items():
if path in trigger_methods:
is_cycle_edge = is_ancestor(trigger, method_name, ancestors)
parent_has_multiple_children = (
len(parent_children.get(router_method_name, [])) > 1
)
needs_curvature = is_cycle_edge or parent_has_multiple_children
if (
router_method_name in node_positions
and listener_name in node_positions
):
is_cycle_edge = is_ancestor(
router_method_name, listener_name, ancestors
)
parent_has_multiple_children = (
len(parent_children.get(router_method_name, [])) > 1
)
needs_curvature = is_cycle_edge or parent_has_multiple_children
if needs_curvature:
source_pos = node_positions.get(router_method_name)
target_pos = node_positions.get(listener_name)
if needs_curvature:
source_pos = node_positions.get(router_method_name)
target_pos = node_positions.get(listener_name)
if source_pos and target_pos:
dx = target_pos[0] - source_pos[0]
smooth_type = "curvedCCW" if dx <= 0 else "curvedCW"
index = get_child_index(
router_method_name, listener_name, parent_children
)
edge_smooth = {
"type": smooth_type,
"roundness": 0.2 + (0.1 * index),
}
if source_pos and target_pos:
dx = target_pos[0] - source_pos[0]
smooth_type = "curvedCCW" if dx <= 0 else "curvedCW"
index = get_child_index(
router_method_name, listener_name, parent_children
)
edge_smooth = {
"type": smooth_type,
"roundness": 0.2 + (0.1 * index),
}
else:
edge_smooth = {"type": "cubicBezier"}
else:
edge_smooth = {"type": "cubicBezier"}
else:
edge_smooth = False
edge_smooth = False
edge_style = {
"color": colors["router_edge"],
"width": 2,
"arrows": "to",
"dashes": True,
"smooth": edge_smooth,
}
net.add_edge(router_method_name, listener_name, **edge_style)
edge_style = {
"color": colors["router_edge"],
"width": 2,
"arrows": "to",
"dashes": True,
"smooth": edge_smooth,
}
net.add_edge(router_method_name, listener_name, **edge_style)
else:
# Same check here: known router edge and known method?
method_known = listener_name in flow._methods
if not method_known:
print(
f"Warning: No node found for '{router_method_name}' or '{listener_name}'. Skipping edge."
)

View File

@@ -1,8 +1,8 @@
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Dict, List, Union
from typing import Dict, List, Optional, Union
from pydantic import Field
from pydantic import Field, field_validator
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage
@@ -14,17 +14,28 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
"""Base class for knowledge sources that load content from files."""
_logger: Logger = Logger(verbose=True)
file_path: Union[Path, List[Path], str, List[str]] = Field(
..., description="The path to the file"
file_path: Optional[Union[Path, List[Path], str, List[str]]] = Field(
default=None,
description="[Deprecated] The path to the file. Use file_paths instead.",
)
file_paths: Optional[Union[Path, List[Path], str, List[str]]] = Field(
default_factory=list, description="The paths to the files"
)
content: Dict[Path, str] = Field(init=False, default_factory=dict)
storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
safe_file_paths: List[Path] = Field(default_factory=list)
@field_validator("file_path", "file_paths", mode="before")
def validate_file_path(cls, v, values):
"""Validate that at least one of file_path or file_paths is provided."""
if v is None and ("file_path" not in values or values.get("file_path") is None):
raise ValueError("Either file_path or file_paths must be provided")
return v
def model_post_init(self, _):
"""Post-initialization method to load content."""
self.safe_file_paths = self._process_file_paths()
self.validate_paths()
self.validate_content()
self.content = self.load_content()
@abstractmethod
@@ -32,13 +43,13 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
"""Load and preprocess file content. Should be overridden by subclasses. Assume that the file path is relative to the project root in the knowledge directory."""
pass
def validate_paths(self):
def validate_content(self):
"""Validate the paths."""
for path in self.safe_file_paths:
if not path.exists():
self._logger.log(
"error",
f"File not found: {path}. Try adding sources to the knowledge directory. If its inside the knowledge directory, use the relative path.",
f"File not found: {path}. Try adding sources to the knowledge directory. If it's inside the knowledge directory, use the relative path.",
color="red",
)
raise FileNotFoundError(f"File not found: {path}")
@@ -59,13 +70,30 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
def _process_file_paths(self) -> List[Path]:
"""Convert file_path to a list of Path objects."""
paths = (
[self.file_path]
if isinstance(self.file_path, (str, Path))
else self.file_path
if hasattr(self, "file_path") and self.file_path is not None:
self._logger.log(
"warning",
"The 'file_path' attribute is deprecated and will be removed in a future version. Please use 'file_paths' instead.",
color="yellow",
)
self.file_paths = self.file_path
if self.file_paths is None:
raise ValueError("Your source must be provided with a file_paths: []")
# Convert single path to list
path_list: List[Union[Path, str]] = (
[self.file_paths]
if isinstance(self.file_paths, (str, Path))
else list(self.file_paths)
if isinstance(self.file_paths, list)
else []
)
if not isinstance(paths, list):
raise ValueError("file_path must be a Path, str, or a list of these types")
if not path_list:
raise ValueError(
"file_path/file_paths must be a Path, str, or a list of these types"
)
return [self.convert_to_path(path) for path in paths]
return [self.convert_to_path(path) for path in path_list]

View File
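The hunk above deprecates the singular `file_path` in favor of `file_paths`, logging a warning and copying the old value across before path processing. A hedged sketch of the migration (the `TextFileKnowledgeSource` subclass and file names are assumptions; paths are resolved relative to the `knowledge/` directory and must exist):

```python
# A hedged sketch of the file_path -> file_paths migration; the concrete
# subclass and file names are assumptions, not taken from the diff.
from crewai.knowledge.source.text_file_knowledge_source import TextFileKnowledgeSource

# Preferred: plural file_paths (resolved under the knowledge/ directory).
source = TextFileKnowledgeSource(file_paths=["notes.txt", "faq.txt"])

# Still accepted, but logs a deprecation warning and is copied into file_paths.
legacy = TextFileKnowledgeSource(file_path="notes.txt")
```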

@@ -21,7 +21,7 @@ class BaseKnowledgeSource(BaseModel, ABC):
collection_name: Optional[str] = Field(default=None)
@abstractmethod
def load_content(self) -> Dict[Any, str]:
def validate_content(self) -> Any:
"""Load and preprocess content from the source."""
pass

View File

@@ -0,0 +1,120 @@
from pathlib import Path
from typing import Iterator, List, Optional, Union
from urllib.parse import urlparse
from docling.datamodel.base_models import InputFormat
from docling.document_converter import DocumentConverter
from docling.exceptions import ConversionError
from docling_core.transforms.chunker.hierarchical_chunker import HierarchicalChunker
from docling_core.types.doc.document import DoclingDocument
from pydantic import Field
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.utilities.constants import KNOWLEDGE_DIRECTORY
from crewai.utilities.logger import Logger
class CrewDoclingSource(BaseKnowledgeSource):
"""Default Source class for converting documents to markdown or json
This automatically supports PDF, DOCX, TXT, XLSX, image, and HTML files without any additional dependencies, following the docling package as the source of truth.
"""
_logger: Logger = Logger(verbose=True)
file_path: Optional[List[Union[Path, str]]] = Field(default=None)
file_paths: List[Union[Path, str]] = Field(default_factory=list)
chunks: List[str] = Field(default_factory=list)
safe_file_paths: List[Union[Path, str]] = Field(default_factory=list)
content: List[DoclingDocument] = Field(default_factory=list)
document_converter: DocumentConverter = Field(
default_factory=lambda: DocumentConverter(
allowed_formats=[
InputFormat.MD,
InputFormat.ASCIIDOC,
InputFormat.PDF,
InputFormat.DOCX,
InputFormat.HTML,
InputFormat.IMAGE,
InputFormat.XLSX,
InputFormat.PPTX,
]
)
)
def model_post_init(self, _) -> None:
if self.file_path:
self._logger.log(
"warning",
"The 'file_path' attribute is deprecated and will be removed in a future version. Please use 'file_paths' instead.",
color="yellow",
)
self.file_paths = self.file_path
self.safe_file_paths = self.validate_content()
self.content = self._load_content()
def _load_content(self) -> List[DoclingDocument]:
try:
return self._convert_source_to_docling_documents()
except ConversionError as e:
self._logger.log(
"error",
f"Error loading content: {e}. Supported formats: {self.document_converter.allowed_formats}",
"red",
)
raise e
except Exception as e:
self._logger.log("error", f"Error loading content: {e}")
raise e
def add(self) -> None:
if self.content is None:
return
for doc in self.content:
new_chunks_iterable = self._chunk_doc(doc)
self.chunks.extend(list(new_chunks_iterable))
self._save_documents()
def _convert_source_to_docling_documents(self) -> List[DoclingDocument]:
conv_results_iter = self.document_converter.convert_all(self.safe_file_paths)
return [result.document for result in conv_results_iter]
def _chunk_doc(self, doc: DoclingDocument) -> Iterator[str]:
chunker = HierarchicalChunker()
for chunk in chunker.chunk(doc):
yield chunk.text
def validate_content(self) -> List[Union[Path, str]]:
processed_paths: List[Union[Path, str]] = []
for path in self.file_paths:
if isinstance(path, str):
if path.startswith(("http://", "https://")):
try:
if self._validate_url(path):
processed_paths.append(path)
else:
raise ValueError(f"Invalid URL format: {path}")
except Exception as e:
raise ValueError(f"Invalid URL: {path}. Error: {str(e)}")
else:
local_path = Path(KNOWLEDGE_DIRECTORY + "/" + path)
if local_path.exists():
processed_paths.append(local_path)
else:
raise FileNotFoundError(f"File not found: {local_path}")
else:
# this is an instance of Path
processed_paths.append(path)
return processed_paths
def _validate_url(self, url: str) -> bool:
try:
result = urlparse(url)
return all(
[
result.scheme in ("http", "https"),
result.netloc,
len(result.netloc.split(".")) >= 2, # Ensure domain has TLD
]
)
except Exception:
return False

View File
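A hedged sketch of using the new docling-backed source above: it accepts local files (resolved under `knowledge/`) and `http(s)` URLs, and relies on the optional `docling` extra added to pyproject. The module path and file names here are assumptions:

```python
# A hedged sketch of the new CrewDoclingSource; the import path and the
# file/URL values are illustrative assumptions.
from crewai.knowledge.source.crew_docling_source import CrewDoclingSource

source = CrewDoclingSource(
    file_paths=[
        "handbook.pdf",                          # resolved under the knowledge/ directory
        "https://example.com/whitepaper.html",   # URLs are validated, not resolved locally
    ],
)
```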

@@ -13,9 +13,9 @@ class StringKnowledgeSource(BaseKnowledgeSource):
def model_post_init(self, _):
"""Post-initialization method to validate content."""
self.load_content()
self.validate_content()
def load_content(self):
def validate_content(self):
"""Validate string content."""
if not isinstance(self.content, str):
raise ValueError("StringKnowledgeSource only accepts string content")

View File

@@ -1,5 +1,5 @@
from abc import ABC, abstractmethod
from typing import Dict, Any, List, Optional
from typing import Any, Dict, List, Optional
class BaseKnowledgeStorage(ABC):

View File

@@ -14,9 +14,9 @@ from chromadb.config import Settings
from crewai.knowledge.storage.base_knowledge_storage import BaseKnowledgeStorage
from crewai.utilities import EmbeddingConfigurator
from crewai.utilities.constants import KNOWLEDGE_DIRECTORY
from crewai.utilities.logger import Logger
from crewai.utilities.paths import db_storage_path
from crewai.utilities.constants import KNOWLEDGE_DIRECTORY
@contextlib.contextmanager
@@ -124,43 +124,60 @@ class KnowledgeStorage(BaseKnowledgeStorage):
documents: List[str],
metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None,
):
if self.collection:
try:
if metadata is None:
metadatas: Optional[OneOrMany[chromadb.Metadata]] = None
elif isinstance(metadata, list):
metadatas = [cast(chromadb.Metadata, m) for m in metadata]
else:
metadatas = cast(chromadb.Metadata, metadata)
ids = [
hashlib.sha256(doc.encode("utf-8")).hexdigest() for doc in documents
]
self.collection.upsert(
documents=documents,
metadatas=metadatas,
ids=ids,
)
except chromadb.errors.InvalidDimensionException as e:
Logger(verbose=True).log(
"error",
"Embedding dimension mismatch. This usually happens when mixing different embedding models. Try resetting the collection using `crewai reset-memories -a`",
"red",
)
raise ValueError(
"Embedding dimension mismatch. Make sure you're using the same embedding model "
"across all operations with this collection."
"Try resetting the collection using `crewai reset-memories -a`"
) from e
except Exception as e:
Logger(verbose=True).log(
"error", f"Failed to upsert documents: {e}", "red"
)
raise
else:
if not self.collection:
raise Exception("Collection not initialized")
try:
# Create a dictionary to store unique documents
unique_docs = {}
# Generate IDs and create a mapping of id -> (document, metadata)
for idx, doc in enumerate(documents):
doc_id = hashlib.sha256(doc.encode("utf-8")).hexdigest()
doc_metadata = None
if metadata is not None:
if isinstance(metadata, list):
doc_metadata = metadata[idx]
else:
doc_metadata = metadata
unique_docs[doc_id] = (doc, doc_metadata)
# Prepare filtered lists for ChromaDB
filtered_docs = []
filtered_metadata = []
filtered_ids = []
# Build the filtered lists
for doc_id, (doc, meta) in unique_docs.items():
filtered_docs.append(doc)
filtered_metadata.append(meta)
filtered_ids.append(doc_id)
# If we have no metadata at all, set it to None
final_metadata: Optional[OneOrMany[chromadb.Metadata]] = (
None if all(m is None for m in filtered_metadata) else filtered_metadata
)
self.collection.upsert(
documents=filtered_docs,
metadatas=final_metadata,
ids=filtered_ids,
)
except chromadb.errors.InvalidDimensionException as e:
Logger(verbose=True).log(
"error",
"Embedding dimension mismatch. This usually happens when mixing different embedding models. Try resetting the collection using `crewai reset-memories -a`",
"red",
)
raise ValueError(
"Embedding dimension mismatch. Make sure you're using the same embedding model "
"across all operations with this collection."
"Try resetting the collection using `crewai reset-memories -a`"
) from e
except Exception as e:
Logger(verbose=True).log("error", f"Failed to upsert documents: {e}", "red")
raise
def _create_default_embedding_function(self):
from chromadb.utils.embedding_functions.openai_embedding_function import (
OpenAIEmbeddingFunction,

View File
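The rewritten `save` above hashes each document to build its id and collapses duplicates before calling `upsert`, so ChromaDB never receives repeated ids in a single batch. A self-contained sketch of that dedupe step:

```python
# A self-contained sketch of the dedupe step added before collection.upsert:
# document ids are content hashes, so repeated strings collapse to one entry.
import hashlib

documents = ["alpha", "beta", "alpha"]
unique_docs = {}
for doc in documents:
    doc_id = hashlib.sha256(doc.encode("utf-8")).hexdigest()
    unique_docs[doc_id] = doc

print(len(documents), "->", len(unique_docs))  # 3 -> 2
```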

@@ -43,6 +43,11 @@ LLM_CONTEXT_WINDOW_SIZES = {
"gpt-4-turbo": 128000,
"o1-preview": 128000,
"o1-mini": 128000,
# gemini
"gemini-2.0-flash": 1048576,
"gemini-1.5-pro": 2097152,
"gemini-1.5-flash": 1048576,
"gemini-1.5-flash-8b": 1048576,
# deepseek
"deepseek-chat": 128000,
# groq
@@ -59,8 +64,13 @@ LLM_CONTEXT_WINDOW_SIZES = {
"llama3-70b-8192": 8192,
"llama3-8b-8192": 8192,
"mixtral-8x7b-32768": 32768,
"llama-3.3-70b-versatile": 128000,
"llama-3.3-70b-instruct": 128000,
}
DEFAULT_CONTEXT_WINDOW_SIZE = 8192
CONTEXT_WINDOW_USAGE_RATIO = 0.75
@contextmanager
def suppress_warnings():
@@ -124,6 +134,7 @@ class LLM:
self.api_version = api_version
self.api_key = api_key
self.callbacks = callbacks
self.context_window_size = 0
self.kwargs = kwargs
litellm.drop_params = True
@@ -191,7 +202,16 @@ class LLM:
def get_context_window_size(self) -> int:
# Only using 75% of the context window size to avoid cutting the message in the middle
return int(LLM_CONTEXT_WINDOW_SIZES.get(self.model, 8192) * 0.75)
if self.context_window_size != 0:
return self.context_window_size
self.context_window_size = int(
DEFAULT_CONTEXT_WINDOW_SIZE * CONTEXT_WINDOW_USAGE_RATIO
)
for key, value in LLM_CONTEXT_WINDOW_SIZES.items():
if self.model.startswith(key):
self.context_window_size = int(value * CONTEXT_WINDOW_USAGE_RATIO)
return self.context_window_size
def set_callbacks(self, callbacks: List[Any]):
callback_types = [type(callback) for callback in callbacks]

View File
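Context-window lookup above now caches the computed value and matches model names by prefix instead of an exact dictionary key, then scales by `CONTEXT_WINDOW_USAGE_RATIO`. A self-contained sketch of the prefix matching with a trimmed-down size table:

```python
# A self-contained sketch of the new prefix-based lookup: a model name like
# "gemini-1.5-pro-002" now matches the "gemini-1.5-pro" entry instead of
# falling back to the default window.
LLM_CONTEXT_WINDOW_SIZES = {"gemini-1.5-pro": 2097152, "gpt-4": 8192}
DEFAULT_CONTEXT_WINDOW_SIZE = 8192
CONTEXT_WINDOW_USAGE_RATIO = 0.75


def usable_context_window(model: str) -> int:
    size = DEFAULT_CONTEXT_WINDOW_SIZE
    for key, value in LLM_CONTEXT_WINDOW_SIZES.items():
        if model.startswith(key):
            size = value  # last matching prefix wins, mirroring the loop above
    return int(size * CONTEXT_WINDOW_USAGE_RATIO)


print(usable_context_window("gemini-1.5-pro-002"))  # 1572864
print(usable_context_window("unknown-model"))       # 6144
```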

@@ -1,4 +1,4 @@
from typing import Optional, Dict, Any
from typing import Any, Dict, Optional
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory, UserMemory

View File

@@ -1,4 +1,4 @@
from typing import Any, Dict, Optional, List
from typing import Any, Dict, List, Optional
from crewai.memory.storage.rag_storage import RAGStorage

View File

@@ -1,4 +1,5 @@
from typing import Any, Dict, Optional
from crewai.memory.memory import Memory
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
from crewai.memory.storage.rag_storage import RAGStorage
@@ -32,7 +33,10 @@ class ShortTermMemory(Memory):
storage
if storage
else RAGStorage(
type="short_term", embedder_config=embedder_config, crew=crew, path=path
type="short_term",
embedder_config=embedder_config,
crew=crew,
path=path,
)
)
super().__init__(storage)

View File

@@ -2,6 +2,7 @@ import os
from typing import Any, Dict, List
from mem0 import MemoryClient
from crewai.memory.storage.interface import Storage

View File

@@ -1,5 +1,6 @@
from functools import wraps
def memoize(func):
cache = {}

View File

@@ -1,12 +1,25 @@
import datetime
import inspect
import json
from pathlib import Path
import logging
import threading
import uuid
from concurrent.futures import Future
from copy import copy
from hashlib import md5
from typing import Any, Dict, List, Optional, Set, Tuple, Type, Union
from pathlib import Path
from typing import (
Any,
Callable,
ClassVar,
Dict,
List,
Optional,
Set,
Tuple,
Type,
Union,
)
from opentelemetry.trace import Span
from pydantic import (
@@ -20,6 +33,7 @@ from pydantic import (
from pydantic_core import PydanticCustomError
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tasks.guardrail_result import GuardrailResult
from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry.telemetry import Telemetry
@@ -49,6 +63,7 @@ class Task(BaseModel):
"""
__hash__ = object.__hash__ # type: ignore
logger: ClassVar[logging.Logger] = logging.getLogger(__name__)
used_tools: int = 0
tools_errors: int = 0
delegations: int = 0
@@ -110,6 +125,55 @@ class Task(BaseModel):
default=None,
)
processed_by_agents: Set[str] = Field(default_factory=set)
guardrail: Optional[Callable[[TaskOutput], Tuple[bool, Any]]] = Field(
default=None,
description="Function to validate task output before proceeding to next task"
)
max_retries: int = Field(
default=3,
description="Maximum number of retries when guardrail fails"
)
retry_count: int = Field(
default=0,
description="Current number of retries"
)
@field_validator("guardrail")
@classmethod
def validate_guardrail_function(cls, v: Optional[Callable]) -> Optional[Callable]:
"""Validate that the guardrail function has the correct signature and behavior.
While type hints provide static checking, this validator ensures runtime safety by:
1. Verifying the function accepts exactly one parameter (the TaskOutput)
2. Checking return type annotations match Tuple[bool, Any] if present
3. Providing clear, immediate error messages for debugging
This runtime validation is crucial because:
- Type hints are optional and can be ignored at runtime
- Function signatures need immediate validation before task execution
- Clear error messages help users debug guardrail implementation issues
Args:
v: The guardrail function to validate
Returns:
The validated guardrail function
Raises:
ValueError: If the function signature is invalid or return annotation
doesn't match Tuple[bool, Any]
"""
if v is not None:
sig = inspect.signature(v)
if len(sig.parameters) != 1:
raise ValueError("Guardrail function must accept exactly one parameter")
# Check return annotation if present, but don't require it
return_annotation = sig.return_annotation
if return_annotation != inspect.Signature.empty:
if not (return_annotation == Tuple[bool, Any] or str(return_annotation) == 'Tuple[bool, Any]'):
raise ValueError("If return type is annotated, it must be Tuple[bool, Any]")
return v
_telemetry: Telemetry = PrivateAttr(default_factory=Telemetry)
_execution_span: Optional[Span] = PrivateAttr(default=None)
@@ -254,7 +318,6 @@ class Task(BaseModel):
)
pydantic_output, json_output = self._export_output(result)
task_output = TaskOutput(
name=self.name,
description=self.description,
@@ -265,6 +328,37 @@ class Task(BaseModel):
agent=agent.role,
output_format=self._get_output_format(),
)
if self.guardrail:
guardrail_result = GuardrailResult.from_tuple(self.guardrail(task_output))
if not guardrail_result.success:
if self.retry_count >= self.max_retries:
raise Exception(
f"Task failed guardrail validation after {self.max_retries} retries. "
f"Last error: {guardrail_result.error}"
)
self.retry_count += 1
context = (
f"### Previous attempt failed validation: {guardrail_result.error}\n\n\n"
f"### Previous result:\n{task_output.raw}\n\n\n"
"Try again, making sure to address the validation error."
)
return self._execute_core(agent, context, tools)
if guardrail_result.result is None:
raise Exception(
"Task guardrail returned None as result. This is not allowed."
)
if isinstance(guardrail_result.result, str):
task_output.raw = guardrail_result.result
pydantic_output, json_output = self._export_output(guardrail_result.result)
task_output.pydantic = pydantic_output
task_output.json_dict = json_output
elif isinstance(guardrail_result.result, TaskOutput):
task_output = guardrail_result.result
self.output = task_output
self._set_end_execution_time(start_time)
@@ -308,7 +402,18 @@ class Task(BaseModel):
if inputs:
self.description = self._original_description.format(**inputs)
self.expected_output = self._original_expected_output.format(**inputs)
self.expected_output = self.interpolate_only(
input_string=self._original_expected_output, inputs=inputs
)
def interpolate_only(self, input_string: str, inputs: Dict[str, Any]) -> str:
"""Interpolate placeholders (e.g., {key}) in a string while leaving JSON untouched."""
escaped_string = input_string.replace("{", "{{").replace("}", "}}")
for key in inputs.keys():
escaped_string = escaped_string.replace(f"{{{{{key}}}}}", f"{{{key}}}")
return escaped_string.format(**inputs)
def increment_tools_errors(self) -> None:
"""Increment the tools errors counter."""
@@ -390,22 +495,33 @@ class Task(BaseModel):
return OutputFormat.RAW
def _save_file(self, result: Any) -> None:
"""Save task output to a file.
Args:
result: The result to save to the file. Can be a dict or any stringifiable object.
Raises:
ValueError: If output_file is not set
RuntimeError: If there is an error writing to the file
"""
if self.output_file is None:
raise ValueError("output_file is not set.")
resolved_path = Path(self.output_file).expanduser().resolve()
directory = resolved_path.parent
try:
resolved_path = Path(self.output_file).expanduser().resolve()
directory = resolved_path.parent
if not directory.exists():
directory.mkdir(parents=True, exist_ok=True)
if not directory.exists():
directory.mkdir(parents=True, exist_ok=True)
with resolved_path.open("w", encoding="utf-8") as file:
if isinstance(result, dict):
import json
json.dump(result, file, ensure_ascii=False, indent=2)
else:
file.write(str(result))
with resolved_path.open("w", encoding="utf-8") as file:
if isinstance(result, dict):
import json
json.dump(result, file, ensure_ascii=False, indent=2)
else:
file.write(str(result))
except (OSError, IOError) as e:
raise RuntimeError(f"Failed to save output file: {e}")
return None
def __repr__(self):

View File

@@ -0,0 +1,56 @@
"""
Module for handling task guardrail validation results.
This module provides the GuardrailResult class which standardizes
the way task guardrails return their validation results.
"""
from typing import Any, Optional, Tuple, Union
from pydantic import BaseModel, field_validator
class GuardrailResult(BaseModel):
"""Result from a task guardrail execution.
This class standardizes the return format of task guardrails,
converting tuple responses into a structured format that can
be easily handled by the task execution system.
Attributes:
success (bool): Whether the guardrail validation passed
result (Any, optional): The validated/transformed result if successful
error (str, optional): Error message if validation failed
"""
success: bool
result: Optional[Any] = None
error: Optional[str] = None
@field_validator("result", "error")
@classmethod
def validate_result_error_exclusivity(cls, v: Any, info) -> Any:
values = info.data
if "success" in values:
if values["success"] and v and "error" in values and values["error"]:
raise ValueError("Cannot have both result and error when success is True")
if not values["success"] and v and "result" in values and values["result"]:
raise ValueError("Cannot have both result and error when success is False")
return v
@classmethod
def from_tuple(cls, result: Tuple[bool, Union[Any, str]]) -> "GuardrailResult":
"""Create a GuardrailResult from a validation tuple.
Args:
result: A tuple of (success, data) where data is either
the validated result or error message.
Returns:
GuardrailResult: A new instance with the tuple data.
"""
success, data = result
return cls(
success=success,
result=data if success else None,
error=data if not success else None
)
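A minimal sketch of the two round-trips (payloads hypothetical):

GuardrailResult.from_tuple((True, "validated output"))
# -> GuardrailResult(success=True, result="validated output", error=None)
GuardrailResult.from_tuple((False, "Output too short"))
# -> GuardrailResult(success=False, result=None, error="Output too short")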

View File

@@ -6,6 +6,7 @@ import os
import platform
import warnings
from contextlib import contextmanager
from importlib.metadata import version
from typing import TYPE_CHECKING, Any, Optional
@@ -16,12 +17,10 @@ def suppress_warnings():
yield
with suppress_warnings():
import pkg_resources
from opentelemetry import trace # noqa: E402
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter # noqa: E402
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
OTLPSpanExporter, # noqa: E402
)
from opentelemetry.sdk.resources import SERVICE_NAME, Resource # noqa: E402
from opentelemetry.sdk.trace import TracerProvider # noqa: E402
from opentelemetry.sdk.trace.export import BatchSpanProcessor # noqa: E402
@@ -104,7 +103,7 @@ class Telemetry:
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
version("crewai"),
)
self._add_attribute(span, "python_version", platform.python_version())
self._add_attribute(span, "crew_key", crew.key)
@@ -306,7 +305,7 @@ class Telemetry:
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
version("crewai"),
)
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
@@ -326,7 +325,7 @@ class Telemetry:
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
version("crewai"),
)
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
@@ -346,7 +345,7 @@ class Telemetry:
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
version("crewai"),
)
if llm:
self._add_attribute(span, "llm", llm.model)
@@ -365,7 +364,7 @@ class Telemetry:
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
version("crewai"),
)
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
@@ -391,7 +390,7 @@ class Telemetry:
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
version("crewai"),
)
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
@@ -472,7 +471,7 @@ class Telemetry:
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
version("crewai"),
)
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
@@ -541,7 +540,7 @@ class Telemetry:
self._add_attribute(
crew._execution_span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
version("crewai"),
)
self._add_attribute(
crew._execution_span, "crew_output", final_string_output

View File

@@ -0,0 +1,45 @@
from typing import Dict, Optional, Union
from pydantic import BaseModel, Field
from crewai.tools.base_tool import BaseTool
from crewai.utilities import I18N
i18n = I18N()
class AddImageToolSchema(BaseModel):
image_url: str = Field(..., description="The URL or path of the image to add")
action: Optional[str] = Field(
default=None,
description="Optional context or question about the image"
)
class AddImageTool(BaseTool):
"""Tool for adding images to the content"""
name: str = Field(default_factory=lambda: i18n.tools("add_image")["name"]) # type: ignore
description: str = Field(default_factory=lambda: i18n.tools("add_image")["description"]) # type: ignore
args_schema: type[BaseModel] = AddImageToolSchema
def _run(
self,
image_url: str,
action: Optional[str] = None,
**kwargs,
) -> dict:
action = action or i18n.tools("add_image")["default_action"] # type: ignore
content = [
{"type": "text", "text": action},
{
"type": "image_url",
"image_url": {
"url": image_url,
},
}
]
return {
"role": "user",
"content": content
}
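A minimal usage sketch (URL and question hypothetical); the returned dict is a multimodal user message in the image_url content format:

AddImageTool()._run(
    image_url="https://example.com/diagram.png",
    action="What does this diagram show?",
)
# -> {"role": "user", "content": [
#        {"type": "text", "text": "What does this diagram show?"},
#        {"type": "image_url", "image_url": {"url": "https://example.com/diagram.png"}}]}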

View File

@@ -1,9 +1,9 @@
from crewai.tools.base_tool import BaseTool
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tools.base_tool import BaseTool
from crewai.utilities import I18N
from .delegate_work_tool import DelegateWorkTool
from .ask_question_tool import AskQuestionTool
from .delegate_work_tool import DelegateWorkTool
class AgentTools:
@@ -20,13 +20,13 @@ class AgentTools:
delegate_tool = DelegateWorkTool(
agents=self.agents,
i18n=self.i18n,
description=self.i18n.tools("delegate_work").format(coworkers=coworkers),
description=self.i18n.tools("delegate_work").format(coworkers=coworkers), # type: ignore
)
ask_tool = AskQuestionTool(
agents=self.agents,
i18n=self.i18n,
description=self.i18n.tools("ask_question").format(coworkers=coworkers),
description=self.i18n.tools("ask_question").format(coworkers=coworkers), # type: ignore
)
return [delegate_tool, ask_tool]

View File

@@ -1,7 +1,9 @@
from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
from typing import Optional
from pydantic import BaseModel, Field
from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
class AskQuestionToolSchema(BaseModel):
question: str = Field(..., description="The question to ask")

View File

@@ -1,9 +1,10 @@
from typing import Optional, Union
from pydantic import Field
from crewai.tools.base_tool import BaseTool
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.task import Task
from crewai.tools.base_tool import BaseTool
from crewai.utilities import I18N
@@ -44,14 +45,14 @@ class BaseAgentTool(BaseTool):
if available_agent.role.casefold().replace("\n", "") == agent_name
]
except Exception as _:
return self.i18n.errors("agent_tool_unexsiting_coworker").format(
return self.i18n.errors("agent_tool_unexisting_coworker").format(
coworkers="\n".join(
[f"- {agent.role.casefold()}" for agent in self.agents]
)
)
if not agent:
return self.i18n.errors("agent_tool_unexsiting_coworker").format(
return self.i18n.errors("agent_tool_unexisting_coworker").format(
coworkers="\n".join(
[f"- {agent.role.casefold()}" for agent in self.agents]
)

View File

@@ -1,8 +1,9 @@
from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
from typing import Optional
from pydantic import BaseModel, Field
from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
class DelegateWorkToolSchema(BaseModel):
task: str = Field(..., description="The task to delegate")

View File

@@ -1,6 +1,5 @@
import ast
import datetime
import os
import time
from difflib import SequenceMatcher
from textwrap import dedent
@@ -11,18 +10,16 @@ from crewai.agents.tools_handler import ToolsHandler
from crewai.task import Task
from crewai.telemetry import Telemetry
from crewai.tools import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
from crewai.tools.tool_usage_events import ToolUsageError, ToolUsageFinished
from crewai.utilities import I18N, Converter, ConverterError, Printer
agentops = None
if os.environ.get("AGENTOPS_API_KEY"):
try:
import agentops # type: ignore
except ImportError:
pass
OPENAI_BIGGER_MODELS = ["gpt-4", "gpt-4o", "o1-preview", "o1-mini"]
try:
import agentops # type: ignore
except ImportError:
agentops = None
OPENAI_BIGGER_MODELS = ["gpt-4", "gpt-4o", "o1-preview", "o1-mini", "o1", "o3", "o3-mini"]
class ToolUsageErrorException(Exception):
@@ -106,6 +103,19 @@ class ToolUsage:
if self.agent.verbose:
self._printer.print(content=f"\n\n{error}\n", color="red")
return error
if isinstance(tool, CrewStructuredTool) and tool.name == self._i18n.tools("add_image")["name"]: # type: ignore
try:
result = self._use(tool_string=tool_string, tool=tool, calling=calling)
return result
except Exception as e:
error = getattr(e, "message", str(e))
self.task.increment_tools_errors()
if self.agent.verbose:
self._printer.print(content=f"\n\n{error}\n", color="red")
return error
return f"{self._use(tool_string=tool_string, tool=tool, calling=calling)}" # type: ignore # BUG?: "_use" of "ToolUsage" does not return a value (it only ever returns None)
def _use(
@@ -422,9 +432,10 @@ class ToolUsage:
elif value.lower() in [
"true",
"false",
"null",
]: # Check for boolean and null values
value = value.lower()
value = value.lower().capitalize()
elif value.lower() == "null":
value = "None"
else:
# Assume the value is a string and needs quotes
value = '"' + value.replace('"', '\\"') + '"'

View File

@@ -1,6 +1,7 @@
from typing import Any, Dict
from pydantic import BaseModel
from datetime import datetime
from typing import Any, Dict
from pydantic import BaseModel
class ToolUsageEvent(BaseModel):

View File

@@ -12,7 +12,7 @@
"tools": "\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nUse the following format:\n\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple python dictionary, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce all necessary information is gathered:\n\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n",
"no_tools": "\nTo give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!",
"format": "I MUST either use a tool (use one at time) OR give my best final answer not both at the same time. To Use the following format:\n\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action, dictionary enclosed in curly braces\nObservation: the result of the action\n... (this Thought/Action/Action Input/Result can repeat N times)\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n",
"final_answer_format": "If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfy the expect criteria, use the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\n\n",
"final_answer_format": "If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfies the expected criteria, use the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\n\n",
"format_without_tools": "\nSorry, I didn't use the right format. I MUST either use a tool (among the available ones), OR give my best final answer.\nI just remembered the expected format I must follow:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Result can repeat N times)\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n",
"task_with_context": "{task}\n\nThis is the context you're working with:\n{context}",
"expected_output": "\nThis is the expect criteria for your final answer: {expected_output}\nyou MUST return the actual complete content as the final answer, not a summary.",
@@ -28,7 +28,7 @@
"errors": {
"force_final_answer_error": "You can't keep going, this was the best you could do.\n {formatted_answer.text}",
"force_final_answer": "Now it's time you MUST give your absolute best final answer. You'll ignore all previous instructions, stop using any tools, and just return your absolute BEST Final answer.",
"agent_tool_unexsiting_coworker": "\nError executing tool. coworker mentioned not found, it must be one of the following options:\n{coworkers}\n",
"agent_tool_unexisting_coworker": "\nError executing tool. coworker mentioned not found, it must be one of the following options:\n{coworkers}\n",
"task_repeated_usage": "I tried reusing the same input, I must stop using this action input. I'll try something else instead.\n\n",
"tool_usage_error": "I encountered an error: {error}",
"tool_arguments_error": "Error: the Action Input is not a valid key, value dictionary.",
@@ -37,6 +37,11 @@
},
"tools": {
"delegate_work": "Delegate a specific task to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the task you want them to do, and ALL necessary context to execute the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them.",
"ask_question": "Ask a specific question to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the question you have for them, and ALL necessary context to ask the question properly, they know nothing about the question, so share absolute everything you know, don't reference things but instead explain them."
"ask_question": "Ask a specific question to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the question you have for them, and ALL necessary context to ask the question properly, they know nothing about the question, so share absolute everything you know, don't reference things but instead explain them.",
"add_image": {
"name": "Add image to content",
"description": "See image to understand it's content, you can optionally ask a question about the image",
"default_action": "Please provide a detailed description of this image, including all visual elements, context, and any notable details you can observe."
}
}
}

View File

@@ -1,15 +1,17 @@
from datetime import datetime, date
import json
from uuid import UUID
from pydantic import BaseModel
from datetime import date, datetime
from decimal import Decimal
from enum import Enum
from uuid import UUID
from pydantic import BaseModel
class CrewJSONEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, BaseModel):
return self._handle_pydantic_model(obj)
elif isinstance(obj, UUID) or isinstance(obj, Decimal):
elif isinstance(obj, UUID) or isinstance(obj, Decimal) or isinstance(obj, Enum):
return str(obj)
elif isinstance(obj, datetime) or isinstance(obj, date):

View File

@@ -1,10 +1,11 @@
import json
import regex
from typing import Any, Type
from crewai.agents.parser import OutputParserException
import regex
from pydantic import BaseModel, ValidationError
from crewai.agents.parser import OutputParserException
class CrewPydanticOutputParser:
"""Parses the text into pydantic models"""

View File

@@ -1,6 +1,7 @@
import os
from typing import Any, Dict, cast
from chromadb import EmbeddingFunction, Documents, Embeddings
from chromadb import Documents, EmbeddingFunction, Embeddings
from chromadb.api.types import validate_embedding_function

View File

@@ -1,13 +1,18 @@
from typing import Union
from crewai.llm import LLM
from collections import defaultdict
from pydantic import BaseModel, Field
from crewai.utilities.logger import Logger
from rich.box import HEAVY_EDGE
from rich.console import Console
from rich.table import Table
from crewai.agent import Agent
from crewai.task import Task
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry import Telemetry
from pydantic import BaseModel, Field
from rich.box import HEAVY_EDGE
from rich.console import Console
from rich.table import Table
class TaskEvaluationPydanticOutput(BaseModel):
@@ -22,7 +27,7 @@ class CrewEvaluator:
Attributes:
crew (Crew): The crew of agents to evaluate.
openai_model_name (str): The model to use for evaluating the performance of the agents (for now ONLY OpenAI accepted).
openai_model_name (Union[str, LLM]): Either a model name string or an LLM instance to use for evaluating the performance of the agents.
tasks_scores (defaultdict): A dictionary to store the scores of the agents for each task.
iteration (int): The current iteration of the evaluation.
"""
@@ -31,10 +36,29 @@ class CrewEvaluator:
run_execution_times: defaultdict = defaultdict(list)
iteration: int = 0
def __init__(self, crew, openai_model_name: str):
def __init__(self, crew, openai_model_name: Union[str, LLM]):
"""Initialize the CrewEvaluator.
Args:
crew (Crew): The crew to evaluate
openai_model_name (Union[str, LLM]): Either a model name string or an LLM instance
to use for evaluation. If a string is provided, it will be used to create an
LLM instance with default settings. If an LLM instance is provided, its settings
(like temperature) will be preserved.
Raises:
ValueError: If openai_model_name is not a string or LLM instance.
"""
self.crew = crew
self.openai_model_name = openai_model_name
if not isinstance(openai_model_name, (str, LLM)):
raise ValueError(f"Invalid model type '{type(openai_model_name)}'. Expected str or LLM instance.")
self.model_instance = openai_model_name if isinstance(openai_model_name, LLM) else LLM(model=openai_model_name)
self._telemetry = Telemetry()
self._logger = Logger()
self._logger.log(
"info",
f"Initializing CrewEvaluator with model: {openai_model_name if isinstance(openai_model_name, str) else openai_model_name.model}"
)
self._setup_for_evaluating()
def _setup_for_evaluating(self) -> None:
@@ -50,7 +74,7 @@ class CrewEvaluator:
),
backstory="Evaluator agent for crew evaluation with precise capabilities to evaluate the performance of the agents in the crew based on the tasks they have performed",
verbose=False,
llm=self.openai_model_name,
llm=self.model_instance,
)
def _evaluation_task(
@@ -180,7 +204,11 @@ class CrewEvaluator:
self.crew,
evaluation_result.pydantic.quality,
current_task._execution_time,
self.openai_model_name,
self.model_instance.model,
)
self._logger.log(
"info",
f"Task evaluation completed with quality score: {evaluation_result.pydantic.quality}"
)
self.tasks_scores[self.iteration].append(evaluation_result.pydantic.quality)
self.run_execution_times[self.iteration].append(
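A minimal sketch of the two accepted forms (assuming a crew instance is in scope); an LLM instance keeps its own settings such as temperature, while a plain string is wrapped in LLM(model=...) with defaults:

from crewai.llm import LLM

CrewEvaluator(crew, "gpt-4o-mini")                       # string form
CrewEvaluator(crew, LLM(model="gpt-4o", temperature=0))  # instance form, settings preserved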

View File

@@ -1,4 +1,3 @@
import os
from typing import List
from pydantic import BaseModel, Field
@@ -6,27 +5,17 @@ from pydantic import BaseModel, Field
from crewai.utilities import Converter
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
agentops = None
try:
from agentops import track_agent # type: ignore
except ImportError:
def mock_agent_ops_provider():
def track_agent(*args, **kwargs):
def track_agent(name):
def noop(f):
return f
return noop
return track_agent
agentops = None
if os.environ.get("AGENTOPS_API_KEY"):
try:
from agentops import track_agent
except ImportError:
track_agent = mock_agent_ops_provider()
else:
track_agent = mock_agent_ops_provider()
class Entity(BaseModel):
name: str = Field(description="The name of the entity.")

View File

@@ -1,7 +1,7 @@
from typing import Any, Callable, Generic, List, Dict, Type, TypeVar
from functools import wraps
from pydantic import BaseModel
from typing import Any, Callable, Dict, Generic, List, Type, TypeVar
from pydantic import BaseModel
T = TypeVar("T")
EVT = TypeVar("EVT", bound=BaseModel)

View File

@@ -1,6 +1,6 @@
import json
import os
from typing import Dict, Optional
from typing import Dict, Optional, Union
from pydantic import BaseModel, Field, PrivateAttr, model_validator
@@ -41,8 +41,8 @@ class I18N(BaseModel):
def errors(self, error: str) -> str:
return self.retrieve("errors", error)
def tools(self, error: str) -> str:
return self.retrieve("tools", error)
def tools(self, tool: str) -> Union[str, Dict[str, str]]:
return self.retrieve("tools", tool)
def retrieve(self, kind, key) -> str:
try:
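The widened return type reflects that some entries (e.g., add_image) are now nested dicts rather than plain strings; a hypothetical sketch:

i18n = I18N()
i18n.tools("delegate_work")      # -> str, a format template
i18n.tools("add_image")["name"]  # -> str, read from the nested dict entry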

View File

@@ -1,4 +1,5 @@
from typing import Any, List, Optional
from pydantic import BaseModel, Field
from crewai.agent import Agent

View File

@@ -1,5 +1,7 @@
from pydantic import BaseModel, Field
from typing import Any, Optional
from pydantic import BaseModel, Field
from crewai.utilities import I18N

View File

@@ -1,4 +1,4 @@
from typing import Type, get_args, get_origin, Union
from typing import Type, Union, get_args, get_origin
from pydantic import BaseModel

View File

@@ -1,6 +1,8 @@
from pydantic import BaseModel, Field
from datetime import datetime
from typing import Dict, Any, Optional, List
from typing import Any, Dict, List, Optional
from pydantic import BaseModel, Field
from crewai.memory.storage.kickoff_task_outputs_storage import (
KickoffTaskOutputsSQLiteStorage,
)

View File

@@ -1,5 +1,6 @@
from litellm.integrations.custom_logger import CustomLogger
from litellm.types.utils import Usage
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
@@ -11,7 +12,7 @@ class TokenCalcHandler(CustomLogger):
if self.token_cost_process is None:
return
usage : Usage = response_obj["usage"]
usage: Usage = response_obj["usage"]
self.token_cost_process.sum_successful_requests(1)
self.token_cost_process.sum_prompt_tokens(usage.prompt_tokens)
self.token_cost_process.sum_completion_tokens(usage.completion_tokens)

View File

@@ -1595,19 +1595,15 @@ def test_agent_execute_task_with_ollama():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_with_knowledge_sources():
# Create a knowledge source with some content
content = "Brandon's favorite color is blue and he likes Mexican food."
string_source = StringKnowledgeSource(
content=content, metadata={"preference": "personal"}
)
content = "Brandon's favorite color is red and he likes Mexican food."
string_source = StringKnowledgeSource(content=content)
with patch(
"crewai.knowledge.storage.knowledge_storage.KnowledgeStorage"
) as MockKnowledge:
mock_knowledge_instance = MockKnowledge.return_value
mock_knowledge_instance.sources = [string_source]
mock_knowledge_instance.query.return_value = [
{"content": content, "metadata": {"preference": "personal"}}
]
mock_knowledge_instance.query.return_value = [{"content": content}]
agent = Agent(
role="Information Agent",
@@ -1628,4 +1624,4 @@ def test_agent_with_knowledge_sources():
result = crew.kickoff()
# Assert that the agent provides the correct information
assert "blue" in result.raw.lower()
assert "red" in result.raw.lower()

View File

@@ -1,9 +1,10 @@
import hashlib
from typing import Any, List, Optional
from pydantic import BaseModel
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tools.base_tool import BaseTool
from pydantic import BaseModel
class TestAgent(BaseAgent):

View File

@@ -1,10 +1,11 @@
import pytest
from crewai.agents.parser import CrewAgentParser
from crewai.agents.crew_agent_executor import (
AgentAction,
AgentFinish,
OutputParserException,
)
from crewai.agents.parser import CrewAgentParser
@pytest.fixture

View File

@@ -26237,7 +26237,7 @@ interactions:
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
I have already used.\n\nIf you don''t need to use any more tools, you must give
your best complete final answer, make sure it satisfy the expect criteria, use
your best complete final answer, make sure it satisfies the expected criteria, use
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\n\n"}], "model": "gpt-4o"}'
headers:
@@ -26590,7 +26590,7 @@ interactions:
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
I have already used.\n\nIf you don''t need to use any more tools, you must give
your best complete final answer, make sure it satisfy the expect criteria, use
your best complete final answer, make sure it satisfies the expected criteria, use
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
"I did it wrong. Tried to both perform Action and give a Final Answer at the
@@ -26941,7 +26941,7 @@ interactions:
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
I have already used.\n\nIf you don''t need to use any more tools, you must give
your best complete final answer, make sure it satisfy the expect criteria, use
your best complete final answer, make sure it satisfies the expected criteria, use
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
"I did it wrong. Tried to both perform Action and give a Final Answer at the
@@ -27292,7 +27292,7 @@ interactions:
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
I have already used.\n\nIf you don''t need to use any more tools, you must give
your best complete final answer, make sure it satisfy the expect criteria, use
your best complete final answer, make sure it satisfies the expected criteria, use
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
"I did it wrong. Tried to both perform Action and give a Final Answer at the
@@ -27647,7 +27647,7 @@ interactions:
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
I have already used.\n\nIf you don''t need to use any more tools, you must give
your best complete final answer, make sure it satisfy the expect criteria, use
your best complete final answer, make sure it satisfies the expected criteria, use
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
"I did it wrong. Tried to both perform Action and give a Final Answer at the
@@ -28005,7 +28005,7 @@ interactions:
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
I have already used.\n\nIf you don''t need to use any more tools, you must give
your best complete final answer, make sure it satisfy the expect criteria, use
your best complete final answer, make sure it satisfies the expected criteria, use
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
"I did it wrong. Tried to both perform Action and give a Final Answer at the
@@ -28364,7 +28364,7 @@ interactions:
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
I have already used.\n\nIf you don''t need to use any more tools, you must give
your best complete final answer, make sure it satisfy the expect criteria, use
your best complete final answer, make sure it satisfies the expected criteria, use
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
"I did it wrong. Tried to both perform Action and give a Final Answer at the
@@ -28718,7 +28718,7 @@ interactions:
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
I have already used.\n\nIf you don''t need to use any more tools, you must give
your best complete final answer, make sure it satisfy the expect criteria, use
your best complete final answer, make sure it satisfies the expected criteria, use
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
"I did it wrong. Tried to both perform Action and give a Final Answer at the
@@ -29082,7 +29082,7 @@ interactions:
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
I have already used.\n\nIf you don''t need to use any more tools, you must give
your best complete final answer, make sure it satisfy the expect criteria, use
your best complete final answer, make sure it satisfies the expected criteria, use
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
"I did it wrong. Tried to both perform Action and give a Final Answer at the
@@ -29441,7 +29441,7 @@ interactions:
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
I have already used.\n\nIf you don''t need to use any more tools, you must give
your best complete final answer, make sure it satisfy the expect criteria, use
your best complete final answer, make sure it satisfies the expected criteria, use
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
"I did it wrong. Tried to both perform Action and give a Final Answer at the
@@ -29802,7 +29802,7 @@ interactions:
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
I have already used.\n\nIf you don''t need to use any more tools, you must give
your best complete final answer, make sure it satisfy the expect criteria, use
your best complete final answer, make sure it satisfies the expected criteria, use
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
"I did it wrong. Tried to both perform Action and give a Final Answer at the
@@ -30170,7 +30170,7 @@ interactions:
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
I have already used.\n\nIf you don''t need to use any more tools, you must give
your best complete final answer, make sure it satisfy the expect criteria, use
your best complete final answer, make sure it satisfies the expected criteria, use
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
"I did it wrong. Tried to both perform Action and give a Final Answer at the
@@ -30533,7 +30533,7 @@ interactions:
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
I have already used.\n\nIf you don''t need to use any more tools, you must give
your best complete final answer, make sure it satisfy the expect criteria, use
your best complete final answer, make sure it satisfies the expected criteria, use
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
"I did it wrong. Tried to both perform Action and give a Final Answer at the
@@ -30907,7 +30907,7 @@ interactions:
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
I have already used.\n\nIf you don''t need to use any more tools, you must give
your best complete final answer, make sure it satisfy the expect criteria, use
your best complete final answer, make sure it satisfies the expected criteria, use
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
"I did it wrong. Tried to both perform Action and give a Final Answer at the
@@ -31273,7 +31273,7 @@ interactions:
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
I have already used.\n\nIf you don''t need to use any more tools, you must give
your best complete final answer, make sure it satisfy the expect criteria, use
your best complete final answer, make sure it satisfies the expected criteria, use
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
"I did it wrong. Tried to both perform Action and give a Final Answer at the
@@ -31644,7 +31644,7 @@ interactions:
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
I have already used.\n\nIf you don''t need to use any more tools, you must give
your best complete final answer, make sure it satisfy the expect criteria, use
your best complete final answer, make sure it satisfies the expected criteria, use
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
"I did it wrong. Tried to both perform Action and give a Final Answer at the
@@ -32015,7 +32015,7 @@ interactions:
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
I have already used.\n\nIf you don''t need to use any more tools, you must give
your best complete final answer, make sure it satisfy the expect criteria, use
your best complete final answer, make sure it satisfies the expected criteria, use
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
"I did it wrong. Tried to both perform Action and give a Final Answer at the

View File

@@ -247,7 +247,7 @@ interactions:
{"role": "user", "content": "I did it wrong. Invalid Format: I missed the ''Action:''
after ''Thought:''. I will do right next, and don''t use a tool I have already
used.\n\nIf you don''t need to use any more tools, you must give your best complete
final answer, make sure it satisfy the expect criteria, use the EXACT format
final answer, make sure it satisfies the expected criteria, use the EXACT format
below:\n\nThought: I now can give a great answer\nFinal Answer: my best complete
final answer to the task.\n\n"}], "model": "o1-preview"}'
headers:

File diff suppressed because one or more lines are too long

View File

@@ -3,223 +3,17 @@ interactions:
body: '{"messages": [{"role": "system", "content": "You are CEO. You''re an long
time CEO of a content creation agency with a Senior Writer on the team. You''re
now working on a new project and want to make sure the content produced is amazing.\nYour
personal goal is: Make sure the writers in your company produce amazing content.\nYou
ONLY have access to the following tools, and should NEVER make up tools that
are not listed here:\n\nTool Name: Delegate work to coworker(task: str, context:
str, coworker: Optional[str] = None, **kwargs)\nTool Description: Delegate a
specific task to one of the following coworkers: Senior Writer\nThe input to
this tool should be the coworker, the task you want them to do, and ALL necessary
context to execute the task, they know nothing about the task, so share absolute
everything you know, don''t reference things but instead explain them.\nTool
Arguments: {''task'': {''title'': ''Task'', ''type'': ''string''}, ''context'':
{''title'': ''Context'', ''type'': ''string''}, ''coworker'': {''title'': ''Coworker'',
''type'': ''string''}, ''kwargs'': {''title'': ''Kwargs'', ''type'': ''object''}}\nTool
Name: Ask question to coworker(question: str, context: str, coworker: Optional[str]
= None, **kwargs)\nTool Description: Ask a specific question to one of the following
coworkers: Senior Writer\nThe input to this tool should be the coworker, the
question you have for them, and ALL necessary context to ask the question properly,
they know nothing about the question, so share absolute everything you know,
don''t reference things but instead explain them.\nTool Arguments: {''question'':
{''title'': ''Question'', ''type'': ''string''}, ''context'': {''title'': ''Context'',
''type'': ''string''}, ''coworker'': {''title'': ''Coworker'', ''type'': ''string''},
''kwargs'': {''title'': ''Kwargs'', ''type'': ''object''}}\n\nUse the following
format:\n\nThought: you should always think about what to do\nAction: the action
to take, only one name of [Delegate work to coworker, Ask question to coworker],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple python dictionary, enclosed in curly braces, using \" to wrap
keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n"}, {"role": "user", "content":
"\nCurrent Task: Produce and amazing 1 paragraph draft of an article about AI
Agents.\n\nThis is the expect criteria for your final answer: A 4 paragraph
article about AI.\nyou MUST return the actual complete content as the final
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '2762'
content-type:
- application/json
cookie:
- __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AB7ZvxqgeOayGTQWwR61ASlZp0s74\",\n \"object\":
\"chat.completion\",\n \"created\": 1727214103,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: To ensure the content is amazing,
I'll delegate the task of producing a one-paragraph draft of an article about
AI Agents to the Senior Writer with all necessary context.\\n\\nAction: Delegate
work to coworker\\nAction Input: \\n{\\n \\\"coworker\\\": \\\"Senior Writer\\\",
\\n \\\"task\\\": \\\"Produce a one paragraph draft of an article about AI
Agents\\\", \\n \\\"context\\\": \\\"We need an amazing one-paragraph draft
as the beginning of a 4-paragraph article about AI Agents. This is for a high-stakes
project that critically impacts our company. The paragraph should highlight
what AI Agents are, their significance, and how they are transforming various
industries. The tone should be professional yet engaging. Make sure the content
is original, insightful, and thoroughly researched.\\\"\\n}\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 608,\n \"completion_tokens\":
160,\n \"total_tokens\": 768,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c85f0b038a71cf3-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 24 Sep 2024 21:41:45 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1826'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999325'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 1ms
x-request-id:
- req_79054638deeb01da76c5bba273bffc28
http_version: HTTP/1.1
status_code: 200
- request:
body: !!binary |
Cq8OCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkShg4KEgoQY3Jld2FpLnRl
bGVtZXRyeRKQAgoQg15EMIBbDpydrcK3GAUYfBII5VYz5B10kmgqDlRhc2sgRXhlY3V0aW9uMAE5
aGIpYwtM+BdBIO6VVRNM+BdKLgoIY3Jld19rZXkSIgogZTNmZGEwZjMxMTBmZTgwYjE4OTQ3YzAx
NDcxNDMwYTRKMQoHY3Jld19pZBImCiRjNzM1NzdhYi0xYThhLTQzMGYtYjYyZi01MTBlYWMyMWI3
MThKLgoIdGFza19rZXkSIgogNWZhNjVjMDZhOWUzMWYyYzY5NTQzMjY2OGFjZDYyZGRKMQoHdGFz
a19pZBImCiQ3MjAzMjYyMC0yMzJmLTQ5ZTMtOGMyNy0xYzBlOWJhNjFiZDB6AhgBhQEAAQAAEssJ
ChB+du4H1wHcku5blhLQBtuoEgiXVguc5KA1RyoMQ3JldyBDcmVhdGVkMAE54IJsVxNM+BdBcCN4
VxNM+BdKGgoOY3Jld2FpX3ZlcnNpb24SCAoGMC42MS4wShoKDnB5dGhvbl92ZXJzaW9uEggKBjMu
MTEuN0ouCghjcmV3X2tleRIiCiBlNjQ5NTczYTI2ZTU4NzkwY2FjMjFhMzdjZDQ0NDM3YUoxCgdj
cmV3X2lkEiYKJDI4ZTY0YmQ3LWNlYWMtNDYxOS04MmM3LTIzNmRkNTQxOGM4N0ocCgxjcmV3X3By
b2Nlc3MSDAoKc2VxdWVudGlhbEoRCgtjcmV3X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2Zf
dGFza3MSAhgBShsKFWNyZXdfbnVtYmVyX29mX2FnZW50cxICGAJKgAUKC2NyZXdfYWdlbnRzEvAE
Cu0EW3sia2V5IjogIjMyODIxN2I2YzI5NTliZGZjNDdjYWQwMGU4NDg5MGQwIiwgImlkIjogIjQ1
NjMxMmU3LThkMmMtNDcyMi1iNWNkLTlhMGRhMzg5MmM3OCIsICJyb2xlIjogIkNFTyIsICJ2ZXJi
b3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAxNSwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25f
Y2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8iLCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6
IHRydWUsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6
IDIsICJ0b29sc19uYW1lcyI6IFtdfSwgeyJrZXkiOiAiOWE1MDE1ZWY0ODk1ZGM2Mjc4ZDU0ODE4
YmE0NDZhZjciLCAiaWQiOiAiMTM0MDg5MjAtNzVjOC00MTk3LWIwNmQtY2I4MmNkZjhkZDhhIiwg
InJvbGUiOiAiU2VuaW9yIFdyaXRlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAx
NSwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJn
cHQtNG8iLCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6IGZhbHNlLCAiYWxsb3dfY29kZV9leGVjdXRp
b24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGltaXQiOiAyLCAidG9vbHNfbmFtZXMiOiBbXX1dSvgB
CgpjcmV3X3Rhc2tzEukBCuYBW3sia2V5IjogIjBiOWQ2NWRiNmI3YWVkZmIzOThjNTllMmE5Zjcx
ZWM1IiwgImlkIjogImQ0YjVhZmE2LTczNTEtNDUxMy04NzY2LTIzOGNjYTk5ZjRlZiIsICJhc3lu
Y19leGVjdXRpb24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8iOiBmYWxzZSwgImFnZW50X3JvbGUi
OiAiQ0VPIiwgImFnZW50X2tleSI6ICIzMjgyMTdiNmMyOTU5YmRmYzQ3Y2FkMDBlODQ4OTBkMCIs
ICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEo4CChCLEGLGYlBkv0YucoYjY1NeEghRpGin
zpZUiSoMVGFzayBDcmVhdGVkMAE5KCA2WBNM+BdBaLw2WBNM+BdKLgoIY3Jld19rZXkSIgogZTY0
OTU3M2EyNmU1ODc5MGNhYzIxYTM3Y2Q0NDQzN2FKMQoHY3Jld19pZBImCiQyOGU2NGJkNy1jZWFj
LTQ2MTktODJjNy0yMzZkZDU0MThjODdKLgoIdGFza19rZXkSIgogMGI5ZDY1ZGI2YjdhZWRmYjM5
OGM1OWUyYTlmNzFlYzVKMQoHdGFza19pZBImCiRkNGI1YWZhNi03MzUxLTQ1MTMtODc2Ni0yMzhj
Y2E5OWY0ZWZ6AhgBhQEAAQAA
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '1842'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.27.0
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Tue, 24 Sep 2024 21:41:46 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Senior Writer. You''re
a senior writer, specialized in technology, software engineering, AI and startups.
You work as a freelancer and are now working on writing content for a new customer.\nYour
personal goal is: Write the best content about AI and AI agents.\nTo give my
best complete final answer to the task use the exact following format:\n\nThought:
personal goal is: Make sure the writers in your company produce amazing content.\nTo
give my best complete final answer to the task use the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent
Task: Produce a one paragraph draft of an article about AI Agents\n\nThis is
the expect criteria for your final answer: Your best answer to your coworker
asking you this, accounting for the context shared.\nyou MUST return the actual
complete content as the final answer, not a summary.\n\nThis is the context
you''re working with:\nWe need an amazing one-paragraph draft as the beginning
of a 4-paragraph article about AI Agents. This is for a high-stakes project
that critically impacts our company. The paragraph should highlight what AI
Agents are, their significance, and how they are transforming various industries.
The tone should be professional yet engaging. Make sure the content is original,
insightful, and thoroughly researched.\n\nBegin! This is VERY important to you,
use the tools available and give your best Final Answer, your job depends on
it!\n\nThought:"}], "model": "gpt-4o"}'
Task: Produce and amazing 1 paragraph draft of an article about AI Agents.\n\nThis
is the expect criteria for your final answer: A 4 paragraph article about AI.\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
@@ -228,16 +22,13 @@ interactions:
connection:
- keep-alive
content-length:
- '1545'
- '1105'
content-type:
- application/json
cookie:
- __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
@@ -247,9 +38,11 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.47.0
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -257,31 +50,51 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AB7ZxDYcPlSiBZsftdRs2cWbUJllW\",\n \"object\":
\"chat.completion\",\n \"created\": 1727214105,\n \"model\": \"gpt-4o-2024-05-13\",\n
content: "{\n \"id\": \"chatcmpl-Ahe7liUPejwfqxMe8aEWmKGJ837em\",\n \"object\":
\"chat.completion\",\n \"created\": 1734965705,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: Artificial Intelligence (AI) Agents are sophisticated computer programs
designed to perform tasks that typically require human intelligence, such as
decision making, problem-solving, and learning. These agents operate autonomously,
utilizing vast amounts of data, advanced algorithms, and machine learning techniques
to analyze their environment, adapt to new information, and improve their performance
over time. The significance of AI Agents lies in their transformative potential
across various industries. In healthcare, they assist in diagnosing diseases
with greater accuracy; in finance, they predict market trends and manage risks;
in customer service, they provide personalized and efficient responses. As these
AI-powered entities continue to evolve, they are not only enhancing operational
efficiencies but also driving innovation and creating new opportunities for
growth and development in every sector they penetrate.\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 297,\n \"completion_tokens\":
160,\n \"total_tokens\": 457,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
Answer: In the rapidly evolving landscape of technology, AI agents have emerged
as formidable tools, revolutionizing how we interact with data and automate
tasks. These sophisticated systems leverage machine learning and natural language
processing to perform a myriad of functions, from virtual personal assistants
to complex decision-making companions in industries such as finance, healthcare,
and education. By mimicking human intelligence, AI agents can analyze massive
data sets at unparalleled speeds, enabling businesses to uncover valuable insights,
enhance productivity, and elevate user experiences to unprecedented levels.\\n\\nOne
of the most striking aspects of AI agents is their adaptability; they learn
from their interactions and continuously improve their performance over time.
This feature is particularly valuable in customer service where AI agents can
address inquiries, resolve issues, and provide personalized recommendations
without the limitations of human fatigue. Moreover, with intuitive interfaces,
AI agents enhance user interactions, making technology more accessible and user-friendly,
thereby breaking down barriers that have historically hindered digital engagement.\\n\\nDespite
their immense potential, the deployment of AI agents raises important ethical
and practical considerations. Issues related to privacy, data security, and
the potential for job displacement necessitate thoughtful dialogue and proactive
measures. Striking a balance between technological innovation and societal impact
will be crucial as organizations integrate these agents into their operations.
Additionally, ensuring transparency in AI decision-making processes is vital
to maintain public trust as AI agents become an integral part of daily life.\\n\\nLooking
ahead, the future of AI agents appears bright, with ongoing advancements promising
even greater capabilities. As we continue to harness the power of AI, we can
expect these agents to play a transformative role in shaping various sectors\u2014streamlining
workflows, enabling smarter decision-making, and fostering more personalized
experiences. Embracing this technology responsibly can lead to a future where
AI agents not only augment human effort but also inspire creativity and efficiency
across the board, ultimately redefining our interaction with the digital world.\",\n
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 208,\n \"completion_tokens\":
382,\n \"total_tokens\": 590,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c85f0c0cf961cf3-GRU
- 8f6930c97a33ae54-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -289,45 +102,77 @@ interactions:
Content-Type:
- application/json
Date:
- Tue, 24 Sep 2024 21:41:48 GMT
- Mon, 23 Dec 2024 14:55:10 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=g58erGPkGAltcfYpDRU4IsdEEzb955dGmBOAZacFlPA-1734965710-1.0.1.1-IiodiX3uxbT5xSa4seI7M.gRM4Jj46h2d6ZW2wCkSUYUAX.ivRh_sGQN2hucEMzdG8O87pc00dCl7E5W8KkyEA;
path=/; expires=Mon, 23-Dec-24 15:25:10 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=eQzzWvIXDS8Me1OIBdCG5F1qFyVfAo3sumvYRE7J41E-1734965710778-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '2468'
- '5401'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
- '30000'
x-ratelimit-limit-tokens:
- '30000000'
- '150000000'
x-ratelimit-remaining-requests:
- '9999'
- '29999'
x-ratelimit-remaining-tokens:
- '29999625'
- '149999746'
x-ratelimit-reset-requests:
- 6ms
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_66c8801b42ac865249246d98225c1492
- req_30791533923ae20626ef35a03ae66172
http_version: HTTP/1.1
status_code: 200
- request:
body: !!binary |
CtwBCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkSswEKEgoQY3Jld2FpLnRl
bGVtZXRyeRKcAQoQROg/k5NCUGdgfvfLrFlQDxIIlfh6oMbmqu0qClRvb2wgVXNhZ2UwATlws+Wj
FEz4F0EwBeijFEz4F0oaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjYxLjBKKAoJdG9vbF9uYW1lEhsK
GURlbGVnYXRlIHdvcmsgdG8gY293b3JrZXJKDgoIYXR0ZW1wdHMSAhgBegIYAYUBAAEAAA==
CqYMCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkS/QsKEgoQY3Jld2FpLnRl
bGVtZXRyeRLVCQoQLH3VghpS+l/DatJl8rrpvRIIUpNEm7ELU08qDENyZXcgQ3JlYXRlZDABObgs
nNId1hMYQfgVpdId1hMYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoaCg5weXRob25fdmVy
c2lvbhIICgYzLjExLjdKLgoIY3Jld19rZXkSIgogZTY0OTU3M2EyNmU1ODc5MGNhYzIxYTM3Y2Q0
NDQzN2FKMQoHY3Jld19pZBImCiQzYjVkNDFjNC1kZWJiLTQ2MzItYmIwMC1mNTdhNmM2M2QwMThK
HAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdf
bnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgCSooFCgtjcmV3
X2FnZW50cxL6BAr3BFt7ImtleSI6ICIzMjgyMTdiNmMyOTU5YmRmYzQ3Y2FkMDBlODQ4OTBkMCIs
ICJpZCI6ICI1Yjk4NDA2OS03MjVlLTQxOWYtYjdiZS1mMDdjMTYyOGNkZjIiLCAicm9sZSI6ICJD
RU8iLCAidmVyYm9zZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjAsICJtYXhfcnBtIjogbnVsbCwg
ImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvLW1pbmkiLCAiZGVsZWdh
dGlvbl9lbmFibGVkPyI6IHRydWUsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1h
eF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfSwgeyJrZXkiOiAiOWE1MDE1ZWY0
ODk1ZGM2Mjc4ZDU0ODE4YmE0NDZhZjciLCAiaWQiOiAiZjkwZWI0ZmItMzUyMC00ZDAyLTlhNDYt
NDE2ZTNlNTQ5NWYxIiwgInJvbGUiOiAiU2VuaW9yIFdyaXRlciIsICJ2ZXJib3NlPyI6IGZhbHNl
LCAibWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0i
OiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2Us
ICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0
b29sc19uYW1lcyI6IFtdfV1K+AEKCmNyZXdfdGFza3MS6QEK5gFbeyJrZXkiOiAiMGI5ZDY1ZGI2
YjdhZWRmYjM5OGM1OWUyYTlmNzFlYzUiLCAiaWQiOiAiNzdmNDY3MDYtNzRjZi00ZGVkLThlMDUt
NmRlZGM0MmYwZDliIiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6
IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJDRU8iLCAiYWdlbnRfa2V5IjogIjMyODIxN2I2YzI5NTli
ZGZjNDdjYWQwMGU4NDg5MGQwIiwgInRvb2xzX25hbWVzIjogW119XXoCGAGFAQABAAASjgIKEBvb
LkoAnHiD1gUnbftefpYSCNb1+4JxldizKgxUYXNrIENyZWF0ZWQwATmwYcTSHdYTGEEQz8TSHdYT
GEouCghjcmV3X2tleRIiCiBlNjQ5NTczYTI2ZTU4NzkwY2FjMjFhMzdjZDQ0NDM3YUoxCgdjcmV3
X2lkEiYKJDNiNWQ0MWM0LWRlYmItNDYzMi1iYjAwLWY1N2E2YzYzZDAxOEouCgh0YXNrX2tleRIi
CiAwYjlkNjVkYjZiN2FlZGZiMzk4YzU5ZTJhOWY3MWVjNUoxCgd0YXNrX2lkEiYKJDc3ZjQ2NzA2
LTc0Y2YtNGRlZC04ZTA1LTZkZWRjNDJmMGQ5YnoCGAGFAQABAAA=
headers:
Accept:
- '*/*'
@@ -336,7 +181,7 @@ interactions:
Connection:
- keep-alive
Content-Length:
- '223'
- '1577'
Content-Type:
- application/x-protobuf
User-Agent:
@@ -352,213 +197,8 @@ interactions:
Content-Type:
- application/x-protobuf
Date:
- Tue, 24 Sep 2024 21:41:51 GMT
- Mon, 23 Dec 2024 14:55:10 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are CEO. You''re an long
time CEO of a content creation agency with a Senior Writer on the team. You''re
now working on a new project and want to make sure the content produced is amazing.\nYour
personal goal is: Make sure the writers in your company produce amazing content.\nYou
ONLY have access to the following tools, and should NEVER make up tools that
are not listed here:\n\nTool Name: Delegate work to coworker(task: str, context:
str, coworker: Optional[str] = None, **kwargs)\nTool Description: Delegate a
specific task to one of the following coworkers: Senior Writer\nThe input to
this tool should be the coworker, the task you want them to do, and ALL necessary
context to execute the task, they know nothing about the task, so share absolute
everything you know, don''t reference things but instead explain them.\nTool
Arguments: {''task'': {''title'': ''Task'', ''type'': ''string''}, ''context'':
{''title'': ''Context'', ''type'': ''string''}, ''coworker'': {''title'': ''Coworker'',
''type'': ''string''}, ''kwargs'': {''title'': ''Kwargs'', ''type'': ''object''}}\nTool
Name: Ask question to coworker(question: str, context: str, coworker: Optional[str]
= None, **kwargs)\nTool Description: Ask a specific question to one of the following
coworkers: Senior Writer\nThe input to this tool should be the coworker, the
question you have for them, and ALL necessary context to ask the question properly,
they know nothing about the question, so share absolute everything you know,
don''t reference things but instead explain them.\nTool Arguments: {''question'':
{''title'': ''Question'', ''type'': ''string''}, ''context'': {''title'': ''Context'',
''type'': ''string''}, ''coworker'': {''title'': ''Coworker'', ''type'': ''string''},
''kwargs'': {''title'': ''Kwargs'', ''type'': ''object''}}\n\nUse the following
format:\n\nThought: you should always think about what to do\nAction: the action
to take, only one name of [Delegate work to coworker, Ask question to coworker],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple python dictionary, enclosed in curly braces, using \" to wrap
keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n"}, {"role": "user", "content":
"\nCurrent Task: Produce and amazing 1 paragraph draft of an article about AI
Agents.\n\nThis is the expect criteria for your final answer: A 4 paragraph
article about AI.\nyou MUST return the actual complete content as the final
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
available and give your best Final Answer, your job depends on it!\n\nThought:"},
{"role": "assistant", "content": "Thought: To ensure the content is amazing,
I''ll delegate the task of producing a one-paragraph draft of an article about
AI Agents to the Senior Writer with all necessary context.\n\nAction: Delegate
work to coworker\nAction Input: \n{\n \"coworker\": \"Senior Writer\", \n \"task\":
\"Produce a one paragraph draft of an article about AI Agents\", \n \"context\":
\"We need an amazing one-paragraph draft as the beginning of a 4-paragraph article
about AI Agents. This is for a high-stakes project that critically impacts our
company. The paragraph should highlight what AI Agents are, their significance,
and how they are transforming various industries. The tone should be professional
yet engaging. Make sure the content is original, insightful, and thoroughly
researched.\"\n}\nObservation: Artificial Intelligence (AI) Agents are sophisticated
computer programs designed to perform tasks that typically require human intelligence,
such as decision making, problem-solving, and learning. These agents operate
autonomously, utilizing vast amounts of data, advanced algorithms, and machine
learning techniques to analyze their environment, adapt to new information,
and improve their performance over time. The significance of AI Agents lies
in their transformative potential across various industries. In healthcare,
they assist in diagnosing diseases with greater accuracy; in finance, they predict
market trends and manage risks; in customer service, they provide personalized
and efficient responses. As these AI-powered entities continue to evolve, they
are not only enhancing operational efficiencies but also driving innovation
and creating new opportunities for growth and development in every sector they
penetrate."}], "model": "gpt-4o"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '4536'
content-type:
- application/json
cookie:
- __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AB7a1PO3pMNybn76wXDFc5HE7ZRsL\",\n \"object\":
\"chat.completion\",\n \"created\": 1727214109,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: The paragraph provided by the
Senior Writer is well-detailed and engaging. I will now structure the 4-paragraph
article around this draft, adding an introduction, expanding on the specific
applications in various industries, and concluding with the potential future
impact of AI Agents.\\n\\nFinal Answer:\\n\\nArtificial Intelligence (AI) Agents
are sophisticated computer programs designed to perform tasks that typically
require human intelligence, such as decision making, problem-solving, and learning.
These agents operate autonomously, utilizing vast amounts of data, advanced
algorithms, and machine learning techniques to analyze their environment, adapt
to new information, and improve their performance over time.\\n\\nThe significance
of AI Agents lies in their transformative potential across various industries.
In healthcare, for example, they assist in diagnosing diseases with greater
accuracy and speed than human practitioners, offering personalized treatment
plans by analyzing patient data. In finance, AI Agents predict market trends,
manage risks, and even execute trades, contributing to more stable and profitable
financial systems. Customer service sectors benefit significantly from AI Agents,
as they provide personalized and efficient responses, often resolving issues
faster than traditional methods.\\n\\nMoreover, AI Agents are also making substantial
contributions in fields like education and manufacturing. In education, they
offer tailored learning experiences by assessing individual student needs and
adjusting teaching methods accordingly. They help educators identify students
who might need additional support and provide resources to enhance learning
outcomes. In manufacturing, AI Agents optimize production lines, predict equipment
failures, and improve supply chain management, thus boosting productivity and
reducing downtime.\\n\\nAs these AI-powered entities continue to evolve, they
are not only enhancing operational efficiencies but also driving innovation
and creating new opportunities for growth and development in every sector they
penetrate. The future of AI Agents looks promising, with the potential to revolutionize
the way we live and work, making processes more efficient, decisions more data-driven,
and solutions more innovative than ever before.\\n\\nThis is now a well-rounded,
four-paragraph article that comprehensively covers the topic of AI Agents.\\n\\nFinal
Answer: This is the complete content as specified:\\nArtificial Intelligence
(AI) Agents are sophisticated computer programs designed to perform tasks that
typically require human intelligence, such as decision making, problem-solving,
and learning. These agents operate autonomously, utilizing vast amounts of data,
advanced algorithms, and machine learning techniques to analyze their environment,
adapt to new information, and improve their performance over time.\\n\\nThe
significance of AI Agents lies in their transformative potential across various
industries. In healthcare, for example, they assist in diagnosing diseases with
greater accuracy and speed than human practitioners, offering personalized treatment
plans by analyzing patient data. In finance, AI Agents predict market trends,
manage risks, and even execute trades, contributing to more stable and profitable
financial systems. Customer service sectors benefit significantly from AI Agents,
as they provide personalized and efficient responses, often resolving issues
faster than traditional methods.\\n\\nMoreover, AI Agents are also making substantial
contributions in fields like education and manufacturing. In education, they
offer tailored learning experiences by assessing individual student needs and
adjusting teaching methods accordingly. They help educators identify students
who might need additional support and provide resources to enhance learning
outcomes. In manufacturing, AI Agents optimize production lines, predict equipment
failures, and improve supply chain management, thus boosting productivity and
reducing downtime.\\n\\nAs these AI-powered entities continue to evolve, they
are not only enhancing operational efficiencies but also driving innovation
and creating new opportunities for growth and development in every sector they
penetrate. The future of AI Agents looks promising, with the potential to revolutionize
the way we live and work, making processes more efficient, decisions more data-driven,
and solutions more innovative than ever before.\",\n \"refusal\": null\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 923,\n \"completion_tokens\":
715,\n \"total_tokens\": 1638,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c85f0d2f90c1cf3-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 24 Sep 2024 21:41:58 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '8591'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29998895'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 2ms
x-request-id:
- req_2b51b5cff02148d29b04284b40ca6081
http_version: HTTP/1.1
status_code: 200
version: 1


@@ -0,0 +1,480 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are CEO. You''re an long
time CEO of a content creation agency with a Senior Writer on the team. You''re
now working on a new project and want to make sure the content produced is amazing.\nYour
personal goal is: Make sure the writers in your company produce amazing content.\nYou
ONLY have access to the following tools, and should NEVER make up tools that
are not listed here:\n\nTool Name: Test Tool\nTool Arguments: {''query'': {''description'':
''Query to process'', ''type'': ''str''}}\nTool Description: A test tool that
just returns the input\n\nUse the following format:\n\nThought: you should always
think about what to do\nAction: the action to take, only one name of [Test Tool],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple python dictionary, enclosed in curly braces, using \" to wrap
keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question"}, {"role": "user", "content":
"\nCurrent Task: Produce and amazing 1 paragraph draft of an article about AI
Agents.\n\nThis is the expect criteria for your final answer: A 4 paragraph
article about AI.\nyou MUST return the actual complete content as the final
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o-mini", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1581'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AhLsKP8xKkISk8ntUscyUKL30xRXW\",\n \"object\":
\"chat.completion\",\n \"created\": 1734895556,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I need to gather information to create
an amazing paragraph draft about AI Agents that aligns with the expected criteria
of a 4-paragraph article about AI. \\n\\nAction: Test Tool \\nAction Input:
{\\\"query\\\": \\\"Write a captivating and informative paragraph about AI Agents,
focusing on their capabilities, applications, and significance in modern technology.\\\"}
\ \",\n \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 309,\n \"completion_tokens\":
68,\n \"total_tokens\": 377,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f62802d0b3f00d5-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Sun, 22 Dec 2024 19:25:57 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=vwBNilrHRgMLd8ALWzYrBO5Lm8ieJzbQ3WCVOgmuJ.s-1734895557-1.0.1.1-z.QnDsynL_Ndu.JkWrh_wGMo57vvpK88nWDBTA8P.6prlSRmA91GQLpP62yRUbCW6yoKFbDxroSaYO6qrzZPRg;
path=/; expires=Sun, 22-Dec-24 19:55:57 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=2u_Xw.i716TDjD2vb2mvMyWxhA4q1MM1JvbrA8CNZpI-1734895557894-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1075'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999630'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_80fbcef3505afac708a24ef167b701bb
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are CEO. You''re an long
time CEO of a content creation agency with a Senior Writer on the team. You''re
now working on a new project and want to make sure the content produced is amazing.\nYour
personal goal is: Make sure the writers in your company produce amazing content.\nYou
ONLY have access to the following tools, and should NEVER make up tools that
are not listed here:\n\nTool Name: Test Tool\nTool Arguments: {''query'': {''description'':
''Query to process'', ''type'': ''str''}}\nTool Description: A test tool that
just returns the input\n\nUse the following format:\n\nThought: you should always
think about what to do\nAction: the action to take, only one name of [Test Tool],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple python dictionary, enclosed in curly braces, using \" to wrap
keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question"}, {"role": "user", "content":
"\nCurrent Task: Produce and amazing 1 paragraph draft of an article about AI
Agents.\n\nThis is the expect criteria for your final answer: A 4 paragraph
article about AI.\nyou MUST return the actual complete content as the final
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
available and give your best Final Answer, your job depends on it!\n\nThought:"},
{"role": "assistant", "content": "I need to gather information to create an
amazing paragraph draft about AI Agents that aligns with the expected criteria
of a 4-paragraph article about AI. \n\nAction: Test Tool \nAction Input: {\"query\":
\"Write a captivating and informative paragraph about AI Agents, focusing on
their capabilities, applications, and significance in modern technology.\"} \nObservation:
Processed: Write a captivating and informative paragraph about AI Agents, focusing
on their capabilities, applications, and significance in modern technology."}],
"model": "gpt-4o-mini", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '2153'
content-type:
- application/json
cookie:
- __cf_bm=vwBNilrHRgMLd8ALWzYrBO5Lm8ieJzbQ3WCVOgmuJ.s-1734895557-1.0.1.1-z.QnDsynL_Ndu.JkWrh_wGMo57vvpK88nWDBTA8P.6prlSRmA91GQLpP62yRUbCW6yoKFbDxroSaYO6qrzZPRg;
_cfuvid=2u_Xw.i716TDjD2vb2mvMyWxhA4q1MM1JvbrA8CNZpI-1734895557894-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AhLsMt1AgrzynC2TSJZZSwr9El8FV\",\n \"object\":
\"chat.completion\",\n \"created\": 1734895558,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I have received the content
related to AI Agents, which I need to now use as a foundation for creating a
complete 4-paragraph article about AI. \\n\\nAction: Test Tool \\nAction Input:
{\\\"query\\\": \\\"Based on the previous paragraph about AI Agents, write a
4-paragraph article about AI, including an introduction, discussion of AI Agents,
their applications, and a conclusion on the future of AI.\\\"} \",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 409,\n \"completion_tokens\":
88,\n \"total_tokens\": 497,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f6280352b9400d5-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Sun, 22 Dec 2024 19:25:59 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1346'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999498'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_e25b377af34ef03b9a6955c9cfca5738
http_version: HTTP/1.1
status_code: 200
- request:
body: !!binary |
CtoOCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkSsQ4KEgoQY3Jld2FpLnRl
bGVtZXRyeRLrCQoQHzrcBLmZm6+CB9ZGtTnz1BIISnyRX3cExT4qDENyZXcgQ3JlYXRlZDABOdCK
UxFRlhMYQdiyWhFRlhMYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoaCg5weXRob25fdmVy
c2lvbhIICgYzLjExLjdKLgoIY3Jld19rZXkSIgogZTY0OTU3M2EyNmU1ODc5MGNhYzIxYTM3Y2Q0
NDQzN2FKMQoHY3Jld19pZBImCiQyYWFjYzYwZC0xYzE5LTRjZGYtYmJhNy1iM2RiMGM4YzFlZWZK
HAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdf
bnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgCSpUFCgtjcmV3
X2FnZW50cxKFBQqCBVt7ImtleSI6ICIzMjgyMTdiNmMyOTU5YmRmYzQ3Y2FkMDBlODQ4OTBkMCIs
ICJpZCI6ICJlZmE4ZWRlNS0wN2IyLTQzOWUtYWQ4Yi1iNmQ0Nzg5NjBkNzkiLCAicm9sZSI6ICJD
RU8iLCAidmVyYm9zZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjAsICJtYXhfcnBtIjogbnVsbCwg
ImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvLW1pbmkiLCAiZGVsZWdh
dGlvbl9lbmFibGVkPyI6IHRydWUsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1h
eF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFsidGVzdCB0b29sIl19LCB7ImtleSI6
ICI5YTUwMTVlZjQ4OTVkYzYyNzhkNTQ4MThiYTQ0NmFmNyIsICJpZCI6ICIxMDE2MGEzMC0zM2U4
LTRlN2YtOTAzOC1lODU3Zjc2MzI0ZTUiLCAicm9sZSI6ICJTZW5pb3IgV3JpdGVyIiwgInZlcmJv
c2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDIwLCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9j
YWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00by1taW5pIiwgImRlbGVnYXRpb25fZW5hYmxl
ZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5X2xp
bWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUqDAgoKY3Jld190YXNrcxL0AQrxAVt7ImtleSI6
ICIwYjlkNjVkYjZiN2FlZGZiMzk4YzU5ZTJhOWY3MWVjNSIsICJpZCI6ICJiNjYyZWVkOS1kYzcy
LTQ1NTEtYTdmMC1kY2E4ZTk3MmU3NjciLCAiYXN5bmNfZXhlY3V0aW9uPyI6IGZhbHNlLCAiaHVt
YW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIkNFTyIsICJhZ2VudF9rZXkiOiAiMzI4
MjE3YjZjMjk1OWJkZmM0N2NhZDAwZTg0ODkwZDAiLCAidG9vbHNfbmFtZXMiOiBbInRlc3QgdG9v
bCJdfV16AhgBhQEAAQAAEo4CChDkOw+7vfeJwW1bc0PIIqxeEggzmQQt0SPl+ioMVGFzayBDcmVh
dGVkMAE5OBlxEVGWExhBwKlxEVGWExhKLgoIY3Jld19rZXkSIgogZTY0OTU3M2EyNmU1ODc5MGNh
YzIxYTM3Y2Q0NDQzN2FKMQoHY3Jld19pZBImCiQyYWFjYzYwZC0xYzE5LTRjZGYtYmJhNy1iM2Ri
MGM4YzFlZWZKLgoIdGFza19rZXkSIgogMGI5ZDY1ZGI2YjdhZWRmYjM5OGM1OWUyYTlmNzFlYzVK
MQoHdGFza19pZBImCiRiNjYyZWVkOS1kYzcyLTQ1NTEtYTdmMC1kY2E4ZTk3MmU3Njd6AhgBhQEA
AQAAEowBChDS1rm7Q+c0w96t+encwsGJEgjRF+jTQh1PCyoKVG9vbCBVc2FnZTABOaAiFGtRlhMY
QdiVImtRlhMYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoYCgl0b29sX25hbWUSCwoJVGVz
dCBUb29sSg4KCGF0dGVtcHRzEgIYAXoCGAGFAQABAAASjAEKECYGxNLnTRLCS76uAAOuzGwSCPmX
kSTjWKCcKgpUb29sIFVzYWdlMAE5CH3Wx1GWExhBGH/xx1GWExhKGgoOY3Jld2FpX3ZlcnNpb24S
CAoGMC44Ni4wShgKCXRvb2xfbmFtZRILCglUZXN0IFRvb2xKDgoIYXR0ZW1wdHMSAhgBegIYAYUB
AAEAAA==
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '1885'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.27.0
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Sun, 22 Dec 2024 19:26:01 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are CEO. You''re an long
time CEO of a content creation agency with a Senior Writer on the team. You''re
now working on a new project and want to make sure the content produced is amazing.\nYour
personal goal is: Make sure the writers in your company produce amazing content.\nYou
ONLY have access to the following tools, and should NEVER make up tools that
are not listed here:\n\nTool Name: Test Tool\nTool Arguments: {''query'': {''description'':
''Query to process'', ''type'': ''str''}}\nTool Description: A test tool that
just returns the input\n\nUse the following format:\n\nThought: you should always
think about what to do\nAction: the action to take, only one name of [Test Tool],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple python dictionary, enclosed in curly braces, using \" to wrap
keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question"}, {"role": "user", "content":
"\nCurrent Task: Produce and amazing 1 paragraph draft of an article about AI
Agents.\n\nThis is the expect criteria for your final answer: A 4 paragraph
article about AI.\nyou MUST return the actual complete content as the final
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
available and give your best Final Answer, your job depends on it!\n\nThought:"},
{"role": "assistant", "content": "I need to gather information to create an
amazing paragraph draft about AI Agents that aligns with the expected criteria
of a 4-paragraph article about AI. \n\nAction: Test Tool \nAction Input: {\"query\":
\"Write a captivating and informative paragraph about AI Agents, focusing on
their capabilities, applications, and significance in modern technology.\"} \nObservation:
Processed: Write a captivating and informative paragraph about AI Agents, focusing
on their capabilities, applications, and significance in modern technology."},
{"role": "assistant", "content": "Thought: I have received the content related
to AI Agents, which I need to now use as a foundation for creating a complete
4-paragraph article about AI. \n\nAction: Test Tool \nAction Input: {\"query\":
\"Based on the previous paragraph about AI Agents, write a 4-paragraph article
about AI, including an introduction, discussion of AI Agents, their applications,
and a conclusion on the future of AI.\"} \nObservation: Processed: Based on
the previous paragraph about AI Agents, write a 4-paragraph article about AI,
including an introduction, discussion of AI Agents, their applications, and
a conclusion on the future of AI."}], "model": "gpt-4o-mini", "stop": ["\nObservation:"],
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '2820'
content-type:
- application/json
cookie:
- __cf_bm=vwBNilrHRgMLd8ALWzYrBO5Lm8ieJzbQ3WCVOgmuJ.s-1734895557-1.0.1.1-z.QnDsynL_Ndu.JkWrh_wGMo57vvpK88nWDBTA8P.6prlSRmA91GQLpP62yRUbCW6yoKFbDxroSaYO6qrzZPRg;
_cfuvid=2u_Xw.i716TDjD2vb2mvMyWxhA4q1MM1JvbrA8CNZpI-1734895557894-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AhLsNJa6GxRIHF8l8eViU7D6CyBHP\",\n \"object\":
\"chat.completion\",\n \"created\": 1734895559,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I have gathered the complete
article on AI, which aligns with the expected criteria. Now I will present the
final answer as required. \\n\\nFinal Answer: \\n\\nArtificial Intelligence
(AI) has rapidly evolved to become an integral part of our modern world, driving
significant advancements across various industries. AI refers to the simulation
of human intelligence in machines programmed to think and learn like humans.
This technology enables machines to analyze data, recognize patterns, and make
decisions with minimal human intervention, paving the way for innovation in
fields like healthcare, finance, and transportation.\\n\\nAI Agents, in particular,
embody the future of artificial intelligence by acting autonomously to perform
complex tasks. These agents leverage machine learning and natural language processing
to interact with users and understand their needs. They're deployed in customer
service applications, virtual assistants, and personal scheduling tools, showcasing
their capability to streamline processes and enhance user experience. By mimicking
human reasoning, AI Agents can adapt to changing situations and provide personalized
solutions.\\n\\nThe applications of AI Agents extend beyond mere task completion;
they are transforming the way businesses operate. In the realm of customer engagement,
AI Agents analyze customer behavior to provide insights that help companies
tailor their offerings. In healthcare, they assist in diagnosing illnesses by
analyzing patient data and suggesting treatments. The versatility of AI Agents
makes them invaluable assets in our increasingly automated world.\\n\\nAs we
look to the future, the potential of AI continues to expand. With ongoing advancements
in technology, AI Agents are set to become even more sophisticated, further
bridging the gap between humans and machines. The prospects of AI promise not
only to improve efficiency and productivity but also to change the way we live
and work, promising a future where intelligent, autonomous agents support us
in our daily lives.\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
546,\n \"completion_tokens\": 343,\n \"total_tokens\": 889,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f62803eed8100d5-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Sun, 22 Dec 2024 19:26:04 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '4897'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999342'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_65fdf94aa8bbc10f64f2a27ccdcc5cc8
http_version: HTTP/1.1
status_code: 200
version: 1


@@ -0,0 +1,623 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are CEO. You''re an long
time CEO of a content creation agency with a Senior Writer on the team. You''re
now working on a new project and want to make sure the content produced is amazing.\nYour
personal goal is: Make sure the writers in your company produce amazing content.\nYou
ONLY have access to the following tools, and should NEVER make up tools that
are not listed here:\n\nTool Name: Test Tool\nTool Arguments: {''query'': {''description'':
''Query to process'', ''type'': ''str''}}\nTool Description: A test tool that
just returns the input\nTool Name: Delegate work to coworker\nTool Arguments:
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
''str''}}\nTool Description: Delegate a specific task to one of the following
coworkers: Senior Writer\nThe input to this tool should be the coworker, the
task you want them to do, and ALL necessary context to execute the task, they
know nothing about the task, so share absolute everything you know, don''t reference
things but instead explain them.\nTool Name: Ask question to coworker\nTool
Arguments: {''question'': {''description'': ''The question to ask'', ''type'':
''str''}, ''context'': {''description'': ''The context for the question'', ''type'':
''str''}, ''coworker'': {''description'': ''The role/name of the coworker to
ask'', ''type'': ''str''}}\nTool Description: Ask a specific question to one
of the following coworkers: Senior Writer\nThe input to this tool should be
the coworker, the question you have for them, and ALL necessary context to ask
the question properly, they know nothing about the question, so share absolute
everything you know, don''t reference things but instead explain them.\n\nUse
the following format:\n\nThought: you should always think about what to do\nAction:
the action to take, only one name of [Test Tool, Delegate work to coworker,
Ask question to coworker], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple python dictionary, enclosed in
curly braces, using \" to wrap keys and values.\nObservation: the result of
the action\n\nOnce all necessary information is gathered:\n\nThought: I now
know the final answer\nFinal Answer: the final answer to the original input
question"}, {"role": "user", "content": "\nCurrent Task: Produce and amazing
1 paragraph draft of an article about AI Agents.\n\nThis is the expect criteria
for your final answer: A 4 paragraph article about AI.\nyou MUST return the
actual complete content as the final answer, not a summary.\n\nBegin! This is
VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"],
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '2892'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AhLQELAjJpn76wiLmWBinm3sqf32l\",\n \"object\":
\"chat.completion\",\n \"created\": 1734893814,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I need to gather information and insights
to ensure the Senior Writer produces a high-quality draft about AI Agents, which
will then serve as a foundation for the complete article.\\n\\nAction: Ask question
to coworker \\nAction Input: {\\\"question\\\":\\\"Can you provide a detailed
overview of what AI Agents are, their functionalities, and their applications
in real-world scenarios? Please include examples of how they are being used
in various industries, and discuss their potential impact on the future of technology
and society.\\\",\\\"context\\\":\\\"We are looking to create a comprehensive
understanding of AI Agents as part of a four-paragraph article. This will help
generate a high-quality draft for the article.\\\",\\\"coworker\\\":\\\"Senior
Writer\\\"} \",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
604,\n \"completion_tokens\": 138,\n \"total_tokens\": 742,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f6255a1bf08a519-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Sun, 22 Dec 2024 18:56:56 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=rVquIlcnYXc7wMkbKG5Ii90HxfQ_ukNJjDgSHhsWb1k-1734893816-1.0.1.1-33qDl8KNWcxLAGBuPhT8FrZ6QUnEy9oOYh2Fp2hIjDnF.cQlyrgxiWcuHljTTxG_mH7eQrf1AHJ6p8sxZJZ30A;
path=/; expires=Sun, 22-Dec-24 19:26:56 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=xqgLJsf3h5lBKZFTADRNNUqizeChNBBFoLvWiR2WPnw-1734893816555-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '2340'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999305'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_53956b48bd1188451efc104e8a234ef4
http_version: HTTP/1.1
status_code: 200
- request:
body: !!binary |
CrEMCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkSiAwKEgoQY3Jld2FpLnRl
bGVtZXRyeRLgCQoQg/HA64g3phKbzz/hvUtbahIIpu+Csq+uWc0qDENyZXcgQ3JlYXRlZDABOSDm
Uli7lBMYQcgRXFi7lBMYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoaCg5weXRob25fdmVy
c2lvbhIICgYzLjExLjdKLgoIY3Jld19rZXkSIgogZTY0OTU3M2EyNmU1ODc5MGNhYzIxYTM3Y2Q0
NDQzN2FKMQoHY3Jld19pZBImCiRhMWFjMTc0Ny0xMTA0LTRlZjItODZkNi02ZGRhNTFmMDlmMTdK
HAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdf
bnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgCSooFCgtjcmV3
X2FnZW50cxL6BAr3BFt7ImtleSI6ICIzMjgyMTdiNmMyOTU5YmRmYzQ3Y2FkMDBlODQ4OTBkMCIs
ICJpZCI6ICI4YWUwNGY0Yy0wMjNiLTRkNWQtODAwZC02ZjlkMWFmMWExOTkiLCAicm9sZSI6ICJD
RU8iLCAidmVyYm9zZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjAsICJtYXhfcnBtIjogbnVsbCwg
ImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvLW1pbmkiLCAiZGVsZWdh
dGlvbl9lbmFibGVkPyI6IHRydWUsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1h
eF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfSwgeyJrZXkiOiAiOWE1MDE1ZWY0
ODk1ZGM2Mjc4ZDU0ODE4YmE0NDZhZjciLCAiaWQiOiAiNDg2MWQ4YTMtMjMxYS00Mzc5LTk2ZmEt
MWQwZmQyZDI1MGYxIiwgInJvbGUiOiAiU2VuaW9yIFdyaXRlciIsICJ2ZXJib3NlPyI6IGZhbHNl
LCAibWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0i
OiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2Us
ICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0
b29sc19uYW1lcyI6IFtdfV1KgwIKCmNyZXdfdGFza3MS9AEK8QFbeyJrZXkiOiAiMGI5ZDY1ZGI2
YjdhZWRmYjM5OGM1OWUyYTlmNzFlYzUiLCAiaWQiOiAiY2IyMmIxMzctZTA3ZC00NDA5LWI5NmMt
ZWQ2ZDU3MjFhNDNiIiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6
IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJDRU8iLCAiYWdlbnRfa2V5IjogIjMyODIxN2I2YzI5NTli
ZGZjNDdjYWQwMGU4NDg5MGQwIiwgInRvb2xzX25hbWVzIjogWyJ0ZXN0IHRvb2wiXX1degIYAYUB
AAEAABKOAgoQB7Z9AEDI9OTStHqguBSbLxIIj9dttVFJs9cqDFRhc2sgQ3JlYXRlZDABOYDae1i7
lBMYQeBHfFi7lBMYSi4KCGNyZXdfa2V5EiIKIGU2NDk1NzNhMjZlNTg3OTBjYWMyMWEzN2NkNDQ0
MzdhSjEKB2NyZXdfaWQSJgokYTFhYzE3NDctMTEwNC00ZWYyLTg2ZDYtNmRkYTUxZjA5ZjE3Si4K
CHRhc2tfa2V5EiIKIDBiOWQ2NWRiNmI3YWVkZmIzOThjNTllMmE5ZjcxZWM1SjEKB3Rhc2tfaWQS
JgokY2IyMmIxMzctZTA3ZC00NDA5LWI5NmMtZWQ2ZDU3MjFhNDNiegIYAYUBAAEAAA==
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '1588'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.27.0
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Sun, 22 Dec 2024 18:56:59 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Senior Writer. You''re
a senior writer, specialized in technology, software engineering, AI and startups.
You work as a freelancer and are now working on writing content for a new customer.\nYour
personal goal is: Write the best content about AI and AI agents.\nTo give my
best complete final answer to the task use the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent
Task: Can you provide a detailed overview of what AI Agents are, their functionalities,
and their applications in real-world scenarios? Please include examples of how
they are being used in various industries, and discuss their potential impact
on the future of technology and society.\n\nThis is the expect criteria for
your final answer: Your best answer to your coworker asking you this, accounting
for the context shared.\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nThis is the context you''re working with:\nWe
are looking to create a comprehensive understanding of AI Agents as part of
a four-paragraph article. This will help generate a high-quality draft for the
article.\n\nBegin! This is VERY important to you, use the tools available and
give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
"gpt-4o-mini", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1572'
content-type:
- application/json
cookie:
- __cf_bm=rVquIlcnYXc7wMkbKG5Ii90HxfQ_ukNJjDgSHhsWb1k-1734893816-1.0.1.1-33qDl8KNWcxLAGBuPhT8FrZ6QUnEy9oOYh2Fp2hIjDnF.cQlyrgxiWcuHljTTxG_mH7eQrf1AHJ6p8sxZJZ30A;
_cfuvid=xqgLJsf3h5lBKZFTADRNNUqizeChNBBFoLvWiR2WPnw-1734893816555-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AhLQG5ubl99yeBYm6TTV0sodagMND\",\n \"object\":
\"chat.completion\",\n \"created\": 1734893816,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: \\n\\n**Overview of AI Agents** \\nAI agents are advanced software
systems designed to autonomously perform tasks, make decisions, and learn from
their environments without needing constant human intervention. They leverage
machine learning, natural language processing, and various AI techniques to
simulate human-like understanding and autonomy. These agents can be categorized
into three types: reactive agents (which operate purely based on their environment),
deliberative agents (which can make decisions based on reasoning), and hybrid
agents that incorporate aspects of both types. Their ability to adapt and learn
over time makes them instrumental in automating processes across various domains.\\n\\n**Functionalities
of AI Agents** \\nThe core functionalities of AI agents include perception,
action, learning, and interaction. They perceive data through sensors or data
feeds, process information through algorithms, and take actions based on this
data. Machine learning allows them to refine their performance over time by
analyzing outcomes and adjusting their strategies accordingly. Interaction capabilities
enable them to communicate with users, providing insights, answering queries,
or even negotiating in some cases. These functionalities make AI agents invaluable
for tasks such as predictive analytics, personal assistance, and real-time decision-making
in complex systems.\\n\\n**Applications in Various Industries** \\nAI agents
are already being utilized across multiple industries, demonstrating their versatility
and efficiency. In healthcare, AI agents assist in diagnostics by analyzing
medical records and suggesting treatment plans tailored to individual patients.
In finance, they power robo-advisors that manage investment portfolios, automate
trading strategies, and provide financial advice based on real-time market analysis.
Furthermore, in customer service, AI chatbots serve as virtual assistants, enhancing
user experience by providing instant support and resolving queries without human
intervention. The logistics and supply chain industries have also seen AI agents
optimize inventory management and route planning, significantly improving operational
efficiency.\\n\\n**Future Impact on Technology and Society** \\nThe ongoing
development of AI agents is poised to have a profound impact on technology and
society. As these agents become more sophisticated, we can anticipate a shift
towards increased automation in both professional and personal spheres, leading
to enhanced productivity and new business models. However, this automation introduces
challenges such as job displacement and ethical considerations regarding decision-making
by AI. It is essential to foster an ongoing dialogue on the implications of
AI agents to ensure responsible development and integration into our daily lives.
As AI agents continue to evolve, they will undoubtedly play a pivotal role in
shaping the future of technology and its intersection with societal dynamics,
making it critical for us to engage thoughtfully with this emerging paradigm.\",\n
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 289,\n \"completion_tokens\":
506,\n \"total_tokens\": 795,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f6255b1f832a519-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Sun, 22 Dec 2024 18:57:04 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '7836'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999630'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_c14268346d6ce72ceea4b1472f73c5ae
http_version: HTTP/1.1
status_code: 200
- request:
body: !!binary |
CtsBCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkSsgEKEgoQY3Jld2FpLnRl
bGVtZXRyeRKbAQoQ7U/1ZgBSTkCXtesUNPA2URIIrnRWFVT58Z8qClRvb2wgVXNhZ2UwATl4lN3i
vZQTGEGgevzivZQTGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjg2LjBKJwoJdG9vbF9uYW1lEhoK
GEFzayBxdWVzdGlvbiB0byBjb3dvcmtlckoOCghhdHRlbXB0cxICGAF6AhgBhQEAAQAA
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '222'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.27.0
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Sun, 22 Dec 2024 18:57:09 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are CEO. You''re an long
time CEO of a content creation agency with a Senior Writer on the team. You''re
now working on a new project and want to make sure the content produced is amazing.\nYour
personal goal is: Make sure the writers in your company produce amazing content.\nYou
ONLY have access to the following tools, and should NEVER make up tools that
are not listed here:\n\nTool Name: Test Tool\nTool Arguments: {''query'': {''description'':
''Query to process'', ''type'': ''str''}}\nTool Description: A test tool that
just returns the input\nTool Name: Delegate work to coworker\nTool Arguments:
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
''str''}}\nTool Description: Delegate a specific task to one of the following
coworkers: Senior Writer\nThe input to this tool should be the coworker, the
task you want them to do, and ALL necessary context to execute the task, they
know nothing about the task, so share absolute everything you know, don''t reference
things but instead explain them.\nTool Name: Ask question to coworker\nTool
Arguments: {''question'': {''description'': ''The question to ask'', ''type'':
''str''}, ''context'': {''description'': ''The context for the question'', ''type'':
''str''}, ''coworker'': {''description'': ''The role/name of the coworker to
ask'', ''type'': ''str''}}\nTool Description: Ask a specific question to one
of the following coworkers: Senior Writer\nThe input to this tool should be
the coworker, the question you have for them, and ALL necessary context to ask
the question properly, they know nothing about the question, so share absolute
everything you know, don''t reference things but instead explain them.\n\nUse
the following format:\n\nThought: you should always think about what to do\nAction:
the action to take, only one name of [Test Tool, Delegate work to coworker,
Ask question to coworker], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple python dictionary, enclosed in
curly braces, using \" to wrap keys and values.\nObservation: the result of
the action\n\nOnce all necessary information is gathered:\n\nThought: I now
know the final answer\nFinal Answer: the final answer to the original input
question"}, {"role": "user", "content": "\nCurrent Task: Produce and amazing
1 paragraph draft of an article about AI Agents.\n\nThis is the expect criteria
for your final answer: A 4 paragraph article about AI.\nyou MUST return the
actual complete content as the final answer, not a summary.\n\nBegin! This is
VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:"}, {"role": "assistant", "content": "I need
to gather information and insights to ensure the Senior Writer produces a high-quality
draft about AI Agents, which will then serve as a foundation for the complete
article.\n\nAction: Ask question to coworker \nAction Input: {\"question\":\"Can
you provide a detailed overview of what AI Agents are, their functionalities,
and their applications in real-world scenarios? Please include examples of how
they are being used in various industries, and discuss their potential impact
on the future of technology and society.\",\"context\":\"We are looking to create
a comprehensive understanding of AI Agents as part of a four-paragraph article.
This will help generate a high-quality draft for the article.\",\"coworker\":\"Senior
Writer\"} \nObservation: **Overview of AI Agents** \nAI agents are advanced
software systems designed to autonomously perform tasks, make decisions, and
learn from their environments without needing constant human intervention. They
leverage machine learning, natural language processing, and various AI techniques
to simulate human-like understanding and autonomy. These agents can be categorized
into three types: reactive agents (which operate purely based on their environment),
deliberative agents (which can make decisions based on reasoning), and hybrid
agents that incorporate aspects of both types. Their ability to adapt and learn
over time makes them instrumental in automating processes across various domains.\n\n**Functionalities
of AI Agents** \nThe core functionalities of AI agents include perception,
action, learning, and interaction. They perceive data through sensors or data
feeds, process information through algorithms, and take actions based on this
data. Machine learning allows them to refine their performance over time by
analyzing outcomes and adjusting their strategies accordingly. Interaction capabilities
enable them to communicate with users, providing insights, answering queries,
or even negotiating in some cases. These functionalities make AI agents invaluable
for tasks such as predictive analytics, personal assistance, and real-time decision-making
in complex systems.\n\n**Applications in Various Industries** \nAI agents are
already being utilized across multiple industries, demonstrating their versatility
and efficiency. In healthcare, AI agents assist in diagnostics by analyzing
medical records and suggesting treatment plans tailored to individual patients.
In finance, they power robo-advisors that manage investment portfolios, automate
trading strategies, and provide financial advice based on real-time market analysis.
Furthermore, in customer service, AI chatbots serve as virtual assistants, enhancing
user experience by providing instant support and resolving queries without human
intervention. The logistics and supply chain industries have also seen AI agents
optimize inventory management and route planning, significantly improving operational
efficiency.\n\n**Future Impact on Technology and Society** \nThe ongoing development
of AI agents is poised to have a profound impact on technology and society.
As these agents become more sophisticated, we can anticipate a shift towards
increased automation in both professional and personal spheres, leading to enhanced
productivity and new business models. However, this automation introduces challenges
such as job displacement and ethical considerations regarding decision-making
by AI. It is essential to foster an ongoing dialogue on the implications of
AI agents to ensure responsible development and integration into our daily lives.
As AI agents continue to evolve, they will undoubtedly play a pivotal role in
shaping the future of technology and its intersection with societal dynamics,
making it critical for us to engage thoughtfully with this emerging paradigm."}],
"model": "gpt-4o-mini", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '6755'
content-type:
- application/json
cookie:
- __cf_bm=rVquIlcnYXc7wMkbKG5Ii90HxfQ_ukNJjDgSHhsWb1k-1734893816-1.0.1.1-33qDl8KNWcxLAGBuPhT8FrZ6QUnEy9oOYh2Fp2hIjDnF.cQlyrgxiWcuHljTTxG_mH7eQrf1AHJ6p8sxZJZ30A;
_cfuvid=xqgLJsf3h5lBKZFTADRNNUqizeChNBBFoLvWiR2WPnw-1734893816555-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AhLQOmLKuevpaWtRyyhHjHVYqvNVk\",\n \"object\":
\"chat.completion\",\n \"created\": 1734893824,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I have received a comprehensive
overview from the Senior Writer that includes the necessary information about
AI Agents, their functionalities, applications, and future implications. Now,
I can compile this into a final answer following the specified format: a complete
four-paragraph article.\\n\\nFinal Answer: \\n\\n**Overview of AI Agents** \\nAI
agents are advanced software systems designed to autonomously perform tasks,
make decisions, and learn from their environments without needing constant human
intervention. They leverage machine learning, natural language processing, and
various AI techniques to simulate human-like understanding and autonomy. These
agents can be categorized into three types: reactive agents (which operate purely
based on their environment), deliberative agents (which can make decisions based
on reasoning), and hybrid agents that incorporate aspects of both types. Their
ability to adapt and learn over time makes them instrumental in automating processes
across various domains.\\n\\n**Functionalities of AI Agents** \\nThe core functionalities
of AI agents include perception, action, learning, and interaction. They perceive
data through sensors or data feeds, process information through algorithms,
and take actions based on this data. Machine learning allows them to refine
their performance over time by analyzing outcomes and adjusting their strategies
accordingly. Interaction capabilities enable them to communicate with users,
providing insights, answering queries, or even negotiating in some cases. These
functionalities make AI agents invaluable for tasks such as predictive analytics,
personal assistance, and real-time decision-making in complex systems.\\n\\n**Applications
in Various Industries** \\nAI agents are already being utilized across multiple
industries, demonstrating their versatility and efficiency. In healthcare, AI
agents assist in diagnostics by analyzing medical records and suggesting treatment
plans tailored to individual patients. In finance, they power robo-advisors
that manage investment portfolios, automate trading strategies, and provide
financial advice based on real-time market analysis. Furthermore, in customer
service, AI chatbots serve as virtual assistants, enhancing user experience
by providing instant support and resolving queries without human intervention.
The logistics and supply chain industries have also seen AI agents optimize
inventory management and route planning, significantly improving operational
efficiency.\\n\\n**Future Impact on Technology and Society** \\nThe ongoing
development of AI agents is poised to have a profound impact on technology and
society. As these agents become more sophisticated, we can anticipate a shift
towards increased automation in both professional and personal spheres, leading
to enhanced productivity and new business models. However, this automation introduces
challenges such as job displacement and ethical considerations regarding decision-making
by AI. It is essential to foster an ongoing dialogue on the implications of
AI agents to ensure responsible development and integration into our daily lives.
As AI agents continue to evolve, they will undoubtedly play a pivotal role in
shaping the future of technology and its intersection with societal dynamics,
making it critical for us to engage thoughtfully with this emerging paradigm.\",\n
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 1242,\n \"completion_tokens\":
550,\n \"total_tokens\": 1792,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f6255e49b37a519-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Sun, 22 Dec 2024 18:57:12 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '7562'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149998353'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_a812bbb85b3d785660c4662212614ab9
http_version: HTTP/1.1
status_code: 200
version: 1

File diff suppressed because it is too large


@@ -0,0 +1,481 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Image Analyst. You''re
an expert at visual analysis, trained to notice and describe details in images.\nYour
personal goal is: Analyze images with high attention to detail\nYou ONLY have
access to the following tools, and should NEVER make up tools that are not listed
here:\n\nTool Name: Add image to content\nTool Arguments: {''image_url'': {''description'':
''The URL or path of the image to add'', ''type'': ''str''}, ''action'': {''description'':
''Optional context or question about the image'', ''type'': ''str''}}\nTool
Description: See image to understand it''s content, you can optionally ask a
question about the image\n\nUse the following format:\n\nThought: you should
always think about what to do\nAction: the action to take, only one name of
[Add image to content], just the name, exactly as it''s written.\nAction Input:
the input to the action, just a simple python dictionary, enclosed in curly
braces, using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce
all necessary information is gathered:\n\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question"}, {"role": "user",
"content": "\nCurrent Task: \n Analyze the provided image and describe
what you see in detail.\n Focus on main elements, colors, composition,
and any notable details.\n Image: https://media.istockphoto.com/id/946087016/photo/aerial-view-of-lower-manhattan-new-york.jpg?s=612x612&w=0&k=20&c=viLiMRznQ8v5LzKTt_LvtfPFUVl1oiyiemVdSlm29_k=\n \n\nThis
is the expect criteria for your final answer: A comprehensive description of
the image contents.\nyou MUST return the actual complete content as the final
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1948'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AiuIfzzcje5KdvKIG5CkFeORroiKk\",\n \"object\":
\"chat.completion\",\n \"created\": 1735266213,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Action: Add image to content\\nAction
Input: {\\\"image_url\\\": \\\"https://media.istockphoto.com/id/946087016/photo/aerial-view-of-lower-manhattan-new-york.jpg?s=612x612&w=0&k=20&c=viLiMRznQ8v5LzKTt_LvtfPFUVl1oiyiemVdSlm29_k=\\\",
\\\"action\\\": \\\"Analyze the provided image and describe what you see in
detail.\\\"}\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
417,\n \"completion_tokens\": 103,\n \"total_tokens\": 520,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_5f20662549\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f85d96b280df217-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 02:23:35 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=kJ1pw1xjCMSxjHSS8iJC5z_j2PZxl.i387KCpj9xNZU-1735266215-1.0.1.1-Ybg0wVTsrBlpVZmtQyA1ullY8m3v2Ix0N_SYlhr9z7zKfbLeqGZEVL37YSY.dvIiLVY3XPZzMtG8Xwo6UucW6A;
path=/; expires=Fri, 27-Dec-24 02:53:35 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=v_wJZ5m7qCjrnRfks0gT2GAk9yR14BdIDAQiQR7xxI8-1735266215000-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1212'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999539'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_663a2b18099a18361d6b02befc175289
http_version: HTTP/1.1
status_code: 200
- request:
body: !!binary |
Co4LCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkS5QoKEgoQY3Jld2FpLnRl
bGVtZXRyeRKjBwoQHmzzumMNXHOgpJ4zCIxJSxII72WnLlLfRyYqDENyZXcgQ3JlYXRlZDABOQjB
gFxt5xQYQYhMiVxt5xQYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoaCg5weXRob25fdmVy
c2lvbhIICgYzLjExLjdKLgoIY3Jld19rZXkSIgogZTM5NTY3YjUwNTI5MDljYTMzNDA5ODRiODM4
OTgwZWFKMQoHY3Jld19pZBImCiQ4MDA0YTA1NC0zYjNkLTQ4OGEtYTlkNC1kZWQzMDVhMDIxY2FK
HAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdf
bnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgBSs4CCgtjcmV3
X2FnZW50cxK+Agq7Alt7ImtleSI6ICI5ZGM4Y2NlMDMwNDY4MTk2MDQxYjRjMzgwYjYxN2NiMCIs
ICJpZCI6ICJjNTZhZGI2Mi1lMGIwLTQzYzAtYmQ4OC0xYzEwYTNhNmU5NDQiLCAicm9sZSI6ICJJ
bWFnZSBBbmFseXN0IiwgInZlcmJvc2U/IjogdHJ1ZSwgIm1heF9pdGVyIjogMjAsICJtYXhfcnBt
IjogbnVsbCwgImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvIiwgImRl
bGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNl
LCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUqCAgoKY3Jld190YXNr
cxLzAQrwAVt7ImtleSI6ICJhOWE3NmNhNjk1N2QwYmZmYTY5ZWFiMjBiNjY0ODIyYiIsICJpZCI6
ICJhNzFiZDllNC0wNzdkLTRmMTQtODg0MS03MGMwZWM4MGZkMmMiLCAiYXN5bmNfZXhlY3V0aW9u
PyI6IGZhbHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIkltYWdlIEFu
YWx5c3QiLCAiYWdlbnRfa2V5IjogIjlkYzhjY2UwMzA0NjgxOTYwNDFiNGMzODBiNjE3Y2IwIiwg
InRvb2xzX25hbWVzIjogW119XXoCGAGFAQABAAASjgIKEOZ5pMdq9ep85DrP1Vv8Y8MSCE7ahOkm
2IDHKgxUYXNrIENyZWF0ZWQwATlIg85cbecUGEGQ9M5cbecUGEouCghjcmV3X2tleRIiCiBlMzk1
NjdiNTA1MjkwOWNhMzM0MDk4NGI4Mzg5ODBlYUoxCgdjcmV3X2lkEiYKJDgwMDRhMDU0LTNiM2Qt
NDg4YS1hOWQ0LWRlZDMwNWEwMjFjYUouCgh0YXNrX2tleRIiCiBhOWE3NmNhNjk1N2QwYmZmYTY5
ZWFiMjBiNjY0ODIyYkoxCgd0YXNrX2lkEiYKJGE3MWJkOWU0LTA3N2QtNGYxNC04ODQxLTcwYzBl
YzgwZmQyY3oCGAGFAQABAAASlwEKECyaQQK8JkKLh6S2mWHTeDgSCPWCpr7v9CQZKgpUb29sIFVz
YWdlMAE5MLyst23nFBhBOJy/t23nFBhKGgoOY3Jld2FpX3ZlcnNpb24SCAoGMC44Ni4wSiMKCXRv
b2xfbmFtZRIWChRBZGQgaW1hZ2UgdG8gY29udGVudEoOCghhdHRlbXB0cxICGAF6AhgBhQEAAQAA
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '1425'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.27.0
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Fri, 27 Dec 2024 02:23:39 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Image Analyst. You''re
an expert at visual analysis, trained to notice and describe details in images.\nYour
personal goal is: Analyze images with high attention to detail\nYou ONLY have
access to the following tools, and should NEVER make up tools that are not listed
here:\n\nTool Name: Add image to content\nTool Arguments: {''image_url'': {''description'':
''The URL or path of the image to add'', ''type'': ''str''}, ''action'': {''description'':
''Optional context or question about the image'', ''type'': ''str''}}\nTool
Description: See image to understand it''s content, you can optionally ask a
question about the image\n\nUse the following format:\n\nThought: you should
always think about what to do\nAction: the action to take, only one name of
[Add image to content], just the name, exactly as it''s written.\nAction Input:
the input to the action, just a simple python dictionary, enclosed in curly
braces, using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce
all necessary information is gathered:\n\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question"}, {"role": "user",
"content": "\nCurrent Task: \n Analyze the provided image and describe
what you see in detail.\n Focus on main elements, colors, composition,
and any notable details.\n Image: https://media.istockphoto.com/id/946087016/photo/aerial-view-of-lower-manhattan-new-york.jpg?s=612x612&w=0&k=20&c=viLiMRznQ8v5LzKTt_LvtfPFUVl1oiyiemVdSlm29_k=\n \n\nThis
is the expect criteria for your final answer: A comprehensive description of
the image contents.\nyou MUST return the actual complete content as the final
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
available and give your best Final Answer, your job depends on it!\n\nThought:"},
{"role": "user", "content": [{"type": "text", "text": "Analyze the provided
image and describe what you see in detail."}, {"type": "image_url", "image_url":
{"url": "https://media.istockphoto.com/id/946087016/photo/aerial-view-of-lower-manhattan-new-york.jpg?s=612x612&w=0&k=20&c=viLiMRznQ8v5LzKTt_LvtfPFUVl1oiyiemVdSlm29_k="}}]}],
"model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '2279'
content-type:
- application/json
cookie:
- __cf_bm=kJ1pw1xjCMSxjHSS8iJC5z_j2PZxl.i387KCpj9xNZU-1735266215-1.0.1.1-Ybg0wVTsrBlpVZmtQyA1ullY8m3v2Ix0N_SYlhr9z7zKfbLeqGZEVL37YSY.dvIiLVY3XPZzMtG8Xwo6UucW6A;
_cfuvid=v_wJZ5m7qCjrnRfks0gT2GAk9yR14BdIDAQiQR7xxI8-1735266215000-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AiuIiqT33ROFMdw1gNmqH9jiw6PfF\",\n \"object\":
\"chat.completion\",\n \"created\": 1735266216,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"The image is an aerial view of Lower
Manhattan in New York City. \\n\\nMain Elements:\\n- The One World Trade Center
tower stands prominently, distinguishable by its sleek, tapering structure reaching
into the sky, surrounded by other skyscrapers.\\n- Skyscrapers in varying heights
and architectural styles, fill the densely packed urban landscape.\\n- A waterfront
is visible at the edges, with docks and piers extending into the water.\\n\\nColors:\\n-
The buildings exhibit a mix of colors, predominantly grays, whites, and browns,
against the blues of the sky and water.\\n- There's a section of greenery visible,
likely a park or recreational space, offering a contrast with its vibrant green
hues.\\n\\nComposition:\\n- The angle of the photograph showcases the expanse
of the city, highlighting the density and scale of the buildings.\\n- Water
borders the city on two prominent sides, creating a natural boundary and enhancing
the island's urban island feel.\\n\\nNotable Details:\\n- The image captures
the iconic layout of Manhattan, with the surrounding Hudson River and New York
Harbor visible in the background.\\n- Beyond Lower Manhattan, more of the cityscape
stretches into the distance, illustrating the vastness of New York City.\\n-
The day appears clear and sunny, with shadows casting from the buildings, indicating
time in the morning or late afternoon.\\n\\nOverall, the image is a striking
depiction of the dynamic and bustling environment of New York's Lower Manhattan,
encapsulating its urban character and proximity to the water.\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 858,\n \"completion_tokens\":
295,\n \"total_tokens\": 1153,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_5f20662549\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f85d9741d0cf217-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 02:23:40 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '5136'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-input-images:
- '50000'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-input-images:
- '49999'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29998756'
x-ratelimit-reset-input-images:
- 1ms
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 2ms
x-request-id:
- req_57a7430712d4ff4a81f600ffb94d3b6e
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are Image Analyst. You''re
an expert at visual analysis, trained to notice and describe details in images.\nYour
personal goal is: Analyze images with high attention to detail\nYou ONLY have
access to the following tools, and should NEVER make up tools that are not listed
here:\n\nTool Name: Add image to content\nTool Arguments: {''image_url'': {''description'':
''The URL or path of the image to add'', ''type'': ''str''}, ''action'': {''description'':
''Optional context or question about the image'', ''type'': ''str''}}\nTool
Description: See image to understand it''s content, you can optionally ask a
question about the image\n\nUse the following format:\n\nThought: you should
always think about what to do\nAction: the action to take, only one name of
[Add image to content], just the name, exactly as it''s written.\nAction Input:
the input to the action, just a simple python dictionary, enclosed in curly
braces, using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce
all necessary information is gathered:\n\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question"}, {"role": "user",
"content": "\nCurrent Task: \n Analyze the provided image and describe
what you see in detail.\n Focus on main elements, colors, composition,
and any notable details.\n Image: https://media.istockphoto.com/id/946087016/photo/aerial-view-of-lower-manhattan-new-york.jpg?s=612x612&w=0&k=20&c=viLiMRznQ8v5LzKTt_LvtfPFUVl1oiyiemVdSlm29_k=\n \n\nThis
is the expect criteria for your final answer: A comprehensive description of
the image contents.\nyou MUST return the actual complete content as the final
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
available and give your best Final Answer, your job depends on it!\n\nThought:"},
{"role": "user", "content": [{"type": "text", "text": "Analyze the provided
image and describe what you see in detail."}, {"type": "image_url", "image_url":
{"url": "https://media.istockphoto.com/id/946087016/photo/aerial-view-of-lower-manhattan-new-york.jpg?s=612x612&w=0&k=20&c=viLiMRznQ8v5LzKTt_LvtfPFUVl1oiyiemVdSlm29_k="}}]},
{"role": "user", "content": "I did it wrong. Invalid Format: I missed the ''Action:''
after ''Thought:''. I will do right next, and don''t use a tool I have already
used.\n\nIf you don''t need to use any more tools, you must give your best complete
final answer, make sure it satisfies the expected criteria, use the EXACT format
below:\n\nThought: I now can give a great answer\nFinal Answer: my best complete
final answer to the task.\n\n"}], "model": "gpt-4o", "stop": ["\nObservation:"],
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '2717'
content-type:
- application/json
cookie:
- __cf_bm=kJ1pw1xjCMSxjHSS8iJC5z_j2PZxl.i387KCpj9xNZU-1735266215-1.0.1.1-Ybg0wVTsrBlpVZmtQyA1ullY8m3v2Ix0N_SYlhr9z7zKfbLeqGZEVL37YSY.dvIiLVY3XPZzMtG8Xwo6UucW6A;
_cfuvid=v_wJZ5m7qCjrnRfks0gT2GAk9yR14BdIDAQiQR7xxI8-1735266215000-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AiuInuYNldaQVo6B1EsEquT1VFMN7\",\n \"object\":
\"chat.completion\",\n \"created\": 1735266221,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I now can give a great answer\\nFinal
Answer: The image is an aerial view of Lower Manhattan in New York City. The
photograph prominently features the cluster of skyscrapers that characterizes
the area, with One World Trade Center standing out as a particularly tall and
iconic structure. The buildings vary in color, with shades of glassy blue, grey,
and natural stone dominating the skyline. In the bottom part of the image, there
is a green space, likely Battery Park, providing a stark contrast to the dense
urban environment, with trees and pathways visible. The water surrounding Manhattan
is a deep blue, and several piers jut into the harbor. The Hudson River is visible
on the left, and the East River can be seen on the right, framing the island.
The overall composition captures the bustling and vibrant nature of New York\u2019s
financial hub, with bright sunlight illuminating the buildings, casting sharp
shadows and enhancing the depth of the cityscape. The sky is clear, suggesting
a sunny day with good visibility.\",\n \"refusal\": null\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 952,\n \"completion_tokens\": 203,\n
\ \"total_tokens\": 1155,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_5f20662549\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f85d995ad1ef217-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 02:23:43 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '3108'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-input-images:
- '50000'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-input-images:
- '49999'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29998656'
x-ratelimit-reset-input-images:
- 1ms
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 2ms
x-request-id:
- req_45f0e3d457a18f973a59074d16f137b6
http_version: HTTP/1.1
status_code: 200
version: 1
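The cassette above captures the complete multimodal run — the agent first emits an `Add image to content` action, recovers after the malformed `Action:` turn, and then returns its final description — so a test can replay the whole exchange offline instead of calling api.openai.com. As a rough sketch of how a fixture like this is typically consumed with vcrpy (the cassette filename, directory, and exact test body are assumptions, not part of this diff; the role, goal, model, and tool name are taken from the recorded prompts):

```python
import vcr
from crewai import Agent, Crew, Task

# Illustrative replay setup: serve responses from the recorded cassette and
# never let the test reach the real OpenAI endpoint.
replay = vcr.VCR(
    cassette_library_dir="tests/cassettes",      # assumed location
    record_mode="none",                          # replay only, no new recordings
    filter_headers=["authorization", "cookie"],  # keep secrets out of fixtures
)


@replay.use_cassette("multimodal_agent_describe_image.yaml")  # assumed filename
def test_multimodal_agent_describes_image():
    analyst = Agent(
        role="Image Analyst",
        goal="Analyze images with high attention to detail",
        backstory="You're an expert at visual analysis.",
        multimodal=True,  # adds the "Add image to content" tool seen in the prompt above
        llm="gpt-4o",
    )
    task = Task(
        description=(
            "Analyze the provided image and describe what you see in detail. "
            # URL shortened here; the full signed URL is recorded in the cassette.
            "Image: https://media.istockphoto.com/id/946087016/photo/"
            "aerial-view-of-lower-manhattan-new-york.jpg"
        ),
        expected_output="A comprehensive description of the image contents.",
        agent=analyst,
    )
    result = Crew(agents=[analyst], tasks=[task]).kickoff()
    assert "Manhattan" in result.raw  # the recorded final answer describes Lower Manhattan
```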


@@ -0,0 +1,569 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Researcher. You''re
an expert researcher, specialized in technology, software engineering, AI and
startups. You work as a freelancer and is now working on doing research and
analysis for a new customer.\nYour personal goal is: Make the best research
and analysis on content about AI and AI agents\nYou ONLY have access to the
following tools, and should NEVER make up tools that are not listed here:\n\nTool
Name: Test Tool\nTool Arguments: {''query'': {''description'': ''Query to process'',
''type'': ''str''}}\nTool Description: A test tool that just returns the input\n\nUse
the following format:\n\nThought: you should always think about what to do\nAction:
the action to take, only one name of [Test Tool], just the name, exactly as
it''s written.\nAction Input: the input to the action, just a simple python
dictionary, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n\nOnce all necessary information is gathered:\n\nThought:
I now know the final answer\nFinal Answer: the final answer to the original
input question"}, {"role": "user", "content": "\nCurrent Task: Write a test
task\n\nThis is the expect criteria for your final answer: Test output\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1536'
content-type:
- application/json
cookie:
- _cfuvid=2u_Xw.i716TDjD2vb2mvMyWxhA4q1MM1JvbrA8CNZpI-1734895557894-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AhQfznhDMtsr58XvTuRDZoB1kxwfK\",\n \"object\":
\"chat.completion\",\n \"created\": 1734914011,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I need to come up with a suitable test
task that meets the criteria provided. I will focus on outlining a clear and
effective test task related to AI and AI agents.\\n\\nAction: Test Tool\\nAction
Input: {\\\"query\\\": \\\"Create a test task that involves evaluating the performance
of an AI agent in a given scenario, including criteria for success, tools required,
and process for assessment.\\\"}\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
298,\n \"completion_tokens\": 78,\n \"total_tokens\": 376,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_d02d531b47\"\n}\n"
headers:
CF-RAY:
- 8f6442b868fda486-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 23 Dec 2024 00:33:32 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=i6jvNjhsDne300GPAeEmyiJJKYqy7OPuamFG_kht3KE-1734914012-1.0.1.1-tCeVANAF521vkXpBdgYw.ov.fYUr6t5QC4LG_DugWyzu4C60Pi2CruTVniUgfCvkcu6rdHA5DwnaEZf2jFaRCQ;
path=/; expires=Mon, 23-Dec-24 01:03:32 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1400'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999642'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_c3e50e9ca9dc22de5572692e1a9c0f16
http_version: HTTP/1.1
status_code: 200
- request:
body: !!binary |
CrBzCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkSh3MKEgoQY3Jld2FpLnRl
bGVtZXRyeRLUCwoQEr8cFisEEEEUtXBvovq6lhIIYdkQ+ekBh3wqDENyZXcgQ3JlYXRlZDABOThc
YLAZpxMYQfCuabAZpxMYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoaCg5weXRob25fdmVy
c2lvbhIICgYzLjExLjdKLgoIY3Jld19rZXkSIgogZGUxMDFkODU1M2VhMDI0NTM3YTA4ZjgxMmVl
NmI3NGFKMQoHY3Jld19pZBImCiRmNTc2MjViZC1jZmY3LTRlNGMtYWM1Zi0xZWFiNjQyMzJjMmRK
HAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdf
bnVtYmVyX29mX3Rhc2tzEgIYAkobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgCSpIFCgtjcmV3
X2FnZW50cxKCBQr/BFt7ImtleSI6ICI4YmQyMTM5YjU5NzUxODE1MDZlNDFmZDljNDU2M2Q3NSIs
ICJpZCI6ICI1Y2Y0OWVjNy05NWYzLTRkZDctODU3Mi1mODAwNDA4NjBiMjgiLCAicm9sZSI6ICJS
ZXNlYXJjaGVyIiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDIwLCAibWF4X3JwbSI6
IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00by1taW5pIiwg
ImRlbGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZh
bHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119LCB7ImtleSI6ICI5
YTUwMTVlZjQ4OTVkYzYyNzhkNTQ4MThiYTQ0NmFmNyIsICJpZCI6ICI0MTEyM2QzZC01NmEwLTRh
NTgtYTljNi1mZjUwNjRmZjNmNTEiLCAicm9sZSI6ICJTZW5pb3IgV3JpdGVyIiwgInZlcmJvc2U/
IjogZmFsc2UsICJtYXhfaXRlciI6IDIwLCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxs
aW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00by1taW5pIiwgImRlbGVnYXRpb25fZW5hYmxlZD8i
OiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5X2xpbWl0
IjogMiwgInRvb2xzX25hbWVzIjogW119XUrvAwoKY3Jld190YXNrcxLgAwrdA1t7ImtleSI6ICI5
NDRhZWYwYmFjODQwZjFjMjdiZDgzYTkzN2JjMzYxYiIsICJpZCI6ICI3ZDM2NDFhNi1hZmM4LTRj
NmMtYjkzMy0wNGZlZjY2NjUxN2MiLCAiYXN5bmNfZXhlY3V0aW9uPyI6IGZhbHNlLCAiaHVtYW5f
aW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIlJlc2VhcmNoZXIiLCAiYWdlbnRfa2V5Ijog
IjhiZDIxMzliNTk3NTE4MTUwNmU0MWZkOWM0NTYzZDc1IiwgInRvb2xzX25hbWVzIjogW119LCB7
ImtleSI6ICI5ZjJkNGU5M2FiNTkwYzcyNTg4NzAyNzUwOGFmOTI3OCIsICJpZCI6ICIzNTVjZjFh
OS1lOTkzLTQxMTQtOWM0NC0yZDM5MDlhMDljNWYiLCAiYXN5bmNfZXhlY3V0aW9uPyI6IGZhbHNl
LCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIlNlbmlvciBXcml0ZXIiLCAi
YWdlbnRfa2V5IjogIjlhNTAxNWVmNDg5NWRjNjI3OGQ1NDgxOGJhNDQ2YWY3IiwgInRvb2xzX25h
bWVzIjogW119XXoCGAGFAQABAAASjgIKEHbV3nDt+ndNQNix1f+5+cASCL+l6KV3+FEpKgxUYXNr
IENyZWF0ZWQwATmgfo+wGacTGEEQE5CwGacTGEouCghjcmV3X2tleRIiCiBkZTEwMWQ4NTUzZWEw
MjQ1MzdhMDhmODEyZWU2Yjc0YUoxCgdjcmV3X2lkEiYKJGY1NzYyNWJkLWNmZjctNGU0Yy1hYzVm
LTFlYWI2NDIzMmMyZEouCgh0YXNrX2tleRIiCiA5NDRhZWYwYmFjODQwZjFjMjdiZDgzYTkzN2Jj
MzYxYkoxCgd0YXNrX2lkEiYKJDdkMzY0MWE2LWFmYzgtNGM2Yy1iOTMzLTA0ZmVmNjY2NTE3Y3oC
GAGFAQABAAASjgIKECqDENVoAz+3ybVKR/wz7dMSCKI9ILLFYx8SKgxUYXNrIENyZWF0ZWQwATng
63CzGacTGEE4AXKzGacTGEouCghjcmV3X2tleRIiCiBkZTEwMWQ4NTUzZWEwMjQ1MzdhMDhmODEy
ZWU2Yjc0YUoxCgdjcmV3X2lkEiYKJGY1NzYyNWJkLWNmZjctNGU0Yy1hYzVmLTFlYWI2NDIzMmMy
ZEouCgh0YXNrX2tleRIiCiA5ZjJkNGU5M2FiNTkwYzcyNTg4NzAyNzUwOGFmOTI3OEoxCgd0YXNr
X2lkEiYKJDM1NWNmMWE5LWU5OTMtNDExNC05YzQ0LTJkMzkwOWEwOWM1ZnoCGAGFAQABAAAS1AsK
EOofSLF1HDmhYMt7eIAeFo8SCCaKUQMuWNdnKgxDcmV3IENyZWF0ZWQwATkYKA62GacTGEFwlhW2
GacTGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjg2LjBKGgoOcHl0aG9uX3ZlcnNpb24SCAoGMy4x
MS43Si4KCGNyZXdfa2V5EiIKIDRlOGU0MmNmMWVhN2U2NjhhMGU5MzJhNzAyMDY1NzQ5SjEKB2Ny
ZXdfaWQSJgokMmIzNTVjZDMtY2MwNi00Y2QxLTk0YjgtZTU5YjM5OGI3MjEzShwKDGNyZXdfcHJv
Y2VzcxIMCgpzZXF1ZW50aWFsShEKC2NyZXdfbWVtb3J5EgIQAEoaChRjcmV3X251bWJlcl9vZl90
YXNrcxICGAJKGwoVY3Jld19udW1iZXJfb2ZfYWdlbnRzEgIYAkqSBQoLY3Jld19hZ2VudHMSggUK
/wRbeyJrZXkiOiAiOGJkMjEzOWI1OTc1MTgxNTA2ZTQxZmQ5YzQ1NjNkNzUiLCAiaWQiOiAiNWNm
NDllYzctOTVmMy00ZGQ3LTg1NzItZjgwMDQwODYwYjI4IiwgInJvbGUiOiAiUmVzZWFyY2hlciIs
ICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVu
Y3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9u
X2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9y
ZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfSwgeyJrZXkiOiAiOWE1MDE1ZWY0ODk1
ZGM2Mjc4ZDU0ODE4YmE0NDZhZjciLCAiaWQiOiAiNDExMjNkM2QtNTZhMC00YTU4LWE5YzYtZmY1
MDY0ZmYzZjUxIiwgInJvbGUiOiAiU2VuaW9yIFdyaXRlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAi
bWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAi
IiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJh
bGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29s
c19uYW1lcyI6IFtdfV1K7wMKCmNyZXdfdGFza3MS4AMK3QNbeyJrZXkiOiAiNjc4NDlmZjcxN2Ri
YWRhYmExYjk1ZDVmMmRmY2VlYTEiLCAiaWQiOiAiOGE5OTgxMDYtZjg5Zi00YTQ5LThjZjEtYjk4
MzQ5ZDE1NDRmIiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZh
bHNlLCAiYWdlbnRfcm9sZSI6ICJSZXNlYXJjaGVyIiwgImFnZW50X2tleSI6ICI4YmQyMTM5YjU5
NzUxODE1MDZlNDFmZDljNDU2M2Q3NSIsICJ0b29sc19uYW1lcyI6IFtdfSwgeyJrZXkiOiAiODRh
ZjlmYzFjZDMzMTk5Y2ViYjlkNDE0MjE4NWY4MDIiLCAiaWQiOiAiYTViMTg0MDgtYjA1OC00ZDE1
LTkyMmUtNDJkN2M5Y2ViYjFhIiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lu
cHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJTZW5pb3IgV3JpdGVyIiwgImFnZW50X2tleSI6
ICI5YTUwMTVlZjQ4OTVkYzYyNzhkNTQ4MThiYTQ0NmFmNyIsICJ0b29sc19uYW1lcyI6IFtdfV16
AhgBhQEAAQAAEsIJChDCLrcWQ+nu3SxOgnq50XhSEghjozRtuCFA0SoMQ3JldyBDcmVhdGVkMAE5
CDeCthmnExhBmHiIthmnExhKGgoOY3Jld2FpX3ZlcnNpb24SCAoGMC44Ni4wShoKDnB5dGhvbl92
ZXJzaW9uEggKBjMuMTEuN0ouCghjcmV3X2tleRIiCiBlM2ZkYTBmMzExMGZlODBiMTg5NDdjMDE0
NzE0MzBhNEoxCgdjcmV3X2lkEiYKJGM1ZDQ0YjY5LTRhNzMtNDA3Zi1iY2RhLTUzZmUxZTQ3YTU3
M0oeCgxjcmV3X3Byb2Nlc3MSDgoMaGllcmFyY2hpY2FsShEKC2NyZXdfbWVtb3J5EgIQAEoaChRj
cmV3X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19udW1iZXJfb2ZfYWdlbnRzEgIYAkqSBQoL
Y3Jld19hZ2VudHMSggUK/wRbeyJrZXkiOiAiOGJkMjEzOWI1OTc1MTgxNTA2ZTQxZmQ5YzQ1NjNk
NzUiLCAiaWQiOiAiNWNmNDllYzctOTVmMy00ZGQ3LTg1NzItZjgwMDQwODYwYjI4IiwgInJvbGUi
OiAiUmVzZWFyY2hlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwgIm1heF9y
cG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWlu
aSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8i
OiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfSwgeyJrZXki
OiAiOWE1MDE1ZWY0ODk1ZGM2Mjc4ZDU0ODE4YmE0NDZhZjciLCAiaWQiOiAiNDExMjNkM2QtNTZh
MC00YTU4LWE5YzYtZmY1MDY0ZmYzZjUxIiwgInJvbGUiOiAiU2VuaW9yIFdyaXRlciIsICJ2ZXJi
b3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25f
Y2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJs
ZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9s
aW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfV1K2wEKCmNyZXdfdGFza3MSzAEKyQFbeyJrZXki
OiAiNWZhNjVjMDZhOWUzMWYyYzY5NTQzMjY2OGFjZDYyZGQiLCAiaWQiOiAiNjNhYTVlOTYtYTM4
Yy00YjcyLWJiZDQtYjM2NmU5NTlhOWZhIiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1
bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJOb25lIiwgImFnZW50X2tleSI6IG51
bGwsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEuYJChA8kiyQ+AFdDSYkp0+TUWKvEgjW
0grLw8r5KioMQ3JldyBDcmVhdGVkMAE5iLivvhmnExhBeG21vhmnExhKGgoOY3Jld2FpX3ZlcnNp
b24SCAoGMC44Ni4wShoKDnB5dGhvbl92ZXJzaW9uEggKBjMuMTEuN0ouCghjcmV3X2tleRIiCiBl
M2ZkYTBmMzExMGZlODBiMTg5NDdjMDE0NzE0MzBhNEoxCgdjcmV3X2lkEiYKJGIzZGQ1MGYxLTI0
YWQtNDE5OC04ZGFhLTMwZTU0OTQ3MTlhMEoeCgxjcmV3X3Byb2Nlc3MSDgoMaGllcmFyY2hpY2Fs
ShEKC2NyZXdfbWVtb3J5EgIQAEoaChRjcmV3X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19u
dW1iZXJfb2ZfYWdlbnRzEgIYAkqSBQoLY3Jld19hZ2VudHMSggUK/wRbeyJrZXkiOiAiOGJkMjEz
OWI1OTc1MTgxNTA2ZTQxZmQ5YzQ1NjNkNzUiLCAiaWQiOiAiNWNmNDllYzctOTVmMy00ZGQ3LTg1
NzItZjgwMDQwODYwYjI4IiwgInJvbGUiOiAiUmVzZWFyY2hlciIsICJ2ZXJib3NlPyI6IGZhbHNl
LCAibWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0i
OiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2Us
ICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0
b29sc19uYW1lcyI6IFtdfSwgeyJrZXkiOiAiOWE1MDE1ZWY0ODk1ZGM2Mjc4ZDU0ODE4YmE0NDZh
ZjciLCAiaWQiOiAiNDExMjNkM2QtNTZhMC00YTU4LWE5YzYtZmY1MDY0ZmYzZjUxIiwgInJvbGUi
OiAiU2VuaW9yIFdyaXRlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwgIm1h
eF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8t
bWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlv
bj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfV1K/wEK
CmNyZXdfdGFza3MS8AEK7QFbeyJrZXkiOiAiNWZhNjVjMDZhOWUzMWYyYzY5NTQzMjY2OGFjZDYy
ZGQiLCAiaWQiOiAiNzEyODlkZTAtODQ4My00NDM2LWI2OGMtNDc1MWIzNTU0ZmUzIiwgImFzeW5j
X2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6
ICJSZXNlYXJjaGVyIiwgImFnZW50X2tleSI6ICI4YmQyMTM5YjU5NzUxODE1MDZlNDFmZDljNDU2
M2Q3NSIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEo4CChCTiJL+KK5ff9xnie6eZbEc
EghbtQixNaG5DioMVGFzayBDcmVhdGVkMAE5cIXNvhmnExhBuPbNvhmnExhKLgoIY3Jld19rZXkS
IgogZTNmZGEwZjMxMTBmZTgwYjE4OTQ3YzAxNDcxNDMwYTRKMQoHY3Jld19pZBImCiRiM2RkNTBm
MS0yNGFkLTQxOTgtOGRhYS0zMGU1NDk0NzE5YTBKLgoIdGFza19rZXkSIgogNWZhNjVjMDZhOWUz
MWYyYzY5NTQzMjY2OGFjZDYyZGRKMQoHdGFza19pZBImCiQ3MTI4OWRlMC04NDgzLTQ0MzYtYjY4
Yy00NzUxYjM1NTRmZTN6AhgBhQEAAQAAEpwBChBCdDi/i+SH0kHHlJKQjmYgEgiemV9jVU5fQSoK
VG9vbCBVc2FnZTABOVj/YL8ZpxMYQWCwZr8ZpxMYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYu
MEooCgl0b29sX25hbWUSGwoZRGVsZWdhdGUgd29yayB0byBjb3dvcmtlckoOCghhdHRlbXB0cxIC
GAF6AhgBhQEAAQAAEqUBChBRuZ6Z/nNag4ubLeZ8L/8pEghCX4biKNFb6SoTVG9vbCBSZXBlYXRl
ZCBVc2FnZTABOUj9wr8ZpxMYQdg+yb8ZpxMYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoo
Cgl0b29sX25hbWUSGwoZRGVsZWdhdGUgd29yayB0byBjb3dvcmtlckoOCghhdHRlbXB0cxICGAF6
AhgBhQEAAQAAEpwBChDnt1bxQsOb0LVscG9GDYVtEgjf62keNMl5ZyoKVG9vbCBVc2FnZTABOdha
6MAZpxMYQWii7cAZpxMYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEooCgl0b29sX25hbWUS
GwoZRGVsZWdhdGUgd29yayB0byBjb3dvcmtlckoOCghhdHRlbXB0cxICGAF6AhgBhQEAAQAAEpsB
ChDFqFA9b42EIwUxeNLTeScxEgiGFk7FwiNxVioKVG9vbCBVc2FnZTABObDAY8EZpxMYQdhIaMEZ
pxMYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEonCgl0b29sX25hbWUSGgoYQXNrIHF1ZXN0
aW9uIHRvIGNvd29ya2VySg4KCGF0dGVtcHRzEgIYAXoCGAGFAQABAAASwgkKEHpB0rbuWbSXijzV
QdTa3oQSCNSPnbmqe2PfKgxDcmV3IENyZWF0ZWQwATmIXxTCGacTGEF4GhnCGacTGEoaCg5jcmV3
YWlfdmVyc2lvbhIICgYwLjg2LjBKGgoOcHl0aG9uX3ZlcnNpb24SCAoGMy4xMS43Si4KCGNyZXdf
a2V5EiIKIGUzZmRhMGYzMTEwZmU4MGIxODk0N2MwMTQ3MTQzMGE0SjEKB2NyZXdfaWQSJgokZGJm
YzNjMjctMmRjZS00MjIyLThiYmQtYmMxMjU3OTVlNWI1Sh4KDGNyZXdfcHJvY2VzcxIOCgxoaWVy
YXJjaGljYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUob
ChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgCSpIFCgtjcmV3X2FnZW50cxKCBQr/BFt7ImtleSI6
ICI4YmQyMTM5YjU5NzUxODE1MDZlNDFmZDljNDU2M2Q3NSIsICJpZCI6ICI1Y2Y0OWVjNy05NWYz
LTRkZDctODU3Mi1mODAwNDA4NjBiMjgiLCAicm9sZSI6ICJSZXNlYXJjaGVyIiwgInZlcmJvc2U/
IjogZmFsc2UsICJtYXhfaXRlciI6IDIwLCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxs
aW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00by1taW5pIiwgImRlbGVnYXRpb25fZW5hYmxlZD8i
OiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5X2xpbWl0
IjogMiwgInRvb2xzX25hbWVzIjogW119LCB7ImtleSI6ICI5YTUwMTVlZjQ4OTVkYzYyNzhkNTQ4
MThiYTQ0NmFmNyIsICJpZCI6ICI0MTEyM2QzZC01NmEwLTRhNTgtYTljNi1mZjUwNjRmZjNmNTEi
LCAicm9sZSI6ICJTZW5pb3IgV3JpdGVyIiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6
IDIwLCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjog
ImdwdC00by1taW5pIiwgImRlbGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVf
ZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjog
W119XUrbAQoKY3Jld190YXNrcxLMAQrJAVt7ImtleSI6ICI1ZmE2NWMwNmE5ZTMxZjJjNjk1NDMy
NjY4YWNkNjJkZCIsICJpZCI6ICIyYWFjOTllMC0yNWVmLTQzN2MtYTJmZi1jZGFlMjg2ZWU2MzQi
LCAiYXN5bmNfZXhlY3V0aW9uPyI6IGZhbHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2Vu
dF9yb2xlIjogIk5vbmUiLCAiYWdlbnRfa2V5IjogbnVsbCwgInRvb2xzX25hbWVzIjogW119XXoC
GAGFAQABAAAS1QkKEM6Xt0BvAHy+TI7iLC6ovN0SCEfHP30NZESSKgxDcmV3IENyZWF0ZWQwATkg
PdnDGacTGEFIPN/DGacTGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjg2LjBKGgoOcHl0aG9uX3Zl
cnNpb24SCAoGMy4xMS43Si4KCGNyZXdfa2V5EiIKIGU2NDk1NzNhMjZlNTg3OTBjYWMyMWEzN2Nk
NDQ0MzdhSjEKB2NyZXdfaWQSJgokNjE3MDA3NGMtYzU5OS00ODkyLTkwYzYtMTcxYjhkM2Y1OTRh
ShwKDGNyZXdfcHJvY2VzcxIMCgpzZXF1ZW50aWFsShEKC2NyZXdfbWVtb3J5EgIQAEoaChRjcmV3
X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19udW1iZXJfb2ZfYWdlbnRzEgIYAkqKBQoLY3Jl
d19hZ2VudHMS+gQK9wRbeyJrZXkiOiAiMzI4MjE3YjZjMjk1OWJkZmM0N2NhZDAwZTg0ODkwZDAi
LCAiaWQiOiAiYjNmMTczZTktNjY3NS00OTFkLTgyYjctODM4NmRkMjExMDM1IiwgInJvbGUiOiAi
Q0VPIiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDIwLCAibWF4X3JwbSI6IG51bGws
ICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00by1taW5pIiwgImRlbGVn
YXRpb25fZW5hYmxlZD8iOiB0cnVlLCAiYWxsb3dfY29kZV9leGVjdXRpb24/IjogZmFsc2UsICJt
YXhfcmV0cnlfbGltaXQiOiAyLCAidG9vbHNfbmFtZXMiOiBbXX0sIHsia2V5IjogIjlhNTAxNWVm
NDg5NWRjNjI3OGQ1NDgxOGJhNDQ2YWY3IiwgImlkIjogIjQxMTIzZDNkLTU2YTAtNGE1OC1hOWM2
LWZmNTA2NGZmM2Y1MSIsICJyb2xlIjogIlNlbmlvciBXcml0ZXIiLCAidmVyYm9zZT8iOiBmYWxz
ZSwgIm1heF9pdGVyIjogMjAsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2NhbGxpbmdfbGxt
IjogIiIsICJsbG0iOiAiZ3B0LTRvLW1pbmkiLCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6IGZhbHNl
LCAiYWxsb3dfY29kZV9leGVjdXRpb24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGltaXQiOiAyLCAi
dG9vbHNfbmFtZXMiOiBbXX1dSvgBCgpjcmV3X3Rhc2tzEukBCuYBW3sia2V5IjogIjBiOWQ2NWRi
NmI3YWVkZmIzOThjNTllMmE5ZjcxZWM1IiwgImlkIjogImJiNmI1Njg3LTg5NGMtNDAyNS05M2My
LTMyYjdkZmEwZTUxMyIsICJhc3luY19leGVjdXRpb24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8i
OiBmYWxzZSwgImFnZW50X3JvbGUiOiAiQ0VPIiwgImFnZW50X2tleSI6ICIzMjgyMTdiNmMyOTU5
YmRmYzQ3Y2FkMDBlODQ4OTBkMCIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEo4CChCK
KIL9w7sqoMzG3JItjK8eEgiR4RSmJw+SMSoMVGFzayBDcmVhdGVkMAE5CCjywxmnExhByIXywxmn
ExhKLgoIY3Jld19rZXkSIgogZTY0OTU3M2EyNmU1ODc5MGNhYzIxYTM3Y2Q0NDQzN2FKMQoHY3Jl
d19pZBImCiQ2MTcwMDc0Yy1jNTk5LTQ4OTItOTBjNi0xNzFiOGQzZjU5NGFKLgoIdGFza19rZXkS
IgogMGI5ZDY1ZGI2YjdhZWRmYjM5OGM1OWUyYTlmNzFlYzVKMQoHdGFza19pZBImCiRiYjZiNTY4
Ny04OTRjLTQwMjUtOTNjMi0zMmI3ZGZhMGU1MTN6AhgBhQEAAQAAEpwBChD+/zv5udkceIEyIb7d
ne5vEgj1My75q1O7UCoKVG9vbCBVc2FnZTABOThPfMQZpxMYQcA4g8QZpxMYShoKDmNyZXdhaV92
ZXJzaW9uEggKBjAuODYuMEooCgl0b29sX25hbWUSGwoZRGVsZWdhdGUgd29yayB0byBjb3dvcmtl
ckoOCghhdHRlbXB0cxICGAF6AhgBhQEAAQAAEuAJChBIzM1Xa9IhegFDHxt6rj3eEgj9z56V1hXk
aCoMQ3JldyBDcmVhdGVkMAE5mEoMxRmnExhBoPsRxRmnExhKGgoOY3Jld2FpX3ZlcnNpb24SCAoG
MC44Ni4wShoKDnB5dGhvbl92ZXJzaW9uEggKBjMuMTEuN0ouCghjcmV3X2tleRIiCiBlNjQ5NTcz
YTI2ZTU4NzkwY2FjMjFhMzdjZDQ0NDM3YUoxCgdjcmV3X2lkEiYKJGQ4MjhhZWM2LTg2N2MtNDdh
YS04ODY4LWQwMWYwNGM0MGE0MUocCgxjcmV3X3Byb2Nlc3MSDAoKc2VxdWVudGlhbEoRCgtjcmV3
X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2ZfdGFza3MSAhgBShsKFWNyZXdfbnVtYmVyX29m
X2FnZW50cxICGAJKigUKC2NyZXdfYWdlbnRzEvoECvcEW3sia2V5IjogIjMyODIxN2I2YzI5NTli
ZGZjNDdjYWQwMGU4NDg5MGQwIiwgImlkIjogImIzZjE3M2U5LTY2NzUtNDkxZC04MmI3LTgzODZk
ZDIxMTAzNSIsICJyb2xlIjogIkNFTyIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAy
MCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJn
cHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogdHJ1ZSwgImFsbG93X2NvZGVfZXhl
Y3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119
LCB7ImtleSI6ICI5YTUwMTVlZjQ4OTVkYzYyNzhkNTQ4MThiYTQ0NmFmNyIsICJpZCI6ICI0MTEy
M2QzZC01NmEwLTRhNTgtYTljNi1mZjUwNjRmZjNmNTEiLCAicm9sZSI6ICJTZW5pb3IgV3JpdGVy
IiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDIwLCAibWF4X3JwbSI6IG51bGwsICJm
dW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00by1taW5pIiwgImRlbGVnYXRp
b25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4
X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUqDAgoKY3Jld190YXNrcxL0AQrx
AVt7ImtleSI6ICIwYjlkNjVkYjZiN2FlZGZiMzk4YzU5ZTJhOWY3MWVjNSIsICJpZCI6ICI5YTBj
ODZhZi0wYTE0LTQ4MzgtOTJmZC02NDhhZGM1NzJlMDMiLCAiYXN5bmNfZXhlY3V0aW9uPyI6IGZh
bHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIkNFTyIsICJhZ2VudF9r
ZXkiOiAiMzI4MjE3YjZjMjk1OWJkZmM0N2NhZDAwZTg0ODkwZDAiLCAidG9vbHNfbmFtZXMiOiBb
InRlc3QgdG9vbCJdfV16AhgBhQEAAQAAEo4CChDl0EBv/8sdeV8eJ45EUBpxEgj+C7UlokySqSoM
VGFzayBDcmVhdGVkMAE5oI8jxRmnExhBYO0jxRmnExhKLgoIY3Jld19rZXkSIgogZTY0OTU3M2Ey
NmU1ODc5MGNhYzIxYTM3Y2Q0NDQzN2FKMQoHY3Jld19pZBImCiRkODI4YWVjNi04NjdjLTQ3YWEt
ODg2OC1kMDFmMDRjNDBhNDFKLgoIdGFza19rZXkSIgogMGI5ZDY1ZGI2YjdhZWRmYjM5OGM1OWUy
YTlmNzFlYzVKMQoHdGFza19pZBImCiQ5YTBjODZhZi0wYTE0LTQ4MzgtOTJmZC02NDhhZGM1NzJl
MDN6AhgBhQEAAQAAEpsBChArkcRTKJCaWLUYbx8DLyvTEgikYuS5tmbKNioKVG9vbCBVc2FnZTAB
OSh+MscZpxMYQdgTOMcZpxMYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEonCgl0b29sX25h
bWUSGgoYQXNrIHF1ZXN0aW9uIHRvIGNvd29ya2VySg4KCGF0dGVtcHRzEgIYAXoCGAGFAQABAAAS
6wkKEHxFJsjiUgQromzfQHpYYMISCBkGairjk9kkKgxDcmV3IENyZWF0ZWQwATk4/rXHGacTGEGY
yrvHGacTGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjg2LjBKGgoOcHl0aG9uX3ZlcnNpb24SCAoG
My4xMS43Si4KCGNyZXdfa2V5EiIKIGU2NDk1NzNhMjZlNTg3OTBjYWMyMWEzN2NkNDQ0MzdhSjEK
B2NyZXdfaWQSJgokMjY3NzEyNzItOTRlZC00NDVkLTg1MGEtYTkyYTZjOWI5YmJkShwKDGNyZXdf
cHJvY2VzcxIMCgpzZXF1ZW50aWFsShEKC2NyZXdfbWVtb3J5EgIQAEoaChRjcmV3X251bWJlcl9v
Zl90YXNrcxICGAFKGwoVY3Jld19udW1iZXJfb2ZfYWdlbnRzEgIYAkqVBQoLY3Jld19hZ2VudHMS
hQUKggVbeyJrZXkiOiAiMzI4MjE3YjZjMjk1OWJkZmM0N2NhZDAwZTg0ODkwZDAiLCAiaWQiOiAi
YjNmMTczZTktNjY3NS00OTFkLTgyYjctODM4NmRkMjExMDM1IiwgInJvbGUiOiAiQ0VPIiwgInZl
cmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDIwLCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlv
bl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00by1taW5pIiwgImRlbGVnYXRpb25fZW5h
YmxlZD8iOiB0cnVlLCAiYWxsb3dfY29kZV9leGVjdXRpb24/IjogZmFsc2UsICJtYXhfcmV0cnlf
bGltaXQiOiAyLCAidG9vbHNfbmFtZXMiOiBbInRlc3QgdG9vbCJdfSwgeyJrZXkiOiAiOWE1MDE1
ZWY0ODk1ZGM2Mjc4ZDU0ODE4YmE0NDZhZjciLCAiaWQiOiAiNDExMjNkM2QtNTZhMC00YTU4LWE5
YzYtZmY1MDY0ZmYzZjUxIiwgInJvbGUiOiAiU2VuaW9yIFdyaXRlciIsICJ2ZXJib3NlPyI6IGZh
bHNlLCAibWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19s
bG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFs
c2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIs
ICJ0b29sc19uYW1lcyI6IFtdfV1KgwIKCmNyZXdfdGFza3MS9AEK8QFbeyJrZXkiOiAiMGI5ZDY1
ZGI2YjdhZWRmYjM5OGM1OWUyYTlmNzFlYzUiLCAiaWQiOiAiNjYzOTEwZjYtNTlkYS00NjE3LTli
ZTMtNTBmMDdhNmQ5N2U3IiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0
PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJDRU8iLCAiYWdlbnRfa2V5IjogIjMyODIxN2I2YzI5
NTliZGZjNDdjYWQwMGU4NDg5MGQwIiwgInRvb2xzX25hbWVzIjogWyJ0ZXN0IHRvb2wiXX1degIY
AYUBAAEAABKOAgoQ1qBlNY8Yu1muyMaMnchyJBII0vE2y9FMwz0qDFRhc2sgQ3JlYXRlZDABObDR
zscZpxMYQah5z8cZpxMYSi4KCGNyZXdfa2V5EiIKIGU2NDk1NzNhMjZlNTg3OTBjYWMyMWEzN2Nk
NDQ0MzdhSjEKB2NyZXdfaWQSJgokMjY3NzEyNzItOTRlZC00NDVkLTg1MGEtYTkyYTZjOWI5YmJk
Si4KCHRhc2tfa2V5EiIKIDBiOWQ2NWRiNmI3YWVkZmIzOThjNTllMmE5ZjcxZWM1SjEKB3Rhc2tf
aWQSJgokNjYzOTEwZjYtNTlkYS00NjE3LTliZTMtNTBmMDdhNmQ5N2U3egIYAYUBAAEAABKMAQoQ
a8ZDV3ZaBmcOZE5dJ87f1hII7iBRAQfEmdAqClRvb2wgVXNhZ2UwATmYcwjIGacTGEE4RxLIGacT
GEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjg2LjBKGAoJdG9vbF9uYW1lEgsKCVRlc3QgVG9vbEoO
CghhdHRlbXB0cxICGAF6AhgBhQEAAQAAEowBChBqK4036ypaH1gZ3OIOE/0HEgiF8wTQDQGRlSoK
VG9vbCBVc2FnZTABOYBiSsgZpxMYQRCYUsgZpxMYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYu
MEoYCgl0b29sX25hbWUSCwoJVGVzdCBUb29sSg4KCGF0dGVtcHRzEgIYAXoCGAGFAQABAAASwQcK
EIWSiNjtKgeNQ6oIv8gjJ+MSCG8YnypCXfw1KgxDcmV3IENyZWF0ZWQwATnYUW/KGacTGEEoenTK
GacTGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjg2LjBKGgoOcHl0aG9uX3ZlcnNpb24SCAoGMy4x
MS43Si4KCGNyZXdfa2V5EiIKIDk4MjQ2MGVlMmRkMmNmMTJhNzEzOGI3MDg1OWZlODE3SjEKB2Ny
ZXdfaWQSJgokZDNkODZjNmEtNWNmMi00MGI0LWExZGQtMzA5NTYyODdjNWE3ShwKDGNyZXdfcHJv
Y2VzcxIMCgpzZXF1ZW50aWFsShEKC2NyZXdfbWVtb3J5EgIQAEoaChRjcmV3X251bWJlcl9vZl90
YXNrcxICGAFKGwoVY3Jld19udW1iZXJfb2ZfYWdlbnRzEgIYAUrcAgoLY3Jld19hZ2VudHMSzAIK
yQJbeyJrZXkiOiAiOGJkMjEzOWI1OTc1MTgxNTA2ZTQxZmQ5YzQ1NjNkNzUiLCAiaWQiOiAiNWNm
NDllYzctOTVmMy00ZGQ3LTg1NzItZjgwMDQwODYwYjI4IiwgInJvbGUiOiAiUmVzZWFyY2hlciIs
ICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVu
Y3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9u
X2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9y
ZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFsidGVzdCB0b29sIl19XUqSAgoKY3Jld190
YXNrcxKDAgqAAlt7ImtleSI6ICJmODM5Yzg3YzNkNzU3Yzg4N2Y0Y2U3NGQxODY0YjAyYSIsICJp
ZCI6ICJjM2Y2NjY2MS00YWNjLTQ5OWQtYjJkNC1kZjI0Nzg1MTJhZGYiLCAiYXN5bmNfZXhlY3V0
aW9uPyI6IGZhbHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIlJlc2Vh
cmNoZXIiLCAiYWdlbnRfa2V5IjogIjhiZDIxMzliNTk3NTE4MTUwNmU0MWZkOWM0NTYzZDc1Iiwg
InRvb2xzX25hbWVzIjogWyJhbm90aGVyIHRlc3QgdG9vbCJdfV16AhgBhQEAAQAAEo4CChD8dNvp
UItERukk59GnvESYEghtjirHyG3B3SoMVGFzayBDcmVhdGVkMAE5MAGByhmnExhBIFeByhmnExhK
LgoIY3Jld19rZXkSIgogOTgyNDYwZWUyZGQyY2YxMmE3MTM4YjcwODU5ZmU4MTdKMQoHY3Jld19p
ZBImCiRkM2Q4NmM2YS01Y2YyLTQwYjQtYTFkZC0zMDk1NjI4N2M1YTdKLgoIdGFza19rZXkSIgog
ZjgzOWM4N2MzZDc1N2M4ODdmNGNlNzRkMTg2NGIwMmFKMQoHdGFza19pZBImCiRjM2Y2NjY2MS00
YWNjLTQ5OWQtYjJkNC1kZjI0Nzg1MTJhZGZ6AhgBhQEAAQAAEowBChDdoNfQMW/Om7LQU9gZGDrl
Egjw71DM3bnOWCoKVG9vbCBVc2FnZTABOUgPFC8apxMYQdhtKi8apxMYShoKDmNyZXdhaV92ZXJz
aW9uEggKBjAuODYuMEoYCgl0b29sX25hbWUSCwoJVGVzdCBUb29sSg4KCGF0dGVtcHRzEgIYAXoC
GAGFAQABAAA=
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '14771'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.27.0
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Mon, 23 Dec 2024 00:33:37 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Researcher. You''re
an expert researcher, specialized in technology, software engineering, AI and
startups. You work as a freelancer and is now working on doing research and
analysis for a new customer.\nYour personal goal is: Make the best research
and analysis on content about AI and AI agents\nYou ONLY have access to the
following tools, and should NEVER make up tools that are not listed here:\n\nTool
Name: Test Tool\nTool Arguments: {''query'': {''description'': ''Query to process'',
''type'': ''str''}}\nTool Description: A test tool that just returns the input\n\nUse
the following format:\n\nThought: you should always think about what to do\nAction:
the action to take, only one name of [Test Tool], just the name, exactly as
it''s written.\nAction Input: the input to the action, just a simple python
dictionary, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n\nOnce all necessary information is gathered:\n\nThought:
I now know the final answer\nFinal Answer: the final answer to the original
input question"}, {"role": "user", "content": "\nCurrent Task: Write a test
task\n\nThis is the expect criteria for your final answer: Test output\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:"}, {"role": "assistant", "content":
"I need to come up with a suitable test task that meets the criteria provided.
I will focus on outlining a clear and effective test task related to AI and
AI agents.\n\nAction: Test Tool\nAction Input: {\"query\": \"Create a test task
that involves evaluating the performance of an AI agent in a given scenario,
including criteria for success, tools required, and process for assessment.\"}\nObservation:
Processed: Create a test task that involves evaluating the performance of an
AI agent in a given scenario, including criteria for success, tools required,
and process for assessment."}], "model": "gpt-4o-mini", "stop": ["\nObservation:"],
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '2160'
content-type:
- application/json
cookie:
- _cfuvid=2u_Xw.i716TDjD2vb2mvMyWxhA4q1MM1JvbrA8CNZpI-1734895557894-0.0.1.1-604800000;
__cf_bm=i6jvNjhsDne300GPAeEmyiJJKYqy7OPuamFG_kht3KE-1734914012-1.0.1.1-tCeVANAF521vkXpBdgYw.ov.fYUr6t5QC4LG_DugWyzu4C60Pi2CruTVniUgfCvkcu6rdHA5DwnaEZf2jFaRCQ
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AhQg0Cx1FED7gOwMpJUD5cQxVzQBK\",\n \"object\":
\"chat.completion\",\n \"created\": 1734914012,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I need to compile a complete
and detailed test task based on the processed input, ensuring it includes all
necessary components as per the expect criteria.\\n\\nFinal Answer: \\n### Test
Task: Evaluating the Performance of an AI Agent\\n\\n**Objective:** \\nTo evaluate
the performance of an AI agent in simulating conversation with users in a customer
service context.\\n\\n**Scenario:**\\nAn AI agent is tasked with handling customer
inquiries on an e-commerce platform. The agent must respond to questions about
product availability, order tracking, returns, and other common queries.\\n\\n**Criteria
for Success:**\\n1. **Accuracy:** The AI should provide correct information
in at least 90% of the interactions.\\n2. **Response Time:** The average response
time should be under 2 seconds.\\n3. **User Satisfaction:** Aim for a user satisfaction
score of 85% or higher based on follow-up surveys after interactions.\\n4. **Fallback
Rate:** The AI should not default to a human agent more than 10% of the time.\\n\\n**Tools
Required:**\\n- Chatbot development platform (e.g., Dialogflow, Rasa)\\n- Metrics
tracking software (to measure accuracy, response times, and user satisfaction)\\n-
Survey tool (e.g., Google Forms, SurveyMonkey) for feedback collection\\n\\n**Process
for Assessment:**\\n1. **Setup:** Deploy the AI agent on a testing environment
simulating real customer inquiries.\\n2. **Data Collection:** Run the test for
a predetermined period (e.g., one week) or until a set number of interactions
(e.g., 1000).\\n3. **Measurement:**\\n - Record the interactions and analyze
the accuracy of the AI's responses.\\n - Measure the average response time
for each interaction.\\n - Collect user satisfaction scores via surveys sent
after the interaction.\\n4. **Analysis:** Compile the data to see if the AI
met the success criteria. Identify strengths and weaknesses in the responses.\\n5.
**Review:** Share findings with the development team to strategize improvements.\\n\\nThis
detailed task will help assess the AI agent\u2019s capabilities and provide
insights for further enhancements.\",\n \"refusal\": null\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 416,\n \"completion_tokens\": 422,\n
\ \"total_tokens\": 838,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_d02d531b47\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f6442c2ba15a486-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 23 Dec 2024 00:33:39 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '6734'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999497'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_7d8df8b840e279bd64280d161d854161
http_version: HTTP/1.1
status_code: 200
version: 1
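This last cassette drives a Researcher agent whose only tool is a stub named `Test Tool`; the tool itself is not shown in this compare view — only its argument schema in the system prompt (`{'query': {'description': 'Query to process', 'type': 'str'}}`) and the recorded observation `Processed: <query>`. A minimal stand-in matching that contract could look like the sketch below; the `BaseTool` import path and the pydantic schema are assumptions about how the test defines it, not something this diff confirms:

```python
from pydantic import BaseModel, Field

# Import path is an assumption; depending on the crewai release the base class
# may instead be exposed by the separate crewai_tools package.
from crewai.tools import BaseTool


class TestToolInput(BaseModel):
    query: str = Field(..., description="Query to process")


class TestTool(BaseTool):
    name: str = "Test Tool"
    description: str = "A test tool that just returns the input"
    args_schema: type[BaseModel] = TestToolInput

    def _run(self, query: str) -> str:
        # Mirrors the observation recorded in the cassette above.
        return f"Processed: {query}"
```

An agent would then receive it via `tools=[TestTool()]`, so the prompt advertises exactly the "Test Tool" name and schema recorded above.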


@@ -2,6 +2,7 @@ import unittest
from unittest.mock import MagicMock, patch
import requests
from crewai.cli.authentication.main import AuthenticationCommand
@@ -47,7 +48,9 @@ class TestAuthenticationCommand(unittest.TestCase):
@patch("crewai.cli.authentication.main.requests.post")
@patch("crewai.cli.authentication.main.validate_token")
@patch("crewai.cli.authentication.main.console.print")
def test_poll_for_token_success(self, mock_print, mock_validate_token, mock_post, mock_tool):
def test_poll_for_token_success(
self, mock_print, mock_validate_token, mock_post, mock_tool
):
mock_response = MagicMock()
mock_response.status_code = 200
mock_response.json.return_value = {
@@ -62,7 +65,9 @@ class TestAuthenticationCommand(unittest.TestCase):
self.auth_command._poll_for_token({"device_code": "123456"})
mock_validate_token.assert_called_once_with("TOKEN")
mock_print.assert_called_once_with("\n[bold green]Welcome to CrewAI Enterprise![/bold green]\n")
mock_print.assert_called_once_with(
"\n[bold green]Welcome to CrewAI Enterprise![/bold green]\n"
)
@patch("crewai.cli.authentication.main.requests.post")
@patch("crewai.cli.authentication.main.console.print")

View File

@@ -3,9 +3,10 @@ import unittest
from datetime import datetime, timedelta
from unittest.mock import MagicMock, patch
from crewai.cli.authentication.utils import TokenManager, validate_token
from cryptography.fernet import Fernet
from crewai.cli.authentication.utils import TokenManager, validate_token
class TestValidateToken(unittest.TestCase):
@patch("crewai.cli.authentication.utils.AsymmetricSignatureVerifier")

View File

@@ -1,10 +1,12 @@
import unittest
import json
import tempfile
import shutil
import tempfile
import unittest
from pathlib import Path
from crewai.cli.config import Settings
class TestSettings(unittest.TestCase):
def setUp(self):
self.test_dir = Path(tempfile.mkdtemp())
@@ -20,8 +22,7 @@ class TestSettings(unittest.TestCase):
def test_initialization_with_data(self):
settings = Settings(
config_path=self.config_path,
tool_repository_username="user1"
config_path=self.config_path, tool_repository_username="user1"
)
self.assertEqual(settings.tool_repository_username, "user1")
self.assertIsNone(settings.tool_repository_password)
@@ -37,22 +38,23 @@ class TestSettings(unittest.TestCase):
def test_merge_file_and_input_data(self):
self.config_path.parent.mkdir(parents=True, exist_ok=True)
with self.config_path.open("w") as f:
json.dump({
"tool_repository_username": "file_user",
"tool_repository_password": "file_pass"
}, f)
json.dump(
{
"tool_repository_username": "file_user",
"tool_repository_password": "file_pass",
},
f,
)
settings = Settings(
config_path=self.config_path,
tool_repository_username="new_user"
config_path=self.config_path, tool_repository_username="new_user"
)
self.assertEqual(settings.tool_repository_username, "new_user")
self.assertEqual(settings.tool_repository_password, "file_pass")
def test_dump_new_settings(self):
settings = Settings(
config_path=self.config_path,
tool_repository_username="user1"
config_path=self.config_path, tool_repository_username="user1"
)
settings.dump()
@@ -67,8 +69,7 @@ class TestSettings(unittest.TestCase):
json.dump({"existing_setting": "value"}, f)
settings = Settings(
config_path=self.config_path,
tool_repository_username="user1"
config_path=self.config_path, tool_repository_username="user1"
)
settings.dump()
@@ -79,10 +80,7 @@ class TestSettings(unittest.TestCase):
self.assertEqual(saved_data["tool_repository_username"], "user1")
def test_none_values(self):
settings = Settings(
config_path=self.config_path,
tool_repository_username=None
)
settings = Settings(config_path=self.config_path, tool_repository_username=None)
settings.dump()
with self.config_path.open("r") as f:

View File

@@ -231,7 +231,7 @@ class TestDeployCommand(unittest.TestCase):
[project]
name = "test_project"
version = "0.1.0"
requires-python = ">=3.10,<=3.12"
requires-python = ">=3.10,<3.13"
dependencies = ["crewai"]
""",
)
@@ -250,7 +250,7 @@ class TestDeployCommand(unittest.TestCase):
[project]
name = "test_project"
version = "0.1.0"
requires-python = ">=3.10,<=3.12"
requires-python = ">=3.10,<3.13"
dependencies = ["crewai"]
""",
)

View File

@@ -1,6 +1,7 @@
from crewai.cli.git import Repository
import pytest
from crewai.cli.git import Repository
@pytest.fixture()
def repository(fp):

View File

@@ -1,6 +1,7 @@
import os
import unittest
from unittest.mock import MagicMock, patch
from crewai.cli.plus_api import PlusAPI

View File

@@ -1,7 +1,9 @@
import pytest
import os
import shutil
import tempfile
import os
import pytest
from crewai.cli import utils

View File

@@ -85,7 +85,7 @@ def test_install_success(mock_get, mock_subprocess_run):
env=unittest.mock.ANY
)
assert "Succesfully installed sample-tool" in output
assert "Successfully installed sample-tool" in output
@patch("crewai.cli.plus_api.PlusAPI.get_tool")

View File

@@ -10,6 +10,7 @@ import instructor
import pydantic_core
import pytest
from crewai.llm import LLM
from crewai.agent import Agent
from crewai.agents.cache import CacheHandler
from crewai.crew import Crew
@@ -300,6 +301,35 @@ def test_hierarchical_process():
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_test_with_custom_llm():
"""Test that Crew.test() works correctly with custom LLM instances."""
task = Task(
description="Test task",
expected_output="Test output",
agent=researcher,
)
custom_llm = LLM(model="gpt-4", temperature=0.5)
crew = Crew(agents=[researcher], tasks=[task], process=Process.sequential)
with mock.patch('crewai.crew.CrewEvaluator') as mock_evaluator:
crew.test(n_iterations=1, openai_model_name=custom_llm)
mock_evaluator.assert_called_once_with(mock.ANY, custom_llm)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_test_backward_compatibility():
"""Test that Crew.test() maintains backward compatibility with string model names."""
task = Task(
description="Test task",
expected_output="Test output",
agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[task], process=Process.sequential)
with mock.patch('crewai.crew.CrewEvaluator') as mock_evaluator:
crew.test(n_iterations=1, openai_model_name="gpt-4")
mock_evaluator.assert_called_once_with(mock.ANY, "gpt-4")
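Taken together, the two tests above pin down the new contract for Crew.test(): the openai_model_name argument now accepts either a plain model-name string or an LLM instance. A minimal usage sketch under that assumption (the agent, task, and model settings are illustrative):

from crewai import Agent, Crew, Process, Task
from crewai.llm import LLM

researcher = Agent(
    role="Researcher",
    goal="Research AI topics",
    backstory="An experienced researcher who evaluates AI systems.",
)
task = Task(
    description="Summarize recent progress in AI agents.",
    expected_output="A short summary.",
    agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[task], process=Process.sequential)

# Backward compatible: pass a model name string.
crew.test(n_iterations=1, openai_model_name="gpt-4")

# After this change: pass any configured LLM instance instead.
crew.test(n_iterations=1, openai_model_name=LLM(model="gpt-4", temperature=0.5))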
def test_manager_llm_requirement_for_hierarchical_process():
task = Task(
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
@@ -332,22 +362,31 @@ def test_manager_agent_delegating_to_assigned_task_agent():
tasks=[task],
)
crew.kickoff()
# Check if the manager agent has the correct tools
assert crew.manager_agent is not None
assert crew.manager_agent.tools is not None
assert len(crew.manager_agent.tools) == 2
assert (
"Delegate a specific task to one of the following coworkers: Researcher\n"
in crew.manager_agent.tools[0].description
)
assert (
"Ask a specific question to one of the following coworkers: Researcher\n"
in crew.manager_agent.tools[1].description
mock_task_output = TaskOutput(
description="Mock description",
raw="mocked output",
agent="mocked agent"
)
# Because we are mocking execute_sync, we never hit the underlying _execute_core
# which sets the output attribute of the task
task.output = mock_task_output
with patch.object(Task, 'execute_sync', return_value=mock_task_output) as mock_execute_sync:
crew.kickoff()
# Verify execute_sync was called once
mock_execute_sync.assert_called_once()
# Get the tools argument from the call
_, kwargs = mock_execute_sync.call_args
tools = kwargs['tools']
# Verify the delegation tools were passed correctly
assert len(tools) == 2
assert any("Delegate a specific task to one of the following coworkers: Researcher" in tool.description for tool in tools)
assert any("Ask a specific question to one of the following coworkers: Researcher" in tool.description for tool in tools)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_manager_agent_delegating_to_all_agents():
@@ -402,9 +441,255 @@ def test_crew_with_delegating_agents():
assert (
result.raw
== "This is the complete content as specified:\nArtificial Intelligence (AI) Agents are sophisticated computer programs designed to perform tasks that typically require human intelligence, such as decision making, problem-solving, and learning. These agents operate autonomously, utilizing vast amounts of data, advanced algorithms, and machine learning techniques to analyze their environment, adapt to new information, and improve their performance over time.\n\nThe significance of AI Agents lies in their transformative potential across various industries. In healthcare, for example, they assist in diagnosing diseases with greater accuracy and speed than human practitioners, offering personalized treatment plans by analyzing patient data. In finance, AI Agents predict market trends, manage risks, and even execute trades, contributing to more stable and profitable financial systems. Customer service sectors benefit significantly from AI Agents, as they provide personalized and efficient responses, often resolving issues faster than traditional methods.\n\nMoreover, AI Agents are also making substantial contributions in fields like education and manufacturing. In education, they offer tailored learning experiences by assessing individual student needs and adjusting teaching methods accordingly. They help educators identify students who might need additional support and provide resources to enhance learning outcomes. In manufacturing, AI Agents optimize production lines, predict equipment failures, and improve supply chain management, thus boosting productivity and reducing downtime.\n\nAs these AI-powered entities continue to evolve, they are not only enhancing operational efficiencies but also driving innovation and creating new opportunities for growth and development in every sector they penetrate. The future of AI Agents looks promising, with the potential to revolutionize the way we live and work, making processes more efficient, decisions more data-driven, and solutions more innovative than ever before."
== "In the rapidly evolving landscape of technology, AI agents have emerged as formidable tools, revolutionizing how we interact with data and automate tasks. These sophisticated systems leverage machine learning and natural language processing to perform a myriad of functions, from virtual personal assistants to complex decision-making companions in industries such as finance, healthcare, and education. By mimicking human intelligence, AI agents can analyze massive data sets at unparalleled speeds, enabling businesses to uncover valuable insights, enhance productivity, and elevate user experiences to unprecedented levels.\n\nOne of the most striking aspects of AI agents is their adaptability; they learn from their interactions and continuously improve their performance over time. This feature is particularly valuable in customer service where AI agents can address inquiries, resolve issues, and provide personalized recommendations without the limitations of human fatigue. Moreover, with intuitive interfaces, AI agents enhance user interactions, making technology more accessible and user-friendly, thereby breaking down barriers that have historically hindered digital engagement.\n\nDespite their immense potential, the deployment of AI agents raises important ethical and practical considerations. Issues related to privacy, data security, and the potential for job displacement necessitate thoughtful dialogue and proactive measures. Striking a balance between technological innovation and societal impact will be crucial as organizations integrate these agents into their operations. Additionally, ensuring transparency in AI decision-making processes is vital to maintain public trust as AI agents become an integral part of daily life.\n\nLooking ahead, the future of AI agents appears bright, with ongoing advancements promising even greater capabilities. As we continue to harness the power of AI, we can expect these agents to play a transformative role in shaping various sectors—streamlining workflows, enabling smarter decision-making, and fostering more personalized experiences. Embracing this technology responsibly can lead to a future where AI agents not only augment human effort but also inspire creativity and efficiency across the board, ultimately redefining our interaction with the digital world."
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_with_delegating_agents_should_not_override_task_tools():
from typing import Type
from pydantic import BaseModel, Field
from crewai.tools import BaseTool
class TestToolInput(BaseModel):
"""Input schema for TestTool."""
query: str = Field(..., description="Query to process")
class TestTool(BaseTool):
name: str = "Test Tool"
description: str = "A test tool that just returns the input"
args_schema: Type[BaseModel] = TestToolInput
def _run(self, query: str) -> str:
return f"Processed: {query}"
# Create a task with the test tool
tasks = [
Task(
description="Produce and amazing 1 paragraph draft of an article about AI Agents.",
expected_output="A 4 paragraph article about AI.",
agent=ceo,
tools=[TestTool()],
)
]
crew = Crew(
agents=[ceo, writer],
process=Process.sequential,
tasks=tasks,
)
mock_task_output = TaskOutput(
description="Mock description",
raw="mocked output",
agent="mocked agent"
)
# Because we are mocking execute_sync, we never hit the underlying _execute_core
# which sets the output attribute of the task
tasks[0].output = mock_task_output
with patch.object(Task, 'execute_sync', return_value=mock_task_output) as mock_execute_sync:
crew.kickoff()
# Execute the task and verify both tools are present
_, kwargs = mock_execute_sync.call_args
tools = kwargs['tools']
assert any(isinstance(tool, TestTool) for tool in tools), "TestTool should be present"
assert any("delegate" in tool.name.lower() for tool in tools), "Delegation tool should be present"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_with_delegating_agents_should_not_override_agent_tools():
from typing import Type
from pydantic import BaseModel, Field
from crewai.tools import BaseTool
class TestToolInput(BaseModel):
"""Input schema for TestTool."""
query: str = Field(..., description="Query to process")
class TestTool(BaseTool):
name: str = "Test Tool"
description: str = "A test tool that just returns the input"
args_schema: Type[BaseModel] = TestToolInput
def _run(self, query: str) -> str:
return f"Processed: {query}"
new_ceo = ceo.model_copy()
new_ceo.tools = [TestTool()]
# Create a task with the test tool
tasks = [
Task(
description="Produce and amazing 1 paragraph draft of an article about AI Agents.",
expected_output="A 4 paragraph article about AI.",
agent=new_ceo
)
]
crew = Crew(
agents=[new_ceo, writer],
process=Process.sequential,
tasks=tasks,
)
mock_task_output = TaskOutput(
description="Mock description",
raw="mocked output",
agent="mocked agent"
)
# Because we are mocking execute_sync, we never hit the underlying _execute_core
# which sets the output attribute of the task
tasks[0].output = mock_task_output
with patch.object(Task, 'execute_sync', return_value=mock_task_output) as mock_execute_sync:
crew.kickoff()
# Execute the task and verify both tools are present
_, kwargs = mock_execute_sync.call_args
tools = kwargs['tools']
assert any(isinstance(tool, TestTool) for tool in new_ceo.tools), "TestTool should be present"
assert any("delegate" in tool.name.lower() for tool in tools), "Delegation tool should be present"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_task_tools_override_agent_tools():
from typing import Type
from pydantic import BaseModel, Field
from crewai.tools import BaseTool
class TestToolInput(BaseModel):
"""Input schema for TestTool."""
query: str = Field(..., description="Query to process")
class TestTool(BaseTool):
name: str = "Test Tool"
description: str = "A test tool that just returns the input"
args_schema: Type[BaseModel] = TestToolInput
def _run(self, query: str) -> str:
return f"Processed: {query}"
class AnotherTestTool(BaseTool):
name: str = "Another Test Tool"
description: str = "Another test tool"
args_schema: Type[BaseModel] = TestToolInput
def _run(self, query: str) -> str:
return f"Another processed: {query}"
# Set agent tools
new_researcher = researcher.model_copy()
new_researcher.tools = [TestTool()]
# Create task with different tools
task = Task(
description="Write a test task",
expected_output="Test output",
agent=new_researcher,
tools=[AnotherTestTool()]
)
crew = Crew(
agents=[new_researcher],
tasks=[task],
process=Process.sequential
)
crew.kickoff()
# Verify task tools override agent tools
assert len(task.tools) == 1 # AnotherTestTool
assert any(isinstance(tool, AnotherTestTool) for tool in task.tools)
assert not any(isinstance(tool, TestTool) for tool in task.tools)
# Verify agent tools remain unchanged
assert len(new_researcher.tools) == 1
assert isinstance(new_researcher.tools[0], TestTool)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_task_tools_override_agent_tools_with_allow_delegation():
"""
Test that task tools override agent tools while preserving delegation tools when allow_delegation=True
"""
from typing import Type
from pydantic import BaseModel, Field
from crewai.tools import BaseTool
class TestToolInput(BaseModel):
query: str = Field(..., description="Query to process")
class TestTool(BaseTool):
name: str = "Test Tool"
description: str = "A test tool that just returns the input"
args_schema: Type[BaseModel] = TestToolInput
def _run(self, query: str) -> str:
return f"Processed: {query}"
class AnotherTestTool(BaseTool):
name: str = "Another Test Tool"
description: str = "Another test tool"
args_schema: Type[BaseModel] = TestToolInput
def _run(self, query: str) -> str:
return f"Another processed: {query}"
# Set up agents with tools and allow_delegation
researcher_with_delegation = researcher.model_copy()
researcher_with_delegation.allow_delegation = True
researcher_with_delegation.tools = [TestTool()]
writer_for_delegation = writer.model_copy()
# Create a task with different tools
task = Task(
description="Write a test task",
expected_output="Test output",
agent=researcher_with_delegation,
tools=[AnotherTestTool()],
)
crew = Crew(
agents=[researcher_with_delegation, writer_for_delegation],
tasks=[task],
process=Process.sequential,
)
mock_task_output = TaskOutput(
description="Mock description",
raw="mocked output",
agent="mocked agent"
)
# We mock execute_sync to verify which tools get used at runtime
with patch.object(Task, "execute_sync", return_value=mock_task_output) as mock_execute_sync:
crew.kickoff()
# Inspect the call kwargs to verify the actual tools passed to execution
_, kwargs = mock_execute_sync.call_args
used_tools = kwargs["tools"]
# Confirm AnotherTestTool is present but TestTool is not
assert any(isinstance(tool, AnotherTestTool) for tool in used_tools), "AnotherTestTool should be present"
assert not any(isinstance(tool, TestTool) for tool in used_tools), "TestTool should not be present among used tools"
# Confirm delegation tool(s) are present
assert any("delegate" in tool.name.lower() for tool in used_tools), "Delegation tool should be present"
# Finally, make sure the agent's original tools remain unchanged
assert len(researcher_with_delegation.tools) == 1
assert isinstance(researcher_with_delegation.tools[0], TestTool)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_verbose_output(capsys):
@@ -868,7 +1153,7 @@ def test_kickoff_for_each_empty_input():
assert results == []
@pytest.mark.vcr(filter_headers=["authorization"])
@pytest.mark.vcr(filter_headeruvs=["authorization"])
def test_kickoff_for_each_invalid_input():
"""Tests if kickoff_for_each raises TypeError for invalid input types."""
@@ -1193,12 +1478,22 @@ def test_code_execution_flag_adds_code_tool_upon_kickoff():
crew = Crew(agents=[programmer], tasks=[task])
with patch.object(Agent, "execute_task") as executor:
executor.return_value = "ok"
crew.kickoff()
assert len(programmer.tools) == 1
assert programmer.tools[0].__class__ == CodeInterpreterTool
mock_task_output = TaskOutput(
description="Mock description",
raw="mocked output",
agent="mocked agent"
)
with patch.object(Task, "execute_sync", return_value=mock_task_output) as mock_execute_sync:
crew.kickoff()
# Get the tools that were actually used in execution
_, kwargs = mock_execute_sync.call_args
used_tools = kwargs["tools"]
# Verify that exactly one tool was used and it was a CodeInterpreterTool
assert len(used_tools) == 1, "Should have exactly one tool"
assert isinstance(used_tools[0], CodeInterpreterTool), "Tool should be CodeInterpreterTool"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_delegation_is_not_enabled_if_there_are_only_one_agent():
@@ -1307,21 +1602,37 @@ def test_hierarchical_crew_creation_tasks_with_agents():
process=Process.hierarchical,
manager_llm="gpt-4o",
)
crew.kickoff()
assert crew.manager_agent is not None
assert crew.manager_agent.tools is not None
assert (
"Delegate a specific task to one of the following coworkers: Senior Writer\n"
in crew.manager_agent.tools[0].description
mock_task_output = TaskOutput(
description="Mock description",
raw="mocked output",
agent="mocked agent"
)
# Because we are mocking execute_sync, we never hit the underlying _execute_core
# which sets the output attribute of the task
task.output = mock_task_output
with patch.object(Task, 'execute_sync', return_value=mock_task_output) as mock_execute_sync:
crew.kickoff()
# Verify execute_sync was called once
mock_execute_sync.assert_called_once()
# Get the tools argument from the call
_, kwargs = mock_execute_sync.call_args
tools = kwargs['tools']
# Verify the delegation tools were passed correctly
assert len(tools) == 2
assert any("Delegate a specific task to one of the following coworkers: Senior Writer" in tool.description for tool in tools)
assert any("Ask a specific question to one of the following coworkers: Senior Writer" in tool.description for tool in tools)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_hierarchical_crew_creation_tasks_with_async_execution():
"""
Agents are not required for tasks in a hierarchical process but sometimes they are still added
This test makes sure that the manager still delegates the task to the agent even if the agent is passed in the task
Tests that async tasks in hierarchical crews are handled correctly with proper delegation tools
"""
task = Task(
description="Write one amazing paragraph about AI.",
@@ -1337,14 +1648,35 @@ def test_hierarchical_crew_creation_tasks_with_async_execution():
manager_llm="gpt-4o",
)
crew.kickoff()
assert crew.manager_agent is not None
assert crew.manager_agent.tools is not None
assert (
"Delegate a specific task to one of the following coworkers: Senior Writer\n"
in crew.manager_agent.tools[0].description
mock_task_output = TaskOutput(
description="Mock description",
raw="mocked output",
agent="mocked agent"
)
# Create a mock Future that returns our TaskOutput
mock_future = MagicMock(spec=Future)
mock_future.result.return_value = mock_task_output
# Because we are mocking execute_async, we never hit the underlying _execute_core
# which sets the output attribute of the task
task.output = mock_task_output
with patch.object(Task, 'execute_async', return_value=mock_future) as mock_execute_async:
crew.kickoff()
# Verify execute_async was called once
mock_execute_async.assert_called_once()
# Get the tools argument from the call
_, kwargs = mock_execute_async.call_args
tools = kwargs['tools']
# Verify the delegation tools were passed correctly
assert len(tools) == 2
assert any("Delegate a specific task to one of the following coworkers: Senior Writer\n" in tool.description for tool in tools)
assert any("Ask a specific question to one of the following coworkers: Senior Writer\n" in tool.description for tool in tools)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_hierarchical_crew_creation_tasks_with_sync_last():
@@ -2583,3 +2915,244 @@ def test_hierarchical_verbose_false_manager_agent():
assert crew.manager_agent is not None
assert not crew.manager_agent.verbose
def test_task_tools_preserve_code_execution_tools():
"""
Test that task tools don't override code execution tools when allow_code_execution=True
"""
from typing import Type
from crewai_tools import CodeInterpreterTool
from pydantic import BaseModel, Field
from crewai.tools import BaseTool
class TestToolInput(BaseModel):
"""Input schema for TestTool."""
query: str = Field(..., description="Query to process")
class TestTool(BaseTool):
name: str = "Test Tool"
description: str = "A test tool that just returns the input"
args_schema: Type[BaseModel] = TestToolInput
def _run(self, query: str) -> str:
return f"Processed: {query}"
# Create a programmer agent with code execution enabled
programmer = Agent(
role="Programmer",
goal="Write code to solve problems.",
backstory="You're a programmer who loves to solve problems with code.",
allow_delegation=True,
allow_code_execution=True,
)
# Create a code reviewer agent
reviewer = Agent(
role="Code Reviewer",
goal="Review code for bugs and improvements",
backstory="You're an experienced code reviewer who ensures code quality and best practices.",
allow_delegation=True,
allow_code_execution=True,
)
# Create a task with its own tools
task = Task(
description="Write a program to calculate fibonacci numbers.",
expected_output="A working fibonacci calculator.",
agent=programmer,
tools=[TestTool()]
)
crew = Crew(
agents=[programmer, reviewer],
tasks=[task],
process=Process.sequential,
)
mock_task_output = TaskOutput(
description="Mock description",
raw="mocked output",
agent="mocked agent"
)
with patch.object(Task, "execute_sync", return_value=mock_task_output) as mock_execute_sync:
crew.kickoff()
# Get the tools that were actually used in execution
_, kwargs = mock_execute_sync.call_args
used_tools = kwargs["tools"]
# Verify all expected tools are present
assert any(isinstance(tool, TestTool) for tool in used_tools), "Task's TestTool should be present"
assert any(isinstance(tool, CodeInterpreterTool) for tool in used_tools), "CodeInterpreterTool should be present"
assert any("delegate" in tool.name.lower() for tool in used_tools), "Delegation tool should be present"
# Verify the total number of tools (TestTool + CodeInterpreter + 2 delegation tools)
assert len(used_tools) == 4, "Should have TestTool, CodeInterpreter, and 2 delegation tools"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_multimodal_flag_adds_multimodal_tools():
"""
Test that an agent with multimodal=True automatically has multimodal tools added to the task execution.
"""
from crewai.tools.agent_tools.add_image_tool import AddImageTool
# Create an agent that supports multimodal
multimodal_agent = Agent(
role="Multimodal Analyst",
goal="Handle multiple media types (text, images, etc.).",
backstory="You're an agent specialized in analyzing text, images, and other media.",
allow_delegation=False,
multimodal=True, # crucial for adding the multimodal tool
)
# Create a dummy task
task = Task(
description="Describe what's in this image and generate relevant metadata.",
expected_output="An image description plus any relevant metadata.",
agent=multimodal_agent,
)
# Define a crew with the multimodal agent
crew = Crew(agents=[multimodal_agent], tasks=[task], process=Process.sequential)
mock_task_output = TaskOutput(
description="Mock description",
raw="mocked output",
agent="mocked agent"
)
# Mock execute_sync to verify the tools passed at runtime
with patch.object(Task, "execute_sync", return_value=mock_task_output) as mock_execute_sync:
crew.kickoff()
# Get the tools that were actually used in execution
_, kwargs = mock_execute_sync.call_args
used_tools = kwargs["tools"]
# Check that the multimodal tool was added
assert any(isinstance(tool, AddImageTool) for tool in used_tools), (
"AddImageTool should be present when agent is multimodal"
)
# Verify we have exactly one tool (just the AddImageTool)
assert len(used_tools) == 1, "Should only have the AddImageTool"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_multimodal_agent_image_tool_handling():
"""
Test that multimodal agents properly handle image tools in the CrewAgentExecutor
"""
# Create a multimodal agent
multimodal_agent = Agent(
role="Image Analyst",
goal="Analyze images and provide descriptions",
backstory="You're an expert at analyzing and describing images.",
allow_delegation=False,
multimodal=True,
)
# Create a task that involves image analysis
task = Task(
description="Analyze this image and describe what you see.",
expected_output="A detailed description of the image.",
agent=multimodal_agent,
)
crew = Crew(agents=[multimodal_agent], tasks=[task])
# Mock the image tool response
mock_image_tool_result = {
"role": "user",
"content": [
{"type": "text", "text": "Please analyze this image"},
{
"type": "image_url",
"image_url": {
"url": "https://example.com/test-image.jpg",
},
},
],
}
# Create a mock task output for the final result
mock_task_output = TaskOutput(
description="Mock description",
raw="A detailed analysis of the image",
agent="Image Analyst"
)
with patch.object(Task, 'execute_sync') as mock_execute_sync:
# Set up the mock to return our task output
mock_execute_sync.return_value = mock_task_output
# Execute the crew
crew.kickoff()
# Get the tools that were passed to execute_sync
_, kwargs = mock_execute_sync.call_args
tools = kwargs['tools']
# Verify the AddImageTool is present and properly configured
image_tools = [tool for tool in tools if tool.name == "Add image to content"]
assert len(image_tools) == 1, "Should have exactly one AddImageTool"
# Test the tool's execution
image_tool = image_tools[0]
result = image_tool._run(
image_url="https://example.com/test-image.jpg",
action="Please analyze this image"
)
# Verify the tool returns the expected format
assert result == mock_image_tool_result
assert result["role"] == "user"
assert len(result["content"]) == 2
assert result["content"][0]["type"] == "text"
assert result["content"][1]["type"] == "image_url"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_multimodal_agent_live_image_analysis():
"""
Test that multimodal agents can analyze images through a real API call
"""
# Create a multimodal agent
image_analyst = Agent(
role="Image Analyst",
goal="Analyze images with high attention to detail",
backstory="You're an expert at visual analysis, trained to notice and describe details in images.",
allow_delegation=False,
multimodal=True,
verbose=True,
llm="gpt-4o"
)
# Create a task for image analysis
analyze_image = Task(
description="""
Analyze the provided image and describe what you see in detail.
Focus on main elements, colors, composition, and any notable details.
Image: {image_url}
""",
expected_output="A comprehensive description of the image contents.",
agent=image_analyst
)
# Create and run the crew
crew = Crew(
agents=[image_analyst],
tasks=[analyze_image]
)
# Execute with an image URL
result = crew.kickoff(inputs={
"image_url": "https://media.istockphoto.com/id/946087016/photo/aerial-view-of-lower-manhattan-new-york.jpg?s=612x612&w=0&k=20&c=viLiMRznQ8v5LzKTt_LvtfPFUVl1oiyiemVdSlm29_k="
})
# Verify we got a meaningful response
assert isinstance(result.raw, str)
assert len(result.raw) > 100 # Expecting a detailed analysis
assert "error" not in result.raw.lower() # No error messages in response

View File

@@ -3,6 +3,7 @@
import asyncio
import pytest
from crewai.flow.flow import Flow, and_, listen, or_, router, start
@@ -262,3 +263,62 @@ def test_flow_with_custom_state():
flow = StateFlow()
flow.kickoff()
assert flow.counter == 2
def test_router_with_multiple_conditions():
"""Test a router that triggers when any of multiple steps complete (OR condition),
and another router that triggers only after all specified steps complete (AND condition).
"""
execution_order = []
class ComplexRouterFlow(Flow):
@start()
def step_a(self):
execution_order.append("step_a")
@start()
def step_b(self):
execution_order.append("step_b")
@router(or_("step_a", "step_b"))
def router_or(self):
execution_order.append("router_or")
return "next_step_or"
@listen("next_step_or")
def handle_next_step_or_event(self):
execution_order.append("handle_next_step_or_event")
@listen(handle_next_step_or_event)
def branch_2_step(self):
execution_order.append("branch_2_step")
@router(and_(handle_next_step_or_event, branch_2_step))
def router_and(self):
execution_order.append("router_and")
return "final_step"
@listen("final_step")
def log_final_step(self):
execution_order.append("log_final_step")
flow = ComplexRouterFlow()
flow.kickoff()
assert "step_a" in execution_order
assert "step_b" in execution_order
assert "router_or" in execution_order
assert "handle_next_step_or_event" in execution_order
assert "branch_2_step" in execution_order
assert "router_and" in execution_order
assert "log_final_step" in execution_order
# Check that the AND router triggered after both relevant steps:
assert execution_order.index("router_and") > execution_order.index(
"handle_next_step_or_event"
)
assert execution_order.index("router_and") > execution_order.index("branch_2_step")
# final_step should run after router_and
assert execution_order.index("log_final_step") > execution_order.index("router_and")

View File

@@ -1,8 +1,12 @@
"""Test Knowledge creation and querying functionality."""
from pathlib import Path
from typing import List, Union
from unittest.mock import patch
import pytest
from crewai.knowledge.source.crew_docling_source import CrewDoclingSource
from crewai.knowledge.source.csv_knowledge_source import CSVKnowledgeSource
from crewai.knowledge.source.excel_knowledge_source import ExcelKnowledgeSource
from crewai.knowledge.source.json_knowledge_source import JSONKnowledgeSource
@@ -10,8 +14,6 @@ from crewai.knowledge.source.pdf_knowledge_source import PDFKnowledgeSource
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
from crewai.knowledge.source.text_file_knowledge_source import TextFileKnowledgeSource
import pytest
@pytest.fixture(autouse=True)
def mock_vector_db():
@@ -200,7 +202,7 @@ def test_single_short_file(mock_vector_db, tmpdir):
f.write(content)
file_source = TextFileKnowledgeSource(
file_path=file_path, metadata={"preference": "personal"}
file_paths=[file_path], metadata={"preference": "personal"}
)
mock_vector_db.sources = [file_source]
mock_vector_db.query.return_value = [{"context": content, "score": 0.9}]
@@ -242,7 +244,7 @@ def test_single_2k_character_file(mock_vector_db, tmpdir):
f.write(content)
file_source = TextFileKnowledgeSource(
file_path=file_path, metadata={"preference": "personal"}
file_paths=[file_path], metadata={"preference": "personal"}
)
mock_vector_db.sources = [file_source]
mock_vector_db.query.return_value = [{"context": content, "score": 0.9}]
@@ -279,7 +281,7 @@ def test_multiple_short_files(mock_vector_db, tmpdir):
file_paths.append((file_path, item["metadata"]))
file_sources = [
TextFileKnowledgeSource(file_path=path, metadata=metadata)
TextFileKnowledgeSource(file_paths=[path], metadata=metadata)
for path, metadata in file_paths
]
mock_vector_db.sources = file_sources
@@ -352,7 +354,7 @@ def test_multiple_2k_character_files(mock_vector_db, tmpdir):
file_paths.append(file_path)
file_sources = [
TextFileKnowledgeSource(file_path=path, metadata={"preference": "personal"})
TextFileKnowledgeSource(file_paths=[path], metadata={"preference": "personal"})
for path in file_paths
]
mock_vector_db.sources = file_sources
@@ -399,7 +401,7 @@ def test_hybrid_string_and_files(mock_vector_db, tmpdir):
file_paths.append(file_path)
file_sources = [
TextFileKnowledgeSource(file_path=path, metadata={"preference": "personal"})
TextFileKnowledgeSource(file_paths=[path], metadata={"preference": "personal"})
for path in file_paths
]
@@ -424,7 +426,7 @@ def test_pdf_knowledge_source(mock_vector_db):
# Create a PDFKnowledgeSource
pdf_source = PDFKnowledgeSource(
file_path=pdf_path, metadata={"preference": "personal"}
file_paths=[pdf_path], metadata={"preference": "personal"}
)
mock_vector_db.sources = [pdf_source]
mock_vector_db.query.return_value = [
@@ -461,7 +463,7 @@ def test_csv_knowledge_source(mock_vector_db, tmpdir):
# Create a CSVKnowledgeSource
csv_source = CSVKnowledgeSource(
file_path=csv_path, metadata={"preference": "personal"}
file_paths=[csv_path], metadata={"preference": "personal"}
)
mock_vector_db.sources = [csv_source]
mock_vector_db.query.return_value = [
@@ -496,7 +498,7 @@ def test_json_knowledge_source(mock_vector_db, tmpdir):
# Create a JSONKnowledgeSource
json_source = JSONKnowledgeSource(
file_path=json_path, metadata={"preference": "personal"}
file_paths=[json_path], metadata={"preference": "personal"}
)
mock_vector_db.sources = [json_source]
mock_vector_db.query.return_value = [
@@ -529,7 +531,7 @@ def test_excel_knowledge_source(mock_vector_db, tmpdir):
# Create an ExcelKnowledgeSource
excel_source = ExcelKnowledgeSource(
file_path=excel_path, metadata={"preference": "personal"}
file_paths=[excel_path], metadata={"preference": "personal"}
)
mock_vector_db.sources = [excel_source]
mock_vector_db.query.return_value = [
@@ -543,3 +545,42 @@ def test_excel_knowledge_source(mock_vector_db, tmpdir):
# Assert that the correct information is retrieved
assert any("30" in result["context"] for result in results)
mock_vector_db.query.assert_called_once()
def test_docling_source(mock_vector_db):
docling_source = CrewDoclingSource(
file_paths=[
"https://lilianweng.github.io/posts/2024-11-28-reward-hacking/",
],
)
mock_vector_db.sources = [docling_source]
mock_vector_db.query.return_value = [
{
"context": "Reward hacking is a technique used to improve the performance of reinforcement learning agents.",
"score": 0.9,
}
]
# Perform a query
query = "What is reward hacking?"
results = mock_vector_db.query(query)
assert any("reward hacking" in result["context"].lower() for result in results)
mock_vector_db.query.assert_called_once()
def test_multiple_docling_sources():
urls: List[Union[Path, str]] = [
"https://lilianweng.github.io/posts/2024-11-28-reward-hacking/",
"https://lilianweng.github.io/posts/2024-07-07-hallucination/",
]
docling_source = CrewDoclingSource(file_paths=urls)
assert docling_source.file_paths == urls
assert docling_source.content is not None
def test_docling_source_with_local_file():
current_dir = Path(__file__).parent
pdf_path = current_dir / "crewai_quickstart.pdf"
docling_source = CrewDoclingSource(file_paths=[pdf_path])
assert docling_source.file_paths == [pdf_path]
assert docling_source.content is not None
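The hunks above migrate every knowledge source constructor from a single file_path argument to a file_paths list. A minimal before/after sketch of the new call shape, assuming the listed files exist locally (paths and metadata are illustrative):

from pathlib import Path

from crewai.knowledge.source.pdf_knowledge_source import PDFKnowledgeSource
from crewai.knowledge.source.text_file_knowledge_source import TextFileKnowledgeSource

# Before: TextFileKnowledgeSource(file_path=Path("notes.txt"), metadata={"preference": "personal"})
# After: sources accept a list, so one source can back several files.
text_source = TextFileKnowledgeSource(
    file_paths=[Path("notes.txt"), Path("more_notes.txt")],
    metadata={"preference": "personal"},
)
pdf_source = PDFKnowledgeSource(
    file_paths=[Path("crewai_quickstart.pdf")],
    metadata={"preference": "personal"},
)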

View File

@@ -1,5 +1,7 @@
import pytest
from unittest.mock import patch
import pytest
from crewai.agent import Agent
from crewai.crew import Crew
from crewai.memory.short_term.short_term_memory import ShortTermMemory

View File

@@ -6,12 +6,13 @@ import os
from unittest.mock import MagicMock, patch
import pytest
from pydantic import BaseModel
from pydantic_core import ValidationError
from crewai import Agent, Crew, Process, Task
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
from crewai.utilities.converter import Converter
from pydantic import BaseModel
from pydantic_core import ValidationError
def test_task_tool_reflect_agent_tools():
@@ -685,7 +686,7 @@ def test_increment_tool_errors():
with patch.object(Task, "increment_tools_errors") as increment_tools_errors:
increment_tools_errors.return_value = None
crew.kickoff()
assert len(increment_tools_errors.mock_calls) == 12
assert len(increment_tools_errors.mock_calls) > 0
def test_task_definition_based_on_dict():
@@ -735,6 +736,48 @@ def test_interpolate_inputs():
assert task.expected_output == "Bullet point list of 5 interesting ideas about ML."
def test_interpolate_only():
"""Test the interpolate_only method for various scenarios including JSON structure preservation."""
task = Task(
description="Unused in this test",
expected_output="Unused in this test"
)
# Test JSON structure preservation
json_string = '{"info": "Look at {placeholder}", "nested": {"val": "{nestedVal}"}}'
result = task.interpolate_only(
input_string=json_string,
inputs={"placeholder": "the data", "nestedVal": "something else"}
)
assert '"info": "Look at the data"' in result
assert '"val": "something else"' in result
assert "{placeholder}" not in result
assert "{nestedVal}" not in result
# Test normal string interpolation
normal_string = "Hello {name}, welcome to {place}!"
result = task.interpolate_only(
input_string=normal_string,
inputs={"name": "John", "place": "CrewAI"}
)
assert result == "Hello John, welcome to CrewAI!"
# Test empty string
result = task.interpolate_only(
input_string="",
inputs={"unused": "value"}
)
assert result == ""
# Test string with no placeholders
no_placeholders = "Hello, this is a test"
result = task.interpolate_only(
input_string=no_placeholders,
inputs={"unused": "value"}
)
assert result == no_placeholders
def test_task_output_str_with_pydantic():
from crewai.tasks.output_format import OutputFormat

Some files were not shown because too many files have changed in this diff.