Compare commits

...

57 Commits

Author SHA1 Message Date
João Moura
0ab072a95e preparing new version 2024-09-18 04:36:05 -03:00
João Moura
5e8322b272 printing max rpm message in different color 2024-09-18 04:35:18 -03:00
João Moura
5a3b888f43 Updating all cassetes 2024-09-18 04:17:41 -03:00
João Moura
d7473edb41 always ending on a user message 2024-09-18 04:17:20 -03:00
João Moura
d125c85a2b updating dependenceis 2024-09-18 03:26:46 -03:00
João Moura
b46e663778 preparing new version 2024-09-18 03:24:20 -03:00
João Moura
2787c9b0ef quick bug fixes 2024-09-18 03:22:56 -03:00
João Moura
e77442cf34 Removing LangChain and Rebuilding Executor (#1322)
* rebuilding executor

* removing langchain

* Making all tests good

* fixing types and adding ability for nor using system prompts

* improving types

* pleasing the types gods

* pleasing the types gods

* fixing parser, tools and executor

* making sure all tests pass

* final pass

* fixing type

* Updating Docs

* preparing to cut new version
2024-09-16 14:14:04 -03:00
Paul Nugent
322780a5f3 Merge pull request #1315 from crewAIInc/docs_update
update readme.md
2024-09-10 15:26:00 +01:00
Rip&Tear
a54d34ea5b update readme.md 2024-09-10 21:56:46 +08:00
João Moura
bc793749a5 preparing enw version with deploy 2024-09-07 11:17:12 -07:00
João Moura
a9916940ef preparing new verison 0.55.1 2024-09-07 10:16:07 -07:00
João Moura
b7f4931de5 updating dependencies 2024-09-07 00:55:21 -07:00
João Moura
327b728bef preparing to cut new version 2024-09-07 00:34:34 -07:00
Sean
a9510eec88 Update LLM-Connections.md (#1181)
* Update LLM-Connections.md

* Update LLM-Connections.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 04:31:09 -03:00
Brandon Hancock (bhancock_ai)
d6db557f50 Update regex (#1228) 2024-09-07 04:27:58 -03:00
Brandon Hancock (bhancock_ai)
5ae56e3f72 add in 2 small improvements based on joao feedback (#1264) 2024-09-07 04:13:23 -03:00
Astha Puri
1c9ebb59b1 Update Start-a-New-CrewAI-Project-Template-Method.md (#1276)
* Update Start-a-New-CrewAI-Project-Template-Method.md

* Update Start-a-New-CrewAI-Project-Template-Method.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 04:12:51 -03:00
Astha Puri
f520ceeb0d Add missing virtual environment commands (#1277)
* Add missing virtual environment commands

* Update Start-a-New-CrewAI-Project-Template-Method.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 04:12:04 -03:00
Astha Puri
0df4d2fd4b Update Tasks.md (#1279) 2024-09-07 04:05:56 -03:00
Rip&Tear
596491d932 Update readme.md (#1294)
* Update pyproject.toml

More GH link updates

* Added FAQ section in README.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 03:59:48 -03:00
Ali Waleed
72fb109147 Fix: Langtrace Docs (#1297)
* fix langtrace docs

* remove gif for size constraint
2024-09-07 03:58:27 -03:00
Brandon Hancock (bhancock_ai)
40b336d2a5 Brandon/cre 256 default template crew isnt running properly (#1299)
* Update config typecheck to accept agents

* Clean up prints
2024-09-07 03:57:36 -03:00
anmol-aidora
5958df71a2 Updated CrewAI Documentation and Repository link in tools.poetry.urls (#1305)
* Updated CrewAI Documentation and Repository link in tools.poetry.urls

* Update pyproject.toml

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 03:55:02 -03:00
Brandon Hancock (bhancock_ai)
26d9af8367 Brandon/cre 252 add agent to crewai test (#1308)
* Update config typecheck to accept agents

* Clean up prints

* Adding agents to crew evaluator output table

* Properly generating table now

* Update tests
2024-09-07 03:53:23 -03:00
Brandon Hancock (bhancock_ai)
cdaf2d41c7 move away from pydantic v1 (#1284) 2024-09-06 14:22:01 -04:00
Paul Nugent
d9ee104167 Merge pull request #1290 from crewAIInc/DOCS/readme_update
Docs/readme update
2024-09-06 16:18:31 +01:00
Rip&Tear
0b9eeb7cdb Revert "feat: Improve documentation for Conditional Tasks in crewAI"
This reverts commit 18a2722e4d.
2024-09-05 10:30:08 +08:00
Rip&Tear
9b558ddc51 Revert "docs: Improve "Creating and Utilizing Tools in crewAI" documentation"
This reverts commit b955416458.
2024-09-05 10:30:00 +08:00
Rip&Tear
b857afe45b Revert "feat: Improve documentation for TXTSearchTool"
This reverts commit d2fab55561.
2024-09-05 10:29:03 +08:00
Rip&Tear
1d77c8de10 feat: Improve documentation for TXTSearchTool
Updated wording positioning
2024-09-05 10:27:11 +08:00
Rip&Tear
503f3a6372 Update README.md
Updated  GitHub links to point to new Repos
2024-09-05 10:17:46 +08:00
Rip&Tear (aider)
d2fab55561 feat: Improve documentation for TXTSearchTool 2024-09-05 00:06:11 +08:00
Rip&Tear (aider)
b955416458 docs: Improve "Creating and Utilizing Tools in crewAI" documentation 2024-09-04 18:31:09 +08:00
Rip&Tear (aider)
18a2722e4d feat: Improve documentation for Conditional Tasks in crewAI 2024-09-04 18:26:12 +08:00
Rip&Tear
c7e8d55926 Merge pull request #1273 from Astha0024/main
Update README.md with default model
2024-09-01 00:43:40 +08:00
Astha Puri
48698bf0b7 Merge branch 'main' into main 2024-08-30 21:58:02 -04:00
Thiago Moretto
f79b3fc322 Merge pull request #1269 from crewAIInc/tm-fix-cli-for-py310
Add py 3.10 support back to CLI + fixes
2024-08-30 13:39:04 -03:00
Thiago Moretto
0b9e753c2f Add comment to warn about dro simple_toml_parser 2024-08-30 11:52:53 -03:00
Astha Puri
5b3f7be1c4 Update README.md 2024-08-30 06:55:31 -04:00
Astha Puri
f2208f5f8e Update README.md 2024-08-30 06:54:34 -04:00
João Moura
79b5248b83 preparing new version 2024-08-30 00:33:51 -03:00
João Moura
d4791bef28 updating deployment cli with 2024-08-30 00:32:18 -03:00
João Moura
d861cb0d74 updating docs 2024-08-30 00:15:06 -03:00
João Moura
67f19f79c2 removing base_model from telemetry 2024-08-29 23:35:05 -03:00
Thiago Moretto
5f359b14f7 Fix test 2024-08-29 15:58:47 -03:00
Thiago Moretto
cda1900b14 Read as str no bytes
+handle when project_name is None (fails, basically)
2024-08-29 15:17:51 -03:00
Thiago Moretto
c8c0a89dc6 Fix type checking + lint 2024-08-29 15:02:19 -03:00
Thiago Moretto
9a10cc15f4 Add python 3.10 support back to CLI +fixes 2024-08-29 14:37:34 -03:00
Thiago Moretto
345f1eacde Get current crewai version from poetry.lock 2024-08-29 11:14:04 -03:00
Thiago Moretto
fa937bf3a7 Add Python 3.10 support to CLI 2024-08-29 10:22:54 -03:00
mvanwyk
172758020c bug: fix incorrect mkdocs site_url (#1238)
* bug: fix incorrect mkdocs site_url

* bug: fix incorrect mkdocs repo_url

Co-authored-by: Eduardo Chiarotti <dudumelgaco@hotmail.com>

---------

Co-authored-by: Eduardo Chiarotti <dudumelgaco@hotmail.com>
2024-08-24 15:45:59 -03:00
Brandon Hancock (bhancock_ai)
5ff178084e Fix deployment name issue to support Azure (#1253)
* Fix deployment name issue to support Azure

* More carefully check atters on llm
2024-08-23 12:58:37 -04:00
Brandon Hancock (bhancock_ai)
c012e0ff8d Update async docs with more examples (#1254)
* Update async docs with more examples

* Add use cases
2024-08-23 12:51:58 -04:00
Eduardo Chiarotti
f777c1c2e0 fix: All files pre commit (#1249) 2024-08-23 10:52:36 -03:00
Paul Nugent
782ce22d99 Update LLM-Connections.md (#1190)
Added missing quotes around os.environ

Co-authored-by: Eduardo Chiarotti <dudumelgaco@hotmail.com>
2024-08-23 10:39:06 -03:00
Eduardo Chiarotti
f5246039e5 Feat/cli deploy (#1240)
* feat: set basic structure deploy commands

* feat: add first iteration of CLI Deploy

* feat: some minor refactor

* feat: Add api, Deploy command and update cli

* feat: Remove test token

* feat: add auth0 lib, update cli and improve code

* feat: update code and decouple auth

* fix: parts of the code

* feat: Add token manager to encrypt access token and get and save tokens

* feat: add audience to costants

* feat: add subsystem saving credentials and remove comment of type hinting

* feat: add get crew version to send on header of request

* feat: add docstrings

* feat: add tests for authentication module

* feat: add tests for utils

* feat: add unit tests for cl

* feat: add tests

* feat: add deploy man tests

* feat: fix type checking issue

* feat: rename tests to pass ci

* feat: fix pr issues

* feat: fix get crewai versoin

* fix: add timeout for tests.yml
2024-08-23 10:20:03 -03:00
212 changed files with 30767 additions and 1619847 deletions

View File

@@ -11,6 +11,7 @@ env:
jobs:
deploy:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout code

View File

@@ -8,11 +8,11 @@
<h3>
[Homepage](https://www.crewai.io/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Examples](https://github.com/joaomdmoura/crewai-examples) | [Discord](https://discord.com/invite/X4JWnZnxPb)
[Homepage](https://www.crewai.com/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Examples](https://github.com/crewAIInc/crewAI-examples) | [Discourse](https://community.crewai.com)
</h3>
[![GitHub Repo stars](https://img.shields.io/github/stars/joaomdmoura/crewAI)](https://github.com/joaomdmoura/crewAI)
[![GitHub Repo stars](https://img.shields.io/github/stars/joaomdmoura/crewAI)](https://github.com/crewAIInc/crewAI)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
</div>
@@ -64,24 +64,8 @@ from crewai_tools import SerperDevTool
os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
# You can choose to use a local model through Ollama for example. See https://docs.crewai.com/how-to/LLM-Connections/ for more information.
# os.environ["OPENAI_API_BASE"] = 'http://localhost:11434/v1'
# os.environ["OPENAI_MODEL_NAME"] ='openhermes' # Adjust based on available model
# os.environ["OPENAI_API_KEY"] ='sk-111111111111111111111111111111111111111111111111'
# You can pass an optional llm attribute specifying what model you wanna use.
# It can be a local model through Ollama / LM Studio or a remote
# model like OpenAI, Mistral, Antrophic or others (https://docs.crewai.com/how-to/LLM-Connections/)
#
# import os
# os.environ['OPENAI_MODEL_NAME'] = 'gpt-3.5-turbo'
#
# OR
#
# from langchain_openai import ChatOpenAI
search_tool = SerperDevTool()
# Define your agents with roles and goals
researcher = Agent(
@@ -94,7 +78,7 @@ researcher = Agent(
allow_delegation=False,
# You can pass an optional llm attribute specifying what model you wanna use.
# llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7),
tools=[search_tool]
tools=[SerperDevTool()]
)
writer = Agent(
role='Tech Content Strategist',
@@ -153,12 +137,12 @@ In addition to the sequential process, you can use the hierarchical process, whi
## Examples
You can test different real life examples of AI crews in the [crewAI-examples repo](https://github.com/joaomdmoura/crewAI-examples?tab=readme-ov-file):
You can test different real life examples of AI crews in the [crewAI-examples repo](https://github.com/crewAIInc/crewAI-examples?tab=readme-ov-file):
- [Landing Page Generator](https://github.com/joaomdmoura/crewAI-examples/tree/main/landing_page_generator)
- [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/landing_page_generator)
- [Having Human input on the execution](https://docs.crewai.com/how-to/Human-Input-on-Execution)
- [Trip Planner](https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner)
- [Stock Analysis](https://github.com/joaomdmoura/crewAI-examples/tree/main/stock_analysis)
- [Trip Planner](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner)
- [Stock Analysis](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis)
### Quick Tutorial
@@ -166,19 +150,19 @@ You can test different real life examples of AI crews in the [crewAI-examples re
### Write Job Descriptions
[Check out code for this example](https://github.com/joaomdmoura/crewAI-examples/tree/main/job-posting) or watch a video below:
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/job-posting) or watch a video below:
[![Jobs postings](https://img.youtube.com/vi/u98wEMz-9to/maxresdefault.jpg)](https://www.youtube.com/watch?v=u98wEMz-9to "Jobs postings")
### Trip Planner
[Check out code for this example](https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner) or watch a video below:
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner) or watch a video below:
[![Trip Planner](https://img.youtube.com/vi/xis7rWp-hjs/maxresdefault.jpg)](https://www.youtube.com/watch?v=xis7rWp-hjs "Trip Planner")
### Stock Analysis
[Check out code for this example](https://github.com/joaomdmoura/crewAI-examples/tree/main/stock_analysis) or watch a video below:
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis) or watch a video below:
[![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/maxresdefault.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")
@@ -190,13 +174,12 @@ Please refer to the [Connect crewAI to LLMs](https://docs.crewai.com/how-to/LLM-
## How CrewAI Compares
**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.
- **Autogen**: While Autogen does good in creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.
## Contribution
CrewAI is open-source and we welcome contributions. If you're looking to contribute, please:
@@ -284,3 +267,39 @@ Users can opt-in to Further Telemetry, sharing the complete telemetry data by se
## License
CrewAI is released under the MIT License.
## Frequently Asked Questions (FAQ)
### Q: What is CrewAI?
A: CrewAI is a cutting-edge framework for orchestrating role-playing, autonomous AI agents. It enables agents to work together seamlessly, tackling complex tasks through collaborative intelligence.
### Q: How do I install CrewAI?
A: You can install CrewAI using pip:
```shell
pip install crewai
```
For additional tools, use:
```shell
pip install 'crewai[tools]'
```
### Q: Can I use CrewAI with local models?
A: Yes, CrewAI supports various LLMs, including local models. You can configure your agents to use local models via tools like Ollama & LM Studio. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details.
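As a rough sketch (not an official snippet), local models can be wired up through the same OpenAI-compatible environment variables shown in the README example above; the base URL and model name below assume a default local Ollama install.

```python
import os

from crewai import Agent

# Assumed defaults for a local Ollama server exposing an OpenAI-compatible API.
os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"
os.environ["OPENAI_MODEL_NAME"] = "openhermes"  # any model you have pulled locally
os.environ["OPENAI_API_KEY"] = "sk-not-needed-for-local"  # placeholder value

local_researcher = Agent(
    role="Researcher",
    goal="Summarize a topic using a locally hosted model",
    backstory="You run entirely against a local LLM.",
)
```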
### Q: What are the key features of CrewAI?
A: Key features include role-based agent design, autonomous inter-agent delegation, flexible task management, process-driven execution, output saving as files, and compatibility with both open-source and proprietary models.
### Q: How does CrewAI compare to other AI orchestration tools?
A: CrewAI is designed with production in mind, offering flexibility similar to Autogen's conversational agents and structured processes like ChatDev, but with more adaptability for real-world applications.
### Q: Is CrewAI open-source?
A: Yes, CrewAI is open-source and welcomes contributions from the community.
### Q: Does CrewAI collect any data?
A: CrewAI uses anonymous telemetry to collect usage data for improvement purposes. No sensitive data (like prompts, task descriptions, or API calls) is collected. Users can opt-in to share more detailed data by setting `share_crew=True` on their Crews.
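Opting in is a single flag on the crew; a minimal sketch, assuming agents and tasks are already defined:

```python
from crewai import Crew

# share_crew=True opts in to sharing complete crew execution data.
my_crew = Crew(
    agents=[...],  # previously defined agents
    tasks=[...],   # previously defined tasks
    share_crew=True,
)
```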
### Q: Where can I find examples of CrewAI in action?
A: You can find various real-life examples in the [crewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.
### Q: How can I contribute to CrewAI?
A: Contributions are welcome! You can fork the repository, create a new branch for your feature, add your improvement, and send a pull request. Check the Contribution section in the README for more details.

Binary file not shown. Before: 1.0 MiB

Binary file not shown. Before: 810 KiB

BIN docs/assets/langtrace1.png Normal file. Binary file not shown. After: 223 KiB

BIN docs/assets/langtrace2.png Normal file. Binary file not shown. After: 204 KiB

BIN docs/assets/langtrace3.png Normal file. Binary file not shown. After: 295 KiB

View File

@@ -11,31 +11,34 @@ description: What are crewAI Agents and how to use them.
<li class='leading-3'>Make decisions</li>
<li class='leading-3'>Communicate with other agents</li>
</ul>
<br/>
<br/>
Think of an agent as a member of a team, with specific skills and a particular job to do. Agents can have different roles like 'Researcher', 'Writer', or 'Customer Support', each contributing to the overall goal of the crew.
## Agent Attributes
| Attribute | Parameter | Description |
| :------------------------- | :---- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Role** | `role` | Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for. |
| **Goal** | `goal` | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |
| **Backstory** | `backstory` | Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics. |
| **LLM** *(optional)* | `llm` | Represents the language model that will run the agent. It dynamically fetches the model name from the `OPENAI_MODEL_NAME` environment variable, defaulting to "gpt-4" if not specified. |
| **Tools** *(optional)* | `tools` | Set of capabilities or functions that the agent can use to perform tasks. Expected to be instances of custom classes compatible with the agent's execution environment. Tools are initialized with a default value of an empty list. |
| :------------------------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Role** | `role` | Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for. |
| **Goal** | `goal` | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |
| **Backstory** | `backstory`| Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics. |
| **LLM** *(optional)* | `llm` | Represents the language model that will run the agent. It dynamically fetches the model name from the `OPENAI_MODEL_NAME` environment variable, defaulting to "gpt-4" if not specified. |
| **Tools** *(optional)* | `tools` | Set of capabilities or functions that the agent can use to perform tasks. Expected to be instances of custom classes compatible with the agent's execution environment. Tools are initialized with a default value of an empty list. |
| **Function Calling LLM** *(optional)* | `function_calling_llm` | Specifies the language model that will handle the tool calling for this agent, overriding the crew function calling LLM if passed. Default is `None`. |
| **Max Iter** *(optional)* | `max_iter` | Max Iter is the maximum number of iterations the agent can perform before being forced to give its best answer. Default is `25`. |
| **Max RPM** *(optional)* | `max_rpm` | Max RPM is the maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. |
| **Max Execution Time** *(optional)* | `max_execution_time` | Max Execution Time is the maximum execution time for an agent to execute a task. It's optional and can be left unspecified, with a default value of `None`, meaning no max execution time. |
| **Verbose** *(optional)* | `verbose` | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `True`. |
| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `False`. |
| **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
| **Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
| **System Template** *(optional)* | `system_template` | Specifies the system format for the agent. Default is `None`. |
| **Prompt Template** *(optional)* | `prompt_template` | Specifies the prompt format for the agent. Default is `None`. |
| **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. |
| **Allow Code Execution** *(optional)* | `allow_code_execution` | Enable code execution for the agent. Default is `False`. |
| **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`. |
| **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`. |
| **Use Stop Words** *(optional)* | `use_stop_words` | Adds the ability to not use stop words (to support o1 models). Default is `True`. |
| **Use System Prompt** *(optional)* | `use_system_prompt` | Adds the ability to not use system prompt (to support o1 models). Default is `True`. |
| **Respect Context Window** *(optional)* | `respect_context_window` | Summary strategy to avoid overflowing the context window. Default is `True`. |
## Creating an Agent
@@ -63,7 +66,7 @@ agent = Agent(
max_rpm=None, # Optional
max_execution_time=None, # Optional
verbose=True, # Optional
allow_delegation=True, # Optional
allow_delegation=False, # Optional
step_callback=my_intermediate_step_callback, # Optional
cache=True, # Optional
system_template=my_system_template, # Optional
@@ -74,8 +77,11 @@ agent = Agent(
tools_handler=my_tools_handler, # Optional
cache_handler=my_cache_handler, # Optional
callbacks=[callback1, callback2], # Optional
allow_code_execution=True, # Optiona
allow_code_execution=True, # Optional
max_retry_limit=2, # Optional
use_stop_words=True, # Optional
use_system_prompt=True, # Optional
respect_context_window=True, # Optional
)
```
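For orientation, a stripped-down sketch of the same constructor, keeping only the required fields plus the new o1-related flags from the attribute table (the role, goal, and backstory values are illustrative):

```python
from crewai import Agent

# Minimal agent: only role, goal, and backstory are required.
# The three flags below are the newly documented options, shown with their defaults.
agent = Agent(
    role="Data Analyst",
    goal="Extract actionable insights",
    backstory="You love digging into data and explaining what it means.",
    use_stop_words=True,          # set to False to support o1-style models
    use_system_prompt=True,       # set to False to skip the system prompt
    respect_context_window=True,  # summarize to avoid overflowing the context window
)
```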
@@ -105,7 +111,7 @@ agent = Agent(
BaseAgent includes attributes and methods required to integrate with your crews to run and delegate tasks to other agents within your own crew.
CrewAI is a universal multi agent framework that allows for all agents to work together to automate tasks and solve problems.
CrewAI is a universal multi-agent framework that allows for all agents to work together to automate tasks and solve problems.
```py

View File

@@ -27,7 +27,7 @@ The `Crew` class has been enriched with several attributes to support advanced f
- **Memory Usage (`memory`)**: Indicates whether the crew should use memory to store memories of its execution, enhancing task execution and agent learning.
- **Embedder Configuration (`embedder`)**: Specifies the configuration for the embedder to be used by the crew for understanding and generating language. This attribute supports customization of the language model provider.
- **Cache Management (`cache`)**: Determines whether the crew should use a cache to store the results of tool executions, optimizing performance.
- **Output Logging (`output_log_file`)**: Specifies the file path for logging the output of the crew execution.
- **Output Logging (`output_log_file`)**: Specifies the file path for logging the output of the crew's execution.
- **Planning Mode (`planning`)**: Allows crews to plan their actions before executing tasks by setting `planning=True` when creating the `Crew` instance. This feature enhances coordination and efficiency.
- **Replay Feature**: Introduces a new CLI for listing tasks from the last run and replaying from a specific task, enhancing task management and troubleshooting.
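A hedged sketch pulling these attributes together (agents and tasks are assumed to be defined elsewhere; the log file name is only an example):

```python
from crewai import Crew, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,                           # store execution memories
    embedder={"provider": "openai"},       # embedder configuration used by memory
    cache=True,                            # cache tool execution results
    output_log_file="crew_execution.log",  # example path for output logging
    planning=True,                         # plan actions before executing tasks
)
```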

View File

@@ -13,18 +13,18 @@ A crew in crewAI represents a collaborative group of agents working together to
| :------------------------------------ | :--------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Tasks** | `tasks` | A list of tasks assigned to the crew. |
| **Agents** | `agents` | A list of agents that are part of the crew. |
| **Process** _(optional)_ | `process` | The process flow (e.g., sequential, hierarchical) the crew follows. |
| **Verbose** _(optional)_ | `verbose` | The verbosity level for logging during execution. |
| **Process** _(optional)_ | `process` | The process flow (e.g., sequential, hierarchical) the crew follows. Default is `sequential`. |
| **Verbose** _(optional)_ | `verbose` | The verbosity level for logging during execution. Defaults to `False`. |
| **Manager LLM** _(optional)_ | `manager_llm` | The language model used by the manager agent in a hierarchical process. **Required when using a hierarchical process.** |
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** _(optional)_ | `config` | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. |
| **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
| **Language** _(optional)_ | `language` | Language used for the crew, defaults to English. |
| **Language File** _(optional)_ | `language_file` | Path to the language file to be used for the crew. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. |
| **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). Defaults to `False`. |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
| **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. Defaults to `False`. |
| **Step Callback** _(optional)_ | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** _(optional)_ | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
@@ -38,65 +38,6 @@ A crew in crewAI represents a collaborative group of agents working together to
!!! note "Crew Max RPM"
The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
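For example (a sketch with an arbitrary value), a crew-level `max_rpm` caps every agent regardless of their own settings:

```python
from crewai import Crew

# Crew-level max_rpm takes precedence over any agent-level max_rpm.
rate_limited_crew = Crew(
    agents=[...],
    tasks=[...],
    max_rpm=10,
)
```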
## Creating a Crew
When assembling a crew, you combine agents with complementary roles and tools, assign tasks, and select a process that dictates their execution order and interaction.
### Example: Assembling a Crew
```python
from crewai import Crew, Agent, Task, Process
from langchain_community.tools import DuckDuckGoSearchRun
from crewai_tools import tool
@tool('DuckDuckGoSearch')
def search(search_query: str):
"""Search the web for information on a given topic"""
return DuckDuckGoSearchRun().run(search_query)
# Define agents with specific roles and tools
researcher = Agent(
role='Senior Research Analyst',
goal='Discover innovative AI technologies',
backstory="""You're a senior research analyst at a large company.
You're responsible for analyzing data and providing insights
to the business.
You're currently working on a project to analyze the
trends and innovations in the space of artificial intelligence.""",
tools=[search]
)
writer = Agent(
role='Content Writer',
goal='Write engaging articles on AI discoveries',
backstory="""You're a senior writer at a large company.
You're responsible for creating content to the business.
You're currently working on a project to write about trends
and innovations in the space of AI for your next meeting.""",
verbose=True
)
# Create tasks for the agents
research_task = Task(
description='Identify breakthrough AI technologies',
agent=researcher,
expected_output='A bullet list summary of the top 5 most important AI news'
)
write_article_task = Task(
description='Draft an article on the latest AI technologies',
agent=writer,
expected_output='3 paragraph blog post on the latest AI technologies'
)
# Assemble the crew with a sequential process
my_crew = Crew(
agents=[researcher, writer],
tasks=[research_task, write_article_task],
process=Process.sequential,
full_output=True,
verbose=True,
)
```
## Crew Output

View File

@@ -4,16 +4,17 @@ description: Leveraging memory systems in the crewAI framework to enhance agent
---
## Introduction to Memory Systems in crewAI
!!! note "Enhancing Agent Intelligence"
The crewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents. This system comprises short-term memory, long-term memory, entity memory, and contextual memory, each serving a unique purpose in aiding agents to remember, reason, and learn from past interactions.
## Memory System Components
| Component | Description |
| :------------------- | :----------------------------------------------------------- |
| **Short-Term Memory**| Temporarily stores recent interactions and outcomes, enabling agents to recall and utilize information relevant to their current context during the current executions. |
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. So Agents can remember what they did right and wrong across multiple executions |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. |
| Component | Description |
| :------------------- | :---------------------------------------------------------------------------------------------------------------------- |
| **Short-Term Memory**| Temporarily stores recent interactions and outcomes using `RAG`, enabling agents to recall and utilize information relevant to their current context during the current executions.|
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
| **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
## How Memory Systems Empower Agents
@@ -27,12 +28,12 @@ description: Leveraging memory systems in the crewAI framework to enhance agent
## Implementing Memory in Your Crew
When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of tasks it will perform.
By default, the memory system is disabled, and you can ensure it is active by setting `memory=True` in the crew configuration. The memory will use OpenAI Embeddings by default, but you can change it by setting `embedder` to a different model.
By default, the memory system is disabled, and you can ensure it is active by setting `memory=True` in the crew configuration. The memory will use OpenAI embeddings by default, but you can change it by setting `embedder` to a different model.
The 'embedder' only applies to **Short-Term Memory** which uses Chroma for RAG using EmbedChain package.
The **Long-Term Memory** uses SQLLite3 to store task results. Currently, there is no way to override these storage implementations.
The data storage files are saved into a platform specific location found using the appdirs package
and the name of the project which can be overridden using the **CREWAI_STORAGE_DIR** environment variable.
The 'embedder' only applies to **Short-Term Memory** which uses Chroma for RAG using the EmbedChain package.
The **Long-Term Memory** uses SQLite3 to store task results. Currently, there is no way to override these storage implementations.
The data storage files are saved into a platform-specific location found using the appdirs package,
and the name of the project can be overridden using the **CREWAI_STORAGE_DIR** environment variable.
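A small sketch of overriding the storage location before the crew is created (the directory name is only an example):

```python
import os

# Override the default appdirs-based location for memory storage files.
os.environ["CREWAI_STORAGE_DIR"] = "./crew_storage"

from crewai import Crew

my_crew = Crew(agents=[...], tasks=[...], memory=True)
```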
### Example: Configuring Memory for a Crew
@@ -56,17 +57,17 @@ my_crew = Crew(
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "openai",
"config":{
"model": 'text-embedding-3-small'
}
}
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "openai",
"config": {
"model": 'text-embedding-3-small'
}
}
)
```
@@ -75,19 +76,19 @@ my_crew = Crew(
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "google",
"config":{
"model": 'models/embedding-001',
"task_type": "retrieval_document",
"title": "Embeddings for Embedchain"
}
}
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "google",
"config": {
"model": 'models/embedding-001',
"task_type": "retrieval_document",
"title": "Embeddings for Embedchain"
}
}
)
```
@@ -96,18 +97,18 @@ my_crew = Crew(
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "azure_openai",
"config":{
"model": 'text-embedding-ada-002',
"deployment_name": "your_embedding_model_deployment_name"
}
}
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "azure_openai",
"config": {
"model": 'text-embedding-ada-002',
"deployment_name": "your_embedding_model_deployment_name"
}
}
)
```
@@ -116,14 +117,14 @@ my_crew = Crew(
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "gpt4all"
}
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "gpt4all"
}
)
```
@@ -132,17 +133,17 @@ my_crew = Crew(
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "vertexai",
"config":{
"model": 'textembedding-gecko'
}
}
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "vertexai",
"config": {
"model": 'textembedding-gecko'
}
}
)
```
@@ -151,18 +152,18 @@ my_crew = Crew(
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "cohere",
"config":{
"model": "embed-english-v3.0",
"vector_dimension": 1024
}
}
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "cohere",
"config": {
"model": "embed-english-v3.0",
"vector_dimension": 1024
}
}
)
```

View File

@@ -12,7 +12,7 @@ A pipeline in crewAI represents a structured workflow that allows for the sequen
Understanding the following terms is crucial for working effectively with pipelines:
- **Stage**: A distinct part of the pipeline, which can be either sequential (a single crew) or parallel (multiple crews executing concurrently).
- **Run**: A specific execution of the pipeline for a given set of inputs, representing a single instance of processing through the pipeline.
- **Kickoff**: A specific execution of the pipeline for a given set of inputs, representing a single instance of processing through the pipeline.
- **Branch**: Parallel executions within a stage (e.g., concurrent crew operations).
- **Trace**: The journey of an individual input through the entire pipeline, capturing the path and transformations it undergoes.
@@ -28,13 +28,13 @@ This represents a pipeline with three stages:
2. A parallel stage with two branches (crew2 and crew3 executing concurrently)
3. Another sequential stage (crew4)
Each input creates its own run, flowing through all stages of the pipeline. Multiple runs can be processed concurrently, each following the defined pipeline structure.
Each input creates its own kickoff, flowing through all stages of the pipeline. Multiple kickoffs can be processed concurrently, each following the defined pipeline structure.
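To make that structure concrete, a minimal sketch (the crew variable names are placeholders for `Crew` instances defined elsewhere):

```python
from crewai import Pipeline

my_pipeline = Pipeline(
    stages=[
        research_crew,                 # sequential stage
        [analysis_crew, review_crew],  # parallel stage with two branches
        report_crew,                   # sequential stage
    ]
)
```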
## Pipeline Attributes
| Attribute | Parameters | Description |
| :--------- | :--------- | :------------------------------------------------------------------------------------ |
| **Stages** | `stages` | A list of crews, lists of crews, or routers representing the stages to be executed in sequence. |
| Attribute | Parameters | Description |
| :--------- | :---------- | :----------------------------------------------------------------------------------------------------------------- |
| **Stages** | `stages` | A list of `PipelineStage` (crews, lists of crews, or routers) representing the stages to be executed in sequence. |
## Creating a Pipeline
@@ -43,7 +43,7 @@ When creating a pipeline, you define a series of stages, each consisting of eith
### Example: Assembling a Pipeline
```python
from crewai import Crew, Agent, Task, Pipeline
from crewai import Crew, Process, Pipeline
# Define your crews
research_crew = Crew(
@@ -74,7 +74,8 @@ my_pipeline = Pipeline(
| Method | Description |
| :--------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **process_runs** | Executes the pipeline, processing all stages and returning the results. This method initiates one or more runs through the pipeline, handling the flow of data between stages. |
| **kickoff** | Executes the pipeline, processing all stages and returning the results. This method initiates one or more kickoffs through the pipeline, handling the flow of data between stages. |
| **process_runs** | Runs the pipeline for each input provided, handling the flow and transformation of data between stages. |
## Pipeline Output
@@ -99,12 +100,12 @@ The output of a pipeline in the crewAI framework is encapsulated within the `Pip
| Attribute | Parameters | Type | Description |
| :---------------- | :-------------- | :------------------------- | :-------------------------------------------------------------------------------------------- |
| **ID** | `id` | `UUID4` | A unique identifier for the run result. |
| **Raw** | `raw` | `str` | The raw output of the final stage in the pipeline run. |
| **Pydantic** | `pydantic` | `Optional[BaseModel]` | A Pydantic model object representing the structured output of the final stage, if applicable. |
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the final stage, if applicable. |
| **Token Usage** | `token_usage` | `Dict[str, Any]` | A summary of token usage across all stages of the pipeline run. |
| **Trace** | `trace` | `List[Any]` | A trace of the journey of inputs through the pipeline run. |
| **Crews Outputs** | `crews_outputs` | `List[CrewOutput]` | A list of `CrewOutput` objects, representing the outputs from each crew in the pipeline run. |
| **Raw** | `raw` | `str` | The raw output of the final stage in the pipeline kickoff. |
| **Pydantic** | `pydantic` | `Any` | A Pydantic model object representing the structured output of the final stage, if applicable. |
| **JSON Dict** | `json_dict` | `Union[Dict[str, Any], None]` | A dictionary representing the JSON output of the final stage, if applicable. |
| **Token Usage** | `token_usage` | `Dict[str, UsageMetrics]` | A summary of token usage across all stages of the pipeline kickoff. |
| **Trace** | `trace` | `List[Any]` | A trace of the journey of inputs through the pipeline kickoff. |
| **Crews Outputs** | `crews_outputs` | `List[CrewOutput]` | A list of `CrewOutput` objects, representing the outputs from each crew in the pipeline kickoff. |
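A rough sketch of reading these fields, following the synchronous call style used in the router example below and assuming `kickoff` returns one result object per input:

```python
# my_pipeline and inputs are assumed to be defined as in the examples.
results = my_pipeline.kickoff(inputs=[{"query": "example input"}])

for result in results:
    print(result.raw)            # raw output of the final stage
    if result.json_dict:
        print(result.json_dict)  # JSON output of the final stage, if any
    print(result.trace)          # journey of the input through the pipeline
    for crew_output in result.crews_outputs:
        print(crew_output.raw)   # output of each crew in the kickoff
```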
### Pipeline Run Result Methods and Properties
@@ -112,7 +113,7 @@ The output of a pipeline in the crewAI framework is encapsulated within the `Pip
| :-------------- | :------------------------------------------------------------------------------------------------------- |
| **json** | Returns the JSON string representation of the run result if the output format of the final task is JSON. |
| **to_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
| \***\*str\*\*** | Returns the string representation of the run result, prioritizing Pydantic, then JSON, then raw. |
| **str** | Returns the string representation of the run result, prioritizing Pydantic, then JSON, then raw. |
### Accessing Pipeline Outputs
@@ -239,7 +240,7 @@ email_router = Router(
pipeline=normal_pipeline
)
},
default=Pipeline(stages=[normal_pipeline]) # Default to just classification if no urgency score
default=Pipeline(stages=[normal_pipeline]) # Default to just normal if no urgency score
)
# Use the router in a main pipeline
@@ -247,7 +248,7 @@ main_pipeline = Pipeline(stages=[classification_crew, email_router])
inputs = [{"email": "..."}, {"email": "..."}] # List of email data
main_pipeline.kickoff(inputs=inputs)
main_pipeline.kickoff(inputs=inputs)
```
In this example, the router decides between an urgent pipeline and a normal pipeline based on the urgency score of the email. If the urgency score is greater than 7, it routes to the urgent pipeline; otherwise, it uses the normal pipeline. If the input doesn't include an urgency score, it defaults to just the classification crew.
@@ -261,7 +262,7 @@ In this example, the router decides between an urgent pipeline and a normal pipe
### Error Handling and Validation
The Pipeline class includes validation mechanisms to ensure the robustness of the pipeline structure:
The `Pipeline` class includes validation mechanisms to ensure the robustness of the pipeline structure:
- Validates that stages contain only Crew instances or lists of Crew instances.
- Prevents double nesting of stages to maintain a clear structure.
- Prevents double nesting of stages to maintain a clear structure.

View File

@@ -43,7 +43,7 @@ my_crew = Crew(
### Example
When running the base case example, you will see something like the following output, which represents the output of the AgentPlanner responsible for creating the step-by-step logic to add to the Agents tasks.
When running the base case example, you will see something like the following output, which represents the output of the AgentPlanner responsible for creating the step-by-step logic to add to the Agents' tasks.
```
[2024-07-15 16:49:11][INFO]: Planning the crew execution
@@ -96,7 +96,7 @@ A list with 10 bullet points of the most relevant information about AI LLMs.
**Agent Goal:** Create detailed reports based on AI LLMs data analysis and research findings
**Task Expected Output:** A fully fledge report with the main topics, each with a full section of information. Formatted as markdown without '```'
**Task Expected Output:** A fully fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'
**Task Tools:** None specified
@@ -130,5 +130,4 @@ A list with 10 bullet points of the most relevant information about AI LLMs.
- Double-check formatting and make any necessary adjustments.
**Expected Output:**
A fully-fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'.
```
A fully fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'.

View File

@@ -1,3 +1,4 @@
```markdown
---
title: crewAI Tasks
description: Detailed guide on managing and creating tasks within the crewAI framework, reflecting the latest codebase updates.
@@ -12,22 +13,22 @@ Tasks within crewAI can be collaborative, requiring multiple agents to work toge
## Task Attributes
| Attribute | Parameters | Description |
| :------------------------------- | :---------------- | :------------------------------------------------------------------------------------------------------------------- |
| **Description** | `description` | A clear, concise statement of what the task entails. |
| **Agent** | `agent` | The agent responsible for the task, assigned either directly or by the crew's process. |
| **Expected Output** | `expected_output` | A detailed description of what the task's completion looks like. |
| **Tools** _(optional)_ | `tools` | The functions or capabilities the agent can utilize to perform the task. Defaults to an empty list. |
| **Async Execution** _(optional)_ | `async_execution` | If set, the task executes asynchronously, allowing progression without waiting for completion. Defaults to False. |
| **Context** _(optional)_ | `context` | Specifies tasks whose outputs are used as context for this task. |
| **Config** _(optional)_ | `config` | Additional configuration details for the agent executing the task, allowing further customization. Defaults to None. |
| **Output JSON** _(optional)_ | `output_json` | Outputs a JSON object, requiring an OpenAI client. Only one output format can be set. |
| **Output Pydantic** _(optional)_ | `output_pydantic` | Outputs a Pydantic model object, requiring an OpenAI client. Only one output format can be set. |
| **Output File** _(optional)_ | `output_file` | Saves the task output to a file. If used with `Output JSON` or `Output Pydantic`, specifies how the output is saved. |
| **Output** _(optional)_ | `output` | An instance of `TaskOutput`, containing the raw, JSON, and Pydantic output plus additional details. |
| **Callback** _(optional)_ | `callback` | A callable that is executed with the task's output upon completion. |
| **Human Input** _(optional)_ | `human_input` | Indicates if the task requires human feedback at the end, useful for tasks needing human oversight. Defaults to False.|
| **Converter Class** _(optional)_ | `converter_cls` | A converter class used to export structured output. Defaults to None. |
| Attribute | Parameters | Type | Description |
| :------------------------------- | :---------------- | :---------------------------- | :------------------------------------------------------------------------------------------------------------------- |
| **Description** | `description` | `str` | A clear, concise statement of what the task entails. |
| **Agent** | `agent` | `Optional[BaseAgent]` | The agent responsible for the task, assigned either directly or by the crew's process. |
| **Expected Output** | `expected_output` | `str` | A detailed description of what the task's completion looks like. |
| **Tools** _(optional)_ | `tools` | `Optional[List[Any]]` | The functions or capabilities the agent can utilize to perform the task. Defaults to an empty list. |
| **Async Execution** _(optional)_ | `async_execution` | `Optional[bool]` | If set, the task executes asynchronously, allowing progression without waiting for completion. Defaults to False. |
| **Context** _(optional)_ | `context` | `Optional[List["Task"]]` | Specifies tasks whose outputs are used as context for this task. |
| **Config** _(optional)_ | `config` | `Optional[Dict[str, Any]]` | Additional configuration details for the agent executing the task, allowing further customization. Defaults to None. |
| **Output JSON** _(optional)_ | `output_json` | `Optional[Type[BaseModel]]` | Outputs a JSON object, requiring an OpenAI client. Only one output format can be set. |
| **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | Outputs a Pydantic model object, requiring an OpenAI client. Only one output format can be set. |
| **Output File** _(optional)_ | `output_file` | `Optional[str]` | Saves the task output to a file. If used with `Output JSON` or `Output Pydantic`, specifies how the output is saved. |
| **Output** _(optional)_ | `output` | `Optional[TaskOutput]` | An instance of `TaskOutput`, containing the raw, JSON, and Pydantic output plus additional details. |
| **Callback** _(optional)_ | `callback` | `Optional[Any]` | A callable that is executed with the task's output upon completion. |
| **Human Input** _(optional)_ | `human_input` | `Optional[bool]` | Indicates if the task should involve human review at the end, useful for tasks needing human oversight. Defaults to False.|
| **Converter Class** _(optional)_ | `converter_cls` | `Optional[Type[Converter]]` | A converter class used to export structured output. Defaults to None. |
## Creating a Task
@@ -49,28 +50,28 @@ Directly specify an `agent` for assignment or let the `hierarchical` CrewAI's pr
## Task Output
!!! note "Understanding Task Outputs"
The output of a task in the crewAI framework is encapsulated within the `TaskOutput` class. This class provides a structured way to access results of a task, including various formats such as raw strings, JSON, and Pydantic models.
The output of a task in the crewAI framework is encapsulated within the `TaskOutput` class. This class provides a structured way to access results of a task, including various formats such as raw output, JSON, and Pydantic models.
By default, the `TaskOutput` will only include the `raw` output. A `TaskOutput` will only include the `pydantic` or `json_dict` output if the original `Task` object was configured with `output_pydantic` or `output_json`, respectively.
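As a sketch of how those fields get populated, a task configured with `output_pydantic` might look like this (the model and field names are made up for illustration, and `research_agent` is assumed to be defined as in the example further below):

```python
from typing import List

from pydantic import BaseModel
from crewai import Task

class ResearchSummary(BaseModel):
    topic: str
    key_points: List[str]

summary_task = Task(
    description="Summarize the latest findings on AI LLMs",
    expected_output="A topic name plus a short list of key points",
    agent=research_agent,             # assumed to be defined elsewhere
    output_pydantic=ResearchSummary,  # fills task.output.pydantic after kickoff
)

# After crew.kickoff(), the structured result is available on the task output:
# summary_task.output.raw       -> raw text
# summary_task.output.pydantic  -> ResearchSummary instance
```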
### Task Output Attributes
| Attribute | Parameters | Type | Description |
| :---------------- | :-------------- | :------------------------- | :------------------------------------------------------------------------------------------------- |
| **Description** | `description` | `str` | A brief description of the task. |
| **Summary** | `summary` | `Optional[str]` | A short summary of the task, auto-generated from the first 10 words of the description. |
| **Description** | `description` | `str` | Description of the task. |
| **Summary** | `summary` | `Optional[str]` | Summary of the task, auto-generated from the first 10 words of the description. |
| **Raw** | `raw` | `str` | The raw output of the task. This is the default format for the output. |
| **Pydantic** | `pydantic` | `Optional[BaseModel]` | A Pydantic model object representing the structured output of the task. |
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the task. |
| **Agent** | `agent` | `str` | The agent that executed the task. |
| **Output Format** | `output_format` | `OutputFormat` | The format of the task output, with options including RAW, JSON, and Pydantic. The default is RAW. |
### Task Output Methods and Properties
### Task Methods and Properties
| Method/Property | Description |
| :-------------- | :------------------------------------------------------------------------------------------------ |
| **json** | Returns the JSON string representation of the task output if the output format is JSON. |
| **to_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
| \***\*str\*\*** | Returns the string representation of the task output, prioritizing Pydantic, then JSON, then raw. |
| **str** | Returns the string representation of the task output, prioritizing Pydantic, then JSON, then raw. |
### Accessing Task Outputs
@@ -131,6 +132,7 @@ research_agent = Agent(
verbose=True
)
# to perform a semantic search for a specified query from a text's content across the internet
search_tool = SerperDevTool()
task = Task(
@@ -233,7 +235,7 @@ def callback_function(output: TaskOutput):
print(f"""
Task completed!
Task: {output.description}
Output: {output.raw_output}
Output: {output.raw}
""")
research_task = Task(
@@ -274,7 +276,7 @@ result = crew.kickoff()
print(f"""
Task completed!
Task: {task1.output.description}
Output: {task1.output.raw_output}
Output: {task1.output.raw}
""")
```

View File

@@ -9,7 +9,7 @@ Testing is a crucial part of the development process, and it is essential to ens
### Using the Testing Feature
We added the CLI command `crewai test` to make it easy to test your crew. This command will run your crew for a specified number of iterations and provide detailed performance metrics. The parameters are `n_iterations` and `model` which are optional and default to 2 and `gpt-4o-mini` respectively. For now, the only provider available is OpenAI.
We added the CLI command `crewai test` to make it easy to test your crew. This command will run your crew for a specified number of iterations and provide detailed performance metrics. The parameters are `n_iterations` and `model`, which are optional and default to 2 and `gpt-4o-mini` respectively. For now, the only provider available is OpenAI.
```bash
crewai test
@@ -21,20 +21,36 @@ If you want to run more iterations or use a different model, you can specify the
crewai test --n_iterations 5 --model gpt-4o
```
or using the short forms:
```bash
crewai test -n 5 -m gpt-4o
```
When you run the `crewai test` command, the crew will be executed for the specified number of iterations, and the performance metrics will be displayed at the end of the run.
A table of scores at the end will show the performance of the crew in terms of the following metrics:
```
Task Scores
(1-10 Higher is better)
┏━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━┓
┃ Tasks/Crew ┃ Run 1 ┃ Run 2 ┃ Avg. Total ┃
┡━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━┩
│ Task 1     │ 10.0  │ 9.0   │ 9.5        │
│ Task 2     │ 9.0   │ 9.0   │ 9.0        │
│ Crew       │ 9.5   │ 9.0   │ 9.2        │
└────────────┴───────┴───────┴────────────┘
Tasks Scores
(1-10 Higher is better)
┏━━━━━━━━━━━━━━━━━━━━┯━━━━━━━┯━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Tasks/Crew/Agents  │ Run 1 │ Run 2 │ Avg. Total │ Agents                         ┃
┠────────────────────┼───────┼───────┼────────────┼────────────────────────────────┨
┃ Task 1             │ 9.0   │ 9.5   │ 9.2        │ - Professional Insights        ┃
┃                    │       │       │            │   Researcher                   ┃
┃                    │       │       │            │                                ┃
┃ Task 2             │ 9.0   │ 10.0  │ 9.5        │ - Company Profile Investigator ┃
┃                    │       │       │            │                                ┃
┃ Task 3             │ 9.0   │ 9.0   │ 9.0        │ - Automation Insights          ┃
┃                    │       │       │            │   Specialist                   ┃
┃                    │       │       │            │                                ┃
┃ Task 4             │ 9.0   │ 9.0   │ 9.0        │ - Final Report Compiler        ┃
┃                    │       │       │            │ - Automation Insights          ┃
┃                    │       │       │            │   Specialist                   ┃
┃ Crew               │ 9.00  │ 9.38  │ 9.2        │                                ┃
┃ Execution Time (s) │ 126   │ 145   │ 135        │                                ┃
┗━━━━━━━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━┷━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
```
The example above shows the test results for two runs of the crew with two tasks, with the average total score for each task and the crew as a whole.

View File

@@ -106,7 +106,7 @@ Here is a list of the available tools and their descriptions:
| **CodeInterpreterTool** | A tool for interpreting python code. |
| **ComposioTool** | Enables use of Composio tools. |
| **CSVSearchTool** | A RAG tool designed for searching within CSV files, tailored to handle structured data. |
| **DALL-E Tool** | A tool for generating images using the DALL-E API. |
| **DALL-E Tool** | A tool for generating images using the DALL-E API. |
| **DirectorySearchTool** | A RAG tool for searching within directories, useful for navigating through file systems. |
| **DOCXSearchTool** | A RAG tool aimed at searching within DOCX documents, ideal for processing Word files. |
| **DirectoryReadTool** | Facilitates reading and processing of directory structures and their contents. |
@@ -114,7 +114,7 @@ Here is a list of the available tools and their descriptions:
| **FileReadTool** | Enables reading and extracting data from files, supporting various file formats. |
| **FirecrawlSearchTool** | A tool to search webpages using Firecrawl and return the results. |
| **FirecrawlCrawlWebsiteTool** | A tool for crawling webpages using Firecrawl. |
| **FirecrawlScrapeWebsiteTool** | A tool for scraping webpages url using Firecrawl and returning its contents. |
| **FirecrawlScrapeWebsiteTool** | A tool for scraping a webpage by URL using Firecrawl and returning its contents. |
| **GithubSearchTool** | A RAG tool for searching within GitHub repositories, useful for code and documentation search.|
| **SerperDevTool** | A specialized tool for development purposes, with specific functionalities under development. |
| **TXTSearchTool** | A RAG tool focused on searching within text (.txt) files, suitable for unstructured data. |
@@ -123,14 +123,14 @@ Here is a list of the available tools and their descriptions:
| **MDXSearchTool** | A RAG tool tailored for searching within Markdown (MDX) files, useful for documentation. |
| **PDFSearchTool** | A RAG tool aimed at searching within PDF documents, ideal for processing scanned documents. |
| **PGSearchTool** | A RAG tool optimized for searching within PostgreSQL databases, suitable for database queries. |
| **Vision Tool** | A tool for generating images using the DALL-E API. |
| **RagTool** | A general-purpose RAG tool capable of handling various data sources and types. |
| **ScrapeElementFromWebsiteTool** | Enables scraping specific elements from websites, useful for targeted data extraction. |
| **ScrapeWebsiteTool** | Facilitates scraping entire websites, ideal for comprehensive data collection. |
| **WebsiteSearchTool** | A RAG tool for searching website content, optimized for web data extraction. |
| **XMLSearchTool** | A RAG tool designed for searching within XML files, suitable for structured data formats. |
| **YoutubeChannelSearchTool**| A RAG tool for searching within YouTube channels, useful for video content analysis. |
| **YoutubeVideoSearchTool** | A RAG tool aimed at searching within YouTube videos, ideal for video data extraction. |
| **Vision Tool** | A tool for generating images using the DALL-E API. |
| **RagTool** | A general-purpose RAG tool capable of handling various data sources and types. |
| **ScrapeElementFromWebsiteTool** | Enables scraping specific elements from websites, useful for targeted data extraction. |
| **ScrapeWebsiteTool** | Facilitates scraping entire websites, ideal for comprehensive data collection. |
| **WebsiteSearchTool** | A RAG tool for searching website content, optimized for web data extraction. |
| **XMLSearchTool** | A RAG tool designed for searching within XML files, suitable for structured data formats. |
| **YoutubeChannelSearchTool**| A RAG tool for searching within YouTube channels, useful for video content analysis. |
| **YoutubeVideoSearchTool** | A RAG tool aimed at searching within YouTube videos, ideal for video data extraction. |
## Creating your own Tools
@@ -144,6 +144,7 @@ pip install 'crewai[tools]'
```
Once you do that there are two main ways for one to create a crewAI tool:
### Subclassing `BaseTool`
```python
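# Hedged sketch of the subclassing approach; field and method names follow the
# crewai_tools BaseTool convention -- verify against your installed version.
from crewai_tools import BaseTool

class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = "Clear description of what this tool is useful for."

    def _run(self, argument: str) -> str:
        # Your tool's actual logic goes here
        return "Result from the custom tool"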

View File

@@ -16,7 +16,7 @@ To use the training feature, follow these steps:
3. Run the following command:
```shell
crewai train -n <n_iterations> <filename>
crewai train -n <n_iterations> <filename> (optional)
```
!!! note "Replace `<n_iterations>` with the desired number of training iterations and `<filename>` with the appropriate filename ending with `.pkl`."
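Training can also be kicked off from Python. The sketch below assumes the `Crew` object exposes a `train(...)` method mirroring the CLI's parameters; check the exact signature in your installed version before relying on it. The module and class names are hypothetical placeholders.
```python
from my_project.crew import MyProjectCrew  # hypothetical crew module

# Assumed to mirror `crewai train -n <n_iterations> <filename>`
crew = MyProjectCrew().crew()
crew.train(n_iterations=2, filename="trained_agents_data.pkl")
```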

View File

@@ -5,9 +5,10 @@ description: Learn how to integrate LangChain tools with CrewAI agents to enhanc
## Using LangChain Tools
!!! info "LangChain Integration"
CrewAI seamlessly integrates with LangChain's comprehensive toolkit for search-based queries and more; the built-in tools offered by LangChain are listed in the [LangChain Toolkit](https://python.langchain.com/docs/integrations/tools/)
CrewAI seamlessly integrates with LangChain's comprehensive [list of tools](https://python.langchain.com/docs/integrations/tools/), all of which can be used with crewAI.
```python
import os
from crewai import Agent
from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper
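# Hedged continuation sketch: wrap the Serper wrapper in a LangChain Tool and
# hand it to an agent. The SERPER_API_KEY value is a placeholder.
os.environ["SERPER_API_KEY"] = "your-serper-api-key"

search = GoogleSerperAPIWrapper()

serper_tool = Tool(
    name="Intermediate Answer",
    func=search.run,
    description="Useful for search-based queries",
)

research_agent = Agent(
    role="Research Analyst",
    goal="Provide up-to-date market analysis",
    backstory="An expert analyst with a keen eye for market trends.",
    tools=[serper_tool],
)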

View File

@@ -35,10 +35,10 @@ query_tool = LlamaIndexTool.from_query_engine(
# Create and assign the tools to an agent
agent = Agent(
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[tool, *tools, query_tool]
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[tool, *tools, query_tool]
)
# rest of the code ...
@@ -54,4 +54,4 @@ To effectively use the LlamaIndexTool, follow these steps:
pip install 'crewai[tools]'
```
2. **Install and Use LlamaIndex**: Follow LlamaIndex documentation [LlamaIndex Documentation](https://docs.llamaindex.ai/) to set up a RAG/agent pipeline.
2. **Install and Use LlamaIndex**: Follow the LlamaIndex documentation [LlamaIndex Documentation](https://docs.llamaindex.ai/) to set up a RAG/agent pipeline.
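As a hedged sketch of step 2, the snippet below builds a simple LlamaIndex query engine over a local `./data` directory and wraps it with `LlamaIndexTool.from_query_engine` as in the example above; the directory name and tool description are illustrative.
```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from crewai_tools import LlamaIndexTool

# Build a minimal RAG pipeline over local documents (assumed to live in ./data)
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

query_tool = LlamaIndexTool.from_query_engine(
    query_engine,
    name="Docs Query Tool",
    description="Use this tool to look up information in the local documents",
)
```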

View File

@@ -71,25 +71,59 @@ To customize your pipeline project, you can:
3. Modify `src/<project_name>/main.py` to set up and run your pipelines.
4. Add your environment variables into the `.env` file.
### Example: Defining a Pipeline
## Example 1: Defining a Two-Stage Sequential Pipeline
Here's an example of how to define a pipeline in `src/<project_name>/pipelines/normal_pipeline.py`:
Here's an example of how to define a pipeline with sequential stages in `src/<project_name>/pipelines/pipeline.py`:
```python
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.normal_crew import NormalCrew
from ..crews.research_crew.research_crew import ResearchCrew
from ..crews.write_x_crew.write_x_crew import WriteXCrew
@PipelineBase
class NormalPipeline:
class SequentialPipeline:
def __init__(self):
# Initialize crews
self.normal_crew = NormalCrew().crew()
self.research_crew = ResearchCrew().crew()
self.write_x_crew = WriteXCrew().crew()
def create_pipeline(self):
return Pipeline(
stages=[
self.normal_crew
self.research_crew,
self.write_x_crew
]
)
async def kickoff(self, inputs):
pipeline = self.create_pipeline()
results = await pipeline.kickoff(inputs)
return results
```
## Example 2: Defining a Two-Stage Pipeline with Parallel Execution
```python
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.research_crew.research_crew import ResearchCrew
from ..crews.write_x_crew.write_x_crew import WriteXCrew
from ..crews.write_linkedin_crew.write_linkedin_crew import WriteLinkedInCrew
@PipelineBase
class ParallelExecutionPipeline:
def __init__(self):
# Initialize crews
self.research_crew = ResearchCrew().crew()
self.write_x_crew = WriteXCrew().crew()
self.write_linkedin_crew = WriteLinkedInCrew().crew()
def create_pipeline(self):
return Pipeline(
stages=[
self.research_crew,
[self.write_x_crew, self.write_linkedin_crew] # Parallel execution
]
)
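    # Hedged continuation sketch: the kickoff method mirrors Example 1 above
    async def kickoff(self, inputs):
        pipeline = self.create_pipeline()
        results = await pipeline.kickoff(inputs)
        return results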
@@ -126,4 +160,4 @@ This will initialize your pipeline and begin task execution as defined in your `
Pipelines can be deployed in the same way as regular CrewAI projects. The easiest way is through [CrewAI+](https://www.crewai.com/crewaiplus), where you can deploy your pipeline in a few clicks.
Remember, when working with pipelines, you're orchestrating multiple crews to work together in a sequence or parallel fashion. This allows for more complex workflows and information processing tasks.
Remember, when working with pipelines, you're orchestrating multiple crews to work together in a sequence or parallel fashion. This allows for more complex workflows and information processing tasks.

View File

@@ -1,5 +1,7 @@
---
title: Starting a New CrewAI Project - Using Template
description: A comprehensive guide to starting a new CrewAI project, including the latest updates and project setup methods.
---
@@ -17,40 +19,13 @@ Before we start, there are a couple of things to note:
Before getting started with CrewAI, make sure that you have installed it via pip:
```shell
$ pip install crewai crewai-tools
$ pip install 'crewai[tools]'
```
### Virtual Environments
It is highly recommended that you use virtual environments to ensure that your CrewAI project is isolated from other projects and dependencies. Virtual environments provide a clean, separate workspace for each project, preventing conflicts between different versions of packages and libraries. This isolation is crucial for maintaining consistency and reproducibility in your development process. You have multiple options for setting up virtual environments depending on your operating system and Python version:
1. Use venv (Python's built-in virtual environment tool):
venv is included with Python 3.3 and later, making it a convenient choice for many developers. It's lightweight and easy to use, perfect for simple project setups.
To set up virtual environments with venv, refer to the official [Python documentation](https://docs.python.org/3/tutorial/venv.html).
2. Use Conda (A Python virtual environment manager):
Conda is an open-source package manager and environment management system for Python. It's widely used by data scientists, developers, and researchers to manage dependencies and environments in a reproducible way.
To set up virtual environments with Conda, refer to the official [Conda documentation](https://docs.conda.io/projects/conda/en/stable/user-guide/getting-started.html).
3. Use Poetry (A Python package manager and dependency management tool):
Poetry is an open-source Python package manager that simplifies the installation of packages and their dependencies. Poetry offers a convenient way to manage virtual environments and dependencies.
Poetry is CrewAI's preferred tool for package / dependency management in CrewAI.
### Code IDEs
Most users of CrewAI use a Code Editor / Integrated Development Environment (IDE) for building their Crews. You can use any code IDE of your choice. See below for some popular options for Code Editors / Integrated Development Environments (IDE):
- [Visual Studio Code](https://code.visualstudio.com/) - Most popular
- [PyCharm](https://www.jetbrains.com/pycharm/)
- [Cursor AI](https://cursor.com)
Pick one that suits your style and needs.
## Creating a New Project
In this example, we will be using Venv as our virtual environment manager.
To set up a virtual environment, run the following CLI command:
In this example, we will be using poetry as our virtual environment manager.
To create a new CrewAI project, run the following CLI command:
```shell
@@ -123,10 +98,13 @@ research_candidates_task:
```
### Referencing Variables:
Your defined functions with the same name will be used. For example, you can reference the agent for specific tasks from task.yaml file. Ensure your annotated agent and function name is the same otherwise your task won't recognize the reference properly.
Your defined functions with the same name will be used. For example, you can reference the agent for specific tasks from `tasks.yaml` file. Ensure your annotated agent and function name are the same; otherwise, your task won't recognize the reference properly.
#### Example References
agent.yaml
`agents.yaml`
```yaml
email_summarizer:
role: >
@@ -138,7 +116,8 @@ email_summarizer:
llm: mixtal_llm
```
task.yaml
`tasks.yaml`
```yaml
email_summarizer_task:
description: >
@@ -151,37 +130,34 @@ email_summarizer_task:
- research_task
```
Use the annotations to properly reference the agent and task in the crew.py file.
Use the annotations to properly reference the agent and task in the `crew.py` file.
### Annotations include:
* [@agent](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L17)
* [@task](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L4)
* [@crew](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L69)
* [@llm](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L23)
* [@tool](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L39)
* [@callback](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L44)
* [@output_json](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L29)
* [@output_pydantic](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L34)
* [@cache_handler](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L49)
crew.py
```py
* `@agent`
* `@task`
* `@crew`
* `@tool`
* `@callback`
* `@output_json`
* `@output_pydantic`
* `@cache_handler`
`crew.py`
```python
# ...
@llm
def mixtal_llm(self):
return ChatGroq(temperature=0, model_name="mixtral-8x7b-32768")
@agent
def email_summarizer(self) -> Agent:
return Agent(
config=self.agents_config["email_summarizer"],
)
@agent
def email_summarizer(self) -> Agent:
return Agent(
config=self.agents_config["email_summarizer"],
)
## ...other tasks defined
@task
def email_summarizer_task(self) -> Task:
return Task(
config=self.tasks_config["email_summarizer_task"],
)
@task
def email_summarizer_task(self) -> Task:
return Task(
config=self.tasks_config["email_summarizer_task"],
)
# ...
```
@@ -232,6 +208,7 @@ To run your project, use the following command:
```shell
$ crewai run
```
This will initialize your crew of AI agents and begin task execution as defined in your configuration in the `main.py` file.
### Replay Tasks from Latest Crew Kickoff

View File

@@ -19,7 +19,7 @@ from crewai.task import Task
from crewai_tools import SerperDevTool
# Define a condition function for the conditional task
# if false task will be skipped, true, then execute task
# If false, the task will be skipped; if true, the task will be executed.
def is_data_missing(output: TaskOutput) -> bool:
return len(output.pydantic.events) < 10  # data is considered missing when fewer than 10 events were fetched
@@ -29,21 +29,21 @@ data_fetcher_agent = Agent(
goal="Fetch data online using Serper tool",
backstory="Backstory 1",
verbose=True,
tools=[SerperDevTool()],
tools=[SerperDevTool()]
)
data_processor_agent = Agent(
role="Data Processor",
goal="Process fetched data",
backstory="Backstory 2",
verbose=True,
verbose=True
)
summary_generator_agent = Agent(
role="Summary Generator",
goal="Generate summary from fetched data",
backstory="Backstory 3",
verbose=True,
verbose=True
)
class EventOutput(BaseModel):
@@ -69,7 +69,7 @@ conditional_task = ConditionalTask(
task3 = Task(
description="Generate summary of events in San Francisco from fetched data",
expected_output="summary_generated",
expected_output="A complete report on the customer and their customers and competitors, including their demographics, preferences, market positioning and audience engagement.",
agent=summary_generator_agent,
)
@@ -78,7 +78,7 @@ crew = Crew(
agents=[data_fetcher_agent, data_processor_agent, summary_generator_agent],
tasks=[task1, conditional_task, task3],
verbose=True,
planning=True # Enable planning feature
planning=True
)
# Run the crew

View File

@@ -91,4 +91,4 @@ Custom prompt files should be structured in JSON format and include all necessar
- **Improved Usability**: Supports multiple languages, making it suitable for global projects.
- **Consistency**: Ensures uniform prompt structures across different agents and tasks.
By incorporating these updates, CrewAI provides users with the ability to fully customize and internationalize their agent prompts, making the platform more versatile and user-friendly.
By incorporating these updates, CrewAI provides users with the ability to fully customize and internationalize their agent prompts, making the platform more versatile and user-friendly.

View File

@@ -14,12 +14,16 @@ Crafting an efficient CrewAI team hinges on the ability to dynamically tailor yo
- **Cache** *(Optional)*: Determines whether the agent should use a cache for tool usage.
- **Max RPM**: Sets the maximum number of requests per minute (`max_rpm`). This attribute is optional and can be set to `None` for no limit, allowing for unlimited queries to external services if needed.
- **Verbose** *(Optional)*: Enables detailed logging of an agent's actions, useful for debugging and optimization. Specifically, it provides insights into agent execution processes, aiding in the optimization of performance.
- **Allow Delegation** *(Optional)*: `allow_delegation` controls whether the agent is allowed to delegate tasks to other agents.
- **Max Iter** *(Optional)*: The `max_iter` attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions. The default value is set to 25, providing a balance between thoroughness and efficiency. Once the agent approaches this number, it will try its best to give a good answer.
- **Allow Delegation** *(Optional)*: `allow_delegation` controls whether the agent is allowed to delegate tasks to other agents. This attribute is now set to `False` by default.
- **Max Iter** *(Optional)*: The `max_iter` attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions. The default value is set to 25, providing a balance between thoroughness and efficiency.
- **Max Execution Time** *(Optional)*: `max_execution_time` sets the maximum execution time for an agent to complete a task.
- **System Template** *(Optional)*: `system_template` defines the system format for the agent.
- **Prompt Template** *(Optional)*: `prompt_template` defines the prompt format for the agent.
- **Response Template** *(Optional)*: `response_template` defines the response format for the agent.
- **Use Stop Words** *(Optional)*: The `use_stop_words` attribute controls whether the agent will use stop words during task execution. This is now supported to aid o1 models.
- **Use System Prompt** *(Optional)*: `use_system_prompt` controls whether the agent will use a system prompt for task execution. Agents can now operate without system prompts.
- **Respect Context Window**: `respect_context_window` renames the sliding context window attribute and enables it by default to maintain context size.
- **Max Retry Limit**: `max_retry_limit` defines the maximum number of retries for an agent to execute a task when an error occurs. A short sketch combining several of these attributes is shown below.
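For quick reference, here is a hedged sketch of an agent combining several of the newer attributes listed above; the values are arbitrary and the parameter names follow the descriptions in this list.
```python
from crewai import Agent

agent = Agent(
    role='Research Analyst',
    goal='Summarize market signals',
    backstory='A meticulous analyst.',
    use_system_prompt=True,       # set False to run without a system prompt
    use_stop_words=True,          # disable if the target model rejects stop sequences
    respect_context_window=True,  # default: keep prompts within the model's context size
    max_retry_limit=2,            # retries when an execution error occurs
    max_execution_time=300,       # seconds
    allow_delegation=False        # now the default
)
```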
## Advanced Customization Options
Beyond the basic attributes, CrewAI allows for deeper customization to enhance an agent's behavior and capabilities significantly.
@@ -67,12 +71,11 @@ agent = Agent(
verbose=True,
max_rpm=None, # No limit on requests per minute
max_iter=25, # Default value for maximum iterations
allow_delegation=False
)
```
## Delegation and Autonomy
Controlling an agent's ability to delegate tasks or ask questions is vital for tailoring its autonomy and collaborative dynamics within the CrewAI framework. By default, the `allow_delegation` attribute is set to `True`, enabling agents to seek assistance or delegate tasks as needed. This default behavior promotes collaborative problem-solving and efficiency within the CrewAI ecosystem. If needed, delegation can be disabled to suit specific operational requirements.
Controlling an agent's ability to delegate tasks or ask questions is vital for tailoring its autonomy and collaborative dynamics within the CrewAI framework. By default, the `allow_delegation` attribute is now set to `False`, so agents do not seek assistance or delegate tasks unless you enable it. This default can be changed to promote collaborative problem-solving and efficiency within the CrewAI ecosystem; enable delegation when it suits your specific operational requirements.
### Example: Disabling Delegation for an Agent
```python
@@ -80,7 +83,7 @@ agent = Agent(
role='Content Writer',
goal='Write engaging content on market trends',
backstory='A seasoned writer with expertise in market analysis.',
allow_delegation=False # Disabling delegation
allow_delegation=True # Enabling delegation
)
```

View File

@@ -1,27 +1,31 @@
---
title: Forcing Tool Output as Result
description: Learn how to force tool output as the result in of an Agent's task in CrewAI.
description: Learn how to force tool output as the result in an Agent's task in CrewAI.
---
## Introduction
In CrewAI, you can force the output of a tool as the result of an agent's task. This feature is useful when you want to ensure that the tool output is captured and returned as the task result, and avoid the agent modifying the output during the task execution.
In CrewAI, you can force the output of a tool as the result of an agent's task. This feature is useful when you want to ensure that the tool output is captured and returned as the task result, avoiding any agent modification during the task execution.
## Forcing Tool Output as Result
To force the tool output as the result of an agent's task, you can set the `result_as_answer` parameter to `True` when creating the agent. This parameter ensures that the tool output is captured and returned as the task result, without any modifications by the agent.
To force the tool output as the result of an agent's task, you need to set the `result_as_answer` parameter to `True` when adding a tool to the agent. This parameter ensures that the tool output is captured and returned as the task result, without any modifications by the agent.
Here's an example of how to force the tool output as the result of an agent's task:
```python
# ...
from crewai.agent import Agent
from my_tool import MyCustomTool
# Define a custom tool that returns the result as the answer
# Create a coding agent with the custom tool
coding_agent = Agent(
role="Data Scientist",
goal="Produce amazing reports on AI",
backstory="You work with data and AI",
tools=[MyCustomTool(result_as_answer=True)],
)
# Assuming the tool's execution and result population occurs within the system
task_result = coding_agent.execute_task(task)
```
## Workflow in Action

View File

@@ -16,6 +16,13 @@ By default, tasks in CrewAI are managed through a sequential process. However, a
- **Task Delegation**: A manager agent allocates tasks among crew members based on their roles and capabilities.
- **Result Validation**: The manager evaluates outcomes to ensure they meet the required standards.
- **Efficient Workflow**: Emulates corporate structures, providing an organized approach to task management.
- **System Prompt Handling**: Optionally specify whether the system should use predefined prompts.
- **Stop Words Control**: Optionally specify whether stop words should be used, supporting various models including the o1 models.
- **Context Window Respect**: Prioritize important context by enabling respect of the context window, which is now the default behavior.
- **Delegation Control**: Delegation is now disabled by default to give users explicit control.
- **Max Requests Per Minute**: Configurable option to set the maximum number of requests per minute.
- **Max Iterations**: Limit the maximum number of iterations for obtaining a final answer.
## Implementing the Hierarchical Process
To utilize the hierarchical process, it's essential to explicitly set the process attribute to `Process.hierarchical`, as the default behavior is `Process.sequential`. Define a crew with a designated manager and establish a clear chain of command.
@@ -38,6 +45,10 @@ researcher = Agent(
cache=True,
verbose=False,
# tools=[] # This can be optionally specified; defaults to an empty list
use_system_prompt=True, # Enable or disable system prompts for this agent
use_stop_words=True, # Enable or disable stop words for this agent
max_rpm=30, # Limit on the number of requests per minute
max_iter=5 # Maximum number of iterations for a final answer
)
writer = Agent(
role='Writer',
@@ -46,6 +57,10 @@ writer = Agent(
cache=True,
verbose=False,
# tools=[] # Optionally specify tools; defaults to an empty list
use_system_prompt=True, # Enable or disable system prompts for this agent
use_stop_words=True, # Enable or disable stop words for this agent
max_rpm=30, # Limit on the number of requests per minute
max_iter=5 # Maximum number of iterations for a final answer
)
# Establishing the crew with a hierarchical process and additional configurations
@@ -54,6 +69,7 @@ project_crew = Crew(
agents=[researcher, writer],
manager_llm=ChatOpenAI(temperature=0, model="gpt-4"), # Mandatory if manager_agent is not set
process=Process.hierarchical, # Specifies the hierarchical management approach
respect_context_window=True, # Enable respect of the context window for tasks
memory=True, # Enable memory usage for enhanced task execution
manager_agent=None, # Optional: explicitly set a specific agent as manager instead of the manager_llm
planning=True, # Enable planning feature for pre-execution strategy

View File

@@ -74,7 +74,8 @@ task2 = Task(
"Aim for a narrative that captures the essence of these breakthroughs and their implications for the future."
),
expected_output='A compelling 3 paragraphs blog post formatted as markdown about the latest AI advancements in 2024',
agent=writer
agent=writer,
human_input=True
)
# Instantiate your crew with a sequential process

View File

@@ -4,9 +4,11 @@ description: Kickoff a Crew Asynchronously
---
## Introduction
CrewAI provides the ability to kickoff a crew asynchronously, allowing you to start the crew execution in a non-blocking manner. This feature is particularly useful when you want to run multiple crews concurrently or when you need to perform other tasks while the crew is executing.
## Asynchronous Crew Execution
To kickoff a crew asynchronously, use the `kickoff_async()` method. This method initiates the crew execution in a separate thread, allowing the main thread to continue executing other tasks.
### Method Signature
@@ -23,10 +25,20 @@ def kickoff_async(self, inputs: dict) -> CrewOutput:
- `CrewOutput`: An object representing the result of the crew execution.
## Example
Here's an example of how to kickoff a crew asynchronously:
## Potential Use Cases
- **Parallel Content Generation**: Kickoff multiple independent crews asynchronously, each responsible for generating content on different topics. For example, one crew might research and draft an article on AI trends, while another crew generates social media posts about a new product launch. Each crew operates independently, allowing content production to scale efficiently.
- **Concurrent Market Research Tasks**: Launch multiple crews asynchronously to conduct market research in parallel. One crew might analyze industry trends, while another examines competitor strategies, and yet another evaluates consumer sentiment. Each crew independently completes its task, enabling faster and more comprehensive insights.
- **Independent Travel Planning Modules**: Execute separate crews to independently plan different aspects of a trip. One crew might handle flight options, another handles accommodation, and a third plans activities. Each crew works asynchronously, allowing various components of the trip to be planned simultaneously and independently for faster results.
## Example: Single Asynchronous Crew Execution
Here's an example of how to kickoff a crew asynchronously using asyncio and awaiting the result:
```python
import asyncio
from crewai import Crew, Agent, Task
# Create an agent with code execution enabled
@@ -49,6 +61,57 @@ analysis_crew = Crew(
tasks=[data_analysis_task]
)
# Execute the crew asynchronously
result = analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
# Async function to kickoff the crew asynchronously
async def async_crew_execution():
result = await analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
print("Crew Result:", result)
# Run the async function
asyncio.run(async_crew_execution())
```
## Example: Multiple Asynchronous Crew Executions
In this example, we'll show how to kickoff multiple crews asynchronously and wait for all of them to complete using `asyncio.gather()`:
```python
import asyncio
from crewai import Crew, Agent, Task
# Create an agent with code execution enabled
coding_agent = Agent(
role="Python Data Analyst",
goal="Analyze data and provide insights using Python",
backstory="You are an experienced data analyst with strong Python skills.",
allow_code_execution=True
)
# Create tasks that require code execution
task_1 = Task(
description="Analyze the first dataset and calculate the average age of participants. Ages: {ages}",
agent=coding_agent
)
task_2 = Task(
description="Analyze the second dataset and calculate the average age of participants. Ages: {ages}",
agent=coding_agent
)
# Create two crews and add tasks
crew_1 = Crew(agents=[coding_agent], tasks=[task_1])
crew_2 = Crew(agents=[coding_agent], tasks=[task_2])
# Async function to kickoff multiple crews asynchronously and wait for all to finish
async def async_multiple_crews():
result_1 = crew_1.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
result_2 = crew_2.kickoff_async(inputs={"ages": [20, 22, 24, 28, 30]})
# Wait for both crews to finish
results = await asyncio.gather(result_1, result_2)
for i, result in enumerate(results, 1):
print(f"Crew {i} Result:", result)
# Run the async function
asyncio.run(async_multiple_crews())
```

View File

@@ -25,13 +25,17 @@ coding_agent = Agent(
# Create a task that requires code execution
data_analysis_task = Task(
description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
agent=coding_agent
agent=coding_agent,
expected_output="The average age calculated from the dataset"
)
# Create a crew and add the task
analysis_crew = Crew(
agents=[coding_agent],
tasks=[data_analysis_task]
tasks=[data_analysis_task],
verbose=True,
memory=False,
respect_context_window=True # enable by default
)
datasets = [
@@ -42,4 +46,4 @@ datasets = [
# Execute the crew
result = analysis_crew.kickoff_for_each(inputs=datasets)
```
```

View File

@@ -1,197 +1,113 @@
---
title: Connect CrewAI to LLMs
description: Comprehensive guide on integrating CrewAI with various Large Language Models (LLMs), including detailed class attributes, methods, and configuration options.
description: Comprehensive guide on integrating CrewAI with various Large Language Models (LLMs) using LiteLLM, including supported providers and configuration options.
---
## Connect CrewAI to LLMs
CrewAI now uses LiteLLM to connect to a wide variety of Language Models (LLMs). This integration provides extensive versatility, allowing you to use models from numerous providers with a simple, unified interface.
!!! note "Default LLM"
By default, CrewAI uses OpenAI's GPT-4o model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4o") for language processing. You can configure your agents to use a different model or API as described in this guide.
By default, CrewAI uses OpenAI's GPT-4 model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4") for language processing. You can configure your agents to use a different model or API as described in this guide.
By default, CrewAI uses OpenAI's GPT-4 model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4") for language processing. You can easily configure your agents to use a different model or provider as described in this guide.
CrewAI provides extensive versatility in integrating with various Language Models (LLMs), ranging from local options through Ollama, such as Llama and Mixtral, to cloud-based solutions like Azure. Its compatibility extends to all [LangChain LLM components](https://python.langchain.com/v0.2/docs/integrations/llms/), offering a wide range of integration possibilities for customized AI applications.
## Supported Providers
The platform supports connections to an array of Generative AI models, including:
LiteLLM supports a wide range of providers, including but not limited to:
- OpenAI's suite of advanced language models
- Anthropic's cutting-edge AI offerings
- Ollama's diverse range of locally-hosted generative models & embeddings
- LM Studio's diverse range of locally hosted generative models & embeddings
- Groq's Super Fast LLM offerings
- Azure's generative AI offerings
- HuggingFace's generative AI offerings
- OpenAI
- Anthropic
- Google (Vertex AI, Gemini)
- Azure OpenAI
- AWS (Bedrock, SageMaker)
- Cohere
- Hugging Face
- Ollama
- Mistral AI
- Replicate
- Together AI
- AI21
- Cloudflare Workers AI
- DeepInfra
- Groq
- And many more!
This broad spectrum of LLM options enables users to select the most suitable model for their specific needs, whether prioritizing local deployment, specialized capabilities, or cloud-based scalability.
For a complete and up-to-date list of supported providers, please refer to the [LiteLLM Providers documentation](https://docs.litellm.ai/docs/providers).
## Changing the LLM
To use a different LLM with your CrewAI agents, you simply need to pass the model name as a string when initializing the agent. Here are some examples:
## Changing the default LLM
The default LLM is provided through the `langchain-openai` package, which is installed by default when you install CrewAI. You can change this default LLM to a different model or API by setting the `OPENAI_MODEL_NAME` environment variable. This straightforward process allows you to harness the power of different OpenAI models, enhancing the flexibility and capabilities of your CrewAI implementation.
```python
# Required
os.environ["OPENAI_MODEL_NAME"]="gpt-4-0125-preview"
# Agent will automatically use the model defined in the environment variable
example_agent = Agent(
role='Local Expert',
goal='Provide insights about the city',
backstory="A knowledgeable local guide.",
verbose=True
)
```
## Ollama Local Integration
Ollama is preferred for local LLM integration, offering customization and privacy benefits. To integrate Ollama with CrewAI, you will need the `langchain-ollama` package. You can then set the following environment variables to connect to your Ollama instance running locally on port 11434.
```python
os.environ["OPENAI_API_BASE"] = 'http://localhost:11434'
os.environ["OPENAI_MODEL_NAME"] = 'llama2'  # Adjust based on available model
os.environ["OPENAI_API_KEY"] = ''  # No API Key required for Ollama
```
## Ollama Integration Step by Step (ex. for using Llama 3.1 8B locally)
1. [Download and install Ollama](https://ollama.com/download).
2. After setting up Ollama, pull the Llama 3.1 8B model by typing the following command into your terminal: ```ollama run llama3.1```.
3. Llama3.1 should now be served locally on `http://localhost:11434`
```python
from crewai import Agent, Task, Crew
from langchain_ollama import ChatOllama
import os
os.environ["OPENAI_API_KEY"] = "NA"
llm = ChatOllama(
model = "llama3.1",
base_url = "http://localhost:11434")
general_agent = Agent(role = "Math Professor",
goal = """Provide the solution to the students that are asking mathematical questions and give them the answer.""",
backstory = """You are an excellent math professor that likes to solve math questions in a way that everyone can understand your solution""",
allow_delegation = False,
verbose = True,
llm = llm)
task = Task(description="""what is 3 + 5""",
agent = general_agent,
expected_output="A numerical answer.")
crew = Crew(
agents=[general_agent],
tasks=[task],
verbose=True
)
result = crew.kickoff()
print(result)
```
## HuggingFace Integration
There are a couple of different ways you can use HuggingFace to host your LLM.
### Your own HuggingFace endpoint
```python
from langchain_huggingface import HuggingFaceEndpoint
llm = HuggingFaceEndpoint(
repo_id="microsoft/Phi-3-mini-4k-instruct",
task="text-generation",
max_new_tokens=512,
do_sample=False,
repetition_penalty=1.03,
)
agent = Agent(
role="HuggingFace Agent",
goal="Generate text using HuggingFace",
backstory="A diligent explorer of GitHub docs.",
llm=llm
)
```
## OpenAI Compatible API Endpoints
Switch between APIs and models seamlessly using environment variables, supporting platforms like FastChat, LM Studio, Groq, and Mistral AI.
### Configuration Examples
#### FastChat
```python
os.environ["OPENAI_API_BASE"] = "http://localhost:8001/v1"
os.environ["OPENAI_MODEL_NAME"] = 'oh-2.5m7b-q51'
os.environ["OPENAI_API_KEY"] = "NA"
```
#### LM Studio
Launch [LM Studio](https://lmstudio.ai) and go to the Server tab. Then select a model from the dropdown menu and wait for it to load. Once it's loaded, click the green Start Server button and use the URL, port, and API key that's shown (you can modify them). Below is an example of the default settings as of LM Studio 0.2.19:
```python
os.environ["OPENAI_API_BASE"] = "http://localhost:1234/v1"
os.environ["OPENAI_API_KEY"] = "lm-studio"
```
#### Groq API
```python
os.environ["OPENAI_API_KEY"] = "your-groq-api-key"
os.environ["OPENAI_MODEL_NAME"] = 'llama3-8b-8192'
os.environ["OPENAI_API_BASE"] = "https://api.groq.com/openai/v1"
```
#### Mistral API
```python
os.environ["OPENAI_API_KEY"] = "your-mistral-api-key"
os.environ["OPENAI_API_BASE"] = "https://api.mistral.ai/v1"
os.environ["OPENAI_MODEL_NAME"] = "mistral-small"
```
### Solar
```python
from langchain_community.chat_models.solar import SolarChat
```
```python
os.environ["SOLAR_API_BASE"] = "https://api.upstage.ai/v1/solar"
os.environ["SOLAR_API_KEY"] = "your-solar-api-key"
```
# Free developer API key available here: https://console.upstage.ai/services/solar
# Langchain Example: https://github.com/langchain-ai/langchain/pull/18556
### Cohere
```python
from langchain_cohere import ChatCohere
# Initialize language model
os.environ["COHERE_API_KEY"] = "your-cohere-api-key"
llm = ChatCohere()
# Free developer API key available here: https://cohere.com/
# Langchain Documentation: https://python.langchain.com/docs/integrations/chat/cohere
```
### Azure Open AI Configuration
For Azure OpenAI API integration, set the following environment variables:
```sh
os.environ["AZURE_OPENAI_DEPLOYMENT"] = "Your deployment"
os.environ["OPENAI_API_VERSION"] = "2023-12-01-preview"
os.environ["AZURE_OPENAI_ENDPOINT"] = "Your Endpoint"
os.environ["AZURE_OPENAI_API_KEY"] = "<Your API Key>"
```
### Example Agent with Azure LLM
```python
from dotenv import load_dotenv
from crewai import Agent
from langchain_openai import AzureChatOpenAI
load_dotenv()
azure_llm = AzureChatOpenAI(
azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
api_key=os.environ.get("AZURE_OPENAI_API_KEY")
# Using OpenAI's GPT-4
openai_agent = Agent(
role='OpenAI Expert',
goal='Provide insights using GPT-4',
backstory="An AI assistant powered by OpenAI's latest model.",
llm='gpt-4'
)
azure_agent = Agent(
role='Example Agent',
goal='Demonstrate custom LLM configuration',
backstory='A diligent explorer of GitHub docs.',
llm=azure_llm
# Using Anthropic's Claude
claude_agent = Agent(
role='Anthropic Expert',
goal='Analyze data using Claude',
backstory="An AI assistant leveraging Anthropic's language model.",
llm='claude-2'
)
# Using Ollama's local Llama 2 model
ollama_agent = Agent(
role='Local AI Expert',
goal='Process information using a local model',
backstory="An AI assistant running on local hardware.",
llm='ollama/llama2'
)
# Using Google's Gemini model
gemini_agent = Agent(
role='Google AI Expert',
goal='Generate creative content with Gemini',
backstory="An AI assistant powered by Google's advanced language model.",
llm='gemini-pro'
)
```
## Configuration
For most providers, you'll need to set up your API keys as environment variables. Here's how you can do it for some common providers:
```python
import os
# OpenAI
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
# Anthropic
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-api-key"
# Google (Vertex AI)
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/your/credentials.json"
# Azure OpenAI
os.environ["AZURE_API_KEY"] = "your-azure-api-key"
os.environ["AZURE_API_BASE"] = "your-azure-endpoint"
# AWS (Bedrock)
os.environ["AWS_ACCESS_KEY_ID"] = "your-aws-access-key-id"
os.environ["AWS_SECRET_ACCESS_KEY"] = "your-aws-secret-access-key"
```
For providers that require additional configuration or have specific setup requirements, please refer to the [LiteLLM documentation](https://docs.litellm.ai/docs/) for detailed instructions.
## Using Local Models
For local models like those provided by Ollama, ensure you have the necessary software installed and running. For example, to use Ollama:
1. [Download and install Ollama](https://ollama.com/download)
2. Pull the desired model (e.g., `ollama pull llama2`)
3. Use the model in your CrewAI agent by specifying `llm='ollama/llama2'`
## Conclusion
Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms.
By leveraging LiteLLM, CrewAI now offers seamless integration with a vast array of LLMs. This flexibility allows you to choose the most suitable model for your specific needs, whether you prioritize performance, cost-efficiency, or local deployment. Remember to consult the [LiteLLM documentation](https://docs.litellm.ai/docs/) for the most up-to-date information on supported models and configuration options.

View File

@@ -7,10 +7,14 @@ description: How to monitor cost, latency, and performance of CrewAI Agents usin
Langtrace is an open-source, external tool that helps you set up observability and evaluations for Large Language Models (LLMs), LLM frameworks, and Vector Databases. While not built directly into CrewAI, Langtrace can be used alongside CrewAI to gain deep visibility into the cost, latency, and performance of your CrewAI Agents. This integration allows you to log hyperparameters, monitor performance regressions, and establish a process for continuous improvement of your Agents.
![Overview of a select series of agent session runs](../assets/langtrace1.png)
![Overview of agent traces](../assets/langtrace2.png)
![Overview of LLM traces in detail](../assets/langtrace3.png)
## Setup Instructions
1. Sign up for [Langtrace](https://langtrace.ai/) by visiting [https://langtrace.ai/signup](https://langtrace.ai/signup).
2. Create a project and generate an API key.
2. Create a project, set the project type to crewAI, and generate an API key.
3. Install Langtrace in your CrewAI project using the following commands:
```bash
@@ -32,58 +36,29 @@ langtrace.init(api_key='<LANGTRACE_API_KEY>')
from crewai import Agent, Task, Crew
```
2. Create your CrewAI agents and tasks as usual.
3. Use Langtrace's tracking functions to monitor your CrewAI operations. For example:
```python
with langtrace.trace("CrewAI Task Execution"):
result = crew.kickoff()
```
### Features and Their Application to CrewAI
1. **LLM Token and Cost Tracking**
- Monitor the token usage and associated costs for each CrewAI agent interaction.
- Example:
```python
with langtrace.trace("Agent Interaction"):
agent_response = agent.execute(task)
```
2. **Trace Graph for Execution Steps**
- Visualize the execution flow of your CrewAI tasks, including latency and logs.
- Useful for identifying bottlenecks in your agent workflows.
3. **Dataset Curation with Manual Annotation**
- Create datasets from your CrewAI task outputs for future training or evaluation.
- Example:
```python
langtrace.log_dataset_item(task_input, agent_output, {"task_type": "research"})
```
4. **Prompt Versioning and Management**
- Keep track of different versions of prompts used in your CrewAI agents.
- Useful for A/B testing and optimizing agent performance.
5. **Prompt Playground with Model Comparisons**
- Test and compare different prompts and models for your CrewAI agents before deployment.
6. **Testing and Evaluations**
- Set up automated tests for your CrewAI agents and tasks.
- Example:
```python
langtrace.evaluate(agent_output, expected_output, "accuracy")
```
## Monitoring New CrewAI Features
CrewAI has introduced several new features that can be monitored using Langtrace:
1. **Code Execution**: Monitor the performance and output of code executed by agents.
```python
with langtrace.trace("Agent Code Execution"):
code_output = agent.execute_code(code_snippet)
```
2. **Third-party Agent Integration**: Track interactions with LlamaIndex, LangChain, and Autogen agents.

View File

@@ -1,6 +1,7 @@
---
title: Replay Tasks from Latest Crew Kickoff
description: Replay tasks from the latest crew.kickoff(...)
---
## Introduction
@@ -16,22 +17,24 @@ To use the replay feature, follow these steps:
1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following command:
3. Run the following commands:
To view the latest kickoff task_ids use:
```shell
crewai log-tasks-outputs
```
Once you have your task_id to replay from use:
Once you have your `task_id` to replay, use:
```shell
crewai replay -t <task_id>
```
**Note:** Ensure `crewai` is installed and configured correctly in your development environment.
### Replaying from a Task Programmatically
To replay from a task programmatically, use the following steps:
1. Specify the task_id and input parameters for the replay process.
1. Specify the `task_id` and input parameters for the replay process.
2. Execute the replay command within a try-except block to handle potential errors.
```python
@@ -49,4 +52,7 @@ To replay from a task programmatically, use the following steps:
except Exception as e:
raise Exception(f"An unexpected error occurred: {e}")
```
```
## Conclusion
With the above enhancements and detailed functionality, replaying specific tasks in CrewAI has been made more efficient and robust. Ensure you follow the commands and steps precisely to make the most of these features.

View File

@@ -52,14 +52,17 @@ report_crew = Crew(
# Execute the crew
result = report_crew.kickoff()
# Accessing the type safe output
# Accessing the type-safe output
task_output: TaskOutput = result.tasks[0].output
crew_output: CrewOutput = result.output
```
### Note:
Each task in a sequential process **must** have an agent assigned. Ensure that every `Task` includes an `agent` parameter.
### Workflow in Action
1. **Initial Task**: In a sequential process, the first agent completes their task and signals completion.
2. **Subsequent Tasks**: Agents pick up their tasks based on the process type, with outcomes of preceding tasks or manager directives guiding their execution.
2. **Subsequent Tasks**: Agents pick up their tasks based on the process type, with outcomes of preceding tasks or directives guiding their execution.
3. **Completion**: The process concludes once the final task is executed, leading to project completion.
## Advanced Features
@@ -87,4 +90,6 @@ CrewAI tracks token usage across all tasks and agents. You can access these metr
1. **Order Matters**: Arrange tasks in a logical sequence where each task builds upon the previous one.
2. **Clear Task Descriptions**: Provide detailed descriptions for each task to guide the agents effectively.
3. **Appropriate Agent Selection**: Match agents' skills and roles to the requirements of each task.
4. **Use Context**: Leverage the context from previous tasks to inform subsequent ones.
4. **Use Context**: Leverage the context from previous tasks to inform subsequent ones.
This updated documentation ensures that details accurately reflect the latest changes in the codebase and clearly describes how to leverage new features and configurations. The content is kept simple and direct to ensure easy understanding.

View File

@@ -5,24 +5,39 @@ description: Understanding the telemetry data collected by CrewAI and how it con
## Telemetry
CrewAI utilizes anonymous telemetry to gather usage statistics with the primary goal of enhancing the library. Our focus is on improving and developing the features, integrations, and tools most utilized by our users. We don't offer a way to disable it now, but we will in the future.
!!! note "Personal Information"
By default, we collect no data that would be considered personal information under GDPR and other privacy regulations.
We do collect Tool's names and Agent's roles, so be advised not to include any personal information in the tool's names or the Agent's roles.
Because no personal information is collected, it's not necessary to worry about data residency.
When `share_crew` is enabled, additional data is collected which may contain personal information if included by the user. Users should exercise caution when enabling this feature to ensure compliance with privacy regulations.
It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, with the exception of the conditions mentioned. When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights while respecting user privacy.
CrewAI utilizes anonymous telemetry to gather usage statistics with the primary goal of enhancing the library. Our focus is on improving and developing the features, integrations, and tools most utilized by our users.
### Data Collected Includes:
- **Version of CrewAI**: Assessing the adoption rate of our latest version helps us understand user needs and guide our updates.
- **Python Version**: Identifying the Python versions our users operate with assists in prioritizing our support efforts for these versions.
- **General OS Information**: Details like the number of CPUs and the operating system type (macOS, Windows, Linux) enable us to focus our development on the most used operating systems and explore the potential for OS-specific features.
- **Number of Agents and Tasks in a Crew**: Ensures our internal testing mirrors real-world scenarios, helping us guide users towards best practices.
- **Crew Process Utilization**: Understanding how crews are utilized aids in directing our development focus.
- **Memory and Delegation Use by Agents**: Insights into how these features are used help evaluate their effectiveness and future.
- **Task Execution Mode**: Knowing whether tasks are executed in parallel or sequentially influences our emphasis on enhancing parallel execution capabilities.
- **Language Model Utilization**: Supports our goal to improve support for the most popular languages among our users.
- **Roles of Agents within a Crew**: Understanding the various roles agents play aids in crafting better tools, integrations, and examples.
- **Tool Usage**: Identifying which tools are most frequently used allows us to prioritize improvements in those areas.
It's pivotal to understand that by default, **NO personal data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables.
When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights. This expanded data collection may include personal information if users have incorporated it into their crews or tasks. Users should carefully consider the content of their crews and tasks before enabling `share_crew`. Users can disable telemetry by setting the environment variable OTEL_SDK_DISABLED to true.
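For example, a minimal sketch of opting out (assuming the flag is read when CrewAI's telemetry is initialized, so it should be set before CrewAI is used):
```python
import os

# Disable CrewAI's OpenTelemetry-based telemetry for this process
os.environ["OTEL_SDK_DISABLED"] = "true"

from crewai import Agent, Crew, Task  # imported after the flag is set
```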
### Data Explanation:
| Defaulted | Data | Reason and Specifics |
|-----------|-------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------|
| Yes | CrewAI and Python Version | Tracks software versions. Example: CrewAI v1.2.3, Python 3.8.10. No personal data. |
| Yes | Crew Metadata | Includes: randomly generated key and ID, process type (e.g., 'sequential', 'parallel'), boolean flag for memory usage (true/false), count of tasks, count of agents. All non-personal. |
| Yes | Agent Data | Includes: randomly generated key and ID, role name (should not include personal info), boolean settings (verbose, delegation enabled, code execution allowed), max iterations, max RPM, max retry limit, LLM info (see LLM Attributes), list of tool names (should not include personal info). No personal data. |
| Yes | Task Metadata | Includes: randomly generated key and ID, boolean execution settings (async_execution, human_input), associated agent's role and key, list of tool names. All non-personal. |
| Yes | Tool Usage Statistics | Includes: tool name (should not include personal info), number of usage attempts (integer), LLM attributes used. No personal data. |
| Yes | Test Execution Data | Includes: crew's randomly generated key and ID, number of iterations, model name used, quality score (float), execution time (in seconds). All non-personal. |
| Yes | Task Lifecycle Data | Includes: creation and execution start/end times, crew and task identifiers. Stored as spans with timestamps. No personal data. |
| Yes | LLM Attributes | Includes: name, model_name, model, top_k, temperature, and class name of the LLM. All technical, non-personal data. |
| Yes | Crew Deployment attempt using crewAI CLI | Includes: the fact that a deployment is being attempted, the crew ID, and whether logs are being pulled; no other data. |
| No | Agent's Expanded Data | Includes: goal description, backstory text, i18n prompt file identifier. Users should ensure no personal info is included in text fields. |
| No | Detailed Task Information | Includes: task description, expected output description, context references. Users should ensure no personal info is included in these fields. |
| No | Environment Information | Includes: platform, release, system, version, and CPU count. Example: 'Windows 10', 'x86_64'. No personal data. |
| No | Crew and Task Inputs and Outputs | Includes: input parameters and output results as non-identifiable data. Users should ensure no personal info is included. |
| No | Comprehensive Crew Execution Data | Includes: detailed logs of crew operations, all agents and tasks data, final output. All non-personal and technical in nature. |
Note: "No" in the "Defaulted" column indicates that this data is only collected when `share_crew` is set to `true`.
### Opt-In Further Telemetry Sharing
Users can choose to share their complete telemetry data by enabling the `share_crew` attribute to `True` in their crew configurations. Enabling `share_crew` results in the collection of detailed crew and task execution data, including `goal`, `backstory`, `context`, and `output` of tasks. This enables a deeper insight into usage patterns while respecting the user's choice to share.
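A minimal sketch of opting in, assuming a trivial single-agent crew (the agent, task, and their texts are purely illustrative):

```python
from crewai import Agent, Crew, Task

# Illustrative agent and task; with share_crew enabled, fields such as the
# goal, backstory, and task description below become part of the telemetry
# payload, so keep personal information out of them.
researcher = Agent(
    role="Researcher",
    goal="Summarize recent AI news",
    backstory="An analyst who tracks AI developments.",
)
summary_task = Task(
    description="Write a short summary of this week's AI news.",
    expected_output="A one-paragraph summary.",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[summary_task],
    share_crew=True,  # defaults to False; opt in explicitly
)
```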
### Updates and Revisions
We are committed to maintaining the accuracy and transparency of our documentation. Regular reviews and updates are performed to ensure our documentation accurately reflects the latest developments of our codebase and telemetry practices. Users are encouraged to review this section for the most current information on our data collection practices and how they contribute to the improvement of CrewAI.
!!! warning "Potential Personal Information"
If you enable `share_crew`, the collected data may include personal information if it has been incorporated into crew configurations, task descriptions, or outputs. Users should carefully review their data and ensure compliance with GDPR and other applicable privacy regulations before enabling this feature.

View File

@@ -2,8 +2,8 @@ site_name: crewAI
site_author: crewAI, Inc
site_description: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
repo_name: crewAI
repo_url: https://github.com/joaomdmoura/crewai/
site_url: https://crewai.com
repo_url: https://github.com/crewAIInc/crewAI
site_url: https://docs.crewai.com
edit_uri: edit/main/docs/
copyright: Copyright &copy; 2024 crewAI, Inc

3816
poetry.lock generated

File diff suppressed because it is too large

View File

@@ -1,6 +1,6 @@
[tool.poetry]
name = "crewai"
version = "0.51.1"
version = "0.61.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
authors = ["Joao Moura <joao@crewai.com>"]
readme = "README.md"
@@ -8,20 +8,20 @@ packages = [{ include = "crewai", from = "src" }]
[tool.poetry.urls]
Homepage = "https://crewai.com"
Documentation = "https://github.com/joaomdmoura/CrewAI/wiki/Index"
Repository = "https://github.com/joaomdmoura/crewai"
Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
pydantic = "^2.4.2"
langchain = ">0.2,<=0.3"
langchain = "^0.2.16"
openai = "^1.13.3"
opentelemetry-api = "^1.22.0"
opentelemetry-sdk = "^1.22.0"
opentelemetry-exporter-otlp-proto-http = "^1.22.0"
instructor = "1.3.3"
regex = "^2023.12.25"
crewai-tools = { version = "^0.8.3", optional = true }
regex = "^2024.9.11"
crewai-tools = { version = "^0.12.1", optional = true }
click = "^8.1.7"
python-dotenv = "^1.0.0"
appdirs = "^1.4.4"
@@ -29,6 +29,9 @@ jsonref = "^1.1.0"
agentops = { version = "^0.3.0", optional = true }
embedchain = "^0.1.114"
json-repair = "^0.25.2"
auth0-python = "^4.7.1"
poetry = "^1.8.3"
litellm = "^1.44.22"
[tool.poetry.extras]
tools = ["crewai-tools"]
@@ -46,7 +49,7 @@ mkdocs-material = { extras = ["imaging"], version = "^9.5.7" }
mkdocs-material-extensions = "^1.3.1"
pillow = "^10.2.0"
cairosvg = "^2.7.1"
crewai-tools = "^0.8.3"
crewai-tools = "^0.12.1"
[tool.poetry.group.test.dependencies]
pytest = "^8.0.0"

View File

@@ -1,7 +1,17 @@
import warnings
from crewai.agent import Agent
from crewai.crew import Crew
from crewai.pipeline import Pipeline
from crewai.process import Process
from crewai.routers import Router
from crewai.task import Task
__all__ = ["Agent", "Crew", "Process", "Task", "Pipeline"]
warnings.filterwarnings(
"ignore",
message="Pydantic serializer warnings:",
category=UserWarning,
module="pydantic.main",
)
__all__ = ["Agent", "Crew", "Process", "Task", "Pipeline", "Router"]

View File

@@ -1,23 +1,17 @@
import os
from inspect import signature
from typing import Any, List, Optional, Tuple
from langchain.agents.agent import RunnableAgent
from langchain.agents.tools import BaseTool
from langchain.agents.tools import tool as LangChainTool
from langchain_core.agents import AgentAction
from langchain_core.callbacks import BaseCallbackHandler
from langchain_openai import ChatOpenAI
from typing import Any, List, Optional
from pydantic import Field, InstanceOf, PrivateAttr, model_validator
from crewai.agents import CacheHandler, CrewAgentExecutor, CrewAgentParser
from crewai.agents import CacheHandler
from crewai.utilities import Converter, Prompts
from crewai.tools.agent_tools import AgentTools
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.tools.agent_tools import AgentTools
from crewai.utilities import Converter, Prompts
from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE
from crewai.utilities.token_counter_callback import TokenCalcHandler
from crewai.utilities.training_handler import CrewTrainingHandler
from crewai.utilities.token_counter_callback import TokenCalcHandler
def mock_agent_ops_provider():
@@ -34,7 +28,6 @@ agentops = None
if os.environ.get("AGENTOPS_API_KEY"):
try:
import agentops # type: ignore # Name "agentops" already defined on line 21
from agentops import track_agent
except ImportError:
track_agent = mock_agent_ops_provider()
@@ -64,7 +57,6 @@ class Agent(BaseAgent):
allow_delegation: Whether the agent is allowed to delegate tasks to other agents.
tools: Tools at agents disposal
step_callback: Callback to be executed after each step of the agent execution.
callbacks: A list of callback functions from the langchain library that are triggered during the agent's execution process
"""
_times_executed: int = PrivateAttr(default=0)
@@ -81,18 +73,20 @@ class Agent(BaseAgent):
default=None,
description="Callback to be executed after each step of the agent execution.",
)
use_stop_words: bool = Field(
default=True,
description="Use stop words for the agent.",
)
use_system_prompt: Optional[bool] = Field(
default=True,
description="Use system prompt for the agent.",
)
llm: Any = Field(
default_factory=lambda: ChatOpenAI(
model=os.environ.get("OPENAI_MODEL_NAME", "gpt-4o")
),
description="Language model that will run the agent.",
description="Language model that will run the agent.", default="gpt-4o"
)
function_calling_llm: Optional[Any] = Field(
description="Language model that will run the agent.", default=None
)
callbacks: Optional[List[InstanceOf[BaseCallbackHandler]]] = Field(
default=None, description="Callback to be executed"
)
system_template: Optional[str] = Field(
default=None, description="System format for the agent."
)
@@ -108,6 +102,14 @@ class Agent(BaseAgent):
allow_code_execution: Optional[bool] = Field(
default=False, description="Enable code execution for the agent."
)
respect_context_window: bool = Field(
default=True,
description="Keep messages under the context window size by summarizing content.",
)
max_iter: int = Field(
default=15,
description="Maximum number of iterations for an agent to execute a task before giving it's best answer",
)
max_retry_limit: int = Field(
default=2,
description="Maximum number of retries for an agent to execute a task when an error occurs.",
@@ -116,33 +118,21 @@ class Agent(BaseAgent):
@model_validator(mode="after")
def post_init_setup(self):
self.agent_ops_agent_name = self.role
if hasattr(self.llm, "model_name"):
self._setup_llm_callbacks()
self.llm = (
getattr(self.llm, "model_name", None)
or getattr(self.llm, "deployment_name", None)
or self.llm
)
self.function_calling_llm = (
getattr(self.function_calling_llm, "model_name", None)
or getattr(self.function_calling_llm, "deployment_name", None)
or self.function_calling_llm
)
if not self.agent_executor:
self._setup_agent_executor()
return self
def _setup_llm_callbacks(self):
token_handler = TokenCalcHandler(self.llm.model_name, self._token_process)
if not isinstance(self.llm.callbacks, list):
self.llm.callbacks = []
if not any(
isinstance(handler, TokenCalcHandler) for handler in self.llm.callbacks
):
self.llm.callbacks.append(token_handler)
if agentops and not any(
isinstance(handler, agentops.LangchainCallbackHandler)
for handler in self.llm.callbacks
):
agentops.stop_instrumenting()
self.llm.callbacks.append(agentops.LangchainCallbackHandler())
def _setup_agent_executor(self):
if not self.cache_handler:
self.cache_handler = CacheHandler()
@@ -185,15 +175,7 @@ class Agent(BaseAgent):
task_prompt += self.i18n.slice("memory").format(memory=memory)
tools = tools or self.tools or []
parsed_tools = self._parse_tools(tools)
self.create_agent_executor(tools=tools)
self.agent_executor.tools = parsed_tools
self.agent_executor.task = task
self.agent_executor.tools_description = self._render_text_description_and_args(
parsed_tools
)
self.agent_executor.tools_names = self.__tools_names(parsed_tools)
self.create_agent_executor(tools=tools, task=task)
if self.crew and self.crew._train:
task_prompt = self._training_handler(task_prompt=task_prompt)
@@ -206,6 +188,7 @@ class Agent(BaseAgent):
"input": task_prompt,
"tool_names": self.agent_executor.tools_names,
"tools": self.agent_executor.tools_description,
"ask_for_human_input": task.human_input,
}
)["output"]
except Exception as e:
@@ -226,73 +209,25 @@ class Agent(BaseAgent):
return result
def format_log_to_str(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
observation_prefix: str = "Observation: ",
llm_prefix: str = "",
) -> str:
"""Construct the scratchpad that lets the agent continue its thought process."""
thoughts = ""
for action, observation in intermediate_steps:
thoughts += action.log
thoughts += f"\n{observation_prefix}{observation}\n{llm_prefix}"
return thoughts
def create_agent_executor(self, tools=None) -> None:
def create_agent_executor(self, tools=None, task=None) -> None:
"""Create an agent executor for the agent.
Returns:
An instance of the CrewAgentExecutor class.
"""
tools = tools or self.tools or []
agent_args = {
"input": lambda x: x["input"],
"tools": lambda x: x["tools"],
"tool_names": lambda x: x["tool_names"],
"agent_scratchpad": lambda x: self.format_log_to_str(
x["intermediate_steps"]
),
}
executor_args = {
"llm": self.llm,
"i18n": self.i18n,
"crew": self.crew,
"crew_agent": self,
"tools": self._parse_tools(tools),
"verbose": self.verbose,
"original_tools": tools,
"handle_parsing_errors": True,
"max_iterations": self.max_iter,
"max_execution_time": self.max_execution_time,
"step_callback": self.step_callback,
"tools_handler": self.tools_handler,
"function_calling_llm": self.function_calling_llm,
"callbacks": self.callbacks,
"max_tokens": self.max_tokens,
}
if self._rpm_controller:
executor_args["request_within_rpm_limit"] = (
self._rpm_controller.check_or_wait
)
parsed_tools = self._parse_tools(tools)
prompt = Prompts(
i18n=self.i18n,
agent=self,
tools=tools,
i18n=self.i18n,
use_system_prompt=self.use_system_prompt,
system_template=self.system_template,
prompt_template=self.prompt_template,
response_template=self.response_template,
).task_execution()
execution_prompt = prompt.partial(
goal=self.goal,
role=self.role,
backstory=self.backstory,
)
stop_words = [self.i18n.slice("observation")]
if self.response_template:
@@ -300,11 +235,27 @@ class Agent(BaseAgent):
self.response_template.split("{{ .Response }}")[1].strip()
)
bind = self.llm.bind(stop=stop_words)
inner_agent = agent_args | execution_prompt | bind | CrewAgentParser(agent=self)
self.agent_executor = CrewAgentExecutor(
agent=RunnableAgent(runnable=inner_agent), **executor_args
llm=self.llm,
task=task,
agent=self,
crew=self.crew,
tools=parsed_tools,
prompt=prompt,
original_tools=tools,
stop_words=stop_words,
max_iter=self.max_iter,
tools_handler=self.tools_handler,
use_stop_words=self.use_stop_words,
tools_names=self.__tools_names(parsed_tools),
tools_description=self._render_text_description_and_args(parsed_tools),
step_callback=self.step_callback,
function_calling_llm=self.function_calling_llm,
respect_context_window=self.respect_context_window,
request_within_rpm_limit=self._rpm_controller.check_or_wait
if self._rpm_controller
else None,
callbacks=[TokenCalcHandler(self._token_process)],
)
def get_delegation_tools(self, agents: List[BaseAgent]):
@@ -325,7 +276,7 @@ class Agent(BaseAgent):
def get_output_converter(self, llm, text, model, instructions):
return Converter(llm=llm, text=text, model=model, instructions=instructions)
def _parse_tools(self, tools: List[Any]) -> List[LangChainTool]: # type: ignore # Function "langchain_core.tools.tool" is not valid as a type
def _parse_tools(self, tools: List[Any]) -> List[Any]: # type: ignore
"""Parse tools to be used for the task."""
tools_list = []
try:
@@ -368,7 +319,7 @@ class Agent(BaseAgent):
)
return task_prompt
def _render_text_description(self, tools: List[BaseTool]) -> str:
def _render_text_description(self, tools: List[Any]) -> str:
"""Render the tool name and description in plain text.
Output will be in the format of:
@@ -387,7 +338,7 @@ class Agent(BaseAgent):
return description
def _render_text_description_and_args(self, tools: List[BaseTool]) -> str:
def _render_text_description_and_args(self, tools: List[Any]) -> str:
"""Render the tool name, description, and args in plain text.
Output will be in the format of:

View File

@@ -1,4 +1,5 @@
from .cache.cache_handler import CacheHandler
from .executor import CrewAgentExecutor
from .parser import CrewAgentParser
from .tools_handler import ToolsHandler
__all__ = ["CacheHandler", "CrewAgentParser", "ToolsHandler"]

View File

@@ -102,7 +102,8 @@ class BaseAgent(ABC, BaseModel):
description="Maximum number of requests per minute for the agent execution to be respected.",
)
allow_delegation: bool = Field(
default=True, description="Allow delegation of tasks to agents"
default=False,
description="Enable agent to delegate and ask questions among each other.",
)
tools: Optional[List[Any]] = Field(
default_factory=list, description="Tools at agents' disposal"
@@ -224,10 +225,8 @@ class BaseAgent(ABC, BaseModel):
# Copy llm and clear callbacks
existing_llm = shallow_copy(self.llm)
existing_llm.callbacks = []
copied_data = self.model_dump(exclude=exclude)
copied_data = {k: v for k, v in copied_data.items() if v is not None}
copied_agent = type(self)(**copied_data, llm=existing_llm, tools=self.tools)
return copied_agent

View File

@@ -19,15 +19,13 @@ class CrewAgentExecutorMixin:
crew_agent: Optional["BaseAgent"]
task: Optional["Task"]
iterations: int
force_answer_max_iterations: int
have_forced_answer: bool
max_iter: int
_i18n: I18N
def _should_force_answer(self) -> bool:
"""Determine if a forced answer is required based on iteration count."""
return (
self.iterations == self.force_answer_max_iterations
) and not self.have_forced_answer
return (self.iterations >= self.max_iter) and not self.have_forced_answer
def _create_short_term_memory(self, output) -> None:
"""Create and save a short-term memory item if conditions are met."""

View File

@@ -1 +1,3 @@
from .cache_handler import CacheHandler
__all__ = ["CacheHandler"]

View File

@@ -0,0 +1,352 @@
import json
import re
from typing import Any, Dict, List, Union
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
from crewai.agents.parser import CrewAgentParser
from crewai.agents.tools_handler import ToolsHandler
from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
from crewai.utilities import I18N, Printer
from crewai.utilities.constants import TRAINING_DATA_FILE
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException,
)
from crewai.utilities.logger import Logger
from crewai.utilities.training_handler import CrewTrainingHandler
from crewai.llm import LLM
from crewai.agents.parser import (
AgentAction,
AgentFinish,
OutputParserException,
FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE,
)
class CrewAgentExecutor(CrewAgentExecutorMixin):
_logger: Logger = Logger()
def __init__(
self,
llm: Any,
task: Any,
crew: Any,
agent: Any,
prompt: dict[str, str],
max_iter: int,
tools: List[Any],
tools_names: str,
use_stop_words: bool,
stop_words: List[str],
tools_description: str,
tools_handler: ToolsHandler,
step_callback: Any = None,
original_tools: List[Any] = [],
function_calling_llm: Any = None,
respect_context_window: bool = False,
request_within_rpm_limit: Any = None,
callbacks: List[Any] = [],
):
self._i18n: I18N = I18N()
self.llm = llm
self.task = task
self.agent = agent
self.crew = crew
self.prompt = prompt
self.tools = tools
self.tools_names = tools_names
self.stop = stop_words
self.max_iter = max_iter
self.callbacks = callbacks
self._printer: Printer = Printer()
self.tools_handler = tools_handler
self.original_tools = original_tools
self.step_callback = step_callback
self.use_stop_words = use_stop_words
self.tools_description = tools_description
self.function_calling_llm = function_calling_llm
self.respect_context_window = respect_context_window
self.request_within_rpm_limit = request_within_rpm_limit
self.ask_for_human_input = False
self.messages: List[Dict[str, str]] = []
self.iterations = 0
self.have_forced_answer = False
self.name_to_tool_map = {tool.name: tool for tool in self.tools}
def invoke(self, inputs: Dict[str, str]) -> Dict[str, Any]:
if "system" in self.prompt:
system_prompt = self._format_prompt(self.prompt.get("system", ""), inputs)
user_prompt = self._format_prompt(self.prompt.get("user", ""), inputs)
self.messages.append(self._format_msg(system_prompt, role="system"))
self.messages.append(self._format_msg(user_prompt))
else:
user_prompt = self._format_prompt(self.prompt.get("prompt", ""), inputs)
self.messages.append(self._format_msg(user_prompt))
self._show_start_logs()
self.ask_for_human_input = bool(inputs.get("ask_for_human_input", False))
formatted_answer = self._invoke_loop()
if self.ask_for_human_input:
human_feedback = self._ask_human_input(formatted_answer.output)
if self.crew and self.crew._train:
self._handle_crew_training_output(formatted_answer, human_feedback)
# Making sure we only ask for it once, so disabling for the next thought loop
self.ask_for_human_input = False
self.messages.append(self._format_msg(f"Feedback: {human_feedback}"))
formatted_answer = self._invoke_loop()
return {"output": formatted_answer.output}
def _invoke_loop(self, formatted_answer=None):
try:
while not isinstance(formatted_answer, AgentFinish):
if not self.request_within_rpm_limit or self.request_within_rpm_limit():
answer = LLM(
self.llm,
stop=self.stop if self.use_stop_words else None,
callbacks=self.callbacks,
).call(self.messages)
if not self.use_stop_words:
try:
self._format_answer(answer)
except OutputParserException as e:
if (
FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE
in e.error
):
answer = answer.split("Observation:")[0].strip()
self.iterations += 1
formatted_answer = self._format_answer(answer)
if isinstance(formatted_answer, AgentAction):
action_result = self._use_tool(formatted_answer)
formatted_answer.text += f"\nObservation: {action_result}"
formatted_answer.result = action_result
self._show_logs(formatted_answer)
if self.step_callback:
self.step_callback(formatted_answer)
if self._should_force_answer():
if self.have_forced_answer:
return AgentFinish(
output=self._i18n.errors(
"force_final_answer_error"
).format(formatted_answer.text),
text=formatted_answer.text,
)
else:
formatted_answer.text += (
f'\n{self._i18n.errors("force_final_answer")}'
)
self.have_forced_answer = True
self.messages.append(
self._format_msg(formatted_answer.text, role="user")
)
except OutputParserException as e:
self.messages.append({"role": "user", "content": e.error})
return self._invoke_loop(formatted_answer)
except Exception as e:
if LLMContextLengthExceededException(str(e))._is_context_limit_error(
str(e)
):
self._handle_context_length()
return self._invoke_loop(formatted_answer)
else:
raise e
self._show_logs(formatted_answer)
return formatted_answer
def _show_start_logs(self):
if self.agent.verbose or (
hasattr(self, "crew") and getattr(self.crew, "verbose", False)
):
self._printer.print(
content=f"\033[1m\033[95m# Agent:\033[00m \033[1m\033[92m{self.agent.role}\033[00m"
)
self._printer.print(
content=f"\033[95m## Task:\033[00m \033[92m{self.task.description}\033[00m"
)
def _show_logs(self, formatted_answer: Union[AgentAction, AgentFinish]):
if self.agent.verbose or (
hasattr(self, "crew") and getattr(self.crew, "verbose", False)
):
if isinstance(formatted_answer, AgentAction):
thought = re.sub(r"\n+", "\n", formatted_answer.thought)
formatted_json = json.dumps(
json.loads(formatted_answer.tool_input),
indent=2,
ensure_ascii=False,
)
self._printer.print(
content=f"\n\n\033[1m\033[95m# Agent:\033[00m \033[1m\033[92m{self.agent.role}\033[00m"
)
if thought and thought != "":
self._printer.print(
content=f"\033[95m## Thought:\033[00m \033[92m{thought}\033[00m"
)
self._printer.print(
content=f"\033[95m## Using tool:\033[00m \033[92m{formatted_answer.tool}\033[00m"
)
self._printer.print(
content=f"\033[95m## Tool Input:\033[00m \033[92m\n{formatted_json}\033[00m"
)
self._printer.print(
content=f"\033[95m## Tool Output:\033[00m \033[92m\n{formatted_answer.result}\033[00m"
)
elif isinstance(formatted_answer, AgentFinish):
self._printer.print(
content=f"\n\n\033[1m\033[95m# Agent:\033[00m \033[1m\033[92m{self.agent.role}\033[00m"
)
self._printer.print(
content=f"\033[95m## Final Answer:\033[00m \033[92m\n{formatted_answer.output}\033[00m"
)
def _use_tool(self, agent_action: AgentAction) -> Any:
tool_usage = ToolUsage(
tools_handler=self.tools_handler,
tools=self.tools,
original_tools=self.original_tools,
tools_description=self.tools_description,
tools_names=self.tools_names,
function_calling_llm=self.function_calling_llm,
task=self.task, # type: ignore[arg-type]
agent=self.agent,
action=agent_action,
)
tool_calling = tool_usage.parse(agent_action.text)
if isinstance(tool_calling, ToolUsageErrorException):
tool_result = tool_calling.message
else:
if tool_calling.tool_name.casefold().strip() in [
name.casefold().strip() for name in self.name_to_tool_map
] or tool_calling.tool_name.casefold().replace("_", " ") in [
name.casefold().strip() for name in self.name_to_tool_map
]:
tool_result = tool_usage.use(tool_calling, agent_action.text)
else:
tool_result = self._i18n.errors("wrong_tool_name").format(
tool=tool_calling.tool_name,
tools=", ".join([tool.name.casefold() for tool in self.tools]),
)
return tool_result
def _summarize_messages(self) -> None:
llm = LLM(self.llm)
messages_groups = []
for message in self.messages:
content = message["content"]
for i in range(0, len(content), 5000):
messages_groups.append(content[i : i + 5000])
summarized_contents = []
for group in messages_groups:
summary = llm.call(
[
self._format_msg(
self._i18n.slices("summarizer_system_message"), role="system"
),
self._format_msg(
self._i18n.errors("sumamrize_instruction").format(group=group),
),
]
)
summarized_contents.append(summary)
merged_summary = " ".join(str(content) for content in summarized_contents)
self.messages = [
self._format_msg(
self._i18n.errors("summary").format(merged_summary=merged_summary)
)
]
def _handle_context_length(self) -> None:
if self.respect_context_window:
self._logger.log(
"debug",
"Context length exceeded. Summarizing content to fit the model context window.",
color="yellow",
)
self._summarize_messages()
else:
self._logger.log(
"debug",
"Context length exceeded. Consider using smaller text or RAG tools from crewai_tools.",
color="red",
)
raise SystemExit(
"Context length exceeded and user opted not to summarize. Consider using smaller text or RAG tools from crewai_tools."
)
def _handle_crew_training_output(
self, result: AgentFinish, human_feedback: str | None = None
) -> None:
"""Function to handle the process of the training data."""
agent_id = str(self.agent.id)
if (
CrewTrainingHandler(TRAINING_DATA_FILE).load()
and not self.ask_for_human_input
):
training_data = CrewTrainingHandler(TRAINING_DATA_FILE).load()
if training_data.get(agent_id):
if self.crew is not None and hasattr(self.crew, "_train_iteration"):
training_data[agent_id][self.crew._train_iteration][
"improved_output"
] = result.output
CrewTrainingHandler(TRAINING_DATA_FILE).save(training_data)
else:
self._logger.log(
"error",
"Invalid crew or missing _train_iteration attribute.",
color="red",
)
if self.ask_for_human_input and human_feedback is not None:
training_data = {
"initial_output": result.output,
"human_feedback": human_feedback,
"agent": agent_id,
"agent_role": self.agent.role,
}
if self.crew is not None and hasattr(self.crew, "_train_iteration"):
train_iteration = self.crew._train_iteration
if isinstance(train_iteration, int):
CrewTrainingHandler(TRAINING_DATA_FILE).append(
train_iteration, agent_id, training_data
)
else:
self._logger.log(
"error",
"Invalid train iteration type. Expected int.",
color="red",
)
else:
self._logger.log(
"error",
"Crew is None or does not have _train_iteration attribute.",
color="red",
)
def _format_prompt(self, prompt: str, inputs: Dict[str, str]) -> str:
prompt = prompt.replace("{input}", inputs["input"])
prompt = prompt.replace("{tool_names}", inputs["tool_names"])
prompt = prompt.replace("{tools}", inputs["tools"])
return prompt
def _format_answer(self, answer: str) -> Union[AgentAction, AgentFinish]:
return CrewAgentParser(agent=self.agent).parse(answer)
def _format_msg(self, prompt: str, role: str = "user") -> Dict[str, str]:
return {"role": role, "content": prompt}

View File

@@ -1,397 +0,0 @@
import threading
import time
from typing import Any, Dict, Iterator, List, Literal, Optional, Tuple, Union
import click
from langchain.agents import AgentExecutor
from langchain.agents.agent import ExceptionTool
from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_core.agents import AgentAction, AgentFinish, AgentStep
from langchain_core.exceptions import OutputParserException
from langchain_core.tools import BaseTool
from langchain_core.utils.input import get_color_mapping
from pydantic import InstanceOf
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
from crewai.agents.tools_handler import ToolsHandler
from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
from crewai.utilities import I18N
from crewai.utilities.constants import TRAINING_DATA_FILE
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException,
)
from crewai.utilities.logger import Logger
from crewai.utilities.training_handler import CrewTrainingHandler
class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
_i18n: I18N = I18N()
should_ask_for_human_input: bool = False
llm: Any = None
iterations: int = 0
task: Any = None
tools_description: str = ""
tools_names: str = ""
original_tools: List[Any] = []
crew_agent: Any = None
crew: Any = None
function_calling_llm: Any = None
request_within_rpm_limit: Any = None
tools_handler: Optional[InstanceOf[ToolsHandler]] = None
max_iterations: Optional[int] = 15
have_forced_answer: bool = False
force_answer_max_iterations: Optional[int] = None # type: ignore # Incompatible types in assignment (expression has type "int | None", base class "CrewAgentExecutorMixin" defined the type as "int")
step_callback: Optional[Any] = None
system_template: Optional[str] = None
prompt_template: Optional[str] = None
response_template: Optional[str] = None
_logger: Logger = Logger()
_fit_context_window_strategy: Optional[Literal["summarize"]] = "summarize"
def _call(
self,
inputs: Dict[str, str],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
"""Run text through and get agent response."""
# Construct a mapping of tool name to tool for easy lookup
name_to_tool_map = {tool.name: tool for tool in self.tools}
# We construct a mapping from each tool to a color, used for logging.
color_mapping = get_color_mapping(
[tool.name.casefold() for tool in self.tools],
excluded_colors=["green", "red"],
)
intermediate_steps: List[Tuple[AgentAction, str]] = []
# Allowing human input given task setting
if self.task and self.task.human_input:
self.should_ask_for_human_input = True
# Let's start tracking the number of iterations and time elapsed
self.iterations = 0
time_elapsed = 0.0
start_time = time.time()
# We now enter the agent loop (until it returns something).
while self._should_continue(self.iterations, time_elapsed):
if not self.request_within_rpm_limit or self.request_within_rpm_limit():
next_step_output = self._take_next_step(
name_to_tool_map,
color_mapping,
inputs,
intermediate_steps,
run_manager=run_manager,
)
if self.step_callback:
self.step_callback(next_step_output)
if isinstance(next_step_output, AgentFinish):
# Creating long term memory
create_long_term_memory = threading.Thread(
target=self._create_long_term_memory, args=(next_step_output,)
)
create_long_term_memory.start()
return self._return(
next_step_output, intermediate_steps, run_manager=run_manager
)
intermediate_steps.extend(next_step_output)
if len(next_step_output) == 1:
next_step_action = next_step_output[0]
# See if tool should return directly
tool_return = self._get_tool_return(next_step_action)
if tool_return is not None:
return self._return(
tool_return, intermediate_steps, run_manager=run_manager
)
self.iterations += 1
time_elapsed = time.time() - start_time
output = self.agent.return_stopped_response(
self.early_stopping_method, intermediate_steps, **inputs
)
return self._return(output, intermediate_steps, run_manager=run_manager)
def _iter_next_step(
self,
name_to_tool_map: Dict[str, BaseTool],
color_mapping: Dict[str, str],
inputs: Dict[str, str],
intermediate_steps: List[Tuple[AgentAction, str]],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Iterator[Union[AgentFinish, AgentAction, AgentStep]]:
"""Take a single step in the thought-action-observation loop.
Override this to take control of how the agent makes and acts on choices.
"""
try:
if self._should_force_answer():
error = self._i18n.errors("force_final_answer")
output = AgentAction("_Exception", error, error)
self.have_forced_answer = True
yield AgentStep(action=output, observation=error)
return
intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
# Call the LLM to see what to do.
output = self.agent.plan(
intermediate_steps,
callbacks=run_manager.get_child() if run_manager else None,
**inputs,
)
except OutputParserException as e:
if isinstance(self.handle_parsing_errors, bool):
raise_error = not self.handle_parsing_errors
else:
raise_error = False
if raise_error:
raise ValueError(
"An output parsing error occurred. "
"In order to pass this error back to the agent and have it try "
"again, pass `handle_parsing_errors=True` to the AgentExecutor. "
f"This is the error: {str(e)}"
)
str(e)
if isinstance(self.handle_parsing_errors, bool):
if e.send_to_llm:
observation = f"\n{str(e.observation)}"
str(e.llm_output)
else:
observation = ""
elif isinstance(self.handle_parsing_errors, str):
observation = f"\n{self.handle_parsing_errors}"
elif callable(self.handle_parsing_errors):
observation = f"\n{self.handle_parsing_errors(e)}"
else:
raise ValueError("Got unexpected type of `handle_parsing_errors`")
output = AgentAction("_Exception", observation, "")
if run_manager:
run_manager.on_agent_action(output, color="green")
tool_run_kwargs = self.agent.tool_run_logging_kwargs()
observation = ExceptionTool().run(
output.tool_input,
verbose=False,
color=None,
callbacks=run_manager.get_child() if run_manager else None,
**tool_run_kwargs,
)
if self._should_force_answer():
error = self._i18n.errors("force_final_answer")
output = AgentAction("_Exception", error, error)
yield AgentStep(action=output, observation=error)
return
yield AgentStep(action=output, observation=observation)
return
except Exception as e:
if LLMContextLengthExceededException(str(e))._is_context_limit_error(
str(e)
):
output = self._handle_context_length_error(
intermediate_steps, run_manager, inputs
)
if isinstance(output, AgentFinish):
yield output
elif isinstance(output, list):
for step in output:
yield step
return
raise e
# If the tool chosen is the finishing tool, then we end and return.
if isinstance(output, AgentFinish):
if self.should_ask_for_human_input:
human_feedback = self._ask_human_input(output.return_values["output"])
if self.crew and self.crew._train:
self._handle_crew_training_output(output, human_feedback)
# Making sure we only ask for it once, so disabling for the next thought loop
self.should_ask_for_human_input = False
action = AgentAction(
tool="Human Input", tool_input=human_feedback, log=output.log
)
yield AgentStep(
action=action,
observation=self._i18n.slice("human_feedback").format(
human_feedback=human_feedback
),
)
return
else:
if self.crew and self.crew._train:
self._handle_crew_training_output(output)
yield output
return
self._create_short_term_memory(output)
actions: List[AgentAction]
actions = [output] if isinstance(output, AgentAction) else output
yield from actions
for agent_action in actions:
if run_manager:
run_manager.on_agent_action(agent_action, color="green")
tool_usage = ToolUsage(
tools_handler=self.tools_handler, # type: ignore # Argument "tools_handler" to "ToolUsage" has incompatible type "ToolsHandler | None"; expected "ToolsHandler"
tools=self.tools, # type: ignore # Argument "tools" to "ToolUsage" has incompatible type "Sequence[BaseTool]"; expected "list[BaseTool]"
original_tools=self.original_tools,
tools_description=self.tools_description,
tools_names=self.tools_names,
function_calling_llm=self.function_calling_llm,
task=self.task,
agent=self.crew_agent,
action=agent_action,
)
tool_calling = tool_usage.parse(agent_action.log)
if isinstance(tool_calling, ToolUsageErrorException):
observation = tool_calling.message
else:
if tool_calling.tool_name.casefold().strip() in [
name.casefold().strip() for name in name_to_tool_map
] or tool_calling.tool_name.casefold().replace("_", " ") in [
name.casefold().strip() for name in name_to_tool_map
]:
observation = tool_usage.use(tool_calling, agent_action.log)
else:
observation = self._i18n.errors("wrong_tool_name").format(
tool=tool_calling.tool_name,
tools=", ".join([tool.name.casefold() for tool in self.tools]),
)
yield AgentStep(action=agent_action, observation=observation)
def _handle_crew_training_output(
self, output: AgentFinish, human_feedback: str | None = None
) -> None:
"""Function to handle the process of the training data."""
agent_id = str(self.crew_agent.id)
if (
CrewTrainingHandler(TRAINING_DATA_FILE).load()
and not self.should_ask_for_human_input
):
training_data = CrewTrainingHandler(TRAINING_DATA_FILE).load()
if training_data.get(agent_id):
training_data[agent_id][self.crew._train_iteration][
"improved_output"
] = output.return_values["output"]
CrewTrainingHandler(TRAINING_DATA_FILE).save(training_data)
if self.should_ask_for_human_input and human_feedback is not None:
training_data = {
"initial_output": output.return_values["output"],
"human_feedback": human_feedback,
"agent": agent_id,
"agent_role": self.crew_agent.role,
}
CrewTrainingHandler(TRAINING_DATA_FILE).append(
self.crew._train_iteration, agent_id, training_data
)
def _handle_context_length(
self, intermediate_steps: List[Tuple[AgentAction, str]]
) -> List[Tuple[AgentAction, str]]:
text = intermediate_steps[0][1]
original_action = intermediate_steps[0][0]
text_splitter = RecursiveCharacterTextSplitter(
separators=["\n\n", "\n"],
chunk_size=8000,
chunk_overlap=500,
)
if self._fit_context_window_strategy == "summarize":
docs = text_splitter.create_documents([text])
self._logger.log(
"debug",
"Summarizing Content, it is recommended to use a RAG tool",
color="bold_blue",
)
summarize_chain = load_summarize_chain(
self.llm, chain_type="map_reduce", verbose=True
)
summarized_docs = []
for doc in docs:
summary = summarize_chain.invoke(
{"input_documents": [doc]}, return_only_outputs=True
)
summarized_docs.append(summary["output_text"])
formatted_results = "\n\n".join(summarized_docs)
summary_step = AgentStep(
action=AgentAction(
tool=original_action.tool,
tool_input=original_action.tool_input,
log=original_action.log,
),
observation=formatted_results,
)
summary_tuple = (summary_step.action, summary_step.observation)
return [summary_tuple]
return intermediate_steps
def _handle_context_length_error(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
run_manager: Optional[CallbackManagerForChainRun],
inputs: Dict[str, str],
) -> Union[AgentFinish, List[AgentStep]]:
self._logger.log(
"debug",
"Context length exceeded. Asking user if they want to use summarize prompt to fit, this will reduce context length.",
color="yellow",
)
user_choice = click.confirm(
"Context length exceeded. Do you want to summarize the text to fit models context window?"
)
if user_choice:
self._logger.log(
"debug",
"Context length exceeded. Using summarize prompt to fit, this will reduce context length.",
color="bold_blue",
)
intermediate_steps = self._handle_context_length(intermediate_steps)
output = self.agent.plan(
intermediate_steps,
callbacks=run_manager.get_child() if run_manager else None,
**inputs,
)
if isinstance(output, AgentFinish):
return output
elif isinstance(output, AgentAction):
return [AgentStep(action=output, observation=None)]
else:
return [AgentStep(action=action, observation=None) for action in output]
else:
self._logger.log(
"debug",
"Context length exceeded. Consider using smaller text or RAG tools from crewai_tools.",
color="red",
)
raise SystemExit(
"Context length exceeded and user opted not to summarize. Consider using smaller text or RAG tools from crewai_tools."
)

View File

@@ -1,10 +1,6 @@
import re
from typing import Any, Union
from json_repair import repair_json
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.exceptions import OutputParserException
from crewai.utilities import I18N
@@ -14,7 +10,39 @@ MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE = "I did it wrong. Invalid Forma
FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE = "I did it wrong. Tried to both perform Action and give a Final Answer at the same time, I must do one or the other"
class CrewAgentParser(ReActSingleInputOutputParser):
class AgentAction:
thought: str
tool: str
tool_input: str
text: str
result: str
def __init__(self, thought: str, tool: str, tool_input: str, text: str):
self.thought = thought
self.tool = tool
self.tool_input = tool_input
self.text = text
class AgentFinish:
thought: str
output: str
text: str
def __init__(self, thought: str, output: str, text: str):
self.thought = thought
self.output = output
self.text = text
class OutputParserException(Exception):
error: str
def __init__(self, error: str):
self.error = error
class CrewAgentParser:
"""Parses ReAct-style LLM calls that have a single tool input.
Expects output to be in one of two formats.
@@ -38,7 +66,11 @@ class CrewAgentParser(ReActSingleInputOutputParser):
_i18n: I18N = I18N()
agent: Any = None
def __init__(self, agent: Any):
self.agent = agent
def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
thought = self._extract_thought(text)
includes_answer = FINAL_ANSWER_ACTION in text
regex = (
r"Action\s*\d*\s*:[\s]*(.*?)[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
@@ -47,7 +79,7 @@ class CrewAgentParser(ReActSingleInputOutputParser):
if action_match:
if includes_answer:
raise OutputParserException(
f"{FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE}: {text}"
f"{FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE}"
)
action = action_match.group(1)
clean_action = self._clean_action(action)
@@ -57,30 +89,23 @@ class CrewAgentParser(ReActSingleInputOutputParser):
tool_input = action_input.strip(" ").strip('"')
safe_tool_input = self._safe_repair_json(tool_input)
return AgentAction(clean_action, safe_tool_input, text)
return AgentAction(thought, clean_action, safe_tool_input, text)
elif includes_answer:
return AgentFinish(
{"output": text.split(FINAL_ANSWER_ACTION)[-1].strip()}, text
)
final_answer = text.split(FINAL_ANSWER_ACTION)[-1].strip()
return AgentFinish(thought, final_answer, text)
if not re.search(r"Action\s*\d*\s*:[\s]*(.*?)", text, re.DOTALL):
self.agent.increment_formatting_errors()
raise OutputParserException(
f"Could not parse LLM output: `{text}`",
observation=f"{MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE}\n{self._i18n.slice('final_answer_format')}",
llm_output=text,
send_to_llm=True,
f"{MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE}\n{self._i18n.slice('final_answer_format')}",
)
elif not re.search(
r"[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)", text, re.DOTALL
):
self.agent.increment_formatting_errors()
raise OutputParserException(
f"Could not parse LLM output: `{text}`",
observation=MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE,
llm_output=text,
send_to_llm=True,
MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE,
)
else:
format = self._i18n.slice("format_without_tools")
@@ -88,11 +113,15 @@ class CrewAgentParser(ReActSingleInputOutputParser):
self.agent.increment_formatting_errors()
raise OutputParserException(
error,
observation=error,
llm_output=text,
send_to_llm=True,
)
def _extract_thought(self, text: str) -> str:
regex = r"(.*?)(?:\n\nAction|\n\nFinal Answer)"
thought_match = re.search(regex, text, re.DOTALL)
if thought_match:
return thought_match.group(1).strip()
return ""
def _clean_action(self, text: str) -> str:
"""Clean action string by removing non-essential formatting characters."""
return re.sub(r"^\s*\*+\s*|\s*\*+\s*$", "", text).strip()

View File

@@ -0,0 +1,3 @@
from .main import AuthenticationCommand
__all__ = ["AuthenticationCommand"]

View File

@@ -0,0 +1,4 @@
ALGORITHMS = ["RS256"]
AUTH0_DOMAIN = "crewai.us.auth0.com"
AUTH0_CLIENT_ID = "DEVC5Fw6NlRoSzmDCcOhVq85EfLBjKa8"
AUTH0_AUDIENCE = "https://crewai.us.auth0.com/api/v2/"

View File

@@ -0,0 +1,75 @@
import time
import webbrowser
from typing import Any, Dict
import requests
from rich.console import Console
from .constants import AUTH0_AUDIENCE, AUTH0_CLIENT_ID, AUTH0_DOMAIN
from .utils import TokenManager, validate_token
console = Console()
class AuthenticationCommand:
DEVICE_CODE_URL = f"https://{AUTH0_DOMAIN}/oauth/device/code"
TOKEN_URL = f"https://{AUTH0_DOMAIN}/oauth/token"
def __init__(self):
self.token_manager = TokenManager()
def signup(self) -> None:
"""Sign up to CrewAI+"""
console.print("Signing Up to CrewAI+ \n", style="bold blue")
device_code_data = self._get_device_code()
self._display_auth_instructions(device_code_data)
return self._poll_for_token(device_code_data)
def _get_device_code(self) -> Dict[str, Any]:
"""Get the device code to authenticate the user."""
device_code_payload = {
"client_id": AUTH0_CLIENT_ID,
"scope": "openid",
"audience": AUTH0_AUDIENCE,
}
response = requests.post(url=self.DEVICE_CODE_URL, data=device_code_payload)
response.raise_for_status()
return response.json()
def _display_auth_instructions(self, device_code_data: Dict[str, str]) -> None:
"""Display the authentication instructions to the user."""
console.print("1. Navigate to: ", device_code_data["verification_uri_complete"])
console.print("2. Enter the following code: ", device_code_data["user_code"])
webbrowser.open(device_code_data["verification_uri_complete"])
def _poll_for_token(self, device_code_data: Dict[str, Any]) -> None:
"""Poll the server for the token."""
token_payload = {
"grant_type": "urn:ietf:params:oauth:grant-type:device_code",
"device_code": device_code_data["device_code"],
"client_id": AUTH0_CLIENT_ID,
}
attempts = 0
while True and attempts < 5:
response = requests.post(self.TOKEN_URL, data=token_payload)
token_data = response.json()
if response.status_code == 200:
validate_token(token_data["id_token"])
expires_in = 360000 # Token expiration time in seconds
self.token_manager.save_tokens(token_data["access_token"], expires_in)
console.print("\nWelcome to CrewAI+ !!", style="green")
return
if token_data["error"] not in ("authorization_pending", "slow_down"):
raise requests.HTTPError(token_data["error_description"])
time.sleep(device_code_data["interval"])
attempts += 1
console.print(
"Timeout: Failed to get the token. Please try again.", style="bold red"
)

View File

@@ -0,0 +1,144 @@
import json
import os
import sys
from datetime import datetime, timedelta
from pathlib import Path
from typing import Optional
from auth0.authentication.token_verifier import (
AsymmetricSignatureVerifier,
TokenVerifier,
)
from cryptography.fernet import Fernet
from .constants import AUTH0_CLIENT_ID, AUTH0_DOMAIN
def validate_token(id_token: str) -> None:
"""
Verify the token and its precedence
:param id_token:
"""
jwks_url = f"https://{AUTH0_DOMAIN}/.well-known/jwks.json"
issuer = f"https://{AUTH0_DOMAIN}/"
signature_verifier = AsymmetricSignatureVerifier(jwks_url)
token_verifier = TokenVerifier(
signature_verifier=signature_verifier, issuer=issuer, audience=AUTH0_CLIENT_ID
)
token_verifier.verify(id_token)
class TokenManager:
def __init__(self, file_path: str = "tokens.enc") -> None:
"""
Initialize the TokenManager class.
:param file_path: The file path to store the encrypted tokens. Default is "tokens.enc".
"""
self.file_path = file_path
self.key = self._get_or_create_key()
self.fernet = Fernet(self.key)
def _get_or_create_key(self) -> bytes:
"""
Get or create the encryption key.
:return: The encryption key.
"""
key_filename = "secret.key"
key = self.read_secure_file(key_filename)
if key is not None:
return key
new_key = Fernet.generate_key()
self.save_secure_file(key_filename, new_key)
return new_key
def save_tokens(self, access_token: str, expires_in: int) -> None:
"""
Save the access token and its expiration time.
:param access_token: The access token to save.
:param expires_in: The expiration time of the access token in seconds.
"""
expiration_time = datetime.now() + timedelta(seconds=expires_in)
data = {
"access_token": access_token,
"expiration": expiration_time.isoformat(),
}
encrypted_data = self.fernet.encrypt(json.dumps(data).encode())
self.save_secure_file(self.file_path, encrypted_data)
def get_token(self) -> Optional[str]:
"""
Get the access token if it is valid and not expired.
:return: The access token if valid and not expired, otherwise None.
"""
encrypted_data = self.read_secure_file(self.file_path)
decrypted_data = self.fernet.decrypt(encrypted_data) # type: ignore
data = json.loads(decrypted_data)
expiration = datetime.fromisoformat(data["expiration"])
if expiration <= datetime.now():
return None
return data["access_token"]
def get_secure_storage_path(self) -> Path:
"""
Get the secure storage path based on the operating system.
:return: The secure storage path.
"""
if sys.platform == "win32":
# Windows: Use %LOCALAPPDATA%
base_path = os.environ.get("LOCALAPPDATA")
elif sys.platform == "darwin":
# macOS: Use ~/Library/Application Support
base_path = os.path.expanduser("~/Library/Application Support")
else:
# Linux and other Unix-like: Use ~/.local/share
base_path = os.path.expanduser("~/.local/share")
app_name = "crewai/credentials"
storage_path = Path(base_path) / app_name
storage_path.mkdir(parents=True, exist_ok=True)
return storage_path
def save_secure_file(self, filename: str, content: bytes) -> None:
"""
Save the content to a secure file.
:param filename: The name of the file.
:param content: The content to save.
"""
storage_path = self.get_secure_storage_path()
file_path = storage_path / filename
with open(file_path, "wb") as f:
f.write(content)
# Set appropriate permissions (read/write for owner only)
os.chmod(file_path, 0o600)
def read_secure_file(self, filename: str) -> Optional[bytes]:
"""
Read the content of a secure file.
:param filename: The name of the file.
:return: The content of the file if it exists, otherwise None.
"""
storage_path = self.get_secure_storage_path()
file_path = storage_path / filename
if not file_path.exists():
return None
with open(file_path, "rb") as f:
return f.read()

View File

@@ -1,3 +1,5 @@
from typing import Optional
import click
import pkg_resources
@@ -7,6 +9,8 @@ from crewai.memory.storage.kickoff_task_outputs_storage import (
KickoffTaskOutputsSQLiteStorage,
)
from .authentication.main import AuthenticationCommand
from .deploy.main import DeployCommand
from .evaluate_crew import evaluate_crew
from .install_crew import install_crew
from .replay_from_task import replay_task_command
@@ -179,5 +183,71 @@ def run():
run_crew()
@crewai.command()
def signup():
"""Sign Up/Login to CrewAI+."""
AuthenticationCommand().signup()
@crewai.command()
def login():
"""Sign Up/Login to CrewAI+."""
AuthenticationCommand().signup()
# DEPLOY CREWAI+ COMMANDS
@crewai.group()
def deploy():
"""Deploy the Crew CLI group."""
pass
@deploy.command(name="create")
@click.option("-y", "--yes", is_flag=True, help="Skip the confirmation prompt")
def deploy_create(yes: bool):
"""Create a Crew deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.create_crew(yes)
@deploy.command(name="list")
def deploy_list():
"""List all deployments."""
deploy_cmd = DeployCommand()
deploy_cmd.list_crews()
@deploy.command(name="push")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_push(uuid: Optional[str]):
"""Deploy the Crew."""
deploy_cmd = DeployCommand()
deploy_cmd.deploy(uuid=uuid)
@deploy.command(name="status")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deply_status(uuid: Optional[str]):
"""Get the status of a deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.get_crew_status(uuid=uuid)
@deploy.command(name="logs")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_logs(uuid: Optional[str]):
"""Get the logs of a deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.get_crew_logs(uuid=uuid)
@deploy.command(name="remove")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_remove(uuid: Optional[str]):
"""Remove a deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.remove_crew(uuid=uuid)
if __name__ == "__main__":
crewai()

View File

View File

@@ -0,0 +1,66 @@
from os import getenv
import requests
from crewai.cli.deploy.utils import get_crewai_version
class CrewAPI:
"""
CrewAPI class to interact with the crewAI+ API.
"""
def __init__(self, api_key: str) -> None:
self.api_key = api_key
self.headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
"User-Agent": f"CrewAI-CLI/{get_crewai_version()}",
}
self.base_url = getenv(
"CREWAI_BASE_URL", "https://crewai.com/crewai_plus/api/v1/crews"
)
def _make_request(self, method: str, endpoint: str, **kwargs) -> requests.Response:
url = f"{self.base_url}/{endpoint}"
return requests.request(method, url, headers=self.headers, **kwargs)
# Deploy
def deploy_by_name(self, project_name: str) -> requests.Response:
return self._make_request("POST", f"by-name/{project_name}/deploy")
def deploy_by_uuid(self, uuid: str) -> requests.Response:
return self._make_request("POST", f"{uuid}/deploy")
# Status
def status_by_name(self, project_name: str) -> requests.Response:
return self._make_request("GET", f"by-name/{project_name}/status")
def status_by_uuid(self, uuid: str) -> requests.Response:
return self._make_request("GET", f"{uuid}/status")
# Logs
def logs_by_name(
self, project_name: str, log_type: str = "deployment"
) -> requests.Response:
return self._make_request("GET", f"by-name/{project_name}/logs/{log_type}")
def logs_by_uuid(
self, uuid: str, log_type: str = "deployment"
) -> requests.Response:
return self._make_request("GET", f"{uuid}/logs/{log_type}")
# Delete
def delete_by_name(self, project_name: str) -> requests.Response:
return self._make_request("DELETE", f"by-name/{project_name}")
def delete_by_uuid(self, uuid: str) -> requests.Response:
return self._make_request("DELETE", f"{uuid}")
# List
def list_crews(self) -> requests.Response:
return self._make_request("GET", "")
# Create
def create_crew(self, payload) -> requests.Response:
return self._make_request("POST", "", json=payload)

View File

@@ -0,0 +1,318 @@
from typing import Any, Dict, List, Optional
from rich.console import Console
from crewai.telemetry import Telemetry
from .api import CrewAPI
from .utils import (
fetch_and_json_env_file,
get_auth_token,
get_git_remote_url,
get_project_name,
)
console = Console()
class DeployCommand:
"""
A class to handle deployment-related operations for CrewAI projects.
"""
def __init__(self):
"""
Initialize the DeployCommand with project name and API client.
"""
try:
self._telemetry = Telemetry()
self._telemetry.set_tracer()
access_token = get_auth_token()
except Exception:
self._deploy_signup_error_span = self._telemetry.deploy_signup_error_span()
console.print(
"Please sign up/login to CrewAI+ before using the CLI.",
style="bold red",
)
console.print("Run 'crewai signup' to sign up/login.", style="bold green")
raise SystemExit
self.project_name = get_project_name()
if self.project_name is None:
console.print(
"No project name found. Please ensure your project has a valid pyproject.toml file.",
style="bold red",
)
raise SystemExit
self.client = CrewAPI(api_key=access_token)
def _handle_error(self, json_response: Dict[str, Any]) -> None:
"""
Handle and display error messages from API responses.
Args:
json_response (Dict[str, Any]): The JSON response containing error information.
"""
error = json_response.get("error", "Unknown error")
message = json_response.get("message", "No message provided")
console.print(f"Error: {error}", style="bold red")
console.print(f"Message: {message}", style="bold red")
def _standard_no_param_error_message(self) -> None:
"""
Display a standard error message when no UUID or project name is available.
"""
console.print(
"No UUID provided, project pyproject.toml not found or with error.",
style="bold red",
)
def _display_deployment_info(self, json_response: Dict[str, Any]) -> None:
"""
Display deployment information.
Args:
json_response (Dict[str, Any]): The deployment information to display.
"""
console.print("Deploying the crew...\n", style="bold blue")
for key, value in json_response.items():
console.print(f"{key.title()}: [green]{value}[/green]")
console.print("\nTo check the status of the deployment, run:")
console.print("crewai deploy status")
console.print(" or")
console.print(f"crewai deploy status --uuid \"{json_response['uuid']}\"")
def _display_logs(self, log_messages: List[Dict[str, Any]]) -> None:
"""
Display log messages.
Args:
log_messages (List[Dict[str, Any]]): The log messages to display.
"""
for log_message in log_messages:
console.print(
f"{log_message['timestamp']} - {log_message['level']}: {log_message['message']}"
)
def deploy(self, uuid: Optional[str] = None) -> None:
"""
Deploy a crew using either UUID or project name.
Args:
uuid (Optional[str]): The UUID of the crew to deploy.
"""
self._start_deployment_span = self._telemetry.start_deployment_span(uuid)
console.print("Starting deployment...", style="bold blue")
if uuid:
response = self.client.deploy_by_uuid(uuid)
elif self.project_name:
response = self.client.deploy_by_name(self.project_name)
else:
self._standard_no_param_error_message()
return
json_response = response.json()
if response.status_code == 200:
self._display_deployment_info(json_response)
else:
self._handle_error(json_response)
def create_crew(self, confirm: bool = False) -> None:
"""
Create a new crew deployment.
"""
self._create_crew_deployment_span = (
self._telemetry.create_crew_deployment_span()
)
console.print("Creating deployment...", style="bold blue")
env_vars = fetch_and_json_env_file()
remote_repo_url = get_git_remote_url()
if remote_repo_url is None:
console.print("No remote repository URL found.", style="bold red")
console.print(
"Please ensure your project has a valid remote repository.",
style="yellow",
)
return
self._confirm_input(env_vars, remote_repo_url, confirm)
payload = self._create_payload(env_vars, remote_repo_url)
response = self.client.create_crew(payload)
if response.status_code == 201:
self._display_creation_success(response.json())
else:
self._handle_error(response.json())
def _confirm_input(
self, env_vars: Dict[str, str], remote_repo_url: str, confirm: bool
) -> None:
"""
Confirm input parameters with the user.
Args:
env_vars (Dict[str, str]): Environment variables.
remote_repo_url (str): Remote repository URL.
confirm (bool): Whether to confirm input.
"""
if not confirm:
input(f"Press Enter to continue with the following Env vars: {env_vars}")
input(
f"Press Enter to continue with the following remote repository: {remote_repo_url}\n"
)
def _create_payload(
self,
env_vars: Dict[str, str],
remote_repo_url: str,
) -> Dict[str, Any]:
"""
Create the payload for crew creation.
Args:
remote_repo_url (str): Remote repository URL.
env_vars (Dict[str, str]): Environment variables.
Returns:
Dict[str, Any]: The payload for crew creation.
"""
return {
"deploy": {
"name": self.project_name,
"repo_clone_url": remote_repo_url,
"env": env_vars,
}
}
def _display_creation_success(self, json_response: Dict[str, Any]) -> None:
"""
Display success message after crew creation.
Args:
json_response (Dict[str, Any]): The response containing crew information.
"""
console.print("Deployment created successfully!\n", style="bold green")
console.print(
f"Name: {self.project_name} ({json_response['uuid']})", style="bold green"
)
console.print(f"Status: {json_response['status']}", style="bold green")
console.print("\nTo (re)deploy the crew, run:")
console.print("crewai deploy push")
console.print(" or")
console.print(f"crewai deploy push --uuid {json_response['uuid']}")
def list_crews(self) -> None:
"""
List all available crews.
"""
console.print("Listing all Crews\n", style="bold blue")
response = self.client.list_crews()
json_response = response.json()
if response.status_code == 200:
self._display_crews(json_response)
else:
self._display_no_crews_message()
def _display_crews(self, crews_data: List[Dict[str, Any]]) -> None:
"""
Display the list of crews.
Args:
crews_data (List[Dict[str, Any]]): List of crew data to display.
"""
for crew_data in crews_data:
console.print(
f"- {crew_data['name']} ({crew_data['uuid']}) [blue]{crew_data['status']}[/blue]"
)
def _display_no_crews_message(self) -> None:
"""
Display a message when no crews are available.
"""
console.print("You don't have any Crews yet. Let's create one!", style="yellow")
console.print(" crewai create crew <crew_name>", style="green")
def get_crew_status(self, uuid: Optional[str] = None) -> None:
"""
Get the status of a crew.
Args:
uuid (Optional[str]): The UUID of the crew to check.
"""
console.print("Fetching deployment status...", style="bold blue")
if uuid:
response = self.client.status_by_uuid(uuid)
elif self.project_name:
response = self.client.status_by_name(self.project_name)
else:
self._standard_no_param_error_message()
return
json_response = response.json()
if response.status_code == 200:
self._display_crew_status(json_response)
else:
self._handle_error(json_response)
def _display_crew_status(self, status_data: Dict[str, str]) -> None:
"""
Display the status of a crew.
Args:
status_data (Dict[str, str]): The status data to display.
"""
console.print(f"Name:\t {status_data['name']}")
console.print(f"Status:\t {status_data['status']}")
def get_crew_logs(self, uuid: Optional[str], log_type: str = "deployment") -> None:
"""
Get logs for a crew.
Args:
uuid (Optional[str]): The UUID of the crew to get logs for.
log_type (str): The type of logs to retrieve (default: "deployment").
"""
self._get_crew_logs_span = self._telemetry.get_crew_logs_span(uuid, log_type)
console.print(f"Fetching {log_type} logs...", style="bold blue")
if uuid:
response = self.client.logs_by_uuid(uuid, log_type)
elif self.project_name:
response = self.client.logs_by_name(self.project_name, log_type)
else:
self._standard_no_param_error_message()
return
if response.status_code == 200:
self._display_logs(response.json())
else:
self._handle_error(response.json())
def remove_crew(self, uuid: Optional[str]) -> None:
"""
Remove a crew deployment.
Args:
uuid (Optional[str]): The UUID of the crew to remove.
"""
self._remove_crew_span = self._telemetry.remove_crew_span(uuid)
console.print("Removing deployment...", style="bold blue")
if uuid:
response = self.client.delete_by_uuid(uuid)
elif self.project_name:
response = self.client.delete_by_name(self.project_name)
else:
self._standard_no_param_error_message()
return
if response.status_code == 204:
console.print(
f"Crew '{self.project_name}' removed successfully.", style="green"
)
else:
console.print(
f"Failed to remove crew '{self.project_name}'", style="bold red"
)
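A minimal sketch (not part of the diff) of the request body that _create_payload assembles for the create-crew endpoint; the project name, remote URL, and env var below are hypothetical examples.
payload = {
    "deploy": {
        "name": "my_crew",                                        # hypothetical name read from pyproject.toml
        "repo_clone_url": "https://github.com/acme/my_crew.git",  # hypothetical URL from `git remote -v`
        "env": {"OPENAI_API_KEY": "sk-..."},                      # parsed from the local .env file
    }
}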

View File

@@ -0,0 +1,155 @@
import sys
import re
import subprocess
from rich.console import Console
from ..authentication.utils import TokenManager
console = Console()
if sys.version_info >= (3, 11):
import tomllib
# Drop the simple_toml_parser when we move to python3.11
def simple_toml_parser(content):
result = {}
current_section = result
for line in content.split('\n'):
line = line.strip()
if line.startswith('[') and line.endswith(']'):
# New section
section = line[1:-1].split('.')
current_section = result
for key in section:
current_section = current_section.setdefault(key, {})
elif '=' in line:
key, value = line.split('=', 1)
key = key.strip()
value = value.strip().strip('"')
current_section[key] = value
return result
def parse_toml(content):
if sys.version_info >= (3, 11):
return tomllib.loads(content)
else:
return simple_toml_parser(content)
def get_git_remote_url() -> str | None:
"""Get the Git repository's remote URL."""
try:
# Run the git remote -v command
result = subprocess.run(
["git", "remote", "-v"], capture_output=True, text=True, check=True
)
# Get the output
output = result.stdout
# Parse the output to find the origin URL
matches = re.findall(r"origin\s+(.*?)\s+\(fetch\)", output)
if matches:
return matches[0] # Return the first match (origin URL)
else:
console.print("No origin remote found.", style="bold red")
except subprocess.CalledProcessError as e:
console.print(f"Error running trying to fetch the Git Repository: {e}", style="bold red")
except FileNotFoundError:
console.print("Git command not found. Make sure Git is installed and in your PATH.", style="bold red")
return None
def get_project_name(pyproject_path: str = "pyproject.toml") -> str | None:
"""Get the project name from the pyproject.toml file."""
try:
# Read the pyproject.toml file
with open(pyproject_path, "r") as f:
pyproject_content = parse_toml(f.read())
# Extract the project name
project_name = pyproject_content["tool"]["poetry"]["name"]
if "crewai" not in pyproject_content["tool"]["poetry"]["dependencies"]:
raise Exception("crewai is not in the dependencies.")
return project_name
except FileNotFoundError:
print(f"Error: {pyproject_path} not found.")
except KeyError:
print(f"Error: {pyproject_path} is not a valid pyproject.toml file.")
except tomllib.TOMLDecodeError if sys.version_info >= (3, 11) else Exception as e: # type: ignore
print(
f"Error: {pyproject_path} is not a valid TOML file."
if sys.version_info >= (3, 11)
else f"Error reading the pyproject.toml file: {e}"
)
except Exception as e:
print(f"Error reading the pyproject.toml file: {e}")
return None
def get_crewai_version(poetry_lock_path: str = "poetry.lock") -> str:
"""Get the version number of crewai from the poetry.lock file."""
try:
with open(poetry_lock_path, "r") as f:
lock_content = f.read()
match = re.search(
r'\[\[package\]\]\s*name\s*=\s*"crewai"\s*version\s*=\s*"([^"]+)"',
lock_content,
re.DOTALL,
)
if match:
return match.group(1)
else:
print("crewai package not found in poetry.lock")
return "no-version-found"
except FileNotFoundError:
print(f"Error: {poetry_lock_path} not found.")
except Exception as e:
print(f"Error reading the poetry.lock file: {e}")
return "no-version-found"
def fetch_and_json_env_file(env_file_path: str = ".env") -> dict:
"""Fetch the environment variables from a .env file and return them as a dictionary."""
try:
# Read the .env file
with open(env_file_path, "r") as f:
env_content = f.read()
# Parse the .env file content to a dictionary
env_dict = {}
for line in env_content.splitlines():
if line.strip() and not line.strip().startswith("#"):
key, value = line.split("=", 1)
env_dict[key.strip()] = value.strip()
return env_dict
except FileNotFoundError:
print(f"Error: {env_file_path} not found.")
except Exception as e:
print(f"Error reading the .env file: {e}")
return {}
def get_auth_token() -> str:
"""Get the authentication token."""
access_token = TokenManager().get_token()
if not access_token:
raise Exception()
return access_token
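A minimal sketch (not part of the diff) of what the simple_toml_parser fallback above returns for a small pyproject snippet on Python < 3.11: dotted section headers become nested dicts and surrounding quotes are stripped from values.
content = '[tool.poetry]\nname = "my_crew"\nversion = "0.1.0"'
print(simple_toml_parser(content))
# {'tool': {'poetry': {'name': 'my_crew', 'version': '0.1.0'}}}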

View File

@@ -10,8 +10,6 @@ from crewai.project import CrewBase, agent, crew, task
@CrewBase
class {{crew_name}}Crew():
"""{{crew_name}} crew"""
agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml'
@agent
def researcher(self) -> Agent:

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
crewai = { extras = ["tools"], version = ">=0.61.0,<1.0.0" }
[tool.poetry.scripts]

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
crewai = { extras = ["tools"], version = ">=0.61.0,<1.0.0" }
asyncio = "*"
[tool.poetry.scripts]

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
crewai = { extras = ["tools"], version = ">=0.61.0,<1.0.0" }
[tool.poetry.scripts]

View File

@@ -6,7 +6,6 @@ from concurrent.futures import Future
from hashlib import md5
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
from langchain_core.callbacks import BaseCallbackHandler
from pydantic import (
UUID4,
BaseModel,
@@ -68,7 +67,6 @@ class Crew(BaseModel):
manager_llm: The language model that will run manager agent.
manager_agent: Custom agent that will be used as manager.
memory: Whether the crew should use memory to store memories of its execution.
manager_callbacks: The callback handlers to be executed by the manager agent when hierarchical process is used
cache: Whether the crew should use a cache to store the results of the tools execution.
function_calling_llm: The language model that will run the tool calling for all the agents.
process: The process flow that the crew will follow (e.g., sequential, hierarchical).
@@ -126,10 +124,6 @@ class Crew(BaseModel):
manager_agent: Optional[BaseAgent] = Field(
description="Custom agent that will be used as manager.", default=None
)
manager_callbacks: Optional[List[InstanceOf[BaseCallbackHandler]]] = Field(
default=None,
description="A list of callback handlers to be executed by the manager agent when hierarchical process is used",
)
function_calling_llm: Optional[Any] = Field(
description="Language model that will run the agent.", default=None
)
@@ -205,6 +199,11 @@ class Crew(BaseModel):
if self.output_log_file:
self._file_handler = FileHandler(self.output_log_file)
self._rpm_controller = RPMController(max_rpm=self.max_rpm, logger=self._logger)
self.function_calling_llm = (
getattr(self.function_calling_llm, "model_name", None)
or getattr(self.function_calling_llm, "deployment_name", None)
or self.function_calling_llm
)
self._telemetry = Telemetry()
self._telemetry.set_tracer()
return self
@@ -584,9 +583,18 @@ class Crew(BaseModel):
self.manager_agent.allow_delegation = True
manager = self.manager_agent
if manager.tools is not None and len(manager.tools) > 0:
self._logger.log(
"warning", "Manager agent should not have tools", color="orange"
)
manager.tools = []
raise Exception("Manager agent should not have tools")
manager.tools = self.manager_agent.get_delegation_tools(self.agents)
else:
self.manager_llm = (
getattr(self.manager_llm, "model_name", None)
or getattr(self.manager_llm, "deployment_name", None)
or self.manager_llm
)
manager = Agent(
role=i18n.retrieve("hierarchical_manager_agent", "role"),
goal=i18n.retrieve("hierarchical_manager_agent", "goal"),
@@ -596,6 +604,7 @@ class Crew(BaseModel):
verbose=self.verbose,
)
self.manager_agent = manager
manager.crew = self
def _execute_tasks(
self,
@@ -740,9 +749,6 @@ class Crew(BaseModel):
task.tools.append(new_tool)
def _log_task_start(self, task: Task, role: str = "None"):
color = self._logging_color
self._logger.log("debug", f"== Working Agent: {role}", color=color)
self._logger.log("info", f"== Starting Task: {task.description}", color=color)
if self.output_log_file:
self._file_handler.log(agent=role, task=task.description, status="started")
@@ -765,7 +771,6 @@ class Crew(BaseModel):
def _process_task_result(self, task: Task, output: TaskOutput) -> None:
role = task.agent.role if task.agent is not None else "None"
self._logger.log("debug", f"== [{role}] Task output: {output}\n\n")
if self.output_log_file:
self._file_handler.log(agent=role, task=output, status="completed")
@@ -918,25 +923,23 @@ class Crew(BaseModel):
def calculate_usage_metrics(self) -> UsageMetrics:
"""Calculates and returns the usage metrics."""
total_usage_metrics = UsageMetrics()
for agent in self.agents:
if hasattr(agent, "_token_process"):
token_sum = agent._token_process.get_summary()
total_usage_metrics.add_usage_metrics(token_sum)
if self.manager_agent and hasattr(self.manager_agent, "_token_process"):
token_sum = self.manager_agent._token_process.get_summary()
total_usage_metrics.add_usage_metrics(token_sum)
self.usage_metrics = total_usage_metrics
return total_usage_metrics
def test(
self,
n_iterations: int,
openai_model_name: str,
openai_model_name: Optional[str] = None,
inputs: Optional[Dict[str, Any]] = None,
) -> None:
"""Test and evaluate the Crew with the given inputs for n iterations."""
"""Test and evaluate the Crew with the given inputs for n iterations concurrently using concurrent.futures."""
self._test_execution_span = self._telemetry.test_execution_span(
self, n_iterations, inputs, openai_model_name
)

View File

@@ -1 +1,3 @@
from .crew_output import CrewOutput
__all__ = ["CrewOutput"]

src/crewai/llm.py Normal file
View File

@@ -0,0 +1,20 @@
from typing import Any, Dict, List
from litellm import completion
import litellm
class LLM:
def __init__(self, model: str, stop: List[str] = [], callbacks: List[Any] = []):
self.stop = stop
self.model = model
self.callbacks = callbacks  # kept on the instance so _call_callbacks can iterate over them
litellm.callbacks = callbacks
def call(self, messages: List[Dict[str, str]]) -> Dict[str, Any]:
response = completion(
stop=self.stop, model=self.model, messages=messages, num_retries=5
)
return response["choices"][0]["message"]["content"]
def _call_callbacks(self, formatted_answer):
for callback in self.callbacks:
callback(formatted_answer)
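A minimal usage sketch (not part of the diff) for the new litellm-backed LLM wrapper, assuming provider credentials are already configured for litellm.
llm = LLM(model="gpt-4o-mini", stop=["Observation:"])
answer = llm.call([{"role": "user", "content": "Say hello in one word."}])
print(answer)  # the assistant message content extracted from litellm's completion response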

View File

@@ -1,3 +1,5 @@
from .entity.entity_memory import EntityMemory
from .long_term.long_term_memory import LongTermMemory
from .short_term.short_term_memory import ShortTermMemory
__all__ = ["EntityMemory", "LongTermMemory", "ShortTermMemory"]

View File

@@ -21,7 +21,7 @@ class Memory:
if agent:
metadata["agent"] = agent
self.storage.save(value, metadata) # type: ignore # Maybe BUG? Should be self.storage.save(key, value, metadata)
self.storage.save(value, metadata)
def search(self, query: str) -> Dict[str, Any]:
return self.storage.search(query)

View File

@@ -5,13 +5,14 @@ import os
import shutil
from typing import Any, Dict, List, Optional
from crewai.memory.storage.interface import Storage
from crewai.utilities.paths import db_storage_path
from embedchain import App
from embedchain.llm.base import BaseLlm
from embedchain.models.data_type import DataType
from embedchain.vectordb.chroma import InvalidDimensionException
from crewai.memory.storage.interface import Storage
from crewai.utilities.paths import db_storage_path
@contextlib.contextmanager
def suppress_logging(
@@ -77,12 +78,12 @@ class RAGStorage(Storage):
self.app.llm = FakeLLM()
if allow_reset:
self.app.reset()
def _sanitize_role(self, role: str) -> str:
"""
Sanitizes agent roles to ensure valid directory names.
"""
return role.replace('\n', '').replace(' ', '_').replace('/', '_')
return role.replace("\n", "").replace(" ", "_").replace("/", "_")
def save(self, value: Any, metadata: Dict[str, Any]) -> None:
self._generate_embedding(value, metadata)

View File

@@ -1,3 +1,5 @@
from crewai.pipeline.pipeline import Pipeline
from crewai.pipeline.pipeline_kickoff_result import PipelineKickoffResult
from crewai.pipeline.pipeline_output import PipelineOutput
__all__ = ["Pipeline", "PipelineKickoffResult", "PipelineOutput"]

View File

@@ -103,7 +103,8 @@ def crew(func):
for task_name in sorted_task_names:
task_instance = tasks[task_name]()
instantiated_tasks.append(task_instance)
if hasattr(task_instance, "agent"):
agent_instance = getattr(task_instance, "agent", None)
if agent_instance is not None:
agent_instance = task_instance.agent
if agent_instance.role not in agent_roles:
instantiated_agents.append(agent_instance)

View File

@@ -89,7 +89,10 @@ def CrewBase(cls):
callbacks: Dict[str, Callable],
) -> None:
if llm := agent_info.get("llm"):
self.agents_config[agent_name]["llm"] = llms[llm]()
try:
self.agents_config[agent_name]["llm"] = llms[llm]()
except KeyError:
self.agents_config[agent_name]["llm"] = llm
if tools := agent_info.get("tools"):
self.agents_config[agent_name]["tools"] = [

View File

@@ -1 +1,3 @@
from crewai.routers.router import Router
__all__ = ["Router"]

View File

@@ -6,7 +6,7 @@ import uuid
from concurrent.futures import Future
from copy import copy
from hashlib import md5
from typing import Any, Dict, List, Optional, Tuple, Type, Union
from typing import Any, Dict, List, Optional, Set, Tuple, Type, Union
from opentelemetry.trace import Span
from pydantic import (
@@ -108,6 +108,7 @@ class Task(BaseModel):
description="A converter class used to export structured output",
default=None,
)
processed_by_agents: Set[str] = Field(default_factory=set)
_telemetry: Telemetry = PrivateAttr(default_factory=Telemetry)
_execution_span: Optional[Span] = PrivateAttr(default=None)
@@ -241,6 +242,8 @@ class Task(BaseModel):
self.prompt_context = context
tools = tools or self.tools or []
self.processed_by_agents.add(agent.role)
result = agent.execute_task(
task=self,
context=context,
@@ -273,7 +276,9 @@ class Task(BaseModel):
content = (
json_output
if json_output
else pydantic_output.model_dump_json() if pydantic_output else result
else pydantic_output.model_dump_json()
if pydantic_output
else result
)
self._save_file(content)
@@ -308,8 +313,10 @@ class Task(BaseModel):
"""Increment the tools errors counter."""
self.tools_errors += 1
def increment_delegations(self) -> None:
def increment_delegations(self, agent_name: Optional[str]) -> None:
"""Increment the delegations counter."""
if agent_name:
self.processed_by_agents.add(agent_name)
self.delegations += 1
def copy(self, agents: List["BaseAgent"]) -> "Task":
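A minimal sketch (not part of the diff) of how the new processed_by_agents set fills up, assuming `task` is an already-constructed, hypothetical Task instance.
task.processed_by_agents.add("Researcher")   # done by the task itself when an agent starts executing it
task.increment_delegations("Senior Writer")  # the delegated coworker's name is now recorded as well
print(task.delegations)                      # 1
print(task.processed_by_agents)              # {'Researcher', 'Senior Writer'} (set order may vary)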

View File

@@ -1 +1,3 @@
from .telemetry import Telemetry
__all__ = ["Telemetry"]

View File

@@ -4,15 +4,28 @@ import asyncio
import json
import os
import platform
from typing import TYPE_CHECKING, Any
import warnings
from typing import TYPE_CHECKING, Any, Optional
from contextlib import contextmanager
import pkg_resources
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.trace import Span, Status, StatusCode
@contextmanager
def suppress_warnings():
with warnings.catch_warnings():
warnings.filterwarnings("ignore")
yield
with suppress_warnings():
import pkg_resources
from opentelemetry import trace # noqa: E402
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter # noqa: E402
from opentelemetry.sdk.resources import SERVICE_NAME, Resource # noqa: E402
from opentelemetry.sdk.trace import TracerProvider # noqa: E402
from opentelemetry.sdk.trace.export import BatchSpanProcessor # noqa: E402
from opentelemetry.trace import Span, Status, StatusCode # noqa: E402
if TYPE_CHECKING:
from crewai.crew import Crew
@@ -28,18 +41,6 @@ class Telemetry:
agents backstories or goals nor responses or any data that is being
processed by the agents, nor any secrets and env vars.
Data collected includes:
- Version of crewAI
- Version of Python
- General OS (e.g. number of CPUs, macOS/Windows/Linux)
- Number of agents and tasks in a crew
- Crew Process being used
- If Agents are using memory or allowing delegation
- If Tasks are being executed in parallel or sequentially
- Language model being used
- Roles of agents in a crew
- Tools names available
Users can opt-in to sharing more complete data using the `share_crew`
attribute in the Crew class.
"""
@@ -74,8 +75,9 @@ class Telemetry:
def set_tracer(self):
if self.ready and not self.trace_set:
try:
trace.set_tracer_provider(self.provider)
self.trace_set = True
with suppress_warnings():
trace.set_tracer_provider(self.provider)
self.trace_set = True
except Exception:
self.ready = False
self.trace_set = False
@@ -114,10 +116,11 @@ class Telemetry:
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"i18n": agent.i18n.prompt_file,
"llm": json.dumps(
self._safe_llm_attributes(agent.llm)
),
"function_calling_llm": agent.function_calling_llm,
"llm": agent.llm,
"delegation_enabled?": agent.allow_delegation,
"allow_code_execution?": agent.allow_code_execution,
"max_retry_limit": agent.max_retry_limit,
"tools_names": [
tool.name.casefold()
for tool in agent.tools or []
@@ -165,7 +168,56 @@ class Telemetry:
self._add_attribute(
span, "crew_inputs", json.dumps(inputs) if inputs else None
)
else:
self._add_attribute(
span,
"crew_agents",
json.dumps(
[
{
"key": agent.key,
"id": str(agent.id),
"role": agent.role,
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"function_calling_llm": agent.function_calling_llm,
"llm": agent.llm,
"delegation_enabled?": agent.allow_delegation,
"allow_code_execution?": agent.allow_code_execution,
"max_retry_limit": agent.max_retry_limit,
"tools_names": [
tool.name.casefold()
for tool in agent.tools or []
],
}
for agent in crew.agents
]
),
)
self._add_attribute(
span,
"crew_tasks",
json.dumps(
[
{
"key": task.key,
"id": str(task.id),
"async_execution?": task.async_execution,
"human_input?": task.human_input,
"agent_role": task.agent.role
if task.agent
else "None",
"agent_key": task.agent.key if task.agent else None,
"tools_names": [
tool.name.casefold()
for tool in task.tools or []
],
}
for task in crew.tasks
]
),
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
@@ -244,9 +296,7 @@ class Telemetry:
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
if llm:
self._add_attribute(
span, "llm", json.dumps(self._safe_llm_attributes(llm))
)
self._add_attribute(span, "llm", llm)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
@@ -266,9 +316,7 @@ class Telemetry:
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
if llm:
self._add_attribute(
span, "llm", json.dumps(self._safe_llm_attributes(llm))
)
self._add_attribute(span, "llm", llm)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
@@ -286,9 +334,7 @@ class Telemetry:
pkg_resources.get_distribution("crewai").version,
)
if llm:
self._add_attribute(
span, "llm", json.dumps(self._safe_llm_attributes(llm))
)
self._add_attribute(span, "llm", llm)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
@@ -349,6 +395,63 @@ class Telemetry:
except Exception:
pass
def deploy_signup_error_span(self):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Deploy Signup Error")
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def start_deployment_span(self, uuid: Optional[str] = None):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Start Deployment")
if uuid:
self._add_attribute(span, "uuid", uuid)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def create_crew_deployment_span(self):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Create Crew Deployment")
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def get_crew_logs_span(self, uuid: Optional[str], log_type: str = "deployment"):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Get Crew Logs")
self._add_attribute(span, "log_type", log_type)
if uuid:
self._add_attribute(span, "uuid", uuid)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def remove_crew_span(self, uuid: Optional[str] = None):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Remove Crew")
if uuid:
self._add_attribute(span, "uuid", uuid)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def crew_execution_span(self, crew: Crew, inputs: dict[str, Any] | None):
"""Records the complete execution of a crew.
This is only collected if the user has opted-in to share the crew.
@@ -384,7 +487,7 @@ class Telemetry:
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"i18n": agent.i18n.prompt_file,
"llm": json.dumps(self._safe_llm_attributes(agent.llm)),
"llm": agent.llm,
"delegation_enabled?": agent.allow_delegation,
"tools_names": [
tool.name.casefold() for tool in agent.tools or []
@@ -460,11 +563,3 @@ class Telemetry:
return span.set_attribute(key, value)
except Exception:
pass
def _safe_llm_attributes(self, llm):
attributes = ["name", "model_name", "base_url", "model", "top_k", "temperature"]
if llm:
safe_attributes = {k: v for k, v in vars(llm).items() if k in attributes}
safe_attributes["class"] = llm.__class__.__name__
return safe_attributes
return {}

View File

@@ -1,5 +1,4 @@
from langchain.tools import StructuredTool
from crewai.agents.agent_builder.utilities.base_agent_tool import BaseAgentTools

View File

@@ -1,8 +1,8 @@
from typing import Any, Dict, Optional
from pydantic import BaseModel, Field
from pydantic import BaseModel as PydanticBaseModel
from pydantic import Field as PydanticField
from pydantic.v1 import BaseModel, Field
class ToolCalling(BaseModel):

View File

@@ -1,39 +0,0 @@
import json
from typing import Any, List
import regex
from langchain.output_parsers import PydanticOutputParser
from langchain_core.exceptions import OutputParserException
from langchain_core.outputs import Generation
from langchain_core.pydantic_v1 import ValidationError
class ToolOutputParser(PydanticOutputParser):
"""Parses the function calling of a tool usage and it's arguments."""
def parse_result(self, result: List[Generation], *, partial: bool = False) -> Any:
result[0].text = self._transform_in_valid_json(result[0].text)
json_object = super().parse_result(result)
try:
return self.pydantic_object.parse_obj(json_object)
except ValidationError as e:
name = self.pydantic_object.__name__
msg = f"Failed to parse {name} from completion {json_object}. Got: {e}"
raise OutputParserException(msg, llm_output=json_object)
def _transform_in_valid_json(self, text) -> str:
text = text.replace("```", "").replace("json", "")
json_pattern = r"\{(?:[^{}]|(?R))*\}"
matches = regex.finditer(json_pattern, text)
for match in matches:
try:
# Attempt to parse the matched string as JSON
json_obj = json.loads(match.group())
# Return the first successfully parsed JSON object
json_obj = json.dumps(json_obj)
return str(json_obj)
except json.JSONDecodeError:
# If parsing fails, skip to the next match
continue
return text

View File

@@ -4,10 +4,8 @@ from difflib import SequenceMatcher
from textwrap import dedent
from typing import Any, List, Union
from langchain_core.tools import BaseTool
from langchain_openai import ChatOpenAI
from crewai.agents.tools_handler import ToolsHandler
from crewai.task import Task
from crewai.telemetry import Telemetry
from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
from crewai.utilities import I18N, Converter, ConverterError, Printer
@@ -19,7 +17,7 @@ if os.environ.get("AGENTOPS_API_KEY"):
except ImportError:
pass
OPENAI_BIGGER_MODELS = ["gpt-4o"]
OPENAI_BIGGER_MODELS = ["gpt-4", "gpt-4o", "o1-preview", "o1-mini"]
class ToolUsageErrorException(Exception):
@@ -47,11 +45,11 @@ class ToolUsage:
def __init__(
self,
tools_handler: ToolsHandler,
tools: List[BaseTool],
tools: List[Any],
original_tools: List[Any],
tools_description: str,
tools_names: str,
task: Any,
task: Task,
function_calling_llm: Any,
agent: Any,
action: Any,
@@ -72,20 +70,13 @@ class ToolUsage:
self.action = action
self.function_calling_llm = function_calling_llm
# Handling bug (see https://github.com/langchain-ai/langchain/pull/16395): raise an error if tools_names have space for ChatOpenAI
if isinstance(self.function_calling_llm, ChatOpenAI):
if " " in self.tools_names:
raise Exception(
"Tools names should not have spaces for ChatOpenAI models."
)
# Set the maximum parsing attempts for bigger models
if (isinstance(self.function_calling_llm, ChatOpenAI)) and (
self.function_calling_llm.openai_api_base is None
if (
self._is_gpt(self.function_calling_llm)
and self.function_calling_llm in OPENAI_BIGGER_MODELS
):
if self.function_calling_llm.model_name in OPENAI_BIGGER_MODELS:
self._max_parsing_attempts = 2
self._remember_format_after_usages = 4
self._max_parsing_attempts = 2
self._remember_format_after_usages = 4
def parse(self, tool_string: str):
"""Parse the tool string and return the tool calling."""
@@ -115,7 +106,7 @@ class ToolUsage:
def _use(
self,
tool_string: str,
tool: BaseTool,
tool: Any,
calling: Union[ToolCalling, InstructorToolCalling],
) -> str: # TODO: Fix this return type
tool_event = agentops.ToolEvent(name=calling.tool_name) if agentops else None # type: ignore
@@ -124,8 +115,6 @@ class ToolUsage:
result = self._i18n.errors("task_repeated_usage").format(
tool_names=self.tools_names
)
if self.agent.verbose:
self._printer.print(content=f"\n\n{result}\n", color="purple")
self._telemetry.tool_repeated_usage(
llm=self.function_calling_llm,
tool_name=tool.name,
@@ -154,7 +143,10 @@ class ToolUsage:
"Delegate work to coworker",
"Ask question to coworker",
]:
self.task.increment_delegations()
coworker = (
calling.arguments.get("coworker") if calling.arguments else None
)
self.task.increment_delegations(coworker)
if calling.arguments:
try:
@@ -208,8 +200,6 @@ class ToolUsage:
calling=calling, output=result, should_cache=should_cache
)
if self.agent.verbose:
self._printer.print(content=f"\n\n{result}\n", color="purple")
if agentops:
agentops.record(tool_event)
self._telemetry.tool_usage(
@@ -241,7 +231,7 @@ class ToolUsage:
result = self._remember_format(result=result) # type: ignore # "_remember_format" of "ToolUsage" does not return a value (it only ever returns None)
return result
def _should_remember_format(self) -> None:
def _should_remember_format(self) -> bool:
return self.task.used_tools % self._remember_format_after_usages == 0
def _remember_format(self, result: str) -> None:
@@ -261,7 +251,7 @@ class ToolUsage:
calling.arguments == last_tool_usage.arguments
)
def _select_tool(self, tool_name: str) -> BaseTool:
def _select_tool(self, tool_name: str) -> Any:
order_tools = sorted(
self.tools,
key=lambda tool: SequenceMatcher(
@@ -281,7 +271,7 @@ class ToolUsage:
self.task.increment_tools_errors()
if tool_name and tool_name != "":
raise Exception(
f"Action '{tool_name}' don't exist, these are the only available Actions:\n {self.tools_description}"
f"Action '{tool_name}' don't exist, these are the only available Actions:\n{self.tools_description}"
)
else:
raise Exception(
@@ -308,7 +298,11 @@ class ToolUsage:
return "\n--\n".join(descriptions)
def _is_gpt(self, llm) -> bool:
return isinstance(llm, ChatOpenAI) and llm.openai_api_base is None
return (
"gpt" in str(llm).lower()
or "o1-preview" in str(llm).lower()
or "o1-mini" in str(llm).lower()
)
def _tool_calling(
self, tool_string: str
@@ -321,7 +315,7 @@ class ToolUsage:
else ToolCalling
)
converter = Converter(
text=f"Only tools available:\n###\n{self._render()}\n\nReturn a valid schema for the tool, the tool name must be exactly equal one of the options, use this text to inform the valid output schema:\n\n{tool_string}```",
text=f"Only tools available:\n###\n{self._render()}\n\nReturn a valid schema for the tool, the tool name must be exactly equal one of the options, use this text to inform the valid output schema:\n\n### TEXT \n{tool_string}",
llm=self.function_calling_llm,
model=model,
instructions=dedent(
@@ -353,10 +347,10 @@ class ToolUsage:
return ToolUsageErrorException( # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling")
f'{self._i18n.errors("tool_arguments_error")}'
)
calling = ToolCalling( # type: ignore # Unexpected keyword argument "log" for "ToolCalling"
calling = ToolCalling(
tool_name=tool.name,
arguments=arguments,
log=tool_string,
log=tool_string, # type: ignore
)
except Exception as e:
self._run_attempts += 1
@@ -404,19 +398,19 @@ class ToolUsage:
'"' + value.replace('"', '\\"') + '"'
) # Re-encapsulate with double quotes
elif value.isdigit(): # Check if value is a digit, hence integer
formatted_value = value
value = value
elif value.lower() in [
"true",
"false",
"null",
]: # Check for boolean and null values
formatted_value = value.lower()
value = value.lower()
else:
# Assume the value is a string and needs quotes
formatted_value = '"' + value.replace('"', '\\"') + '"'
value = '"' + value.replace('"', '\\"') + '"'
# Rebuild the entry with proper quoting
formatted_entry = f'"{key}": {formatted_value}'
formatted_entry = f'"{key}": {value}'
formatted_entries.append(formatted_entry)
# Reconstruct the JSON string

View File

@@ -5,22 +5,26 @@
"backstory": "You are a seasoned manager with a knack for getting the best out of your team.\nYou are also known for your ability to delegate work to the right people, and to ask the right questions to get the best out of your team.\nEven though you don't perform tasks by yourself, you have a lot of experience in the field, which allows you to properly evaluate the work of your team members."
},
"slices": {
"observation": "\nObservation",
"observation": "\nObservation:",
"task": "\nCurrent Task: {input}\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:",
"memory": "\n\n# Useful context: \n{memory}",
"role_playing": "You are {role}. {backstory}\nYour personal goal is: {goal}",
"tools": "\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nUse the following format:\n\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple python dictionary, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce all necessary information is gathered:\n\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n",
"no_tools": "To give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\nYour final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!",
"format": "I MUST either use a tool (use one at time) OR give my best final answer. To Use the following format:\n\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action, dictionary enclosed in curly braces\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\nYour final answer must be the great and the most complete as possible, it must be outcome described\n\n ",
"no_tools": "\nTo give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!",
"format": "I MUST either use a tool (use one at time) OR give my best final answer not both at the same time. To Use the following format:\n\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action, dictionary enclosed in curly braces\nObservation: the result of the action\n... (this Thought/Action/Action Input/Result can repeat N times)\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n ",
"final_answer_format": "If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfy the expect criteria, use the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\n\n",
"format_without_tools": "\nSorry, I didn't use the right format. I MUST either use a tool (among the available ones), OR give my best final answer.\nI just remembered the expected format I must follow:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task\nYour final answer must be the great and the most complete as possible, it must be outcome described\n\n",
"format_without_tools": "\nSorry, I didn't use the right format. I MUST either use a tool (among the available ones), OR give my best final answer.\nI just remembered the expected format I must follow:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Result can repeat N times)\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n",
"task_with_context": "{task}\n\nThis is the context you're working with:\n{context}",
"expected_output": "\nThis is the expect criteria for your final answer: {expected_output} \n you MUST return the actual complete content as the final answer, not a summary.",
"expected_output": "\nThis is the expect criteria for your final answer: {expected_output}\nyou MUST return the actual complete content as the final answer, not a summary.",
"human_feedback": "You got human feedback on your work, re-evaluate it and give a new Final Answer when ready.\n {human_feedback}",
"getting_input": "This is the agent's final answer: {final_answer}\nPlease provide feedback: "
"getting_input": "This is the agent's final answer: {final_answer}\nPlease provide feedback: ",
"summarizer_system_message": "You are a helpful assistant that summarizes text.",
"sumamrize_instruction": "Summarize the following text, make sure to include all the important information: {group}",
"summary": "This is a summary of our conversation so far:\n{merged_summary}"
},
"errors": {
"force_final_answer": "Tool won't be use because it's time to give your final answer. Don't use tools and just your absolute BEST Final answer.",
"force_final_answer_error": "You can't keep going, this was the best you could do.\n {formatted_answer.text}",
"force_final_answer": "Now it's time you MUST give your absolute best final answer. You'll ignore all previous instructions, stop using any tools, and just return your absolute BEST Final answer.",
"agent_tool_unexsiting_coworker": "\nError executing tool. coworker mentioned not found, it must be one of the following options:\n{coworkers}\n",
"task_repeated_usage": "I tried reusing the same input, I must stop using this action input. I'll try something else instead.\n\n",
"tool_usage_error": "I encountered an error: {error}",

View File

@@ -1,7 +1,7 @@
from .converter import Converter, ConverterError
from .file_handler import FileHandler
from .i18n import I18N
from .instructor import Instructor
from .internal_instructor import InternalInstructor
from .logger import Logger
from .parser import YamlParser
from .printer import Printer
@@ -16,7 +16,7 @@ __all__ = [
"ConverterError",
"FileHandler",
"I18N",
"Instructor",
"InternalInstructor",
"Logger",
"Printer",
"Prompts",

View File

@@ -23,17 +23,16 @@ def process_config(
# Copy values from config (originally from YAML) to the model's attributes.
# Only copy if the attribute isn't already set, preserving any explicitly defined values.
for key, value in config.items():
if key not in model_class.model_fields:
if key not in model_class.model_fields or values.get(key) is not None:
continue
if values.get(key) is not None:
continue
if isinstance(value, (str, int, float, bool, list)):
values[key] = value
elif isinstance(value, dict):
if isinstance(value, dict):
if isinstance(values.get(key), dict):
values[key].update(value)
else:
values[key] = value
else:
values[key] = value
# Remove the config from values to avoid duplicate processing
values.pop("config", None)

View File

@@ -2,8 +2,7 @@ import json
import re
from typing import Any, Optional, Type, Union
from langchain.schema import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
from crewai.llm import LLM
from pydantic import BaseModel, ValidationError
from crewai.agents.agent_builder.utilities.base_output_converter import OutputConverter
@@ -28,7 +27,12 @@ class Converter(OutputConverter):
if self.is_gpt:
return self._create_instructor().to_pydantic()
else:
return self._create_chain().invoke({})
return LLM(model=self.llm).call(
[
{"role": "system", "content": self.instructions},
{"role": "user", "content": self.text},
]
)
except Exception as e:
if current_attempt < self.max_attempts:
return self.to_pydantic(current_attempt + 1)
@@ -42,7 +46,14 @@ class Converter(OutputConverter):
if self.is_gpt:
return self._create_instructor().to_json()
else:
return json.dumps(self._create_chain().invoke({}).model_dump())
return json.dumps(
LLM(model=self.llm).call(
[
{"role": "system", "content": self.instructions},
{"role": "user", "content": self.text},
]
)
)
except Exception as e:
if current_attempt < self.max_attempts:
return self.to_json(current_attempt + 1)
@@ -50,33 +61,39 @@ class Converter(OutputConverter):
def _create_instructor(self):
"""Create an instructor."""
from crewai.utilities import Instructor
from crewai.utilities import InternalInstructor
inst = Instructor(
inst = InternalInstructor(
llm=self.llm,
max_attempts=self.max_attempts,
model=self.model,
content=self.text,
instructions=self.instructions,
)
return inst
def _create_chain(self):
def _convert_with_instructions(self):
"""Create a chain."""
from crewai.utilities.crew_pydantic_output_parser import (
CrewPydanticOutputParser,
)
parser = CrewPydanticOutputParser(pydantic_object=self.model)
new_prompt = SystemMessage(content=self.instructions) + HumanMessage(
content=self.text
result = LLM(model=self.llm).call(
[
{"role": "system", "content": self.instructions},
{"role": "user", "content": self.text},
]
)
return new_prompt | self.llm | parser
return parser.parse_result(result)
@property
def is_gpt(self) -> bool:
"""Return if llm provided is of gpt from openai."""
return isinstance(self.llm, ChatOpenAI) and self.llm.openai_api_base is None
return (
"gpt" in str(self.llm).lower()
or "o1-preview" in str(self.llm).lower()
or "o1-mini" in str(self.llm).lower()
)
def convert_to_model(
@@ -89,23 +106,14 @@ def convert_to_model(
model = output_pydantic or output_json
if model is None:
return result
try:
escaped_result = json.dumps(json.loads(result, strict=False))
return validate_model(escaped_result, model, bool(output_json))
except json.JSONDecodeError as e:
Printer().print(
content=f"Error parsing JSON: {e}. Attempting to handle partial JSON.",
color="yellow",
)
except json.JSONDecodeError:
return handle_partial_json(
result, model, bool(output_json), agent, converter_cls
)
except ValidationError as e:
Printer().print(
content=f"Pydantic validation error: {e}. Attempting to handle partial JSON.",
color="yellow",
)
except ValidationError:
return handle_partial_json(
result, model, bool(output_json), agent, converter_cls
)
@@ -140,16 +148,10 @@ def handle_partial_json(
if is_json_output:
return exported_result.model_dump()
return exported_result
except json.JSONDecodeError as e:
Printer().print(
content=f"Error parsing JSON: {e}. The extracted JSON-like string is not valid JSON. Attempting alternative conversion method.",
color="yellow",
)
except ValidationError as e:
Printer().print(
content=f"Pydantic validation error: {e}. The JSON structure doesn't match the expected model. Attempting alternative conversion method.",
color="yellow",
)
except json.JSONDecodeError:
pass
except ValidationError:
pass
except Exception as e:
Printer().print(
content=f"Unexpected error during partial JSON handling: {type(e).__name__}: {e}. Attempting alternative conversion method.",
@@ -170,7 +172,6 @@ def convert_with_instructions(
) -> Union[dict, BaseModel, str]:
llm = agent.function_calling_llm or agent.llm
instructions = get_conversion_instructions(model, llm)
converter = create_converter(
agent=agent,
converter_cls=converter_cls,
@@ -179,6 +180,7 @@ def convert_with_instructions(
model=model,
instructions=instructions,
)
exported_result = (
converter.to_pydantic() if not is_json_output else converter.to_json()
)
@@ -202,9 +204,12 @@ def get_conversion_instructions(model: Type[BaseModel], llm: Any) -> str:
def is_gpt(llm: Any) -> bool:
from langchain_openai import ChatOpenAI
return isinstance(llm, ChatOpenAI) and llm.openai_api_base is None
"""Return if llm provided is of gpt from openai."""
return (
"gpt" in str(llm).lower()
or "o1-preview" in str(llm).lower()
or "o1-mini" in str(llm).lower()
)
def create_converter(

View File

@@ -1,34 +1,31 @@
import json
from typing import Any, List, Type
import regex
from langchain.output_parsers import PydanticOutputParser
from langchain_core.exceptions import OutputParserException
from langchain_core.outputs import Generation
from langchain_core.pydantic_v1 import ValidationError
from pydantic import BaseModel
from typing import Any, Type
from crewai.agents.parser import OutputParserException
from pydantic import BaseModel, ValidationError
class CrewPydanticOutputParser(PydanticOutputParser):
class CrewPydanticOutputParser:
"""Parses the text into pydantic models"""
pydantic_object: Type[BaseModel]
def parse_result(self, result: List[Generation]) -> Any:
result[0].text = self._transform_in_valid_json(result[0].text)
def parse_result(self, result: str) -> Any:
result = self._transform_in_valid_json(result)
# Treating edge case of function calling llm returning the name instead of tool_name
json_object = json.loads(result[0].text)
json_object = json.loads(result)
if "tool_name" not in json_object:
json_object["tool_name"] = json_object.get("name", "")
result[0].text = json.dumps(json_object)
result = json.dumps(json_object)
try:
return self.pydantic_object.model_validate(json_object)
except ValidationError as e:
name = self.pydantic_object.__name__
msg = f"Failed to parse {name} from completion {json_object}. Got: {e}"
raise OutputParserException(msg, llm_output=json_object)
raise OutputParserException(error=msg)
def _transform_in_valid_json(self, text) -> str:
text = text.replace("```", "").replace("json", "")

View File

@@ -1,14 +1,13 @@
from collections import defaultdict
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
from rich.console import Console
from rich.table import Table
from crewai.agent import Agent
from crewai.task import Task
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry import Telemetry
from pydantic import BaseModel, Field
from rich.box import HEAVY_EDGE
from rich.console import Console
from rich.table import Table
class TaskEvaluationPydanticOutput(BaseModel):
@@ -51,7 +50,7 @@ class CrewEvaluator:
),
backstory="Evaluator agent for crew evaluation with precise capabilities to evaluate the performance of the agents in the crew based on the tasks they have performed",
verbose=False,
llm=ChatOpenAI(model=self.openai_model_name),
llm=self.openai_model_name,
)
def _evaluation_task(
@@ -77,50 +76,72 @@ class CrewEvaluator:
def print_crew_evaluation_result(self) -> None:
"""
Prints the evaluation result of the crew in a table.
A Crew with 2 tasks using the command crewai test -n 2
A Crew with 2 tasks using the command crewai test -n 3
will output the following table:
Task Scores
Tasks Scores
(1-10 Higher is better)
┏━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━┓
┃ Tasks/Crew ┃ Run 1 ┃ Run 2 ┃ Avg. Total ┃
┡━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━┩
│ Task 1     │ 10.0  │ 9.0   │ 9.5        │
│ Task 2     │ 9.0   │ 9.0   │ 9.0        │
│ Crew       │ 9.5   │ 9.0   │ 9.2        │
└────────────┴───────┴───────┴────────────┘
┏━━━━━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Tasks/Crew/Agents  ┃ Run 1 ┃ Run 2 ┃ Run 3 ┃ Avg. Total ┃ Agents                       ┃
┡━━━━━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ Task 1             │ 9.0   │ 10.0  │ 9.0   │ 9.3        │ - AI LLMs Senior Researcher  │
│                    │       │       │       │            │ - AI LLMs Reporting Analyst  │
│                    │       │       │       │            │                              │
│ Task 2             │ 9.0   │ 9.0   │ 9.0   │ 9.0        │ - AI LLMs Senior Researcher  │
│                    │       │       │       │            │ - AI LLMs Reporting Analyst  │
│                    │       │       │       │            │                              │
│ Crew               │ 9.0   │ 9.5   │ 9.0   │ 9.2        │                              │
│ Execution Time (s) │ 42    │ 79    │ 52    │ 57         │                              │
└────────────────────┴───────┴───────┴───────┴────────────┴──────────────────────────────┘
"""
task_averages = [
sum(scores) / len(scores) for scores in zip(*self.tasks_scores.values())
]
crew_average = sum(task_averages) / len(task_averages)
# Create a table
table = Table(title="Tasks Scores \n (1-10 Higher is better)")
table = Table(title="Tasks Scores \n (1-10 Higher is better)", box=HEAVY_EDGE)
# Add columns for the table
table.add_column("Tasks/Crew")
table.add_column("Tasks/Crew/Agents", style="cyan")
for run in range(1, len(self.tasks_scores) + 1):
table.add_column(f"Run {run}")
table.add_column("Avg. Total")
table.add_column(f"Run {run}", justify="center")
table.add_column("Avg. Total", justify="center")
table.add_column("Agents", style="green")
# Add rows for each task
for task_index in range(len(task_averages)):
for task_index, task in enumerate(self.crew.tasks):
task_scores = [
self.tasks_scores[run][task_index]
for run in range(1, len(self.tasks_scores) + 1)
]
avg_score = task_averages[task_index]
agents = list(task.processed_by_agents)
# Add the task row with the first agent
table.add_row(
f"Task {task_index + 1}", *map(str, task_scores), f"{avg_score:.1f}"
f"Task {task_index + 1}",
*[f"{score:.1f}" for score in task_scores],
f"{avg_score:.1f}",
f"- {agents[0]}" if agents else "",
)
# Add a row for the crew average
# Add rows for additional agents
for agent in agents[1:]:
table.add_row("", "", "", "", "", f"- {agent}")
# Add a blank separator row if it's not the last task
if task_index < len(self.crew.tasks) - 1:
table.add_row("", "", "", "", "", "")
# Add Crew and Execution Time rows
crew_scores = [
sum(self.tasks_scores[run]) / len(self.tasks_scores[run])
for run in range(1, len(self.tasks_scores) + 1)
]
table.add_row("Crew", *map(str, crew_scores), f"{crew_average:.1f}")
table.add_row(
"Crew",
*[f"{score:.2f}" for score in crew_scores],
f"{crew_average:.1f}",
"",
)
run_exec_times = [
int(sum(tasks_exec_times))
@@ -128,11 +149,9 @@ class CrewEvaluator:
]
execution_time_avg = int(sum(run_exec_times) / len(run_exec_times))
table.add_row(
"Execution Time (s)",
*map(str, run_exec_times),
f"{execution_time_avg}",
"Execution Time (s)", *map(str, run_exec_times), f"{execution_time_avg}", ""
)
# Display the table in the terminal
console = Console()
console.print(table)

View File

@@ -1,7 +1,6 @@
import os
from typing import List
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
from crewai.utilities import Converter
@@ -93,7 +92,11 @@ class TaskEvaluator:
return converter.to_pydantic()
def _is_gpt(self, llm) -> bool:
return isinstance(llm, ChatOpenAI) and llm.openai_api_base is None
return (
"gpt" in str(self.llm).lower()
or "o1-preview" in str(self.llm).lower()
or "o1-mini" in str(self.llm).lower()
)
def evaluate_training_data(
self, training_data: dict, agent_id: str

View File

@@ -1,50 +0,0 @@
from typing import Any, Optional, Type
import instructor
from pydantic import BaseModel, Field, PrivateAttr, model_validator
class Instructor(BaseModel):
"""Class that wraps an agent llm with instructor."""
_client: Any = PrivateAttr()
content: str = Field(description="Content to be sent to the instructor.")
agent: Optional[Any] = Field(
description="The agent that needs to use instructor.", default=None
)
llm: Optional[Any] = Field(
description="The agent that needs to use instructor.", default=None
)
instructions: Optional[str] = Field(
description="Instructions to be sent to the instructor.",
default=None,
)
model: Type[BaseModel] = Field(
description="Pydantic model to be used to create an output."
)
@model_validator(mode="after")
def set_instructor(self):
"""Set instructor."""
if self.agent and not self.llm:
self.llm = self.agent.function_calling_llm or self.agent.llm
self._client = instructor.patch(
self.llm.client._client,
mode=instructor.Mode.TOOLS,
)
return self
def to_json(self):
model = self.to_pydantic()
return model.model_dump_json(indent=2)
def to_pydantic(self):
messages = [{"role": "user", "content": self.content}]
if self.instructions:
messages.append({"role": "system", "content": self.instructions})
model = self._client.chat.completions.create(
model=self.llm.model_name, response_model=self.model, messages=messages
)
return model

View File

@@ -0,0 +1,47 @@
from typing import Any, Optional, Type
import instructor
from litellm import completion
class InternalInstructor:
"""Class that wraps an agent llm with instructor."""
def __init__(
self,
content: str,
model: Type,
agent: Optional[Any] = None,
llm: Optional[str] = None,
instructions: Optional[str] = None,
):
self.content = content
self.agent = agent
self.llm = llm
self.instructions = instructions
self.model = model
self._client = None
self.set_instructor()
def set_instructor(self):
"""Set instructor."""
if self.agent and not self.llm:
self.llm = self.agent.function_calling_llm or self.agent.llm
self._client = instructor.from_litellm(
completion,
mode=instructor.Mode.TOOLS,
)
def to_json(self):
model = self.to_pydantic()
return model.model_dump_json(indent=2)
def to_pydantic(self):
messages = [{"role": "user", "content": self.content}]
if self.instructions:
messages.append({"role": "system", "content": self.instructions})
model = self._client.chat.completions.create(
model=self.llm, response_model=self.model, messages=messages
)
return model
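A minimal usage sketch (not part of the diff) for InternalInstructor with a hypothetical Pydantic model; it issues a real litellm call, so provider credentials are assumed to be configured.
from pydantic import BaseModel

class Greeting(BaseModel):
    message: str

wrapper = InternalInstructor(
    content="Say hello to the crew.",
    model=Greeting,
    llm="gpt-4o-mini",  # hypothetical model name
)
print(wrapper.to_json())  # e.g. {"message": "Hello, crew!"}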

View File

@@ -9,9 +9,9 @@ class Logger(BaseModel):
verbose: bool = Field(default=False)
_printer: Printer = PrivateAttr(default_factory=Printer)
def log(self, level, message, color="bold_green"):
def log(self, level, message, color="bold_yellow"):
if self.verbose:
timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
self._printer.print(
f"[{timestamp}][{level.upper()}]: {message}", color=color
f"\n[{timestamp}][{level.upper()}]: {message}", color=color
)

View File

@@ -1,5 +1,6 @@
import re
class YamlParser:
@staticmethod
def parse(file):
@@ -16,7 +17,9 @@ class YamlParser:
# Replace single { and } with doubled ones, while leaving already doubled ones intact and the other special characters {# and {%
modified_content = re.sub(r"(?<!\{){(?!\{)(?!\#)(?!\%)", "{{", content)
modified_content = re.sub(r"(?<!\})(?<!\%)(?<!\#)\}(?!})", "}}", modified_content)
modified_content = re.sub(
r"(?<!\})(?<!\%)(?<!\#)\}(?!})", "}}", modified_content
)
# Check for 'context:' not followed by '[' and raise an error
if re.search(r"context:(?!\s*\[)", modified_content):

View File

@@ -1,6 +1,4 @@
from typing import Any, List, Optional
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
from crewai.agent import Agent
@@ -27,7 +25,7 @@ class CrewPlanner:
self.tasks = tasks
if planning_agent_llm is None:
self.planning_agent_llm = ChatOpenAI(model="gpt-4o-mini")
self.planning_agent_llm = "gpt-4o-mini"
else:
self.planning_agent_llm = planning_agent_llm

View File

@@ -1,5 +1,8 @@
from typing import Optional
class Printer:
def print(self, content: str, color: str):
def print(self, content: str, color: Optional[str] = None):
if color == "purple":
self._print_purple(content)
elif color == "red":
@@ -12,6 +15,8 @@ class Printer:
self._print_bold_blue(content)
elif color == "yellow":
self._print_yellow(content)
elif color == "bold_yellow":
self._print_bold_yellow(content)
else:
print(content)
@@ -32,3 +37,6 @@ class Printer:
def _print_yellow(self, content):
print("\033[93m {}\033[00m".format(content))
def _print_bold_yellow(self, content):
print("\033[1m\033[93m {}\033[00m".format(content))

View File

@@ -1,8 +1,5 @@
from typing import Any, ClassVar, Optional
from langchain.prompts import BasePromptTemplate, PromptTemplate
from pydantic import BaseModel, Field
from typing import Any, Optional
from crewai.utilities import I18N
@@ -14,27 +11,38 @@ class Prompts(BaseModel):
system_template: Optional[str] = None
prompt_template: Optional[str] = None
response_template: Optional[str] = None
SCRATCHPAD_SLICE: ClassVar[str] = "\n{agent_scratchpad}"
use_system_prompt: Optional[bool] = False
agent: Any
def task_execution(self) -> BasePromptTemplate:
def task_execution(self) -> dict[str, str]:
"""Generate a standard prompt for task execution."""
slices = ["role_playing"]
if len(self.tools) > 0:
slices.append("tools")
else:
slices.append("no_tools")
system = self._build_prompt(slices)
slices.append("task")
if not self.system_template and not self.prompt_template:
return self._build_prompt(slices)
if (
not self.system_template
and not self.prompt_template
and self.use_system_prompt
):
return {
"system": system,
"user": self._build_prompt(["task"]),
"prompt": self._build_prompt(slices),
}
else:
return self._build_prompt(
slices,
self.system_template,
self.prompt_template,
self.response_template,
)
return {
"prompt": self._build_prompt(
slices,
self.system_template,
self.prompt_template,
self.response_template,
)
}
def _build_prompt(
self,
@@ -42,12 +50,11 @@ class Prompts(BaseModel):
system_template=None,
prompt_template=None,
response_template=None,
) -> BasePromptTemplate:
) -> str:
"""Constructs a prompt string from specified components."""
if not system_template and not prompt_template:
prompt_parts = [self.i18n.slice(component) for component in components]
prompt_parts.append(self.SCRATCHPAD_SLICE)
prompt = PromptTemplate.from_template("".join(prompt_parts))
prompt = "".join(prompt_parts)
else:
prompt_parts = [
self.i18n.slice(component)
@@ -56,9 +63,14 @@ class Prompts(BaseModel):
]
system = system_template.replace("{{ .System }}", "".join(prompt_parts))
prompt = prompt_template.replace(
"{{ .Prompt }}",
"".join([self.i18n.slice("task"), self.SCRATCHPAD_SLICE]),
"{{ .Prompt }}", "".join(self.i18n.slice("task"))
)
response = response_template.split("{{ .Response }}")[0]
prompt = PromptTemplate.from_template(f"{system}\n{prompt}\n{response}")
prompt = f"{system}\n{prompt}\n{response}"
prompt = (
prompt.replace("{goal}", self.agent.goal)
.replace("{role}", self.agent.role)
.replace("{backstory}", self.agent.backstory)
)
return prompt
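
task_execution() now returns plain strings keyed by "system", "user", and "prompt" instead of a LangChain BasePromptTemplate, with the agent's role, goal, and backstory already interpolated. A hedged sketch of turning that dict into chat messages; the executor that consumes it is not part of this hunk, so the exact shape there is an assumption:

def build_messages(prompt: dict[str, str]) -> list[dict[str, str]]:
    # Illustrative only: the real executor also fills {input}, {tools}, etc. before sending.
    if "system" in prompt:
        return [
            {"role": "system", "content": prompt["system"]},
            {"role": "user", "content": prompt["user"]},
        ]
    # Single-prompt mode: no system message, or custom system/prompt templates in use.
    return [{"role": "user", "content": prompt["prompt"]}]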

View File

@@ -1,36 +1,17 @@
from typing import Any, Dict, List
import tiktoken
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import LLMResult
from litellm.integrations.custom_logger import CustomLogger
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
class TokenCalcHandler(BaseCallbackHandler):
model_name: str = ""
token_cost_process: TokenProcess
encoding: tiktoken.Encoding
def __init__(self, model_name, token_cost_process):
self.model_name = model_name
class TokenCalcHandler(CustomLogger):
def __init__(self, token_cost_process: TokenProcess):
self.token_cost_process = token_cost_process
try:
self.encoding = tiktoken.encoding_for_model(self.model_name)
except KeyError:
self.encoding = tiktoken.get_encoding("cl100k_base")
def on_llm_start(
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> None:
def log_success_event(self, kwargs, response_obj, start_time, end_time):
if self.token_cost_process is None:
return
for prompt in prompts:
self.token_cost_process.sum_prompt_tokens(len(self.encoding.encode(prompt)))
async def on_llm_new_token(self, token: str, **kwargs) -> None:
self.token_cost_process.sum_completion_tokens(1)
def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
self.token_cost_process.sum_successful_requests(1)
self.token_cost_process.sum_prompt_tokens(response_obj["usage"].prompt_tokens)
self.token_cost_process.sum_completion_tokens(
response_obj["usage"].completion_tokens
)
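
Token accounting now comes straight from the API response: TokenCalcHandler subclasses litellm's CustomLogger and reads prompt_tokens / completion_tokens from the usage object, so the tiktoken estimates and LangChain callbacks are gone. One way to attach it, assuming global registration (crewAI's own LLM wrapper may instead pass the handler per call) and an assumed module path:

import litellm
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.utilities.token_counter_callback import TokenCalcHandler  # import path assumed

token_process = TokenProcess()
litellm.callbacks = [TokenCalcHandler(token_process)]  # log_success_event fires after each successful call

litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hi"}],
)
# token_process now holds the prompt/completion counts reported by the API.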

View File

@@ -6,15 +6,14 @@ from unittest.mock import patch
import pytest
from crewai import Agent, Crew, Task
from crewai.agents.cache import CacheHandler
from crewai.agents.executor import CrewAgentExecutor
from crewai.agents.parser import CrewAgentParser
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.llm import LLM
from crewai.agents.parser import CrewAgentParser, OutputParserException
from crewai.tools.tool_calling import InstructorToolCalling
from crewai.tools.tool_usage import ToolUsage
from crewai.utilities import RPMController
from langchain.schema import AgentAction
from langchain.tools import tool
from langchain_core.exceptions import OutputParserException
from langchain_openai import ChatOpenAI
from crewai_tools import tool
from crewai.agents.parser import AgentAction
def test_agent_creation():
@@ -28,15 +27,20 @@ def test_agent_creation():
def test_agent_default_values():
agent = Agent(role="test role", goal="test goal", backstory="test backstory")
assert isinstance(agent.llm, ChatOpenAI)
assert agent.llm.model_name == "gpt-4o"
assert agent.llm.temperature == 0.7
assert agent.llm.verbose is False
assert agent.allow_delegation is True
assert agent.llm == "gpt-4o"
assert agent.allow_delegation is False
def test_custom_llm():
agent = Agent(
role="test role", goal="test goal", backstory="test backstory", llm="gpt-4"
)
assert agent.llm == "gpt-4"
def test_custom_llm_with_langchain():
from langchain_openai import ChatOpenAI
agent = Agent(
role="test role",
goal="test goal",
@@ -44,9 +48,7 @@ def test_custom_llm():
llm=ChatOpenAI(temperature=0, model="gpt-4"),
)
assert isinstance(agent.llm, ChatOpenAI)
assert agent.llm.model_name == "gpt-4"
assert agent.llm.temperature == 0
assert agent.llm == "gpt-4"
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -65,7 +67,7 @@ def test_agent_execution():
)
output = agent.execute_task(task)
assert output == "The result of the math operation 1 + 1 is 2."
assert output == "1 + 1 = 2"
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -89,7 +91,7 @@ def test_agent_execution_with_tools():
expected_output="The result of the multiplication.",
)
output = agent.execute_task(task)
assert output == "The result of 3 times 4 is 12."
assert output == "The result of the multiplication of 3 times 4 is 12."
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -104,7 +106,6 @@ def test_logging_tool_usage():
goal="test goal",
backstory="test backstory",
tools=[multiplier],
allow_delegation=False,
verbose=True,
)
@@ -120,7 +121,7 @@ def test_logging_tool_usage():
tool_usage = InstructorToolCalling(
tool_name=multiplier.name, arguments={"first_number": 3, "second_number": 4}
)
assert output == "12"
assert output == "The result of the multiplication is 12."
assert agent.tools_handler.last_used_tool.tool_name == tool_usage.tool_name
assert agent.tools_handler.last_used_tool.arguments == tool_usage.arguments
@@ -181,7 +182,7 @@ def test_cache_hitting():
task = Task(
description="What is 2 times 6? Ignore correctness and just return the result of the multiplication tool, you must use the tool.",
agent=agent,
expected_output="The result of the multiplication.",
expected_output="The number that is the result of the multiplication.",
)
output = agent.execute_task(task)
assert output == "0"
@@ -277,10 +278,64 @@ def test_agent_execution_with_specific_tools():
assert output == "The result of the multiplication is 12."
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_powered_by_new_o_model_family_that_allows_skipping_tool():
@tool
def multiplier(first_number: int, second_number: int) -> float:
"""Useful for when you need to multiply two numbers together."""
return first_number * second_number
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
llm="o1-preview",
max_iter=3,
use_system_prompt=False,
allow_delegation=False,
use_stop_words=False,
)
task = Task(
description="What is 3 times 4?",
agent=agent,
expected_output="The result of the multiplication.",
)
output = agent.execute_task(task=task, tools=[multiplier])
assert output == "12"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_powered_by_new_o_model_family_that_uses_tool():
@tool
def comapny_customer_data() -> float:
"""Useful for getting customer related data."""
return "The company has 42 customers"
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
llm="o1-preview",
max_iter=3,
use_system_prompt=False,
allow_delegation=False,
use_stop_words=False,
)
task = Task(
description="How many customers does the company have?",
agent=agent,
expected_output="The number of customers",
)
output = agent.execute_task(task=task, tools=[comapny_customer_data])
assert output == "The company has 42 customers"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_custom_max_iterations():
@tool
def get_final_answer(numbers) -> float:
def get_final_answer() -> float:
"""Get the final answer but don't give it yet, just re-use this
tool non-stop."""
return 42
@@ -294,7 +349,7 @@ def test_agent_custom_max_iterations():
)
with patch.object(
CrewAgentExecutor, "_iter_next_step", wraps=agent.agent_executor._iter_next_step
LLM, "call", wraps=LLM("gpt-4o", stop=["\nObservation:"]).call
) as private_mock:
task = Task(
description="The final answer is 42. But don't give it yet, instead keep using the `get_final_answer` tool.",
@@ -304,13 +359,13 @@ def test_agent_custom_max_iterations():
task=task,
tools=[get_final_answer],
)
private_mock.assert_called_once()
assert private_mock.call_count == 2
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_repeated_tool_usage(capsys):
@tool
def get_final_answer(anything: str) -> float:
def get_final_answer() -> float:
"""Get the final answer but don't give it yet, just re-use this
tool non-stop."""
return 42
@@ -320,7 +375,7 @@ def test_agent_repeated_tool_usage(capsys):
goal="test goal",
backstory="test backstory",
max_iter=4,
llm=ChatOpenAI(model="gpt-4"),
llm="gpt-4",
allow_delegation=False,
verbose=True,
)
@@ -357,7 +412,7 @@ def test_agent_repeated_tool_usage_check_even_with_disabled_cache(capsys):
goal="test goal",
backstory="test backstory",
max_iter=4,
llm=ChatOpenAI(model="gpt-4"),
llm="gpt-4",
allow_delegation=False,
verbose=True,
cache=False,
@@ -383,7 +438,7 @@ def test_agent_repeated_tool_usage_check_even_with_disabled_cache(capsys):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_moved_on_after_max_iterations():
@tool
def get_final_answer(numbers) -> float:
def get_final_answer() -> float:
"""Get the final answer but don't give it yet, just re-use this
tool non-stop."""
return 42
@@ -404,13 +459,13 @@ def test_agent_moved_on_after_max_iterations():
task=task,
tools=[get_final_answer],
)
assert output == "42"
assert output == "The final answer is 42."
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_respect_the_max_rpm_set(capsys):
@tool
def get_final_answer(anything: str) -> float:
def get_final_answer() -> float:
"""Get the final answer but don't give it yet, just re-use this
tool non-stop."""
return 42
@@ -444,11 +499,10 @@ def test_agent_respect_the_max_rpm_set(capsys):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_respect_the_max_rpm_set_over_crew_rpm(capsys):
from unittest.mock import patch
from langchain.tools import tool
from crewai_tools import tool
@tool
def get_final_answer(numbers) -> float:
def get_final_answer() -> float:
"""Get the final answer but don't give it yet, just re-use this
tool non-stop."""
return 42
@@ -483,10 +537,10 @@ def test_agent_respect_the_max_rpm_set_over_crew_rpm(capsys):
def test_agent_without_max_rpm_respet_crew_rpm(capsys):
from unittest.mock import patch
from langchain.tools import tool
from crewai_tools import tool
@tool
def get_final_answer(numbers) -> float:
def get_final_answer() -> float:
"""Get the final answer but don't give it yet, just re-use this
tool non-stop."""
return 42
@@ -496,6 +550,7 @@ def test_agent_without_max_rpm_respet_crew_rpm(capsys):
goal="test goal",
backstory="test backstory",
max_rpm=10,
max_iter=2,
verbose=True,
allow_delegation=False,
)
@@ -504,7 +559,7 @@ def test_agent_without_max_rpm_respet_crew_rpm(capsys):
role="test role2",
goal="test goal2",
backstory="test backstory2",
max_iter=2,
max_iter=1,
verbose=True,
allow_delegation=False,
)
@@ -514,7 +569,7 @@ def test_agent_without_max_rpm_respet_crew_rpm(capsys):
description="Just say hi.", agent=agent1, expected_output="Your greeting."
),
Task(
description="NEVER give a Final Answer, instead keep using the `get_final_answer` tool non-stop",
description="NEVER give a Final Answer, unless you are told otherwise, instead keep using the `get_final_answer` tool non-stop, until you must give you best final answer",
expected_output="The final answer",
tools=[get_final_answer],
agent=agent2,
@@ -535,8 +590,7 @@ def test_agent_without_max_rpm_respet_crew_rpm(capsys):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_error_on_parsing_tool(capsys):
from unittest.mock import patch
from langchain.tools import tool
from crewai_tools import tool
@tool
def get_final_answer() -> float:
@@ -548,6 +602,7 @@ def test_agent_error_on_parsing_tool(capsys):
role="test role",
goal="test goal",
backstory="test backstory",
max_iter=1,
verbose=True,
)
tasks = [
@@ -563,7 +618,7 @@ def test_agent_error_on_parsing_tool(capsys):
agents=[agent1],
tasks=tasks,
verbose=True,
function_calling_llm=ChatOpenAI(model="gpt-4-0125-preview"),
function_calling_llm="gpt-4o",
)
with patch.object(ToolUsage, "_render") as force_exception:
@@ -577,10 +632,10 @@ def test_agent_error_on_parsing_tool(capsys):
def test_agent_remembers_output_format_after_using_tools_too_many_times():
from unittest.mock import patch
from langchain.tools import tool
from crewai_tools import tool
@tool
def get_final_answer(anything: str) -> float:
def get_final_answer() -> float:
"""Get the final answer but don't give it yet, just re-use this
tool non-stop."""
return 42
@@ -611,7 +666,6 @@ def test_agent_remembers_output_format_after_using_tools_too_many_times():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_use_specific_tasks_output_as_context(capsys):
agent1 = Agent(role="test role", goal="test goal", backstory="test backstory")
agent2 = Agent(role="test role2", goal="test goal2", backstory="test backstory2")
say_hi_task = Task(
@@ -631,7 +685,7 @@ def test_agent_use_specific_tasks_output_as_context(capsys):
crew = Crew(agents=[agent1, agent2], tasks=tasks)
result = crew.kickoff()
print("LOWER RESULT", result.raw)
assert "bye" not in result.raw.lower()
assert "hi" in result.raw.lower() or "hello" in result.raw.lower()
@@ -640,12 +694,12 @@ def test_agent_use_specific_tasks_output_as_context(capsys):
def test_agent_step_callback():
class StepCallback:
def callback(self, step):
print(step)
pass
with patch.object(StepCallback, "callback") as callback:
@tool
def learn_about_AI(topic) -> str:
def learn_about_AI() -> str:
"""Useful for when you need to learn about AI to write an paragraph about it."""
return "AI is a very broad field."
@@ -672,36 +726,51 @@ def test_agent_step_callback():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_function_calling_llm():
from langchain_openai import ChatOpenAI
llm = "gpt-4o"
llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
@tool
def learn_about_AI() -> str:
"""Useful for when you need to learn about AI to write an paragraph about it."""
return "AI is a very broad field."
with patch.object(llm.client, "create", wraps=llm.client.create) as private_mock:
agent1 = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
tools=[learn_about_AI],
llm="gpt-4o",
max_iter=2,
function_calling_llm=llm,
)
@tool
def learn_about_AI(topic) -> str:
"""Useful for when you need to learn about AI to write an paragraph about it."""
return "AI is a very broad field."
essay = Task(
description="Write and then review an small paragraph on AI until it's AMAZING",
expected_output="The final paragraph.",
agent=agent1,
)
tasks = [essay]
crew = Crew(agents=[agent1], tasks=tasks)
from unittest.mock import patch, Mock
import instructor
agent1 = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
tools=[learn_about_AI],
llm=ChatOpenAI(model="gpt-4-0125-preview"),
function_calling_llm=llm,
)
essay = Task(
description="Write and then review an small paragraph on AI until it's AMAZING",
expected_output="The final paragraph.",
agent=agent1,
)
tasks = [essay]
crew = Crew(agents=[agent1], tasks=tasks)
with patch.object(instructor, "from_litellm") as mock_from_litellm:
mock_client = Mock()
mock_from_litellm.return_value = mock_client
mock_chat = Mock()
mock_client.chat = mock_chat
mock_completions = Mock()
mock_chat.completions = mock_completions
mock_create = Mock()
mock_completions.create = mock_create
crew.kickoff()
private_mock.assert_called()
mock_from_litellm.assert_called()
mock_create.assert_called()
calls = mock_create.call_args_list
assert any(
call.kwargs.get("model") == "gpt-4o" for call in calls
), "Instructor was not created with the expected model"
def test_agent_count_formatting_error():
@@ -714,8 +783,7 @@ def test_agent_count_formatting_error():
verbose=True,
)
parser = CrewAgentParser()
parser.agent = agent1
parser = CrewAgentParser(agent=agent1)
with patch.object(Agent, "increment_formatting_errors") as mock_count_errors:
test_text = "This text does not match expected formats."
@@ -751,7 +819,6 @@ def test_tool_result_as_answer_is_the_final_answer_for_the_agent():
crew = Crew(agents=[agent1], tasks=tasks)
result = crew.kickoff()
print("RESULT: ", result.raw)
assert result.raw == "Howdy!"
@@ -792,22 +859,6 @@ def test_tool_usage_information_is_appended_to_agent():
]
def test_agent_llm_uses_token_calc_handler_with_llm_has_model_name():
agent1 = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
verbose=True,
)
assert len(agent1.llm.callbacks) == 1
assert agent1.llm.callbacks[0].__class__.__name__ == "TokenCalcHandler"
assert agent1.llm.callbacks[0].model_name == "gpt-4o"
assert (
agent1.llm.callbacks[0].token_cost_process.__class__.__name__ == "TokenProcess"
)
def test_agent_definition_based_on_dict():
config = {
"role": "test role",
@@ -846,7 +897,7 @@ def test_agent_human_input():
)
with patch.object(CrewAgentExecutor, "_ask_human_input") as mock_human_input:
mock_human_input.return_value = "Hello"
mock_human_input.return_value = "Don't say hi, say Hello instead!"
output = agent.execute_task(task)
mock_human_input.assert_called_once()
assert output == "Hello"
@@ -870,6 +921,31 @@ def test_interpolate_inputs():
assert agent.backstory == "I am the master of nothing"
def test_not_using_system_prompt():
agent = Agent(
role="{topic} specialist",
goal="Figure {goal} out",
backstory="I am the master of {role}",
use_system_prompt=False,
)
agent.create_agent_executor()
assert not agent.agent_executor.prompt.get("user")
assert not agent.agent_executor.prompt.get("system")
def test_using_system_prompt():
agent = Agent(
role="{topic} specialist",
goal="Figure {goal} out",
backstory="I am the master of {role}",
)
agent.create_agent_executor()
assert agent.agent_executor.prompt.get("user")
assert agent.agent_executor.prompt.get("system")
def test_system_and_prompt_template():
agent = Agent(
role="{topic} specialist",
@@ -886,13 +962,11 @@ def test_system_and_prompt_template():
{{ .Response }}<|eot_id|>""",
)
template = agent.agent_executor.agent.dict()["runnable"]["middle"][0]["template"]
assert (
template
== """<|start_header_id|>system<|end_header_id|>
expected_prompt = """<|start_header_id|>system<|end_header_id|>
You are {role}. {backstory}
Your personal goal is: {goal}To give my best complete final answer to the task use the exact following format:
Your personal goal is: {goal}
To give my best complete final answer to the task use the exact following format:
Thought: I now can give a great answer
Final Answer: my best complete final answer to the task.
@@ -906,12 +980,22 @@ Current Task: {input}
Begin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!
Thought:
{agent_scratchpad}<|eot_id|>
Thought:<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
"""
)
with patch.object(CrewAgentExecutor, "_format_prompt") as mock_format_prompt:
mock_format_prompt.return_value = expected_prompt
# Trigger the _format_prompt method
agent.agent_executor._format_prompt("dummy_prompt", {})
# Assert that _format_prompt was called
mock_format_prompt.assert_called_once()
# Assert that the returned prompt matches the expected prompt
assert mock_format_prompt.return_value == expected_prompt
@patch("crewai.agent.CrewTrainingHandler")
@@ -1000,16 +1084,18 @@ def test_agent_max_retry_limit():
[
mock.call(
{
"input": "Say the word: Hi\n\nThis is the expect criteria for your final answer: The word: Hi \n you MUST return the actual complete content as the final answer, not a summary.",
"input": "Say the word: Hi\n\nThis is the expect criteria for your final answer: The word: Hi\nyou MUST return the actual complete content as the final answer, not a summary.",
"tool_names": "",
"tools": "",
"ask_for_human_input": True,
}
),
mock.call(
{
"input": "Say the word: Hi\n\nThis is the expect criteria for your final answer: The word: Hi \n you MUST return the actual complete content as the final answer, not a summary.",
"input": "Say the word: Hi\n\nThis is the expect criteria for your final answer: The word: Hi\nyou MUST return the actual complete content as the final answer, not a summary.",
"tool_names": "",
"tools": "",
"ask_for_human_input": True,
}
),
]
@@ -1024,11 +1110,14 @@ def test_handle_context_length_exceeds_limit():
backstory="test backstory",
)
original_action = AgentAction(
tool="test_tool", tool_input="test_input", log="test_log"
tool="test_tool",
tool_input="test_input",
text="test_log",
thought="test_thought",
)
with patch.object(
CrewAgentExecutor, "_iter_next_step", wraps=agent.agent_executor._iter_next_step
CrewAgentExecutor, "invoke", wraps=agent.agent_executor.invoke
) as private_mock:
task = Task(
description="The final answer is 42. But don't give it yet, instead keep using the `get_final_answer` tool.",
@@ -1038,25 +1127,23 @@ def test_handle_context_length_exceeds_limit():
task=task,
)
private_mock.assert_called_once()
with patch("crewai.agents.executor.click") as mock_prompt:
mock_prompt.return_value = "y"
with patch.object(
CrewAgentExecutor, "_handle_context_length"
) as mock_handle_context:
mock_handle_context.side_effect = ValueError(
"Context length limit exceeded"
with patch.object(
CrewAgentExecutor, "_handle_context_length"
) as mock_handle_context:
mock_handle_context.side_effect = ValueError(
"Context length limit exceeded"
)
long_input = "This is a very long input. " * 10000
# Attempt to handle context length, expecting the mocked error
with pytest.raises(ValueError) as excinfo:
agent.agent_executor._handle_context_length(
[(original_action, long_input)]
)
long_input = "This is a very long input. " * 10000
# Attempt to handle context length, expecting the mocked error
with pytest.raises(ValueError) as excinfo:
agent.agent_executor._handle_context_length(
[(original_action, long_input)]
)
assert "Context length limit exceeded" in str(excinfo.value)
mock_handle_context.assert_called_once()
assert "Context length limit exceeded" in str(excinfo.value)
mock_handle_context.assert_called_once()
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -1065,11 +1152,12 @@ def test_handle_context_length_exceeds_limit_cli_no():
role="test role",
goal="test goal",
backstory="test backstory",
sliding_context_window=False,
)
task = Task(description="test task", agent=agent, expected_output="test output")
with patch.object(
CrewAgentExecutor, "_iter_next_step", wraps=agent.agent_executor._iter_next_step
CrewAgentExecutor, "invoke", wraps=agent.agent_executor.invoke
) as private_mock:
task = Task(
description="The final answer is 42. But don't give it yet, instead keep using the `get_final_answer` tool.",
@@ -1079,10 +1167,8 @@ def test_handle_context_length_exceeds_limit_cli_no():
task=task,
)
private_mock.assert_called_once()
with patch("crewai.agents.executor.click") as mock_prompt:
mock_prompt.return_value = "n"
pytest.raises(SystemExit)
with patch.object(
CrewAgentExecutor, "_handle_context_length"
) as mock_handle_context:
mock_handle_context.assert_not_called()
pytest.raises(SystemExit)
with patch.object(
CrewAgentExecutor, "_handle_context_length"
) as mock_handle_context:
mock_handle_context.assert_not_called()

View File

@@ -24,7 +24,7 @@ def test_delegate_work():
assert (
result
== "As a researcher, my opinions are based on facts and extensive study. Regarding AI Agents, they are a fundamental part of the advancement in technology. AI agents are essentially the entities that perceive their environment and take actions to maximize their chances of success. They have a wide range of applications from self-driving cars to intelligent personal assistants like Siri and Alexa. They have the potential to greatly improve our lives by automating mundane tasks, helping us make better decisions, and even potentially solving complex problems. However, like any technology, they have their own set of challenges such as the risk of job displacement and the ethical implications of their use. My goal as a researcher is not to love or hate AI agents, but to understand them, their benefits, and their implications. It's about maintaining an objective view in order to provide the most accurate and comprehensive analysis."
== "While it's a common misconception that I might \"hate\" AI agents, the reality is much more nuanced. As an expert in technology research, especially in the realm of AI, I have a deep appreciation for both the potential and the challenges that AI agents present.\n\nAI agents, which can be broadly defined as autonomous software entities that perform tasks on behalf of users or other programs, are transforming numerous aspects of our daily lives and industries. Here are a few key points to consider:\n\n1. **Advantages of AI Agents**:\n - **Automation and Efficiency**: AI agents can handle repetitive and time-consuming tasks efficiently, freeing up human workers to focus on more complex and creative activities.\n - **Availability**: Unlike humans, AI agents can operate 24/7 without breaks, providing continuous service and support.\n - **Scalability**: AI agents can be deployed across different platforms and industries, scaling solutions quickly and effectively.\n - **Data Analysis and Insights**: They can process vast amounts of data rapidly, providing insights that would be difficult, if not impossible, for humans to derive on their own.\n\n2. **Challenges and Concerns**:\n - **Ethical Implications**: There are significant concerns regarding the ethical use of AI agents, particularly related to privacy, bias, and decision-making transparency.\n - **Job Displacement**: While AI agents increase efficiency, they also raise concerns about potential job displacement in certain sectors.\n - **Dependence and Reliability**: Over-reliance on AI agents could lead to vulnerabilities if the technology fails or is compromised.\n\n3. **Looking Forward**:\n - **Collaboration Between Humans and AI**: The best outcomes are likely to come from a hybrid approach where AI agents augment human capabilities rather than replace them outright.\n - **Regulation and Standards**: There is a growing need for clear regulations and standards to ensure the ethical development and deployment of AI agents.\n - **Continuous Improvement**: The technology behind AI agents is constantly evolving. Continuous research and development are essential to address current limitations and maximize benefits.\n\nIn conclusion, my perspective on AI agents is not one of disdain but rather a cautious optimism. They hold incredible promise for transforming various sectors and improving efficiencies, but this potential comes with significant responsibilities. It's essential to address the ethical, social, and technical challenges to harness the full benefits of AI agents while mitigating potential downsides. Thus, thoughtful consideration and ongoing dialogue about the role of AI agents in society are crucial."
)
@@ -38,7 +38,7 @@ def test_delegate_work_with_wrong_co_worker_variable():
assert (
result
== "AI Agents are essentially computer programs that are designed to perform tasks autonomously, with the ability to adapt and learn from their environment. These tasks range from simple ones such as setting alarms, to more complex ones like diagnosing diseases or driving cars. AI agents have the potential to revolutionize many industries, making processes more efficient and accurate. \n\nHowever, like any technology, AI agents have their downsides. They can be susceptible to biases based on the data they're trained on and they can also raise privacy concerns. Moreover, the widespread adoption of AI agents could result in significant job displacement in certain industries.\n\nDespite these concerns, it's important to note that the development and use of AI agents are heavily dependent on human decisions and policies. Therefore, the key to harnessing the benefits of AI agents while mitigating the risks lies in responsible and thoughtful development and implementation.\n\nWhether one 'loves' or 'hates' AI agents often comes down to individual perspectives and experiences. But as a researcher, it is my job to provide balanced and factual information, so I hope this explanation helps you understand better what AI Agents are and the implications they have."
== "It's not accurate to say that I hate AI agents. My stance is more nuanced and rooted in a deep understanding of their capabilities, limitations, and the potential impact on various sectors.\n\nAI agents are software entities that perform tasks autonomously on behalf of users. They are designed to mimic human decision-making and problem-solving abilities. The advances in machine learning, natural language processing, and data analytics have significantly improved the performance of AI agents, making them valuable tools for automation, customer service, data analysis, and more.\n\nHowever, my concerns about AI agents stem from several key points:\n\n1. **Ethical Considerations**: AI agents can perpetuate biases present in their training data. If the data used to train these agents contain biases, whether related to gender, race, or other factors, the AI can replicate and even amplify these biases in its operations. This has serious ethical implications for fairness and equality.\n\n2. **Job Displacement**: While AI agents can enhance efficiency and productivity, they can also displace human workers. Many routine and repetitive tasks previously performed by humans are now automated, leading to job losses in certain sectors. This aspect calls for a balanced approach where human roles are redefined rather than completely eliminated.\n\n3. **Transparency and Accountability**: AI agents often operate as \"black boxes,\" meaning their decision-making processes are not transparent. If an AI agent makes an error—whether in financial transactions, medical diagnoses, or legal decisions—it can be challenging to understand why the error occurred and who is responsible. Ensuring transparency and accountability is crucial for trust and reliability.\n\n4. **Security and Privacy**: AI agents often deal with vast amounts of personal data. Ensuring the security and privacy of this data is paramount. There are risks associated with data breaches and the misuse of personal information, which can have significant repercussions for individuals and organizations.\n\nDespite these concerns, I also recognize the immense potential of AI agents to drive innovation, improve efficiency, and provide solutions to complex problems. The key is to develop and deploy AI responsibly, with robust regulatory frameworks and ethical guidelines.\n\nIn conclusion, I don't hate AI agents. Instead, I advocate for a balanced perspective that acknowledges their benefits while addressing their challenges. By doing so, we can harness the power of AI agents for the greater good, ensuring they contribute positively to society."
)
@@ -52,7 +52,7 @@ def test_ask_question():
assert (
result
== "As an AI researcher, I don't have personal feelings or emotions like love or hate. However, I recognize the importance of AI Agents in today's technological landscape. They have the potential to greatly enhance our lives and make tasks more efficient. At the same time, it is crucial to consider the ethical implications and societal impacts that come with their use. My role is to provide objective research and analysis on these topics."
== "As a researcher specialized in technology with a particular focus on AI and AI agents, my stance on AI agents isn't rooted in emotional responses like hate or love. Rather, it is grounded in objective analysis and thoughtful consideration of their capabilities, ethics, and impact on society.\n\nAI agents are powerful tools that can transform various industries, from healthcare and finance to customer service and manufacturing. They have the potential to increase efficiency, improve decision-making, and even solve complex problems that were previously insurmountable. For instance, AI agents can analyze vast amounts of data far more quickly and accurately than humans, leading to advancements in medical research and diagnostics.\n\nHowever, it is equally important to recognize the ethical and societal implications of AI agents. Issues such as data privacy, algorithmic bias, and the potential displacement of jobs need to be carefully managed. As a researcher, my role is to study these aspects comprehensively, provide balanced insights, and advocate for responsible development and deployment of AI technologies.\n\nSo, to address your question directly: I don't hate AI agents. I appreciate their potential and am keenly aware of their challenges. My goal is to contribute to a future where AI agents are designed and used ethically and effectively to benefit humanity."
)
@@ -66,7 +66,7 @@ def test_ask_question_with_wrong_co_worker_variable():
assert (
result
== "No, I don't hate AI agents. In fact, I find them quite fascinating. They are powerful tools that can greatly assist in various tasks, including my research. As a technology researcher, AI and AI agents are subjects of interest to me due to their potential in advancing our understanding and capabilities in various fields. My supposed love for them stems from this professional interest and the potential they hold."
== "As an expert researcher specialized in technology, I have a profound appreciation for AI and AI agents. They represent a pinnacle of human innovation and have the potential to greatly enhance various aspects of our lives. AI agents can assist in automating mundane tasks, providing insights through data analysis, and even offering companionship to those in need. However, it's also essential to approach AI with a balanced perspective, acknowledging both its strengths and the ethical considerations it raises. In short, I don't hate AI agents; I recognize their immense potential and value while being mindful of the responsibilities that come with their development and deployment."
)
@@ -80,7 +80,7 @@ def test_delegate_work_withwith_coworker_as_array():
assert (
result
== "AI Agents are software entities which operate in an environment to achieve a particular goal. They can perceive their environment, reason about it, and take actions to fulfill their objectives. This includes everything from chatbots to self-driving cars. They are designed to act autonomously to a certain extent and are capable of learning from their experiences to improve their performance over time.\n\nDespite some people's fears or dislikes, AI Agents are not inherently good or bad. They are tools, and like any tool, their value depends on how they are used. For instance, AI Agents can be used to automate repetitive tasks, provide customer support, or analyze vast amounts of data far more quickly and accurately than a human could. They can also be used in ways that invade privacy or replace jobs, which is often where the apprehension comes from.\n\nThe key is to create regulations and ethical guidelines for the use of AI Agents, and to continue researching and developing them in a way that maximizes their benefits and minimizes their potential harm. From a research perspective, there's a lot of potential in AI Agents, and it's a fascinating field to be a part of."
== "In addressing your query about AI agents, it's crucial to clarify my stance. While I don't \"hate\" AI agents, I have a nuanced perspective that balances both their potential benefits and inherent drawbacks. This balanced view is essential for understanding the complex role that AI agents play in today's technology landscape.\n\nAI agents are remarkable tools that can automate tasks, provide intelligent recommendations, and enhance user experiences. They harness advancements in machine learning, natural language processing, and big data analytics to perform a wide range of functions, from personal assistants like Siri and Alexa to sophisticated customer service bots and autonomous systems in various industries.\n\nThe Potential Benefits of AI Agents:\n1. **Automation and Efficiency**: AI agents can handle repetitive and mundane tasks, freeing up human workers to focus on more creative and high-value activities. This can lead to increased productivity and operational efficiency in businesses.\n2. **Data-Driven Insights**: By analyzing vast amounts of data, AI agents can provide actionable insights that inform decision-making processes in areas like finance, healthcare, and marketing.\n3. **Personalization**: AI agents can tailor recommendations and services based on individual user preferences and behaviors, enhancing user satisfaction and engagement.\n4. **24/7 Availability**: Unlike human workers, AI agents can operate continuously without the need for breaks, providing round-the-clock support and services.\n\nHowever, there are also several challenges and concerns associated with AI agents that cannot be overlooked:\n\n1. **Ethical Considerations**: The deployment of AI agents raises ethical questions about privacy, surveillance, and the potential for bias in their algorithms. Ensuring that AI systems are transparent and fair is a critical issue.\n2. **Job Displacement**: While AI agents can increase efficiency, they also have the potential to displace human workers, leading to concerns about unemployment and the need for workforce reskilling.\n3. **Trust and Reliability**: Building trust in AI systems is essential. Users need to be confident that AI agents will perform their tasks accurately and reliably, without unexpected failures or errors.\n4. **Control and Accountability**: Determining who is accountable when AI agents make errors or cause harm is a complex issue, especially when these systems operate autonomously or with minimal human oversight.\n\nIn summary, my view on AI agents is not one of antagonism but of cautious optimism. I recognize the transformative potential of AI agents in various domains, but I also emphasize the importance of addressing the ethical, economic, and technical challenges they present. Our goal should be to develop and deploy AI agents in a way that maximizes their benefits while mitigating their risks. This balanced approach will ensure that AI agents can be a positive force in society, driving innovation and improving quality of life without compromising ethical standards or societal values."
)
@@ -94,7 +94,7 @@ def test_ask_question_with_coworker_as_array():
assert (
result
== "I don't hate or love AI agents. My passion lies in understanding them, researching about their capabilities, implications, and potential for development. As a researcher, my feelings toward AI are more of fascination and interest rather than personal love or hate."
== "No, I do not hate AI agents. In fact, I appreciate their potential and the positive impact they can have in various fields. AI agents can perform tasks that are either too mundane or too complex for humans, thereby increasing productivity and allowing us to focus on more creative and strategic activities. Moreover, they can process and analyze vast amounts of data much more quickly and accurately than humans, leading to better decision-making and innovations. As a researcher specialized in technology, I am always excited to see advancements in AI and AI agents, as they push the boundaries of what technology can achieve. So, yes, I do love the possibilities they bring to our world."
)

View File

@@ -1,26 +1,18 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "You are researcher.\nYou''re
an expert researcher, specialized in technology\n\nYour personal goal is: make
the best research and analysis on content about AI and AI agentsTOOLS:\n------\nYou
have access to only the following tools:\n\n\n\nTo use a tool, please use the
exact following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction:
the tool you wanna use, should be one of [], just the name.\nAction Input: Any
and all relevant information input and context for using the tool\nObservation:
the result of using the tool\n```\n\nWhen you have a response for your task,
or if you do not need to use a tool, you MUST use the format:\n\n```\nThought:
Do I need to use a tool? No\nFinal Answer: [your response here]```This is the
summary of your work so far:\nThe human asks the AI''s opinion on AI Agents,
suggesting that the AI dislikes them. The AI, identifying as a researcher, clarifies
that its opinions are based on research and study. It views AI Agents as a key
part of technological advancement, with potential to improve lives through automation
and decision-making assistance. However, it also acknowledges challenges, including
job displacement risk and ethical implications. The AI aims to maintain an objective
view for accurate analysis, rather than loving or hating AI Agents.Begin! This
is VERY important to you, your job depends on it!\n\nCurrent Task: do you hate
AI Agents?\nThis is the context you''re working with:\nI heard you LOVE them\n"}],
"model": "gpt-4", "n": 1, "stop": ["\nObservation"], "stream": true, "temperature":
0.7}'
body: '{"messages": [{"role": "system", "content": "You are researcher. You''re
an expert researcher, specialized in technology\nYour personal goal is: make
the best research and analysis on content about AI and AI agents\nTo give my
best complete final answer to the task use the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent
Task: do you hate AI Agents?\n\nThis is the expect criteria for your final answer:
Your best answer to your coworker asking you this, accounting for the context
shared.\nyou MUST return the actual complete content as the final answer, not
a summary.\n\nThis is the context you''re working with:\nI heard you LOVE them\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
@@ -29,16 +21,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1607'
- '1049'
content-type:
- application/json
cookie:
- __cf_bm=h0bOt9YDs6yXE_oMKhs3agQtymHaKWVUaKmUU1JTjF0-1707817571-1-ASLnLk23NWsEgocWyiwez3Ekvu0XRYdiq/aaJBeVWO+FtIxo1aStNrgbDNkJJMN6zBfVppkCrs6/YvM5SPomQkw=;
_cfuvid=JXBcxKdSP7U2jrK3OVg2NRCw5efh3.IEvakR_W8ac90-1707817571102-0-604800000
- __cf_bm=.8.5x0rqghNdtUCxfmwblfhuXiKyPm_cRq.NIa0heL0-1726642369-1.0.1.1-VbsAGqjOt45ZD50K2QW2oZrByLLIh6w8K4nAkLthTbXgU5qTgdf0mhvFId8Q2F3DcHUw9E72lVIZEN.YVlYINQ;
_cfuvid=o5s1Ux898x0eFNtzUCnG996iqyX3LgXDXq99C5oqnVE-1726642369248-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.12.0
- OpenAI/Python 1.46.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -48,7 +40,9 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.12.0
- 1.46.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -56,523 +50,37 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Do"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
need"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
No"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
As"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
an"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
AI"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
researcher"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
don"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"''t"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
have"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
personal"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
feelings"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
or"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
emotions"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
like"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
love"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
or"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
hate"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
However"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
recognize"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
importance"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
of"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
AI"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Agents"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
in"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
today"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"''s"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
technological"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
landscape"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
They"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
have"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
potential"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
greatly"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
enhance"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
our"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
lives"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
and"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
make"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tasks"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
more"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
efficient"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
At"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
same"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
time"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
it"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
is"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
crucial"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
consider"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
ethical"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
implications"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
and"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
societal"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
impacts"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
that"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
come"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
with"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
their"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
My"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
role"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
is"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
provide"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
objective"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
research"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
and"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
analysis"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
on"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
these"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
topics"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
content: "{\n \"id\": \"chatcmpl-A8itGodu0IQ3bZNxRStGoqFBPnObt\",\n \"object\":
\"chat.completion\",\n \"created\": 1726642546,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I now can give a great answer\\nFinal
Answer: As a researcher specialized in technology with a particular focus on
AI and AI agents, my stance on AI agents isn't rooted in emotional responses
like hate or love. Rather, it is grounded in objective analysis and thoughtful
consideration of their capabilities, ethics, and impact on society.\\n\\nAI
agents are powerful tools that can transform various industries, from healthcare
and finance to customer service and manufacturing. They have the potential to
increase efficiency, improve decision-making, and even solve complex problems
that were previously insurmountable. For instance, AI agents can analyze vast
amounts of data far more quickly and accurately than humans, leading to advancements
in medical research and diagnostics.\\n\\nHowever, it is equally important to
recognize the ethical and societal implications of AI agents. Issues such as
data privacy, algorithmic bias, and the potential displacement of jobs need
to be carefully managed. As a researcher, my role is to study these aspects
comprehensively, provide balanced insights, and advocate for responsible development
and deployment of AI technologies.\\n\\nSo, to address your question directly:
I don't hate AI agents. I appreciate their potential and am keenly aware of
their challenges. My goal is to contribute to a future where AI agents are designed
and used ethically and effectively to benefit humanity.\\n\\n\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 199,\n \"completion_tokens\":
257,\n \"total_tokens\": 456,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_a5d11b2ef2\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 854c250fdf816803-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Type:
- text/event-stream
Date:
- Tue, 13 Feb 2024 09:46:33 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
access-control-allow-origin:
- '*'
alt-svc:
- h3=":443"; ma=86400
openai-model:
- gpt-4-0613
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '447'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15724800; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299621'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 75ms
x-request-id:
- req_b28e4962a1ae3878134d248ab1257c30
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Progressively summarize the
lines of conversation provided, adding onto the previous summary returning a
new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks
of artificial intelligence. The AI thinks artificial intelligence is a force
for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial
intelligence is a force for good?\nAI: Because artificial intelligence will
help humans reach their full potential.\n\nNew summary:\nThe human asks what
the AI thinks of artificial intelligence. The AI thinks artificial intelligence
is a force for good because it will help humans reach their full potential.\nEND
OF EXAMPLE\n\nCurrent summary:\nThe human asks the AI''s opinion on AI Agents,
suggesting that the AI dislikes them. The AI, identifying as a researcher, clarifies
that its opinions are based on research and study. It views AI Agents as a key
part of technological advancement, with potential to improve lives through automation
and decision-making assistance. However, it also acknowledges challenges, including
job displacement risk and ethical implications. The AI aims to maintain an objective
view for accurate analysis, rather than loving or hating AI Agents.\n\nNew lines
of conversation:\nHuman: do you hate AI Agents?\nThis is the context you''re
working with:\nI heard you LOVE them\nAI: As an AI researcher, I don''t have
personal feelings or emotions like love or hate. However, I recognize the importance
of AI Agents in today''s technological landscape. They have the potential to
greatly enhance our lives and make tasks more efficient. At the same time, it
is crucial to consider the ethical implications and societal impacts that come
with their use. My role is to provide objective research and analysis on these
topics.\n\nNew summary:"}], "model": "gpt-4", "n": 1, "stream": false, "temperature":
0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1909'
content-type:
- application/json
cookie:
- __cf_bm=h0bOt9YDs6yXE_oMKhs3agQtymHaKWVUaKmUU1JTjF0-1707817571-1-ASLnLk23NWsEgocWyiwez3Ekvu0XRYdiq/aaJBeVWO+FtIxo1aStNrgbDNkJJMN6zBfVppkCrs6/YvM5SPomQkw=;
_cfuvid=JXBcxKdSP7U2jrK3OVg2NRCw5efh3.IEvakR_W8ac90-1707817571102-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA1RTTVMbMQy951do9sJlyRCgfN1oL+XQC+3Qr+kwildZi3gtj6UNDQz/vWNvIOXi
g6Tn96QnPc8AGu6aK2icR3NDCocX+WH14euXix+Ln+Pt7V16ePoUju6W4+LX5cdl0xaELB/I2Stq
7mRIgYwlTmmXCY3Kr4vzo/OLxfmHy8uaGKSjUGB9ssPTw6OzxckO4YUdaXMFv2cAAM/1LdpiR3+b
KzhqXyMDqWJPzdVbEUCTJZRIg6qshtGadp90Eo1ilfvNE/hxwAioawXzBNc3BwqSOLJEkAjXN3Dd
UzRtQce+JzWOPZhH25VDxxp4TRU+zOFbjbbAHUXj1baUowJCJiXMzlNuwQXMvOIKQgO2N04FzARL
VOoK/SsIMHagNnbbOdwYbJgeda9tIljTFhJmA1mBkfNRgvTsMAB2G4yOBorWwiObhyRlBowBTICH
lGVDEHhTFWUZew84mgxYbKzkHTlWlng44HrqaZqtozl8lkfalL7YAIMKoFtHeQzU9aTgPIZAsSdt
gaMLY1fwD7Iso0sBJ2GQWdeVicxX1TykwK4q0Dl891Rtog54VYiCFLWSwaOR/m/UzphMbJRrsgw4
oFuX0STKKhED0CD1b1iO9l6xeeJc+CXXDoEjmHS4PdA6WQgYO3WYqAUakkflp2ktCCJRByvJMF0F
b+i9iRgxbJW1uDvxvPZbLRbHZFPz6EznzW5xX942PkifsizLdcQxhLf4iiOrv8+EKrFst5qkCf4y
A/hTL2t8dyxNyjIkuzdZUywfnpzuLqvZH/E+uzg+3mVNDMM+cXp2PNtJbHSrRsP9imNPOWWul1aE
zl5m/wAAAP//AwDqMNM4YAQAAA==
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 854c253c3dad6803-SJC
Cache-Control:
- no-cache, must-revalidate
- 8c4f6ea85a70a67a-MIA
Connection:
- keep-alive
Content-Encoding:
@@ -580,40 +88,39 @@ interactions:
Content-Type:
- application/json
Date:
- Tue, 13 Feb 2024 09:46:49 GMT
- Wed, 18 Sep 2024 06:55:49 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
access-control-allow-origin:
- '*'
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-model:
- gpt-4-0613
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
- crewai-iuxna1
openai-processing-ms:
- '10426'
- '2905'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15724800; includeSubDomains
- max-age=15552000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299539'
- '29999755'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 92ms
- 0s
x-request-id:
- req_60e80bfacb6ee3f3056f9b0120f941e6
status:
code: 200
message: OK
- req_ee404d560531f43e473f76818a52432f
http_version: HTTP/1.1
status_code: 200
version: 1


@@ -1,37 +1,36 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "You are researcher. You''re
body: '{"messages": [{"role": "system", "content": "You are researcher. You''re
an expert researcher, specialized in technology\nYour personal goal is: make
the best research and analysis on content about AI and AI agentsTo give my best
complete final answer to the task use the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: my best complete final answer to
the task.\nYour final answer must be the great and the most complete as possible,
it must be outcome described.\n\nI MUST use these formats, my job depends on
it!\nCurrent Task: do you hate AI Agents?\n\nThis is the expect criteria for
your final answer: Your best answer to your coworker asking you this, accounting
for the context shared. \n you MUST return the actual complete content as the
final answer, not a summary.\n\nThis is the context you''re working with:\nI
heard you LOVE them\n\nBegin! This is VERY important to you, use the tools available
and give your best Final Answer, your job depends on it!\n\nThought:\n"}], "model":
"gpt-4", "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
the best research and analysis on content about AI and AI agents\nTo give my
best complete final answer to the task use the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent
Task: do you hate AI Agents?\n\nThis is the expect criteria for your final answer:
Your best answer to your coworker asking you this, accounting for the context
shared.\nyou MUST return the actual complete content as the final answer, not
a summary.\n\nThis is the context you''re working with:\nI heard you LOVE them\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
- gzip, deflate
connection:
- keep-alive
content-length:
- '1103'
- '1049'
content-type:
- application/json
cookie:
- _cfuvid=r7d0l_hEOM639ffTY.owt.2ZeXQN7IgsM0JmFM6FEiE-1715576040584-0.0.1.1-604800000;
__cf_bm=bHXznnX6IEnPWC4UFqVw22jiTLBlLvClKsTW4F99UPc-1715613132-1.0.1.1-5k2TYkm6.lsjrA1MkQ4uD2GxUGKPmwNVeYL_sKTpAPJ_trVvN3.uNZS4HljKfVPlku1XDNCYfU4y43hlt3e.RQ
- __cf_bm=.8.5x0rqghNdtUCxfmwblfhuXiKyPm_cRq.NIa0heL0-1726642369-1.0.1.1-VbsAGqjOt45ZD50K2QW2oZrByLLIh6w8K4nAkLthTbXgU5qTgdf0mhvFId8Q2F3DcHUw9E72lVIZEN.YVlYINQ;
_cfuvid=o5s1Ux898x0eFNtzUCnG996iqyX3LgXDXq99C5oqnVE-1726642369248-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.25.1
- OpenAI/Python 1.46.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -41,7 +40,9 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.25.1
- 1.46.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -49,379 +50,68 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"As"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
researcher"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
specialized"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
in"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
technology"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
specifically"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
AI"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
and"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
AI"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
agents"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
my"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
personal"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
feelings"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
towards"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
them"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
are"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
not"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
based"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
on"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
emotion"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
but"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
on"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
professional"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
interest"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
and"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
intellectual"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
curiosity"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
\n\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
don"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"''t"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
hate"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
or"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
love"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
AI"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
agents"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
My"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
passion"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
lies"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
in"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
understanding"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
them"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
researching"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
about"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
their"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
capabilities"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
implications"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
and"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
potential"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
for"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
development"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
As"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
researcher"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
my"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
feelings"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
toward"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
AI"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
are"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
more"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
of"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
fascination"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
and"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
interest"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
rather"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
than"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
personal"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
love"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
or"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
hate"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ORdkG4FJphn6RbqXUgZlUbRtXBIO","object":"chat.completion.chunk","created":1715613148,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
content: "{\n \"id\": \"chatcmpl-A8itTPbch8sPUbRd8BJdzF4NXLPhg\",\n \"object\":
\"chat.completion\",\n \"created\": 1726642559,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I now can give a great answer\\nFinal
Answer: No, I do not hate AI agents. In fact, I appreciate their potential and
the positive impact they can have in various fields. AI agents can perform tasks
that are either too mundane or too complex for humans, thereby increasing productivity
and allowing us to focus on more creative and strategic activities. Moreover,
they can process and analyze vast amounts of data much more quickly and accurately
than humans, leading to better decision-making and innovations. As a researcher
specialized in technology, I am always excited to see advancements in AI and
AI agents, as they push the boundaries of what technology can achieve. So, yes,
I do love the possibilities they bring to our world.\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 199,\n \"completion_tokens\":
144,\n \"total_tokens\": 343,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_a5d11b2ef2\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 883396429922624c-GRU
- 8c4f6efe3d94a67a-MIA
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- text/event-stream; charset=utf-8
- application/json
Date:
- Mon, 13 May 2024 15:12:29 GMT
- Wed, 18 Sep 2024 06:56:01 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '514'
- '1836'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15724800; includeSubDomains
- max-age=15552000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299745'
- '29999755'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 50ms
- 0s
x-request-id:
- req_5a62449ef9052c3a350f1b47f268bbcc
status:
code: 200
message: OK
- req_51693c1b5a37ada8922542e71cae13c2
http_version: HTTP/1.1
status_code: 200
version: 1

File diff suppressed because it is too large

tests/agent_tools/lol.py Normal file

Some files were not shown because too many files have changed in this diff