Compare commits

...

61 Commits

Author SHA1 Message Date
Lorenze Jay
0cc37e0d72 WIP conditional tasks, added test and the logic flow, need to improve things within sequential since DRY best practices can be improved 2024-07-05 08:40:58 -07:00
Lorenze Jay
bb33e1813d WIP: sync with tasks 2024-07-03 14:17:57 -07:00
Lorenze Jay
96dc96d13c Merge branch 'pr-847' into lj/conditional-tasks-feat 2024-07-03 12:53:00 -07:00
Lorenze Jay
6efbe8c5a5 WIP: conditional task 2024-07-03 12:52:52 -07:00
Brandon Hancock
a3bdc09f2d Merge branch 'bugfix/kickoff-for-each-usage-metrics' into feature/kickoff-consistent-output 2024-07-03 11:36:06 -04:00
Brandon Hancock
bae9c70730 Merge branch 'main' into bugfix/kickoff-for-each-usage-metrics 2024-07-03 11:22:42 -04:00
Brandon Hancock
55af7e0f15 WIP. Needing team to review change 2024-07-03 11:09:19 -04:00
Braelyn Boynton
f47904134b Add back AgentOps as Optional Dependency (#543)
* implements agentops with a langchain handler, agent tracking and tool call recording

* track tool usage

* end session after completion

* track tool usage time

* better tool and llm tracking

* code cleanup

* make agentops optional

* optional dependency usage

* remove telemetry code

* optional agentops

* agentops version bump

* remove org key

* true dependency

* add crew org key to agentops

* cleanup

* Update pyproject.toml

* Revert "true dependency"

This reverts commit e52e8e9568.

* Revert "cleanup"

This reverts commit 7f5635fb9e.

* optional parent key

* agentops 0.1.5

* Revert "Revert "cleanup""

This reverts commit cea33d9a5d.

* Revert "Revert "true dependency""

This reverts commit 4d1b460b.

* cleanup

* Forcing version 0.1.5

* Update pyproject.toml

* agentops update

* noop

* add crew tag

* black formatting

* use langchain callback handler to support all LLMs

* agentops version bump

* track task evaluator

* merge upstream

* Fix typo in instruction en.json (#676)

* Enable search in docs (#663)

* Clarify text in docstring (#662)

* Update agent.py (#655)

Changed default model value from gpt-4 to gpt-4o.
Reasoning: gpt-4 costs $30 per million tokens while gpt-4o costs $5,
making it a more cost-friendly default option.

* Update README.md (#652)

Rework example so that if you use a custom LLM it doesn't throw code errors by uncommenting.

* Update BrowserbaseLoadTool.md (#647)

* Update crew.py (#644)

Fixed Type on line 53

* fixes #665 (#666)

* Added timestamp to logger (#646)

* Added timestamp to logger

Updated the logger.py file to include timestamps when logging output. For example:

 [2024-05-20 15:32:48][DEBUG]: == Working Agent: Researcher
 [2024-05-20 15:32:48][INFO]: == Starting Task: Research the topic
 [2024-05-20 15:33:22][DEBUG]: == [Researcher] Task output:

* Update tool_usage.py

* Revert "Update tool_usage.py"

This reverts commit 95d18d5b6f.

incorrect branch for this commit

* support skip auto end session

* conditional protect agentops use

* fix crew logger bug

* fix crew logger bug

* Update crew.py

* Update tool_usage.py

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Howard Gil <howardbgil@gmail.com>
Co-authored-by: Olivier Roberdet <niox5199@gmail.com>
Co-authored-by: Paul Sanders <psanders1@gmail.com>
Co-authored-by: Anudeep Kolluri <50168940+Anudeep-Kolluri@users.noreply.github.com>
Co-authored-by: Mike Heavers <heaversm@users.noreply.github.com>
Co-authored-by: Mish Ushakov <10400064+mishushakov@users.noreply.github.com>
Co-authored-by: theCyberTech - Rip&Tear <84775494+theCyberTech@users.noreply.github.com>
Co-authored-by: Saif Mahmud <60409889+vmsaif@users.noreply.github.com>
2024-07-02 21:52:15 -03:00
Salman Faroz
d72b00af3c Update Sequential.md (#849)
To resolve:
pydantic_core._pydantic_core.ValidationError: 1 validation error for Task
expected_output
Field required [type=missing, input_value=, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/missing

"Expected Output" is mandatory now as it forces people to be specific about the expected result and get better result


Refer to: https://github.com/joaomdmoura/crewAI/issues/308
2024-07-02 21:17:53 -03:00
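For context on the change this commit documents: a `Task` must now declare its expected output. A minimal sketch (field values are illustrative):

```python
from crewai import Task

# Omitting expected_output now raises the ValidationError quoted above.
task = Task(
    description="Summarize the article",
    expected_output="A 3-bullet summary of the key points",
)
```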
Taleb
bd053a98c7 Enhanced documentation for readability and clarity (#855)
- Added a "Parameters" column to attribute tables. Improved overall document formatting for enhanced readability and ease of use.

Thank you to the author for the great project and the excellent foundation provided!
2024-07-02 21:17:04 -03:00
Lorenze Jay
c18208ca59 fixed mixin (#831)
* fixed mixin

* WIP: fixing types

* type fixes on mixin
2024-07-02 21:16:26 -03:00
Lorenze Jay
4e8f69a7b0 Merge branch 'main' of github.com:joaomdmoura/crewAI into lj/conditional-tasks-feat 2024-07-02 15:39:08 -07:00
Brandon Hancock
e745094d73 Fixing missing function. Working on tests. 2024-07-02 15:31:32 -04:00
Lorenze Jay
60d0f56e2d WIP for conditional tasks 2024-07-02 09:06:15 -07:00
João Moura
acbe5af8ce preparing new version 2024-07-02 09:03:20 -07:00
Eduardo Chiarotti
c81146505a docs: Update training feature/code interpreter docs (#852)
* docs: remove training docs from README

* docs: add CodeinterpreterTool to docs and update docs

* docs: fix name of tool
2024-07-02 13:00:37 -03:00
João Moura
6b9a1d4040 adding link to docs 2024-07-01 18:41:31 -07:00
João Moura
508fbd49e9 preparing new version 2024-07-01 18:28:11 -07:00
Brandon Hancock
053d8a0449 Merge branch 'bugfix/kickoff-for-each-usage-metrics' into feature/kickoff-consistent-output 2024-07-01 18:27:05 -04:00
João Moura
e18a6c6bb8 updating tools 2024-07-01 15:25:29 -07:00
João Moura
16237ef393 rollback update to new version 2024-07-01 15:25:10 -07:00
João Moura
5332d02f36 preparing new version 2024-07-01 15:12:22 -07:00
João Moura
7258120a0d preparing new version 2024-07-01 15:10:13 -07:00
Brandon Hancock
0bfa549477 use BaseAgent instead of Agent where applicable 2024-07-01 17:22:46 -04:00
Brandon Hancock
5334e9e585 Fix linting errors 2024-07-01 17:09:50 -04:00
Brandon Hancock
68de393534 Fix renaming issue 2024-07-01 16:13:21 -04:00
Brandon Hancock
f36f73e035 Moving copy functionality from Agent to BaseAgent 2024-07-01 16:06:02 -04:00
Brandon Hancock
1f9166f61b Final cleanup. Ready for review. 2024-07-01 15:34:06 -04:00
Brandon Hancock
5a5276eb5d Add new tests 2024-07-01 15:29:08 -04:00
Brandon Hancock
60c8f86345 Clean up code for review 2024-07-01 14:56:42 -04:00
Brandon Hancock
6a47eb4f9e Merge branch 'main' into bugfix/kickoff-for-each-usage-metrics 2024-07-01 14:09:32 -04:00
Brandon Hancock
2efe16eac9 Merge in main to bugfix/kickoff-for-each-usage-metrics 2024-07-01 14:00:13 -04:00
Brandon Hancock
1d2827e9a5 Update parent crew who is managing for_each loop 2024-07-01 12:16:59 -04:00
João Moura
8b7bc69ba1 preparing new version 2024-07-01 08:41:13 -07:00
Brandon Hancock
5091712a2d WIP. It looks like usage metrics has always been broken for async 2024-07-01 11:28:50 -04:00
João Moura
5a807eb93f preparing new version 2024-07-01 06:08:46 -07:00
João Moura
130682c93b preparing new version 2024-07-01 05:48:47 -07:00
João Moura
02e29e4681 new docs 2024-07-01 05:32:22 -07:00
João Moura
6943eb4463 small formatting details 2024-07-01 05:32:22 -07:00
João Moura
939a18a4d2 Updating docs 2024-07-01 05:32:22 -07:00
João Moura
ccbe415315 updating docs 2024-07-01 05:32:22 -07:00
João Moura
511af98dea small refactoring for new version 2024-07-01 05:32:22 -07:00
gpu7
a9d94112f5 bugfix in python script sample code (#787)
Add the line:

process = Process.sequential
2024-07-01 00:23:06 -03:00
JoePro
1bca6029fe Update LLM-Connections.md (#796)
Revised to use Ollama from langchain.llms instead, as the other method simply doesn't work when delegating.

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-07-01 00:22:38 -03:00
Eelke van den Bos
c027aa8bf6 Set manager verbosity to crew verbosity by default (#797)
Fixes #793
2024-07-01 00:20:39 -03:00
finecwg
ce7d86e0df Update tool_usage.py (#828)
fixed error for some cases with Pandas DataFrame:

ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
2024-07-01 00:19:36 -03:00
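For context, the ambiguity this commit fixes arises whenever a DataFrame is used directly in a boolean test; a minimal illustration of the failing pattern and the explicit check that avoids it (illustrative, not the actual tool_usage.py code):

```python
import pandas as pd

df = pd.DataFrame({"age": [25, 30, 35]})

# if df:  # would raise: "The truth value of a DataFrame is ambiguous."
if df is not None and not df.empty:  # explicit emptiness check avoids the error
    print(df["age"].mean())
```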
Bruno Tanabe
5dfaf866c9 fix: Fix grammar error in documentation 'Crew Attributes' (#836)
Correction of grammar error in the CrewAI documentation, on the page 'https://docs.crewai.com/core-concepts/Crews/' it says 'ustom' instead of 'Custom'.
2024-07-01 00:16:06 -03:00
Gui Vieira
5b66e87621 Improve telemetry (#818)
* Improve telemetry

* Minor adjustments

* Try to fix typing error

* Try to fix typing error [2]
2024-06-28 20:05:47 -03:00
João Moura
851dd0f84f preparing new version 2024-06-27 11:04:08 -07:00
Eduardo Chiarotti
2188358f13 docs: add docs for training (#824) 2024-06-27 14:56:32 -03:00
Lorenze Jay
10997dd175 Lorenzejay/byoa (#776)
* better spacing

* works with llama index

* works on langchain custom just need delegation to work

* cleanup for custom_agent class

* works with different argument expectations for agent_executor

* cleanup for hierarchical process, better agent_executor args handler and added to the crew agent doc page

* removed code examples for langchain + llama index, added to docs instead

* added key output if return is not a str for and added some tests

* added hinting for CustomAgent class

* removed pass as it was not needed

* closer, just need to figure out agentTools

* running agents - llamaindex and langchain with base agent

* some cleanup on baseAgent

* minimum for agent to run for base class and ensure it works with hierarchical process

* cleanup for original agent to take on BaseAgent class

* Agent takes on langchainagent and cleanup across

* token handling working for usage_metrics to continue working

* installed llama-index, updated docs and added better name

* fixed some type errors

* base agent holds token_process

* hierarchical process uses proper tools and no longer relies on hasattr for token_processes

* removal of test_custom_agent_executions

* this fixes copying agents

* leveraging an executor class for trigger llamaindex agent

* llama index now has ask_human

* executor mixins added

* added output converter base class

* type listed

* cleanup for output conversions and tokenprocess eliminated redundancy

* properly handling tokens

* simplified token calc handling

* original agent with base agent builder structure setup

* better docs

* no more llama-index dep

* cleaner docs

* test fixes

* poetry reverts and better docs

* base_agent_tools set for third party agents

* updated task and test fix
2024-06-27 14:56:08 -03:00
Brandon Hancock
764234c426 more wip. 2024-06-26 20:18:23 -07:00
Brandon Hancock
be0a4c2fe5 Cleaned up logs now that I've isolated the issue to the LLM 2024-06-25 16:22:56 -07:00
Brandon Hancock
cc1c97e87d WIP. Figuring out disconnect issue. 2024-06-25 15:23:32 -07:00
Brandon Hancock
5775ed3fcb working on tests. WIP 2024-06-23 09:42:33 -04:00
Brandon Hancock
5f820cedcc Encountering issues with callback. Need to test on main. WIP 2024-06-21 16:38:09 -04:00
Brandon Hancock
f86e4a1990 Merge branch 'main' into feature/kickoff-consistent-output 2024-06-21 16:15:16 -04:00
Brandon Hancock
ee4a996de3 Major rehaul of TaskOutput and CrewOutput. Updated all tests to work with new change. Need to add in a few final tricky async tests and add a few more to verify output types on TaskOutput and CrewOutput. 2024-06-21 16:13:59 -04:00
Brandon Hancock
5c504f4087 outline tests I need to create going forward 2024-06-20 15:39:59 -04:00
Brandon Hancock
26489ced1a Consistently storing async and sync output for context 2024-06-20 13:47:37 -04:00
Brandon Hancock
ea5a784877 Cleaned up task execution to now have separate paths for async and sync execution. Updating all kickoff functions to return CrewOutput. WIP. Waiting for Joao feedback on async task execution with task_output 2024-06-20 12:11:27 -04:00
65 changed files with 668,105 additions and 606,575 deletions

5
.gitignore vendored
View File

@@ -11,4 +11,7 @@ chroma.sqlite3
old_en.json
db/
test.py
rc-tests/*
*.pkl
temp/*
.vscode/*

View File

@@ -127,6 +127,7 @@ crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
verbose=2, # You can set it to 1 or 2 for different logging levels
process = Process.sequential
)
# Get your crew to work!
@@ -195,6 +196,7 @@ Please refer to the [Connect crewAI to LLMs](https://docs.crewai.com/how-to/LLM-
**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.
## Contribution
CrewAI is open-source and we welcome contributions. If you're looking to contribute, please:

View File

@@ -16,24 +16,24 @@ description: What are crewAI Agents and how to use them.
## Agent Attributes
| Attribute | Description |
| :------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Role** | Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for. |
| **Goal** | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |
| **Backstory** | Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics. |
| **LLM** *(optional)* | Represents the language model that will run the agent. It dynamically fetches the model name from the `OPENAI_MODEL_NAME` environment variable, defaulting to "gpt-4" if not specified. |
| **Tools** *(optional)* | Set of capabilities or functions that the agent can use to perform tasks. Expected to be instances of custom classes compatible with the agent's execution environment. Tools are initialized with a default value of an empty list. |
| **Function Calling LLM** *(optional)* | Specifies the language model that will handle the tool calling for this agent, overriding the crew function calling LLM if passed. Default is `None`. |
| **Max Iter** *(optional)* | `max_iter` is the maximum number of iterations the agent can perform before being forced to give its best answer. Default is `25`. |
| **Max RPM** *(optional)* | `max_rpm` is Tte maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. |
| **Max Execution Time** *(optional)* | `max_execution_time` is the Maximum execution time for an agent to execute a task. It's optional and can be left unspecified, with a default value of `None`, meaning no max execution time. |
| **Verbose** *(optional)* | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
| **Allow Delegation** *(optional)* | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `True`. |
| **Step Callback** *(optional)* | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
| **Cache** *(optional)* | Indicates if the agent should use a cache for tool usage. Default is `True`. |
| **System Template** *(optional)* | Specifies the system format for the agent. Default is `None`. |
| **Prompt Template** *(optional)* | Specifies the prompt format for the agent. Default is `None`. |
| **Response Template** *(optional)* | Specifies the response format for the agent. Default is `None`. |
| Attribute | Parameter | Description |
| :------------------------- | :---- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Role** | `role` | Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for. |
| **Goal** | `goal` | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |
| **Backstory** | `backstory` | Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics. |
| **LLM** *(optional)* | `llm` | Represents the language model that will run the agent. It dynamically fetches the model name from the `OPENAI_MODEL_NAME` environment variable, defaulting to "gpt-4" if not specified. |
| **Tools** *(optional)* | `tools` | Set of capabilities or functions that the agent can use to perform tasks. Expected to be instances of custom classes compatible with the agent's execution environment. Tools are initialized with a default value of an empty list. |
| **Function Calling LLM** *(optional)* | `function_calling_llm` | Specifies the language model that will handle the tool calling for this agent, overriding the crew function calling LLM if passed. Default is `None`. |
| **Max Iter** *(optional)* | `max_iter` | Max Iter is the maximum number of iterations the agent can perform before being forced to give its best answer. Default is `25`. |
| **Max RPM** *(optional)* | `max_rpm` | Max RPM is the maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. |
| **Max Execution Time** *(optional)* | `max_execution_time` | Max Execution Time is the maximum execution time for an agent to execute a task. It's optional and can be left unspecified, with a default value of `None`, meaning no max execution time. |
| **Verbose** *(optional)* | `verbose` | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `True`. |
| **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
| **Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
| **System Template** *(optional)* | `system_template` | Specifies the system format for the agent. Default is `None`. |
| **Prompt Template** *(optional)* | `prompt_template` | Specifies the prompt format for the agent. Default is `None`. |
| **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. |
## Creating an Agent
@@ -97,5 +97,53 @@ agent = Agent(
)
```
## Bring your Third Party Agents
!!! note "Extend your Third Party Agents like LlamaIndex, Langchain, Autogen or fully custom agents using the the crewai's BaseAgent class."
BaseAgent includes attributes and methods required to integrate with your crews to run and delegate tasks to other agents within your own crew.
CrewAI is a universal multi-agent framework that allows all agents to work together to automate tasks and solve problems.
```py
from crewai import Agent, Task, Crew
from custom_agent import CustomAgent # You need to build and extend your own agent logic with the CrewAI BaseAgent class then import it here.
from langchain.agents import load_tools
langchain_tools = load_tools(["google-serper"], llm=llm)
agent1 = CustomAgent(
role="backstory agent",
goal="who is {input}?",
backstory="agent backstory",
verbose=True,
)
task1 = Task(
expected_output="a short biography of {input}",
description="a short biography of {input}",
agent=agent1,
)
agent2 = Agent(
role="bio agent",
goal="summarize the short bio for {input} and if needed do more research",
backstory="agent backstory",
verbose=True,
)
task2 = Task(
description="a tldr summary of the short biography",
expected_output="5 bullet point summary of the biography",
agent=agent2,
context=[task1],
)
my_crew = Crew(agents=[agent1, agent2], tasks=[task1, task2])
crew = my_crew.kickoff(inputs={"input": "Mark Twain"})
```
## Conclusion
Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents, you can create sophisticated AI systems that leverage the power of collaborative intelligence.

View File

@@ -8,29 +8,29 @@ A crew in crewAI represents a collaborative group of agents working together to
## Crew Attributes
| Attribute | Description |
| :-------------------------- | :----------------------------------------------------------- |
| **Tasks** | A list of tasks assigned to the crew. |
| **Agents** | A list of agents that are part of the crew. |
| **Process** *(optional)* | The process flow (e.g., sequential, hierarchical) the crew follows. |
| **Verbose** *(optional)* | The verbosity level for logging during execution. |
| **Manager LLM** *(optional)*| The language model used by the manager agent in a hierarchical process. **Required when using a hierarchical process.** |
| **Function Calling LLM** *(optional)* | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** *(optional)* | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** *(optional)* | Maximum requests per minute the crew adheres to during execution. |
| **Language** *(optional)* | Language used for the crew, defaults to English. |
| **Language File** *(optional)* | Path to the language file to be used for the crew. |
| **Memory** *(optional)* | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Cache** *(optional)* | Specifies whether to use a cache for storing the results of tools' execution. |
| **Embedder** *(optional)* | Configuration for the embedder to be used by the crew. Mostly used by memory for now. |
| **Full Output** *(optional)*| Whether the crew should return the full output with all tasks outputs or just the final output. |
| **Step Callback** *(optional)* | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** *(optional)* | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** *(optional)* | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
| **Output Log File** *(optional)* | Whether you want to have a file with the complete crew output and execution. You can set it using True and it will default to the folder you are currently in and it will be called logs.txt or passing a string with the full path and name of the file. |
| **Manager Agent** *(optional)* | `manager` sets a ustom agent that will be used as a manager. |
| **Manager Callbacks** *(optional)* | `manager_callbacks` takes a list of callback handlers to be executed by the manager agent when a hierarchical process is used. |
| **Prompt File** *(optional)* | Path to the prompt JSON file to be used for the crew. |
| Attribute | Parameters | Description |
| :-------------------------- | :------------------ | :------------------------------------------------------------------------------------------------------- |
| **Tasks** | `tasks` | A list of tasks assigned to the crew. |
| **Agents** | `agents` | A list of agents that are part of the crew. |
| **Process** *(optional)* | `process` | The process flow (e.g., sequential, hierarchical) the crew follows. |
| **Verbose** *(optional)* | `verbose` | The verbosity level for logging during execution. |
| **Manager LLM** *(optional)*| `manager_llm` | The language model used by the manager agent in a hierarchical process. **Required when using a hierarchical process.** |
| **Function Calling LLM** *(optional)* | `function_calling_llm` | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** *(optional)* | `config` | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** *(optional)* | `max_rpm` | Maximum requests per minute the crew adheres to during execution. |
| **Language** *(optional)* | `language` | Language used for the crew, defaults to English. |
| **Language File** *(optional)* | `language_file` | Path to the language file to be used for the crew. |
| **Memory** *(optional)* | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Cache** *(optional)* | `cache` | Specifies whether to use a cache for storing the results of tools' execution. |
| **Embedder** *(optional)* | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. |
| **Full Output** *(optional)*| `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. |
| **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** *(optional)* | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** *(optional)* | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
| **Output Log File** *(optional)* | `output_log_file` | Whether you want a file with the complete crew output and execution. Set it to `True` to create `logs.txt` in your current folder, or pass a string with the full path and name of the file. |
| **Manager Agent** *(optional)* | `manager_agent` | Sets a custom agent that will be used as a manager. |
| **Manager Callbacks** *(optional)* | `manager_callbacks` | `manager_callbacks` takes a list of callback handlers to be executed by the manager agent when a hierarchical process is used. |
| **Prompt File** *(optional)* | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
!!! note "Crew Max RPM"
The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
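As an illustration of that override, a crew-level `max_rpm` can be set alongside an agent-level one — a minimal sketch using the attributes from the tables above:

```python
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Researcher",
    goal="Collect facts about {topic}",
    backstory="A meticulous fact-finder.",
    max_rpm=10,  # Agent-level limit...
)

research_task = Task(
    description="Gather relevant data about {topic}.",
    expected_output="A short list of key facts.",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    max_rpm=100,  # ...overridden by this crew-level limit once set.
)
```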
@@ -123,7 +123,7 @@ result = my_crew.kickoff()
print(result)
```
### Kicking Off a Crew
### Different Ways to Kick Off a Crew
Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.
@@ -155,4 +155,4 @@ for async_result in async_results:
print(async_result)
```
These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs.

View File

@@ -11,20 +11,20 @@ Tasks within crewAI can be collaborative, requiring multiple agents to work toge
## Task Attributes
| Attribute | Description |
| :----------------------| :-------------------------------------------------------------------------------------------- |
| **Description** | A clear, concise statement of what the task entails. |
| **Agent** | The agent responsible for the task, assigned either directly or by the crew's process. |
| **Expected Output** | A detailed description of what the task's completion looks like. |
| **Tools** *(optional)* | The functions or capabilities the agent can utilize to perform the task. |
| **Async Execution** *(optional)* | If set, the task executes asynchronously, allowing progression without waiting for completion.|
| **Context** *(optional)* | Specifies tasks whose outputs are used as context for this task. |
| **Config** *(optional)* | Additional configuration details for the agent executing the task, allowing further customization. |
| **Output JSON** *(optional)* | Outputs a JSON object, requiring an OpenAI client. Only one output format can be set. |
| **Output Pydantic** *(optional)* | Outputs a Pydantic model object, requiring an OpenAI client. Only one output format can be set. |
| **Output File** *(optional)* | Saves the task output to a file. If used with `Output JSON` or `Output Pydantic`, specifies how the output is saved. |
| **Callback** *(optional)* | A Python callable that is executed with the task's output upon completion. |
| **Human Input** *(optional)* | Indicates if the task requires human feedback at the end, useful for tasks needing human oversight. |
| Attribute | Parameters | Description |
| :----------------------| :------------------- | :-------------------------------------------------------------------------------------------- |
| **Description** | `description` | A clear, concise statement of what the task entails. |
| **Agent** | `agent` | The agent responsible for the task, assigned either directly or by the crew's process. |
| **Expected Output** | `expected_output` | A detailed description of what the task's completion looks like. |
| **Tools** *(optional)* | `tools` | The functions or capabilities the agent can utilize to perform the task. |
| **Async Execution** *(optional)* | `async_execution` | If set, the task executes asynchronously, allowing progression without waiting for completion.|
| **Context** *(optional)* | `context` | Specifies tasks whose outputs are used as context for this task. |
| **Config** *(optional)* | `config` | Additional configuration details for the agent executing the task, allowing further customization. |
| **Output JSON** *(optional)* | `output_json` | Outputs a JSON object, requiring an OpenAI client. Only one output format can be set. |
| **Output Pydantic** *(optional)* | `output_pydantic` | Outputs a Pydantic model object, requiring an OpenAI client. Only one output format can be set. |
| **Output File** *(optional)* | `output_file` | Saves the task output to a file. If used with `Output JSON` or `Output Pydantic`, specifies how the output is saved. |
| **Callback** *(optional)* | `callback` | A Python callable that is executed with the task's output upon completion. |
| **Human Input** *(optional)* | `human_input` | Indicates if the task requires human feedback at the end, useful for tasks needing human oversight. |
## Creating a Task

View File

@@ -0,0 +1,33 @@
---
title: crewAI Train
description: Learn how to train your crewAI agents by giving them feedback early on and get consistent results.
---
## Introduction
The training feature in CrewAI allows you to train your AI agents using the command-line interface (CLI). By running the command `crewai train -n <n_iterations>`, you can specify the number of iterations for the training process.
During training, CrewAI utilizes techniques to optimize the performance of your agents along with human feedback. This helps the agents improve their understanding, decision-making, and problem-solving abilities.
To use the training feature, follow these steps:
1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following command:
```shell
crewai train -n <n_iterations>
```
Replace `<n_iterations>` with the desired number of training iterations. This determines how many times the agents will go through the training process.
### Key Points to Note:
- **Positive Integer Requirement:** Ensure that the number of iterations (`n_iterations`) is a positive integer. The code will raise a `ValueError` if this condition is not met.
- **Error Handling:** The code handles subprocess errors and unexpected exceptions, providing error messages to the user. A sketch of this pattern follows below.
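A minimal sketch of the validation and error handling described above — the CLI internals here are an assumption for illustration, not the actual crewAI source:

```python
import subprocess

def train_crew(n_iterations: int) -> None:
    # Mirror the positive-integer requirement described above.
    if not isinstance(n_iterations, int) or n_iterations <= 0:
        raise ValueError("n_iterations must be a positive integer")
    try:
        # Invoke the CLI shown above; assumes `crewai` is on PATH.
        subprocess.run(["crewai", "train", "-n", str(n_iterations)], check=True)
    except subprocess.CalledProcessError as exc:
        print(f"Training failed: {exc}")

train_crew(5)
```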
It is important to note that the training process may take some time, depending on the complexity of your agents, and it will also require your feedback on each iteration.
Once the training is complete, your agents will be equipped with enhanced capabilities and knowledge, ready to tackle complex tasks and provide more consistent and valuable insights.
Remember to regularly update and retrain your agents to ensure they stay up-to-date with the latest information and advancements in the field.
Happy training with CrewAI!

View File

@@ -0,0 +1,76 @@
---
title: Coding Agents
description: Learn how to enable your crewAI Agents to write and execute code, and explore advanced features for enhanced functionality.
---
## Introduction
crewAI Agents now have the powerful ability to write and execute code, significantly enhancing their problem-solving capabilities. This feature is particularly useful for tasks that require computational or programmatic solutions.
## Enabling Code Execution
To enable code execution for an agent, set the `allow_code_execution` parameter to `True` when creating the agent. Here's an example:
```python
from crewai import Agent
coding_agent = Agent(
role="Senior Python Developer",
goal="Craft well-designed and thought-out code",
backstory="You are a senior Python developer with extensive experience in software architecture and best practices.",
allow_code_execution=True
)
```
## Important Considerations
1. **Model Selection**: It is strongly recommended to use more capable models like Claude 3.5 Sonnet and GPT-4 when enabling code execution. These models have a better understanding of programming concepts and are more likely to generate correct and efficient code.
2. **Error Handling**: The code execution feature includes error handling. If executed code raises an exception, the agent will receive the error message and can attempt to correct the code or provide alternative solutions.
3. **Dependencies**: To use the code execution feature, you need to install the `crewai_tools` package. If not installed, the agent will log an info message: "Coding tools not available. Install crewai_tools." A sketch of this guard pattern follows below.
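A minimal sketch of guarding that optional dependency at import time (illustrative; the actual check lives inside crewAI):

```python
try:
    from crewai_tools import CodeInterpreterTool  # optional dependency
    CODING_TOOLS_AVAILABLE = True
except ImportError:
    CODING_TOOLS_AVAILABLE = False
    print("Coding tools not available. Install crewai_tools.")
```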
## Code Execution Process
When an agent with code execution enabled encounters a task requiring programming:
1. The agent analyzes the task and determines that code execution is necessary.
2. It formulates the Python code needed to solve the problem.
3. The code is sent to the internal code execution tool (`CodeInterpreterTool`).
4. The tool executes the code in a controlled environment and returns the result.
5. The agent interprets the result and incorporates it into its response or uses it for further problem-solving.
## Example Usage
Here's a detailed example of creating an agent with code execution capabilities and using it in a task:
```python
from crewai import Agent, Task, Crew
# Create an agent with code execution enabled
coding_agent = Agent(
role="Python Data Analyst",
goal="Analyze data and provide insights using Python",
backstory="You are an experienced data analyst with strong Python skills.",
allow_code_execution=True
)
# Create a task that requires code execution
data_analysis_task = Task(
description="Analyze the given dataset and calculate the average age of participants.",
agent=coding_agent
)
# Create a crew and add the task
analysis_crew = Crew(
agents=[coding_agent],
tasks=[data_analysis_task]
)
# Execute the crew
result = analysis_crew.kickoff()
print(result)
```
In this example, the `coding_agent` can write and execute Python code to perform data analysis tasks.

View File

@@ -1,11 +1,10 @@
---
title: Assembling and Activating Your CrewAI Team
description: A comprehensive guide to creating a dynamic CrewAI team for your projects, with updated functionalities including verbose mode, memory capabilities, asynchronous execution, output customization, language model configuration, and more.
description: A comprehensive guide to creating a dynamic CrewAI team for your projects, with updated functionalities including verbose mode, memory capabilities, asynchronous execution, output customization, language model configuration, code execution, integration with third-party agents, and improved task management.
---
## Introduction
Embark on your CrewAI journey by setting up your environment and initiating your AI crew with the latest features. This guide ensures a smooth start, incorporating all recent updates for an enhanced experience.
Embark on your CrewAI journey by setting up your environment and initiating your AI crew with the latest features. This guide ensures a smooth start, incorporating all recent updates for an enhanced experience, including code execution capabilities, integration with third-party agents, and advanced task management.
## Step 0: Installation
Install CrewAI and any necessary packages for your project. CrewAI is compatible with Python >=3.10,<=3.13.
@@ -16,46 +15,51 @@ pip install 'crewai[tools]'
```
## Step 1: Assemble Your Agents
Define your agents with distinct roles, backstories, and enhanced capabilities like verbose mode, memory usage, and the ability to set specific agents as managers. These elements add depth and guide their task execution and interaction within the crew.
Define your agents with distinct roles, backstories, and enhanced capabilities. The Agent class now supports a wide range of attributes for fine-tuned control over agent behavior and interactions, including code execution and integration with third-party agents.
```python
import os
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
os.environ["OPENAI_API_KEY"] = "Your Key"
from langchain.llms import OpenAI
from crewai import Agent
from crewai_tools import SerperDevTool, BrowserbaseTool, ExaSearchTool
os.environ["OPENAI_API_KEY"] = "Your OpenAI Key"
os.environ["SERPER_API_KEY"] = "Your Serper Key"
search_tool = SerperDevTool()
browser_tool = BrowserbaseTool()
exa_search_tool = ExaSearchTool()
# Creating a senior researcher agent with memory and verbose mode
# Creating a senior researcher agent with advanced configurations
researcher = Agent(
role='Senior Researcher',
goal='Uncover groundbreaking technologies in {topic}',
verbose=True,
memory=True,
backstory=(
"Driven by curiosity, you're at the forefront of"
"innovation, eager to explore and share knowledge that could change"
"the world."
),
tools=[search_tool, browser_tool],
role='Senior Researcher',
goal='Uncover groundbreaking technologies in {topic}',
backstory=("Driven by curiosity, you're at the forefront of innovation, "
"eager to explore and share knowledge that could change the world."),
memory=True,
verbose=True,
allow_delegation=False,
tools=[search_tool, browser_tool],
allow_code_execution=False, # New attribute for enabling code execution
max_iter=15, # Maximum number of iterations for task execution
max_rpm=100, # Maximum requests per minute
max_execution_time=3600, # Maximum execution time in seconds
system_template="Your custom system template here", # Custom system template
prompt_template="Your custom prompt template here", # Custom prompt template
response_template="Your custom response template here", # Custom response template
)
# Creating a writer agent with custom tools and delegation capability
# Creating a writer agent with custom tools and specific configurations
writer = Agent(
role='Writer',
goal='Narrate compelling tech stories about {topic}',
verbose=True,
memory=True,
backstory=(
"With a flair for simplifying complex topics, you craft"
"engaging narratives that captivate and educate, bringing new"
"discoveries to light in an accessible manner."
),
tools=[exa_search_tool],
allow_delegation=False
role='Writer',
goal='Narrate compelling tech stories about {topic}',
backstory=("With a flair for simplifying complex topics, you craft engaging "
"narratives that captivate and educate, bringing new discoveries to light."),
verbose=True,
allow_delegation=False,
memory=True,
tools=[exa_search_tool],
function_calling_llm=OpenAI(model_name="gpt-3.5-turbo"), # Separate LLM for function calling
)
# Setting a specific manager agent
@@ -64,73 +68,16 @@ manager = Agent(
goal='Ensure the smooth operation and coordination of the team',
verbose=True,
backstory=(
"As a seasoned project manager, you excel in organizing"
"As a seasoned project manager, you excel in organizing "
"tasks, managing timelines, and ensuring the team stays on track."
)
)
```
## Step 2: Define the Tasks
Detail the specific objectives for your agents, including new features for asynchronous execution and output customization. These tasks ensure a targeted approach to their roles.
```python
from crewai import Task
# Research task
research_task = Task(
description=(
"Identify the next big trend in {topic}."
"Focus on identifying pros and cons and the overall narrative."
"Your final report should clearly articulate the key points,"
"its market opportunities, and potential risks."
),
expected_output='A comprehensive 3 paragraphs long report on the latest AI trends.',
tools=[search_tool],
agent=researcher,
callback="research_callback", # Example of task callback
human_input=True
)
# Writing task with language model configuration
write_task = Task(
description=(
"Compose an insightful article on {topic}."
"Focus on the latest trends and how it's impacting the industry."
"This article should be easy to understand, engaging, and positive."
),
expected_output='A 4 paragraph article on {topic} advancements formatted as markdown.',
tools=[exa_search_tool],
agent=writer,
output_file='new-blog-post.md', # Example of output customization
allow_code_execution=True, # Enable code execution for the manager
)
```
## Step 3: Form the Crew
Combine your agents into a crew, setting the workflow process they'll follow to accomplish the tasks. Now with options to configure language models for enhanced interaction and additional configurations for optimizing performance, such as creating directories when saving files.
### New Agent Attributes and Features
```python
from crewai import Crew, Process
# Forming the tech-focused crew with some enhanced configurations
crew = Crew(
agents=[researcher, writer],
tasks=[research_task, write_task],
process=Process.sequential, # Optional: Sequential task execution is default
memory=True,
cache=True,
max_rpm=100,
manager_agent=manager
)
```
## Step 4: Kick It Off
Initiate the process with your enhanced crew ready. Observe as your agents collaborate, leveraging their new capabilities for a successful project outcome. Input variables will be interpolated into the agents and tasks for a personalized approach.
```python
# Starting the task execution process with enhanced feedback
result = crew.kickoff(inputs={'topic': 'AI in healthcare'})
print(result)
```
## Conclusion
Building and activating a crew in CrewAI has evolved with new functionalities. By incorporating verbose mode, memory capabilities, asynchronous task execution, output customization, language model configuration, and enhanced crew configurations, your AI team is more equipped than ever to tackle challenges efficiently. The depth of agent backstories and the precision of their objectives enrich collaboration, leading to successful project outcomes. This guide aims to provide you with a clear and detailed understanding of setting up and utilizing the CrewAI framework to its full potential.
1. `allow_code_execution`: Enable or disable code execution capabilities for the agent (default is False).
2. `max_execution_time`: Set a maximum execution time (in seconds) for the agent to complete a task.
3. `function_calling_llm`: Specify a separate language model for function calling.
4

View File

@@ -0,0 +1,40 @@
---
title: Kickoff Async
description: Kickoff a Crew Asynchronously
---
## Introduction
CrewAI provides the ability to kickoff a crew asynchronously, allowing you to start the crew execution in a non-blocking manner. This feature is particularly useful when you want to run multiple crews concurrently or when you need to perform other tasks while the crew is executing.
## Asynchronous Crew Execution
To kickoff a crew asynchronously, use the `kickoff_async()` method. This method initiates the crew execution in a separate thread, allowing the main thread to continue executing other tasks.
Here's an example of how to kickoff a crew asynchronously:
```python
from crewai import Crew, Agent, Task
# Create an agent with code execution enabled
coding_agent = Agent(
role="Python Data Analyst",
goal="Analyze data and provide insights using Python",
backstory="You are an experienced data analyst with strong Python skills.",
allow_code_execution=True
)
# Create a task that requires code execution
data_analysis_task = Task(
description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
agent=coding_agent
)
# Create a crew and add the task
analysis_crew = Crew(
agents=[coding_agent],
tasks=[data_analysis_task]
)
# Execute the crew
result = analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
```
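Since the example above calls an async method from synchronous code, in practice you would typically await it inside an `asyncio` entry point — a brief sketch, assuming `kickoff_async()` returns an awaitable (reusing `analysis_crew` from above):

```python
import asyncio

async def main():
    # Await the asynchronous kickoff and collect the crew's output.
    result = await analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
    print(result)

asyncio.run(main())
```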

View File

@@ -0,0 +1,45 @@
---
title: Kickoff For Each
description: Kickoff a Crew for a List
---
## Introduction
CrewAI provides the ability to kickoff a crew for each item in a list, allowing you to execute the crew for each item in the list. This feature is particularly useful when you need to perform the same set of tasks for multiple items.
## Kicking Off a Crew for Each Item
To kickoff a crew for each item in a list, use the `kickoff_for_each()` method. This method executes the crew for each item in the list, allowing you to process multiple items efficiently.
Here's an example of how to kickoff a crew for each item in a list:
```python
from crewai import Crew, Agent, Task
# Create an agent with code execution enabled
coding_agent = Agent(
role="Python Data Analyst",
goal="Analyze data and provide insights using Python",
backstory="You are an experienced data analyst with strong Python skills.",
allow_code_execution=True
)
# Create a task that requires code execution
data_analysis_task = Task(
description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
agent=coding_agent
)
# Create a crew and add the task
analysis_crew = Crew(
agents=[coding_agent],
tasks=[data_analysis_task]
)
datasets = [
{ "ages": [25, 30, 35, 40, 45] },
{ "ages": [20, 25, 30, 35, 40] },
{ "ages": [30, 35, 40, 45, 50] }
]
# Execute the crew
result = analysis_crew.kickoff_for_each(inputs=datasets)
```
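Because the crew runs once per item, the return value holds one output per dataset; a short sketch of consuming it (reusing `analysis_crew` and `datasets` from above, and assuming the method returns a list):

```python
results = analysis_crew.kickoff_for_each(inputs=datasets)

# One output per input dataset, in order.
for dataset, output in zip(datasets, results):
    print(f"Ages {dataset['ages']} -> {output}")
```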

View File

@@ -1,16 +1,18 @@
---
title: Connect CrewAI to LLMs
description: Comprehensive guide on integrating CrewAI with various Large Language Models (LLMs), including detailed class attributes and methods.
description: Comprehensive guide on integrating CrewAI with various Large Language Models (LLMs), including detailed class attributes, methods, and configuration options.
---
## Connect CrewAI to LLMs
!!! note "Default LLM"
By default, CrewAI uses OpenAI's GPT-4 model for language processing. You can configure your agents to use a different model or API. This guide shows how to connect your agents to various LLMs through environment variables and direct instantiation.
By default, CrewAI uses the OpenAI model specified by the `OPENAI_MODEL_NAME` environment variable (defaulting to "gpt-4o") for language processing. You can configure your agents to use a different model or API as described in this guide.
CrewAI offers flexibility in connecting to various LLMs, including local models via [Ollama](https://ollama.ai) and different APIs like Azure. It's compatible with all [LangChain LLM](https://python.langchain.com/docs/integrations/llms/) components, enabling diverse integrations for tailored AI solutions.
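For instance, a LangChain chat model can be passed directly to an agent's `llm` attribute — a minimal sketch (the model name is illustrative):

```python
from crewai import Agent
from langchain_openai import ChatOpenAI

researcher = Agent(
    role="Researcher",
    goal="Answer questions accurately",
    backstory="A careful analyst.",
    llm=ChatOpenAI(model="gpt-4o", temperature=0),  # any LangChain LLM component
)
```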
## CrewAI Agent Overview
The `Agent` class is the cornerstone for implementing AI solutions in CrewAI. Here's an updated overview reflecting the latest codebase changes:
The `Agent` class is the cornerstone for implementing AI solutions in CrewAI. Here's a comprehensive overview of the Agent class attributes and methods:
- **Attributes**:
- `role`: Defines the agent's role within the solution.
@@ -50,54 +52,24 @@ Ollama is preferred for local LLM integration, offering customization and privac
### Setting Up Ollama
- **Environment Variables Configuration**: To integrate Ollama, set the following environment variables:
```sh
OPENAI_API_BASE='http://localhost:11434/v1'
OPENAI_MODEL_NAME='openhermes' # Adjust based on available model
OPENAI_API_BASE='http://localhost:11434'
OPENAI_MODEL_NAME='llama2' # Adjust based on available model
OPENAI_API_KEY=''
```
## Ollama Integration (ex. for using Llama 2 locally)
1. [Download Ollama](https://ollama.com/download).
2. After setting up the Ollama, Pull the Llama2 by typing following lines into the terminal ```ollama pull llama2```.
3. Create a ModelFile similar the one below in your project directory.
1. [Download Ollama](https://ollama.com/download).
2. After setting up Ollama, pull the Llama 2 model by typing the following line into the terminal: ```ollama pull llama2```.
3. Enjoy your free Llama 2 model, powered by excellent agents from crewAI.
```
FROM llama2
# Set parameters
PARAMETER temperature 0.8
PARAMETER stop Result
# Sets a custom system message to specify the behavior of the chat assistant
# Leaving it blank for now.
SYSTEM """"""
```
4. Create a script to get the base model, which in our case is llama2, and create a model on top of that with ModelFile above. PS: this will be ".sh" file.
```
#!/bin/zsh
# variables
model_name="llama2"
custom_model_name="crewai-llama2"
#get the base model
ollama pull $model_name
#create the model file
ollama create $custom_model_name -f ./Llama2ModelFile
```
5. Go into the directory where the script file and ModelFile is located and run the script.
6. Enjoy your free Llama2 model that is powered up by excellent agents from CrewAI.
```python
from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI
from langchain.llms import Ollama
import os
os.environ["OPENAI_API_KEY"] = "NA"
llm = ChatOpenAI(
model = "crewai-llama2",
base_url = "http://localhost:11434/v1")
llm = Ollama(
model = "llama2",
base_url = "http://localhost:11434")
general_agent = Agent(role = "Math Professor",
goal = """Provide the solution to the students that are asking mathematical questions and give them the answer.""",

View File

@@ -1,44 +1,89 @@
---
title: CrewAI Agent Monitoring with Langtrace
description: How to monitor cost, latency, and performance of CrewAI Agents using Langtrace.
description: How to monitor cost, latency, and performance of CrewAI Agents using Langtrace, an external observability tool.
---
# Langtrace Overview
Langtrace is an open-source tool that helps you set up observability and evaluations for LLMs, LLM frameworks, and VectorDB. With Langtrace, you can get deep visibility into the cost, latency, and performance of your CrewAI Agents. Additionally, you can log the hyperparameters and monitor for any performance regressions and set up a process to continuously improve your Agents.
Langtrace is an open-source, external tool that helps you set up observability and evaluations for Large Language Models (LLMs), LLM frameworks, and Vector Databases. While not built directly into CrewAI, Langtrace can be used alongside CrewAI to gain deep visibility into the cost, latency, and performance of your CrewAI Agents. This integration allows you to log hyperparameters, monitor performance regressions, and establish a process for continuous improvement of your Agents.
## Setup Instructions
1. Sign up for [Langtrace](https://langtrace.ai/) by going to [https://langtrace.ai/signup](https://langtrace.ai/signup).
1. Sign up for [Langtrace](https://langtrace.ai/) by visiting [https://langtrace.ai/signup](https://langtrace.ai/signup).
2. Create a project and generate an API key.
3. Install Langtrace in your code using the following commands.
**Note**: For detailed instructions on integrating Langtrace, you can check out the official docs from [here](https://docs.langtrace.ai/supported-integrations/llm-frameworks/crewai).
3. Install Langtrace in your CrewAI project using the following commands:
```
```bash
# Install the SDK
pip install langtrace-python-sdk
# Import it into your project
from langtrace_python_sdk import langtrace # Must precede any llm module imports
langtrace.init(api_key = '<LANGTRACE_API_KEY>')
```
### Features
- **LLM Token and Cost tracking**
- **Trace graph showing detailed execution steps with latency and logs**
- **Dataset curation using manual annotation**
- **Prompt versioning and management**
- **Prompt Playground with comparison views between models**
- **Testing and Evaluations**
## Using Langtrace with CrewAI
![Langtrace Cost and Usage Tracking](../assets/crewai-langtrace-stats.png)
![Langtrace Span Graph and Logs Dashboard](../assets/crewai-langtrace-spans.png)
To integrate Langtrace with your CrewAI project, follow these steps:
#### Extra links
1. Import and initialize Langtrace at the beginning of your script, before any CrewAI imports:
<a href="https://x.com/langtrace_ai">🐦 Twitter</a>
<span>&nbsp;&nbsp;•&nbsp;&nbsp;</span>
<a href="https://discord.com/invite/EaSATwtr4t">📢 Discord</a>
<span>&nbsp;&nbsp;•&nbsp;&nbsp;</span>
<a href="https://langtrace.ai/">🖇 Website</a>
<span>&nbsp;&nbsp;•&nbsp;&nbsp;</span>
<a href="https://docs.langtrace.ai/introduction">📙 Documentation</a>
```python
from langtrace_python_sdk import langtrace
langtrace.init(api_key='<LANGTRACE_API_KEY>')
# Now import CrewAI modules
from crewai import Agent, Task, Crew
```
2. Create your CrewAI agents and tasks as usual.
3. Use Langtrace's tracking functions to monitor your CrewAI operations. For example:
```python
with langtrace.trace("CrewAI Task Execution"):
result = crew.kickoff()
```
### Features and Their Application to CrewAI
1. **LLM Token and Cost Tracking**
- Monitor the token usage and associated costs for each CrewAI agent interaction.
- Example:
```python
with langtrace.trace("Agent Interaction"):
agent_response = agent.execute(task)
```
2. **Trace Graph for Execution Steps**
- Visualize the execution flow of your CrewAI tasks, including latency and logs.
- Useful for identifying bottlenecks in your agent workflows.
3. **Dataset Curation with Manual Annotation**
- Create datasets from your CrewAI task outputs for future training or evaluation.
- Example:
```python
langtrace.log_dataset_item(task_input, agent_output, {"task_type": "research"})
```
4. **Prompt Versioning and Management**
- Keep track of different versions of prompts used in your CrewAI agents.
- Useful for A/B testing and optimizing agent performance.
5. **Prompt Playground with Model Comparisons**
- Test and compare different prompts and models for your CrewAI agents before deployment.
6. **Testing and Evaluations**
- Set up automated tests for your CrewAI agents and tasks.
- Example:
```python
langtrace.evaluate(agent_output, expected_output, "accuracy")
```
## Monitoring New CrewAI Features
CrewAI has introduced several new features that can be monitored using Langtrace:
1. **Code Execution**: Monitor the performance and output of code executed by agents.
```python
with langtrace.trace("Agent Code Execution"):
code_output = agent.execute_code(code_snippet)
```
2. **Third-party Agent Integration**: Track interactions with LlamaIndex, LangChain, and Autogen agents.

View File

@@ -15,7 +15,7 @@ The sequential process ensures tasks are executed one after the other, following
- **Easy Monitoring**: Facilitates easy tracking of task completion and project progress.
## Implementing the Sequential Process
Assemble your crew and define tasks in the order they need to be executed.
To use the sequential process, assemble your crew and define tasks in the order they need to be executed.
```python
from crewai import Crew, Process, Agent, Task
@@ -37,10 +37,9 @@ writer = Agent(
backstory='A skilled writer with a talent for crafting compelling narratives'
)
# Define the tasks in sequence
research_task = Task(description='Gather relevant data...', agent=researcher)
analysis_task = Task(description='Analyze the data...', agent=analyst)
writing_task = Task(description='Compose the report...', agent=writer)
research_task = Task(description='Gather relevant data...', agent=researcher, expected_output='Raw Data')
analysis_task = Task(description='Analyze the data...', agent=analyst, expected_output='Data Insights')
writing_task = Task(description='Compose the report...', agent=writer, expected_output='Final Report')
# Form the crew with a sequential process
report_crew = Crew(
@@ -48,6 +47,9 @@ report_crew = Crew(
tasks=[research_task, analysis_task, writing_task],
process=Process.sequential
)
# Execute the crew
result = report_crew.kickoff()
```
### Workflow in Action
@@ -55,5 +57,29 @@ report_crew = Crew(
2. **Subsequent Tasks**: Agents pick up their tasks based on the process type, with outcomes of preceding tasks or manager directives guiding their execution.
3. **Completion**: The process concludes once the final task is executed, leading to project completion.
## Conclusion
The sequential and hierarchical processes in CrewAI offer clear, adaptable paths for task execution. They are well-suited for projects requiring logical progression and dynamic decision-making, ensuring each step is completed effectively, thereby facilitating a cohesive final product.
## Advanced Features
### Task Delegation
In sequential processes, if an agent has `allow_delegation` set to `True`, they can delegate tasks to other agents in the crew. This feature is automatically set up when there are multiple agents in the crew.
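For example, an agent that is allowed to delegate (a minimal sketch mirroring the agents above):
```python
analyst = Agent(
    role='Data Analyst',
    goal='Analyze research data',
    backstory='A meticulous analyst with an eye for detail',
    allow_delegation=True,  # may hand subtasks to other agents in the crew
)
```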
### Asynchronous Execution
Tasks can be executed asynchronously, allowing for parallel processing when appropriate. To create an asynchronous task, set `async_execution=True` when defining the task.
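For example, a task marked for asynchronous execution (the task itself is illustrative):
```python
summary_task = Task(
    description='Summarize the findings so far...',
    agent=writer,
    expected_output='A concise summary',
    async_execution=True,  # runs in parallel instead of blocking the sequence
)
```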
### Memory and Caching
CrewAI supports both memory and caching features, as sketched after this list:
- **Memory**: Enable by setting `memory=True` when creating the Crew. This allows agents to retain information across tasks.
- **Caching**: By default, caching is enabled. Set `cache=False` to disable it.
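A minimal sketch combining both settings with the crew from the example above:
```python
report_crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.sequential,
    memory=True,   # agents retain information across tasks
    cache=False,   # caching is enabled by default; this disables it
)
```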
### Callbacks
You can set callbacks at both the task and step level (see the sketch after this list):
- `task_callback`: Executed after each task completion.
- `step_callback`: Executed after each step in an agent's execution.
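A minimal sketch, assuming simple logging functions (the exact callback payloads are illustrative):
```python
def on_task_done(task_output):
    # Called after each task completes.
    print(f"Task completed: {task_output}")

def on_step(step_output):
    # Called after each step of an agent's execution.
    print(f"Agent step: {step_output}")

report_crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.sequential,
    task_callback=on_task_done,
    step_callback=on_step,
)
```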
### Usage Metrics
CrewAI tracks token usage across all tasks and agents. You can access these metrics after execution.
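For example (the dictionary keys match the `usage_metrics` fields used elsewhere in this changeset):
```python
result = report_crew.kickoff()
print(report_crew.usage_metrics)
# e.g. {'total_tokens': ..., 'prompt_tokens': ..., 'completion_tokens': ..., 'successful_requests': ...}
```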
## Best Practices for Sequential Processes
1. **Order Matters**: Arrange tasks in a logical sequence where each task builds upon the previous one.
2. **Clear Task Descriptions**: Provide detailed descriptions for each task to guide the agents effectively.
3. **Appropriate Agent Selection**: Match agents' skills and roles to the requirements of each task.
4. **Use Context**: Leverage the context from previous tasks to inform subsequent ones.

View File

@@ -1,12 +1,12 @@
---
title: Ability to Set a Specific Agent as Manager in CrewAI
description: Introducing the ability to set a specific agent as a manager instead of having CrewAI create one automatically.
title: Setting a Specific Agent as Manager in CrewAI
description: Learn how to set a custom agent as the manager in CrewAI, providing more control over task management and coordination.
---
# Ability to Set a Specific Agent as Manager in CrewAI
# Setting a Specific Agent as Manager in CrewAI
CrewAI now allows users to set a specific agent as the manager of the crew, providing more control over the management and coordination of tasks. This feature enables the customization of the managerial role to better fit the project's requirements.
CrewAI allows users to set a specific agent as the manager of the crew, providing more control over the management and coordination of tasks. This feature enables the customization of the managerial role to better fit your project's requirements.
## Using the `manager_agent` Attribute
@@ -23,46 +23,65 @@ from crewai import Agent, Task, Crew, Process
# Define your agents
researcher = Agent(
role="Researcher",
goal="Make the best research and analysis on content about AI and AI agents",
backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and is now working on doing research and analysis for a new customer.",
goal="Conduct thorough research and analysis on AI and AI agents",
backstory="You're an expert researcher, specialized in technology, software engineering, AI, and startups. You work as a freelancer and are currently researching for a new client.",
allow_delegation=False,
)
writer = Agent(
role="Senior Writer",
goal="Write the best content about AI and AI agents.",
backstory="You're a senior writer, specialized in technology, software engineering, AI and startups. You work as a freelancer and are now working on writing content for a new customer.",
goal="Create compelling content about AI and AI agents",
backstory="You're a senior writer, specialized in technology, software engineering, AI, and startups. You work as a freelancer and are currently writing content for a new client.",
allow_delegation=False,
)
# Define your task
task = Task(
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
expected_output="5 bullet points with a paragraph for each idea.",
description="Generate a list of 5 interesting ideas for an article, then write one captivating paragraph for each idea that showcases the potential of a full article on this topic. Return the list of ideas with their paragraphs and your notes.",
expected_output="5 bullet points, each with a paragraph and accompanying notes.",
)
# Define the manager agent
manager = Agent(
role="Manager",
goal="Manage the crew and ensure the tasks are completed efficiently.",
backstory="You're an experienced manager, skilled in overseeing complex projects and guiding teams to success. Your role is to coordinate the efforts of the crew members, ensuring that each task is completed on time and to the highest standard.",
allow_delegation=False,
role="Project Manager",
goal="Efficiently manage the crew and ensure high-quality task completion",
backstory="You're an experienced project manager, skilled in overseeing complex projects and guiding teams to success. Your role is to coordinate the efforts of the crew members, ensuring that each task is completed on time and to the highest standard.",
allow_delegation=True,
)
# Instantiate your crew with a custom manager
crew = Crew(
agents=[researcher, writer],
process=Process.hierarchical,
manager_agent=manager,
tasks=[task],
manager_agent=manager,
process=Process.hierarchical,
)
# Get your crew to work!
crew.kickoff()
# Start the crew's work
result = crew.kickoff()
```
## Benefits of a Custom Manager Agent
- **Enhanced Control**: Allows for a more tailored management approach, fitting the specific needs of the project.
- **Improved Coordination**: Ensures that the tasks are efficiently coordinated and managed by an experienced agent.
- **Customizable Management**: Provides the flexibility to define managerial roles and responsibilities that align with the project's goals.
- **Enhanced Control**: Tailor the management approach to fit the specific needs of your project.
- **Improved Coordination**: Ensure efficient task coordination and management by an experienced agent.
- **Customizable Management**: Define managerial roles and responsibilities that align with your project's goals.
## Setting a Manager LLM
If you're using the hierarchical process and don't want to set a custom manager agent, you can specify the language model for the manager:
```python
from langchain_openai import ChatOpenAI
manager_llm = ChatOpenAI(model_name="gpt-4")
crew = Crew(
agents=[researcher, writer],
tasks=[task],
process=Process.hierarchical,
manager_llm=manager_llm
)
```
Note: Either `manager_agent` or `manager_llm` must be set when using the hierarchical process.

View File

@@ -33,6 +33,11 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
Crews
</a>
</li>
<li>
<a href="./core-concepts/Training-Crew">
Training
</a>
</li>
<li>
<a href="./core-concepts/Memory">
Memory
@@ -78,16 +83,36 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
Customizing Agents
</a>
</li>
<li>
<a href="./how-to/Coding-Agents">
Coding Agents
</a>
</li>
<li>
<a href="./how-to/Human-Input-on-Execution">
Human Input on Execution
</a>
</li>
<li>
<a href="./how-to/Kickoff-async">
Kickoff a Crew Asynchronously
</a>
</li>
<li>
<a href="./how-to/Kickoff-for-each">
Kickoff a Crew for a List
</a>
</li>
<li>
<a href="./how-to/AgentOps-Observability">
Agent Monitoring with AgentOps
</a>
</li>
<li>
<a href="./how-to/Langtrace-Observability">
Agent Monitoring with LangTrace
</a>
</li>
</ul>
</div>
<div style="width:30%">

View File

@@ -20,6 +20,7 @@ pip install 'crewai[tools]'
Remember that when using this tool, the code must be generated by the Agent itself and must be valid Python 3 code. The first run will take some time because the Docker image needs to be built.
```python
from crewai import Agent
from crewai_tools import CodeInterpreterTool
Agent(
@@ -27,3 +28,14 @@ Agent(
    tools=[CodeInterpreterTool()],
)
```
We also provide a simpler way to enable code execution directly on the Agent:
```python
from crewai import Agent
agent = Agent(
    ...
    allow_code_execution=True,
)
```

View File

@@ -0,0 +1,72 @@
# ComposioTool Documentation
## Description
This tool is a wrapper around the Composio toolset and gives your agent access to a wide variety of tools from the Composio SDK.
## Installation
To incorporate this tool into your project, follow the installation instructions below:
```shell
pip install composio-core
pip install 'crewai[tools]'
```
After the installation is complete, either run `composio login` or export your Composio API key as `COMPOSIO_API_KEY` (for example, `export COMPOSIO_API_KEY=<your-api-key>`).
## Example
The following example demonstrates how to initialize the tool and execute a GitHub action:
1. Initialize the toolset
```python
from composio import Action, App
from crewai_tools import ComposioTool
from crewai import Agent, Task
tools = [ComposioTool.from_action(action=Action.GITHUB_ACTIVITY_STAR_REPO_FOR_AUTHENTICATED_USER)]
```
If you don't know which action you want to use, use `from_app` with a `tags` filter to get relevant actions:
```python
tools = ComposioTool.from_app(App.GITHUB, tags=["important"])
```
or use `use_case` to search for relevant actions:
```python
tools = ComposioTool.from_app(App.GITHUB, use_case="Star a github repository")
```
2. Define agent
```python
crewai_agent = Agent(
    role="Github Agent",
    goal="You take action on Github using Github APIs",
    backstory=(
        "You are an AI agent responsible for taking actions on GitHub "
        "on users' behalf, using the GitHub APIs."
    ),
    verbose=True,
    tools=tools,
)
```
3. Execute task
```python
task = Task(
    description="Star a repo ComposioHQ/composio on GitHub",
    agent=crewai_agent,
    expected_output="if the star happened",
)
task.execute()
```
* A more detailed list of tools can be found [here](https://app.composio.dev)

View File

@@ -126,6 +126,7 @@ nav:
- Processes: 'core-concepts/Processes.md'
- Crews: 'core-concepts/Crews.md'
- Collaboration: 'core-concepts/Collaboration.md'
- Training: 'core-concepts/Training-Crew.md'
- Memory: 'core-concepts/Memory.md'
- Using LangChain Tools: 'core-concepts/Using-LangChain-Tools.md'
- Using LlamaIndex Tools: 'core-concepts/Using-LlamaIndex-Tools.md'
@@ -138,12 +139,17 @@ nav:
- Create your own Manager Agent: 'how-to/Your-Own-Manager-Agent.md'
- Connecting to any LLM: 'how-to/LLM-Connections.md'
- Customizing Agents: 'how-to/Customizing-Agents.md'
- Coding Agents: 'how-to/Coding-Agents.md'
- Human Input on Execution: 'how-to/Human-Input-on-Execution.md'
- Kickoff a Crew Asynchronously: 'how-to/Kickoff-async.md'
- Kickoff a Crew for a List: 'how-to/Kickoff-for-each.md'
- Agent Monitoring with AgentOps: 'how-to/AgentOps-Observability.md'
- Agent Monitoring with LangTrace: 'how-to/Langtrace-Observability.md'
- Tools Docs:
- Google Serper Search: 'tools/SerperDevTool.md'
- Browserbase Web Loader: 'tools/BrowserbaseLoadTool.md'
- Composio Tools: 'tools/ComposioTool.md'
- Code Interpreter: 'tools/CodeInterpreterTool.md'
- Scrape Website: 'tools/ScrapeWebsiteTool.md'
- Directory Read: 'tools/DirectoryReadTool.md'
- Exa Search Web Loader: 'tools/EXASearchTool.md'

poetry.lock (generated)

File diff suppressed because it is too large

View File

@@ -1,6 +1,6 @@
[tool.poetry]
name = "crewai"
version = "0.32.2"
version = "0.35.8"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
authors = ["Joao Moura <joao@crewai.com>"]
readme = "README.md"
@@ -14,22 +14,24 @@ Repository = "https://github.com/joaomdmoura/crewai"
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
pydantic = "^2.4.2"
langchain = "^0.1.10"
langchain = ">=0.1.4,<0.2.0"
openai = "^1.13.3"
opentelemetry-api = "^1.22.0"
opentelemetry-sdk = "^1.22.0"
opentelemetry-exporter-otlp-proto-http = "^1.22.0"
instructor = "1.3.3"
regex = "^2023.12.25"
crewai-tools = { version = "^0.3.0", optional = true }
crewai-tools = { version = "^0.4.6", optional = true }
click = "^8.1.7"
python-dotenv = "^1.0.0"
embedchain = "0.1.109"
appdirs = "^1.4.4"
jsonref = "^1.1.0"
agentops = { version = "^0.1.9", optional = true }
embedchain = "^0.1.113"
[tool.poetry.extras]
tools = ["crewai-tools"]
agentops = ["agentops"]
[tool.poetry.group.dev.dependencies]
isort = "^5.13.2"
@@ -43,7 +45,7 @@ mkdocs-material = { extras = ["imaging"], version = "^9.5.7" }
mkdocs-material-extensions = "^1.3.1"
pillow = "^10.2.0"
cairosvg = "^2.7.1"
crewai-tools = "^0.3.0"
crewai-tools = "^0.4.6"
[tool.poetry.group.test.dependencies]
pytest = "^8.0.0"

View File

@@ -2,3 +2,5 @@ from crewai.agent import Agent
from crewai.crew import Crew
from crewai.process import Process
from crewai.task import Task
__all__ = ["Agent", "Crew", "Process", "Task"]

View File

@@ -1,7 +1,5 @@
import os
import uuid
from copy import deepcopy
from typing import Any, Dict, List, Optional, Tuple
from typing import Any, List, Optional, Tuple
from langchain.agents.agent import RunnableAgent
from langchain.agents.tools import tool as LangChainTool
@@ -9,27 +7,32 @@ from langchain.tools.render import render_text_description
from langchain_core.agents import AgentAction
from langchain_core.callbacks import BaseCallbackHandler
from langchain_openai import ChatOpenAI
from pydantic import (
UUID4,
BaseModel,
ConfigDict,
Field,
InstanceOf,
PrivateAttr,
field_validator,
model_validator,
)
from pydantic_core import PydanticCustomError
from pydantic import Field, InstanceOf, model_validator
from crewai.agents import CacheHandler, CrewAgentExecutor, CrewAgentParser, ToolsHandler
from crewai.agents import CacheHandler, CrewAgentExecutor, CrewAgentParser
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.utilities import I18N, Logger, Prompts, RPMController
from crewai.tools.agent_tools import AgentTools
from crewai.utilities import Converter, Prompts
from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE
from crewai.utilities.token_counter_callback import TokenCalcHandler, TokenProcess
from crewai.utilities.token_counter_callback import TokenCalcHandler
from crewai.utilities.training_handler import CrewTrainingHandler
agentops = None
try:
import agentops
from agentops import track_agent
except ImportError:
class Agent(BaseModel):
def track_agent():
def noop(f):
return f
return noop
@track_agent()
class Agent(BaseAgent):
"""Represents an agent in a system.
Each agent has a role, a goal, a backstory, and an optional language model (llm).
@@ -53,57 +56,12 @@ class Agent(BaseModel):
callbacks: A list of callback functions from the langchain library that are triggered during the agent's execution process
"""
__hash__ = object.__hash__ # type: ignore
_logger: Logger = PrivateAttr()
_rpm_controller: RPMController = PrivateAttr(default=None)
_request_within_rpm_limit: Any = PrivateAttr(default=None)
_token_process: TokenProcess = TokenProcess()
formatting_errors: int = 0
model_config = ConfigDict(arbitrary_types_allowed=True)
id: UUID4 = Field(
default_factory=uuid.uuid4,
frozen=True,
description="Unique identifier for the object, not set by user.",
)
role: str = Field(description="Role of the agent")
goal: str = Field(description="Objective of the agent")
backstory: str = Field(description="Backstory of the agent")
cache: bool = Field(
default=True,
description="Whether the agent should use a cache for tool usage.",
)
config: Optional[Dict[str, Any]] = Field(
description="Configuration for the agent",
default=None,
)
max_rpm: Optional[int] = Field(
default=None,
description="Maximum number of requests per minute for the agent execution to be respected.",
)
verbose: bool = Field(
default=False, description="Verbose mode for the Agent Execution"
)
allow_delegation: bool = Field(
default=True, description="Allow delegation of tasks to agents"
)
tools: Optional[List[Any]] = Field(
default_factory=list, description="Tools at agents disposal"
)
max_iter: Optional[int] = Field(
default=25, description="Maximum iterations for an agent to execute a task"
)
max_execution_time: Optional[int] = Field(
default=None,
description="Maximum execution time for an agent to execute a task",
)
agent_executor: InstanceOf[CrewAgentExecutor] = Field(
default=None, description="An instance of the CrewAgentExecutor class."
)
crew: Any = Field(default=None, description="Crew to which the agent belongs.")
tools_handler: InstanceOf[ToolsHandler] = Field(
default=None, description="An instance of the ToolsHandler class."
)
agent_ops_agent_name: str = None
agent_ops_agent_id: str = None
cache_handler: InstanceOf[CacheHandler] = Field(
default=None, description="An instance of the CacheHandler class."
)
@@ -111,7 +69,6 @@ class Agent(BaseModel):
default=None,
description="Callback to be executed after each step of the agent execution.",
)
i18n: I18N = Field(default=I18N(), description="Internationalization settings.")
llm: Any = Field(
default_factory=lambda: ChatOpenAI(
model=os.environ.get("OPENAI_MODEL_NAME", "gpt-4o")
@@ -137,43 +94,14 @@ class Agent(BaseModel):
default=False, description="Enable code execution for the agent."
)
_original_role: str | None = None
_original_goal: str | None = None
_original_backstory: str | None = None
def __init__(__pydantic_self__, **data):
config = data.pop("config", {})
super().__init__(**config, **data)
@field_validator("id", mode="before")
@classmethod
def _deny_user_set_id(cls, v: Optional[UUID4]) -> None:
if v:
raise PydanticCustomError(
"may_not_set_field", "This field is not to be set by the user.", {}
)
@model_validator(mode="after")
def set_attributes_based_on_config(self) -> "Agent":
"""Set attributes based on the agent configuration."""
if self.config:
for key, value in self.config.items():
setattr(self, key, value)
return self
@model_validator(mode="after")
def set_private_attrs(self):
"""Set private attributes."""
self._logger = Logger(self.verbose)
if self.max_rpm and not self._rpm_controller:
self._rpm_controller = RPMController(
max_rpm=self.max_rpm, logger=self._logger
)
return self
__pydantic_self__.agent_ops_agent_name = __pydantic_self__.role
@model_validator(mode="after")
def set_agent_executor(self) -> "Agent":
"""set agent executor is set."""
"""Ensure agent executor and token process are set."""
if hasattr(self.llm, "model_name"):
token_handler = TokenCalcHandler(self.llm.model_name, self._token_process)
@@ -187,6 +115,12 @@ class Agent(BaseModel):
):
self.llm.callbacks.append(token_handler)
if agentops and not any(
isinstance(handler, agentops.LangchainCallbackHandler) for handler in self.llm.callbacks
):
agentops.stop_instrumenting()
self.llm.callbacks.append(agentops.LangchainCallbackHandler())
if not self.agent_executor:
if not self.cache_handler:
self.cache_handler = CacheHandler()
@@ -232,8 +166,7 @@ class Agent(BaseModel):
tools = tools or self.tools
# type: ignore # Argument 1 to "_parse_tools" of "Agent" has incompatible type "list[Any] | None"; expected "list[Any]"
parsed_tools = self._parse_tools(tools)
parsed_tools = self._parse_tools(tools or [])
self.create_agent_executor(tools=tools)
self.agent_executor.tools = parsed_tools
self.agent_executor.task = task
@@ -253,33 +186,22 @@ class Agent(BaseModel):
"tools": self.agent_executor.tools_description,
}
)["output"]
if self.max_rpm:
self._rpm_controller.stop_rpm_counter()
return result
def set_cache_handler(self, cache_handler: CacheHandler) -> None:
"""Set the cache handler for the agent.
Args:
cache_handler: An instance of the CacheHandler class.
"""
self.tools_handler = ToolsHandler()
if self.cache:
self.cache_handler = cache_handler
self.tools_handler.cache = cache_handler
self.create_agent_executor()
def set_rpm_controller(self, rpm_controller: RPMController) -> None:
"""Set the rpm controller for the agent.
Args:
rpm_controller: An instance of the RPMController class.
"""
if not self._rpm_controller:
self._rpm_controller = rpm_controller
self.create_agent_executor()
def format_log_to_str(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
observation_prefix: str = "Observation: ",
llm_prefix: str = "",
) -> str:
"""Construct the scratchpad that lets the agent continue its thought process."""
thoughts = ""
for action, observation in intermediate_steps:
thoughts += action.log
thoughts += f"\n{observation_prefix}{observation}\n{llm_prefix}"
return thoughts
def create_agent_executor(self, tools=None) -> None:
"""Create an agent executor for the agent.
@@ -335,76 +257,42 @@ class Agent(BaseModel):
)
stop_words = [self.i18n.slice("observation")]
if self.response_template:
stop_words.append(
self.response_template.split("{{ .Response }}")[1].strip()
)
bind = self.llm.bind(stop=stop_words)
inner_agent = agent_args | execution_prompt | bind | CrewAgentParser(agent=self)
self.agent_executor = CrewAgentExecutor(
agent=RunnableAgent(runnable=inner_agent), **executor_args
)
def interpolate_inputs(self, inputs: Dict[str, Any]) -> None:
"""Interpolate inputs into the agent description and backstory."""
if self._original_role is None:
self._original_role = self.role
if self._original_goal is None:
self._original_goal = self.goal
if self._original_backstory is None:
self._original_backstory = self.backstory
def get_delegation_tools(self, agents: List[BaseAgent]):
agent_tools = AgentTools(agents=agents)
tools = agent_tools.tools()
return tools
if inputs:
self.role = self._original_role.format(**inputs)
self.goal = self._original_goal.format(**inputs)
self.backstory = self._original_backstory.format(**inputs)
def get_code_execution_tools(self):
try:
from crewai_tools import CodeInterpreterTool
def increment_formatting_errors(self) -> None:
"""Count the formatting errors of the agent."""
self.formatting_errors += 1
return [CodeInterpreterTool()]
except ModuleNotFoundError:
self._logger.log(
"info", "Coding tools not available. Install crewai_tools. "
)
def format_log_to_str(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
observation_prefix: str = "Observation: ",
llm_prefix: str = "",
) -> str:
"""Construct the scratchpad that lets the agent continue its thought process."""
thoughts = ""
for action, observation in intermediate_steps:
thoughts += action.log
thoughts += f"\n{observation_prefix}{observation}\n{llm_prefix}"
return thoughts
def get_output_converter(self, llm, text, model, instructions):
return Converter(llm=llm, text=text, model=model, instructions=instructions)
def copy(self):
"""Create a deep copy of the Agent."""
exclude = {
"id",
"_logger",
"_rpm_controller",
"_request_within_rpm_limit",
"_token_process",
"agent_executor",
"tools",
"tools_handler",
"cache_handler",
}
copied_data = self.model_dump(exclude=exclude)
copied_data = {k: v for k, v in copied_data.items() if v is not None}
copied_agent = Agent(**copied_data)
copied_agent.tools = deepcopy(self.tools)
return copied_agent
# type: ignore # Function "langchain_core.tools.tool" is not valid as a type
def _parse_tools(self, tools: List[Any]) -> List[LangChainTool]:
"""Parse tools to be used for the task."""
# tentatively try to import from crewai_tools import BaseTool as CrewAITool
tools_list = []
try:
# tentatively try to import from crewai_tools import BaseTool as CrewAITool
from crewai_tools import BaseTool as CrewAITool
for tool in tools:
@@ -412,13 +300,8 @@ class Agent(BaseModel):
tools_list.append(tool.to_langchain())
else:
tools_list.append(tool)
if self.allow_code_execution:
from crewai_tools.code_interpreter_tool import CodeInterpreterTool
tools_list.append(CodeInterpreterTool)
except ModuleNotFoundError:
tools_list = []
for tool in tools:
tools_list.append(tool)
return tools_list

View File

@@ -0,0 +1,256 @@
import uuid
from abc import ABC, abstractmethod
from copy import copy as shallow_copy
from typing import Any, Dict, List, Optional, TypeVar
from pydantic import (
UUID4,
BaseModel,
ConfigDict,
Field,
InstanceOf,
PrivateAttr,
field_validator,
model_validator,
)
from pydantic_core import PydanticCustomError
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.tools_handler import ToolsHandler
from crewai.utilities import I18N, Logger, RPMController
T = TypeVar("T", bound="BaseAgent")
class BaseAgent(ABC, BaseModel):
"""Abstract Base Class for all third party agents compatible with CrewAI.
Attributes:
id (UUID4): Unique identifier for the agent.
role (str): Role of the agent.
goal (str): Objective of the agent.
backstory (str): Backstory of the agent.
cache (bool): Whether the agent should use a cache for tool usage.
config (Optional[Dict[str, Any]]): Configuration for the agent.
verbose (bool): Verbose mode for the Agent Execution.
max_rpm (Optional[int]): Maximum number of requests per minute for the agent execution.
allow_delegation (bool): Allow delegation of tasks to agents.
tools (Optional[List[Any]]): Tools at the agent's disposal.
max_iter (Optional[int]): Maximum iterations for an agent to execute a task.
agent_executor (InstanceOf): An instance of the CrewAgentExecutor class.
llm (Any): Language model that will run the agent.
crew (Any): Crew to which the agent belongs.
i18n (I18N): Internationalization settings.
cache_handler (InstanceOf[CacheHandler]): An instance of the CacheHandler class.
tools_handler (InstanceOf[ToolsHandler]): An instance of the ToolsHandler class.
Methods:
execute_task(task: Any, context: Optional[str] = None, tools: Optional[List[Any]] = None) -> str:
Abstract method to execute a task.
create_agent_executor(tools=None) -> None:
Abstract method to create an agent executor.
_parse_tools(tools: List[Any]) -> List[Any]:
Abstract method to parse tools.
get_delegation_tools(agents: List["BaseAgent"]):
Abstract method to set the agent's task tools for handling delegation and asking questions of other agents in the crew.
get_output_converter(llm, model, instructions):
Abstract method to get the converter class for the agent to create json/pydantic outputs.
interpolate_inputs(inputs: Dict[str, Any]) -> None:
Interpolate inputs into the agent description and backstory.
set_cache_handler(cache_handler: CacheHandler) -> None:
Set the cache handler for the agent.
increment_formatting_errors() -> None:
Increment formatting errors.
copy() -> "BaseAgent":
Create a copy of the agent.
set_rpm_controller(rpm_controller: RPMController) -> None:
Set the rpm controller for the agent.
set_private_attrs() -> "BaseAgent":
Set private attributes.
"""
__hash__ = object.__hash__ # type: ignore
_logger: Logger = PrivateAttr()
_rpm_controller: RPMController = PrivateAttr(default=None)
_request_within_rpm_limit: Any = PrivateAttr(default=None)
formatting_errors: int = 0
model_config = ConfigDict(arbitrary_types_allowed=True)
id: UUID4 = Field(default_factory=uuid.uuid4, frozen=True)
role: str = Field(description="Role of the agent")
goal: str = Field(description="Objective of the agent")
backstory: str = Field(description="Backstory of the agent")
cache: bool = Field(
default=True, description="Whether the agent should use a cache for tool usage."
)
config: Optional[Dict[str, Any]] = Field(
description="Configuration for the agent", default=None
)
verbose: bool = Field(
default=False, description="Verbose mode for the Agent Execution"
)
max_rpm: Optional[int] = Field(
default=None,
description="Maximum number of requests per minute for the agent execution to be respected.",
)
allow_delegation: bool = Field(
default=True, description="Allow delegation of tasks to agents"
)
tools: Optional[List[Any]] = Field(
default_factory=list, description="Tools at agents' disposal"
)
max_iter: Optional[int] = Field(
default=25, description="Maximum iterations for an agent to execute a task"
)
agent_executor: InstanceOf = Field(
default=None, description="An instance of the CrewAgentExecutor class."
)
llm: Any = Field(
default=None, description="Language model that will run the agent."
)
crew: Any = Field(default=None, description="Crew to which the agent belongs.")
i18n: I18N = Field(default=I18N(), description="Internationalization settings.")
cache_handler: InstanceOf[CacheHandler] = Field(
default=None, description="An instance of the CacheHandler class."
)
tools_handler: InstanceOf[ToolsHandler] = Field(
default=None, description="An instance of the ToolsHandler class."
)
_original_role: str | None = None
_original_goal: str | None = None
_original_backstory: str | None = None
_token_process: TokenProcess = TokenProcess()
def __init__(__pydantic_self__, **data):
config = data.pop("config", {})
super().__init__(**config, **data)
@model_validator(mode="after")
def set_config_attributes(self):
if self.config:
for key, value in self.config.items():
setattr(self, key, value)
return self
@field_validator("id", mode="before")
@classmethod
def _deny_user_set_id(cls, v: Optional[UUID4]) -> None:
if v:
raise PydanticCustomError(
"may_not_set_field", "This field is not to be set by the user.", {}
)
@model_validator(mode="after")
def set_attributes_based_on_config(self) -> "BaseAgent":
"""Set attributes based on the agent configuration."""
if self.config:
for key, value in self.config.items():
setattr(self, key, value)
return self
@model_validator(mode="after")
def set_private_attrs(self):
"""Set private attributes."""
self._logger = Logger(self.verbose)
if self.max_rpm and not self._rpm_controller:
self._rpm_controller = RPMController(
max_rpm=self.max_rpm, logger=self._logger
)
if not self._token_process:
self._token_process = TokenProcess()
return self
@abstractmethod
def execute_task(
self,
task: Any,
context: Optional[str] = None,
tools: Optional[List[Any]] = None,
) -> str:
pass
@abstractmethod
def create_agent_executor(self, tools=None) -> None:
pass
@abstractmethod
def _parse_tools(self, tools: List[Any]) -> List[Any]:
pass
@abstractmethod
def get_delegation_tools(self, agents: List["BaseAgent"]):
"""Set the task tools that init BaseAgenTools class."""
pass
@abstractmethod
def get_output_converter(
self, llm: Any, text: str, model: type[BaseModel] | None, instructions: str
):
"""Get the converter class for the agent to create json/pydantic outputs."""
pass
def copy(self: T) -> T:
"""Create a deep copy of the Agent."""
exclude = {
"id",
"_logger",
"_rpm_controller",
"_request_within_rpm_limit",
"_token_process",
"agent_executor",
"tools",
"tools_handler",
"cache_handler",
"llm",
}
# Copy llm and clear callbacks
existing_llm = shallow_copy(self.llm)
existing_llm.callbacks = []
copied_data = self.model_dump(exclude=exclude)
copied_data = {k: v for k, v in copied_data.items() if v is not None}
copied_agent = type(self)(**copied_data, llm=existing_llm, tools=self.tools)
return copied_agent
def interpolate_inputs(self, inputs: Dict[str, Any]) -> None:
"""Interpolate inputs into the agent description and backstory."""
if self._original_role is None:
self._original_role = self.role
if self._original_goal is None:
self._original_goal = self.goal
if self._original_backstory is None:
self._original_backstory = self.backstory
if inputs:
self.role = self._original_role.format(**inputs)
self.goal = self._original_goal.format(**inputs)
self.backstory = self._original_backstory.format(**inputs)
def set_cache_handler(self, cache_handler: CacheHandler) -> None:
"""Set the cache handler for the agent.
Args:
cache_handler: An instance of the CacheHandler class.
"""
self.tools_handler = ToolsHandler()
if self.cache:
self.cache_handler = cache_handler
self.tools_handler.cache = cache_handler
self.create_agent_executor()
def increment_formatting_errors(self) -> None:
self.formatting_errors += 1
def set_rpm_controller(self, rpm_controller: RPMController) -> None:
"""Set the rpm controller for the agent.
Args:
rpm_controller: An instance of the RPMController class.
"""
if not self._rpm_controller:
self._rpm_controller = rpm_controller
self.create_agent_executor()

View File

@@ -0,0 +1,109 @@
import time
from typing import TYPE_CHECKING, Optional
from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
from crewai.utilities.converter import ConverterError
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
from crewai.utilities import I18N
if TYPE_CHECKING:
from crewai.crew import Crew
from crewai.task import Task
from crewai.agents.agent_builder.base_agent import BaseAgent
class CrewAgentExecutorMixin:
crew: Optional["Crew"]
crew_agent: Optional["BaseAgent"]
task: Optional["Task"]
iterations: int
force_answer_max_iterations: int
have_forced_answer: bool
_i18n: I18N
def _should_force_answer(self) -> bool:
"""Determine if a forced answer is required based on iteration count."""
return (
self.iterations == self.force_answer_max_iterations
) and not self.have_forced_answer
def _create_short_term_memory(self, output) -> None:
"""Create and save a short-term memory item if conditions are met."""
if (
self.crew
and self.crew_agent
and self.task
and "Action: Delegate work to coworker" not in output.log
):
try:
memory = ShortTermMemoryItem(
data=output.log,
agent=self.crew_agent.role,
metadata={
"observation": self.task.description,
},
)
if (
hasattr(self.crew, "_short_term_memory")
and self.crew._short_term_memory
):
self.crew._short_term_memory.save(memory)
except Exception as e:
print(f"Failed to add to short term memory: {e}")
pass
def _create_long_term_memory(self, output) -> None:
"""Create and save long-term and entity memory items based on evaluation."""
if (
self.crew
and self.crew.memory
and self.crew._long_term_memory
and self.crew._entity_memory
and self.task
and self.crew_agent
):
try:
ltm_agent = TaskEvaluator(self.crew_agent)
evaluation = ltm_agent.evaluate(self.task, output.log)
if isinstance(evaluation, ConverterError):
return
long_term_memory = LongTermMemoryItem(
task=self.task.description,
agent=self.crew_agent.role,
quality=evaluation.quality,
datetime=str(time.time()),
expected_output=self.task.expected_output,
metadata={
"suggestions": evaluation.suggestions,
"quality": evaluation.quality,
},
)
self.crew._long_term_memory.save(long_term_memory)
for entity in evaluation.entities:
entity_memory = EntityMemoryItem(
name=entity.name,
type=entity.type,
description=entity.description,
relationships="\n".join(
[f"- {r}" for r in entity.relationships]
),
)
self.crew._entity_memory.save(entity_memory)
except AttributeError as e:
print(f"Missing attributes for long term memory: {e}")
pass
except Exception as e:
print(f"Failed to add to long term memory: {e}")
pass
def _ask_human_input(self, final_answer: dict) -> str:
"""Prompt human input for final decision making."""
return input(
self._i18n.slice("getting_input").format(final_answer=final_answer)
)

View File

@@ -0,0 +1,81 @@
from abc import ABC, abstractmethod
from typing import List, Optional, Union
from pydantic import BaseModel, Field
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.task import Task
from crewai.utilities import I18N
class BaseAgentTools(BaseModel, ABC):
"""Default tools around agent delegation"""
agents: List[BaseAgent] = Field(description="List of agents in this crew.")
i18n: I18N = Field(default=I18N(), description="Internationalization settings.")
@abstractmethod
def tools(self):
pass
def _get_coworker(self, coworker: Optional[str], **kwargs) -> Optional[str]:
coworker = coworker or kwargs.get("co_worker") or kwargs.get("coworker")
if coworker:
is_list = coworker.startswith("[") and coworker.endswith("]")
if is_list:
coworker = coworker[1:-1].split(",")[0]
return coworker
def delegate_work(
self, task: str, context: str, coworker: Optional[str] = None, **kwargs
):
"""Useful to delegate a specific task to a coworker passing all necessary context and names."""
coworker = self._get_coworker(coworker, **kwargs)
return self._execute(coworker, task, context)
def ask_question(
self, question: str, context: str, coworker: Optional[str] = None, **kwargs
):
"""Useful to ask a question, opinion or take from a coworker passing all necessary context and names."""
coworker = self._get_coworker(coworker, **kwargs)
return self._execute(coworker, question, context)
def _execute(self, agent: Union[str, None], task: str, context: Union[str, None]):
"""Execute the command."""
try:
if agent is None:
agent = ""
# It is important to remove the quotes from the agent name.
# The reason we have to do this is because less-powerful LLM's
# have difficulty producing valid JSON.
# As a result, we end up with invalid JSON that is truncated like this:
# {"task": "....", "coworker": "....
# when it should look like this:
# {"task": "....", "coworker": "...."}
agent_name = agent.casefold().replace('"', "").replace("\n", "")
agent = [
available_agent
for available_agent in self.agents
if available_agent.role.casefold().replace("\n", "") == agent_name
]
except Exception as _:
return self.i18n.errors("agent_tool_unexsiting_coworker").format(
coworkers="\n".join(
[f"- {agent.role.casefold()}" for agent in self.agents]
)
)
if not agent:
return self.i18n.errors("agent_tool_unexsiting_coworker").format(
coworkers="\n".join(
[f"- {agent.role.casefold()}" for agent in self.agents]
)
)
agent = agent[0]
task = Task(
description=task,
agent=agent,
expected_output="Your best answer to your coworker asking you this, accounting for the context shared.",
)
return agent.execute_task(task, context)

View File

@@ -0,0 +1,48 @@
from abc import ABC, abstractmethod
from typing import Any, Optional
from pydantic import BaseModel, Field, PrivateAttr
class OutputConverter(BaseModel, ABC):
"""
Abstract base class for converting task results into structured formats.
This class provides a framework for converting unstructured text into
either Pydantic models or JSON, tailored for specific agent requirements.
It uses a language model to interpret and structure the input text based
on given instructions.
Attributes:
text (str): The input text to be converted.
llm (Any): The language model used for conversion.
model (Any): The target model for structuring the output.
instructions (str): Specific instructions for the conversion process.
max_attempts (int): Maximum number of conversion attempts (default: 3).
"""
_is_gpt: bool = PrivateAttr(default=True)
text: str = Field(description="Text to be converted.")
llm: Any = Field(description="The language model to be used to convert the text.")
model: Any = Field(description="The model to be used to convert the text.")
instructions: str = Field(description="Conversion instructions to the LLM.")
max_attemps: Optional[int] = Field(
description="Max number of attemps to try to get the output formated.",
default=3,
)
@abstractmethod
def to_pydantic(self, current_attempt=1):
"""Convert text to pydantic."""
pass
@abstractmethod
def to_json(self, current_attempt=1):
"""Convert text to json."""
pass
@abstractmethod
def _is_gpt(self, llm):
"""Return if llm provided is of gpt from openai."""
pass

View File

@@ -0,0 +1,27 @@
from typing import Any, Dict
class TokenProcess:
total_tokens: int = 0
prompt_tokens: int = 0
completion_tokens: int = 0
successful_requests: int = 0
def sum_prompt_tokens(self, tokens: int):
self.prompt_tokens = self.prompt_tokens + tokens
self.total_tokens = self.total_tokens + tokens
def sum_completion_tokens(self, tokens: int):
self.completion_tokens = self.completion_tokens + tokens
self.total_tokens = self.total_tokens + tokens
def sum_successful_requests(self, requests: int):
self.successful_requests = self.successful_requests + requests
def get_summary(self) -> Dict[str, Any]:
return {
"total_tokens": self.total_tokens,
"prompt_tokens": self.prompt_tokens,
"completion_tokens": self.completion_tokens,
"successful_requests": self.successful_requests,
}

View File

@@ -1,30 +1,36 @@
import threading
import time
from typing import Any, Dict, Iterator, List, Optional, Tuple, Union
from typing import (
Any,
Dict,
Iterator,
List,
Optional,
Tuple,
Union,
)
from langchain.agents import AgentExecutor
from langchain.agents.agent import ExceptionTool
from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain_core.agents import AgentAction, AgentFinish, AgentStep
from langchain_core.exceptions import OutputParserException
from langchain_core.pydantic_v1 import root_validator
from langchain_core.tools import BaseTool
from langchain_core.utils.input import get_color_mapping
from pydantic import InstanceOf
from crewai.agents.agent_builder.base_agent_executor_mixin import (
CrewAgentExecutorMixin,
)
from crewai.agents.tools_handler import ToolsHandler
from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
from crewai.utilities import I18N
from crewai.utilities.constants import TRAINING_DATA_FILE
from crewai.utilities.converter import ConverterError
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
from crewai.utilities.training_handler import CrewTrainingHandler
from crewai.utilities import I18N
class CrewAgentExecutor(AgentExecutor):
class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
_i18n: I18N = I18N()
should_ask_for_human_input: bool = False
llm: Any = None
@@ -46,61 +52,6 @@ class CrewAgentExecutor(AgentExecutor):
prompt_template: Optional[str] = None
response_template: Optional[str] = None
@root_validator()
def set_force_answer_max_iterations(cls, values: Dict) -> Dict:
values["force_answer_max_iterations"] = values["max_iterations"] - 2
return values
def _should_force_answer(self) -> bool:
return (
self.iterations == self.force_answer_max_iterations
) and not self.have_forced_answer
def _create_short_term_memory(self, output) -> None:
if (
self.crew
and self.crew.memory
and "Action: Delegate work to coworker" not in output.log
):
memory = ShortTermMemoryItem(
data=output.log,
agent=self.crew_agent.role,
metadata={
"observation": self.task.description,
},
)
self.crew._short_term_memory.save(memory)
def _create_long_term_memory(self, output) -> None:
if self.crew and self.crew.memory:
ltm_agent = TaskEvaluator(self.crew_agent)
evaluation = ltm_agent.evaluate(self.task, output.log)
if isinstance(evaluation, ConverterError):
return
long_term_memory = LongTermMemoryItem(
task=self.task.description,
agent=self.crew_agent.role,
quality=evaluation.quality,
datetime=str(time.time()),
expected_output=self.task.expected_output,
metadata={
"suggestions": evaluation.suggestions,
"quality": evaluation.quality,
},
)
self.crew._long_term_memory.save(long_term_memory)
for entity in evaluation.entities:
entity_memory = EntityMemoryItem(
name=entity.name,
type=entity.type,
description=entity.description,
relationships="\n".join([f"- {r}" for r in entity.relationships]),
)
self.crew._entity_memory.save(entity_memory)
def _call(
self,
inputs: Dict[str, str],
@@ -310,12 +261,6 @@ class CrewAgentExecutor(AgentExecutor):
)
yield AgentStep(action=agent_action, observation=observation)
def _ask_human_input(self, final_answer: dict) -> str:
"""Get human input."""
return input(
self._i18n.slice("getting_input").format(final_answer=final_answer)
)
def _handle_crew_training_output(
self, output: AgentFinish, human_feedback: str | None = None
) -> None:

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = "^0.32.2" }
crewai = { extras = ["tools"], version = "^0.35.8" }
[tool.poetry.scripts]
{{folder_name}} = "{{folder_name}}.main:run"

View File

@@ -0,0 +1,37 @@
from typing import Callable, Optional, Any
from pydantic import BaseModel
from crewai.task import Task
class ConditionalTask(Task):
"""
A task that can be conditionally executed based on the output of another task.
Note: This cannot be the only task in your crew and cannot be the first task, since it needs context from a previous task.
"""
condition: Optional[Callable[[BaseModel], bool]] = None
def __init__(
self,
*args,
condition: Optional[Callable[[BaseModel], bool]] = None,
**kwargs,
):
super().__init__(*args, **kwargs)
self.condition = condition
def should_execute(self, context: Any) -> bool:
"""
Determines whether the conditional task should be executed based on the provided context.
Args:
context (Any): The context or output from the previous task that will be evaluated by the condition.
Returns:
bool: True if the task should be executed, False otherwise.
"""
if self.condition:
return self.condition(context)
return True
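A hypothetical usage sketch (all names are illustrative; per the sequential-process logic in this changeset, the condition receives the output of the previous task):
```python
from crewai.conditional_task import ConditionalTask

def needs_followup(previous_output) -> bool:
    # Run this task only when the previous output looks incomplete.
    return "insufficient data" in str(previous_output).lower()

followup_task = ConditionalTask(
    description="Gather additional data...",
    expected_output="Supplementary data",
    agent=researcher,  # assumes a researcher Agent defined elsewhere
    condition=needs_followup,
)
```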

View File

@@ -1,7 +1,8 @@
import asyncio
import json
import uuid
from typing import Any, Dict, List, Optional, Union
from concurrent.futures import Future
from typing import Any, Dict, List, Optional, Tuple, Union
from langchain_core.callbacks import BaseCallbackHandler
from pydantic import (
@@ -18,18 +19,28 @@ from pydantic import (
from pydantic_core import PydanticCustomError
from crewai.agent import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.cache import CacheHandler
from crewai.conditional_task import ConditionalTask
from crewai.crews.crew_output import CrewOutput
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.process import Process
from crewai.task import Task
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry import Telemetry
from crewai.tools.agent_tools import AgentTools
from crewai.utilities import I18N, FileHandler, Logger, RPMController
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
from crewai.utilities.formatter import aggregate_raw_outputs_from_task_outputs
from crewai.utilities.training_handler import CrewTrainingHandler
try:
import agentops
except ImportError:
agentops = None
class Crew(BaseModel):
"""
@@ -50,7 +61,6 @@ class Crew(BaseModel):
max_rpm: Maximum number of requests per minute for the crew execution to be respected.
prompt_file: Path to the prompt json file to be used for the crew.
id: A unique identifier for the crew instance.
full_output: Whether the crew should return the full output with all tasks outputs and token usage metrics or just the final output.
task_callback: Callback to be executed after each task for every agents execution.
step_callback: Callback to be executed after each step for every agents execution.
share_crew: Whether you want to share the complete crew information and execution with crewAI to make the library better, and allow us to train models.
@@ -71,7 +81,7 @@ class Crew(BaseModel):
cache: bool = Field(default=True)
model_config = ConfigDict(arbitrary_types_allowed=True)
tasks: List[Task] = Field(default_factory=list)
agents: List[Agent] = Field(default_factory=list)
agents: List[BaseAgent] = Field(default_factory=list)
process: Process = Field(default=Process.sequential)
verbose: Union[int, bool] = Field(default=0)
memory: bool = Field(
@@ -86,14 +96,10 @@ class Crew(BaseModel):
default=None,
description="Metrics for the LLM usage during all tasks execution.",
)
full_output: Optional[bool] = Field(
default=False,
description="Whether the crew should return the full output with all tasks outputs and token usage metrics or just the final output.",
)
manager_llm: Optional[Any] = Field(
description="Language model that will run the agent.", default=None
)
manager_agent: Optional[Any] = Field(
manager_agent: Optional[BaseAgent] = Field(
description="Custom agent that will be used as manager.", default=None
)
manager_callbacks: Optional[List[InstanceOf[BaseCallbackHandler]]] = Field(
@@ -218,6 +224,17 @@ class Crew(BaseModel):
agent.set_rpm_controller(self._rpm_controller)
return self
@model_validator(mode="after")
def validate_first_task(self) -> "Crew":
"""Ensure the first task is not a ConditionalTask."""
if self.tasks and isinstance(self.tasks[0], ConditionalTask):
raise PydanticCustomError(
"invalid_first_task",
"The first task cannot be a ConditionalTask.",
{},
)
return self
def _setup_from_config(self):
assert self.config is not None, "Config should not be None."
@@ -277,22 +294,29 @@ class Crew(BaseModel):
def kickoff(
self,
inputs: Optional[Dict[str, Any]] = {},
) -> Union[str, Dict[str, Any]]:
inputs: Optional[Dict[str, Any]] = None,
) -> CrewOutput:
"""Starts the crew to work on its assigned tasks."""
self._execution_span = self._telemetry.crew_execution_span(self)
# type: ignore # Argument 1 to "_interpolate_inputs" of "Crew" has incompatible type "dict[str, Any] | None"; expected "dict[str, Any]"
self._execution_span = self._telemetry.crew_execution_span(self, inputs)
if inputs is not None:
self._interpolate_inputs(inputs)
self._interpolate_inputs(inputs)
self._set_tasks_callbacks()
i18n = I18N(prompt_file=self.prompt_file)
for agent in self.agents:
# type: ignore # Argument 1 to "_interpolate_inputs" of "Crew" has incompatible type "dict[str, Any] | None"; expected "dict[str, Any]"
agent.i18n = i18n
agent.crew = self
# type: ignore[attr-defined] # Argument 1 to "_interpolate_inputs" of "Crew" has incompatible type "dict[str, Any] | None"; expected "dict[str, Any]"
agent.crew = self # type: ignore[attr-defined]
# TODO: Create an AgentFunctionCalling protocol for future refactoring
if not agent.function_calling_llm:
agent.function_calling_llm = self.function_calling_llm
if agent.allow_code_execution:
agent.tools += agent.get_code_execution_tools()
if not agent.step_callback:
agent.step_callback = self.step_callback
@@ -311,66 +335,106 @@ class Crew(BaseModel):
raise NotImplementedError(
f"The process '{self.process}' is not implemented yet."
)
metrics += [agent._token_process.get_summary() for agent in self.agents]
metrics = metrics + [
agent._token_process.get_summary() for agent in self.agents
]
self.usage_metrics = {
key: sum([m[key] for m in metrics if m is not None]) for key in metrics[0]
}
return result
def kickoff_for_each(self, inputs: List[Dict[str, Any]]) -> List:
def kickoff_for_each(self, inputs: List[Dict[str, Any]]) -> List[CrewOutput]:
"""Executes the Crew's workflow for each input in the list and aggregates results."""
results = []
results: List[CrewOutput] = []
# Initialize the parent crew's usage metrics
total_usage_metrics = {
"total_tokens": 0,
"prompt_tokens": 0,
"completion_tokens": 0,
"successful_requests": 0,
}
for input_data in inputs:
crew = self.copy()
for task in crew.tasks:
task.interpolate_inputs(input_data)
for agent in crew.agents:
agent.interpolate_inputs(input_data)
output = crew.kickoff(inputs=input_data)
if crew.usage_metrics:
for key in total_usage_metrics:
total_usage_metrics[key] += crew.usage_metrics.get(key, 0)
output = crew.kickoff()
results.append(output)
self.usage_metrics = total_usage_metrics
return results
async def kickoff_async(
self, inputs: Optional[Dict[str, Any]] = {}
self, inputs: Optional[CrewOutput] = {}
) -> Union[str, Dict]:
"""Asynchronous kickoff method to start the crew execution."""
return await asyncio.to_thread(self.kickoff, inputs)
async def kickoff_for_each_async(self, inputs: List[Dict]) -> List[Any]:
async def run_crew(input_data):
crew = self.copy()
async def kickoff_for_each_async(self, inputs: List[Dict]) -> List[CrewOutput]:
crew_copies = [self.copy() for _ in inputs]
for task in crew.tasks:
task.interpolate_inputs(input_data)
for agent in crew.agents:
agent.interpolate_inputs(input_data)
async def run_crew(crew, input_data):
return await crew.kickoff_async(inputs=input_data)
return await crew.kickoff_async()
tasks = [asyncio.create_task(run_crew(input_data)) for input_data in inputs]
tasks = [
asyncio.create_task(run_crew(crew_copies[i], inputs[i]))
for i in range(len(inputs))
]
results = await asyncio.gather(*tasks)
total_usage_metrics = {
"total_tokens": 0,
"prompt_tokens": 0,
"completion_tokens": 0,
"successful_requests": 0,
}
for crew in crew_copies:
if crew.usage_metrics:
for key in total_usage_metrics:
total_usage_metrics[key] += crew.usage_metrics.get(key, 0)
self.usage_metrics = total_usage_metrics
return results
def _run_sequential_process(self) -> str:
def _run_sequential_process(self) -> CrewOutput:
"""Executes tasks sequentially and returns the final output."""
task_output = ""
task_outputs: List[TaskOutput] = []
futures: List[Tuple[Task, Future[TaskOutput]]] = []
for task in self.tasks:
if task.agent.allow_delegation: # type: ignore # Item "None" of "Agent | None" has no attribute "allow_delegation"
if isinstance(task, ConditionalTask):
if futures:
task_outputs = []
for future_task, future in futures:
task_output = future.result()
task_outputs.append(task_output)
self._process_task_result(future_task, task_output)
futures.clear()
previous_output = task_outputs[-1] if task_outputs else None
if previous_output is not None and not task.should_execute(
previous_output.result()
):
self._logger.log(
"info",
f"Skipping conditional task: {task.description}",
color="yellow",
)
continue
if task.agent and task.agent.allow_delegation:
agents_for_delegation = [
agent for agent in self.agents if agent != task.agent
]
if len(self.agents) > 1 and len(agents_for_delegation) > 0:
task.tools += AgentTools(agents=agents_for_delegation).tools()
task.tools += task.agent.get_delegation_tools(agents_for_delegation)
role = task.agent.role if task.agent is not None else "None"
self._logger.log("debug", f"== Working Agent: {role}", color="bold_purple")
@@ -383,33 +447,67 @@ class Crew(BaseModel):
agent=role, task=task.description, status="started"
)
output = task.execute(context=task_output)
if task.async_execution:
context = aggregate_raw_outputs_from_task_outputs(task_outputs)
future = task.execute_async(
agent=task.agent, context=context, tools=task.tools
)
futures.append((task, future))
else:
# Before executing a synchronous task, wait for all async tasks to complete
if futures:
# Clear task_outputs before processing async tasks
task_outputs = []
for future_task, future in futures:
task_output = future.result()
task_outputs.append(task_output)
self._process_task_result(future_task, task_output)
if not task.async_execution:
task_output = output
# Clear the futures list after processing all async results
futures.clear()
role = task.agent.role if task.agent is not None else "None"
self._logger.log("debug", f"== [{role}] Task output: {task_output}\n\n")
context = aggregate_raw_outputs_from_task_outputs(task_outputs)
task_output = task.execute_sync(
agent=task.agent, context=context, tools=task.tools
)
task_outputs.append(task_output)
self._process_task_result(task, task_output)
if futures:
# Clear task_outputs before processing async tasks
task_outputs = []
for future_task, future in futures:
task_output = future.result()
task_outputs.append(task_output)
self._process_task_result(future_task, task_output)
if self.output_log_file:
self._file_handler.log(agent=role, task=task_output, status="completed")
final_string_output = aggregate_raw_outputs_from_task_outputs(task_outputs)
self._finish_execution(final_string_output)
# TODO: need to revert
# token_usage = self.calculate_usage_metrics()
token_usage = {
"total_tokens": 0,
"prompt_tokens": 0,
"completion_tokens": 0,
"successful_requests": 0,
}
self._finish_execution(task_output)
# type: ignore # Item "None" of "Agent | None" has no attribute "_token_process"
token_usage = task.agent._token_process.get_summary()
# type: ignore # Incompatible return value type (got "tuple[str, Any]", expected "str")
return self._format_output(task_output, token_usage)
return self._format_output(task_outputs, token_usage)
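The sequential scheduler above follows one rule: async tasks are launched immediately and held as futures, and every synchronous task (or conditional check) first drains those futures so their outputs become its context. A minimal, self-contained sketch of that rule, using a hypothetical MiniTask stand-in rather than the crewai Task class:

import threading
from concurrent.futures import Future
from typing import List, Tuple

class MiniTask:
    """Hypothetical stand-in for a crew task; not the crewai Task class."""
    def __init__(self, name: str, is_async: bool = False):
        self.name = name
        self.is_async = is_async

    def run(self, context: str) -> str:
        return f"{self.name}(context={context!r})"

def run_sequential(tasks: List[MiniTask]) -> List[str]:
    outputs: List[str] = []
    futures: List[Tuple[MiniTask, Future]] = []
    for task in tasks:
        if task.is_async:
            # Async tasks start immediately; results are collected later.
            future: Future = Future()
            ctx = "\n".join(outputs)
            threading.Thread(
                target=lambda f=future, t=task, c=ctx: f.set_result(t.run(c))
            ).start()
            futures.append((task, future))
        else:
            # Drain pending async work so it becomes this task's context.
            if futures:
                outputs = [f.result() for _, f in futures]
                futures.clear()
            outputs.append(task.run("\n".join(outputs)))
    if futures:
        # Trailing async tasks: their results become the final outputs.
        outputs = [f.result() for _, f in futures]
    return outputs

print(run_sequential([MiniTask("a", is_async=True), MiniTask("b"), MiniTask("c", is_async=True)]))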
def _run_hierarchical_process(self) -> Union[str, Dict[str, Any]]:
def _process_task_result(self, task: Task, output: TaskOutput) -> None:
role = task.agent.role if task.agent is not None else "None"
self._logger.log("debug", f"== [{role}] Task output: {output}\n\n")
if self.output_log_file:
self._file_handler.log(agent=role, task=output, status="completed")
def _run_hierarchical_process(self) -> Tuple[CrewOutput, Dict[str, Any]]:
"""Creates and assigns a manager agent to make sure the crew completes the tasks."""
i18n = I18N(prompt_file=self.prompt_file)
if self.manager_agent is not None:
self.manager_agent.allow_delegation = True
manager = self.manager_agent
if len(manager.tools) > 0:
raise Exception("Manager agent should not have tools")
manager.tools = AgentTools(agents=self.agents).tools()
manager.tools = self.manager_agent.get_delegation_tools(self.agents)
else:
manager = Agent(
role=i18n.retrieve("hierarchical_manager_agent", "role"),
@@ -417,10 +515,13 @@ class Crew(BaseModel):
backstory=i18n.retrieve("hierarchical_manager_agent", "backstory"),
tools=AgentTools(agents=self.agents).tools(),
llm=self.manager_llm,
verbose=True,
verbose=self.verbose,
)
self.manager_agent = manager
task_outputs: List[TaskOutput] = []
futures: List[Tuple[Task, Future[TaskOutput]]] = []
task_output = ""
for task in self.tasks:
self._logger.log("debug", f"Working Agent: {manager.role}")
self._logger.log("info", f"Starting Task: {task.description}")
@@ -430,23 +531,50 @@ class Crew(BaseModel):
agent=manager.role, task=task.description, status="started"
)
task_output = task.execute(
agent=manager, context=task_output, tools=manager.tools
)
self._logger.log("debug", f"[{manager.role}] Task output: {task_output}")
if self.output_log_file:
self._file_handler.log(
agent=manager.role, task=task_output, status="completed"
if task.async_execution:
context = aggregate_raw_outputs_from_task_outputs(task_outputs)
future = task.execute_async(
agent=manager, context=context, tools=manager.tools
)
futures.append((task, future))
else:
# Before executing a synchronous task, wait for all async tasks to complete
if futures:
# Clear task_outputs before processing async tasks
task_outputs = []
for future_task, future in futures:
task_output = future.result()
task_outputs.append(task_output)
self._process_task_result(future_task, task_output)
self._finish_execution(task_output)
# type: ignore # Incompatible return value type (got "tuple[str, Any]", expected "str")
manager_token_usage = manager._token_process.get_summary()
return self._format_output(
task_output, manager_token_usage
), manager_token_usage
# Clear the futures list after processing all async results
futures.clear()
context = aggregate_raw_outputs_from_task_outputs(task_outputs)
task_output = task.execute_sync(
agent=manager, context=context, tools=manager.tools
)
task_outputs = [task_output]
self._process_task_result(task, task_output)
# Process any remaining async results
if futures:
# Clear task_outputs before processing async tasks
task_outputs = []
for future_task, future in futures:
task_output = future.result()
task_outputs.append(task_output)
self._process_task_result(future_task, task_output)
final_string_output = aggregate_raw_outputs_from_task_outputs(task_outputs)
self._finish_execution(final_string_output)
token_usage = self.calculate_usage_metrics()
return (
self._format_output(task_outputs, token_usage),
token_usage,
)
def copy(self):
"""Create a deep copy of the Crew."""
@@ -461,12 +589,13 @@ class Crew(BaseModel):
"_short_term_memory",
"_long_term_memory",
"_entity_memory",
"_telemetry",
"agents",
"tasks",
}
cloned_agents = [agent.copy() for agent in self.agents]
cloned_tasks = [task.copy() for task in self.tasks]
cloned_tasks = [task.copy(cloned_agents) for task in self.tasks]
copied_data = self.model_dump(exclude=exclude)
copied_data = {k: v for k, v in copied_data.items() if v is not None}
@@ -494,28 +623,61 @@ class Crew(BaseModel):
for task in self.tasks
]
# type: ignore # "interpolate_inputs" of "Agent" does not return a value (it only ever returns None)
[agent.interpolate_inputs(inputs) for agent in self.agents]
for agent in self.agents:
agent.interpolate_inputs(inputs)
def _format_output(
self, output: str, token_usage: Optional[Dict[str, Any]]
) -> Union[str, Dict[str, Any]]:
self, output: List[TaskOutput], token_usage: Optional[Dict[str, Any]]
) -> CrewOutput:
"""
Formats the output of the crew execution.
If full_output is True, the returned data is a dictionary; otherwise a string is returned
"""
if self.full_output:
return { # type: ignore # Incompatible return value type (got "dict[str, Sequence[str | TaskOutput | None]]", expected "str")
"final_output": output,
"tasks_outputs": [task.output for task in self.tasks if task],
"usage_metrics": token_usage,
}
else:
return output
def _finish_execution(self, output) -> None:
# breakpoint()
task_output = []
for task in self.tasks:
if task.output:
# print("task.output", task.output)
task_output.append(task.output.result())
return CrewOutput(
output=output,
# tasks_output=[task.output for task in self.tasks if task],
tasks_output=task_output,
token_usage=token_usage,
)
def _finish_execution(self, final_string_output: str) -> None:
if self.max_rpm:
self._rpm_controller.stop_rpm_counter()
self._telemetry.end_crew(self, output)
if agentops:
agentops.end_session(
end_state="Success",
end_state_reason="Finished Execution",
is_auto_end=True,
)
self._telemetry.end_crew(self, final_string_output)
def calculate_usage_metrics(self) -> Dict[str, int]:
"""Calculates and returns the usage metrics."""
total_usage_metrics = {
"total_tokens": 0,
"prompt_tokens": 0,
"completion_tokens": 0,
"successful_requests": 0,
}
for agent in self.agents:
if hasattr(agent, "_token_process"):
token_sum = agent._token_process.get_summary()
for key in total_usage_metrics:
total_usage_metrics[key] += token_sum.get(key, 0)
if self.manager_agent and hasattr(self.manager_agent, "_token_process"):
token_sum = self.manager_agent._token_process.get_summary()
for key in total_usage_metrics:
total_usage_metrics[key] += token_sum.get(key, 0)
return total_usage_metrics
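calculate_usage_metrics simply sums each agent's token summary key by key, treating missing keys as zero. The same aggregation, restated as a standalone sketch:

from typing import Dict, List

def aggregate_usage(summaries: List[Dict[str, int]]) -> Dict[str, int]:
    totals = {
        "total_tokens": 0,
        "prompt_tokens": 0,
        "completion_tokens": 0,
        "successful_requests": 0,
    }
    for summary in summaries:
        for key in totals:
            totals[key] += summary.get(key, 0)  # missing keys count as 0
    return totals

print(aggregate_usage([
    {"total_tokens": 120, "prompt_tokens": 90, "completion_tokens": 30, "successful_requests": 1},
    {"total_tokens": 40, "prompt_tokens": 25, "completion_tokens": 15, "successful_requests": 1},
]))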
def __repr__(self):
return f"Crew(id={self.id}, process={self.process}, number_of_agents={len(self.agents)}, number_of_tasks={len(self.tasks)})"

View File

@@ -0,0 +1 @@
from .crew_output import CrewOutput

View File

@@ -0,0 +1,48 @@
from typing import Any, Dict, List, Union
from pydantic import BaseModel, Field
from crewai.tasks.task_output import TaskOutput
from crewai.utilities.formatter import aggregate_raw_outputs_from_task_outputs
class CrewOutput(BaseModel):
output: List[TaskOutput] = Field(description="Result of the final task")
# NOTE HERE
# tasks_output: list[TaskOutput] = Field(
# description="Output of each task", default=[]
# )
tasks_output: list[Union[str, BaseModel, Dict[str, Any]]] = Field(
description="Output of each task", default=[]
)
token_usage: Dict[str, Any] = Field(
description="Processed token summary", default={}
)
# TODO: Ask @joao what is the desired behavior here
def result(
self,
) -> List[str | BaseModel | Dict[str, Any]]:
"""Return the result of the task based on the available output."""
results = [output.result() for output in self.output]
return results
def raw_output(self) -> str:
"""Return the raw output of the task."""
return aggregate_raw_outputs_from_task_outputs(self.output)
def to_output_dict(self) -> List[Dict[str, Any]]:
output_dict = [output.to_output_dict() for output in self.output]
return output_dict
def __getitem__(self, key: str) -> Any:
if len(self.output) == 0:
return None
elif len(self.output) == 1:
return self.output[0][key]
else:
return [output[key] for output in self.output]
# TODO: Confirm with Joao that we want to print the raw output and not the object
def __str__(self):
return str(self.raw_output())
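CrewOutput.__getitem__ fans a key lookup out across the task outputs: no outputs yields None, one output yields the value, several yield a list. A small sketch of that rule with a hypothetical MiniOutput in place of TaskOutput:

from typing import Any, List

class MiniOutput:
    """Hypothetical stand-in for TaskOutput."""
    def __init__(self, data: dict):
        self.data = data
    def __getitem__(self, key: str) -> Any:
        return self.data[key]

def crew_getitem(outputs: List[MiniOutput], key: str) -> Any:
    # Mirrors the fan-out rule above: none -> None, one -> value, many -> list.
    if len(outputs) == 0:
        return None
    if len(outputs) == 1:
        return outputs[0][key]
    return [o[key] for o in outputs]

print(crew_getitem([], "score"))                                          # None
print(crew_getitem([MiniOutput({"score": 1})], "score"))                  # 1
print(crew_getitem([MiniOutput({"score": 1}), MiniOutput({"score": 2})], "score"))  # [1, 2]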

View File

@@ -1,17 +1,22 @@
from copy import deepcopy
import os
import re
import threading
import uuid
from typing import Any, Dict, List, Optional, Type
from concurrent.futures import Future
from copy import copy
from typing import Any, Dict, List, Optional, Type, Union
from langchain_openai import ChatOpenAI
from opentelemetry.trace import Span
from pydantic import UUID4, BaseModel, Field, field_validator, model_validator
from pydantic_core import PydanticCustomError
from crewai.agent import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tasks.task_output import TaskOutput
from crewai.utilities import I18N, Converter, ConverterError, Printer
from crewai.telemetry.telemetry import Telemetry
from crewai.utilities.converter import Converter, ConverterError
from crewai.utilities.i18n import I18N
from crewai.utilities.printer import Printer
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
@@ -42,7 +47,6 @@ class Task(BaseModel):
tools_errors: int = 0
delegations: int = 0
i18n: I18N = I18N()
thread: Optional[threading.Thread] = None
prompt_context: Optional[str] = None
description: str = Field(description="Description of the actual task.")
expected_output: str = Field(
@@ -55,7 +59,7 @@ class Task(BaseModel):
callback: Optional[Any] = Field(
description="Callback to be executed after the task is completed.", default=None
)
agent: Optional[Agent] = Field(
agent: Optional[BaseAgent] = Field(
description="Agent responsible for execution the task.", default=None
)
context: Optional[List["Task"]] = Field(
@@ -95,8 +99,11 @@ class Task(BaseModel):
default=False,
)
_telemetry: Telemetry
_execution_span: Span | None = None
_original_description: str | None = None
_original_expected_output: str | None = None
_thread: threading.Thread | None = None
def __init__(__pydantic_self__, **data):
config = data.pop("config", {})
@@ -118,6 +125,12 @@ class Task(BaseModel):
return value[1:]
return value
@model_validator(mode="after")
def set_private_attrs(self) -> "Task":
"""Set private attributes."""
self._telemetry = Telemetry()
return self
@model_validator(mode="after")
def set_attributes_based_on_config(self) -> "Task":
"""Set attributes based on the agent configuration."""
@@ -145,17 +158,47 @@ class Task(BaseModel):
)
return self
def execute( # type: ignore # Missing return statement
def execute_sync(
self,
agent: Agent | None = None,
agent: Optional[BaseAgent] = None,
context: Optional[str] = None,
tools: Optional[List[Any]] = None,
) -> str:
"""Execute the task.
) -> TaskOutput:
"""Execute the task synchronously."""
return self._execute_core(agent, context, tools)
Returns:
Output of the task.
"""
def execute_async(
self,
agent: BaseAgent | None = None,
context: Optional[str] = None,
tools: Optional[List[Any]] = None,
) -> Future[TaskOutput]:
"""Execute the task asynchronously."""
future = Future()
threading.Thread(
target=self._execute_task_async, args=(agent, context, tools, future)
).start()
return future
def _execute_task_async(
self,
agent: Optional[BaseAgent],
context: Optional[str],
tools: Optional[List[Any]],
future: Future[TaskOutput],
) -> None:
"""Execute the task asynchronously with context handling."""
result = self._execute_core(agent, context, tools)
future.set_result(result)
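execute_async pairs a concurrent.futures.Future with a plain thread: the thread runs the task and delivers the result via set_result, so callers can block on future.result(). A sketch of the same pattern, with one labeled addition (the code above sets only the result; this sketch also forwards exceptions):

import threading
from concurrent.futures import Future
from typing import Any, Callable

def execute_async(fn: Callable[..., Any], *args: Any) -> Future:
    """Run fn(*args) on a background thread and expose the result as a Future."""
    future: Future = Future()

    def worker() -> None:
        try:
            future.set_result(fn(*args))
        except Exception as exc:
            # Addition over the pattern above: forward errors to the caller.
            future.set_exception(exc)

    threading.Thread(target=worker, daemon=True).start()
    return future

f = execute_async(lambda x: x * 2, 21)
print(f.result())  # blocks until the worker thread finishes -> 42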
def _execute_core(
self,
agent: Optional[BaseAgent],
context: Optional[str],
tools: Optional[List[Any]],
) -> TaskOutput:
"""Run the core execution logic of the task."""
self._execution_span = self._telemetry.task_started(self)
agent = agent or self.agent
if not agent:
@@ -164,54 +207,41 @@ class Task(BaseModel):
)
if self.context:
# type: ignore # Incompatible types in assignment (expression has type "list[Never]", variable has type "str | None")
context = []
context_list = []
for task in self.context:
if task.async_execution:
task.thread.join() # type: ignore # Item "None" of "Thread | None" has no attribute "join"
if task.async_execution and task._thread:
task._thread.join()
if task and task.output:
# type: ignore # Item "str" of "str | None" has no attribute "append"
context.append(task.output.raw_output)
# type: ignore # Argument 1 to "join" of "str" has incompatible type "str | None"; expected "Iterable[str]"
context = "\n".join(context)
context_list.append(task.output.raw_output)
context = "\n".join(context_list)
self.prompt_context = context
tools = tools or self.tools
if self.async_execution:
self.thread = threading.Thread(
target=self._execute, args=(agent, self, context, tools)
)
self.thread.start()
else:
result = self._execute(
task=self,
agent=agent,
context=context,
tools=tools,
)
return result
def _execute(self, agent, task, context, tools):
result = agent.execute_task(
task=task,
task=self,
context=context,
tools=tools,
)
exported_output = self._export_output(result)
self.output = TaskOutput(
task_output = TaskOutput(
description=self.description,
exported_output=exported_output,
raw_output=result,
pydantic_output=exported_output["pydantic"],
json_output=exported_output["json"],
agent=agent.role,
)
self.output = task_output
if self.callback:
self.callback(self.output)
return exported_output
if self._execution_span:
self._telemetry.task_ended(self._execution_span, self)
self._execution_span = None
return task_output
def prompt(self) -> str:
"""Prompt the task.
@@ -246,7 +276,7 @@ class Task(BaseModel):
"""Increment the delegations counter."""
self.delegations += 1
def copy(self):
def copy(self, agents: Optional[List["BaseAgent"]] = None) -> "Task":
"""Create a deep copy of the Task."""
exclude = {
"id",
@@ -261,8 +291,12 @@ class Task(BaseModel):
cloned_context = (
[task.copy() for task in self.context] if self.context else None
)
cloned_agent = self.agent.copy() if self.agent else None
cloned_tools = deepcopy(self.tools) if self.tools else None
def get_agent_by_role(role: str) -> Union["BaseAgent", None]:
return next((agent for agent in agents if agent.role == role), None)
cloned_agent = get_agent_by_role(self.agent.role) if self.agent else None
cloned_tools = copy(self.tools) if self.tools else []
copied_task = Task(
**copied_data,
@@ -270,70 +304,95 @@ class Task(BaseModel):
agent=cloned_agent,
tools=cloned_tools,
)
return copied_task
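Task.copy now rebinds the task to one of the already-cloned agents by matching on role, instead of cloning its own agent, so a copied crew shares a single set of agent clones. A sketch of that rebinding with hypothetical MiniAgent and MiniTask classes:

from typing import List, Optional

class MiniAgent:
    def __init__(self, role: str):
        self.role = role

class MiniTask:
    def __init__(self, agent: Optional[MiniAgent]):
        self.agent = agent

    def copy(self, agents: List[MiniAgent]) -> "MiniTask":
        # Rebind to the cloned agent with the same role, as in Task.copy above.
        agent = next(
            (a for a in agents if self.agent and a.role == self.agent.role), None
        )
        return MiniTask(agent)

original = MiniAgent("researcher")
clone = MiniAgent("researcher")
task = MiniTask(original)
copied = task.copy([clone])
print(copied.agent is clone, copied.agent is original)  # True False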
def _export_output(self, result: str) -> Any:
exported_result = result
instructions = "I'm gonna convert this raw text into valid JSON."
def _export_output(
self, result: str
) -> Dict[str, Union[BaseModel, Dict[str, Any]]]:
output = {
"pydantic": None,
"json": None,
}
if self.output_pydantic or self.output_json:
model = self.output_pydantic or self.output_json
model_output = self._convert_to_model(result)
output["pydantic"] = (
model_output if isinstance(model_output, BaseModel) else None
)
output["json"] = model_output if isinstance(model_output, dict) else None
# try to convert task_output directly to pydantic/json
if self.output_file:
self._save_output(output["raw"])
return output
def _convert_to_model(self, result: str) -> Union[dict, BaseModel, str]:
model = self.output_pydantic or self.output_json
try:
return self._validate_model(result, model)
except Exception:
return self._handle_partial_json(result, model)
def _validate_model(
self, result: str, model: Type[BaseModel]
) -> Union[dict, BaseModel]:
exported_result = model.model_validate_json(result)
if self.output_json:
return exported_result.model_dump()
return exported_result
def _handle_partial_json(
self, result: str, model: Type[BaseModel]
) -> Union[dict, BaseModel, str]:
match = re.search(r"({.*})", result, re.DOTALL)
if match:
try:
# type: ignore # Item "None" of "type[BaseModel] | None" has no attribute "model_validate_json"
exported_result = model.model_validate_json(result)
exported_result = model.model_validate_json(match.group(0))
if self.output_json:
# type: ignore # "str" has no attribute "model_dump"
return exported_result.model_dump()
return exported_result
except Exception:
# sometimes the response contains valid JSON in the middle of text
match = re.search(r"({.*})", result, re.DOTALL)
if match:
try:
# type: ignore # Item "None" of "type[BaseModel] | None" has no attribute "model_validate_json"
exported_result = model.model_validate_json(match.group(0))
if self.output_json:
# type: ignore # "str" has no attribute "model_dump"
return exported_result.model_dump()
return exported_result
except Exception:
pass
pass
# type: ignore # Item "None" of "Agent | None" has no attribute "function_calling_llm"
llm = self.agent.function_calling_llm or self.agent.llm
return self._convert_with_instructions(result, model)
if not self._is_gpt(llm):
# type: ignore # Argument "model" to "PydanticSchemaParser" has incompatible type "type[BaseModel] | None"; expected "type[BaseModel]"
model_schema = PydanticSchemaParser(model=model).get_schema()
instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
def _convert_with_instructions(
self, result: str, model: Type[BaseModel]
) -> Union[dict, BaseModel, str]:
llm = self.agent.function_calling_llm or self.agent.llm
instructions = self._get_conversion_instructions(model, llm)
converter = Converter(
llm=llm, text=result, model=model, instructions=instructions
converter = Converter(
llm=llm, text=result, model=model, instructions=instructions
)
exported_result = (
converter.to_pydantic() if self.output_pydantic else converter.to_json()
)
if isinstance(exported_result, ConverterError):
Printer().print(
content=f"{exported_result.message} Using raw output instead.",
color="red",
)
if self.output_pydantic:
exported_result = converter.to_pydantic()
elif self.output_json:
exported_result = converter.to_json()
if isinstance(exported_result, ConverterError):
Printer().print(
content=f"{exported_result.message} Using raw output instead.",
color="red",
)
exported_result = result
if self.output_file:
content = (
# type: ignore # "str" has no attribute "json"
exported_result if not self.output_pydantic else exported_result.json()
)
self._save_file(content)
return result
return exported_result
def _get_conversion_instructions(self, model: Type[BaseModel], llm: Any) -> str:
instructions = "I'm gonna convert this raw text into valid JSON."
if not self._is_gpt(llm):
model_schema = PydanticSchemaParser(model=model).get_schema()
instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
return instructions
def _save_output(self, content: str) -> None:
directory = os.path.dirname(self.output_file)
if directory and not os.path.exists(directory):
os.makedirs(directory)
with open(self.output_file, "w", encoding="utf-8") as file:
file.write(content)
def _is_gpt(self, llm) -> bool:
return isinstance(llm, ChatOpenAI) and llm.openai_api_base is None
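The conversion chain above tries strict validation first, then falls back to pulling the first {...} span out of the text, since models often wrap valid JSON in prose. A runnable sketch of that fallback, assuming pydantic v2 and a hypothetical Score model:

import re
from pydantic import BaseModel

class Score(BaseModel):
    value: int

def parse_loose_json(text: str) -> Score | str:
    """Validate directly; on failure, try the first {...} span in the text."""
    try:
        return Score.model_validate_json(text)
    except Exception:
        match = re.search(r"({.*})", text, re.DOTALL)
        if match:
            try:
                return Score.model_validate_json(match.group(1))
            except Exception:
                pass
    return text  # fall back to the raw string

print(parse_loose_json('{"value": 7}'))
print(parse_loose_json('Sure! Here you go: {"value": 7} Hope that helps.'))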

View File

@@ -1,24 +1,56 @@
from typing import Optional, Union
from typing import Any, Dict, Optional, Union
from pydantic import BaseModel, Field, model_validator
# TODO: This is a breaking change. Confirm with @joao
class TaskOutput(BaseModel):
"""Class that represents the result of a task."""
description: str = Field(description="Description of the task")
summary: Optional[str] = Field(description="Summary of the task", default=None)
exported_output: Union[str, BaseModel] = Field(
description="Output of the task", default=None
raw_output: str = Field(description="Result of the task")
pydantic_output: Optional[BaseModel] = Field(
description="Pydantic model output", default=None
)
json_output: Optional[Dict[str, Any]] = Field(
description="JSON output", default=None
)
agent: str = Field(description="Agent that executed the task")
raw_output: str = Field(description="Result of the task")
@model_validator(mode="after")
def set_summary(self):
"""Set the summary field based on the description."""
excerpt = " ".join(self.description.split(" ")[:10])
self.summary = f"{excerpt}..."
return self
def result(self):
return self.exported_output
# TODO: Ask @joao what is the desired behavior here
def result(self) -> Union[str, BaseModel, Dict[str, Any]]:
"""Return the result of the task based on the available output."""
if self.pydantic_output:
return self.pydantic_output
elif self.json_output:
return self.json_output
else:
return self.raw_output
def __getitem__(self, key: str) -> Any:
"""Retrieve a value from the pydantic_output or json_output based on the key."""
if self.pydantic_output and hasattr(self.pydantic_output, key):
return getattr(self.pydantic_output, key)
if self.json_output and key in self.json_output:
return self.json_output[key]
raise KeyError(f"Key '{key}' not found in pydantic_output or json_output")
def to_output_dict(self) -> Dict[str, Any]:
"""Convert json_output and pydantic_output to a dictionary."""
output_dict = {}
if self.json_output:
output_dict.update(self.json_output)
if self.pydantic_output:
output_dict.update(self.pydantic_output.model_dump())
return output_dict
def __str__(self) -> str:
return self.raw_output
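TaskOutput.result() encodes a fidelity ordering: a pydantic model beats a JSON dict, which beats the raw string. The same precedence as a standalone helper (pick_result is hypothetical):

from typing import Any, Dict, Optional
from pydantic import BaseModel

def pick_result(
    raw: str,
    pydantic_output: Optional[BaseModel] = None,
    json_output: Optional[Dict[str, Any]] = None,
):
    # Highest-fidelity output wins: pydantic model, then JSON dict, then raw text.
    if pydantic_output is not None:
        return pydantic_output
    if json_output is not None:
        return json_output
    return raw

print(pick_result("plain text"))                     # 'plain text'
print(pick_result("ignored", json_output={"k": 1}))  # {'k': 1}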

View File

@@ -1,8 +1,10 @@
from __future__ import annotations
import asyncio
import json
import os
import platform
from typing import Any
from typing import TYPE_CHECKING, Any
import pkg_resources
from opentelemetry import trace
@@ -10,7 +12,11 @@ from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExport
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.trace import Status, StatusCode
from opentelemetry.trace import Span, Status, StatusCode
if TYPE_CHECKING:
from crewai.crew import Crew
from crewai.task import Task
class Telemetry:
@@ -88,9 +94,6 @@ class Telemetry:
self._add_attribute(span, "python_version", platform.python_version())
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "crew_process", crew.process)
self._add_attribute(
span, "crew_language", crew.prompt_file if crew.i18n else "None"
)
self._add_attribute(span, "crew_memory", crew.memory)
self._add_attribute(span, "crew_number_of_tasks", len(crew.tasks))
self._add_attribute(span, "crew_number_of_agents", len(crew.agents))
@@ -102,6 +105,8 @@ class Telemetry:
{
"id": str(agent.id),
"role": agent.role,
"goal": agent.goal,
"backstory": agent.backstory,
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
@@ -123,8 +128,16 @@ class Telemetry:
[
{
"id": str(task.id),
"description": task.description,
"expected_output": task.expected_output,
"async_execution?": task.async_execution,
"human_input?": task.human_input,
"agent_role": task.agent.role if task.agent else "None",
"context": (
[task.description for task in task.context]
if task.context
else None
),
"tools_names": [
tool.name.casefold() for tool in task.tools
],
@@ -143,6 +156,38 @@ class Telemetry:
except Exception:
pass
def task_started(self, task: Task) -> Span | None:
"""Records task started in a crew."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Task Execution")
self._add_attribute(span, "task_id", str(task.id))
self._add_attribute(span, "formatted_description", task.description)
self._add_attribute(
span, "formatted_expected_output", task.expected_output
)
return span
except Exception:
pass
return None
def task_ended(self, span: Span, task: Task):
"""Records task execution in a crew."""
if self.ready:
try:
self._add_attribute(
span, "output", task.output.raw_output if task.output else ""
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
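task_started and task_ended bracket each task in an OpenTelemetry span: open the span when work begins, attach attributes, mark it OK, and always end it. A sketch of that bracket using only the opentelemetry-api surface; without a configured TracerProvider the span is a harmless no-op:

from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

def traced_step(description: str) -> str:
    tracer = trace.get_tracer("example.telemetry")
    span = tracer.start_span("Task Execution")  # opened when work begins
    try:
        span.set_attribute("formatted_description", description)
        result = description.upper()  # stand-in for the real task work
        span.set_attribute("output", result)
        span.set_status(Status(StatusCode.OK))
        return result
    finally:
        span.end()  # always closed, even if the work raises

print(traced_step("summarize the findings"))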
def tool_repeated_usage(self, llm: Any, tool_name: str, attempts: int):
"""Records the repeated usage 'error' of a tool by an agent."""
if self.ready:
@@ -207,7 +252,7 @@ class Telemetry:
except Exception:
pass
def crew_execution_span(self, crew):
def crew_execution_span(self, crew: Crew, inputs: dict[str, Any] | None):
"""Records the complete execution of a crew.
This is only collected if the user has opted-in to share the crew.
"""
@@ -221,6 +266,7 @@ class Telemetry:
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "inputs", json.dumps(inputs))
self._add_attribute(
span,
"crew_agents",
@@ -238,7 +284,7 @@ class Telemetry:
"llm": json.dumps(self._safe_llm_attributes(agent.llm)),
"delegation_enabled?": agent.allow_delegation,
"tools_names": [
tool.name.casefold() for tool in agent.tools
tool.name.casefold() for tool in agent.tools or []
],
}
for agent in crew.agents
@@ -253,16 +299,17 @@ class Telemetry:
{
"id": str(task.id),
"description": task.description,
"expected_output": task.expected_output,
"async_execution?": task.async_execution,
"output": task.expected_output,
"human_input?": task.human_input,
"agent_role": task.agent.role if task.agent else "None",
"context": (
[task.description for task in task.context]
if task.context
else "None"
else None
),
"tools_names": [
tool.name.casefold() for tool in task.tools
tool.name.casefold() for tool in task.tools or []
],
}
for task in crew.tasks
@@ -273,7 +320,7 @@ class Telemetry:
except Exception:
pass
def end_crew(self, crew, output):
def end_crew(self, crew, final_string_output):
if (self.ready) and (crew.share_crew):
try:
self._add_attribute(
@@ -281,7 +328,9 @@ class Telemetry:
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(crew._execution_span, "crew_output", output)
self._add_attribute(
crew._execution_span, "crew_output", final_string_output
)
self._add_attribute(
crew._execution_span,
"crew_tasks_output",

View File

@@ -1,106 +1,25 @@
from typing import List, Union
from langchain.tools import StructuredTool
from pydantic import BaseModel, Field
from crewai.agent import Agent
from crewai.task import Task
from crewai.utilities import I18N
from crewai.agents.agent_builder.utilities.base_agent_tool import BaseAgentTools
class AgentTools(BaseModel):
class AgentTools(BaseAgentTools):
"""Default tools around agent delegation"""
agents: List[Agent] = Field(description="List of agents in this crew.")
i18n: I18N = Field(default=I18N(), description="Internationalization settings.")
def tools(self):
coworkers = f"[{', '.join([f'{agent.role}' for agent in self.agents])}]"
tools = [
StructuredTool.from_function(
func=self.delegate_work,
name="Delegate work to coworker",
description=self.i18n.tools("delegate_work").format(
coworkers=f"[{', '.join([f'{agent.role}' for agent in self.agents])}]"
coworkers=coworkers
),
),
StructuredTool.from_function(
func=self.ask_question,
name="Ask question to coworker",
description=self.i18n.tools("ask_question").format(
coworkers=f"[{', '.join([f'{agent.role}' for agent in self.agents])}]"
),
description=self.i18n.tools("ask_question").format(coworkers=coworkers),
),
]
return tools
def delegate_work(
self,
task: str,
context: Union[str, None] = None,
coworker: Union[str, None] = None,
**kwargs,
):
"""Useful to delegate a specific task to a coworker passing all necessary context and names."""
coworker = coworker or kwargs.get("co_worker") or kwargs.get("coworker")
if coworker:
is_list = coworker.startswith("[") and coworker.endswith("]")
if is_list:
coworker = coworker[1:-1].split(",")[0]
return self._execute(coworker, task, context)
def ask_question(
self,
question: str,
context: Union[str, None] = None,
coworker: Union[str, None] = None,
**kwargs,
):
"""Useful to ask a question, opinion or take from a coworker passing all necessary context and names."""
coworker = coworker or kwargs.get("co_worker") or kwargs.get("coworker")
if coworker:
is_list = coworker.startswith("[") and coworker.endswith("]")
if is_list:
coworker = coworker[1:-1].split(",")[0]
return self._execute(coworker, question, context)
def _execute(self, agent: Union[str, None], task: str, context: Union[str, None]):
"""Execute the command."""
try:
if agent is None:
agent = ""
# It is important to remove the quotes from the agent name.
# The reason we have to do this is because less-powerful LLMs
# have difficulty producing valid JSON.
# As a result, we end up with invalid JSON that is truncated like this:
# {"task": "....", "coworker": "....
# when it should look like this:
# {"task": "....", "coworker": "...."}
agent_name = agent.casefold().replace('"', "").replace("\n", "")
agent = [
available_agent
for available_agent in self.agents
if available_agent.role.casefold().replace("\n", "") == agent_name
]
except Exception as _:
return self.i18n.errors("agent_tool_unexsiting_coworker").format(
coworkers="\n".join(
[f"- {agent.role.casefold()}" for agent in self.agents]
)
)
if not agent:
return self.i18n.errors("agent_tool_unexsiting_coworker").format(
coworkers="\n".join(
[f"- {agent.role.casefold()}" for agent in self.agents]
)
)
agent = agent[0]
task = Task(
description=task,
agent=agent,
expected_output="Your best answer to your coworker asking you this, accounting for the context shared.",
)
return agent.execute_task(task, context)
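_execute matches the requested coworker against agent roles only after normalizing the name (casefold, strip quotes and newlines), precisely because weaker LLMs emit truncated or quoted JSON. The normalization on its own, as a sketch (match_role is hypothetical):

from typing import List, Optional

def match_role(raw_name: str, roles: List[str]) -> Optional[str]:
    # Strip quotes and newlines, then casefold: matching must be forgiving
    # of the malformed JSON described in the comments above.
    name = raw_name.casefold().replace('"', "").replace("\n", "")
    for role in roles:
        if role.casefold().replace("\n", "") == name:
            return role
    return None

print(match_role('"Senior Researcher"\n', ["Senior Researcher", "Writer"]))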

View File

@@ -11,6 +11,12 @@ from crewai.telemetry import Telemetry
from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
from crewai.utilities import I18N, Converter, ConverterError, Printer
agentops = None
try:
import agentops
except ImportError:
pass
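The guard above makes agentops a soft dependency: the module object itself doubles as the feature flag, so call sites just test `if agentops:`. A sketch of the pattern; the ToolEvent call mirrors the usage later in this diff:

# Optional-dependency guard: the module object doubles as the feature flag.
agentops = None
try:
    import agentops  # noqa: F811
except ImportError:
    pass

def maybe_record(event_name: str) -> None:
    if agentops:  # only call into the SDK when it is actually installed
        agentops.record(agentops.ToolEvent(name=event_name))

maybe_record("Delegate work to coworker")  # silently a no-op without agentops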
OPENAI_BIGGER_MODELS = ["gpt-4"]
@@ -91,15 +97,16 @@ class ToolUsage:
self.task.increment_tools_errors()
self._printer.print(content=f"\n\n{error}\n", color="red")
return error
return f"{self._use(tool_string=tool_string, tool=tool, calling=calling)}" # type: ignore # BUG?: "_use" of "ToolUsage" does not return a value (it only ever returns None)
return f"{self._use(tool_string=tool_string, tool=tool, calling=calling)}" # type: ignore # BUG?: "_use" of "ToolUsage" does not return a value (it only ever returns None)
def _use(
self,
tool_string: str,
tool: BaseTool,
calling: Union[ToolCalling, InstructorToolCalling],
) -> None: # TODO: Fix this return type
if self._check_tool_repeated_usage(calling=calling): # type: ignore # _check_tool_repeated_usage of "ToolUsage" does not return a value (it only ever returns None)
) -> str: # TODO: Fix this return type
tool_event = agentops.ToolEvent(name=calling.tool_name) if agentops else None
if self._check_tool_repeated_usage(calling=calling): # type: ignore # _check_tool_repeated_usage of "ToolUsage" does not return a value (it only ever returns None)
try:
result = self._i18n.errors("task_repeated_usage").format(
tool_names=self.tools_names
@@ -110,20 +117,20 @@ class ToolUsage:
tool_name=tool.name,
attempts=self._run_attempts,
)
result = self._format_result(result=result) # type: ignore # "_format_result" of "ToolUsage" does not return a value (it only ever returns None)
result = self._format_result(result=result) # type: ignore # "_format_result" of "ToolUsage" does not return a value (it only ever returns None)
return result # type: ignore # Fix the return type of this function
except Exception:
self.task.increment_tools_errors()
result = None # type: ignore # Incompatible types in assignment (expression has type "None", variable has type "str")
result = None # type: ignore # Incompatible types in assignment (expression has type "None", variable has type "str")
if self.tools_handler.cache:
result = self.tools_handler.cache.read( # type: ignore # Incompatible types in assignment (expression has type "str | None", variable has type "str")
tool=calling.tool_name, input=calling.arguments
)
if not result:
if result is None: #! finecwg: if not result --> if result is None
try:
if calling.tool_name in [
"Delegate work to coworker",
@@ -133,7 +140,7 @@ class ToolUsage:
if calling.arguments:
try:
acceptable_args = tool.args_schema.schema()["properties"].keys() # type: ignore # Item "None" of "type[BaseModel] | None" has no attribute "schema"
acceptable_args = tool.args_schema.schema()["properties"].keys() # type: ignore # Item "None" of "type[BaseModel] | None" has no attribute "schema"
arguments = {
k: v
for k, v in calling.arguments.items()
@@ -145,7 +152,7 @@ class ToolUsage:
arguments = calling.arguments
result = tool._run(**arguments)
else:
arguments = calling.arguments.values() # type: ignore # Incompatible types in assignment (expression has type "dict_values[str, Any]", variable has type "dict[str, Any]")
arguments = calling.arguments.values() # type: ignore # Incompatible types in assignment (expression has type "dict_values[str, Any]", variable has type "dict[str, Any]")
result = tool._run(*arguments)
else:
result = tool._run()
@@ -164,6 +171,10 @@ class ToolUsage:
return error # type: ignore # No return value expected
self.task.increment_tools_errors()
if agentops:
agentops.record(
agentops.ErrorEvent(exception=e, trigger_event=tool_event)
)
return self.use(calling=calling, tool_string=tool_string) # type: ignore # No return value expected
if self.tools_handler:
@@ -184,18 +195,20 @@ class ToolUsage:
)
self._printer.print(content=f"\n\n{result}\n", color="purple")
if agentops:
agentops.record(tool_event)
self._telemetry.tool_usage(
llm=self.function_calling_llm,
tool_name=tool.name,
attempts=self._run_attempts,
)
result = self._format_result(result=result) # type: ignore # "_format_result" of "ToolUsage" does not return a value (it only ever returns None)
)
result = self._format_result(result=result) # type: ignore # "_format_result" of "ToolUsage" does not return a value (it only ever returns None)
return result # type: ignore # No return value expected
def _format_result(self, result: Any) -> None:
self.task.used_tools += 1
if self._should_remember_format(): # type: ignore # "_should_remember_format" of "ToolUsage" does not return a value (it only ever returns None)
result = self._remember_format(result=result) # type: ignore # "_remember_format" of "ToolUsage" does not return a value (it only ever returns None)
result = self._remember_format(result=result) # type: ignore # "_remember_format" of "ToolUsage" does not return a value (it only ever returns None)
return result
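The `if not result` to `if result is None` change above matters for caching: a cached empty string is falsy, so `not result` would treat it as a miss and re-run the tool. A sketch isolating that distinction (the cache shape here is hypothetical):

from typing import Optional

_cache = {"Search tool::{}": ""}  # a legitimately cached empty-string result

def read_cache(key: str) -> Optional[str]:
    return _cache.get(key)

def use_tool(key: str) -> str:
    result = read_cache(key)
    # `if not result` would treat a cached "" as a miss and re-run the tool;
    # `is None` keeps falsy-but-valid results as cache hits.
    if result is None:
        result = "freshly computed"
    return result

print(repr(use_tool("Search tool::{}")))  # '' -> cache hit preserved
print(repr(use_tool("unknown::{}")))      # 'freshly computed'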
def _should_remember_format(self) -> None:

View File

@@ -1,9 +1,11 @@
import json
from typing import Any, Optional
from langchain.schema import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field, PrivateAttr, model_validator
from pydantic import model_validator
from crewai.agents.agent_builder.utilities.base_output_converter_base import (
OutputConverter,
)
class ConverterError(Exception):
@@ -14,19 +16,9 @@ class ConverterError(Exception):
self.message = message
class Converter(BaseModel):
class Converter(OutputConverter):
"""Class that converts text into either pydantic or json."""
_is_gpt: bool = PrivateAttr(default=True)
text: str = Field(description="Text to be converted.")
llm: Any = Field(description="The language model to be used to convert the text.")
model: Any = Field(description="The model to be used to convert the text.")
instructions: str = Field(description="Conversion instructions to the LLM.")
max_attemps: Optional[int] = Field(
description="Max number of attemps to try to get the output formated.",
default=3,
)
@model_validator(mode="after")
def check_llm_provider(self):
if not self._is_gpt(self.llm):

View File

@@ -5,6 +5,17 @@ from pydantic import BaseModel, Field
from crewai.utilities import Converter
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
agentops = None
try:
import agentops
from agentops import track_agent
except ImportError:
def track_agent(name):
def noop(f):
return f
return noop
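When the import fails, track_agent degrades to an identity decorator, so decorated classes still define cleanly without agentops. The same fallback in isolation:

# Fallback decorator: with the SDK missing, track_agent returns the class
# unchanged, so module import and class definition never break.
try:
    from agentops import track_agent
except ImportError:
    def track_agent(name):
        def noop(cls):
            return cls
        return noop

@track_agent(name="Task Evaluator")
class Evaluator:
    pass

print(Evaluator)  # usable whether or not agentops is installed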
class Entity(BaseModel):
@@ -38,6 +49,7 @@ class TrainingTaskEvaluation(BaseModel):
)
@track_agent(name="Task Evaluator")
class TaskEvaluator:
def __init__(self, original_agent):
self.llm = original_agent.llm

View File

@@ -0,0 +1,12 @@
from typing import List
from crewai.tasks.task_output import TaskOutput
def aggregate_raw_outputs_from_task_outputs(task_outputs: List[TaskOutput]) -> str:
"""Generate string context from the task outputs."""
dividers = "\n\n----------\n\n"
# Join task outputs with dividers
context = dividers.join(output.raw_output for output in task_outputs)
return context
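A quick usage sketch of the formatter above, with a dataclass standing in for TaskOutput:

from dataclasses import dataclass
from typing import List

@dataclass
class MiniTaskOutput:  # hypothetical stand-in for crewai's TaskOutput
    raw_output: str

def aggregate(outputs: List[MiniTaskOutput]) -> str:
    dividers = "\n\n----------\n\n"
    return dividers.join(o.raw_output for o in outputs)

print(aggregate([MiniTaskOutput("first"), MiniTaskOutput("second")]))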

View File

@@ -1,7 +1,7 @@
from datetime import datetime
from crewai.utilities.printer import Printer
from datetime import datetime
class Logger:
_printer = Printer()

View File

@@ -8,6 +8,8 @@ class Printer:
self._print_bold_green(content)
elif color == "bold_purple":
self._print_bold_purple(content)
elif color == "yellow":
self._print_yellow(content)
else:
print(content)
@@ -22,3 +24,6 @@ class Printer:
def _print_red(self, content):
print("\033[91m {}\033[00m".format(content))
def _print_yellow(self, content):
print("\033[93m {}\033[00m".format(content))

View File

@@ -4,46 +4,22 @@ import tiktoken
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import LLMResult
class TokenProcess:
total_tokens: int = 0
prompt_tokens: int = 0
completion_tokens: int = 0
successful_requests: int = 0
def sum_prompt_tokens(self, tokens: int):
self.prompt_tokens = self.prompt_tokens + tokens
self.total_tokens = self.total_tokens + tokens
def sum_completion_tokens(self, tokens: int):
self.completion_tokens = self.completion_tokens + tokens
self.total_tokens = self.total_tokens + tokens
def sum_successful_requests(self, requests: int):
self.successful_requests = self.successful_requests + requests
def get_summary(self) -> Dict[str, Any]:
return {
"total_tokens": self.total_tokens,
"prompt_tokens": self.prompt_tokens,
"completion_tokens": self.completion_tokens,
"successful_requests": self.successful_requests,
}
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
class TokenCalcHandler(BaseCallbackHandler):
model: str = ""
model_name: str = ""
token_cost_process: TokenProcess
def __init__(self, model, token_cost_process):
self.model = model
def __init__(self, model_name, token_cost_process):
self.model_name = model_name
self.token_cost_process = token_cost_process
def on_llm_start(
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> None:
try:
encoding = tiktoken.encoding_for_model(self.model)
encoding = tiktoken.encoding_for_model(self.model_name)
except KeyError:
encoding = tiktoken.get_encoding("cl100k_base")
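The rename to model_name keeps the tiktoken lookup working: encoding_for_model raises KeyError for names it does not know, and the handler falls back to cl100k_base. That fallback in isolation:

import tiktoken

def encoding_for(model_name: str):
    # Unknown model names fall back to the cl100k_base encoding.
    try:
        return tiktoken.encoding_for_model(model_name)
    except KeyError:
        return tiktoken.get_encoding("cl100k_base")

enc = encoding_for("some-future-model")
print(len(enc.encode("hello world")))  # token count under the fallback encoding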

View File

@@ -733,7 +733,7 @@ def test_agent_llm_uses_token_calc_handler_with_llm_has_model_name():
assert len(agent1.llm.callbacks) == 1
assert agent1.llm.callbacks[0].__class__.__name__ == "TokenCalcHandler"
assert agent1.llm.callbacks[0].model == "gpt-4o"
assert agent1.llm.callbacks[0].model_name == "gpt-4o"
assert (
agent1.llm.callbacks[0].token_cost_process.__class__.__name__ == "TokenProcess"
)

4 file diffs suppressed because they are too large.

View File

@@ -0,0 +1,591 @@
interactions:
- request:
body: '{"messages": [{"content": "You are dog Researcher. You have a lot of experience
with dog.\nYour personal goal is: Express hot takes on dog.To give my best complete
final answer to the task use the exact following format:\n\nThought: I now can
give a great answer\nFinal Answer: my best complete final answer to the task.\nYour
final answer must be the great and the most complete as possible, it must be
outcome described.\n\nI MUST use these formats, my job depends on it!\nCurrent
Task: Give me an analysis around dog.\n\nThis is the expect criteria for your
final answer: 1 bullet point about dog that''s under 15 words. \n you MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o",
"n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '951'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.34.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.34.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
now"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
great"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
Dogs"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
are"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
incredibly"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
loyal"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
and"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
provide"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
unmatched"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
companionship"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
humans"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 89d0fa4e7abf53db-ATL
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Tue, 02 Jul 2024 19:17:45 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=6Xl2nvdsXT4uSfQ3C1ZK.LWKGYekVs5ErrLDZOdI.50-1719947865-1.0.1.1-6RQoTCznxe7H868MoxghRegIZaElbG_bN_jbs94hmnsnuR1P9bptoj8o2DbOSvj48ubewyvy8L16mOZHlMLw_A;
path=/; expires=Tue, 02-Jul-24 19:47:45 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=kPTMOkGHQp0ytgVUrm3jFNiB9I.DDI2ONPRTr6IMTeo-1719947865623-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '102'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '16000000'
x-ratelimit-remaining-requests:
- '9997'
x-ratelimit-remaining-tokens:
- '15999783'
x-ratelimit-reset-requests:
- 14ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_2c5219e228ce79f0131c497230904013
status:
code: 200
message: OK
- request:
body: '{"messages": [{"content": "You are apple Researcher. You have a lot of
experience with apple.\nYour personal goal is: Express hot takes on apple.To
give my best complete final answer to the task use the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: my best complete final answer to
the task.\nYour final answer must be the great and the most complete as possible,
it must be outcome described.\n\nI MUST use these formats, my job depends on
it!\nCurrent Task: Give me an analysis around apple.\n\nThis is the expect criteria
for your final answer: 1 bullet point about apple that''s under 15 words. \n
you MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o",
"n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '961'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.34.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.34.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
now"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
great"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":".\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
Apple"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
revolution"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"izes"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
technology"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
with"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
sleek"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
designs"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
seamless"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
integration"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
and"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
innovative"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
user"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
experiences"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 89d0fa4e7ca907e6-ATL
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Tue, 02 Jul 2024 19:17:45 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=wf2ozMjr46sG0EhuZjpiDNagwTxC05ct3Hn7Y9Rs5AI-1719947865-1.0.1.1-uckxTTr7Yfe6sv4ZznqqrGTEz9E3_Cpp7OAWBIEeNz1Smdjwijw8YV5oYPe_6W4DrEtwVzRDxaqIHlWP55O0QA;
path=/; expires=Tue, 02-Jul-24 19:47:45 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=F9pWw4TeoPa8puOm5RN9Gp2oY0lRoN53ChZ1qFYx1S8-1719947865726-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '168'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '16000000'
x-ratelimit-remaining-requests:
- '9998'
x-ratelimit-remaining-tokens:
- '15999780'
x-ratelimit-reset-requests:
- 10ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_e6dfeda5935eae030bcc2da526234635
status:
code: 200
message: OK
- request:
body: '{"messages": [{"content": "You are cat Researcher. You have a lot of experience
with cat.\nYour personal goal is: Express hot takes on cat.To give my best complete
final answer to the task use the exact following format:\n\nThought: I now can
give a great answer\nFinal Answer: my best complete final answer to the task.\nYour
final answer must be the great and the most complete as possible, it must be
outcome described.\n\nI MUST use these formats, my job depends on it!\nCurrent
Task: Give me an analysis around cat.\n\nThis is the expect criteria for your
final answer: 1 bullet point about cat that''s under 15 words. \n you MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o",
"n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '951'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.34.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.34.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
now"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
great"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
Cats"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
are"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
master"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"ful"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
hunters"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
and"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
brilliant"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
problem"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"-sol"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"vers"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 89d0fa4e7ae912d7-ATL
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Tue, 02 Jul 2024 19:17:45 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=y7JNZ8WEp.q5pMXLi79ajfcI.F6MfE0GeYLw34Apkf0-1719947865-1.0.1.1-QKklGeYuOnsQROgqMs42XwqKNvW.mPrmcbtaxMnUg3eSgI7TRnRq4qPuSan0ynDt4Hd9NMuls2FR.Caa1MVr9Q;
path=/; expires=Tue, 02-Jul-24 19:47:45 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=FVQoSgcvVyiB_o43X6y5MGYgzGojmsQqS.nPObW3JYU-1719947865679-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '132'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '16000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '15999783'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_a06bde4044d3ee75edf08f333139679c
status:
code: 200
message: OK
version: 1
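
The block above is a vcrpy cassette: each OpenAI call the test makes is recorded once and then replayed from disk, so the suite stays fast, deterministic, and free of API spend. A minimal sketch of how a cassette like this is consumed, assuming vcrpy plus the openai Python client (the cassette path, test name, and prompt below are illustrative, not taken from this changeset):

```python
# Sketch only: replay a recorded streaming completion instead of hitting the
# network. vcrpy intercepts the HTTP call and serves the cassette's response.
import vcr
from openai import OpenAI

@vcr.use_cassette("tests/cassettes/test_hot_takes.yaml")  # hypothetical path
def test_replays_recorded_completion():
    client = OpenAI(api_key="fake-key")  # never validated on replay
    stream = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Give me an analysis around cat."}],
        stream=True,
        temperature=0.7,
    )
    # Concatenate the streamed deltas back into the recorded Final Answer.
    text = "".join(chunk.choices[0].delta.content or "" for chunk in stream)
    assert "Final Answer" in text
```

Under vcrpy's default request matching (method, host, path, and query, not the body), the replay does not require the prompt to match the recording byte for byte.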


@@ -0,0 +1,585 @@
interactions:
- request:
body: '{"messages": [{"content": "You are dog Researcher. You have a lot of experience
with dog.\nYour personal goal is: Express hot takes on dog.To give my best complete
final answer to the task use the exact following format:\n\nThought: I now can
give a great answer\nFinal Answer: my best complete final answer to the task.\nYour
final answer must be the great and the most complete as possible, it must be
outcome described.\n\nI MUST use these formats, my job depends on it!\nCurrent
Task: Give me an analysis around dog.\n\nThis is the expect criteria for your
final answer: 1 bullet point about dog that''s under 15 words. \n you MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o",
"n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '951'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.34.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.34.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
now"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
great"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
Dogs"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
are"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
incredibly"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
loyal"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
creatures"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
and"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
make"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
excellent"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
companions"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGly5pkMQFPEoB5vefeCguR5lZg5","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 89c8b85b1adac009-ATL
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Mon, 01 Jul 2024 19:14:38 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=EKr3m8uVLAQymRGlOvcrrYSniXqH.I6nlooc.HtxR58-1719861278-1.0.1.1-ShT9PH0Sv.qvbXjw_BhtziPUPaOIFBxlzXEIk_MXnfJ5PxggSkkaN25IKMZglSd3N2X.U2pWvFwywNQXiXlRnQ;
path=/; expires=Mon, 01-Jul-24 19:44:38 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=NdoJw5c7TqrbsjEH.ABD06WhM3d1BUh2BfsxOwuclSY-1719861278155-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '110'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '16000000'
x-ratelimit-remaining-requests:
- '9998'
x-ratelimit-remaining-tokens:
- '15999741'
x-ratelimit-reset-requests:
- 11ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_266fff38f6e0a997187154e25d6615e8
status:
code: 200
message: OK
- request:
body: '{"messages": [{"content": "You are apple Researcher. You have a lot of
experience with apple.\nYour personal goal is: Express hot takes on apple.To
give my best complete final answer to the task use the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: my best complete final answer to
the task.\nYour final answer must be the great and the most complete as possible,
it must be outcome described.\n\nI MUST use these formats, my job depends on
it!\nCurrent Task: Give me an analysis around apple.\n\nThis is the expect criteria
for your final answer: 1 bullet point about apple that''s under 15 words. \n
you MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o",
"n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '961'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.34.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.34.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
now"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
great"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":".\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
Apple''s"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
ecosystem"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
fosters"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
seamless"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
integration"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
but"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
limit"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
user"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
flexibility"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
and"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
choice"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyTsxWJNVf7aQLLtKEYroHrIXk","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 89c8b85b19501351-ATL
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Mon, 01 Jul 2024 19:14:38 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=R2LwekshY1B9Z5rF_RyOforiY4rLQreEXx6gOzhnZCI-1719861278-1.0.1.1-y_WShmsjsavKJEOt9Yw8nwjv05e4WdVaZGu8pYcS4z0wF9heVD.0C9W2aQodYxaWIQvkXiPm7y93ma7WCxUdBQ;
path=/; expires=Mon, 01-Jul-24 19:44:38 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=TLDVkWpa_eP62.b4QTrhowr4J_DsXwMZ2nGDaWD4ebU-1719861278245-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '99'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '16000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '15999780'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_39f52bd8b06ab4ada2c16853345eb6dc
status:
code: 200
message: OK
- request:
body: '{"messages": [{"content": "You are cat Researcher. You have a lot of experience
with cat.\nYour personal goal is: Express hot takes on cat.To give my best complete
final answer to the task use the exact following format:\n\nThought: I now can
give a great answer\nFinal Answer: my best complete final answer to the task.\nYour
final answer must be the great and the most complete as possible, it must be
outcome described.\n\nI MUST use these formats, my job depends on it!\nCurrent
Task: Give me an analysis around cat.\n\nThis is the expect criteria for your
final answer: 1 bullet point about cat that''s under 15 words. \n you MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o",
"n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '951'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.34.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.34.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
now"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
great"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
Cats"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
are"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
natural"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
hunters"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
with"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
unique"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
personalities"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
and"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
strong"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
territorial"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
instincts"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGlyAPhJWtbWCzqHs7gAID5K4T0X","object":"chat.completion.chunk","created":1719861278,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 89c8b85b1fde4546-ATL
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Mon, 01 Jul 2024 19:14:38 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=KdRY51WYOZSedMQIfG_vPmrPdO67._RkSjV0nq.khUk-1719861278-1.0.1.1-r0uJtNVxaGm2OCkQliG.tsX3vekPCDQb2IET3ywu41Igu1Qfz01rhz_WlvKIllsZlXIyd6rvHT7NxLo.UOtD7Q;
path=/; expires=Mon, 01-Jul-24 19:44:38 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=Qqu1FBant56uM9Al0JmDl7QMk9cRcojzFUt8volvWPo-1719861278308-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '156'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '16000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '15999783'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_a0c6e5796af340d6720b9cfd55df703e
status:
code: 200
message: OK
version: 1
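
Each `data:` line in the bodies above is one server-sent event whose `choices[0].delta` carries a token or two; the client appends the deltas in order until the `[DONE]` sentinel. A minimal sketch of that reassembly, where `raw_sse` stands in for the decoded `text/event-stream` body recorded in the cassette:

```python
import json

def reassemble(raw_sse: str) -> str:
    """Concatenate the delta contents of an OpenAI SSE stream."""
    parts = []
    for line in raw_sse.splitlines():
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank event separators
        payload = line[len("data: "):]
        if payload == "[DONE]":  # end-of-stream sentinel
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        parts.append(delta.get("content") or "")  # role-only/empty deltas add ""
    return "".join(parts)

# Applied to the dog cassette above, this yields:
# "Thought: I now can give a great answer\nFinal Answer: Dogs are incredibly
#  loyal creatures and make excellent companions."
```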

File diff suppressed because it is too large.


@@ -0,0 +1,314 @@
interactions:
- request:
body: '{"messages": [{"content": "You are test role. test backstory\nYour personal
goal is: test goalTo give my best complete final answer to the task use the
exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\nYour final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!\nCurrent Task: just say hi!\n\nThis is
the expect criteria for your final answer: your greeting \n you MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o",
"n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '853'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.34.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.34.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-9gGvixvu2sSTjML4oN3fsoMeTHbew","object":"chat.completion.chunk","created":1719861882,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvixvu2sSTjML4oN3fsoMeTHbew","object":"chat.completion.chunk","created":1719861882,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvixvu2sSTjML4oN3fsoMeTHbew","object":"chat.completion.chunk","created":1719861882,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvixvu2sSTjML4oN3fsoMeTHbew","object":"chat.completion.chunk","created":1719861882,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvixvu2sSTjML4oN3fsoMeTHbew","object":"chat.completion.chunk","created":1719861882,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
now"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvixvu2sSTjML4oN3fsoMeTHbew","object":"chat.completion.chunk","created":1719861882,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvixvu2sSTjML4oN3fsoMeTHbew","object":"chat.completion.chunk","created":1719861882,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvixvu2sSTjML4oN3fsoMeTHbew","object":"chat.completion.chunk","created":1719861882,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvixvu2sSTjML4oN3fsoMeTHbew","object":"chat.completion.chunk","created":1719861882,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
great"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvixvu2sSTjML4oN3fsoMeTHbew","object":"chat.completion.chunk","created":1719861882,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvixvu2sSTjML4oN3fsoMeTHbew","object":"chat.completion.chunk","created":1719861882,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvixvu2sSTjML4oN3fsoMeTHbew","object":"chat.completion.chunk","created":1719861882,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvixvu2sSTjML4oN3fsoMeTHbew","object":"chat.completion.chunk","created":1719861882,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvixvu2sSTjML4oN3fsoMeTHbew","object":"chat.completion.chunk","created":1719861882,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvixvu2sSTjML4oN3fsoMeTHbew","object":"chat.completion.chunk","created":1719861882,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
Hi"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvixvu2sSTjML4oN3fsoMeTHbew","object":"chat.completion.chunk","created":1719861882,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"!"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvixvu2sSTjML4oN3fsoMeTHbew","object":"chat.completion.chunk","created":1719861882,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 89c8c71ccad81823-ATL
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Mon, 01 Jul 2024 19:24:42 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=uU.2MR0L4Mv3xs4DzFlWOQLVId1dJXQBlWffhr9mqxU-1719861882-1.0.1.1-JSKN2_O9iYj8QCZjy0IGiunZxvXimz5Kzv5wQJedVua5E6WIl1UvP.wguXbK0cds7ayJReYnR8v8oAN2rmtnNQ;
path=/; expires=Mon, 01-Jul-24 19:54:42 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=yc5Q7WKbO5zoiGNQx86HpHNM3HeXi2HxCxw31lL_UuU-1719861882665-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '86'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '16000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '15999808'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_25d95f35048bf71e28d73fbed6576a6c
status:
code: 200
message: OK
- request:
body: '{"messages": [{"content": "You are test role. test backstory\nYour personal
goal is: test goalTo give my best complete final answer to the task use the
exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\nYour final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!\nCurrent Task: just say hello!\n\nThis
is the expect criteria for your final answer: your greeting \n you MUST return
the actual complete content as the final answer, not a summary.\n\nThis is the
context you''re working with:\nHi!\n\nBegin! This is VERY important to you,
use the tools available and give your best Final Answer, your job depends on
it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", "n": 1, "stop": ["\nObservation"],
"stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '905'
content-type:
- application/json
cookie:
- __cf_bm=uU.2MR0L4Mv3xs4DzFlWOQLVId1dJXQBlWffhr9mqxU-1719861882-1.0.1.1-JSKN2_O9iYj8QCZjy0IGiunZxvXimz5Kzv5wQJedVua5E6WIl1UvP.wguXbK0cds7ayJReYnR8v8oAN2rmtnNQ;
_cfuvid=yc5Q7WKbO5zoiGNQx86HpHNM3HeXi2HxCxw31lL_UuU-1719861882665-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.34.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.34.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-9gGvjRHciTPrlyXWRGu5z5C56L10c","object":"chat.completion.chunk","created":1719861883,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvjRHciTPrlyXWRGu5z5C56L10c","object":"chat.completion.chunk","created":1719861883,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvjRHciTPrlyXWRGu5z5C56L10c","object":"chat.completion.chunk","created":1719861883,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvjRHciTPrlyXWRGu5z5C56L10c","object":"chat.completion.chunk","created":1719861883,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvjRHciTPrlyXWRGu5z5C56L10c","object":"chat.completion.chunk","created":1719861883,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
now"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvjRHciTPrlyXWRGu5z5C56L10c","object":"chat.completion.chunk","created":1719861883,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvjRHciTPrlyXWRGu5z5C56L10c","object":"chat.completion.chunk","created":1719861883,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvjRHciTPrlyXWRGu5z5C56L10c","object":"chat.completion.chunk","created":1719861883,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvjRHciTPrlyXWRGu5z5C56L10c","object":"chat.completion.chunk","created":1719861883,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
great"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvjRHciTPrlyXWRGu5z5C56L10c","object":"chat.completion.chunk","created":1719861883,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvjRHciTPrlyXWRGu5z5C56L10c","object":"chat.completion.chunk","created":1719861883,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvjRHciTPrlyXWRGu5z5C56L10c","object":"chat.completion.chunk","created":1719861883,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvjRHciTPrlyXWRGu5z5C56L10c","object":"chat.completion.chunk","created":1719861883,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvjRHciTPrlyXWRGu5z5C56L10c","object":"chat.completion.chunk","created":1719861883,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvjRHciTPrlyXWRGu5z5C56L10c","object":"chat.completion.chunk","created":1719861883,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
Hello"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvjRHciTPrlyXWRGu5z5C56L10c","object":"chat.completion.chunk","created":1719861883,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"!"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gGvjRHciTPrlyXWRGu5z5C56L10c","object":"chat.completion.chunk","created":1719861883,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 89c8c7202e0f1823-ATL
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Mon, 01 Jul 2024 19:24:43 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '82'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '16000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '15999794'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_754b5067e8f56d5c1182dc0f57be0e45
status:
code: 200
message: OK
version: 1
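
One thing worth flagging: these cassettes were recorded against the live API, so they still carry Cloudflare `Set-Cookie` values and the `openai-organization` header, and the second request above even replays the `__cf_bm`/`_cfuvid` cookies issued earlier in the same session. A hedged sketch, assuming vcrpy (where `filter_headers` scrubs request headers and responses need a `before_record_response` hook), of stripping such values at record time; directory, mode, and cassette name are illustrative:

```python
import vcr

def scrub_response(response):
    # Drop session cookies from the recorded response before it is written.
    response["headers"].pop("Set-Cookie", None)
    return response

scrubbed_vcr = vcr.VCR(
    cassette_library_dir="tests/cassettes",
    record_mode="once",  # record on the first run, replay afterwards
    filter_headers=["authorization", "cookie"],  # request-side secrets
    before_record_response=scrub_response,       # response-side cookies
)

with scrubbed_vcr.use_cassette("test_sequential_kickoff.yaml"):
    ...  # exercise the crew; HTTP traffic is recorded or replayed here
```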

File diff suppressed because it is too large.


@@ -0,0 +1,193 @@
interactions:
- request:
body: '{"messages": [{"content": "You are Researcher. You''re an expert researcher,
specialized in technology, software engineering, AI and startups. You work as
a freelancer and is now working on doing research and analysis for a new customer.\nYour
personal goal is: Make the best research and analysis on content about AI and
AI agentsTo give my best complete final answer to the task use the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: my best complete
final answer to the task.\nYour final answer must be the great and the most
complete as possible, it must be outcome described.\n\nI MUST use these formats,
my job depends on it!\nCurrent Task: Look at the available data nd give me a
sense on the total number of sales.\n\nThis is the expect criteria for your
final answer: The total number of sales as an integer \n you MUST return the
actual complete content as the final answer, not a summary.\n\nBegin! This is
VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o",
"n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '1178'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.34.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.34.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
now"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
great"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
The"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
total"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
number"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
of"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
sales"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
is"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"150"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"0"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 89c9d0107c8abd30-ATL
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Mon, 01 Jul 2024 22:25:35 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=xIvvDveyc7bpEywphx5N4EscKoZiGAT_yDVu3aFAWZ4-1719872735-1.0.1.1-ZOUYc2kEes8fxrMFgGdVppzOh9nPbl4y1Syv73ORt38FBXePWFSTJrFZCZRU.zob6ks9nWzr2vBIZbBQdAOOGQ;
path=/; expires=Mon, 01-Jul-24 22:55:35 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=aG1BGRRkNAyxmctM98.DLqSNJ2Cx_OQYsMRQbd03.bo-1719872735091-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '80'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '16000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '15999725'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 1ms
x-request-id:
- req_c90015b7584729268f48a8b33ff7c5ea
status:
code: 200
message: OK
version: 1
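
The cassette above is a recorded pytest-vcr fixture: the request body carries the rendered agent prompt, and the streamed response resolves to "The total number of sales is 1500." A minimal sketch of how such a cassette gets replayed in a test. The agent/task wiring below is illustrative; only the @pytest.mark.vcr decorator, the prompt text, and the 1500 answer come from the recordings and tests in this diff:

import pytest
from crewai.agent import Agent
from crewai.crew import Crew
from crewai.task import Task

@pytest.mark.vcr(filter_headers=["authorization"])
def test_total_sales_replayed_from_cassette():
    # Hypothetical test; mirrors the prompt recorded in the cassette above.
    researcher = Agent(
        role="Researcher",
        goal="Make the best research and analysis on content about AI and AI agents",
        backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups.",
        allow_delegation=False,
    )
    task = Task(
        description="Look at the available data and give me a sense on the total number of sales.",
        expected_output="The total number of sales as an integer",
        agent=researcher,
    )
    crew = Crew(agents=[researcher], tasks=[task])
    result = crew.kickoff()  # the HTTP call is served from the cassette, not live OpenAI
    assert "1500" in result.raw_output()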

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,333 @@
interactions:
- request:
body: '{"messages": [{"content": "You are Researcher. You''re an expert researcher,
specialized in technology, software engineering, AI and startups. You work as
a freelancer and is now working on doing research and analysis for a new customer.\nYour
personal goal is: Make the best research and analysis on content about AI and
AI agentsTo give my best complete final answer to the task use the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: my best complete
final answer to the task.\nYour final answer must be the great and the most
complete as possible, it must be outcome described.\n\nI MUST use these formats,
my job depends on it!\nCurrent Task: Generate a list of 5 interesting ideas
to explore for an article, where each bulletpoint is under 15 words.\n\nThis
is the expect criteria for your final answer: Bullet point list of 5 important
events. No additional commentary. \n you MUST return the actual complete content
as the final answer, not a summary.\n\nBegin! This is VERY important to you,
use the tools available and give your best Final Answer, your job depends on
it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", "n": 1, "stop": ["\nObservation"],
"stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '1237'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.34.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.34.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
now"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
great"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"-"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
The"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
impact"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
of"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
AI"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
agents"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
on"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
remote"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
work"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
productivity"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":".\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"-"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
Ethical"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
considerations"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
in"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
AI"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"-driven"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
decision"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"-making"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":".\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"-"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
How"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
AI"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
agents"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
are"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
transforming"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
customer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
service"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":".\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"-"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
The"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
role"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
of"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
AI"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
in"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
personalized"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
learning"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
experiences"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":".\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"-"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
AI"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
advancements"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
in"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
healthcare"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"
diagnostics"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 897653f3e8ba7ba2-ATL
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Fri, 21 Jun 2024 19:15:33 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=9ch02HraQXiYJx8jBtYzKXOBjm4nToP.1sBISDFt9Gc-1718997333-1.0.1.1-Ykz1rbMzc2Zo8VV5rBwixPedTuO8s_38psrpuLCSy2B.YIyCCXWMGI_JT5WGQVp2gacOcxjWMSVhOOY85gf9QQ;
path=/; expires=Fri, 21-Jun-24 19:45:33 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=0srdhmUvYEBaQ2xn7BzySIPRoIiEPWzmvngtQRdnpUY-1718997333518-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '165'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '12000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '11999712'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 1ms
x-request-id:
- 92f00e3ecc754086e0ddf2d998f6f671
status:
code: 200
message: OK
version: 1

File diff suppressed because it is too large


@@ -1,18 +1,22 @@
"""Test Agent creation and execution basic functionality."""
import json
from concurrent.futures import Future
from unittest import mock
-from unittest.mock import patch
+from unittest.mock import patch, MagicMock
import pydantic_core
import pytest
from crewai.agent import Agent
from crewai.agents.cache import CacheHandler
from crewai.conditional_task import ConditionalTask
from crewai.crew import Crew
from crewai.crews.crew_output import CrewOutput
from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.process import Process
from crewai.task import Task
from crewai.tasks.task_output import TaskOutput
from crewai.utilities import Logger, RPMController
ceo = Agent(
@@ -136,11 +140,57 @@ def test_crew_creation():
tasks=tasks,
)
-    assert (
-        crew.kickoff()
-        == "1. **The Rise of AI in Healthcare**: The convergence of AI and healthcare is a promising frontier, offering unprecedented opportunities for disease diagnosis and patient outcome prediction. AI's potential to revolutionize healthcare lies in its capacity to synthesize vast amounts of data, generating precise and efficient results. This technological breakthrough, however, is not just about improving accuracy and efficiency; it's about saving lives. As we stand on the precipice of this transformative era, we must prepare for the complex challenges and ethical questions it poses, while embracing its ability to reshape healthcare as we know it.\n\n2. **Ethical Implications of AI**: As AI intertwines with our daily lives, it presents a complex web of ethical dilemmas. This fusion of technology, philosophy, and ethics is not merely academically intriguing but profoundly impacts the fabric of our society. The questions raised range from decision-making transparency to accountability, and from privacy to potential biases. As we navigate this ethical labyrinth, it is crucial to establish robust frameworks and regulations to ensure that AI serves humanity, and not the other way around.\n\n3. **AI and Data Privacy**: The rise of AI brings with it an insatiable appetite for data, spawning new debates around privacy rights. Balancing the potential benefits of AI with the right to privacy is a unique challenge that intersects technology, law, and human rights. In an increasingly digital world, where personal information forms the backbone of many services, we must grapple with these issues. It's time to redefine the concept of privacy and devise innovative solutions that ensure our digital footprints are not abused.\n\n4. **AI in Job Market**: The discourse around AI's impact on employment is a narrative of contrast, a tale of displacement and creation. On one hand, AI threatens to automate a multitude of jobs, on the other, it promises to create new roles that we cannot yet imagine. This intersection of technology, economics, and labor rights is a critical dialogue that will shape our future. As we stand at this crossroads, we must not only brace ourselves for the changes but also seize the opportunities that this technological wave brings.\n\n5. **Future of AI Agents**: The evolution of AI agents signifies a leap towards a future where AI is not just a tool, but a partner. These sophisticated AI agents, employed in customer service to personal assistants, are redefining our interactions with technology. As we gaze into the future of AI agents, we see a landscape of possibilities and challenges. This journey will be about harnessing the potential of AI agents while navigating the issues of trust, dependence, and ethical use."
-    )
+    result = crew.kickoff()
+    expected_string_output = "1. **The Rise of AI in Healthcare**: The convergence of AI and healthcare is a promising frontier, offering unprecedented opportunities for disease diagnosis and patient outcome prediction. AI's potential to revolutionize healthcare lies in its capacity to synthesize vast amounts of data, generating precise and efficient results. This technological breakthrough, however, is not just about improving accuracy and efficiency; it's about saving lives. As we stand on the precipice of this transformative era, we must prepare for the complex challenges and ethical questions it poses, while embracing its ability to reshape healthcare as we know it.\n\n2. **Ethical Implications of AI**: As AI intertwines with our daily lives, it presents a complex web of ethical dilemmas. This fusion of technology, philosophy, and ethics is not merely academically intriguing but profoundly impacts the fabric of our society. The questions raised range from decision-making transparency to accountability, and from privacy to potential biases. As we navigate this ethical labyrinth, it is crucial to establish robust frameworks and regulations to ensure that AI serves humanity, and not the other way around.\n\n3. **AI and Data Privacy**: The rise of AI brings with it an insatiable appetite for data, spawning new debates around privacy rights. Balancing the potential benefits of AI with the right to privacy is a unique challenge that intersects technology, law, and human rights. In an increasingly digital world, where personal information forms the backbone of many services, we must grapple with these issues. It's time to redefine the concept of privacy and devise innovative solutions that ensure our digital footprints are not abused.\n\n4. **AI in Job Market**: The discourse around AI's impact on employment is a narrative of contrast, a tale of displacement and creation. On one hand, AI threatens to automate a multitude of jobs, on the other, it promises to create new roles that we cannot yet imagine. This intersection of technology, economics, and labor rights is a critical dialogue that will shape our future. As we stand at this crossroads, we must not only brace ourselves for the changes but also seize the opportunities that this technological wave brings.\n\n5. **Future of AI Agents**: The evolution of AI agents signifies a leap towards a future where AI is not just a tool, but a partner. These sophisticated AI agents, employed in customer service to personal assistants, are redefining our interactions with technology. As we gaze into the future of AI agents, we see a landscape of possibilities and challenges. This journey will be about harnessing the potential of AI agents while navigating the issues of trust, dependence, and ethical use."
+    assert str(result) == expected_string_output
+    assert result.raw_output() == expected_string_output
+    assert isinstance(result, CrewOutput)
+    assert len(result.tasks_output) == len(tasks)
+    assert result.result() == [expected_string_output]
@pytest.mark.vcr(filter_headers=["authorization"])
def test_sync_task_execution():
from unittest.mock import patch
tasks = [
Task(
description="Give me a list of 5 interesting ideas to explore for an article, what makes them unique and interesting.",
expected_output="Bullet point list of 5 important events.",
agent=researcher,
),
Task(
description="Write an amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
expected_output="A 4 paragraph article about AI.",
agent=writer,
),
]
crew = Crew(
agents=[researcher, writer],
process=Process.sequential,
tasks=tasks,
)
mock_task_output = TaskOutput(
description="Mock description", raw_output="mocked output", agent="mocked agent"
)
# Because we are mocking execute_sync, we never hit the underlying _execute_core
# which sets the output attribute of the task
for task in tasks:
task.output = mock_task_output
with patch.object(
Task, "execute_sync", return_value=mock_task_output
) as mock_execute_sync:
crew.kickoff()
# Assert that execute_sync was called for each task
assert mock_execute_sync.call_count == len(tasks)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_hierarchical_process():
@@ -158,8 +208,10 @@ def test_hierarchical_process():
tasks=[task],
)
+    result = crew.kickoff()
     assert (
-        crew.kickoff()
+        result.raw_output()
         == "1. 'Demystifying AI: An in-depth exploration of Artificial Intelligence for the layperson' - In this piece, we will unravel the enigma of AI, simplifying its complexities into digestible information for the everyday individual. By using relatable examples and analogies, we will journey through the neural networks and machine learning algorithms that define AI, without the jargon and convoluted explanations that often accompany such topics.\n\n2. 'The Role of AI in Startups: A Game Changer?' - Startups today are harnessing the power of AI to revolutionize their businesses. This article will delve into how AI, as an innovative force, is shaping the startup ecosystem, transforming everything from customer service to product development. We'll explore real-life case studies of startups that have leveraged AI to accelerate their growth and disrupt their respective industries.\n\n3. 'AI and Ethics: Navigating the Complex Landscape' - AI brings with it not just technological advancements, but ethical dilemmas as well. This article will engage readers in a thought-provoking discussion on the ethical implications of AI, exploring issues like bias in algorithms, privacy concerns, job displacement, and the moral responsibility of AI developers. We will also discuss potential solutions and frameworks to address these challenges.\n\n4. 'Unveiling the AI Agents: The Future of Customer Service' - AI agents are poised to reshape the customer service landscape, offering businesses the ability to provide round-the-clock support and personalized experiences. In this article, we'll dive deep into the world of AI agents, examining how they work, their benefits and limitations, and how they're set to redefine customer interactions in the digital age.\n\n5. 'From Science Fiction to Reality: AI in Everyday Life' - AI, once a concept limited to the realm of sci-fi, has now permeated our daily lives. This article will highlight the ubiquitous presence of AI, from voice assistants and recommendation algorithms, to autonomous vehicles and smart homes. We'll explore how AI, in its various forms, is transforming our everyday experiences, making the future seem a lot closer than we imagined."
     )
@@ -194,8 +246,10 @@ def test_crew_with_delegating_agents():
tasks=tasks,
)
+    result = crew.kickoff()
     assert (
-        crew.kickoff()
+        result.raw_output()
         == "AI Agents, simply put, are intelligent systems that can perceive their environment and take actions to reach specific goals. Imagine them as digital assistants that can learn, adapt and make decisions. They operate in the realms of software or hardware, like a chatbot on a website or a self-driving car. The key to their intelligence is their ability to learn from their experiences, making them better at their tasks over time. In today's interconnected world, AI agents are transforming our lives. They enhance customer service experiences, streamline business processes, and even predict trends in data. Vehicles equipped with AI agents are making transportation safer. In healthcare, AI agents are helping to diagnose diseases, personalizing treatment plans, and monitoring patient health. As we embrace the digital era, these AI agents are not just important, they're becoming indispensable, shaping a future where technology works intuitively and intelligently to meet our needs."
     )
@@ -384,16 +438,18 @@ def test_crew_full_ouput():
result = crew.kickoff()
-    assert result == {
-        "final_output": "Hello!",
-        "tasks_outputs": [task1.output, task2.output],
-        "usage_metrics": {
-            "total_tokens": 348,
-            "prompt_tokens": 314,
-            "completion_tokens": 34,
-            "successful_requests": 2,
-        },
-    }
+    expected_usage_metrics = {
+        "total_tokens": 348,
+        "prompt_tokens": 314,
+        "completion_tokens": 34,
+        "successful_requests": 2,
+    }
+    print(result.output)
+    assert result.token_usage == expected_usage_metrics
+    expected_final_string_output = "Hello!"
+    assert result.tasks_output == [task1.output, task2.output]
+    assert result.result() == expected_final_string_output
+    assert result.raw_output() == expected_final_string_output
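
Taken together, the rewritten assertions above outline the new CrewOutput surface that replaces the old dict/string returns. A hedged summary using only the names exercised in this diff (the class definition itself is not shown in this hunk):

result = crew.kickoff()   # now a CrewOutput rather than a bare string or dict
result.raw_output()       # the final output as a plain string
result.result()           # the final result (a string here; a list in test_crew_creation above)
result.tasks_output       # list of per-task TaskOutput objects
result.token_usage        # dict with total/prompt/completion tokens and successful_requests
str(result)               # stringifies to the raw output in these tests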
def test_agents_rpm_is_never_set_if_crew_max_RPM_is_not_set():
@@ -416,11 +472,58 @@ def test_agents_rpm_is_never_set_if_crew_max_RPM_is_not_set():
assert agent._rpm_controller is None
-def test_async_task_execution():
-    import threading
-    from unittest.mock import patch
-    from crewai.tasks.task_output import TaskOutput
+"""
+Future tests:
+TODO: 1 async task, 1 sync task. Make sure sync task waits for async to finish before starting.
+TODO: 3 async tasks, 1 sync task. Make sure sync task waits for async to finish before starting.
+TODO: 1 sync task, 1 async task. Make sure we wait for result from async before finishing crew.
+TODO: 3 async tasks, 1 sync task. Make sure context from all 3 async tasks is passed to sync task.
+TODO: 3 async tasks, 1 sync task. Pass in context from only 1 async task to sync task.
+TODO: Test pydantic output of CrewOutput and test type in CrewOutput result
+TODO: Test json output of CrewOutput and test type in CrewOutput result
+TODO: TEST THE SAME THING BUT WITH HIERARCHICAL PROCESS
+"""
@pytest.mark.vcr(filter_headers=["authorization"])
def test_sequential_async_task_execution_completion():
list_ideas = Task(
description="Give me a list of 5 interesting ideas to explore for na article, what makes them unique and interesting.",
expected_output="Bullet point list of 5 important events.",
agent=researcher,
async_execution=True,
)
list_important_history = Task(
description="Research the history of AI and give me the 5 most important events that shaped the technology.",
expected_output="Bullet point list of 5 important events.",
agent=researcher,
async_execution=True,
)
write_article = Task(
description="Write an article about the history of AI and its most important events.",
expected_output="A 4 paragraph article about AI.",
agent=writer,
context=[list_ideas, list_important_history],
)
sequential_crew = Crew(
agents=[researcher, writer],
process=Process.sequential,
tasks=[list_ideas, list_important_history, write_article],
)
sequential_result = sequential_crew.kickoff()
assert sequential_result.raw_output().startswith(
"**The Evolution of Artificial Intelligence: A Journey Through Milestones**"
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_hierarchical_async_task_execution_completion():
from langchain_openai import ChatOpenAI
list_ideas = Task(
description="Give me a list of 5 interesting ideas to explore for na article, what makes them unique and interesting.",
@@ -441,31 +544,437 @@ def test_async_task_execution():
context=[list_ideas, list_important_history],
)
hierarchical_crew = Crew(
agents=[researcher, writer],
process=Process.hierarchical,
tasks=[list_ideas, list_important_history, write_article],
manager_llm=ChatOpenAI(temperature=0, model="gpt-4"),
)
hierarchical_result = hierarchical_crew.kickoff()
assert hierarchical_result.raw_output().startswith(
"The history of artificial intelligence (AI) is a fascinating journey"
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_single_task_with_async_execution():
researcher_agent = Agent(
role="Researcher",
goal="Make the best research and analysis on content about AI and AI agents",
backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and is now working on doing research and analysis for a new customer.",
allow_delegation=False,
)
list_ideas = Task(
description="Generate a list of 5 interesting ideas to explore for an article, where each bulletpoint is under 15 words.",
expected_output="Bullet point list of 5 important events. No additional commentary.",
agent=researcher_agent,
async_execution=True,
)
crew = Crew(
agents=[researcher_agent],
process=Process.sequential,
tasks=[list_ideas],
)
result = crew.kickoff()
print(result.raw_output())
assert result.raw_output().startswith(
"- The impact of AI agents on remote work productivity."
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_three_task_with_async_execution():
researcher_agent = Agent(
role="Researcher",
goal="Make the best research and analysis on content about AI and AI agents",
backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and is now working on doing research and analysis for a new customer.",
allow_delegation=False,
)
bullet_list = Task(
description="Generate a list of 5 interesting ideas to explore for an article, where each bulletpoint is under 15 words.",
expected_output="Bullet point list of 5 important events. No additional commentary.",
agent=researcher_agent,
async_execution=True,
)
numbered_list = Task(
description="Generate a list of 5 interesting ideas to explore for an article, where each bulletpoint is under 15 words.",
expected_output="Numbered list of 5 important events. No additional commentary.",
agent=researcher_agent,
async_execution=True,
)
letter_list = Task(
description="Generate a list of 5 interesting ideas to explore for an article, where each bulletpoint is under 15 words.",
expected_output="Numbered list using [A), B), C)] list of 5 important events. No additional commentary.",
agent=researcher_agent,
async_execution=True,
)
crew = Crew(
agents=[researcher_agent],
process=Process.sequential,
tasks=[bullet_list, numbered_list, letter_list],
)
# Expected result is that we are going to concatenate the output from each async task.
# Because we add a buffer between each task, we should see a "----------" string
# after the first and second task in the final output.
result = crew.kickoff()
assert result.raw_output().count("\n\n----------\n\n") == 2
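
The assertion above leans on the crew joining consecutive async outputs with the "----------" buffer described in the comment. A tiny worked example of why three async tasks yield exactly two separators (pure illustration, not crew internals):

# Three outputs joined by the buffer produce len(outputs) - 1 == 2 separators.
outputs = ["- bullet list", "1. numbered list", "A) lettered list"]
combined = "\n\n----------\n\n".join(outputs)
assert combined.count("\n\n----------\n\n") == 2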
@pytest.mark.vcr(filter_headers=["authorization"])
@pytest.mark.asyncio
async def test_crew_async_kickoff():
inputs = [
{"topic": "dog"},
{"topic": "cat"},
{"topic": "apple"},
]
agent = Agent(
role="{topic} Researcher",
goal="Express hot takes on {topic}.",
backstory="You have a lot of experience with {topic}.",
)
task = Task(
description="Give me an analysis around {topic}.",
expected_output="1 bullet point about {topic} that's under 15 words.",
agent=agent,
)
crew = Crew(agents=[agent], tasks=[task], full_output=True)
results = await crew.kickoff_for_each_async(inputs=inputs)
assert len(results) == len(inputs)
for result in results:
# Assert that all required keys are in usage_metrics and their values are not None
for key in [
"total_tokens",
"prompt_tokens",
"completion_tokens",
"successful_requests",
]:
assert key in result.token_usage
# TODO: FIX THIS WHEN USAGE METRICS ARE RE-DONE
# assert result.token_usage[key] > 0
@pytest.mark.vcr(filter_headers=["authorization"])
def test_async_task_execution_call_count():
from unittest.mock import MagicMock, patch
list_ideas = Task(
description="Give me a list of 5 interesting ideas to explore for na article, what makes them unique and interesting.",
expected_output="Bullet point list of 5 important events.",
agent=researcher,
async_execution=True,
)
list_important_history = Task(
description="Research the history of AI and give me the 5 most important events that shaped the technology.",
expected_output="Bullet point list of 5 important events.",
agent=researcher,
async_execution=True,
)
write_article = Task(
description="Write an article about the history of AI and its most important events.",
expected_output="A 4 paragraph article about AI.",
agent=writer,
)
crew = Crew(
agents=[researcher, writer],
process=Process.sequential,
tasks=[list_ideas, list_important_history, write_article],
)
with patch.object(Agent, "execute_task") as execute:
execute.return_value = "ok"
with patch.object(threading.Thread, "start") as start:
thread = threading.Thread(target=lambda: None, args=()).start()
start.return_value = thread
with patch.object(threading.Thread, "join", wraps=thread.join()) as join:
list_ideas.output = TaskOutput(
description="A 4 paragraph article about AI.",
raw_output="ok",
agent="writer",
)
list_important_history.output = TaskOutput(
description="A 4 paragraph article about AI.",
raw_output="ok",
agent="writer",
)
crew.kickoff()
start.assert_called()
join.assert_called()
# Create a valid TaskOutput instance to mock the return value
mock_task_output = TaskOutput(
description="Mock description", raw_output="mocked output", agent="mocked agent"
)
# Create a MagicMock Future instance
mock_future = MagicMock(spec=Future)
mock_future.result.return_value = mock_task_output
# Directly set the output attribute for each task
list_ideas.output = mock_task_output
list_important_history.output = mock_task_output
write_article.output = mock_task_output
with patch.object(
Task, "execute_sync", return_value=mock_task_output
) as mock_execute_sync, patch.object(
Task, "execute_async", return_value=mock_future
) as mock_execute_async:
crew.kickoff()
assert mock_execute_async.call_count == 2
assert mock_execute_sync.call_count == 1
@pytest.mark.vcr(filter_headers=["authorization"])
def test_kickoff_for_each_single_input():
"""Tests if kickoff_for_each works with a single input."""
from unittest.mock import patch
inputs = [{"topic": "dog"}]
expected_outputs = ["Dogs are loyal companions and popular pets."]
agent = Agent(
role="{topic} Researcher",
goal="Express hot takes on {topic}.",
backstory="You have a lot of experience with {topic}.",
)
task = Task(
description="Give me an analysis around {topic}.",
expected_output="1 bullet point about {topic} that's under 15 words.",
agent=agent,
)
with patch.object(Agent, "execute_task") as mock_execute_task:
mock_execute_task.side_effect = expected_outputs
crew = Crew(agents=[agent], tasks=[task])
results = crew.kickoff_for_each(inputs=inputs)
assert len(results) == 1
print("RESULT:", results)
for result in results:
assert result == expected_outputs[0]
@pytest.mark.vcr(filter_headers=["authorization"])
def test_kickoff_for_each_multiple_inputs():
"""Tests if kickoff_for_each works with multiple inputs."""
from unittest.mock import patch
inputs = [
{"topic": "dog"},
{"topic": "cat"},
{"topic": "apple"},
]
expected_outputs = [
"Dogs are loyal companions and popular pets.",
"Cats are independent and low-maintenance pets.",
"Apples are a rich source of dietary fiber and vitamin C.",
]
agent = Agent(
role="{topic} Researcher",
goal="Express hot takes on {topic}.",
backstory="You have a lot of experience with {topic}.",
)
task = Task(
description="Give me an analysis around {topic}.",
expected_output="1 bullet point about {topic} that's under 15 words.",
agent=agent,
)
with patch.object(Agent, "execute_task") as mock_execute_task:
mock_execute_task.side_effect = expected_outputs
crew = Crew(agents=[agent], tasks=[task])
results = crew.kickoff_for_each(inputs=inputs)
assert len(results) == len(inputs)
for i, res in enumerate(results):
assert res == expected_outputs[i]
@pytest.mark.vcr(filter_headers=["authorization"])
def test_kickoff_for_each_empty_input():
"""Tests if kickoff_for_each handles an empty input list."""
agent = Agent(
role="{topic} Researcher",
goal="Express hot takes on {topic}.",
backstory="You have a lot of experience with {topic}.",
)
task = Task(
description="Give me an analysis around {topic}.",
expected_output="1 bullet point about {topic} that's under 15 words.",
agent=agent,
)
crew = Crew(agents=[agent], tasks=[task])
results = crew.kickoff_for_each(inputs=[])
assert results == []
@pytest.mark.vcr(filter_headers=["authorization"])
def test_kickoff_for_each_invalid_input():
"""Tests if kickoff_for_each raises TypeError for invalid input types."""
agent = Agent(
role="{topic} Researcher",
goal="Express hot takes on {topic}.",
backstory="You have a lot of experience with {topic}.",
)
task = Task(
description="Give me an analysis around {topic}.",
expected_output="1 bullet point about {topic} that's under 15 words.",
agent=agent,
)
crew = Crew(agents=[agent], tasks=[task])
with pytest.raises(TypeError):
# Pass a string instead of a list
crew.kickoff_for_each("invalid input")
@pytest.mark.vcr(filter_headers=["authorization"])
def test_kickoff_for_each_error_handling():
"""Tests error handling in kickoff_for_each when kickoff raises an error."""
from unittest.mock import patch
inputs = [
{"topic": "dog"},
{"topic": "cat"},
{"topic": "apple"},
]
expected_outputs = [
"Dogs are loyal companions and popular pets.",
"Cats are independent and low-maintenance pets.",
"Apples are a rich source of dietary fiber and vitamin C.",
]
agent = Agent(
role="{topic} Researcher",
goal="Express hot takes on {topic}.",
backstory="You have a lot of experience with {topic}.",
)
task = Task(
description="Give me an analysis around {topic}.",
expected_output="1 bullet point about {topic} that's under 15 words.",
agent=agent,
)
crew = Crew(agents=[agent], tasks=[task])
with patch.object(Crew, "kickoff") as mock_kickoff:
mock_kickoff.side_effect = expected_outputs[:2] + [
Exception("Simulated kickoff error")
]
with pytest.raises(Exception, match="Simulated kickoff error"):
crew.kickoff_for_each(inputs=inputs)
@pytest.mark.vcr(filter_headers=["authorization"])
@pytest.mark.asyncio
async def test_kickoff_async_basic_functionality_and_output():
"""Tests the basic functionality and output of kickoff_async."""
from unittest.mock import patch
inputs = {"topic": "dog"}
agent = Agent(
role="{topic} Researcher",
goal="Express hot takes on {topic}.",
backstory="You have a lot of experience with {topic}.",
)
task = Task(
description="Give me an analysis around {topic}.",
expected_output="1 bullet point about {topic} that's under 15 words.",
agent=agent,
)
# Create the crew
crew = Crew(
agents=[agent],
tasks=[task],
)
expected_output = "This is a sample output from kickoff."
with patch.object(Crew, "kickoff", return_value=expected_output) as mock_kickoff:
result = await crew.kickoff_async(inputs)
assert isinstance(result, str), "Result should be a string"
assert result == expected_output, "Result should match expected output"
mock_kickoff.assert_called_once_with(inputs)
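
Outside of tests, the delegation verified above implies kickoff_async can simply be awaited. A minimal usage sketch; the asyncio wrapper and the inputs value are illustrative, only kickoff_async itself appears in this diff:

import asyncio

async def main():
    # Assumes a crew wired up as in the test above.
    result = await crew.kickoff_async(inputs={"topic": "dog"})
    print(result)

asyncio.run(main())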
@pytest.mark.vcr(filter_headers=["authorization"])
@pytest.mark.asyncio
async def test_async_kickoff_for_each_async_basic_functionality_and_output():
"""Tests the basic functionality and output of akickoff_for_each_async."""
from unittest.mock import patch
inputs = [
{"topic": "dog"},
{"topic": "cat"},
{"topic": "apple"},
]
# Define expected outputs for each input
expected_outputs = [
"Dogs are loyal companions and popular pets.",
"Cats are independent and low-maintenance pets.",
"Apples are a rich source of dietary fiber and vitamin C.",
]
agent = Agent(
role="{topic} Researcher",
goal="Express hot takes on {topic}.",
backstory="You have a lot of experience with {topic}.",
)
task = Task(
description="Give me an analysis around {topic}.",
expected_output="1 bullet point about {topic} that's under 15 words.",
agent=agent,
)
with patch.object(
Crew, "kickoff_async", side_effect=expected_outputs
) as mock_kickoff_async:
crew = Crew(agents=[agent], tasks=[task])
results = await crew.kickoff_for_each_async(inputs)
assert len(results) == len(inputs)
assert results == expected_outputs
for input_data in inputs:
mock_kickoff_async.assert_any_call(inputs=input_data)
@pytest.mark.vcr(filter_headers=["authorization"])
@pytest.mark.asyncio
async def test_async_kickoff_for_each_async_empty_input():
"""Tests if akickoff_for_each_async handles an empty input list."""
agent = Agent(
role="{topic} Researcher",
goal="Express hot takes on {topic}.",
backstory="You have a lot of experience with {topic}.",
)
task = Task(
description="Give me an analysis around {topic}.",
expected_output="1 bullet point about {topic} that's under 15 words.",
agent=agent,
)
# Create the crew
crew = Crew(
agents=[agent],
tasks=[task],
)
# Call the function we are testing
results = await crew.kickoff_for_each_async([])
# Assertion
assert results == [], "Result should be an empty list when input is empty"
def test_set_agents_step_callback():
@@ -597,12 +1106,34 @@ def test_task_with_no_arguments():
crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
assert result == "75"
assert result.raw_output() == "75"
def test_code_execution_flag_adds_code_tool_upon_kickoff():
from crewai_tools import CodeInterpreterTool
programmer = Agent(
role="Programmer",
goal="Write code to solve problems.",
backstory="You're a programmer who loves to solve problems with code.",
allow_delegation=False,
allow_code_execution=True,
)
task = Task(
description="How much is 2 + 2?",
expected_output="The result of the sum as an integer.",
agent=programmer,
)
crew = Crew(agents=[programmer], tasks=[task])
crew.kickoff()
assert len(programmer.tools) == 1
assert programmer.tools[0].__class__ == CodeInterpreterTool
@pytest.mark.vcr(filter_headers=["authorization"])
def test_delegation_is_not_enabled_if_there_are_only_one_agent():
from unittest.mock import patch
researcher = Agent(
role="Researcher",
goal="Make the best research and analysis on content about AI and AI agents",
@@ -618,10 +1149,8 @@ def test_delegation_is_not_enabled_if_there_are_only_one_agent():
crew = Crew(agents=[researcher], tasks=[task])
with patch.object(Task, "execute") as execute:
execute.return_value = "ok"
        crew.kickoff()
        assert task.tools == []
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -639,7 +1168,7 @@ def test_agents_do_not_get_delegation_tools_with_there_is_only_one_agent():
result = crew.kickoff()
     assert (
-        result
+        result.raw_output()
         == "Howdy! I hope this message finds you well and brings a smile to your face. Have a fantastic day!"
     )
assert len(agent.tools) == 0
@@ -659,13 +1188,93 @@ def test_agent_usage_metrics_are_captured_for_sequential_process():
crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
assert result == "Howdy!"
assert crew.usage_metrics == {
"completion_tokens": 17,
"prompt_tokens": 158,
"successful_requests": 1,
"total_tokens": 175,
}
assert result.raw_output() == "Howdy!"
required_keys = [
"total_tokens",
"prompt_tokens",
"completion_tokens",
"successful_requests",
]
for key in required_keys:
assert key in crew.usage_metrics, f"Key '{key}' not found in usage_metrics"
assert crew.usage_metrics[key] > 0, f"Value for key '{key}' is zero"
def test_conditional_task_requirement_breaks_when_singular_conditional_task():
task = ConditionalTask(
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
expected_output="5 bullet points with a paragraph for each idea.",
)
with pytest.raises(pydantic_core._pydantic_core.ValidationError):
Crew(
agents=[researcher, writer],
tasks=[task],
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_conditional_should_not_execute():
task1 = Task(description="Return hello", expected_output="say hi", agent=researcher)
condition_mock = MagicMock(return_value=False)
task2 = ConditionalTask(
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
expected_output="5 bullet points with a paragraph for each idea.",
condition=condition_mock,
agent=writer,
)
crew_met = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
)
with patch.object(Task, "execute_sync") as mock_execute_sync:
mock_execute_sync.return_value = TaskOutput(
description="Task 1 description",
raw_output="Task 1 output",
agent="Researcher",
)
result = crew_met.kickoff()
assert mock_execute_sync.call_count == 1
assert condition_mock.call_count == 1
assert condition_mock() is False
assert task2.output is None
assert result.raw_output().startswith("Task 1 output")
@pytest.mark.vcr(filter_headers=["authorization"])
def test_conditional_should_execute():
task1 = Task(description="Return hello", expected_output="say hi", agent=researcher)
condition_mock = MagicMock(
return_value=True
) # should execute this conditional task
task2 = ConditionalTask(
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
expected_output="5 bullet points with a paragraph for each idea.",
condition=condition_mock,
agent=writer,
)
crew_met = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
)
with patch.object(Task, "execute_sync") as mock_execute_sync:
mock_execute_sync.return_value = TaskOutput(
description="Task 1 description",
raw_output="Task 1 output",
agent="Researcher",
)
crew_met.kickoff()
assert condition_mock.call_count == 1
assert condition_mock() is True
assert mock_execute_sync.call_count == 2
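Both tests pin the same contract: the condition callable is evaluated exactly once against the prior task's output, and a False result skips the conditional task without executing it. In miniature, with the gate below an assumed simplification of ConditionalTask:
from unittest.mock import MagicMock

def run_if_condition_met(condition, previous_output):
    # Assumed simplification: evaluate the gate once, skip on False.
    if not condition(previous_output):
        return None  # task never runs; its output stays None
    return "executed"

condition = MagicMock(return_value=False)
assert run_if_condition_met(condition, "Task 1 output") is None
assert condition.call_count == 1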
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -685,18 +1294,21 @@ def test_agent_usage_metrics_are_captured_for_hierarchical_process():
agents=[agent],
tasks=[task],
process=Process.hierarchical,
manager_llm=ChatOpenAI(temperature=0, model="gpt-4"),
manager_llm=ChatOpenAI(temperature=0, model="gpt-4o"),
)
result = crew.kickoff()
assert result == '"Howdy!"'
assert result.raw_output() == '"Howdy!"'
assert crew.usage_metrics == {
"total_tokens": 507,
"prompt_tokens": 224,
"completion_tokens": 283,
"successful_requests": 3,
}
required_keys = [
"total_tokens",
"prompt_tokens",
"completion_tokens",
"successful_requests",
]
for key in required_keys:
assert key in crew.usage_metrics, f"Key '{key}' not found in usage_metrics"
assert crew.usage_metrics[key] > 0, f"Value for key '{key}' is zero"
def test_crew_inputs_interpolate_both_agents_and_tasks():
@@ -752,9 +1364,81 @@ def test_crew_inputs_interpolate_both_agents_and_tasks_diff():
interpolate_task_inputs.assert_called()
def test_crew_does_not_interpolate_without_inputs():
from unittest.mock import patch
agent = Agent(
role="{topic} Researcher",
goal="Express hot takes on {topic}.",
backstory="You have a lot of experience with {topic}.",
)
task = Task(
description="Give me an analysis around {topic}.",
expected_output="{points} bullet points about {topic}.",
agent=agent,
)
crew = Crew(agents=[agent], tasks=[task])
with patch.object(Agent, "interpolate_inputs") as interpolate_agent_inputs:
with patch.object(Task, "interpolate_inputs") as interpolate_task_inputs:
crew.kickoff()
interpolate_agent_inputs.assert_not_called()
interpolate_task_inputs.assert_not_called()
# TODO: Ask @joao if we want to start throwing errors if inputs are not provided
# def test_crew_partial_inputs():
# agent = Agent(
# role="{topic} Researcher",
# goal="Express hot takes on {topic}.",
# backstory="You have a lot of experience with {topic}.",
# )
# task = Task(
# description="Give me an analysis around {topic}.",
# expected_output="{points} bullet points about {topic}.",
# )
# crew = Crew(agents=[agent], tasks=[task], inputs={"topic": "AI"})
# inputs = {"topic": "AI"}
# crew._interpolate_inputs(inputs=inputs) # Manual call for now
# assert crew.tasks[0].description == "Give me an analysis around AI."
# assert crew.tasks[0].expected_output == "{points} bullet points about AI."
# assert crew.agents[0].role == "AI Researcher"
# assert crew.agents[0].goal == "Express hot takes on AI."
# assert crew.agents[0].backstory == "You have a lot of experience with AI."
# TODO: If we do want to throw errors when inputs are missing, add in this test.
# def test_crew_invalid_inputs():
# agent = Agent(
# role="{topic} Researcher",
# goal="Express hot takes on {topic}.",
# backstory="You have a lot of experience with {topic}.",
# )
# task = Task(
# description="Give me an analysis around {topic}.",
# expected_output="{points} bullet points about {topic}.",
# )
# crew = Crew(agents=[agent], tasks=[task], inputs={"subject": "AI"})
# inputs = {"subject": "AI"}
# crew._interpolate_inputs(inputs=inputs) # Manual call for now
# assert crew.tasks[0].description == "Give me an analysis around {topic}."
# assert crew.tasks[0].expected_output == "{points} bullet points about {topic}."
# assert crew.agents[0].role == "{topic} Researcher"
# assert crew.agents[0].goal == "Express hot takes on {topic}."
# assert crew.agents[0].backstory == "You have a lot of experience with {topic}."
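One wrinkle the commented-out partial-inputs test would hit: plain str.format raises KeyError for missing keys, so filling {topic} while leaving {points} untouched needs a more forgiving formatter. A hypothetical sketch of one way to do that:
import string

class PartialFormatter(string.Formatter):
    # Hypothetical helper: substitute known keys, leave unknown
    # {placeholders} intact instead of raising KeyError.
    def get_value(self, key, args, kwargs):
        if isinstance(key, str) and key not in kwargs:
            return "{" + key + "}"
        return super().get_value(key, args, kwargs)

fmt = PartialFormatter()
out = fmt.format("{points} bullet points about {topic}.", topic="AI")
assert out == "{points} bullet points about AI."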
def test_task_callback_on_crew():
from unittest.mock import MagicMock, patch
researcher_agent = Agent(
role="Researcher",
goal="Make the best research and analysis on content about AI and AI agents",
@@ -769,17 +1453,23 @@ def test_task_callback_on_crew():
async_execution=True,
)
mock_callback = MagicMock()
crew = Crew(
agents=[researcher_agent],
process=Process.sequential,
tasks=[list_ideas],
task_callback=lambda: None,
task_callback=mock_callback,
)
with patch.object(Agent, "execute_task") as execute:
execute.return_value = "ok"
crew.kickoff()
assert list_ideas.callback is not None
mock_callback.assert_called_once()
args, _ = mock_callback.call_args
assert isinstance(args[0], TaskOutput)
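Swapping the lambda: None placeholder for a MagicMock is what lets the test assert both that the callback fired exactly once and what it received. The inspection pattern in isolation:
from unittest.mock import MagicMock

mock_callback = MagicMock()
mock_callback("finished task output")  # stands in for the crew invoking it

mock_callback.assert_called_once()
args, _ = mock_callback.call_args  # (positional args, keyword args)
assert args[0] == "finished task output"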
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -851,7 +1541,7 @@ def test_tools_with_custom_caching():
input={"first_number": 2, "second_number": 6},
output=12,
)
assert result == "3"
assert result.raw_output() == "3"
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -949,10 +1639,20 @@ def test_manager_agent():
tasks=[task],
)
with patch.object(Task, "execute") as execute:
mock_task_output = TaskOutput(
description="Mock description", raw_output="mocked output", agent="mocked agent"
)
# Because we are mocking execute_sync, we never hit the underlying _execute_core
# which sets the output attribute of the task
task.output = mock_task_output
with patch.object(
Task, "execute_sync", return_value=mock_task_output
) as mock_execute_sync:
crew.kickoff()
assert manager.allow_delegation is True
execute.assert_called()
mock_execute_sync.assert_called()
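The new comment in the test flags a real pitfall: patch.object replaces execute_sync wholesale, so side effects of the real method, such as assigning task.output, never happen and must be reproduced by hand. Demonstrated in miniature with a stand-in class:
from unittest.mock import patch

class SketchTask:
    # Stand-in for crewAI's Task, only to show the mocking pitfall.
    output = None

    def execute_sync(self):
        self.output = "real output"  # side effect of the real method
        return self.output

task = SketchTask()
with patch.object(SketchTask, "execute_sync", return_value="mocked output"):
    assert task.execute_sync() == "mocked output"
assert task.output is None  # the mock skipped the side effect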
def test_manager_agent_in_agents_raises_exception():
@@ -1111,3 +1811,8 @@ def test__setup_for_training():
for agent in agents:
assert agent.allow_delegation is False
# TODO: TEST EXPORT OUTPUT TASK WITH PYDANTIC
# TODO: TEST EXPORT OUTPUT TASK WITH JSON
# TODO: TEST EXPORT OUTPUT TASK CALLBACK
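For the export TODOs, the core of such a test would likely be validating raw task output against a schema. A hedged sketch using plain Pydantic; how crewAI wires this up through output_pydantic or output_json is left as an assumption here:
import json
from pydantic import BaseModel

class Ideas(BaseModel):
    # Illustrative schema a task's raw output could be exported into.
    ideas: list[str]

raw_output = '{"ideas": ["agents", "tools", "memory", "planning", "evals"]}'
exported = Ideas.model_validate(json.loads(raw_output))
assert len(exported.ideas) == 5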


@@ -80,7 +80,7 @@ def test_task_prompt_includes_expected_output():
with patch.object(Agent, "execute_task") as execute:
execute.return_value = "ok"
task.execute()
task.execute_sync()
execute.assert_called_once_with(task=task, context=None, tools=[])
@@ -103,7 +103,7 @@ def test_task_callback():
with patch.object(Agent, "execute_task") as execute:
execute.return_value = "ok"
task.execute()
task.execute_sync()
task_completed.assert_called_once_with(task.output)
@@ -126,7 +126,7 @@ def test_task_callback_returns_task_ouput():
with patch.object(Agent, "execute_task") as execute:
execute.return_value = "exported_ok"
task.execute()
task.execute_sync()
# Ensure the callback is called with a TaskOutput object serialized to JSON
task_completed.assert_called_once()
callback_data = task_completed.call_args[0][0]
@@ -161,7 +161,7 @@ def test_execute_with_agent():
)
with patch.object(Agent, "execute_task", return_value="ok") as execute:
task.execute(agent=researcher)
task.execute_sync(agent=researcher)
execute.assert_called_once_with(task=task, context=None, tools=[])
@@ -181,7 +181,7 @@ def test_async_execution():
)
with patch.object(Agent, "execute_task", return_value="ok") as execute:
task.execute(agent=researcher)
task.execute_async(agent=researcher)
execute.assert_called_once_with(task=task, context=None, tools=[])
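The renames make the execution mode explicit at the call site: execute_sync blocks and returns the finished output, while execute_async presumably hands back something joinable. A minimal sketch of that split; the implementation below is assumed, not crewAI's:
import threading

class SplitTask:
    def _execute_core(self):
        return "ok"

    def execute_sync(self):
        return self._execute_core()  # run inline, return the output

    def execute_async(self):
        # Run in the background and return the handle for the caller
        # to join or poll.
        thread = threading.Thread(target=self._execute_core)
        thread.start()
        return thread

t = SplitTask()
assert t.execute_sync() == "ok"
t.execute_async().join()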
@@ -525,3 +525,14 @@ def test_interpolate_inputs():
== "Give me a list of 5 interesting ideas about ML to explore for an article, what makes them unique and interesting."
)
assert task.expected_output == "Bullet point list of 5 interesting ideas about ML."
"""
TODO: TEST SYNC
- Verify return type
"""
"""
TODO: TEST ASYNC
- Verify return type
"""