Mirror of https://github.com/crewAIInc/crewAI.git (synced 2025-12-16 04:18:35 +00:00)

Compare commits: v0.60.0 ... cli_wizard (89 commits)
| SHA1 |
|---|
| 2f09652d85 |
| 0dfe3bcb0a |
| 3f81383285 |
| e8a49e7687 |
| ed48efb9aa |
| c3291b967b |
| 92e867010c |
| 5059aef574 |
| c50d62b82f |
| f46a12b3b4 |
| dd0b622826 |
| 835eb9fbea |
| 8cb10f9fcc |
| 30e26c9e35 |
| 01329a01ab |
| 0e11b33f6e |
| 5113bca025 |
| 71c5972fc7 |
| ba55160d6b |
| 24e973d792 |
| 96427c1dd2 |
| f15d5cbb64 |
| c8a5a3e32e |
| 32fdd11c93 |
| 7f830b4f43 |
| d6c57402cf |
| 42bea00184 |
| 5a6b0ff398 |
| 1b57bc0c75 |
| 96544009f5 |
| 44c8765add |
| bc31019b67 |
| ff16348d4c |
| 7310f4d85b |
| ac331504e9 |
| 6823f76ff4 |
| c3ac3219fe |
| 104ef7a0c2 |
| 2bbf8ed8a8 |
| 5dc6644ac7 |
| 9c0f97eaf7 |
| 164e7895bf |
| fb46fb9ca3 |
| effb7efc37 |
| f5098e7e45 |
| b15d632308 |
| e534efa3e9 |
| 8001314718 |
| e91ac4c5ad |
| e19bdcb97d |
| b8aa46a767 |
| ab79ee32fd |
| 8d9c49a281 |
| e659b60d8b |
| 7987bfee39 |
| b6075f1a97 |
| 9820a69443 |
| 753118687d |
| 35e234ed6e |
| 2d54b096af |
| 493f046c03 |
| 3b6d1838b4 |
| 769ab940ed |
| 498a9e6e68 |
| 699be4887c |
| 854c58ded7 |
| a19a4a5556 |
| 59e51f18fd |
| 7d981ba8ce |
| 6dad33f47c |
| 18c3925fa3 |
| 000e2666fb |
| 91ff331fec |
| e3c7c0185d |
| 405650840e |
| 1bd188e0d2 |
| 9de7aa6377 |
| d4c0a4248c |
| c4167a5517 |
| c055c35361 |
| a318a226de |
| e88cb2fea6 |
| 0ab072a95e |
| 5e8322b272 |
| 5a3b888f43 |
| d7473edb41 |
| d125c85a2b |
| b46e663778 |
| 2787c9b0ef |
.gitignore (vendored, 3 changes)

@@ -2,6 +2,7 @@
.pytest_cache
__pycache__
dist/
lib/
.env
assets/*
.idea
@@ -15,4 +16,4 @@ rc-tests/*
*.pkl
temp/*
.vscode/*
-crew_tasks_output.json
+crew_tasks_output.json
README.md (290 changes)
@@ -1,10 +1,10 @@
<div align="center">

-
+

-# **crewAI**
+# **CrewAI**

-🤖 **crewAI**: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
+🤖 **CrewAI**: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.

<h3>
@@ -44,100 +44,222 @@ To get started with CrewAI, follow these simple steps:

### 1. Installation

Ensure you have Python >=3.10 <=3.13 installed on your system. CrewAI uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.

First, if you haven't already, install Poetry:

```bash
pip install poetry
```

Then, install CrewAI:

```shell
pip install crewai
```

-If you want to install the 'crewai' package along with its optional features that include additional tools for agents, you can do so by using the following command: pip install 'crewai[tools]'. This command installs the basic package and also adds extra components which require more dependencies to function."
+If you want to install the 'crewai' package along with its optional features that include additional tools for agents, you can do so by using the following command:
+
+```shell
+pip install 'crewai[tools]'
+```
+
+The command above installs the basic package and also adds extra components which require more dependencies to function.

-### 2. Setting Up Your Crew
+### 2. Setting Up Your Crew with the YAML Configuration

To create a new CrewAI project, run the following CLI (Command Line Interface) command:

```shell
crewai create crew <project_name>
```

This command creates a new project folder with the following structure:

```
my_project/
├── .gitignore
├── pyproject.toml
├── README.md
├── .env
└── src/
    └── my_project/
        ├── __init__.py
        ├── main.py
        ├── crew.py
        ├── tools/
        │   ├── custom_tool.py
        │   └── __init__.py
        └── config/
            ├── agents.yaml
            └── tasks.yaml
```

You can now start developing your crew by editing the files in the `src/my_project` folder. The `main.py` file is the entry point of the project, the `crew.py` file is where you define your crew, the `agents.yaml` file is where you define your agents, and the `tasks.yaml` file is where you define your tasks.

#### To customize your project, you can:

- Modify `src/my_project/config/agents.yaml` to define your agents.
- Modify `src/my_project/config/tasks.yaml` to define your tasks.
- Modify `src/my_project/crew.py` to add your own logic, tools, and specific arguments.
- Modify `src/my_project/main.py` to add custom inputs for your agents and tasks.
- Add your environment variables into the `.env` file.
#### Example of a simple crew with a sequential process:

Instantiate your crew:

```shell
crewai create crew latest-ai-development
```

Modify the files as needed to fit your use case:

**agents.yaml**

```yaml
# src/my_project/config/agents.yaml
researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} data analysis and research findings
  backstory: >
    You're a meticulous analyst with a keen eye for detail. You're known for
    your ability to turn complex data into clear and concise reports, making
    it easy for others to understand and act on the information you provide.
```

**tasks.yaml**

```yaml
# src/my_project/config/tasks.yaml
research_task:
  description: >
    Conduct thorough research about {topic}.
    Make sure you find any interesting and relevant information given
    the current year is 2024.
  expected_output: >
    A list with 10 bullet points of the most relevant information about {topic}
  agent: researcher

reporting_task:
  description: >
    Review the context you got and expand each topic into a full section for a report.
    Make sure the report is detailed and contains any and all relevant information.
  expected_output: >
    A fully fledged report with the main topics, each with a full section of information.
    Formatted as markdown without '```'
  agent: reporting_analyst
  output_file: report.md
```
**crew.py**

```python
-import os
-from crewai import Agent, Task, Crew, Process
+# src/my_project/crew.py
+from crewai import Agent, Crew, Process, Task
+from crewai.project import CrewBase, agent, crew, task
+from crewai_tools import SerperDevTool

-os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
-os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
+@CrewBase
+class LatestAiDevelopmentCrew():
+    """LatestAiDevelopment crew"""

-# You can choose to use a local model through Ollama for example. See https://docs.crewai.com/how-to/LLM-Connections/ for more information.
+    @agent
+    def researcher(self) -> Agent:
+        return Agent(
+            config=self.agents_config['researcher'],
+            verbose=True,
+            tools=[SerperDevTool()]
+        )

-# os.environ["OPENAI_API_BASE"] = 'http://localhost:11434/v1'
-# os.environ["OPENAI_MODEL_NAME"] ='openhermes' # Adjust based on available model
-# os.environ["OPENAI_API_KEY"] ='sk-111111111111111111111111111111111111111111111111'
+    @agent
+    def reporting_analyst(self) -> Agent:
+        return Agent(
+            config=self.agents_config['reporting_analyst'],
+            verbose=True
+        )

-# You can pass an optional llm attribute specifying what model you wanna use.
-# It can be a local model through Ollama / LM Studio or a remote
-# model like OpenAI, Mistral, Antrophic or others (https://docs.crewai.com/how-to/LLM-Connections/)
-# If you don't specify a model, the default is OpenAI gpt-4o
-#
-# import os
-# os.environ['OPENAI_MODEL_NAME'] = 'gpt-3.5-turbo'
-#
-# OR
-#
-# from langchain_openai import ChatOpenAI
+    @task
+    def research_task(self) -> Task:
+        return Task(
+            config=self.tasks_config['research_task'],
+        )

-search_tool = SerperDevTool()
+    @task
+    def reporting_task(self) -> Task:
+        return Task(
+            config=self.tasks_config['reporting_task'],
+            output_file='report.md'
+        )

-# Define your agents with roles and goals
-researcher = Agent(
-  role='Senior Research Analyst',
-  goal='Uncover cutting-edge developments in AI and data science',
-  backstory="""You work at a leading tech think tank.
-  Your expertise lies in identifying emerging trends.
-  You have a knack for dissecting complex data and presenting actionable insights.""",
-  verbose=True,
-  allow_delegation=False,
-  # You can pass an optional llm attribute specifying what model you wanna use.
-  # llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7),
-  tools=[search_tool]
-)
-writer = Agent(
-  role='Tech Content Strategist',
-  goal='Craft compelling content on tech advancements',
-  backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
-  You transform complex concepts into compelling narratives.""",
-  verbose=True,
-  allow_delegation=True
-)

-# Create tasks for your agents
-task1 = Task(
-  description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
-  Identify key trends, breakthrough technologies, and potential industry impacts.""",
-  expected_output="Full analysis report in bullet points",
-  agent=researcher
-)

-task2 = Task(
-  description="""Using the insights provided, develop an engaging blog
-  post that highlights the most significant AI advancements.
-  Your post should be informative yet accessible, catering to a tech-savvy audience.
-  Make it sound cool, avoid complex words so it doesn't sound like AI.""",
-  expected_output="Full blog post of at least 4 paragraphs",
-  agent=writer
-)

-# Instantiate your crew with a sequential process
-crew = Crew(
-  agents=[researcher, writer],
-  tasks=[task1, task2],
-  verbose=True,
-  process = Process.sequential
-)

-# Get your crew to work!
-result = crew.kickoff()

-print("######################")
-print(result)
+    @crew
+    def crew(self) -> Crew:
+        """Creates the LatestAiDevelopment crew"""
+        return Crew(
+            agents=self.agents, # Automatically created by the @agent decorator
+            tasks=self.tasks, # Automatically created by the @task decorator
+            process=Process.sequential,
+            verbose=True,
+        )
```
**main.py**

```python
#!/usr/bin/env python
# src/my_project/main.py
import sys
from latest_ai_development.crew import LatestAiDevelopmentCrew


def run():
    """
    Run the crew.
    """
    inputs = {
        'topic': 'AI Agents'
    }
    LatestAiDevelopmentCrew().crew().kickoff(inputs=inputs)
```

### 3. Running Your Crew

Before running your crew, make sure you have the following keys set as environment variables in your `.env` file:

- An [OpenAI API key](https://platform.openai.com/account/api-keys) (or other LLM API key): `OPENAI_API_KEY=sk-...`
- A [Serper.dev](https://serper.dev/) API key: `SERPER_API_KEY=YOUR_KEY_HERE`

First, navigate to your project directory, then lock the dependencies and install them with the CLI command:

```shell
cd my_project
crewai install
```

To run your crew, execute the following command in the root of your project:

```bash
crewai run
```

or

```bash
python src/my_project/main.py
```

You should see the output in the console, and the `report.md` file should be created in the root of your project with the full final report.

In addition to the sequential process, you can use the hierarchical process, which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results. [See more about the processes here](https://docs.crewai.com/core-concepts/Processes/).
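For illustration, a minimal sketch of a hierarchical crew, assuming a `manager_llm` parameter for the auto-assigned manager (check the processes documentation for the exact options in your version):

```python
from crewai import Crew, Process

# Sketch only: a manager model coordinates planning, delegation and validation.
crew = Crew(
    agents=[...],                  # your agents
    tasks=[...],                   # your tasks
    process=Process.hierarchical,
    manager_llm="gpt-4o",          # assumed parameter: the manager agent's model
)
result = crew.kickoff()
```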
## Key Features

@@ -148,13 +270,13 @@ In addition to the sequential process, you can use the hierarchical process, whi

- **Processes Driven**: Currently supports `sequential` task execution and `hierarchical` processes; more complex processes like consensual and autonomous are being worked on.
- **Save output as file**: Save the output of individual tasks as a file, so you can use it later.
- **Parse output as Pydantic or JSON**: Parse the output of individual tasks as a Pydantic model or as JSON if you want to (see the sketch below).
-- **Works with Open Source Models**: Run your crew using Open AI or open source models refer to the [Connect crewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models, even ones running locally!
+- **Works with Open Source Models**: Run your crew using OpenAI or open-source models; refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models, even ones running locally!


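A minimal sketch of the file and Pydantic/JSON output features listed above, assuming the `output_pydantic` and `output_file` task parameters (verify the exact names in your CrewAI version):

```python
from crewai import Agent, Task
from pydantic import BaseModel

class ResearchFindings(BaseModel):
    topic: str
    bullet_points: list[str]

researcher = Agent(
    role="Researcher",
    goal="Summarize the latest AI developments",
    backstory="A meticulous analyst.",
)

research_task = Task(
    description="Summarize the latest AI developments as bullet points.",
    expected_output="Structured findings",
    agent=researcher,
    output_pydantic=ResearchFindings,  # parse the task result into this model
    output_file="findings.json",       # and save the raw output to a file
)
```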
## Examples

-You can test different real life examples of AI crews in the [crewAI-examples repo](https://github.com/crewAIInc/crewAI-examples?tab=readme-ov-file):
+You can test different real-life examples of AI crews in the [CrewAI-examples repo](https://github.com/crewAIInc/crewAI-examples?tab=readme-ov-file):

- [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/landing_page_generator)
- [Having Human input on the execution](https://docs.crewai.com/how-to/Human-Input-on-Execution)

@@ -185,9 +307,9 @@ You can test different real life examples of AI crews in the [crewAI-examples re

## Connecting Your Crew to a Model

-crewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
+CrewAI supports using various LLMs through a variety of connection options. By default, your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.

-Please refer to the [Connect crewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring you agents' connections to models.
+Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models.
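For example, a local Ollama model can be wired in roughly like this (a sketch based on the LLM configuration shown in `docs/core-concepts/LLMs.md` later in this compare; adjust the model name and URL to your setup):

```python
from crewai import Agent, LLM

local_llm = LLM(
    model="ollama/llama3.1",            # model served by a local Ollama instance
    base_url="http://localhost:11434",  # default Ollama endpoint
)
agent = Agent(
    role="Researcher",
    goal="Answer questions using a locally hosted model",
    backstory="Runs entirely on local infrastructure.",
    llm=local_llm,
)
```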
## How CrewAI Compares

@@ -258,7 +380,7 @@ It's pivotal to understand that **NO data is collected** concerning prompts, tas

Data collected includes:

-- Version of crewAI
+- Version of CrewAI
  - So we can understand how many users are using the latest version
- Version of Python
  - So we can decide on what versions to better support

@@ -283,7 +405,7 @@ Users can opt-in to Further Telemetry, sharing the complete telemetry data by se

## License

-CrewAI is released under the MIT License.
+CrewAI is released under the [MIT License](https://github.com/crewAIInc/crewAI/blob/main/LICENSE).

## Frequently Asked Questions (FAQ)

@@ -316,7 +438,7 @@ A: Yes, CrewAI is open-source and welcomes contributions from the community.

A: CrewAI uses anonymous telemetry to collect usage data for improvement purposes. No sensitive data (like prompts, task descriptions, or API calls) is collected. Users can opt-in to share more detailed data by setting `share_crew=True` on their Crews.

### Q: Where can I find examples of CrewAI in action?
-A: You can find various real-life examples in the [crewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.
+A: You can find various real-life examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.

### Q: How can I contribute to CrewAI?
A: Contributions are welcome! You can fork the repository, create a new branch for your feature, add your improvement, and send a pull request. Check the Contribution section in the README for more details.
@@ -36,7 +36,6 @@ description: What are crewAI Agents and how to use them.

| **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. |
| **Allow Code Execution** *(optional)* | `allow_code_execution` | Enable code execution for the agent. Default is `False`. |
| **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`. |
| **Use Stop Words** *(optional)* | `use_stop_words` | Adds the ability to not use stop words (to support o1 models). Default is `True`. |
| **Use System Prompt** *(optional)* | `use_system_prompt` | Adds the ability to not use system prompt (to support o1 models). Default is `True`. |
| **Respect Context Window** *(optional)* | `respect_context_window` | Summary strategy to avoid overflowing the context window. Default is `True`. |

@@ -79,7 +78,6 @@ agent = Agent(
    callbacks=[callback1, callback2], # Optional
    allow_code_execution=True, # Optional
    max_retry_limit=2, # Optional
    use_stop_words=True, # Optional
    use_system_prompt=True, # Optional
    respect_context_window=True, # Optional
)
@@ -85,20 +85,20 @@
crewai replay -t task_123456
```

-### 5. log_tasks_outputs
+### 5. log-tasks-outputs

Retrieve your latest crew.kickoff() task outputs.

```
-crewai log_tasks_outputs
+crewai log-tasks-outputs
```

-### 6. reset_memories
+### 6. reset-memories

Reset the crew memories (long, short, entity, latest_crew_kickoff_outputs).

```
-crewai reset_memories [OPTIONS]
+crewai reset-memories [OPTIONS]
```

- `-l, --long`: Reset LONG TERM memory

@@ -109,8 +109,8 @@ crewai reset_memories [OPTIONS]

Example:
```
-crewai reset_memories --long --short
-crewai reset_memories --all
+crewai reset-memories --long --short
+crewai reset-memories --all
```

### 7. test
docs/core-concepts/Flows.md (new file, 627 lines)
@@ -0,0 +1,627 @@
# CrewAI Flows

## Introduction

CrewAI Flows is a powerful feature designed to streamline the creation and management of AI workflows. Flows allow developers to combine and coordinate coding tasks and Crews efficiently, providing a robust framework for building sophisticated AI automations.

Flows allow you to create structured, event-driven workflows. They provide a seamless way to connect multiple tasks, manage state, and control the flow of execution in your AI applications. With Flows, you can easily design and implement multi-step processes that leverage the full potential of CrewAI's capabilities. Flows provide the following key benefits:

1. **Simplified Workflow Creation**: Easily chain together multiple Crews and tasks to create complex AI workflows.

2. **State Management**: Flows make it super easy to manage and share state between different tasks in your workflow.

3. **Event-Driven Architecture**: Built on an event-driven model, allowing for dynamic and responsive workflows.

4. **Flexible Control Flow**: Implement conditional logic, loops, and branching within your workflows.

## Getting Started

Let's create a simple Flow where you will use OpenAI to generate a random city in one task and then use that city to generate a fun fact in another task.

```python
import asyncio

from crewai.flow.flow import Flow, listen, start
from litellm import completion


class ExampleFlow(Flow):
    model = "gpt-4o-mini"

    @start()
    def generate_city(self):
        print("Starting flow")

        response = completion(
            model=self.model,
            messages=[
                {
                    "role": "user",
                    "content": "Return the name of a random city in the world.",
                },
            ],
        )

        random_city = response["choices"][0]["message"]["content"]
        print(f"Random City: {random_city}")

        return random_city

    @listen(generate_city)
    def generate_fun_fact(self, random_city):
        response = completion(
            model=self.model,
            messages=[
                {
                    "role": "user",
                    "content": f"Tell me a fun fact about {random_city}",
                },
            ],
        )

        fun_fact = response["choices"][0]["message"]["content"]
        return fun_fact


async def main():
    flow = ExampleFlow()
    result = await flow.kickoff()

    print(f"Generated fun fact: {result}")


asyncio.run(main())
```

In the above example, we have created a simple Flow that generates a random city using OpenAI and then generates a fun fact about that city. The Flow consists of two tasks: `generate_city` and `generate_fun_fact`. The `generate_city` task is the starting point of the Flow, and the `generate_fun_fact` task listens for the output of the `generate_city` task.

When you run the Flow, it will generate a random city and then generate a fun fact about that city. The output will be printed to the console.

### @start()

The `@start()` decorator is used to mark a method as the starting point of a Flow. When a Flow is started, all the methods decorated with `@start()` are executed in parallel. You can have multiple start methods in a Flow, and they will all be executed when the Flow is started.
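For example, a sketch of a Flow with two start methods that both run when the Flow kicks off:

```python
from crewai.flow.flow import Flow, start

class MultiStartFlow(Flow):
    # Both methods below are entry points; they run in parallel on kickoff.
    @start()
    def fetch_weather(self):
        return "sunny"

    @start()
    def fetch_news(self):
        return "AI is everywhere"
```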
### @listen()

The `@listen()` decorator is used to mark a method as a listener for the output of another task in the Flow. The method decorated with `@listen()` will be executed when the specified task emits an output. The method can access the output of the task it is listening to as an argument.

#### Usage

The `@listen()` decorator can be used in several ways:

1. **Listening to a Method by Name**: You can pass the name of the method you want to listen to as a string. When that method completes, the listener method will be triggered.

   ```python
   @listen("generate_city")
   def generate_fun_fact(self, random_city):
       # Implementation
   ```

2. **Listening to a Method Directly**: You can pass the method itself. When that method completes, the listener method will be triggered.

   ```python
   @listen(generate_city)
   def generate_fun_fact(self, random_city):
       # Implementation
   ```
### Flow Output

Accessing and handling the output of a Flow is essential for integrating your AI workflows into larger applications or systems. CrewAI Flows provide straightforward mechanisms to retrieve the final output, access intermediate results, and manage the overall state of your Flow.

#### Retrieving the Final Output

When you run a Flow, the final output is determined by the last method that completes. The `kickoff()` method returns the output of this final method.

Here's how you can access the final output:

```python
import asyncio
from crewai.flow.flow import Flow, listen, start

class OutputExampleFlow(Flow):
    @start()
    def first_method(self):
        return "Output from first_method"

    @listen(first_method)
    def second_method(self, first_output):
        return f"Second method received: {first_output}"


async def main():
    flow = OutputExampleFlow()
    final_output = await flow.kickoff()
    print("---- Final Output ----")
    print(final_output)


asyncio.run(main())
```

In this example, the `second_method` is the last method to complete, so its output will be the final output of the Flow. The `kickoff()` method will return this final output, which is then printed to the console.

The output of the Flow will be:

```
---- Final Output ----
Second method received: Output from first_method
```

#### Accessing and Updating State

In addition to retrieving the final output, you can also access and update the state within your Flow. The state can be used to store and share data between different methods in the Flow. After the Flow has run, you can access the state to retrieve any information that was added or updated during the execution.

Here's an example of how to update and access the state:

```python
import asyncio
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel

class ExampleState(BaseModel):
    counter: int = 0
    message: str = ""

class StateExampleFlow(Flow[ExampleState]):

    @start()
    def first_method(self):
        self.state.message = "Hello from first_method"
        self.state.counter += 1

    @listen(first_method)
    def second_method(self):
        self.state.message += " - updated by second_method"
        self.state.counter += 1
        return self.state.message


async def main():
    flow = StateExampleFlow()
    final_output = await flow.kickoff()
    print(f"Final Output: {final_output}")
    print("Final State:")
    print(flow.state)


asyncio.run(main())
```

In this example, the state is updated by both `first_method` and `second_method`. After the Flow has run, you can access the final state to see the updates made by these methods.

The output of the Flow will be:

```
Final Output: Hello from first_method - updated by second_method
Final State:
counter=2 message='Hello from first_method - updated by second_method'
```

By ensuring that the final method's output is returned and providing access to the state, CrewAI Flows make it easy to integrate the results of your AI workflows into larger applications or systems, while also maintaining and accessing the state throughout the Flow's execution.
## Flow State Management

Managing state effectively is crucial for building reliable and maintainable AI workflows. CrewAI Flows provides robust mechanisms for both unstructured and structured state management, allowing developers to choose the approach that best fits their application's needs.

### Unstructured State Management

In unstructured state management, all state is stored in the `state` attribute of the `Flow` class. This approach offers flexibility, enabling developers to add or modify state attributes on the fly without defining a strict schema.

```python
import asyncio

from crewai.flow.flow import Flow, listen, start

class UnstructuredExampleFlow(Flow):

    @start()
    def first_method(self):
        self.state.message = "Hello from unstructured flow"
        self.state.counter = 0

    @listen(first_method)
    def second_method(self):
        self.state.counter += 1
        self.state.message += " - updated"

    @listen(second_method)
    def third_method(self):
        self.state.counter += 1
        self.state.message += " - updated again"

        print(f"State after third_method: {self.state}")


async def main():
    flow = UnstructuredExampleFlow()
    await flow.kickoff()


asyncio.run(main())
```

**Key Points:**

- **Flexibility:** You can dynamically add attributes to `self.state` without predefined constraints.
- **Simplicity:** Ideal for straightforward workflows where state structure is minimal or varies significantly.

### Structured State Management

Structured state management leverages predefined schemas to ensure consistency and type safety across the workflow. By using models like Pydantic's `BaseModel`, developers can define the exact shape of the state, enabling better validation and auto-completion in development environments.

```python
import asyncio

from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel


class ExampleState(BaseModel):
    counter: int = 0
    message: str = ""


class StructuredExampleFlow(Flow[ExampleState]):

    @start()
    def first_method(self):
        self.state.message = "Hello from structured flow"

    @listen(first_method)
    def second_method(self):
        self.state.counter += 1
        self.state.message += " - updated"

    @listen(second_method)
    def third_method(self):
        self.state.counter += 1
        self.state.message += " - updated again"

        print(f"State after third_method: {self.state}")


async def main():
    flow = StructuredExampleFlow()
    await flow.kickoff()


asyncio.run(main())
```

**Key Points:**

- **Defined Schema:** `ExampleState` clearly outlines the state structure, enhancing code readability and maintainability.
- **Type Safety:** Leveraging Pydantic ensures that state attributes adhere to the specified types, reducing runtime errors.
- **Auto-Completion:** IDEs can provide better auto-completion and error checking based on the defined state model.

### Choosing Between Unstructured and Structured State Management

- **Use Unstructured State Management when:**

  - The workflow's state is simple or highly dynamic.
  - Flexibility is prioritized over strict state definitions.
  - Rapid prototyping is required without the overhead of defining schemas.

- **Use Structured State Management when:**

  - The workflow requires a well-defined and consistent state structure.
  - Type safety and validation are important for your application's reliability.
  - You want to leverage IDE features like auto-completion and type checking for better developer experience.

By providing both unstructured and structured state management options, CrewAI Flows empowers developers to build AI workflows that are both flexible and robust, catering to a wide range of application requirements.
## Flow Control

### Conditional Logic

#### or

The `or_` function in Flows allows you to listen to multiple methods and trigger the listener method when any of the specified methods emit an output.

```python
import asyncio
from crewai.flow.flow import Flow, listen, or_, start

class OrExampleFlow(Flow):

    @start()
    def start_method(self):
        return "Hello from the start method"

    @listen(start_method)
    def second_method(self):
        return "Hello from the second method"

    @listen(or_(start_method, second_method))
    def logger(self, result):
        print(f"Logger: {result}")


async def main():
    flow = OrExampleFlow()
    await flow.kickoff()


asyncio.run(main())
```

When you run this Flow, the `logger` method will be triggered by the output of either the `start_method` or the `second_method`. The `or_` function is used to listen to multiple methods and trigger the listener method when any of the specified methods emit an output.

The output of the Flow will be:

```
Logger: Hello from the start method
Logger: Hello from the second method
```

#### and

The `and_` function in Flows allows you to listen to multiple methods and trigger the listener method only when all the specified methods emit an output.

```python
import asyncio
from crewai.flow.flow import Flow, and_, listen, start

class AndExampleFlow(Flow):

    @start()
    def start_method(self):
        self.state["greeting"] = "Hello from the start method"

    @listen(start_method)
    def second_method(self):
        self.state["joke"] = "What do computers eat? Microchips."

    @listen(and_(start_method, second_method))
    def logger(self):
        print("---- Logger ----")
        print(self.state)


async def main():
    flow = AndExampleFlow()
    await flow.kickoff()


asyncio.run(main())
```

When you run this Flow, the `logger` method will be triggered only when both the `start_method` and the `second_method` emit an output. The `and_` function is used to listen to multiple methods and trigger the listener method only when all the specified methods emit an output.

The output of the Flow will be:

```
---- Logger ----
{'greeting': 'Hello from the start method', 'joke': 'What do computers eat? Microchips.'}
```

### Router

The `@router()` decorator in Flows allows you to define conditional routing logic based on the output of a method. You can specify different routes based on the output of the method, allowing you to control the flow of execution dynamically.

```python
import asyncio
import random

from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel

class ExampleState(BaseModel):
    success_flag: bool = False

class RouterFlow(Flow[ExampleState]):

    @start()
    def start_method(self):
        print("Starting the structured flow")
        random_boolean = random.choice([True, False])
        self.state.success_flag = random_boolean

    @router(start_method)
    def second_method(self):
        if self.state.success_flag:
            return "success"
        else:
            return "failed"

    @listen("success")
    def third_method(self):
        print("Third method running")

    @listen("failed")
    def fourth_method(self):
        print("Fourth method running")


async def main():
    flow = RouterFlow()
    await flow.kickoff()


asyncio.run(main())
```

In the above example, the `start_method` generates a random boolean value and sets it in the state. The `second_method` uses the `@router()` decorator to define conditional routing logic based on the value of the boolean. If the boolean is `True`, the method returns `"success"`, and if it is `False`, the method returns `"failed"`. The `third_method` and `fourth_method` listen to the output of the `second_method` and execute based on the returned value.

When you run this Flow, the output will change based on the random boolean value generated by the `start_method`, but you should see an output similar to the following:

```
Starting the structured flow
Third method running
```
## Adding Crews to Flows

Creating a flow with multiple crews in CrewAI is straightforward. You can generate a new CrewAI project that includes all the scaffolding needed to create a flow with multiple crews by running the following command:

```bash
crewai create flow name_of_flow
```

This command will generate a new CrewAI project with the necessary folder structure. The generated project includes a prebuilt crew called `poem_crew` that is already working. You can use this crew as a template by copying, pasting, and editing it to create other crews.

### Folder Structure

After running the `crewai create flow name_of_flow` command, you will see a folder structure similar to the following:

```
name_of_flow/
├── crews/
│   └── poem_crew/
│       ├── config/
│       │   ├── agents.yaml
│       │   └── tasks.yaml
│       └── poem_crew.py
├── tools/
│   └── custom_tool.py
├── main.py
├── README.md
├── pyproject.toml
└── .gitignore
```

### Building Your Crews

In the `crews` folder, you can define multiple crews. Each crew will have its own folder containing configuration files and the crew definition file. For example, the `poem_crew` folder contains:

- `config/agents.yaml`: Defines the agents for the crew.
- `config/tasks.yaml`: Defines the tasks for the crew.
- `poem_crew.py`: Contains the crew definition, including agents, tasks, and the crew itself.

You can copy, paste, and edit the `poem_crew` to create other crews.

### Connecting Crews in `main.py`

The `main.py` file is where you create your flow and connect the crews together. You can define your flow by using the `Flow` class and the decorators `@start` and `@listen` to specify the flow of execution.

Here's an example of how you can connect the `poem_crew` in the `main.py` file:

```python
#!/usr/bin/env python
import asyncio
from random import randint

from pydantic import BaseModel
from crewai.flow.flow import Flow, listen, start
from .crews.poem_crew.poem_crew import PoemCrew

class PoemState(BaseModel):
    sentence_count: int = 1
    poem: str = ""

class PoemFlow(Flow[PoemState]):

    @start()
    def generate_sentence_count(self):
        print("Generating sentence count")
        # Generate a number between 1 and 5
        self.state.sentence_count = randint(1, 5)

    @listen(generate_sentence_count)
    def generate_poem(self):
        print("Generating poem")
        poem_crew = PoemCrew().crew()
        result = poem_crew.kickoff(inputs={"sentence_count": self.state.sentence_count})

        print("Poem generated", result.raw)
        self.state.poem = result.raw

    @listen(generate_poem)
    def save_poem(self):
        print("Saving poem")
        with open("poem.txt", "w") as f:
            f.write(self.state.poem)

async def run():
    """
    Run the flow.
    """
    poem_flow = PoemFlow()
    await poem_flow.kickoff()

def main():
    asyncio.run(run())

if __name__ == "__main__":
    main()
```

In this example, the `PoemFlow` class defines a flow that generates a sentence count, uses the `PoemCrew` to generate a poem, and then saves the poem to a file. The flow is kicked off by calling the `kickoff()` method.

### Running the Flow

Before running the flow, make sure to install the dependencies by running:

```bash
poetry install
```

Once all of the dependencies are installed, you need to activate the virtual environment by running:

```bash
poetry shell
```

After activating the virtual environment, you can run the flow by executing one of the following commands:

```bash
crewai flow run
```

or

```bash
poetry run run_flow
```

The flow will execute, and you should see the output in the console.
## Plot Flows

Visualizing your AI workflows can provide valuable insights into the structure and execution paths of your flows. CrewAI offers a powerful visualization tool that allows you to generate interactive plots of your flows, making it easier to understand and optimize your AI workflows.

### What are Plots?

Plots in CrewAI are graphical representations of your AI workflows. They display the various tasks, their connections, and the flow of data between them. This visualization helps in understanding the sequence of operations, identifying bottlenecks, and ensuring that the workflow logic aligns with your expectations.

### How to Generate a Plot

CrewAI provides two convenient methods to generate plots of your flows:

#### Option 1: Using the `plot()` Method

If you are working directly with a flow instance, you can generate a plot by calling the `plot()` method on your flow object. This method will create an HTML file containing the interactive plot of your flow.

```python
# Assuming you have a flow instance
flow.plot("my_flow_plot")
```

This will generate a file named `my_flow_plot.html` in your current directory. You can open this file in a web browser to view the interactive plot.

#### Option 2: Using the Command Line

If you are working within a structured CrewAI project, you can generate a plot using the command line. This is particularly useful for larger projects where you want to visualize the entire flow setup.

```bash
crewai flow plot
```

This command will generate an HTML file with the plot of your flow, similar to the `plot()` method. The file will be saved in your project directory, and you can open it in a web browser to explore the flow.

### Understanding the Plot

The generated plot will display nodes representing the tasks in your flow, with directed edges indicating the flow of execution. The plot is interactive, allowing you to zoom in and out, and hover over nodes to see additional details.

By visualizing your flows, you can gain a clearer understanding of the workflow's structure, making it easier to debug, optimize, and communicate your AI processes to others.

### Conclusion

Plotting your flows is a powerful feature of CrewAI that enhances your ability to design and manage complex AI workflows. Whether you choose to use the `plot()` method or the command line, generating plots will provide you with a visual representation of your workflows, aiding in both development and presentation.
## Next Steps

If you're interested in exploring additional examples of flows, we have a variety of recommendations in our examples repository. Here are four specific flow examples, each showcasing unique use cases to help you match your current problem type to a specific example:

1. **Email Auto Responder Flow**: This example demonstrates an infinite loop where a background job continually runs to automate email responses. It's a great use case for tasks that need to be performed repeatedly without manual intervention. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/email_auto_responder_flow)

2. **Lead Score Flow**: This flow showcases adding human-in-the-loop feedback and handling different conditional branches using the router. It's an excellent example of how to incorporate dynamic decision-making and human oversight into your workflows. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/lead-score-flow)

3. **Write a Book Flow**: This example excels at chaining multiple crews together, where the output of one crew is used by another. Specifically, one crew outlines an entire book, and another crew generates chapters based on the outline. Eventually, everything is connected to produce a complete book. This flow is perfect for complex, multi-step processes that require coordination between different tasks. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/write_a_book_with_flows)

4. **Meeting Assistant Flow**: This flow demonstrates how to broadcast one event to trigger multiple follow-up actions. For instance, after a meeting is completed, the flow can update a Trello board, send a Slack message, and save the results. It's a great example of handling multiple outcomes from a single event, making it ideal for comprehensive task management and notification systems. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/meeting_assistant_flow)

By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.
docs/core-concepts/LLMs.md (new file, 155 lines)
@@ -0,0 +1,155 @@
# Large Language Models (LLMs) in crewAI

## Introduction
Large Language Models (LLMs) are the backbone of intelligent agents in the crewAI framework. This guide will help you understand, configure, and optimize LLM usage for your crewAI projects.

## Table of Contents
- [Key Concepts](#key-concepts)
- [Configuring LLMs for Agents](#configuring-llms-for-agents)
  - [1. Default Configuration](#1-default-configuration)
  - [2. String Identifier](#2-string-identifier)
  - [3. LLM Instance](#3-llm-instance)
  - [4. Custom LLM Objects](#4-custom-llm-objects)
- [Connecting to OpenAI-Compatible LLMs](#connecting-to-openai-compatible-llms)
- [LLM Configuration Options](#llm-configuration-options)
- [Using Ollama (Local LLMs)](#using-ollama-local-llms)
- [Changing the Base API URL](#changing-the-base-api-url)
- [Best Practices](#best-practices)
- [Troubleshooting](#troubleshooting)

## Key Concepts
- **LLM**: Large Language Model, the AI powering agent intelligence
- **Agent**: A crewAI entity that uses an LLM to perform tasks
- **Provider**: A service that offers LLM capabilities (e.g., OpenAI, Anthropic, Ollama, [more providers](https://docs.litellm.ai/docs/providers))

## Configuring LLMs for Agents

crewAI offers flexible options for setting up LLMs:

### 1. Default Configuration
By default, crewAI uses the `gpt-4o-mini` model. It uses environment variables if no LLM is specified:
- `OPENAI_MODEL_NAME` (defaults to "gpt-4o-mini" if not set)
- `OPENAI_API_BASE`
- `OPENAI_API_KEY`
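For example, the default model can be overridden purely through the environment; a minimal sketch:

```python
import os

# No LLM is passed to the Agent, so these variables drive the default setup.
os.environ["OPENAI_MODEL_NAME"] = "gpt-4o-mini"
os.environ["OPENAI_API_KEY"] = "your-api-key"
```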
### 2. String Identifier
```python
agent = Agent(llm="gpt-4o", ...)
```

### 3. LLM Instance
List of [more providers](https://docs.litellm.ai/docs/providers).
```python
from crewai import LLM

llm = LLM(model="gpt-4", temperature=0.7)
agent = Agent(llm=llm, ...)
```

### 4. Custom LLM Objects
Pass a custom LLM implementation or object from another library.
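For instance, agents have historically accepted model objects from LangChain (the old README example in this same compare used `ChatOpenAI`); a sketch assuming `langchain_openai` is installed:

```python
from langchain_openai import ChatOpenAI

# A model object from another library, passed straight to the agent.
custom_llm = ChatOpenAI(model="gpt-4o", temperature=0.2)
agent = Agent(llm=custom_llm, ...)
```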
## Connecting to OpenAI-Compatible LLMs

You can connect to OpenAI-compatible LLMs using either environment variables or by setting specific attributes on the LLM class:

1. Using environment variables:
```python
import os

os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
```

2. Using LLM class attributes:
```python
llm = LLM(
    model="custom-model-name",
    api_key="your-api-key",
    base_url="https://api.your-provider.com/v1"
)
agent = Agent(llm=llm, ...)
```

## LLM Configuration Options

When configuring an LLM for your agent, you have access to a wide range of parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| `model` | str | The name of the model to use (e.g., "gpt-4", "gpt-3.5-turbo", "ollama/llama3.1", [more providers](https://docs.litellm.ai/docs/providers)) |
| `timeout` | float, int | Maximum time (in seconds) to wait for a response |
| `temperature` | float | Controls randomness in output (0.0 to 1.0) |
| `top_p` | float | Controls diversity of output (0.0 to 1.0) |
| `n` | int | Number of completions to generate |
| `stop` | str, List[str] | Sequence(s) to stop generation |
| `max_tokens` | int | Maximum number of tokens to generate |
| `presence_penalty` | float | Penalizes new tokens based on their presence in the text so far |
| `frequency_penalty` | float | Penalizes new tokens based on their frequency in the text so far |
| `logit_bias` | Dict[int, float] | Modifies likelihood of specified tokens appearing in the completion |
| `response_format` | Dict[str, Any] | Specifies the format of the response (e.g., {"type": "json_object"}) |
| `seed` | int | Sets a random seed for deterministic results |
| `logprobs` | bool | Whether to return log probabilities of the output tokens |
| `top_logprobs` | int | Number of most likely tokens to return the log probabilities for |
| `base_url` | str | The base URL for the API endpoint |
| `api_version` | str | The version of the API to use |
| `api_key` | str | Your API key for authentication |

Example:
```python
llm = LLM(
    model="gpt-4",
    temperature=0.8,
    max_tokens=150,
    top_p=0.9,
    frequency_penalty=0.1,
    presence_penalty=0.1,
    stop=["END"],
    seed=42,
    base_url="https://api.openai.com/v1",
    api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```

## Using Ollama (Local LLMs)

crewAI supports using Ollama for running open-source models locally:

1. Install Ollama: [ollama.ai](https://ollama.ai/)
2. Run a model: `ollama run llama3.1`
3. Configure the agent:

```python
agent = Agent(
    llm=LLM(model="ollama/llama3.1", base_url="http://localhost:11434"),
    ...
)
```

## Changing the Base API URL

You can change the base API URL for any LLM provider by setting the `base_url` parameter:

```python
llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key"
)
agent = Agent(llm=llm, ...)
```

This is particularly useful when working with OpenAI-compatible APIs or when you need to specify a different endpoint for your chosen provider.

## Best Practices
1. **Choose the right model**: Balance capability and cost.
2. **Optimize prompts**: Clear, concise instructions improve output.
3. **Manage tokens**: Monitor and limit token usage for efficiency.
4. **Use appropriate temperature**: Lower for factual tasks, higher for creative ones.
5. **Implement error handling**: Gracefully manage API errors and rate limits (see the sketch below).
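A minimal sketch of the error-handling practice above, assuming transient provider errors surface as Python exceptions (narrow the `except` clause to your provider's error types):

```python
import time

def kickoff_with_retries(crew, inputs, max_attempts=3):
    """Retry a crew kickoff with exponential backoff on transient failures."""
    for attempt in range(1, max_attempts + 1):
        try:
            return crew.kickoff(inputs=inputs)
        except Exception:  # replace with your provider's rate-limit/API errors
            if attempt == max_attempts:
                raise
            time.sleep(2 ** attempt)  # back off before the next attempt
```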
## Troubleshooting
- **API Errors**: Check your API key, network connection, and rate limits.
- **Unexpected Outputs**: Refine your prompts and adjust temperature or top_p.
- **Performance Issues**: Consider using a more powerful model or optimizing your queries.
- **Timeout Errors**: Increase the `timeout` parameter or optimize your input.
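For example, raising the timeout for long completions:

```python
from crewai import LLM

# Give slow or long-running completions more time before failing.
llm = LLM(model="gpt-4", timeout=120)  # seconds
```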
@@ -28,7 +28,7 @@ description: Leveraging memory systems in the crewAI framework to enhance agent

## Implementing Memory in Your Crew

When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of tasks it will perform.
-By default, the memory system is disabled, and you can ensure it is active by setting `memory=True` in the crew configuration. The memory will use OpenAI embeddings by default, but you can change it by setting `embedder` to a different model.
+By default, the memory system is disabled, and you can ensure it is active by setting `memory=True` in the crew configuration. The memory will use OpenAI embeddings by default, but you can change it by setting `embedder` to a different model. It's also possible to initialize the memory components with your own instances.

The 'embedder' only applies to **Short-Term Memory**, which uses Chroma for RAG via the EmbedChain package.
The **Long-Term Memory** uses SQLite3 to store task results. Currently, there is no way to override these storage implementations.
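For illustration, swapping the default embedder could look roughly like this (a sketch assuming the embedchain-style `provider`/`config` keys used elsewhere in the crewAI docs; adjust to your provider):

```python
from crewai import Crew

my_crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True,
    # Assumed embedchain-style configuration; keys may differ by version.
    embedder={
        "provider": "ollama",
        "config": {"model": "mxbai-embed-large"},
    },
)
```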
@@ -50,6 +50,45 @@ my_crew = Crew(
)
```

### Example: Use Custom Memory Instances, e.g. FAISS as the VectorDB

```python
from crewai import Crew, Agent, Task, Process

# EnhanceLongTermMemory, EnhanceShortTermMemory, EnhanceEntityMemory and
# CustomRAGStorage are user-defined custom classes, not shipped with crewAI;
# `embedder` is assumed to be a dict like {"model": ..., "dimension": ...}.

# Assemble your crew with memory capabilities
my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    long_term_memory=EnhanceLongTermMemory(
        storage=LTMSQLiteStorage(
            db_path="/my_data_dir/my_crew1/long_term_memory_storage.db"
        )
    ),
    short_term_memory=EnhanceShortTermMemory(
        storage=CustomRAGStorage(
            crew_name="my_crew",
            storage_type="short_term",
            data_dir="//my_data_dir",
            model=embedder["model"],
            dimension=embedder["dimension"],
        ),
    ),
    entity_memory=EnhanceEntityMemory(
        storage=CustomRAGStorage(
            crew_name="my_crew",
            storage_type="entities",
            data_dir="//my_data_dir",
            model=embedder["model"],
            dimension=embedder["dimension"],
        ),
    ),
    verbose=True,
)
```
## Additional Embedding Providers

### Using OpenAI embeddings (already default)

@@ -169,7 +208,7 @@ my_crew = Crew(

### Resetting Memory
```sh
-crewai reset_memories [OPTIONS]
+crewai reset-memories [OPTIONS]
```

#### Resetting Memory Options
@@ -248,7 +248,7 @@ main_pipeline = Pipeline(stages=[classification_crew, email_router])

inputs = [{"email": "..."}, {"email": "..."}]  # List of email data

main_pipeline.kickoff(inputs=inputs=inputs)
main_pipeline.kickoff(inputs=inputs)
```

In this example, the router decides between an urgent pipeline and a normal pipeline based on the urgency score of the email. If the urgency score is greater than 7, it routes to the urgent pipeline; otherwise, it uses the normal pipeline. If the input doesn't include an urgency score, it defaults to just the classification crew. The sketch below restates this rule in plain Python.
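The routing rule, restated (names are illustrative, not the actual `Router` API):

```python
def choose_route(email_data: dict) -> str:
    """Illustrative predicate mirroring the routing description above."""
    urgency = email_data.get("urgency_score")
    if urgency is None:
        return "classification_only"  # no score: run just the classification crew
    return "urgent" if urgency > 7 else "normal"
```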
@@ -265,4 +265,4 @@ In this example, the router decides between an urgent pipeline and a normal pipe

The `Pipeline` class includes validation mechanisms to ensure the robustness of the pipeline structure:

- Validates that stages contain only Crew instances or lists of Crew instances.
- Prevents double nesting of stages to maintain a clear structure.
- Prevents double nesting of stages to maintain a clear structure.
@@ -1,4 +1,3 @@
```markdown
---
title: crewAI Tasks
description: Detailed guide on managing and creating tasks within the crewAI framework, reflecting the latest codebase updates.
@@ -314,4 +313,4 @@ save_output_task = Task(

## Conclusion

Tasks are the driving force behind the actions of agents in crewAI. By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit. Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential, ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.
Tasks are the driving force behind the actions of agents in crewAI. By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit. Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential, ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.

Binary file not shown. (Before: 94 KiB, After: 14 KiB)
Binary file not shown. (Before: 97 KiB, After: 14 KiB)
@@ -176,7 +176,7 @@ This will install the dependencies specified in the `pyproject.toml` file.

Any variable interpolated in your `agents.yaml` and `tasks.yaml` files like `{variable}` will be replaced by the value of the variable in the `main.py` file.

#### agents.yaml
#### tasks.yaml

```yaml
research_task:

@@ -20,7 +20,6 @@ Crafting an efficient CrewAI team hinges on the ability to dynamically tailor yo
- **System Template** *(Optional)*: `system_template` defines the system format for the agent.
- **Prompt Template** *(Optional)*: `prompt_template` defines the prompt format for the agent.
- **Response Template** *(Optional)*: `response_template` defines the response format for the agent.
- **Use Stop Words** *(Optional)*: `use_stop_words` attribute controls whether the agent will use stop words during task execution. This is now supported to aid o1 models.
- **Use System Prompt** *(Optional)*: `use_system_prompt` controls whether the agent will use a system prompt for task execution. Agents can now operate without system prompts.
- **Respect Context Window**: `respect_context_window` renames the sliding context window attribute and enables it by default to maintain context size.
- **Max Retry Limit**: `max_retry_limit` defines the maximum number of retries for an agent to execute a task when an error occurs.

@@ -46,7 +46,6 @@ researcher = Agent(
    verbose=False,
    # tools=[]  # This can be optionally specified; defaults to an empty list
    use_system_prompt=True,  # Enable or disable system prompts for this agent
    use_stop_words=True,  # Enable or disable stop words for this agent
    max_rpm=30,  # Limit on the number of requests per minute
    max_iter=5  # Maximum number of iterations for a final answer
)
@@ -58,7 +57,6 @@ writer = Agent(
    verbose=False,
    # tools=[]  # Optionally specify tools; defaults to an empty list
    use_system_prompt=True,  # Enable or disable system prompts for this agent
    use_stop_words=True,  # Enable or disable stop words for this agent
    max_rpm=30,  # Limit on the number of requests per minute
    max_iter=5  # Maximum number of iterations for a final answer
)
@@ -5,10 +5,10 @@ description: Comprehensive guide on integrating CrewAI with various Large Langua

## Connect CrewAI to LLMs

CrewAI now uses LiteLLM to connect to a wide variety of Language Models (LLMs). This integration provides extensive versatility, allowing you to use models from numerous providers with a simple, unified interface.
CrewAI uses LiteLLM to connect to a wide variety of Language Models (LLMs). This integration provides extensive versatility, allowing you to use models from numerous providers with a simple, unified interface.

!!! note "Default LLM"
    By default, CrewAI uses OpenAI's GPT-4 model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4") for language processing. You can easily configure your agents to use a different model or provider as described in this guide.
    By default, CrewAI uses the `gpt-4o-mini` model. This is determined by the `OPENAI_MODEL_NAME` environment variable, which defaults to "gpt-4o-mini" if not set. You can easily configure your agents to use a different model or provider as described in this guide.

## Supported Providers
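The default can also be pinned explicitly before any agents are built; this is plain `os.environ`, nothing CrewAI-specific is assumed:

```python
import os

# Matches the documented default; set before constructing any Agent.
os.environ.setdefault("OPENAI_MODEL_NAME", "gpt-4o-mini")
```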
@@ -35,7 +35,11 @@ For a complete and up-to-date list of supported providers, please refer to the [

## Changing the LLM

To use a different LLM with your CrewAI agents, you simply need to pass the model name as a string when initializing the agent. Here are some examples:
To use a different LLM with your CrewAI agents, you have several options:

### 1. Using a String Identifier

Pass the model name as a string when initializing the agent:

```python
from crewai import Agent
@@ -55,59 +59,105 @@ claude_agent = Agent(
    backstory="An AI assistant leveraging Anthropic's language model.",
    llm='claude-2'
)
```

# Using Ollama's local Llama 2 model
ollama_agent = Agent(
    role='Local AI Expert',
    goal='Process information using a local model',
    backstory="An AI assistant running on local hardware.",
    llm='ollama/llama2'

### 2. Using the LLM Class

For more detailed configuration, use the LLM class:

```python
from crewai import Agent, LLM

llm = LLM(
    model="gpt-4",
    temperature=0.7,
    base_url="https://api.openai.com/v1",
    api_key="your-api-key-here"
)

# Using Google's Gemini model
gemini_agent = Agent(
    role='Google AI Expert',
    goal='Generate creative content with Gemini',
    backstory="An AI assistant powered by Google's advanced language model.",
    llm='gemini-pro'
agent = Agent(
    role='Customized LLM Expert',
    goal='Provide tailored responses',
    backstory="An AI assistant with custom LLM settings.",
    llm=llm
)
```

## Configuration
## Configuration Options

For most providers, you'll need to set up your API keys as environment variables. Here's how you can do it for some common providers:
When configuring an LLM for your agent, you have access to a wide range of parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| `model` | str | The name of the model to use (e.g., "gpt-4", "claude-2") |
| `temperature` | float | Controls randomness in output (0.0 to 1.0) |
| `max_tokens` | int | Maximum number of tokens to generate |
| `top_p` | float | Controls diversity of output (0.0 to 1.0) |
| `frequency_penalty` | float | Penalizes new tokens based on their frequency in the text so far |
| `presence_penalty` | float | Penalizes new tokens based on their presence in the text so far |
| `stop` | str, List[str] | Sequence(s) to stop generation |
| `base_url` | str | The base URL for the API endpoint |
| `api_key` | str | Your API key for authentication |

For a complete list of parameters and their descriptions, refer to the LLM class documentation.
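Drawing only on the parameters tabled above, a sketch that combines several of them (values are illustrative):

```python
from crewai import LLM

llm = LLM(
    model="gpt-4",
    temperature=0.2,        # low temperature suits factual tasks
    max_tokens=1024,
    top_p=0.9,
    stop=["Observation:"],  # stop sequence(s), per the table above
)
```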
## Connecting to OpenAI-Compatible LLMs

You can connect to OpenAI-compatible LLMs using either environment variables or by setting specific attributes on the LLM class:

### Using Environment Variables

```python
import os

# OpenAI
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"

# Anthropic
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-api-key"

# Google (Vertex AI)
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/your/credentials.json"

# Azure OpenAI
os.environ["AZURE_API_KEY"] = "your-azure-api-key"
os.environ["AZURE_API_BASE"] = "your-azure-endpoint"

# AWS (Bedrock)
os.environ["AWS_ACCESS_KEY_ID"] = "your-aws-access-key-id"
os.environ["AWS_SECRET_ACCESS_KEY"] = "your-aws-secret-access-key"
os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
os.environ["OPENAI_MODEL_NAME"] = "your-model-name"
```
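Since `python-dotenv` is already a CrewAI dependency (see the pyproject.toml hunk later in this changeset), the same variables can come from a `.env` file; a minimal sketch:

```python
from dotenv import load_dotenv

# Reads OPENAI_API_KEY, OPENAI_API_BASE, etc. from a local .env file
# into os.environ before CrewAI reads them.
load_dotenv()
```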
For providers that require additional configuration or have specific setup requirements, please refer to the [LiteLLM documentation](https://docs.litellm.ai/docs/) for detailed instructions.
### Using LLM Class Attributes

## Using Local Models
```python
llm = LLM(
    model="custom-model-name",
    api_key="your-api-key",
    base_url="https://api.your-provider.com/v1"
)
agent = Agent(llm=llm, ...)
```

For local models like those provided by Ollama, ensure you have the necessary software installed and running. For example, to use Ollama:
## Using Local Models with Ollama

For local models like those provided by Ollama:

1. [Download and install Ollama](https://ollama.com/download)
2. Pull the desired model (e.g., `ollama pull llama2`)
3. Use the model in your CrewAI agent by specifying `llm='ollama/llama2'`
3. Configure your agent:

```python
agent = Agent(
    role='Local AI Expert',
    goal='Process information using a local model',
    backstory="An AI assistant running on local hardware.",
    llm=LLM(model="ollama/llama2", base_url="http://localhost:11434")
)
```

## Changing the Base API URL

You can change the base API URL for any LLM provider by setting the `base_url` parameter:

```python
llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key"
)
agent = Agent(llm=llm, ...)
```

This is particularly useful when working with OpenAI-compatible APIs or when you need to specify a different endpoint for your chosen provider.

## Conclusion

By leveraging LiteLLM, CrewAI now offers seamless integration with a vast array of LLMs. This flexibility allows you to choose the most suitable model for your specific needs, whether you prioritize performance, cost-efficiency, or local deployment. Remember to consult the [LiteLLM documentation](https://docs.litellm.ai/docs/) for the most up-to-date information on supported models and configuration options.
By leveraging LiteLLM, CrewAI offers seamless integration with a vast array of LLMs. This flexibility allows you to choose the most suitable model for your specific needs, whether you prioritize performance, cost-efficiency, or local deployment. Remember to consult the [LiteLLM documentation](https://docs.litellm.ai/docs/) for the most up-to-date information on supported models and configuration options.
@@ -53,6 +53,16 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
          Crews
        </a>
      </li>
      <li>
        <a href="./core-concepts/LLMs">
          LLMs
        </a>
      </li>
      <!-- <li>
        <a href="./core-concepts/Flows">
          Flows
        </a>
      </li> -->
      <li>
        <a href="./core-concepts/Pipeline">
          Pipeline
@@ -80,7 +90,7 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
      </li>
    </ul>
  </div>
  <div style="width:30%">
  <div style="width:25%">
    <h2>How-To Guides</h2>
    <ul>
      <li>
@@ -155,7 +165,7 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
      </li>
    </ul>
  </div>
  <div style="width:30%">
  <!-- <div style="width:25%">
    <h2>Examples</h2>
    <ul>
      <li>
@@ -193,6 +203,26 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
          Landing Page Generator
        </a>
      </li>
      <li>
        <a target='_blank' href="https://github.com/crewAIInc/crewAI-examples/tree/main/email_auto_responder_flow">
          Email Auto Responder Flow
        </a>
      </li>
      <li>
        <a target='_blank' href="https://github.com/crewAIInc/crewAI-examples/tree/main/lead-score-flow">
          Lead Score Flow
        </a>
      </li>
      <li>
        <a target='_blank' href="https://github.com/crewAIInc/crewAI-examples/tree/main/write_a_book_with_flows">
          Write a Book Flow
        </a>
      </li>
      <li>
        <a target='_blank' href="https://github.com/crewAIInc/crewAI-examples/tree/main/meeting_assistant_flow">
          Meeting Assistant Flow
        </a>
      </li>
    </ul>
  </div>
</div> -->
</div>
mkdocs.yml (14 changes):

@@ -78,14 +78,14 @@ theme:

  palette:
    - scheme: default
      primary: red
      accent: red
      primary: deep orange
      accent: deep orange
      toggle:
        icon: material/brightness-7
        name: Switch to dark mode
    - scheme: slate
      primary: red
      accent: red
      primary: deep orange
      accent: deep orange
      toggle:
        icon: material/brightness-4
        name: Switch to light mode
@@ -162,7 +162,7 @@ nav:
    - Directory RAG Search: 'tools/DirectorySearchTool.md'
    - Directory Read: 'tools/DirectoryReadTool.md'
    - Docx Rag Search: 'tools/DOCXSearchTool.md'
    - EXA Serch Web Loader: 'tools/EXASearchTool.md'
    - EXA Search Web Loader: 'tools/EXASearchTool.md'
    - File Read: 'tools/FileReadTool.md'
    - File Write: 'tools/FileWriteTool.md'
    - Firecrawl Crawl Website Tool: 'tools/FirecrawlCrawlWebsiteTool.md'
@@ -210,6 +210,6 @@ extra:
      property: G-N3Q505TMQ6
  social:
    - icon: fontawesome/brands/twitter
      link: https://twitter.com/joaomdmoura
      link: https://x.com/crewAIInc
    - icon: fontawesome/brands/github
      link: https://github.com/joaomdmoura/crewAI
      link: https://github.com/crewAIInc/crewAI
poetry.lock (2244 changes, generated): file diff suppressed because it is too large.
@@ -1,6 +1,6 @@
[tool.poetry]
name = "crewai"
version = "0.60.0"
version = "0.67.1"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
authors = ["Joao Moura <joao@crewai.com>"]
readme = "README.md"
@@ -14,14 +14,14 @@ Repository = "https://github.com/crewAIInc/crewAI"
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
pydantic = "^2.4.2"
langchain = ">0.2,<=0.3"
langchain = "^0.2.16"
openai = "^1.13.3"
opentelemetry-api = "^1.22.0"
opentelemetry-sdk = "^1.22.0"
opentelemetry-exporter-otlp-proto-http = "^1.22.0"
instructor = "1.3.3"
regex = "^2024.7.24"
crewai-tools = { version = "^0.12.0", optional = true }
regex = "^2024.9.11"
crewai-tools = { version = "^0.12.1", optional = true }
click = "^8.1.7"
python-dotenv = "^1.0.0"
appdirs = "^1.4.4"
@@ -32,6 +32,7 @@ json-repair = "^0.25.2"
auth0-python = "^4.7.1"
poetry = "^1.8.3"
litellm = "^1.44.22"
pyvis = "^0.3.2"

[tool.poetry.extras]
tools = ["crewai-tools"]
@@ -49,13 +50,14 @@ mkdocs-material = { extras = ["imaging"], version = "^9.5.7" }
mkdocs-material-extensions = "^1.3.1"
pillow = "^10.2.0"
cairosvg = "^2.7.1"
crewai-tools = "^0.12.0"
crewai-tools = "^0.12.1"

[tool.poetry.group.test.dependencies]
pytest = "^8.0.0"
pytest-vcr = "^1.0.2"
python-dotenv = "1.0.0"
pytest-asyncio = "^0.23.7"
pytest-subprocess = "^1.5.2"

[tool.poetry.scripts]
crewai = "crewai.cli.cli:crewai"
@@ -1,12 +1,14 @@
import warnings

from crewai.agent import Agent
from crewai.crew import Crew
from crewai.flow.flow import Flow
from crewai.llm import LLM
from crewai.pipeline import Pipeline
from crewai.process import Process
from crewai.routers import Router
from crewai.task import Task


warnings.filterwarnings(
    "ignore",
    message="Pydantic serializer warnings:",
@@ -14,4 +16,4 @@ warnings.filterwarnings(
    module="pydantic.main",
)

__all__ = ["Agent", "Crew", "Process", "Task", "Pipeline", "Router"]
__all__ = ["Agent", "Crew", "Process", "Task", "Pipeline", "Router", "LLM", "Flow"]
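With `LLM` and `Flow` now exported from the package root (per the `__all__` change above), top-level imports work directly:

```python
from crewai import Agent, Crew, Flow, LLM, Pipeline, Process, Router, Task
```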
@@ -1,6 +1,6 @@
|
||||
import os
|
||||
from inspect import signature
|
||||
from typing import Any, List, Optional
|
||||
from typing import Any, List, Optional, Union
|
||||
from pydantic import Field, InstanceOf, PrivateAttr, model_validator
|
||||
|
||||
from crewai.agents import CacheHandler
|
||||
@@ -12,6 +12,7 @@ from crewai.memory.contextual.contextual_memory import ContextualMemory
|
||||
from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE
|
||||
from crewai.utilities.training_handler import CrewTrainingHandler
|
||||
from crewai.utilities.token_counter_callback import TokenCalcHandler
|
||||
from crewai.llm import LLM
|
||||
|
||||
|
||||
def mock_agent_ops_provider():
|
||||
@@ -73,16 +74,12 @@ class Agent(BaseAgent):
|
||||
default=None,
|
||||
description="Callback to be executed after each step of the agent execution.",
|
||||
)
|
||||
use_stop_words: bool = Field(
|
||||
default=True,
|
||||
description="Use stop words for the agent.",
|
||||
)
|
||||
use_system_prompt: Optional[bool] = Field(
|
||||
default=True,
|
||||
description="Use system prompt for the agent.",
|
||||
)
|
||||
llm: Any = Field(
|
||||
description="Language model that will run the agent.", default="gpt-4o"
|
||||
llm: Union[str, InstanceOf[LLM], Any] = Field(
|
||||
description="Language model that will run the agent.", default=None
|
||||
)
|
||||
function_calling_llm: Optional[Any] = Field(
|
||||
description="Language model that will run the agent.", default=None
|
||||
@@ -107,7 +104,7 @@ class Agent(BaseAgent):
|
||||
description="Keep messages under the context window size by summarizing content.",
|
||||
)
|
||||
max_iter: int = Field(
|
||||
default=15,
|
||||
default=20,
|
||||
description="Maximum number of iterations for an agent to execute a task before giving it's best answer",
|
||||
)
|
||||
max_retry_limit: int = Field(
|
||||
@@ -118,12 +115,60 @@ class Agent(BaseAgent):
|
||||
@model_validator(mode="after")
|
||||
def post_init_setup(self):
|
||||
self.agent_ops_agent_name = self.role
|
||||
self.llm = self.llm.model_name if hasattr(self.llm, "model_name") else self.llm
|
||||
self.function_calling_llm = (
|
||||
self.function_calling_llm.model_name
|
||||
if hasattr(self.function_calling_llm, "model_name")
|
||||
else self.function_calling_llm
|
||||
)
|
||||
|
||||
# Handle different cases for self.llm
|
||||
if isinstance(self.llm, str):
|
||||
# If it's a string, create an LLM instance
|
||||
self.llm = LLM(model=self.llm)
|
||||
elif isinstance(self.llm, LLM):
|
||||
# If it's already an LLM instance, keep it as is
|
||||
pass
|
||||
elif self.llm is None:
|
||||
# If it's None, use environment variables or default
|
||||
model_name = os.environ.get("OPENAI_MODEL_NAME", "gpt-4o-mini")
|
||||
llm_params = {"model": model_name}
|
||||
|
||||
api_base = os.environ.get("OPENAI_API_BASE") or os.environ.get(
|
||||
"OPENAI_BASE_URL"
|
||||
)
|
||||
if api_base:
|
||||
llm_params["base_url"] = api_base
|
||||
|
||||
api_key = os.environ.get("OPENAI_API_KEY")
|
||||
if api_key:
|
||||
llm_params["api_key"] = api_key
|
||||
|
||||
self.llm = LLM(**llm_params)
|
||||
else:
|
||||
# For any other type, attempt to extract relevant attributes
|
||||
llm_params = {
|
||||
"model": getattr(self.llm, "model_name", None)
|
||||
or getattr(self.llm, "deployment_name", None)
|
||||
or str(self.llm),
|
||||
"temperature": getattr(self.llm, "temperature", None),
|
||||
"max_tokens": getattr(self.llm, "max_tokens", None),
|
||||
"logprobs": getattr(self.llm, "logprobs", None),
|
||||
"timeout": getattr(self.llm, "timeout", None),
|
||||
"max_retries": getattr(self.llm, "max_retries", None),
|
||||
"api_key": getattr(self.llm, "api_key", None),
|
||||
"base_url": getattr(self.llm, "base_url", None),
|
||||
"organization": getattr(self.llm, "organization", None),
|
||||
}
|
||||
# Remove None values to avoid passing unnecessary parameters
|
||||
llm_params = {k: v for k, v in llm_params.items() if v is not None}
|
||||
self.llm = LLM(**llm_params)
|
||||
|
||||
# Similar handling for function_calling_llm
|
||||
if self.function_calling_llm:
|
||||
if isinstance(self.function_calling_llm, str):
|
||||
self.function_calling_llm = LLM(model=self.function_calling_llm)
|
||||
elif not isinstance(self.function_calling_llm, LLM):
|
||||
self.function_calling_llm = LLM(
|
||||
model=getattr(self.function_calling_llm, "model_name", None)
|
||||
or getattr(self.function_calling_llm, "deployment_name", None)
|
||||
or str(self.function_calling_llm)
|
||||
)
|
||||
|
||||
if not self.agent_executor:
|
||||
self._setup_agent_executor()
|
||||
|
||||
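The branching in `post_init_setup` above means callers can hand `Agent` a model string, an `LLM` instance, or nothing at all; a sketch of the three equivalent entry points (role/goal/backstory values are placeholders):

```python
from crewai import Agent, LLM

# 1. String identifier: wrapped as LLM(model="gpt-4o-mini")
a1 = Agent(role="r", goal="g", backstory="b", llm="gpt-4o-mini")

# 2. Explicit LLM instance: kept as-is
a2 = Agent(role="r", goal="g", backstory="b", llm=LLM(model="gpt-4"))

# 3. None: falls back to OPENAI_MODEL_NAME / OPENAI_API_BASE / OPENAI_API_KEY
a3 = Agent(role="r", goal="g", backstory="b")
```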
@@ -242,7 +287,6 @@ class Agent(BaseAgent):
|
||||
stop_words=stop_words,
|
||||
max_iter=self.max_iter,
|
||||
tools_handler=self.tools_handler,
|
||||
use_stop_words=self.use_stop_words,
|
||||
tools_names=self.__tools_names(parsed_tools),
|
||||
tools_description=self._render_text_description_and_args(parsed_tools),
|
||||
step_callback=self.step_callback,
|
||||
@@ -300,8 +344,9 @@ class Agent(BaseAgent):
|
||||
human_feedbacks = [
|
||||
i["human_feedback"] for i in data.get(agent_id, {}).values()
|
||||
]
|
||||
task_prompt += "You MUST follow these feedbacks: \n " + "\n - ".join(
|
||||
human_feedbacks
|
||||
task_prompt += (
|
||||
"\n\nYou MUST follow these instructions: \n "
|
||||
+ "\n - ".join(human_feedbacks)
|
||||
)
|
||||
|
||||
return task_prompt
|
||||
@@ -310,8 +355,9 @@ class Agent(BaseAgent):
|
||||
"""Use trained data for the agent task prompt to improve output."""
|
||||
if data := CrewTrainingHandler(TRAINED_AGENTS_DATA_FILE).load():
|
||||
if trained_data_output := data.get(self.role):
|
||||
task_prompt += "You MUST follow these feedbacks: \n " + "\n - ".join(
|
||||
trained_data_output["suggestions"]
|
||||
task_prompt += (
|
||||
"\n\nYou MUST follow these instructions: \n - "
|
||||
+ "\n - ".join(trained_data_output["suggestions"])
|
||||
)
|
||||
return task_prompt
|
||||
|
||||
|
||||
@@ -176,7 +176,11 @@ class BaseAgent(ABC, BaseModel):
|
||||
|
||||
@property
|
||||
def key(self):
|
||||
source = [self.role, self.goal, self.backstory]
|
||||
source = [
|
||||
self._original_role or self.role,
|
||||
self._original_goal or self.goal,
|
||||
self._original_backstory or self.backstory,
|
||||
]
|
||||
return md5("|".join(source).encode(), usedforsecurity=False).hexdigest()
|
||||
|
||||
@abstractmethod
|
||||
|
||||
@@ -6,6 +6,7 @@ from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
|
||||
from crewai.utilities.converter import ConverterError
|
||||
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
|
||||
from crewai.utilities import I18N
|
||||
from crewai.utilities.printer import Printer
|
||||
|
||||
|
||||
if TYPE_CHECKING:
|
||||
@@ -22,6 +23,7 @@ class CrewAgentExecutorMixin:
|
||||
have_forced_answer: bool
|
||||
max_iter: int
|
||||
_i18n: I18N
|
||||
_printer: Printer = Printer()
|
||||
|
||||
def _should_force_answer(self) -> bool:
|
||||
"""Determine if a forced answer is required based on iteration count."""
|
||||
@@ -100,6 +102,12 @@ class CrewAgentExecutorMixin:
|
||||
|
||||
def _ask_human_input(self, final_answer: dict) -> str:
|
||||
"""Prompt human input for final decision making."""
|
||||
return input(
|
||||
self._i18n.slice("getting_input").format(final_answer=final_answer)
|
||||
self._printer.print(
|
||||
content=f"\033[1m\033[95m ## Final Result:\033[00m \033[92m{final_answer}\033[00m"
|
||||
)
|
||||
|
||||
self._printer.print(
|
||||
content="\n\n=====\n## Please provide feedback on the Final Result and the Agent's actions:",
|
||||
color="bold_yellow",
|
||||
)
|
||||
return input()
|
||||
|
||||
@@ -39,9 +39,3 @@ class OutputConverter(BaseModel, ABC):
|
||||
def to_json(self, current_attempt=1):
|
||||
"""Convert text to json."""
|
||||
pass
|
||||
|
||||
@property
|
||||
@abstractmethod
|
||||
def is_gpt(self) -> bool:
|
||||
"""Return if llm provided is of gpt from openai."""
|
||||
pass
|
||||
|
||||
@@ -13,7 +13,6 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
|
||||
)
|
||||
from crewai.utilities.logger import Logger
|
||||
from crewai.utilities.training_handler import CrewTrainingHandler
|
||||
from crewai.llm import LLM
|
||||
from crewai.agents.parser import (
|
||||
AgentAction,
|
||||
AgentFinish,
|
||||
@@ -35,7 +34,6 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
max_iter: int,
|
||||
tools: List[Any],
|
||||
tools_names: str,
|
||||
use_stop_words: bool,
|
||||
stop_words: List[str],
|
||||
tools_description: str,
|
||||
tools_handler: ToolsHandler,
|
||||
@@ -61,7 +59,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
self.tools_handler = tools_handler
|
||||
self.original_tools = original_tools
|
||||
self.step_callback = step_callback
|
||||
self.use_stop_words = use_stop_words
|
||||
self.use_stop_words = self.llm.supports_stop_words()
|
||||
self.tools_description = tools_description
|
||||
self.function_calling_llm = function_calling_llm
|
||||
self.respect_context_window = respect_context_window
|
||||
@@ -69,8 +67,13 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
self.ask_for_human_input = False
|
||||
self.messages: List[Dict[str, str]] = []
|
||||
self.iterations = 0
|
||||
self.log_error_after = 3
|
||||
self.have_forced_answer = False
|
||||
self.name_to_tool_map = {tool.name: tool for tool in self.tools}
|
||||
if self.llm.stop:
|
||||
self.llm.stop = list(set(self.llm.stop + self.stop))
|
||||
else:
|
||||
self.llm.stop = self.stop
|
||||
|
||||
def invoke(self, inputs: Dict[str, str]) -> Dict[str, Any]:
|
||||
if "system" in self.prompt:
|
||||
@@ -98,17 +101,19 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
self.messages.append(self._format_msg(f"Feedback: {human_feedback}"))
|
||||
formatted_answer = self._invoke_loop()
|
||||
|
||||
if self.crew and self.crew._train:
|
||||
self._handle_crew_training_output(formatted_answer)
|
||||
|
||||
return {"output": formatted_answer.output}
|
||||
|
||||
def _invoke_loop(self, formatted_answer=None):
|
||||
try:
|
||||
while not isinstance(formatted_answer, AgentFinish):
|
||||
if not self.request_within_rpm_limit or self.request_within_rpm_limit():
|
||||
answer = LLM(
|
||||
self.llm,
|
||||
stop=self.stop if self.use_stop_words else None,
|
||||
answer = self.llm.call(
|
||||
self.messages,
|
||||
callbacks=self.callbacks,
|
||||
).call(self.messages)
|
||||
)
|
||||
|
||||
if not self.use_stop_words:
|
||||
try:
|
||||
@@ -146,10 +151,16 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
)
|
||||
self.have_forced_answer = True
|
||||
self.messages.append(
|
||||
self._format_msg(formatted_answer.text, role="assistant")
|
||||
self._format_msg(formatted_answer.text, role="user")
|
||||
)
|
||||
|
||||
except OutputParserException as e:
|
||||
self.messages.append({"role": "assistant", "content": e.error})
|
||||
self.messages.append({"role": "user", "content": e.error})
|
||||
if self.iterations > self.log_error_after:
|
||||
self._printer.print(
|
||||
content=f"Error parsing LLM output, agent will retry: {e.error}",
|
||||
color="red",
|
||||
)
|
||||
return self._invoke_loop(formatted_answer)
|
||||
|
||||
except Exception as e:
|
||||
@@ -168,8 +179,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
if self.agent.verbose or (
|
||||
hasattr(self, "crew") and getattr(self.crew, "verbose", False)
|
||||
):
|
||||
agent_role = self.agent.role.split("\n")[0]
|
||||
self._printer.print(
|
||||
content=f"\033[1m\033[95m# Agent:\033[00m \033[1m\033[92m{self.agent.role}\033[00m"
|
||||
content=f"\033[1m\033[95m# Agent:\033[00m \033[1m\033[92m{agent_role}\033[00m"
|
||||
)
|
||||
self._printer.print(
|
||||
content=f"\033[95m## Task:\033[00m \033[92m{self.task.description}\033[00m"
|
||||
@@ -179,15 +191,16 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
if self.agent.verbose or (
|
||||
hasattr(self, "crew") and getattr(self.crew, "verbose", False)
|
||||
):
|
||||
agent_role = self.agent.role.split("\n")[0]
|
||||
if isinstance(formatted_answer, AgentAction):
|
||||
thought = re.sub(r"\n+", "\n", formatted_answer.thought)
|
||||
formatted_json = json.dumps(
|
||||
json.loads(formatted_answer.tool_input),
|
||||
formatted_answer.tool_input,
|
||||
indent=2,
|
||||
ensure_ascii=False,
|
||||
)
|
||||
self._printer.print(
|
||||
content=f"\n\n\033[1m\033[95m# Agent:\033[00m \033[1m\033[92m{self.agent.role}\033[00m"
|
||||
content=f"\n\n\033[1m\033[95m# Agent:\033[00m \033[1m\033[92m{agent_role}\033[00m"
|
||||
)
|
||||
if thought and thought != "":
|
||||
self._printer.print(
|
||||
@@ -204,10 +217,10 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
)
|
||||
elif isinstance(formatted_answer, AgentFinish):
|
||||
self._printer.print(
|
||||
content=f"\n\n\033[1m\033[95m# Agent:\033[00m \033[1m\033[92m{self.agent.role}\033[00m"
|
||||
content=f"\n\n\033[1m\033[95m# Agent:\033[00m \033[1m\033[92m{agent_role}\033[00m"
|
||||
)
|
||||
self._printer.print(
|
||||
content=f"\033[95m## Final Answer:\033[00m \033[92m\n{formatted_answer.output}\033[00m"
|
||||
content=f"\033[95m## Final Answer:\033[00m \033[92m\n{formatted_answer.output}\033[00m\n\n"
|
||||
)
|
||||
|
||||
def _use_tool(self, agent_action: AgentAction) -> Any:
|
||||
@@ -241,25 +254,25 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
return tool_result
|
||||
|
||||
def _summarize_messages(self) -> None:
|
||||
llm = LLM(self.llm)
|
||||
messages_groups = []
|
||||
|
||||
for message in self.messages:
|
||||
content = message["content"]
|
||||
for i in range(0, len(content), 5000):
|
||||
messages_groups.append(content[i : i + 5000])
|
||||
cut_size = self.llm.get_context_window_size()
|
||||
for i in range(0, len(content), cut_size):
|
||||
messages_groups.append(content[i : i + cut_size])
|
||||
|
||||
summarized_contents = []
|
||||
for group in messages_groups:
|
||||
summary = llm.call(
|
||||
summary = self.llm.call(
|
||||
[
|
||||
self._format_msg(
|
||||
self._i18n.slices("summarizer_system_message"), role="system"
|
||||
self._i18n.slice("summarizer_system_message"), role="system"
|
||||
),
|
||||
self._format_msg(
|
||||
self._i18n.errors("sumamrize_instruction").format(group=group),
|
||||
self._i18n.slice("sumamrize_instruction").format(group=group),
|
||||
),
|
||||
]
|
||||
],
|
||||
callbacks=self.callbacks,
|
||||
)
|
||||
summarized_contents.append(summary)
|
||||
|
||||
@@ -267,7 +280,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
|
||||
self.messages = [
|
||||
self._format_msg(
|
||||
self._i18n.errors("summary").format(merged_summary=merged_summary)
|
||||
self._i18n.slice("summary").format(merged_summary=merged_summary)
|
||||
)
|
||||
]
|
||||
|
||||
@@ -294,24 +307,16 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
) -> None:
|
||||
"""Function to handle the process of the training data."""
|
||||
agent_id = str(self.agent.id)
|
||||
|
||||
if (
|
||||
CrewTrainingHandler(TRAINING_DATA_FILE).load()
|
||||
and not self.ask_for_human_input
|
||||
):
|
||||
training_data = CrewTrainingHandler(TRAINING_DATA_FILE).load()
|
||||
if training_data.get(agent_id):
|
||||
if self.crew is not None and hasattr(self.crew, "_train_iteration"):
|
||||
training_data[agent_id][self.crew._train_iteration][
|
||||
"improved_output"
|
||||
] = result.output
|
||||
CrewTrainingHandler(TRAINING_DATA_FILE).save(training_data)
|
||||
else:
|
||||
self._logger.log(
|
||||
"error",
|
||||
"Invalid crew or missing _train_iteration attribute.",
|
||||
color="red",
|
||||
)
|
||||
training_data[agent_id][self.crew._train_iteration][
|
||||
"improved_output"
|
||||
] = result.output
|
||||
CrewTrainingHandler(TRAINING_DATA_FILE).save(training_data)
|
||||
|
||||
if self.ask_for_human_input and human_feedback is not None:
|
||||
training_data = {
|
||||
|
||||
@@ -4,6 +4,7 @@ import click
|
||||
import pkg_resources
|
||||
|
||||
from crewai.cli.create_crew import create_crew
|
||||
from crewai.cli.create_flow import create_flow
|
||||
from crewai.cli.create_pipeline import create_pipeline
|
||||
from crewai.memory.storage.kickoff_task_outputs_storage import (
|
||||
KickoffTaskOutputsSQLiteStorage,
|
||||
@@ -13,9 +14,12 @@ from .authentication.main import AuthenticationCommand
|
||||
from .deploy.main import DeployCommand
|
||||
from .evaluate_crew import evaluate_crew
|
||||
from .install_crew import install_crew
|
||||
from .plot_flow import plot_flow
|
||||
from .replay_from_task import replay_task_command
|
||||
from .reset_memories_command import reset_memories_command
|
||||
from .run_crew import run_crew
|
||||
from .run_flow import run_flow
|
||||
from .tools.main import ToolCommand
|
||||
from .train_crew import train_crew
|
||||
|
||||
|
||||
@@ -25,19 +29,20 @@ def crewai():
|
||||
|
||||
|
||||
@crewai.command()
|
||||
@click.argument("type", type=click.Choice(["crew", "pipeline"]))
|
||||
@click.argument("type", type=click.Choice(["crew", "pipeline", "flow"]))
|
||||
@click.argument("name")
|
||||
@click.option(
|
||||
"--router", is_flag=True, help="Create a pipeline with router functionality"
|
||||
)
|
||||
def create(type, name, router):
|
||||
"""Create a new crew or pipeline."""
|
||||
def create(type, name):
|
||||
"""Create a new crew, pipeline, or flow."""
|
||||
if type == "crew":
|
||||
create_crew(name)
|
||||
elif type == "pipeline":
|
||||
create_pipeline(name, router)
|
||||
create_pipeline(name)
|
||||
elif type == "flow":
|
||||
create_flow(name)
|
||||
else:
|
||||
click.secho("Error: Invalid type. Must be 'crew' or 'pipeline'.", fg="red")
|
||||
click.secho(
|
||||
"Error: Invalid type. Must be 'crew', 'pipeline', or 'flow'.", fg="red"
|
||||
)
|
||||
|
||||
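Taken together, the updated `create` command now accepts three project types; illustrative invocations (names are placeholders):

```sh
crewai create crew my_crew
crewai create pipeline my_pipeline
crewai create flow my_flow
```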
|
||||
@crewai.command()
|
||||
@@ -202,6 +207,12 @@ def deploy():
|
||||
pass
|
||||
|
||||
|
||||
@crewai.group()
|
||||
def tool():
|
||||
"""Tool Repository related commands."""
|
||||
pass
|
||||
|
||||
|
||||
@deploy.command(name="create")
|
||||
@click.option("-y", "--yes", is_flag=True, help="Skip the confirmation prompt")
|
||||
def deploy_create(yes: bool):
|
||||
@@ -249,5 +260,50 @@ def deploy_remove(uuid: Optional[str]):
|
||||
deploy_cmd.remove_crew(uuid=uuid)
|
||||
|
||||
|
||||
@tool.command(name="create")
|
||||
@click.argument("handle")
|
||||
def tool_create(handle: str):
|
||||
tool_cmd = ToolCommand()
|
||||
tool_cmd.create(handle)
|
||||
|
||||
|
||||
@tool.command(name="install")
|
||||
@click.argument("handle")
|
||||
def tool_install(handle: str):
|
||||
tool_cmd = ToolCommand()
|
||||
tool_cmd.login()
|
||||
tool_cmd.install(handle)
|
||||
|
||||
|
||||
@tool.command(name="publish")
|
||||
@click.option("--force", is_flag=True, show_default=True, default=False, help="Bypasses Git remote validations")
|
||||
@click.option("--public", "is_public", flag_value=True, default=False)
|
||||
@click.option("--private", "is_public", flag_value=False)
|
||||
def tool_publish(is_public: bool, force: bool):
|
||||
tool_cmd = ToolCommand()
|
||||
tool_cmd.login()
|
||||
tool_cmd.publish(is_public, force)
|
||||
|
||||
|
||||
@crewai.group()
|
||||
def flow():
|
||||
"""Flow related commands."""
|
||||
pass
|
||||
|
||||
|
||||
@flow.command(name="run")
|
||||
def flow_run():
|
||||
"""Run the Flow."""
|
||||
click.echo("Running the Flow")
|
||||
run_flow()
|
||||
|
||||
|
||||
@flow.command(name="plot")
|
||||
def flow_plot():
|
||||
"""Plot the Flow."""
|
||||
click.echo("Plotting the Flow")
|
||||
plot_flow()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
crewai()
|
||||
|
||||
src/crewai/cli/command.py (new file, 71 lines)
@@ -0,0 +1,71 @@
|
||||
import requests
|
||||
from requests.exceptions import JSONDecodeError
|
||||
from rich.console import Console
|
||||
from crewai.cli.plus_api import PlusAPI
|
||||
from crewai.cli.utils import get_auth_token
|
||||
from crewai.telemetry.telemetry import Telemetry
|
||||
|
||||
console = Console()
|
||||
|
||||
|
||||
class BaseCommand:
|
||||
def __init__(self):
|
||||
self._telemetry = Telemetry()
|
||||
self._telemetry.set_tracer()
|
||||
|
||||
|
||||
class PlusAPIMixin:
|
||||
def __init__(self, telemetry):
|
||||
try:
|
||||
telemetry.set_tracer()
|
||||
self.plus_api_client = PlusAPI(api_key=get_auth_token())
|
||||
except Exception:
|
||||
self._deploy_signup_error_span = telemetry.deploy_signup_error_span()
|
||||
console.print(
|
||||
"Please sign up/login to CrewAI+ before using the CLI.",
|
||||
style="bold red",
|
||||
)
|
||||
console.print("Run 'crewai signup' to sign up/login.", style="bold green")
|
||||
raise SystemExit
|
||||
|
||||
def _validate_response(self, response: requests.Response) -> None:
|
||||
"""
|
||||
Handle and display error messages from API responses.
|
||||
|
||||
Args:
|
||||
response (requests.Response): The response from the Plus API
|
||||
"""
|
||||
try:
|
||||
json_response = response.json()
|
||||
except (JSONDecodeError, ValueError):
|
||||
console.print(
|
||||
"Failed to parse response from Enterprise API failed. Details:",
|
||||
style="bold red",
|
||||
)
|
||||
console.print(f"Status Code: {response.status_code}")
|
||||
console.print(f"Response:\n{response.content}")
|
||||
raise SystemExit
|
||||
|
||||
if response.status_code == 422:
|
||||
console.print(
|
||||
"Failed to complete operation. Please fix the following errors:",
|
||||
style="bold red",
|
||||
)
|
||||
for field, messages in json_response.items():
|
||||
for message in messages:
|
||||
console.print(
|
||||
f"* [bold red]{field.capitalize()}[/bold red] {message}"
|
||||
)
|
||||
raise SystemExit
|
||||
|
||||
if not response.ok:
|
||||
console.print(
|
||||
"Request to Enterprise API failed. Details:", style="bold red"
|
||||
)
|
||||
details = (
|
||||
json_response.get("error")
|
||||
or json_response.get("message")
|
||||
or response.content
|
||||
)
|
||||
console.print(f"{details}")
|
||||
raise SystemExit
|
||||
@@ -21,6 +21,7 @@ def create_crew(name, parent_folder=None):
|
||||
bold=True,
|
||||
)
|
||||
|
||||
# Create necessary directories
|
||||
if not folder_path.exists():
|
||||
folder_path.mkdir(parents=True)
|
||||
(folder_path / "tests").mkdir(exist_ok=True)
|
||||
@@ -28,14 +29,47 @@ def create_crew(name, parent_folder=None):
|
||||
(folder_path / "src" / folder_name).mkdir(parents=True)
|
||||
(folder_path / "src" / folder_name / "tools").mkdir(parents=True)
|
||||
(folder_path / "src" / folder_name / "config").mkdir(parents=True)
|
||||
with open(folder_path / ".env", "w") as file:
|
||||
file.write("OPENAI_API_KEY=YOUR_API_KEY")
|
||||
else:
|
||||
click.secho(
|
||||
f"\tFolder {folder_name} already exists. Please choose a different name.",
|
||||
fg="red",
|
||||
f"\tFolder {folder_name} already exists. Updating .env file...",
|
||||
fg="yellow",
|
||||
)
|
||||
return
|
||||
|
||||
# Path to the .env file
|
||||
env_file_path = folder_path / ".env"
|
||||
|
||||
# Load existing environment variables if .env exists
|
||||
env_vars = {}
|
||||
if env_file_path.exists():
|
||||
with open(env_file_path, "r") as file:
|
||||
for line in file:
|
||||
key_value = line.strip().split('=', 1)
|
||||
if len(key_value) == 2:
|
||||
env_vars[key_value[0]] = key_value[1]
|
||||
|
||||
# Prompt for keys/variables/LLM settings only if not already set
|
||||
if 'OPENAI_API_KEY' not in env_vars:
|
||||
if click.confirm("Do you want to enter your OPENAI_API_KEY?", default=True):
|
||||
env_vars['OPENAI_API_KEY'] = click.prompt("Enter your OPENAI_API_KEY", type=str)
|
||||
|
||||
if 'ANTHROPIC_API_KEY' not in env_vars:
|
||||
if click.confirm("Do you want to enter your ANTHROPIC_API_KEY?", default=False):
|
||||
env_vars['ANTHROPIC_API_KEY'] = click.prompt("Enter your ANTHROPIC_API_KEY", type=str)
|
||||
|
||||
if 'GEMINI_API_KEY' not in env_vars:
|
||||
if click.confirm("Do you want to specify your GEMINI_API_KEY?", default=True):
|
||||
env_vars['GEMINI_API_KEY'] = click.prompt("Enter your GEMINI_API_KEY", type=str)
|
||||
|
||||
# Loop to add other environment variables
|
||||
while click.confirm("Do you want to specify another environment variable?", default=False):
|
||||
var_name = click.prompt("Enter the variable name", type=str)
|
||||
var_value = click.prompt(f"Enter the value for {var_name}", type=str)
|
||||
env_vars[var_name] = var_value
|
||||
|
||||
# Write the environment variables to .env file
|
||||
with open(env_file_path, "w") as file:
|
||||
for key, value in env_vars.items():
|
||||
file.write(f"{key}={value}\n")
|
||||
|
||||
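Given the prompts above, a run of this wizard might leave a `.env` like the following (all values illustrative):

```sh
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=your-gemini-key
MY_CUSTOM_VAR=some-value
```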
package_dir = Path(__file__).parent
|
||||
templates_dir = package_dir / "templates" / "crew"
|
||||
|
||||
src/crewai/cli/create_flow.py (new file, 99 lines)
@@ -0,0 +1,99 @@
|
||||
from pathlib import Path
|
||||
|
||||
import click
|
||||
|
||||
from crewai.telemetry import Telemetry
|
||||
|
||||
|
||||
def create_flow(name):
|
||||
"""Create a new flow."""
|
||||
folder_name = name.replace(" ", "_").replace("-", "_").lower()
|
||||
class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "")
|
||||
|
||||
click.secho(f"Creating flow {folder_name}...", fg="green", bold=True)
|
||||
|
||||
project_root = Path(folder_name)
|
||||
if project_root.exists():
|
||||
click.secho(f"Error: Folder {folder_name} already exists.", fg="red")
|
||||
return
|
||||
|
||||
# Initialize telemetry
|
||||
telemetry = Telemetry()
|
||||
telemetry.flow_creation_span(class_name)
|
||||
|
||||
# Create directory structure
|
||||
(project_root / "src" / folder_name).mkdir(parents=True)
|
||||
(project_root / "src" / folder_name / "crews").mkdir(parents=True)
|
||||
(project_root / "src" / folder_name / "tools").mkdir(parents=True)
|
||||
(project_root / "tests").mkdir(exist_ok=True)
|
||||
|
||||
# Create .env file
|
||||
with open(project_root / ".env", "w") as file:
|
||||
file.write("OPENAI_API_KEY=YOUR_API_KEY")
|
||||
|
||||
package_dir = Path(__file__).parent
|
||||
templates_dir = package_dir / "templates" / "flow"
|
||||
|
||||
# List of template files to copy
|
||||
root_template_files = [".gitignore", "pyproject.toml", "README.md"]
|
||||
src_template_files = ["__init__.py", "main.py"]
|
||||
tools_template_files = ["tools/__init__.py", "tools/custom_tool.py"]
|
||||
|
||||
crew_folders = [
|
||||
"poem_crew",
|
||||
]
|
||||
|
||||
def process_file(src_file, dst_file):
|
||||
if src_file.suffix in [".pyc", ".pyo", ".pyd"]:
|
||||
return
|
||||
|
||||
try:
|
||||
with open(src_file, "r", encoding="utf-8") as file:
|
||||
content = file.read()
|
||||
except Exception as e:
|
||||
click.secho(f"Error processing file {src_file}: {e}", fg="red")
|
||||
return
|
||||
|
||||
content = content.replace("{{name}}", name)
|
||||
content = content.replace("{{flow_name}}", class_name)
|
||||
content = content.replace("{{folder_name}}", folder_name)
|
||||
|
||||
with open(dst_file, "w") as file:
|
||||
file.write(content)
|
||||
|
||||
# Copy and process root template files
|
||||
for file_name in root_template_files:
|
||||
src_file = templates_dir / file_name
|
||||
dst_file = project_root / file_name
|
||||
process_file(src_file, dst_file)
|
||||
|
||||
# Copy and process src template files
|
||||
for file_name in src_template_files:
|
||||
src_file = templates_dir / file_name
|
||||
dst_file = project_root / "src" / folder_name / file_name
|
||||
process_file(src_file, dst_file)
|
||||
|
||||
# Copy tools files
|
||||
for file_name in tools_template_files:
|
||||
src_file = templates_dir / file_name
|
||||
dst_file = project_root / "src" / folder_name / file_name
|
||||
process_file(src_file, dst_file)
|
||||
|
||||
# Copy crew folders
|
||||
for crew_folder in crew_folders:
|
||||
src_crew_folder = templates_dir / "crews" / crew_folder
|
||||
dst_crew_folder = project_root / "src" / folder_name / "crews" / crew_folder
|
||||
if src_crew_folder.exists():
|
||||
for src_file in src_crew_folder.rglob("*"):
|
||||
if src_file.is_file():
|
||||
relative_path = src_file.relative_to(src_crew_folder)
|
||||
dst_file = dst_crew_folder / relative_path
|
||||
dst_file.parent.mkdir(parents=True, exist_ok=True)
|
||||
process_file(src_file, dst_file)
|
||||
else:
|
||||
click.secho(
|
||||
f"Warning: Crew folder {crew_folder} not found in template.",
|
||||
fg="yellow",
|
||||
)
|
||||
|
||||
click.secho(f"Flow {name} created successfully!", fg="green", bold=True)
|
||||
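Once created, the new `flow` CLI group added elsewhere in this changeset can drive the generated project; illustrative usage:

```sh
cd my_flow         # folder name derives from the name passed to `crewai create flow`
crewai flow run    # prints "Running the Flow" and executes it
crewai flow plot   # prints "Plotting the Flow" and renders a plot
```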
@@ -1,66 +0,0 @@
|
||||
from os import getenv
|
||||
|
||||
import requests
|
||||
|
||||
from crewai.cli.deploy.utils import get_crewai_version
|
||||
|
||||
|
||||
class CrewAPI:
|
||||
"""
|
||||
CrewAPI class to interact with the crewAI+ API.
|
||||
"""
|
||||
|
||||
def __init__(self, api_key: str) -> None:
|
||||
self.api_key = api_key
|
||||
self.headers = {
|
||||
"Authorization": f"Bearer {api_key}",
|
||||
"Content-Type": "application/json",
|
||||
"User-Agent": f"CrewAI-CLI/{get_crewai_version()}",
|
||||
}
|
||||
self.base_url = getenv(
|
||||
"CREWAI_BASE_URL", "https://crewai.com/crewai_plus/api/v1/crews"
|
||||
)
|
||||
|
||||
def _make_request(self, method: str, endpoint: str, **kwargs) -> requests.Response:
|
||||
url = f"{self.base_url}/{endpoint}"
|
||||
return requests.request(method, url, headers=self.headers, **kwargs)
|
||||
|
||||
# Deploy
|
||||
def deploy_by_name(self, project_name: str) -> requests.Response:
|
||||
return self._make_request("POST", f"by-name/{project_name}/deploy")
|
||||
|
||||
def deploy_by_uuid(self, uuid: str) -> requests.Response:
|
||||
return self._make_request("POST", f"{uuid}/deploy")
|
||||
|
||||
# Status
|
||||
def status_by_name(self, project_name: str) -> requests.Response:
|
||||
return self._make_request("GET", f"by-name/{project_name}/status")
|
||||
|
||||
def status_by_uuid(self, uuid: str) -> requests.Response:
|
||||
return self._make_request("GET", f"{uuid}/status")
|
||||
|
||||
# Logs
|
||||
def logs_by_name(
|
||||
self, project_name: str, log_type: str = "deployment"
|
||||
) -> requests.Response:
|
||||
return self._make_request("GET", f"by-name/{project_name}/logs/{log_type}")
|
||||
|
||||
def logs_by_uuid(
|
||||
self, uuid: str, log_type: str = "deployment"
|
||||
) -> requests.Response:
|
||||
return self._make_request("GET", f"{uuid}/logs/{log_type}")
|
||||
|
||||
# Delete
|
||||
def delete_by_name(self, project_name: str) -> requests.Response:
|
||||
return self._make_request("DELETE", f"by-name/{project_name}")
|
||||
|
||||
def delete_by_uuid(self, uuid: str) -> requests.Response:
|
||||
return self._make_request("DELETE", f"{uuid}")
|
||||
|
||||
# List
|
||||
def list_crews(self) -> requests.Response:
|
||||
return self._make_request("GET", "")
|
||||
|
||||
# Create
|
||||
def create_crew(self, payload) -> requests.Response:
|
||||
return self._make_request("POST", "", json=payload)
|
||||
@@ -2,19 +2,14 @@ from typing import Any, Dict, List, Optional
|
||||
|
||||
from rich.console import Console
|
||||
|
||||
from crewai.telemetry import Telemetry
|
||||
from .api import CrewAPI
|
||||
from .utils import (
|
||||
fetch_and_json_env_file,
|
||||
get_auth_token,
|
||||
get_git_remote_url,
|
||||
get_project_name,
|
||||
)
|
||||
from crewai.cli import git
|
||||
from crewai.cli.command import BaseCommand, PlusAPIMixin
|
||||
from crewai.cli.utils import fetch_and_json_env_file, get_project_name
|
||||
|
||||
console = Console()
|
||||
|
||||
|
||||
class DeployCommand:
|
||||
class DeployCommand(BaseCommand, PlusAPIMixin):
|
||||
"""
|
||||
A class to handle deployment-related operations for CrewAI projects.
|
||||
"""
|
||||
@@ -23,40 +18,10 @@ class DeployCommand:
|
||||
"""
|
||||
Initialize the DeployCommand with project name and API client.
|
||||
"""
|
||||
try:
|
||||
self._telemetry = Telemetry()
|
||||
self._telemetry.set_tracer()
|
||||
access_token = get_auth_token()
|
||||
except Exception:
|
||||
self._deploy_signup_error_span = self._telemetry.deploy_signup_error_span()
|
||||
console.print(
|
||||
"Please sign up/login to CrewAI+ before using the CLI.",
|
||||
style="bold red",
|
||||
)
|
||||
console.print("Run 'crewai signup' to sign up/login.", style="bold green")
|
||||
raise SystemExit
|
||||
|
||||
self.project_name = get_project_name()
|
||||
if self.project_name is None:
|
||||
console.print(
|
||||
"No project name found. Please ensure your project has a valid pyproject.toml file.",
|
||||
style="bold red",
|
||||
)
|
||||
raise SystemExit
|
||||
|
||||
self.client = CrewAPI(api_key=access_token)
|
||||
|
||||
def _handle_error(self, json_response: Dict[str, Any]) -> None:
|
||||
"""
|
||||
Handle and display error messages from API responses.
|
||||
|
||||
Args:
|
||||
json_response (Dict[str, Any]): The JSON response containing error information.
|
||||
"""
|
||||
error = json_response.get("error", "Unknown error")
|
||||
message = json_response.get("message", "No message provided")
|
||||
console.print(f"Error: {error}", style="bold red")
|
||||
console.print(f"Message: {message}", style="bold red")
|
||||
BaseCommand.__init__(self)
|
||||
PlusAPIMixin.__init__(self, telemetry=self._telemetry)
|
||||
self.project_name = get_project_name(require=True)
|
||||
|
||||
def _standard_no_param_error_message(self) -> None:
|
||||
"""
|
||||
@@ -104,18 +69,15 @@ class DeployCommand:
|
||||
self._start_deployment_span = self._telemetry.start_deployment_span(uuid)
|
||||
console.print("Starting deployment...", style="bold blue")
|
||||
if uuid:
|
||||
response = self.client.deploy_by_uuid(uuid)
|
||||
response = self.plus_api_client.deploy_by_uuid(uuid)
|
||||
elif self.project_name:
|
||||
response = self.client.deploy_by_name(self.project_name)
|
||||
response = self.plus_api_client.deploy_by_name(self.project_name)
|
||||
else:
|
||||
self._standard_no_param_error_message()
|
||||
return
|
||||
|
||||
json_response = response.json()
|
||||
if response.status_code == 200:
|
||||
self._display_deployment_info(json_response)
|
||||
else:
|
||||
self._handle_error(json_response)
|
||||
self._validate_response(response)
|
||||
self._display_deployment_info(response.json())
|
||||
|
||||
def create_crew(self, confirm: bool = False) -> None:
|
||||
"""
|
||||
@@ -126,7 +88,11 @@ class DeployCommand:
|
||||
)
|
||||
console.print("Creating deployment...", style="bold blue")
|
||||
env_vars = fetch_and_json_env_file()
|
||||
remote_repo_url = get_git_remote_url()
|
||||
|
||||
try:
|
||||
remote_repo_url = git.Repository().origin_url()
|
||||
except ValueError:
|
||||
remote_repo_url = None
|
||||
|
||||
if remote_repo_url is None:
|
||||
console.print("No remote repository URL found.", style="bold red")
|
||||
@@ -138,12 +104,10 @@ class DeployCommand:
|
||||
|
||||
self._confirm_input(env_vars, remote_repo_url, confirm)
|
||||
payload = self._create_payload(env_vars, remote_repo_url)
|
||||
response = self.plus_api_client.create_crew(payload)
|
||||
|
||||
response = self.client.create_crew(payload)
|
||||
if response.status_code == 201:
|
||||
self._display_creation_success(response.json())
|
||||
else:
|
||||
self._handle_error(response.json())
|
||||
self._validate_response(response)
|
||||
self._display_creation_success(response.json())
|
||||
|
||||
def _confirm_input(
|
||||
self, env_vars: Dict[str, str], remote_repo_url: str, confirm: bool
|
||||
@@ -208,7 +172,7 @@ class DeployCommand:
|
||||
"""
|
||||
console.print("Listing all Crews\n", style="bold blue")
|
||||
|
||||
response = self.client.list_crews()
|
||||
response = self.plus_api_client.list_crews()
|
||||
json_response = response.json()
|
||||
if response.status_code == 200:
|
||||
self._display_crews(json_response)
|
||||
@@ -243,18 +207,15 @@ class DeployCommand:
|
||||
"""
|
||||
console.print("Fetching deployment status...", style="bold blue")
|
||||
if uuid:
|
||||
response = self.client.status_by_uuid(uuid)
|
||||
response = self.plus_api_client.crew_status_by_uuid(uuid)
|
||||
elif self.project_name:
|
||||
response = self.client.status_by_name(self.project_name)
|
||||
response = self.plus_api_client.crew_status_by_name(self.project_name)
|
||||
else:
|
||||
self._standard_no_param_error_message()
|
||||
return
|
||||
|
||||
json_response = response.json()
|
||||
if response.status_code == 200:
|
||||
self._display_crew_status(json_response)
|
||||
else:
|
||||
self._handle_error(json_response)
|
||||
self._validate_response(response)
|
||||
self._display_crew_status(response.json())
|
||||
|
||||
def _display_crew_status(self, status_data: Dict[str, str]) -> None:
|
||||
"""
|
||||
@@ -278,17 +239,15 @@ class DeployCommand:
|
||||
console.print(f"Fetching {log_type} logs...", style="bold blue")
|
||||
|
||||
if uuid:
|
||||
response = self.client.logs_by_uuid(uuid, log_type)
|
||||
response = self.plus_api_client.crew_by_uuid(uuid, log_type)
|
||||
elif self.project_name:
|
||||
response = self.client.logs_by_name(self.project_name, log_type)
|
||||
response = self.plus_api_client.crew_by_name(self.project_name, log_type)
|
||||
else:
|
||||
self._standard_no_param_error_message()
|
||||
return
|
||||
|
||||
if response.status_code == 200:
|
||||
self._display_logs(response.json())
|
||||
else:
|
||||
self._handle_error(response.json())
|
||||
self._validate_response(response)
|
||||
self._display_logs(response.json())
|
||||
|
||||
def remove_crew(self, uuid: Optional[str]) -> None:
|
||||
"""
|
||||
@@ -301,9 +260,9 @@ class DeployCommand:
|
||||
console.print("Removing deployment...", style="bold blue")
|
||||
|
||||
if uuid:
|
||||
response = self.client.delete_by_uuid(uuid)
|
||||
response = self.plus_api_client.delete_crew_by_uuid(uuid)
|
||||
elif self.project_name:
|
||||
response = self.client.delete_by_name(self.project_name)
|
||||
response = self.plus_api_client.delete_crew_by_name(self.project_name)
|
||||
else:
|
||||
self._standard_no_param_error_message()
|
||||
return
|
||||
|
||||
@@ -1,155 +0,0 @@ (file removed)
import sys
import re
import subprocess

from rich.console import Console

from ..authentication.utils import TokenManager

console = Console()


if sys.version_info >= (3, 11):
    import tomllib


# Drop the simple_toml_parser when we move to python3.11
def simple_toml_parser(content):
    result = {}
    current_section = result
    for line in content.split('\n'):
        line = line.strip()
        if line.startswith('[') and line.endswith(']'):
            # New section
            section = line[1:-1].split('.')
            current_section = result
            for key in section:
                current_section = current_section.setdefault(key, {})
        elif '=' in line:
            key, value = line.split('=', 1)
            key = key.strip()
            value = value.strip().strip('"')
            current_section[key] = value
    return result


def parse_toml(content):
    if sys.version_info >= (3, 11):
        return tomllib.loads(content)
    else:
        return simple_toml_parser(content)


def get_git_remote_url() -> str | None:
    """Get the Git repository's remote URL."""
    try:
        # Run the git remote -v command
        result = subprocess.run(
            ["git", "remote", "-v"], capture_output=True, text=True, check=True
        )

        # Get the output
        output = result.stdout

        # Parse the output to find the origin URL
        matches = re.findall(r"origin\s+(.*?)\s+\(fetch\)", output)

        if matches:
            return matches[0]  # Return the first match (origin URL)
        else:
            console.print("No origin remote found.", style="bold red")

    except subprocess.CalledProcessError as e:
        console.print(f"Error while trying to fetch the Git repository: {e}", style="bold red")
    except FileNotFoundError:
        console.print("Git command not found. Make sure Git is installed and in your PATH.", style="bold red")

    return None


def get_project_name(pyproject_path: str = "pyproject.toml") -> str | None:
    """Get the project name from the pyproject.toml file."""
    try:
        # Read the pyproject.toml file
        with open(pyproject_path, "r") as f:
            pyproject_content = parse_toml(f.read())

        # Extract the project name
        project_name = pyproject_content["tool"]["poetry"]["name"]

        if "crewai" not in pyproject_content["tool"]["poetry"]["dependencies"]:
            raise Exception("crewai is not in the dependencies.")

        return project_name

    except FileNotFoundError:
        print(f"Error: {pyproject_path} not found.")
    except KeyError:
        print(f"Error: {pyproject_path} is not a valid pyproject.toml file.")
    except tomllib.TOMLDecodeError if sys.version_info >= (3, 11) else Exception as e:  # type: ignore
        print(
            f"Error: {pyproject_path} is not a valid TOML file."
            if sys.version_info >= (3, 11)
            else f"Error reading the pyproject.toml file: {e}"
        )
    except Exception as e:
        print(f"Error reading the pyproject.toml file: {e}")

    return None


def get_crewai_version(poetry_lock_path: str = "poetry.lock") -> str:
    """Get the version number of crewai from the poetry.lock file."""
    try:
        with open(poetry_lock_path, "r") as f:
            lock_content = f.read()

        match = re.search(
            r'\[\[package\]\]\s*name\s*=\s*"crewai"\s*version\s*=\s*"([^"]+)"',
            lock_content,
            re.DOTALL,
        )
        if match:
            return match.group(1)
        else:
            print("crewai package not found in poetry.lock")
            return "no-version-found"

    except FileNotFoundError:
        print(f"Error: {poetry_lock_path} not found.")
    except Exception as e:
        print(f"Error reading the poetry.lock file: {e}")

    return "no-version-found"


def fetch_and_json_env_file(env_file_path: str = ".env") -> dict:
    """Fetch the environment variables from a .env file and return them as a dictionary."""
    try:
        # Read the .env file
        with open(env_file_path, "r") as f:
            env_content = f.read()

        # Parse the .env file content to a dictionary
        env_dict = {}
        for line in env_content.splitlines():
            if line.strip() and not line.strip().startswith("#"):
                key, value = line.split("=", 1)
                env_dict[key.strip()] = value.strip()

        return env_dict

    except FileNotFoundError:
        print(f"Error: {env_file_path} not found.")
    except Exception as e:
        print(f"Error reading the .env file: {e}")

    return {}


def get_auth_token() -> str:
    """Get the authentication token."""
    access_token = TokenManager().get_token()
    if not access_token:
        raise Exception()
    return access_token
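An editor's aside on the fallback parser removed above (it reappears in `crewai.cli.utils` later in this diff): the line-based parser only understands `[section]` headers and simple `key = "value"` pairs, so Poetry inline tables like `{ extras = [...] }` come back as raw strings. A minimal sketch, assuming the post-PR import location:

```python
# Sketch only: exercises the fallback TOML parser; on Python >= 3.11,
# parse_toml delegates to the stdlib tomllib instead.
from crewai.cli.utils import parse_toml  # post-PR location (assumption)

content = """
[tool.poetry]
name = "demo"

[tool.poetry.dependencies]
crewai = "0.67.1"
"""

data = parse_toml(content)
print(data["tool"]["poetry"]["name"])          # -> demo
print(data["tool"]["poetry"]["dependencies"])  # -> {'crewai': '0.67.1'}
```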
src/crewai/cli/git.py (new file, 80 lines)
@@ -0,0 +1,80 @@
import subprocess


class Repository:
    def __init__(self, path="."):
        self.path = path

        if not self.is_git_installed():
            raise ValueError("Git is not installed or not found in your PATH.")

        if not self.is_git_repo():
            raise ValueError(f"{self.path} is not a Git repository.")

        self.fetch()

    def is_git_installed(self) -> bool:
        """Check if Git is installed and available in the system."""
        try:
            subprocess.run(
                ["git", "--version"], capture_output=True, check=True, text=True
            )
            return True
        except (subprocess.CalledProcessError, FileNotFoundError):
            return False

    def fetch(self) -> None:
        """Fetch latest updates from the remote."""
        subprocess.run(["git", "fetch"], cwd=self.path, check=True)

    def status(self) -> str:
        """Get the git status in porcelain format."""
        return subprocess.check_output(
            ["git", "status", "--branch", "--porcelain"],
            cwd=self.path,
            encoding="utf-8",
        ).strip()

    def is_git_repo(self) -> bool:
        """Check if the current directory is a git repository."""
        try:
            subprocess.check_output(
                ["git", "rev-parse", "--is-inside-work-tree"],
                cwd=self.path,
                encoding="utf-8",
            )
            return True
        except subprocess.CalledProcessError:
            return False

    def has_uncommitted_changes(self) -> bool:
        """Check if the repository has uncommitted changes."""
        return len(self.status().splitlines()) > 1

    def is_ahead_or_behind(self) -> bool:
        """Check if the repository is ahead or behind the remote."""
        for line in self.status().splitlines():
            if line.startswith("##") and ("ahead" in line or "behind" in line):
                return True
        return False

    def is_synced(self) -> bool:
        """Return True if the Git repository is fully synced with the remote, False otherwise."""
        if self.has_uncommitted_changes() or self.is_ahead_or_behind():
            return False
        else:
            return True

    def origin_url(self) -> str | None:
        """Get the Git repository's remote URL."""
        try:
            result = subprocess.run(
                ["git", "remote", "get-url", "origin"],
                cwd=self.path,
                capture_output=True,
                text=True,
                check=True,
            )
            return result.stdout.strip()
        except subprocess.CalledProcessError:
            return None
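For orientation, here is roughly how the new `Repository` helper is meant to be consumed (`ToolCommand.publish` later in this diff does exactly this kind of guard). A sketch, assuming it runs inside a git checkout with a configured remote:

```python
from crewai.cli.git import Repository

try:
    # Note: the constructor validates git availability, checks the path is a
    # repository, and runs `git fetch` immediately.
    repo = Repository(path=".")
except ValueError as error:
    print(f"Cannot inspect repository: {error}")
else:
    if not repo.is_synced():
        print("Commit, push, or pull before publishing.")
    else:
        print(f"origin: {repo.origin_url()}")
```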
src/crewai/cli/plot_flow.py (new file, 23 lines)
@@ -0,0 +1,23 @@
import subprocess

import click


def plot_flow() -> None:
    """
    Plot the flow by running a command in the Poetry environment.
    """
    command = ["poetry", "run", "plot_flow"]

    try:
        result = subprocess.run(command, capture_output=False, text=True, check=True)

        if result.stderr:
            click.echo(result.stderr, err=True)

    except subprocess.CalledProcessError as e:
        click.echo(f"An error occurred while plotting the flow: {e}", err=True)
        click.echo(e.output, err=True)

    except Exception as e:
        click.echo(f"An unexpected error occurred: {e}", err=True)
src/crewai/cli/plus_api.py (new file, 95 lines)
@@ -0,0 +1,95 @@
from typing import Optional
import requests
from os import getenv
from crewai.cli.utils import get_crewai_version
from urllib.parse import urljoin


class PlusAPI:
    """
    This class exposes methods for working with the CrewAI+ API.
    """

    TOOLS_RESOURCE = "/crewai_plus/api/v1/tools"
    CREWS_RESOURCE = "/crewai_plus/api/v1/crews"

    def __init__(self, api_key: str) -> None:
        self.api_key = api_key
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
            "User-Agent": f"CrewAI-CLI/{get_crewai_version()}",
            "X-Crewai-Version": get_crewai_version(),
        }
        self.base_url = getenv("CREWAI_BASE_URL", "https://app.crewai.com")

    def _make_request(self, method: str, endpoint: str, **kwargs) -> requests.Response:
        url = urljoin(self.base_url, endpoint)
        return requests.request(method, url, headers=self.headers, **kwargs)

    def login_to_tool_repository(self):
        return self._make_request("POST", f"{self.TOOLS_RESOURCE}/login")

    def get_tool(self, handle: str):
        return self._make_request("GET", f"{self.TOOLS_RESOURCE}/{handle}")

    def publish_tool(
        self,
        handle: str,
        is_public: bool,
        version: str,
        description: Optional[str],
        encoded_file: str,
    ):
        params = {
            "handle": handle,
            "public": is_public,
            "version": version,
            "file": encoded_file,
            "description": description,
        }
        return self._make_request("POST", f"{self.TOOLS_RESOURCE}", json=params)

    def deploy_by_name(self, project_name: str) -> requests.Response:
        return self._make_request(
            "POST", f"{self.CREWS_RESOURCE}/by-name/{project_name}/deploy"
        )

    def deploy_by_uuid(self, uuid: str) -> requests.Response:
        return self._make_request("POST", f"{self.CREWS_RESOURCE}/{uuid}/deploy")

    def crew_status_by_name(self, project_name: str) -> requests.Response:
        return self._make_request(
            "GET", f"{self.CREWS_RESOURCE}/by-name/{project_name}/status"
        )

    def crew_status_by_uuid(self, uuid: str) -> requests.Response:
        return self._make_request("GET", f"{self.CREWS_RESOURCE}/{uuid}/status")

    def crew_by_name(
        self, project_name: str, log_type: str = "deployment"
    ) -> requests.Response:
        return self._make_request(
            "GET", f"{self.CREWS_RESOURCE}/by-name/{project_name}/logs/{log_type}"
        )

    def crew_by_uuid(
        self, uuid: str, log_type: str = "deployment"
    ) -> requests.Response:
        return self._make_request(
            "GET", f"{self.CREWS_RESOURCE}/{uuid}/logs/{log_type}"
        )

    def delete_crew_by_name(self, project_name: str) -> requests.Response:
        return self._make_request(
            "DELETE", f"{self.CREWS_RESOURCE}/by-name/{project_name}"
        )

    def delete_crew_by_uuid(self, uuid: str) -> requests.Response:
        return self._make_request("DELETE", f"{self.CREWS_RESOURCE}/{uuid}")

    def list_crews(self) -> requests.Response:
        return self._make_request("GET", self.CREWS_RESOURCE)

    def create_crew(self, payload) -> requests.Response:
        return self._make_request("POST", self.CREWS_RESOURCE, json=payload)
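A usage sketch for the `PlusAPI` client above: every method is a thin wrapper over `requests` and returns the raw `requests.Response`, leaving status handling to callers such as `DeployCommand`. The token value below is a placeholder; in the CLI it comes from `get_auth_token()`:

```python
from crewai.cli.plus_api import PlusAPI

client = PlusAPI(api_key="<access-token>")  # placeholder token
response = client.crew_status_by_name("my_project")
if response.status_code == 200:
    print(response.json())
else:
    print(f"CrewAI+ API error {response.status_code}: {response.text}")
```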
src/crewai/cli/run_flow.py (new file, 23 lines)
@@ -0,0 +1,23 @@
import subprocess

import click


def run_flow() -> None:
    """
    Run the flow by running a command in the Poetry environment.
    """
    command = ["poetry", "run", "run_flow"]

    try:
        result = subprocess.run(command, capture_output=False, text=True, check=True)

        if result.stderr:
            click.echo(result.stderr, err=True)

    except subprocess.CalledProcessError as e:
        click.echo(f"An error occurred while running the flow: {e}", err=True)
        click.echo(e.output, err=True)

    except Exception as e:
        click.echo(f"An unexpected error occurred: {e}", err=True)
@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
 
 [tool.poetry.dependencies]
 python = ">=3.10,<=3.13"
-crewai = { extras = ["tools"], version = ">=0.55.2,<1.0.0" }
+crewai = { extras = ["tools"], version = ">=0.67.1,<1.0.0" }
 
 [tool.poetry.scripts]
src/crewai/cli/templates/flow/.gitignore (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
.env
__pycache__/
lib/
src/crewai/cli/templates/flow/README.md (new file, 57 lines)
@@ -0,0 +1,57 @@
# {{crew_name}} Crew

Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.com). This template is designed to help you set up a multi-agent AI system with ease, leveraging the powerful and flexible framework provided by crewAI. Our goal is to enable your agents to collaborate effectively on complex tasks, maximizing their collective intelligence and capabilities.

## Installation

Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.

First, if you haven't already, install Poetry:

```bash
pip install poetry
```

Next, navigate to your project directory and install the dependencies:

1. First lock the dependencies and then install them:

```bash
crewai install
```

### Customizing

**Add your `OPENAI_API_KEY` into the `.env` file**

- Modify `src/{{folder_name}}/config/agents.yaml` to define your agents
- Modify `src/{{folder_name}}/config/tasks.yaml` to define your tasks
- Modify `src/{{folder_name}}/crew.py` to add your own logic, tools and specific args
- Modify `src/{{folder_name}}/main.py` to add custom inputs for your agents and tasks

## Running the Project

To kickstart your crew of AI agents and begin task execution, run this from the root folder of your project:

```bash
crewai run
```

This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration.

Unmodified, this example will create a `report.md` file in the root folder with the output of a research on LLMs.

## Understanding Your Crew

The {{name}} Crew is composed of multiple AI agents, each with unique roles, goals, and tools. These agents collaborate on a series of tasks, defined in `config/tasks.yaml`, leveraging their collective skills to achieve complex objectives. The `config/agents.yaml` file outlines the capabilities and configurations of each agent in your crew.

## Support

For support, questions, or feedback regarding the {{crew_name}} Crew or crewAI:

- Visit our [documentation](https://docs.crewai.com)
- Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai)
- [Join our Discord](https://discord.com/invite/X4JWnZnxPb)
- [Chat with our docs](https://chatg.pt/DWjSBZn)

Let's create wonders together with the power and simplicity of crewAI.
src/crewai/cli/templates/flow/__init__.py (new file, empty)

@@ -0,0 +1,11 @@
poem_writer:
  role: >
    CrewAI Poem Writer
  goal: >
    Generate a funny, light-hearted poem about how CrewAI
    is awesome with a sentence count of {sentence_count}
  backstory: >
    You're a creative poet with a talent for capturing the essence of any topic
    in a beautiful and engaging way. Known for your ability to craft poems that
    resonate with readers, you bring a unique perspective and artistic flair to
    every piece you write.
@@ -0,0 +1,7 @@
write_poem:
  description: >
    Write a poem about how CrewAI is awesome.
    Ensure the poem is engaging and adheres to the specified sentence count of {sentence_count}.
  expected_output: >
    A beautifully crafted poem about CrewAI, with exactly {sentence_count} sentences.
  agent: poem_writer
src/crewai/cli/templates/flow/crews/poem_crew/poem_crew.py (new file, 31 lines)
@@ -0,0 +1,31 @@
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task


@CrewBase
class PoemCrew():
    """Poem Crew"""

    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    @agent
    def poem_writer(self) -> Agent:
        return Agent(
            config=self.agents_config['poem_writer'],
        )

    @task
    def write_poem(self) -> Task:
        return Task(
            config=self.tasks_config['write_poem'],
        )

    @crew
    def crew(self) -> Crew:
        """Creates the Poem Crew"""
        return Crew(
            agents=self.agents,  # Automatically created by the @agent decorator
            tasks=self.tasks,  # Automatically created by the @task decorator
            process=Process.sequential,
            verbose=True,
        )
src/crewai/cli/templates/flow/main.py (new file, 65 lines)
@@ -0,0 +1,65 @@
#!/usr/bin/env python
import asyncio
from random import randint

from pydantic import BaseModel
from crewai.flow.flow import Flow, listen, start
from .crews.poem_crew.poem_crew import PoemCrew


class PoemState(BaseModel):
    sentence_count: int = 1
    poem: str = ""


class PoemFlow(Flow[PoemState]):

    @start()
    def generate_sentence_count(self):
        print("Generating sentence count")
        # Generate a number between 1 and 5
        self.state.sentence_count = randint(1, 5)

    @listen(generate_sentence_count)
    def generate_poem(self):
        print("Generating poem")
        print(f"State before poem: {self.state}")
        result = PoemCrew().crew().kickoff(inputs={"sentence_count": self.state.sentence_count})

        print("Poem generated", result.raw)
        self.state.poem = result.raw

        print(f"State after generate_poem: {self.state}")

    @listen(generate_poem)
    def save_poem(self):
        print("Saving poem")
        print(f"State before save_poem: {self.state}")
        with open("poem.txt", "w") as f:
            f.write(self.state.poem)
        print(f"State after save_poem: {self.state}")


async def run_flow():
    """
    Run the flow.
    """
    poem_flow = PoemFlow()
    await poem_flow.kickoff()


async def plot_flow():
    """
    Plot the flow.
    """
    poem_flow = PoemFlow()
    poem_flow.plot()


def main():
    asyncio.run(run_flow())


def plot():
    asyncio.run(plot_flow())


if __name__ == "__main__":
    main()
src/crewai/cli/templates/flow/pyproject.toml (new file, 19 lines)
@@ -0,0 +1,19 @@
[tool.poetry]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]

[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.67.1,<1.0.0" }
asyncio = "*"

[tool.poetry.scripts]
{{folder_name}} = "{{folder_name}}.main:main"
run_flow = "{{folder_name}}.main:main"
plot_flow = "{{folder_name}}.main:plot"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
src/crewai/cli/templates/flow/tools/__init__.py (new file, empty)

src/crewai/cli/templates/flow/tools/custom_tool.py (new file, 12 lines)
@@ -0,0 +1,12 @@
from crewai_tools import BaseTool


class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = (
        "Clear description for what this tool is useful for; your agent will need this information to use it."
    )

    def _run(self, argument: str) -> str:
        # Implementation goes here
        return "this is an example of a tool output, ignore it and move along."
@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
 
 [tool.poetry.dependencies]
 python = ">=3.10,<=3.13"
-crewai = { extras = ["tools"], version = ">=0.55.2,<1.0.0" }
+crewai = { extras = ["tools"], version = ">=0.67.1,<1.0.0" }
 asyncio = "*"
 
 [tool.poetry.scripts]
@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
 
 [tool.poetry.dependencies]
 python = ">=3.10,<=3.13"
-crewai = { extras = ["tools"], version = ">=0.55.2,<1.0.0" }
+crewai = { extras = ["tools"], version = ">=0.67.1,<1.0.0" }
 
 [tool.poetry.scripts]
src/crewai/cli/templates/tool/README.md (new file, 48 lines)
@@ -0,0 +1,48 @@
# {{folder_name}}

{{folder_name}} is a CrewAI Tool. This template is designed to help you create
custom tools to power up your crews.

## Installing

Ensure you have Python >=3.10 <=3.13 installed on your system. This project
uses [Poetry](https://python-poetry.org/) for dependency management and package
handling, offering a seamless setup and execution experience.

First, if you haven't already, install Poetry:

```bash
pip install poetry
```

Next, navigate to your project directory and install the dependencies with:

```bash
crewai install
```

## Publishing

Collaborate by sharing tools within your organization, or publish them publicly
to contribute to the community.

```bash
crewai tool publish {{tool_name}}
```

Others may install your tool in their crews by running:

```bash
crewai tool install {{tool_name}}
```

## Support

For support, questions, or feedback regarding the {{crew_name}} tool or CrewAI:

- Visit our [documentation](https://docs.crewai.com)
- Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai)
- [Join our Discord](https://discord.com/invite/X4JWnZnxPb)
- [Chat with our docs](https://chatg.pt/DWjSBZn)

Let's create wonders together with the power and simplicity of crewAI.
src/crewai/cli/templates/tool/pyproject.toml (new file, 14 lines)
@@ -0,0 +1,14 @@
[tool.poetry]
name = "{{folder_name}}"
version = "0.1.0"
description = "Power up your crews with {{folder_name}}"
authors = ["Your Name <you@example.com>"]
readme = "README.md"

[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.64.0,<1.0.0" }

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
@@ -0,0 +1,9 @@
from crewai_tools import BaseTool


class {{class_name}}(BaseTool):
    name: str = "Name of my tool"
    description: str = "What this tool does. It's vital for effective utilization."

    def _run(self, argument: str) -> str:
        # Your tool's logic here
        return "Tool's result"
src/crewai/cli/tools/__init__.py (new file, empty)

src/crewai/cli/tools/main.py (new file, 229 lines)
@@ -0,0 +1,229 @@
import base64
from pathlib import Path
import click
import os
import subprocess
import tempfile

from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli import git
from crewai.cli.utils import (
    get_project_name,
    get_project_description,
    get_project_version,
    tree_copy,
    tree_find_and_replace,
)
from rich.console import Console

console = Console()


class ToolCommand(BaseCommand, PlusAPIMixin):
    """
    A class to handle tool repository related operations for CrewAI projects.
    """

    def __init__(self):
        BaseCommand.__init__(self)
        PlusAPIMixin.__init__(self, telemetry=self._telemetry)

    def create(self, handle: str):
        self._ensure_not_in_project()

        folder_name = handle.replace(" ", "_").replace("-", "_").lower()
        class_name = handle.replace("_", " ").replace("-", " ").title().replace(" ", "")

        project_root = Path(folder_name)
        if project_root.exists():
            click.secho(f"Folder {folder_name} already exists.", fg="red")
            raise SystemExit
        else:
            os.makedirs(project_root)

        click.secho(f"Creating custom tool {folder_name}...", fg="green", bold=True)

        template_dir = Path(__file__).parent.parent / "templates" / "tool"
        tree_copy(template_dir, project_root)
        tree_find_and_replace(project_root, "{{folder_name}}", folder_name)
        tree_find_and_replace(project_root, "{{class_name}}", class_name)

        old_directory = os.getcwd()
        os.chdir(project_root)
        try:
            self.login()
            subprocess.run(["git", "init"], check=True)
            console.print(
                f"[green]Created custom tool [bold]{folder_name}[/bold]. Run [bold]cd {project_root}[/bold] to start working.[/green]"
            )
        finally:
            os.chdir(old_directory)

    def publish(self, is_public: bool, force: bool = False):
        if not git.Repository().is_synced() and not force:
            console.print(
                "[bold red]Failed to publish tool.[/bold red]\n"
                "Local changes need to be resolved before publishing. Please do the following:\n"
                "* [bold]Commit[/bold] your changes.\n"
                "* [bold]Push[/bold] to sync with the remote.\n"
                "* [bold]Pull[/bold] the latest changes from the remote.\n"
                "\nOnce your repository is up-to-date, retry publishing the tool."
            )
            raise SystemExit()

        project_name = get_project_name(require=True)
        assert isinstance(project_name, str)

        project_version = get_project_version(require=True)
        assert isinstance(project_version, str)

        project_description = get_project_description(require=False)
        encoded_tarball = None

        with tempfile.TemporaryDirectory() as temp_build_dir:
            subprocess.run(
                ["poetry", "build", "-f", "sdist", "--output", temp_build_dir],
                check=True,
                capture_output=False,
            )

            tarball_filename = next(
                (f for f in os.listdir(temp_build_dir) if f.endswith(".tar.gz")), None
            )
            if not tarball_filename:
                console.print(
                    "Project build failed. Please ensure that the command `poetry build -f sdist` completes successfully.",
                    style="bold red",
                )
                raise SystemExit

            tarball_path = os.path.join(temp_build_dir, tarball_filename)
            with open(tarball_path, "rb") as file:
                tarball_contents = file.read()

            encoded_tarball = base64.b64encode(tarball_contents).decode("utf-8")

        publish_response = self.plus_api_client.publish_tool(
            handle=project_name,
            is_public=is_public,
            version=project_version,
            description=project_description,
            encoded_file=f"data:application/x-gzip;base64,{encoded_tarball}",
        )

        self._validate_response(publish_response)

        published_handle = publish_response.json()["handle"]
        console.print(
            f"Successfully published {published_handle} ({project_version}).\nInstall it in other projects with crewai tool install {published_handle}",
            style="bold green",
        )

    def install(self, handle: str):
        get_response = self.plus_api_client.get_tool(handle)

        if get_response.status_code == 404:
            console.print(
                "No tool found with this name. Please ensure the tool was published and you have access to it.",
                style="bold red",
            )
            raise SystemExit
        elif get_response.status_code != 200:
            console.print(
                "Failed to get tool details. Please try again later.", style="bold red"
            )
            raise SystemExit

        self._add_package(get_response.json())

        console.print(f"Successfully installed {handle}", style="bold green")

    def login(self):
        login_response = self.plus_api_client.login_to_tool_repository()

        if login_response.status_code != 200:
            console.print(
                "Failed to authenticate to the tool repository. Make sure you have access to tools.",
                style="bold red",
            )
            raise SystemExit

        login_response_json = login_response.json()
        for repository in login_response_json["repositories"]:
            self._add_repository_to_poetry(
                repository, login_response_json["credential"]
            )

        console.print(
            "Successfully authenticated to the tool repository.", style="bold green"
        )

    def _add_repository_to_poetry(self, repository, credentials):
        repository_handle = f"crewai-{repository['handle']}"

        add_repository_command = [
            "poetry",
            "source",
            "add",
            "--priority=explicit",
            repository_handle,
            repository["url"],
        ]
        add_repository_result = subprocess.run(
            add_repository_command, text=True, check=True
        )

        if add_repository_result.stderr:
            click.echo(add_repository_result.stderr, err=True)
            raise SystemExit

        add_repository_credentials_command = [
            "poetry",
            "config",
            f"http-basic.{repository_handle}",
            credentials["username"],
            credentials["password"],
        ]
        add_repository_credentials_result = subprocess.run(
            add_repository_credentials_command,
            capture_output=False,
            text=True,
            check=True,
        )

        if add_repository_credentials_result.stderr:
            click.echo(add_repository_credentials_result.stderr, err=True)
            raise SystemExit

    def _add_package(self, tool_details):
        tool_handle = tool_details["handle"]
        repository_handle = tool_details["repository"]["handle"]
        pypi_index_handle = f"crewai-{repository_handle}"

        add_package_command = [
            "poetry",
            "add",
            "--source",
            pypi_index_handle,
            tool_handle,
        ]
        add_package_result = subprocess.run(
            add_package_command, capture_output=False, text=True, check=True
        )

        if add_package_result.stderr:
            click.echo(add_package_result.stderr, err=True)
            raise SystemExit

    def _ensure_not_in_project(self):
        if os.path.isfile("./pyproject.toml"):
            console.print(
                "[bold red]Oops! It looks like you're inside a project.[/bold red]"
            )
            console.print(
                "You can't create a new tool while inside an existing project."
            )
            console.print(
                "[bold yellow]Tip:[/bold yellow] Navigate to a different directory and try again."
            )
            raise SystemExit
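The publish path above ships the sdist as a base64 data URI; here is a standalone sketch of just that encoding step (the file name is hypothetical):

```python
import base64

# Hypothetical sdist produced by `poetry build -f sdist`.
with open("dist/my_tool-0.1.0.tar.gz", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

encoded_file = f"data:application/x-gzip;base64,{encoded}"
```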
@@ -1,4 +1,18 @@
 import os
 import shutil
 import click
+import sys
+import importlib.metadata
+
+from crewai.cli.authentication.utils import TokenManager
+from functools import reduce
+from rich.console import Console
+from typing import Any, Dict, List
+
+if sys.version_info >= (3, 11):
+    import tomllib
+
+console = Console()
 
 
 def copy_template(src, dst, name, class_name, folder_name):
@@ -16,3 +30,176 @@ def copy_template(src, dst, name, class_name, folder_name):
        file.write(content)

    click.secho(f" - Created {dst}", fg="green")

(all lines below are additions)

# Drop the simple_toml_parser when we move to python3.11
def simple_toml_parser(content):
    result = {}
    current_section = result
    for line in content.split("\n"):
        line = line.strip()
        if line.startswith("[") and line.endswith("]"):
            # New section
            section = line[1:-1].split(".")
            current_section = result
            for key in section:
                current_section = current_section.setdefault(key, {})
        elif "=" in line:
            key, value = line.split("=", 1)
            key = key.strip()
            value = value.strip().strip('"')
            current_section[key] = value
    return result


def parse_toml(content):
    if sys.version_info >= (3, 11):
        return tomllib.loads(content)
    else:
        return simple_toml_parser(content)


def get_project_name(
    pyproject_path: str = "pyproject.toml", require: bool = False
) -> str | None:
    """Get the project name from the pyproject.toml file."""
    return _get_project_attribute(
        pyproject_path, ["tool", "poetry", "name"], require=require
    )


def get_project_version(
    pyproject_path: str = "pyproject.toml", require: bool = False
) -> str | None:
    """Get the project version from the pyproject.toml file."""
    return _get_project_attribute(
        pyproject_path, ["tool", "poetry", "version"], require=require
    )


def get_project_description(
    pyproject_path: str = "pyproject.toml", require: bool = False
) -> str | None:
    """Get the project description from the pyproject.toml file."""
    return _get_project_attribute(
        pyproject_path, ["tool", "poetry", "description"], require=require
    )


def _get_project_attribute(
    pyproject_path: str, keys: List[str], require: bool
) -> Any | None:
    """Get an attribute from the pyproject.toml file."""
    attribute = None

    try:
        with open(pyproject_path, "r") as f:
            pyproject_content = parse_toml(f.read())

        dependencies = (
            _get_nested_value(pyproject_content, ["tool", "poetry", "dependencies"])
            or {}
        )
        if "crewai" not in dependencies:
            raise Exception("crewai is not in the dependencies.")

        attribute = _get_nested_value(pyproject_content, keys)
    except FileNotFoundError:
        print(f"Error: {pyproject_path} not found.")
    except KeyError:
        print(f"Error: {pyproject_path} is not a valid pyproject.toml file.")
    except tomllib.TOMLDecodeError if sys.version_info >= (3, 11) else Exception as e:  # type: ignore
        print(
            f"Error: {pyproject_path} is not a valid TOML file."
            if sys.version_info >= (3, 11)
            else f"Error reading the pyproject.toml file: {e}"
        )
    except Exception as e:
        print(f"Error reading the pyproject.toml file: {e}")

    if require and not attribute:
        console.print(
            f"Unable to read '{'.'.join(keys)}' in the pyproject.toml file. Please verify that the file exists and contains the specified attribute.",
            style="bold red",
        )
        raise SystemExit

    return attribute


def _get_nested_value(data: Dict[str, Any], keys: List[str]) -> Any:
    return reduce(dict.__getitem__, keys, data)


def get_crewai_version() -> str:
    """Get the version number of CrewAI running the CLI"""
    return importlib.metadata.version("crewai")


def fetch_and_json_env_file(env_file_path: str = ".env") -> dict:
    """Fetch the environment variables from a .env file and return them as a dictionary."""
    try:
        # Read the .env file
        with open(env_file_path, "r") as f:
            env_content = f.read()

        # Parse the .env file content to a dictionary
        env_dict = {}
        for line in env_content.splitlines():
            if line.strip() and not line.strip().startswith("#"):
                key, value = line.split("=", 1)
                env_dict[key.strip()] = value.strip()

        return env_dict

    except FileNotFoundError:
        print(f"Error: {env_file_path} not found.")
    except Exception as e:
        print(f"Error reading the .env file: {e}")

    return {}


def get_auth_token() -> str:
    """Get the authentication token."""
    access_token = TokenManager().get_token()
    if not access_token:
        raise Exception()
    return access_token


def tree_copy(source, destination):
    """Copies the entire directory structure from the source to the destination."""
    for item in os.listdir(source):
        source_item = os.path.join(source, item)
        destination_item = os.path.join(destination, item)
        if os.path.isdir(source_item):
            shutil.copytree(source_item, destination_item)
        else:
            shutil.copy2(source_item, destination_item)


def tree_find_and_replace(directory, find, replace):
    """Recursively searches through a directory, replacing a target string in
    both file contents and filenames with a specified replacement string.
    """
    for path, dirs, files in os.walk(os.path.abspath(directory), topdown=False):
        for filename in files:
            filepath = os.path.join(path, filename)

            with open(filepath, "r") as file:
                contents = file.read()
            with open(filepath, "w") as file:
                file.write(contents.replace(find, replace))

            if find in filename:
                new_filename = filename.replace(find, replace)
                new_filepath = os.path.join(path, new_filename)
                os.rename(filepath, new_filepath)

        for dirname in dirs:
            if find in dirname:
                new_dirname = dirname.replace(find, replace)
                new_dirpath = os.path.join(path, new_dirname)
                old_dirpath = os.path.join(path, dirname)
                os.rename(old_dirpath, new_dirpath)
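Worth noting on `_get_nested_value` above: it is a `reduce` over `dict.__getitem__`, so a missing key raises `KeyError`, which `_get_project_attribute` already catches. A two-line illustration:

```python
from functools import reduce

data = {"tool": {"poetry": {"name": "demo"}}}
print(reduce(dict.__getitem__, ["tool", "poetry", "name"], data))  # -> demo
```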
@@ -22,6 +22,7 @@ from crewai.agent import Agent
 from crewai.agents.agent_builder.base_agent import BaseAgent
 from crewai.agents.cache import CacheHandler
 from crewai.crews.crew_output import CrewOutput
+from crewai.llm import LLM
 from crewai.memory.entity.entity_memory import EntityMemory
 from crewai.memory.long_term.long_term_memory import LongTermMemory
 from crewai.memory.short_term.short_term_memory import ShortTermMemory
@@ -110,6 +111,18 @@ class Crew(BaseModel):
         default=False,
         description="Whether the crew should use memory to store memories of it's execution",
     )
+    short_term_memory: Optional[InstanceOf[ShortTermMemory]] = Field(
+        default=None,
+        description="An Instance of the ShortTermMemory to be used by the Crew",
+    )
+    long_term_memory: Optional[InstanceOf[LongTermMemory]] = Field(
+        default=None,
+        description="An Instance of the LongTermMemory to be used by the Crew",
+    )
+    entity_memory: Optional[InstanceOf[EntityMemory]] = Field(
+        default=None,
+        description="An Instance of the EntityMemory to be used by the Crew",
+    )
     embedder: Optional[dict] = Field(
         default={"provider": "openai"},
         description="Configuration for the embedder to be used for the crew.",
@@ -199,12 +212,15 @@ class Crew(BaseModel):
         if self.output_log_file:
             self._file_handler = FileHandler(self.output_log_file)
         self._rpm_controller = RPMController(max_rpm=self.max_rpm, logger=self._logger)
-        self.function_calling_llm = (
-            self.function_calling_llm.model_name
-            if self.function_calling_llm is not None
-            and hasattr(self.function_calling_llm, "model_name")
-            else self.function_calling_llm
-        )
+        if self.function_calling_llm:
+            if isinstance(self.function_calling_llm, str):
+                self.function_calling_llm = LLM(model=self.function_calling_llm)
+            elif not isinstance(self.function_calling_llm, LLM):
+                self.function_calling_llm = LLM(
+                    model=getattr(self.function_calling_llm, "model_name", None)
+                    or getattr(self.function_calling_llm, "deployment_name", None)
+                    or str(self.function_calling_llm)
+                )
         self._telemetry = Telemetry()
         self._telemetry.set_tracer()
         return self
@@ -213,11 +229,19 @@ class Crew(BaseModel):
     def create_crew_memory(self) -> "Crew":
         """Set private attributes."""
         if self.memory:
-            self._long_term_memory = LongTermMemory()
-            self._short_term_memory = ShortTermMemory(
-                crew=self, embedder_config=self.embedder
-            )
-            self._entity_memory = EntityMemory(crew=self, embedder_config=self.embedder)
+            self._long_term_memory = (
+                self.long_term_memory if self.long_term_memory else LongTermMemory()
+            )
+            self._short_term_memory = (
+                self.short_term_memory
+                if self.short_term_memory
+                else ShortTermMemory(crew=self, embedder_config=self.embedder)
+            )
+            self._entity_memory = (
+                self.entity_memory
+                if self.entity_memory
+                else EntityMemory(crew=self, embedder_config=self.embedder)
+            )
         return self
 
     @model_validator(mode="after")
@@ -514,10 +538,6 @@ class Crew(BaseModel):
-        tasks = [
-            asyncio.create_task(run_crew(crew_copies[i], inputs[i]))
-            for i in range(len(inputs))
-        ]
         tasks = [
             asyncio.create_task(run_crew(crew_copies[i], inputs[i]))
             for i in range(len(inputs))
         ]
 
         results = await asyncio.gather(*tasks)
 
@@ -592,9 +612,9 @@ class Crew(BaseModel):
             manager.tools = self.manager_agent.get_delegation_tools(self.agents)
         else:
             self.manager_llm = (
-                self.manager_llm.model_name
-                if hasattr(self.manager_llm, "model_name")
-                else self.manager_llm
+                getattr(self.manager_llm, "model_name", None)
+                or getattr(self.manager_llm, "deployment_name", None)
+                or self.manager_llm
             )
             manager = Agent(
                 role=i18n.retrieve("hierarchical_manager_agent", "role"),
@@ -605,6 +625,7 @@ class Crew(BaseModel):
                 verbose=self.verbose,
             )
         self.manager_agent = manager
+        manager.crew = self
 
     def _execute_tasks(
         self,
@@ -936,14 +957,17 @@ class Crew(BaseModel):
     def test(
         self,
         n_iterations: int,
-        openai_model_name: str,
+        openai_model_name: Optional[str] = None,
         inputs: Optional[Dict[str, Any]] = None,
     ) -> None:
-        """Test and evaluate the Crew with the given inputs for n iterations."""
+        """Test and evaluate the Crew with the given inputs for n iterations concurrently using concurrent.futures."""
         self._test_execution_span = self._telemetry.test_execution_span(
-            self, n_iterations, inputs, openai_model_name
-        )
-        evaluator = CrewEvaluator(self, openai_model_name)
+            self,
+            n_iterations,
+            inputs,
+            openai_model_name,  # type: ignore[arg-type]
+        )  # type: ignore[arg-type]
+        evaluator = CrewEvaluator(self, openai_model_name)  # type: ignore[arg-type]
 
         for i in range(1, n_iterations + 1):
             evaluator.set_iteration(i)
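The new `short_term_memory` / `long_term_memory` / `entity_memory` fields let callers inject preconfigured memory instances instead of the defaults built in `create_crew_memory`. A sketch with a throwaway agent and task; the embedder config is an assumption:

```python
from crewai import Agent, Crew, Process, Task
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory

writer = Agent(role="Writer", goal="Write a short note", backstory="A concise writer.")
note = Task(description="Write one sentence.", expected_output="One sentence.", agent=writer)

crew = Crew(
    agents=[writer],
    tasks=[note],
    process=Process.sequential,
    memory=True,
    # New in this diff: inject preconfigured memory instead of the defaults.
    long_term_memory=LongTermMemory(),
    short_term_memory=ShortTermMemory(embedder_config={"provider": "openai"}),
)
```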
@@ -41,6 +41,14 @@ class CrewOutput(BaseModel):
             output_dict.update(self.pydantic.model_dump())
         return output_dict
 
+    def __getitem__(self, key):
+        if self.pydantic and hasattr(self.pydantic, key):
+            return getattr(self.pydantic, key)
+        elif self.json_dict and key in self.json_dict:
+            return self.json_dict[key]
+        else:
+            raise KeyError(f"Key '{key}' not found in CrewOutput.")
+
     def __str__(self):
         if self.pydantic:
             return str(self.pydantic)
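The added `__getitem__` makes kickoff results indexable, checking the pydantic payload first and the JSON dict second. A sketch; the constructor fields shown are illustrative, since a `CrewOutput` is normally produced by `Crew.kickoff()`:

```python
from crewai.crews.crew_output import CrewOutput

# Illustrative field values only; kickoff() builds this object in practice.
output = CrewOutput(raw="", json_dict={"title": "CrewAI"}, tasks_output=[], token_usage={})
print(output["title"])  # -> CrewAI
# output["missing"] raises KeyError, per the implementation above.
```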
src/crewai/flow/__init__.py (new file, 3 lines)
@@ -0,0 +1,3 @@
from crewai.flow.flow import Flow

__all__ = ["Flow"]
src/crewai/flow/assets/crewai_flow_visual_template.html (new file, 93 lines)
@@ -0,0 +1,93 @@
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>{{ title }}</title>
    <script
      src="https://cdnjs.cloudflare.com/ajax/libs/vis-network/9.1.2/dist/vis-network.min.js"
      integrity="sha512-LnvoEWDFrqGHlHmDD2101OrLcbsfkrzoSpvtSQtxK3RMnRV0eOkhhBN2dXHKRrUU8p2DGRTk35n4O8nWSVe1mQ=="
      crossorigin="anonymous"
      referrerpolicy="no-referrer"
    ></script>
    <link
      rel="stylesheet"
      href="https://cdnjs.cloudflare.com/ajax/libs/vis-network/9.1.2/dist/dist/vis-network.min.css"
      integrity="sha512-WgxfT5LWjfszlPHXRmBWHkV2eceiWTOBvrKCNbdgDYTHrT2AeLCGbF4sZlZw3UMN3WtL0tGUoIAKsu8mllg/XA=="
      crossorigin="anonymous"
      referrerpolicy="no-referrer"
    />
    <style type="text/css">
      body {
        font-family: verdana;
        margin: 0;
        padding: 0;
      }
      .container {
        display: flex;
        flex-direction: column;
        height: 100vh;
      }
      #mynetwork {
        flex-grow: 1;
        width: 100%;
        height: 750px;
        background-color: #ffffff;
      }
      .card {
        border: none;
      }
      .legend-container {
        display: flex;
        align-items: center;
        justify-content: center;
        padding: 10px;
        background-color: #f8f9fa;
        position: fixed; /* Make the legend fixed */
        bottom: 0; /* Position it at the bottom */
        width: 100%; /* Make it span the full width */
      }
      .legend-item {
        display: flex;
        align-items: center;
        margin-right: 20px;
      }
      .legend-color-box {
        width: 20px;
        height: 20px;
        margin-right: 5px;
      }
      .logo {
        height: 50px;
        margin-right: 20px;
      }
      .legend-dashed {
        border-bottom: 2px dashed #666666;
        width: 20px;
        height: 0;
        margin-right: 5px;
      }
      .legend-solid {
        border-bottom: 2px solid #666666;
        width: 20px;
        height: 0;
        margin-right: 5px;
      }
    </style>
  </head>
  <body>
    <div class="container">
      <div class="card" style="width: 100%">
        <div id="mynetwork" class="card-body"></div>
      </div>
      <div class="legend-container">
        <img
          src="data:image/svg+xml;base64,{{ logo_svg_base64 }}"
          alt="CrewAI logo"
          class="logo"
        />
        <!-- LEGEND_ITEMS_PLACEHOLDER -->
      </div>
    </div>
    {{ network_content }}
  </body>
</html>
src/crewai/flow/assets/crewai_logo.svg (new file, 12 lines; diff suppressed because one or more lines are too long — SVG, 27 KiB)
src/crewai/flow/config.py (new file, 59 lines)
@@ -0,0 +1,59 @@
DARK_GRAY = "#333333"
CREWAI_ORANGE = "#FF5A50"
GRAY = "#666666"
WHITE = "#FFFFFF"
BLACK = "#000000"

COLORS = {
    "bg": WHITE,
    "start": CREWAI_ORANGE,
    "method": DARK_GRAY,
    "router": DARK_GRAY,
    "router_border": CREWAI_ORANGE,
    "edge": GRAY,
    "router_edge": CREWAI_ORANGE,
    "text": WHITE,
}

NODE_STYLES = {
    "start": {
        "color": CREWAI_ORANGE,
        "shape": "box",
        "font": {"color": WHITE},
        "margin": {"top": 10, "bottom": 8, "left": 10, "right": 10},
    },
    "method": {
        "color": DARK_GRAY,
        "shape": "box",
        "font": {"color": WHITE},
        "margin": {"top": 10, "bottom": 8, "left": 10, "right": 10},
    },
    "router": {
        "color": {
            "background": DARK_GRAY,
            "border": CREWAI_ORANGE,
            "highlight": {
                "border": CREWAI_ORANGE,
                "background": DARK_GRAY,
            },
        },
        "shape": "box",
        "font": {"color": WHITE},
        "borderWidth": 3,
        "borderWidthSelected": 4,
        "shapeProperties": {"borderDashes": [5, 5]},
        "margin": {"top": 10, "bottom": 8, "left": 10, "right": 10},
    },
    "crew": {
        "color": {
            "background": WHITE,
            "border": CREWAI_ORANGE,
        },
        "shape": "box",
        "font": {"color": BLACK},
        "borderWidth": 3,
        "borderWidthSelected": 4,
        "shapeProperties": {"borderDashes": False},
        "margin": {"top": 10, "bottom": 8, "left": 10, "right": 10},
    },
}
src/crewai/flow/flow.py (new file, 286 lines)
@@ -0,0 +1,286 @@
# flow.py

import asyncio
import inspect
from typing import Any, Callable, Dict, Generic, List, Set, Type, TypeVar, Union

from pydantic import BaseModel

from crewai.flow.flow_visualizer import plot_flow
from crewai.flow.utils import get_possible_return_constants
from crewai.telemetry import Telemetry

T = TypeVar("T", bound=Union[BaseModel, Dict[str, Any]])


def start(condition=None):
    def decorator(func):
        func.__is_start_method__ = True
        if condition is not None:
            if isinstance(condition, str):
                func.__trigger_methods__ = [condition]
                func.__condition_type__ = "OR"
            elif (
                isinstance(condition, dict)
                and "type" in condition
                and "methods" in condition
            ):
                func.__trigger_methods__ = condition["methods"]
                func.__condition_type__ = condition["type"]
            elif callable(condition) and hasattr(condition, "__name__"):
                func.__trigger_methods__ = [condition.__name__]
                func.__condition_type__ = "OR"
            else:
                raise ValueError(
                    "Condition must be a method, string, or a result of or_() or and_()"
                )
        return func

    return decorator


def listen(condition):
    def decorator(func):
        if isinstance(condition, str):
            func.__trigger_methods__ = [condition]
            func.__condition_type__ = "OR"
        elif (
            isinstance(condition, dict)
            and "type" in condition
            and "methods" in condition
        ):
            func.__trigger_methods__ = condition["methods"]
            func.__condition_type__ = condition["type"]
        elif callable(condition) and hasattr(condition, "__name__"):
            func.__trigger_methods__ = [condition.__name__]
            func.__condition_type__ = "OR"
        else:
            raise ValueError(
                "Condition must be a method, string, or a result of or_() or and_()"
            )
        return func

    return decorator


def router(method):
    def decorator(func):
        func.__is_router__ = True
        func.__router_for__ = method.__name__
        return func

    return decorator


def or_(*conditions):
    methods = []
    for condition in conditions:
        if isinstance(condition, dict) and "methods" in condition:
            methods.extend(condition["methods"])
        elif isinstance(condition, str):
            methods.append(condition)
        elif callable(condition):
            methods.append(getattr(condition, "__name__", repr(condition)))
        else:
            raise ValueError("Invalid condition in or_()")
    return {"type": "OR", "methods": methods}


def and_(*conditions):
    methods = []
    for condition in conditions:
        if isinstance(condition, dict) and "methods" in condition:
            methods.extend(condition["methods"])
        elif isinstance(condition, str):
            methods.append(condition)
        elif callable(condition):
            methods.append(getattr(condition, "__name__", repr(condition)))
        else:
            raise ValueError("Invalid condition in and_()")
    return {"type": "AND", "methods": methods}


class FlowMeta(type):
    def __new__(mcs, name, bases, dct):
        cls = super().__new__(mcs, name, bases, dct)

        start_methods = []
        listeners = {}
        routers = {}
        router_paths = {}

        for attr_name, attr_value in dct.items():
            if hasattr(attr_value, "__is_start_method__"):
                start_methods.append(attr_name)
                if hasattr(attr_value, "__trigger_methods__"):
                    methods = attr_value.__trigger_methods__
                    condition_type = getattr(attr_value, "__condition_type__", "OR")
                    listeners[attr_name] = (condition_type, methods)
            elif hasattr(attr_value, "__trigger_methods__"):
                methods = attr_value.__trigger_methods__
                condition_type = getattr(attr_value, "__condition_type__", "OR")
                listeners[attr_name] = (condition_type, methods)
            elif hasattr(attr_value, "__is_router__"):
                routers[attr_value.__router_for__] = attr_name
                possible_returns = get_possible_return_constants(attr_value)
                if possible_returns:
                    router_paths[attr_name] = possible_returns

                # Register router as a listener to its triggering method
                trigger_method_name = attr_value.__router_for__
                methods = [trigger_method_name]
                condition_type = "OR"
                listeners[attr_name] = (condition_type, methods)

        setattr(cls, "_start_methods", start_methods)
        setattr(cls, "_listeners", listeners)
        setattr(cls, "_routers", routers)
        setattr(cls, "_router_paths", router_paths)

        return cls


class Flow(Generic[T], metaclass=FlowMeta):
    _telemetry = Telemetry()

    _start_methods: List[str] = []
    _listeners: Dict[str, tuple[str, List[str]]] = {}
    _routers: Dict[str, str] = {}
    _router_paths: Dict[str, List[str]] = {}
    initial_state: Union[Type[T], T, None] = None

    def __class_getitem__(cls, item: Type[T]) -> Type["Flow"]:
        class _FlowGeneric(cls):
            _initial_state_T: Type[T] = item

        _FlowGeneric.__name__ = f"{cls.__name__}[{item.__name__}]"
        return _FlowGeneric

    def __init__(self) -> None:
        self._methods: Dict[str, Callable] = {}
        self._state: T = self._create_initial_state()
        self._completed_methods: Set[str] = set()
        self._pending_and_listeners: Dict[str, Set[str]] = {}
        self._method_outputs: List[Any] = []  # List to store all method outputs

        self._telemetry.flow_creation_span(self.__class__.__name__)

        for method_name in dir(self):
            if callable(getattr(self, method_name)) and not method_name.startswith(
                "__"
            ):
                self._methods[method_name] = getattr(self, method_name)

    def _create_initial_state(self) -> T:
        if self.initial_state is None and hasattr(self, "_initial_state_T"):
            return self._initial_state_T()  # type: ignore
        if self.initial_state is None:
            return {}  # type: ignore
        elif isinstance(self.initial_state, type):
            return self.initial_state()
        else:
            return self.initial_state

    @property
    def state(self) -> T:
        return self._state

    @property
    def method_outputs(self) -> List[Any]:
        """Returns the list of all outputs from executed methods."""
        return self._method_outputs

    async def kickoff(self) -> Any:
        if not self._start_methods:
            raise ValueError("No start method defined")

        self._telemetry.flow_execution_span(
            self.__class__.__name__, list(self._methods.keys())
        )

        # Create tasks for all start methods
        tasks = [
            self._execute_start_method(start_method)
            for start_method in self._start_methods
        ]

        # Run all start methods concurrently
        await asyncio.gather(*tasks)

        # Return the final output (from the last executed method)
        if self._method_outputs:
            return self._method_outputs[-1]
        else:
            return None  # Or raise an exception if no methods were executed

    async def _execute_start_method(self, start_method: str) -> None:
        result = await self._execute_method(self._methods[start_method])
        await self._execute_listeners(start_method, result)

    async def _execute_method(self, method: Callable, *args: Any, **kwargs: Any) -> Any:
        result = (
            await method(*args, **kwargs)
            if asyncio.iscoroutinefunction(method)
            else method(*args, **kwargs)
        )
        self._method_outputs.append(result)  # Store the output
        return result

    async def _execute_listeners(self, trigger_method: str, result: Any) -> None:
        listener_tasks = []

        if trigger_method in self._routers:
            router_method = self._methods[self._routers[trigger_method]]
            path = await self._execute_method(router_method)
            # Use the path as the new trigger method
            trigger_method = path

        for listener, (condition_type, methods) in self._listeners.items():
            if condition_type == "OR":
                if trigger_method in methods:
                    listener_tasks.append(
                        self._execute_single_listener(listener, result)
                    )
            elif condition_type == "AND":
                if listener not in self._pending_and_listeners:
                    self._pending_and_listeners[listener] = set()
                self._pending_and_listeners[listener].add(trigger_method)
                if set(methods) == self._pending_and_listeners[listener]:
                    listener_tasks.append(
                        self._execute_single_listener(listener, result)
                    )
                    del self._pending_and_listeners[listener]

        # Run all listener tasks concurrently and wait for them to complete
        await asyncio.gather(*listener_tasks)

    async def _execute_single_listener(self, listener: str, result: Any) -> None:
        try:
            method = self._methods[listener]
            sig = inspect.signature(method)
            params = list(sig.parameters.values())

            # Exclude 'self' parameter
            method_params = [p for p in params if p.name != "self"]

            if method_params:
                # If listener expects parameters, pass the result
                listener_result = await self._execute_method(method, result)
            else:
                # If listener does not expect parameters, call without arguments
                listener_result = await self._execute_method(method)

            # Execute listeners of this listener
            await self._execute_listeners(listener, listener_result)
        except Exception as e:
            print(f"[Flow._execute_single_listener] Error in method {listener}: {e}")
|
||||
import traceback
|
||||
|
||||
traceback.print_exc()
|
||||
|
||||
def plot(self, filename: str = "crewai_flow") -> None:
|
||||
self._telemetry.flow_plotting_span(
|
||||
self.__class__.__name__, list(self._methods.keys())
|
||||
)
|
||||
|
||||
plot_flow(self, filename)
|
||||
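To make the event wiring above concrete, here is a minimal usage sketch. It assumes the `@start()` and `@listen()` decorators (defined earlier in `flow.py`, outside this excerpt) set the `__is_start_method__` and `__trigger_methods__` attributes that `FlowMeta` inspects; the class and method names are illustrative.

```python
import asyncio

from crewai.flow.flow import Flow, listen, start


class GreetingFlow(Flow):
    @start()
    def say_hello(self):
        # Runs first; its return value is handed to listeners.
        return "hello"

    @listen(say_hello)
    def shout(self, greeting):
        # Triggered once say_hello completes.
        return greeting.upper()


# kickoff() is async in this version, so it has to be awaited or run.
print(asyncio.run(GreetingFlow().kickoff()))  # -> "HELLO"
```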
src/crewai/flow/flow_visualizer.py (new file, 104 lines)
@@ -0,0 +1,104 @@
# flow_visualizer.py

import os

from pyvis.network import Network

from crewai.flow.config import COLORS, NODE_STYLES
from crewai.flow.html_template_handler import HTMLTemplateHandler
from crewai.flow.legend_generator import generate_legend_items_html, get_legend_items
from crewai.flow.utils import calculate_node_levels
from crewai.flow.visualization_utils import (
    add_edges,
    add_nodes_to_network,
    compute_positions,
)


class FlowPlot:
    def __init__(self, flow):
        self.flow = flow
        self.colors = COLORS
        self.node_styles = NODE_STYLES

    def plot(self, filename):
        net = Network(
            directed=True,
            height="750px",
            width="100%",
            bgcolor=self.colors["bg"],
            layout=None,
        )

        # Set options to disable physics
        net.set_options(
            """
            var options = {
                "nodes": {
                    "font": {
                        "multi": "html"
                    }
                },
                "physics": {
                    "enabled": false
                }
            }
            """
        )

        # Calculate levels for nodes
        node_levels = calculate_node_levels(self.flow)

        # Compute positions
        node_positions = compute_positions(self.flow, node_levels)

        # Add nodes to the network
        add_nodes_to_network(net, self.flow, node_positions, self.node_styles)

        # Add edges to the network
        add_edges(net, self.flow, node_positions, self.colors)

        network_html = net.generate_html()
        final_html_content = self._generate_final_html(network_html)

        # Save the final HTML content to the file
        with open(f"{filename}.html", "w", encoding="utf-8") as f:
            f.write(final_html_content)
        print(f"Plot saved as {filename}.html")

        self._cleanup_pyvis_lib()

    def _generate_final_html(self, network_html):
        # Extract just the body content from the generated HTML
        current_dir = os.path.dirname(__file__)
        template_path = os.path.join(
            current_dir, "assets", "crewai_flow_visual_template.html"
        )
        logo_path = os.path.join(current_dir, "assets", "crewai_logo.svg")

        html_handler = HTMLTemplateHandler(template_path, logo_path)
        network_body = html_handler.extract_body_content(network_html)

        # Generate the legend items HTML
        legend_items = get_legend_items(self.colors)
        legend_items_html = generate_legend_items_html(legend_items)
        final_html_content = html_handler.generate_final_html(
            network_body, legend_items_html
        )
        return final_html_content

    def _cleanup_pyvis_lib(self):
        # Clean up the lib folder generated by pyvis
        lib_folder = os.path.join(os.getcwd(), "lib")
        try:
            if os.path.exists(lib_folder) and os.path.isdir(lib_folder):
                import shutil

                shutil.rmtree(lib_folder)
        except Exception as e:
            print(f"Error cleaning up {lib_folder}: {e}")


def plot_flow(flow, filename="flow_plot"):
    visualizer = FlowPlot(flow)
    visualizer.plot(filename)
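A quick sketch of rendering, reusing the hypothetical `GreetingFlow` from the earlier example:

```python
from crewai.flow.flow_visualizer import plot_flow

flow = GreetingFlow()             # hypothetical flow from the sketch above
plot_flow(flow, "greeting_flow")  # writes greeting_flow.html, then deletes pyvis's lib/ folder
```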
src/crewai/flow/html_template_handler.py (new file, 65 lines)
@@ -0,0 +1,65 @@
import base64
import re


class HTMLTemplateHandler:
    def __init__(self, template_path, logo_path):
        self.template_path = template_path
        self.logo_path = logo_path

    def read_template(self):
        with open(self.template_path, "r", encoding="utf-8") as f:
            return f.read()

    def encode_logo(self):
        with open(self.logo_path, "rb") as logo_file:
            logo_svg_data = logo_file.read()
            return base64.b64encode(logo_svg_data).decode("utf-8")

    def extract_body_content(self, html):
        match = re.search("<body.*?>(.*?)</body>", html, re.DOTALL)
        return match.group(1) if match else ""

    def generate_legend_items_html(self, legend_items):
        legend_items_html = ""
        for item in legend_items:
            if "border" in item:
                legend_items_html += f"""
                <div class="legend-item">
                    <div class="legend-color-box" style="background-color: {item['color']}; border: 2px dashed {item['border']};"></div>
                    <div>{item['label']}</div>
                </div>
                """
            elif item.get("dashed") is not None:
                style = "dashed" if item["dashed"] else "solid"
                legend_items_html += f"""
                <div class="legend-item">
                    <div class="legend-{style}" style="border-bottom: 2px {style} {item['color']};"></div>
                    <div>{item['label']}</div>
                </div>
                """
            else:
                legend_items_html += f"""
                <div class="legend-item">
                    <div class="legend-color-box" style="background-color: {item['color']};"></div>
                    <div>{item['label']}</div>
                </div>
                """
        return legend_items_html

    def generate_final_html(self, network_body, legend_items_html, title="Flow Plot"):
        html_template = self.read_template()
        logo_svg_base64 = self.encode_logo()

        final_html_content = html_template.replace("{{ title }}", title)
        final_html_content = final_html_content.replace(
            "{{ network_content }}", network_body
        )
        final_html_content = final_html_content.replace(
            "{{ logo_svg_base64 }}", logo_svg_base64
        )
        final_html_content = final_html_content.replace(
            "<!-- LEGEND_ITEMS_PLACEHOLDER -->", legend_items_html
        )

        return final_html_content
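The constructor only stores the two paths (nothing is opened until `read_template` or `encode_logo` runs), so the regex helper can be tried in isolation. The file names below are placeholders:

```python
handler = HTMLTemplateHandler("template.html", "logo.svg")  # paths are not opened here
print(handler.extract_body_content("<html><body><p>hi</p></body></html>"))  # -> <p>hi</p>
```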
src/crewai/flow/legend_generator.py (new file, 53 lines)
@@ -0,0 +1,53 @@
def get_legend_items(colors):
    return [
        {"label": "Start Method", "color": colors["start"]},
        {"label": "Method", "color": colors["method"]},
        {
            "label": "Crew Method",
            "color": colors["bg"],
            "border": colors["start"],
            "dashed": False,
        },
        {
            "label": "Router",
            "color": colors["router"],
            "border": colors["router_border"],
            "dashed": True,
        },
        {"label": "Trigger", "color": colors["edge"], "dashed": False},
        {"label": "AND Trigger", "color": colors["edge"], "dashed": True},
        {
            "label": "Router Trigger",
            "color": colors["router_edge"],
            "dashed": True,
        },
    ]


def generate_legend_items_html(legend_items):
    legend_items_html = ""
    for item in legend_items:
        if "border" in item:
            style = "dashed" if item["dashed"] else "solid"
            legend_items_html += f"""
            <div class="legend-item">
                <div class="legend-color-box" style="background-color: {item['color']}; border: 2px {style} {item['border']}; border-radius: 5px;"></div>
                <div>{item['label']}</div>
            </div>
            """
        elif item.get("dashed") is not None:
            style = "dashed" if item["dashed"] else "solid"
            legend_items_html += f"""
            <div class="legend-item">
                <div class="legend-{style}" style="border-bottom: 2px {style} {item['color']}; border-radius: 5px;"></div>
                <div>{item['label']}</div>
            </div>
            """
        else:
            legend_items_html += f"""
            <div class="legend-item">
                <div class="legend-color-box" style="background-color: {item['color']}; border-radius: 5px;"></div>
                <div>{item['label']}</div>
            </div>
            """
    return legend_items_html
src/crewai/flow/utils.py (new file, 188 lines)
@@ -0,0 +1,188 @@
import ast
import inspect
import textwrap


def get_possible_return_constants(function):
    try:
        source = inspect.getsource(function)
    except OSError:
        # Can't get the source code
        return None
    except Exception as e:
        print(f"Error retrieving source code for function {function.__name__}: {e}")
        return None

    try:
        # Remove leading indentation
        source = textwrap.dedent(source)
        # Parse the source code into an AST
        code_ast = ast.parse(source)
    except IndentationError as e:
        print(f"IndentationError while parsing source code of {function.__name__}: {e}")
        print(f"Source code:\n{source}")
        return None
    except SyntaxError as e:
        print(f"SyntaxError while parsing source code of {function.__name__}: {e}")
        print(f"Source code:\n{source}")
        return None
    except Exception as e:
        print(f"Unexpected error while parsing source code of {function.__name__}: {e}")
        print(f"Source code:\n{source}")
        return None

    return_values = []

    class ReturnVisitor(ast.NodeVisitor):
        def visit_Return(self, node):
            # Check if the return value is a constant (Python 3.8+)
            if isinstance(node.value, ast.Constant):
                return_values.append(node.value.value)

    ReturnVisitor().visit(code_ast)
    return return_values


def calculate_node_levels(flow):
    levels = {}
    queue = []
    visited = set()
    pending_and_listeners = {}

    # Put all start methods at level 0
    for method_name, method in flow._methods.items():
        if hasattr(method, "__is_start_method__"):
            levels[method_name] = 0
            queue.append(method_name)

    # Breadth-first traversal to assign levels
    while queue:
        current = queue.pop(0)
        current_level = levels[current]
        visited.add(current)

        for listener_name, (
            condition_type,
            trigger_methods,
        ) in flow._listeners.items():
            if condition_type == "OR":
                if current in trigger_methods:
                    if (
                        listener_name not in levels
                        or levels[listener_name] > current_level + 1
                    ):
                        levels[listener_name] = current_level + 1
                        if listener_name not in visited:
                            queue.append(listener_name)
            elif condition_type == "AND":
                if listener_name not in pending_and_listeners:
                    pending_and_listeners[listener_name] = set()
                if current in trigger_methods:
                    pending_and_listeners[listener_name].add(current)
                if set(trigger_methods) == pending_and_listeners[listener_name]:
                    if (
                        listener_name not in levels
                        or levels[listener_name] > current_level + 1
                    ):
                        levels[listener_name] = current_level + 1
                        if listener_name not in visited:
                            queue.append(listener_name)

        # Handle router connections
        if current in flow._routers.values():
            router_method_name = current
            paths = flow._router_paths.get(router_method_name, [])
            for path in paths:
                for listener_name, (
                    condition_type,
                    trigger_methods,
                ) in flow._listeners.items():
                    if path in trigger_methods:
                        if (
                            listener_name not in levels
                            or levels[listener_name] > current_level + 1
                        ):
                            levels[listener_name] = current_level + 1
                            if listener_name not in visited:
                                queue.append(listener_name)
    return levels


def count_outgoing_edges(flow):
    counts = {}
    for method_name in flow._methods:
        counts[method_name] = 0
    for method_name in flow._listeners:
        _, trigger_methods = flow._listeners[method_name]
        for trigger in trigger_methods:
            if trigger in flow._methods:
                counts[trigger] += 1
    return counts


def build_ancestor_dict(flow):
    ancestors = {node: set() for node in flow._methods}
    visited = set()
    for node in flow._methods:
        if node not in visited:
            dfs_ancestors(node, ancestors, visited, flow)
    return ancestors


def dfs_ancestors(node, ancestors, visited, flow):
    if node in visited:
        return
    visited.add(node)

    # Handle regular listeners
    for listener_name, (_, trigger_methods) in flow._listeners.items():
        if node in trigger_methods:
            ancestors[listener_name].add(node)
            ancestors[listener_name].update(ancestors[node])
            dfs_ancestors(listener_name, ancestors, visited, flow)

    # Handle router methods separately
    if node in flow._routers.values():
        router_method_name = node
        paths = flow._router_paths.get(router_method_name, [])
        for path in paths:
            for listener_name, (_, trigger_methods) in flow._listeners.items():
                if path in trigger_methods:
                    # Only propagate the ancestors of the router method, not the router method itself
                    ancestors[listener_name].update(ancestors[node])
                    dfs_ancestors(listener_name, ancestors, visited, flow)


def is_ancestor(node, ancestor_candidate, ancestors):
    return ancestor_candidate in ancestors.get(node, set())


def build_parent_children_dict(flow):
    parent_children = {}

    # Map listeners to their trigger methods
    for listener_name, (_, trigger_methods) in flow._listeners.items():
        for trigger in trigger_methods:
            if trigger not in parent_children:
                parent_children[trigger] = []
            if listener_name not in parent_children[trigger]:
                parent_children[trigger].append(listener_name)

    # Map router methods to their paths and to listeners
    for router_method_name, paths in flow._router_paths.items():
        for path in paths:
            # Map the router method to the listeners of each path
            for listener_name, (_, trigger_methods) in flow._listeners.items():
                if path in trigger_methods:
                    if router_method_name not in parent_children:
                        parent_children[router_method_name] = []
                    if listener_name not in parent_children[router_method_name]:
                        parent_children[router_method_name].append(listener_name)

    return parent_children


def get_child_index(parent, child, parent_children):
    children = parent_children.get(parent, [])
    children.sort()
    return children.index(child)
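To illustrate the BFS level assignment, here is a small sketch that fakes only the attributes `calculate_node_levels` reads, using `SimpleNamespace` in place of a real `Flow`; the method names are made up:

```python
from types import SimpleNamespace

from crewai.flow.utils import calculate_node_levels


def start_method():
    pass


start_method.__is_start_method__ = True  # what the @start decorator would set

stub_flow = SimpleNamespace(
    _methods={"begin": start_method, "follow_up": lambda: None},
    _listeners={"follow_up": ("OR", ["begin"])},  # follow_up listens to begin
    _routers={},
    _router_paths={},
)
print(calculate_node_levels(stub_flow))  # -> {'begin': 0, 'follow_up': 1}
```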
src/crewai/flow/visualization_utils.py (new file, 178 lines)
@@ -0,0 +1,178 @@
import ast
import inspect

from .utils import (
    build_ancestor_dict,
    build_parent_children_dict,
    get_child_index,
    is_ancestor,
)


def method_calls_crew(method):
    """Check if the method calls `.crew()`."""
    try:
        source = inspect.getsource(method)
        source = inspect.cleandoc(source)
        tree = ast.parse(source)
    except Exception as e:
        print(f"Could not parse method {method.__name__}: {e}")
        return False

    class CrewCallVisitor(ast.NodeVisitor):
        def __init__(self):
            self.found = False

        def visit_Call(self, node):
            if isinstance(node.func, ast.Attribute):
                if node.func.attr == "crew":
                    self.found = True
            self.generic_visit(node)

    visitor = CrewCallVisitor()
    visitor.visit(tree)
    return visitor.found


def add_nodes_to_network(net, flow, node_positions, node_styles):
    def human_friendly_label(method_name):
        return method_name.replace("_", " ").title()

    for method_name, (x, y) in node_positions.items():
        method = flow._methods.get(method_name)
        if hasattr(method, "__is_start_method__"):
            node_style = node_styles["start"]
        elif hasattr(method, "__is_router__"):
            node_style = node_styles["router"]
        elif method_calls_crew(method):
            node_style = node_styles["crew"]
        else:
            node_style = node_styles["method"]

        node_style = node_style.copy()
        label = human_friendly_label(method_name)

        node_style.update(
            {
                "label": label,
                "shape": "box",
                "font": {
                    "multi": "html",
                    "color": node_style.get("font", {}).get("color", "#FFFFFF"),
                },
            }
        )

        net.add_node(
            method_name,
            x=x,
            y=y,
            fixed=True,
            physics=False,
            **node_style,
        )


def compute_positions(flow, node_levels, y_spacing=150, x_spacing=150):
    level_nodes = {}
    node_positions = {}

    for method_name, level in node_levels.items():
        level_nodes.setdefault(level, []).append(method_name)

    for level, nodes in level_nodes.items():
        x_offset = -(len(nodes) - 1) * x_spacing / 2  # Center nodes horizontally
        for i, method_name in enumerate(nodes):
            x = x_offset + i * x_spacing
            y = level * y_spacing
            node_positions[method_name] = (x, y)

    return node_positions


def add_edges(net, flow, node_positions, colors):
    ancestors = build_ancestor_dict(flow)
    parent_children = build_parent_children_dict(flow)

    for method_name in flow._listeners:
        condition_type, trigger_methods = flow._listeners[method_name]
        is_and_condition = condition_type == "AND"

        for trigger in trigger_methods:
            if trigger in flow._methods or trigger in flow._routers.values():
                is_router_edge = any(
                    trigger in paths for paths in flow._router_paths.values()
                )
                edge_color = colors["router_edge"] if is_router_edge else colors["edge"]

                is_cycle_edge = is_ancestor(trigger, method_name, ancestors)
                parent_has_multiple_children = len(parent_children.get(trigger, [])) > 1
                needs_curvature = is_cycle_edge or parent_has_multiple_children

                if needs_curvature:
                    source_pos = node_positions.get(trigger)
                    target_pos = node_positions.get(method_name)

                    if source_pos and target_pos:
                        dx = target_pos[0] - source_pos[0]
                        smooth_type = "curvedCCW" if dx <= 0 else "curvedCW"
                        index = get_child_index(trigger, method_name, parent_children)
                        edge_smooth = {
                            "type": smooth_type,
                            "roundness": 0.2 + (0.1 * index),
                        }
                    else:
                        edge_smooth = {"type": "cubicBezier"}
                else:
                    edge_smooth = False

                edge_style = {
                    "color": edge_color,
                    "width": 2,
                    "arrows": "to",
                    "dashes": True if is_router_edge or is_and_condition else False,
                    "smooth": edge_smooth,
                }

                net.add_edge(trigger, method_name, **edge_style)

    for router_method_name, paths in flow._router_paths.items():
        for path in paths:
            for listener_name, (
                condition_type,
                trigger_methods,
            ) in flow._listeners.items():
                if path in trigger_methods:
                    # Check the router edge itself for cycles (the original used
                    # `trigger`/`method_name` left over from the loop above)
                    is_cycle_edge = is_ancestor(
                        router_method_name, listener_name, ancestors
                    )
                    parent_has_multiple_children = (
                        len(parent_children.get(router_method_name, [])) > 1
                    )
                    needs_curvature = is_cycle_edge or parent_has_multiple_children

                    if needs_curvature:
                        source_pos = node_positions.get(router_method_name)
                        target_pos = node_positions.get(listener_name)

                        if source_pos and target_pos:
                            dx = target_pos[0] - source_pos[0]
                            smooth_type = "curvedCCW" if dx <= 0 else "curvedCW"
                            index = get_child_index(
                                router_method_name, listener_name, parent_children
                            )
                            edge_smooth = {
                                "type": smooth_type,
                                "roundness": 0.2 + (0.1 * index),
                            }
                        else:
                            edge_smooth = {"type": "cubicBezier"}
                    else:
                        edge_smooth = False

                    edge_style = {
                        "color": colors["router_edge"],
                        "width": 2,
                        "arrows": "to",
                        "dashes": True,
                        "smooth": edge_smooth,
                    }
                    net.add_edge(router_method_name, listener_name, **edge_style)
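`compute_positions` only reads the levels dict (the `flow` argument is unused), laying each level out as a row centered on x = 0. A quick sketch with made-up node names:

```python
from crewai.flow.visualization_utils import compute_positions

levels = {"begin": 0, "branch_a": 1, "branch_b": 1}
print(compute_positions(None, levels))
# level 0: begin at x=0, y=0; level 1: branch_a and branch_b at x=-75 and x=75, y=150
```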
@@ -1,20 +1,183 @@
-from typing import Any, Dict, List
-from litellm import completion
+from contextlib import contextmanager
+from typing import Any, Dict, List, Optional, Union
+import logging
+import warnings
+import litellm
+from litellm import get_supported_openai_params
+
+from crewai.utilities.exceptions.context_window_exceeding_exception import (
+    LLMContextLengthExceededException,
+)
+
+import sys
+import io
+
+
+class FilteredStream(io.StringIO):
+    def write(self, s):
+        if (
+            "Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new"
+            in s
+            or "LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`"
+            in s
+        ):
+            return
+        super().write(s)
+
+
+LLM_CONTEXT_WINDOW_SIZES = {
+    # openai
+    "gpt-4": 8192,
+    "gpt-4o": 128000,
+    "gpt-4o-mini": 128000,
+    "gpt-4-turbo": 128000,
+    "o1-preview": 128000,
+    "o1-mini": 128000,
+    # deepseek
+    "deepseek-chat": 128000,
+    # groq
+    "gemma2-9b-it": 8192,
+    "gemma-7b-it": 8192,
+    "llama3-groq-70b-8192-tool-use-preview": 8192,
+    "llama3-groq-8b-8192-tool-use-preview": 8192,
+    "llama-3.1-70b-versatile": 131072,
+    "llama-3.1-8b-instant": 131072,
+    "llama-3.2-1b-preview": 8192,
+    "llama-3.2-3b-preview": 8192,
+    "llama-3.2-11b-text-preview": 8192,
+    "llama-3.2-90b-text-preview": 8192,
+    "llama3-70b-8192": 8192,
+    "llama3-8b-8192": 8192,
+    "mixtral-8x7b-32768": 32768,
+}
+
+
+@contextmanager
+def suppress_warnings():
+    with warnings.catch_warnings():
+        warnings.filterwarnings("ignore")
+
+        # Redirect stdout and stderr
+        old_stdout = sys.stdout
+        old_stderr = sys.stderr
+        sys.stdout = FilteredStream()
+        sys.stderr = FilteredStream()
+
+        try:
+            yield
+        finally:
+            # Restore stdout and stderr
+            sys.stdout = old_stdout
+            sys.stderr = old_stderr
 
 
 class LLM:
-    def __init__(self, model: str, stop: List[str] = [], callbacks: List[Any] = []):
-        self.stop = stop
+    def __init__(
+        self,
+        model: str,
+        timeout: Optional[Union[float, int]] = None,
+        temperature: Optional[float] = None,
+        top_p: Optional[float] = None,
+        n: Optional[int] = None,
+        stop: Optional[Union[str, List[str]]] = None,
+        max_completion_tokens: Optional[int] = None,
+        max_tokens: Optional[int] = None,
+        presence_penalty: Optional[float] = None,
+        frequency_penalty: Optional[float] = None,
+        logit_bias: Optional[Dict[int, float]] = None,
+        response_format: Optional[Dict[str, Any]] = None,
+        seed: Optional[int] = None,
+        logprobs: Optional[bool] = None,
+        top_logprobs: Optional[int] = None,
+        base_url: Optional[str] = None,
+        api_version: Optional[str] = None,
+        api_key: Optional[str] = None,
+        callbacks: List[Any] = [],
+        **kwargs,
+    ):
         self.model = model
+        self.timeout = timeout
+        self.temperature = temperature
+        self.top_p = top_p
+        self.n = n
+        self.stop = stop
+        self.max_completion_tokens = max_completion_tokens
+        self.max_tokens = max_tokens
+        self.presence_penalty = presence_penalty
+        self.frequency_penalty = frequency_penalty
+        self.logit_bias = logit_bias
+        self.response_format = response_format
+        self.seed = seed
+        self.logprobs = logprobs
+        self.top_logprobs = top_logprobs
+        self.base_url = base_url
+        self.api_version = api_version
+        self.api_key = api_key
         self.callbacks = callbacks
+        self.kwargs = kwargs
+
+        litellm.drop_params = True
+        litellm.set_verbose = False
+        litellm.callbacks = callbacks
 
-    def call(self, messages: List[Dict[str, str]]) -> Dict[str, Any]:
-        response = completion(
-            stop=self.stop, model=self.model, messages=messages, num_retries=5
-        )
-        return response["choices"][0]["message"]["content"]
-
-    def _call_callbacks(self, formatted_answer):
-        for callback in self.callbacks:
-            callback(formatted_answer)
+    def call(self, messages: List[Dict[str, str]], callbacks: List[Any] = []) -> str:
+        with suppress_warnings():
+            if callbacks and len(callbacks) > 0:
+                litellm.callbacks = callbacks
+
+            try:
+                params = {
+                    "model": self.model,
+                    "messages": messages,
+                    "timeout": self.timeout,
+                    "temperature": self.temperature,
+                    "top_p": self.top_p,
+                    "n": self.n,
+                    "stop": self.stop,
+                    "max_tokens": self.max_tokens or self.max_completion_tokens,
+                    "presence_penalty": self.presence_penalty,
+                    "frequency_penalty": self.frequency_penalty,
+                    "logit_bias": self.logit_bias,
+                    "response_format": self.response_format,
+                    "seed": self.seed,
+                    "logprobs": self.logprobs,
+                    "top_logprobs": self.top_logprobs,
+                    "api_base": self.base_url,
+                    "api_version": self.api_version,
+                    "api_key": self.api_key,
+                    "stream": False,
+                    **self.kwargs,
+                }
+
+                # Remove None values to avoid passing unnecessary parameters
+                params = {k: v for k, v in params.items() if v is not None}
+
+                response = litellm.completion(**params)
+                return response["choices"][0]["message"]["content"]
+            except Exception as e:
+                if not LLMContextLengthExceededException(
+                    str(e)
+                )._is_context_limit_error(str(e)):
+                    logging.error(f"LiteLLM call failed: {str(e)}")
+
+                raise  # Re-raise the exception after logging
+
+    def supports_function_calling(self) -> bool:
+        try:
+            params = get_supported_openai_params(model=self.model)
+            return "response_format" in params
+        except Exception as e:
+            logging.error(f"Failed to get supported params: {str(e)}")
+            return False
+
+    def supports_stop_words(self) -> bool:
+        try:
+            params = get_supported_openai_params(model=self.model)
+            return "stop" in params
+        except Exception as e:
+            logging.error(f"Failed to get supported params: {str(e)}")
+            return False
+
+    def get_context_window_size(self) -> int:
+        # Only use 75% of the context window size to avoid cutting a message in the middle
+        return int(LLM_CONTEXT_WINDOW_SIZES.get(self.model, 8192) * 0.75)
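A minimal sketch of the reworked wrapper in use; the model string and sampling values are illustrative, and any LiteLLM-supported model name should work the same way:

```python
from crewai.llm import LLM

llm = LLM(model="gpt-4o-mini", temperature=0.2, seed=42)

if llm.supports_stop_words():
    llm.stop = ["\nObservation:"]  # litellm.drop_params discards unsupported params anyway

print(llm.call([{"role": "user", "content": "Reply with one word."}]))
print(llm.get_context_window_size())  # 75% of the table entry, e.g. 96000 for gpt-4o-mini
```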
@@ -10,12 +10,13 @@ class EntityMemory(Memory):
     Inherits from the Memory class.
     """
 
-    def __init__(self, crew=None, embedder_config=None):
-        storage = RAGStorage(
-            type="entities",
-            allow_reset=False,
-            embedder_config=embedder_config,
-            crew=crew,
+    def __init__(self, crew=None, embedder_config=None, storage=None):
+        storage = (
+            storage
+            if storage
+            else RAGStorage(
+                type="entities", allow_reset=False, embedder_config=embedder_config, crew=crew
+            )
         )
         super().__init__(storage)
@@ -14,8 +14,8 @@ class LongTermMemory(Memory):
     LongTermMemoryItem instances.
     """
 
-    def __init__(self):
-        storage = LTMSQLiteStorage()
+    def __init__(self, storage=None):
+        storage = storage if storage else LTMSQLiteStorage()
         super().__init__(storage)
 
     def save(self, item: LongTermMemoryItem) -> None:  # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"
@@ -13,9 +13,13 @@ class ShortTermMemory(Memory):
     MemoryItem instances.
     """
 
-    def __init__(self, crew=None, embedder_config=None):
-        storage = RAGStorage(
-            type="short_term", embedder_config=embedder_config, crew=crew
+    def __init__(self, crew=None, embedder_config=None, storage=None):
+        storage = (
+            storage
+            if storage
+            else RAGStorage(
+                type="short_term", embedder_config=embedder_config, crew=crew
+            )
         )
         super().__init__(storage)
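These three changes open the same seam: a storage backend can now be injected instead of always constructing the default. A sketch with a hypothetical in-memory stand-in (the class and its method signatures are assumptions; match whatever save/search calls your code makes on the memory):

```python
from crewai.memory.short_term.short_term_memory import ShortTermMemory


class InMemoryStorage:
    """Hypothetical stand-in backend, e.g. for tests."""

    def __init__(self):
        self.items = []

    def save(self, value, metadata=None, agent=None):
        self.items.append((value, metadata, agent))

    def search(self, query, **kwargs):
        return [v for v, _, _ in self.items if query in str(v)]


stm = ShortTermMemory(storage=InMemoryStorage())  # no RAGStorage, no embedder needed
```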
@@ -1,6 +1,7 @@
|
||||
from functools import wraps
|
||||
|
||||
from crewai.project.utils import memoize
|
||||
from crewai import Crew
|
||||
|
||||
|
||||
def task(func):
|
||||
@@ -72,7 +73,7 @@ def pipeline(func):
|
||||
return memoize(func)
|
||||
|
||||
|
||||
def crew(func):
|
||||
def crew(func) -> "Crew":
|
||||
def wrapper(self, *args, **kwargs):
|
||||
instantiated_tasks = []
|
||||
instantiated_agents = []
|
||||
|
||||
@@ -1,14 +1,16 @@
 import inspect
 from pathlib import Path
-from typing import Any, Callable, Dict
+from typing import Any, Callable, Dict, Type, TypeVar
 
 import yaml
 from dotenv import load_dotenv
 
 load_dotenv()
 
+T = TypeVar("T", bound=Type[Any])
+
 
-def CrewBase(cls):
+def CrewBase(cls: T) -> T:
     class WrappedClass(cls):
         is_crew_class: bool = True  # type: ignore
 
@@ -35,7 +37,7 @@ def CrewBase(cls):
         @staticmethod
         def load_yaml(config_path: Path):
             try:
-                with open(config_path, "r") as file:
+                with open(config_path, "r", encoding="utf-8") as file:
                     return yaml.safe_load(file)
             except FileNotFoundError:
                 print(f"File not found: {config_path}")
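The `TypeVar` bound means type checkers now see the decorated class come back as itself rather than as an untyped wrapper. Usage is unchanged; a sketch, with illustrative config paths:

```python
from crewai.project import CrewBase


@CrewBase
class ResearchCrew:
    agents_config = "config/agents.yaml"  # loaded via the UTF-8 load_yaml above
    tasks_config = "config/tasks.yaml"
```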
@@ -35,7 +35,7 @@ class TaskOutput(BaseModel):
         return self
 
     @property
-    def json(self) -> str:
+    def json(self) -> Optional[str]:
         if self.output_format != OutputFormat.JSON:
             raise ValueError(
                 """
@@ -5,8 +5,8 @@ import json
 import os
 import platform
 import warnings
-from typing import TYPE_CHECKING, Any, Optional
+from contextlib import contextmanager
 from typing import TYPE_CHECKING, Any, Optional
 
 
 @contextmanager
@@ -21,7 +21,9 @@ with suppress_warnings():
 
 
 from opentelemetry import trace  # noqa: E402
-from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter  # noqa: E402
+from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
+    OTLPSpanExporter,  # noqa: E402
+)
 from opentelemetry.sdk.resources import SERVICE_NAME, Resource  # noqa: E402
 from opentelemetry.sdk.trace import TracerProvider  # noqa: E402
 from opentelemetry.sdk.trace.export import BatchSpanProcessor  # noqa: E402
@@ -53,7 +55,8 @@ class Telemetry:
             self.resource = Resource(
                 attributes={SERVICE_NAME: "crewAI-telemetry"},
             )
-            self.provider = TracerProvider(resource=self.resource)
+            with suppress_warnings():
+                self.provider = TracerProvider(resource=self.resource)
 
             processor = BatchSpanProcessor(
                 OTLPSpanExporter(
@@ -116,8 +119,12 @@ class Telemetry:
                     "max_iter": agent.max_iter,
                     "max_rpm": agent.max_rpm,
                     "i18n": agent.i18n.prompt_file,
-                    "function_calling_llm": agent.function_calling_llm,
-                    "llm": agent.llm,
+                    "function_calling_llm": (
+                        agent.function_calling_llm.model
+                        if agent.function_calling_llm
+                        else ""
+                    ),
+                    "llm": agent.llm.model,
                     "delegation_enabled?": agent.allow_delegation,
                     "allow_code_execution?": agent.allow_code_execution,
                     "max_retry_limit": agent.max_retry_limit,
@@ -142,9 +149,9 @@ class Telemetry:
                     "expected_output": task.expected_output,
                     "async_execution?": task.async_execution,
                     "human_input?": task.human_input,
-                    "agent_role": task.agent.role
-                    if task.agent
-                    else "None",
+                    "agent_role": (
+                        task.agent.role if task.agent else "None"
+                    ),
                     "agent_key": task.agent.key if task.agent else None,
                     "context": (
                         [task.description for task in task.context]
@@ -181,8 +188,12 @@ class Telemetry:
                     "verbose?": agent.verbose,
                     "max_iter": agent.max_iter,
                     "max_rpm": agent.max_rpm,
-                    "function_calling_llm": agent.function_calling_llm,
-                    "llm": agent.llm,
+                    "function_calling_llm": (
+                        agent.function_calling_llm.model
+                        if agent.function_calling_llm
+                        else ""
+                    ),
+                    "llm": agent.llm.model,
                     "delegation_enabled?": agent.allow_delegation,
                     "allow_code_execution?": agent.allow_code_execution,
                     "max_retry_limit": agent.max_retry_limit,
@@ -205,9 +216,9 @@ class Telemetry:
                     "id": str(task.id),
                     "async_execution?": task.async_execution,
                     "human_input?": task.human_input,
-                    "agent_role": task.agent.role
-                    if task.agent
-                    else "None",
+                    "agent_role": (
+                        task.agent.role if task.agent else "None"
+                    ),
                     "agent_key": task.agent.key if task.agent else None,
                     "tools_names": [
                         tool.name.casefold()
@@ -296,7 +307,7 @@ class Telemetry:
                 self._add_attribute(span, "tool_name", tool_name)
                 self._add_attribute(span, "attempts", attempts)
                 if llm:
-                    self._add_attribute(span, "llm", llm)
+                    self._add_attribute(span, "llm", llm.model)
                 span.set_status(Status(StatusCode.OK))
                 span.end()
             except Exception:
@@ -316,7 +327,7 @@ class Telemetry:
                 self._add_attribute(span, "tool_name", tool_name)
                 self._add_attribute(span, "attempts", attempts)
                 if llm:
-                    self._add_attribute(span, "llm", llm)
+                    self._add_attribute(span, "llm", llm.model)
                 span.set_status(Status(StatusCode.OK))
                 span.end()
             except Exception:
@@ -334,7 +345,7 @@ class Telemetry:
                     pkg_resources.get_distribution("crewai").version,
                 )
                 if llm:
-                    self._add_attribute(span, "llm", llm)
+                    self._add_attribute(span, "llm", llm.model)
                 span.set_status(Status(StatusCode.OK))
                 span.end()
             except Exception:
@@ -487,7 +498,7 @@ class Telemetry:
                     "max_iter": agent.max_iter,
                     "max_rpm": agent.max_rpm,
                     "i18n": agent.i18n.prompt_file,
-                    "llm": agent.llm,
+                    "llm": agent.llm.model,
                     "delegation_enabled?": agent.allow_delegation,
                     "tools_names": [
                         tool.name.casefold() for tool in agent.tools or []
@@ -563,3 +574,38 @@ class Telemetry:
             return span.set_attribute(key, value)
         except Exception:
             pass
+
+    def flow_creation_span(self, flow_name: str):
+        if self.ready:
+            try:
+                tracer = trace.get_tracer("crewai.telemetry")
+                span = tracer.start_span("Flow Creation")
+                self._add_attribute(span, "flow_name", flow_name)
+                span.set_status(Status(StatusCode.OK))
+                span.end()
+            except Exception:
+                pass
+
+    def flow_plotting_span(self, flow_name: str, node_names: list[str]):
+        if self.ready:
+            try:
+                tracer = trace.get_tracer("crewai.telemetry")
+                span = tracer.start_span("Flow Plotting")
+                self._add_attribute(span, "flow_name", flow_name)
+                self._add_attribute(span, "node_names", json.dumps(node_names))
+                span.set_status(Status(StatusCode.OK))
+                span.end()
+            except Exception:
+                pass
+
+    def flow_execution_span(self, flow_name: str, node_names: list[str]):
+        if self.ready:
+            try:
+                tracer = trace.get_tracer("crewai.telemetry")
+                span = tracer.start_span("Flow Execution")
+                self._add_attribute(span, "flow_name", flow_name)
+                self._add_attribute(span, "node_names", json.dumps(node_names))
+                span.set_status(Status(StatusCode.OK))
+                span.end()
+            except Exception:
+                pass
@@ -1,5 +1,7 @@
 import ast
+import datetime
 import os
+import time
 from difflib import SequenceMatcher
 from textwrap import dedent
 from typing import Any, List, Union
@@ -8,7 +10,10 @@ from crewai.agents.tools_handler import ToolsHandler
 from crewai.task import Task
 from crewai.telemetry import Telemetry
 from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
+from crewai.tools.tool_usage_events import ToolUsageError, ToolUsageFinished
 from crewai.utilities import I18N, Converter, ConverterError, Printer
+import crewai.utilities.events as events
+
 
 agentops = None
 if os.environ.get("AGENTOPS_API_KEY"):
@@ -17,7 +22,7 @@ if os.environ.get("AGENTOPS_API_KEY"):
     except ImportError:
         pass
 
-OPENAI_BIGGER_MODELS = ["gpt-4", "gpt-4o"]
+OPENAI_BIGGER_MODELS = ["gpt-4", "gpt-4o", "o1-preview", "o1-mini"]
 
 
 class ToolUsageErrorException(Exception):
@@ -71,10 +76,12 @@ class ToolUsage:
         self.function_calling_llm = function_calling_llm
 
         # Set the maximum parsing attempts for bigger models
-        if self._is_gpt(self.function_calling_llm) and "4" in self.function_calling_llm:
-        if self.function_calling_llm in OPENAI_BIGGER_MODELS:
-            self._max_parsing_attempts = 2
-            self._remember_format_after_usages = 4
+        if (
+            self.function_calling_llm
+            and self.function_calling_llm in OPENAI_BIGGER_MODELS
+        ):
+            self._max_parsing_attempts = 2
+            self._remember_format_after_usages = 4
 
     def parse(self, tool_string: str):
        """Parse the tool string and return the tool calling."""
@@ -124,12 +131,16 @@ class ToolUsage:
         except Exception:
             self.task.increment_tools_errors()
 
-        result = None  # type: ignore # Incompatible types in assignment (expression has type "None", variable has type "str")
+        started_at = time.time()
+        from_cache = False
+
+        result = None  # type: ignore # Incompatible types in assignment (expression has type "None", variable has type "str")
         # check if cache is available
         if self.tools_handler.cache:
             result = self.tools_handler.cache.read(  # type: ignore # Incompatible types in assignment (expression has type "str | None", variable has type "str")
                 tool=calling.tool_name, input=calling.arguments
             )
+            from_cache = result is not None
 
         original_tool = next(
             (ot for ot in self.original_tools if ot.name == tool.name), None
@@ -161,6 +172,7 @@ class ToolUsage:
             else:
                 result = tool.invoke(input={})
         except Exception as e:
+            self.on_tool_error(tool=tool, tool_calling=calling, e=e)
             self._run_attempts += 1
             if self._run_attempts > self._max_parsing_attempts:
                 self._telemetry.tool_usage_error(llm=self.function_calling_llm)
@@ -212,6 +224,13 @@ class ToolUsage:
                 "tool_args": calling.arguments,
             }
 
+            self.on_tool_use_finished(
+                tool=tool,
+                tool_calling=calling,
+                from_cache=from_cache,
+                started_at=started_at,
+            )
+
             if (
                 hasattr(original_tool, "result_as_answer")
                 and original_tool.result_as_answer  # type: ignore # Item "None" of "Any | None" has no attribute "cache_function"
@@ -295,61 +314,78 @@ class ToolUsage:
             )
         return "\n--\n".join(descriptions)
 
-    def _is_gpt(self, llm) -> bool:
-        return (
-            "gpt" in str(llm).lower()
-            or "o1-preview" in str(llm).lower()
-            or "o1-mini" in str(llm).lower()
-        )
+    def _function_calling(self, tool_string: str):
+        model = (
+            InstructorToolCalling
+            if self.function_calling_llm.supports_function_calling()
+            else ToolCalling
+        )
+        converter = Converter(
+            text=f"Only tools available:\n###\n{self._render()}\n\nReturn a valid schema for the tool, the tool name must be exactly equal one of the options, use this text to inform the valid output schema:\n\n### TEXT \n{tool_string}",
+            llm=self.function_calling_llm,
+            model=model,
+            instructions=dedent(
+                """\
+                The schema should have the following structure, only two keys:
+                - tool_name: str
+                - arguments: dict (always a dictionary, with all arguments being passed)
+
+                Example:
+                {"tool_name": "tool name", "arguments": {"arg_name1": "value", "arg_name2": 2}}""",
+            ),
+            max_attempts=1,
+        )
+        tool_object = converter.to_pydantic()
+        calling = ToolCalling(
+            tool_name=tool_object["tool_name"],
+            arguments=tool_object["arguments"],
+            log=tool_string,  # type: ignore
+        )
+
+        if isinstance(calling, ConverterError):
+            raise calling
+
+        return calling
+
+    def _original_tool_calling(self, tool_string: str, raise_error: bool = False):
+        tool_name = self.action.tool
+        tool = self._select_tool(tool_name)
+        try:
+            tool_input = self._validate_tool_input(self.action.tool_input)
+            arguments = ast.literal_eval(tool_input)
+        except Exception:
+            if raise_error:
+                raise
+            else:
+                return ToolUsageErrorException(  # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling")
+                    f'{self._i18n.errors("tool_arguments_error")}'
+                )
+
+        if not isinstance(arguments, dict):
+            if raise_error:
+                raise
+            else:
+                return ToolUsageErrorException(  # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling")
+                    f'{self._i18n.errors("tool_arguments_error")}'
+                )
+
+        return ToolCalling(
+            tool_name=tool.name,
+            arguments=arguments,
+            log=tool_string,  # type: ignore
+        )
 
     def _tool_calling(
         self, tool_string: str
     ) -> Union[ToolCalling, InstructorToolCalling]:
         try:
-            if self.function_calling_llm:
-                model = (
-                    InstructorToolCalling
-                    if self._is_gpt(self.function_calling_llm)
-                    else ToolCalling
-                )
-                converter = Converter(
-                    text=f"Only tools available:\n###\n{self._render()}\n\nReturn a valid schema for the tool, the tool name must be exactly equal one of the options, use this text to inform the valid output schema:\n\n### TEXT \n{tool_string}",
-                    llm=self.function_calling_llm,
-                    model=model,
-                    instructions=dedent(
-                        """\
-                            The schema should have the following structure, only two keys:
-                            - tool_name: str
-                            - arguments: dict (with all arguments being passed)
-
-                            Example:
-                            {"tool_name": "tool name", "arguments": {"arg_name1": "value", "arg_name2": 2}}""",
-                    ),
-                    max_attempts=1,
-                )
-                calling = converter.to_pydantic()
-
-                if isinstance(calling, ConverterError):
-                    raise calling
-            else:
-                tool_name = self.action.tool
-                tool = self._select_tool(tool_name)
-                try:
-                    tool_input = self._validate_tool_input(self.action.tool_input)
-                    arguments = ast.literal_eval(tool_input)
-                except Exception:
-                    return ToolUsageErrorException(  # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling")
-                        f'{self._i18n.errors("tool_arguments_error")}'
-                    )
-                if not isinstance(arguments, dict):
-                    return ToolUsageErrorException(  # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling")
-                        f'{self._i18n.errors("tool_arguments_error")}'
-                    )
-                calling = ToolCalling(
-                    tool_name=tool.name,
-                    arguments=arguments,
-                    log=tool_string,  # type: ignore
-                )
+            try:
+                return self._original_tool_calling(tool_string, raise_error=True)
+            except Exception:
+                if self.function_calling_llm:
+                    return self._function_calling(tool_string)
+                else:
+                    return self._original_tool_calling(tool_string)
         except Exception as e:
             self._run_attempts += 1
             if self._run_attempts > self._max_parsing_attempts:
@@ -362,8 +398,6 @@ class ToolUsage:
             )
             return self._tool_calling(tool_string)
 
-        return calling
-
     def _validate_tool_input(self, tool_input: str) -> str:
         try:
             ast.literal_eval(tool_input)
@@ -414,3 +448,34 @@ class ToolUsage:
         # Reconstruct the JSON string
         new_json_string = "{" + ", ".join(formatted_entries) + "}"
         return new_json_string
+
+    def on_tool_error(self, tool: Any, tool_calling: ToolCalling, e: Exception) -> None:
+        event_data = self._prepare_event_data(tool, tool_calling)
+        events.emit(
+            source=self, event=ToolUsageError(**{**event_data, "error": str(e)})
+        )
+
+    def on_tool_use_finished(
+        self, tool: Any, tool_calling: ToolCalling, from_cache: bool, started_at: float
+    ) -> None:
+        finished_at = time.time()
+        event_data = self._prepare_event_data(tool, tool_calling)
+        event_data.update(
+            {
+                "started_at": datetime.datetime.fromtimestamp(started_at),
+                "finished_at": datetime.datetime.fromtimestamp(finished_at),
+                "from_cache": from_cache,
+            }
+        )
+        events.emit(source=self, event=ToolUsageFinished(**event_data))
+
+    def _prepare_event_data(self, tool: Any, tool_calling: ToolCalling) -> dict:
+        return {
+            "agent_key": self.agent.key,
+            "agent_role": (self.agent._original_role or self.agent.role),
+            "run_attempts": self._run_attempts,
+            "delegations": self.task.delegations,
+            "tool_name": tool.name,
+            "tool_args": tool_calling.arguments,
+            "tool_class": tool.__class__.__name__,
+        }
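The refactor splits parsing into two explicit paths: `_original_tool_calling` first tries to parse the agent's `tool_input` directly, and only if that raises does `_tool_calling` fall back to `_function_calling` (when a function-calling LLM is configured). The direct path hinges on `ast.literal_eval`, so the input must be a Python/JSON-style dict literal; the values here are illustrative:

```python
import ast

tool_input = '{"query": "crewAI flows", "max_results": 3}'
arguments = ast.literal_eval(tool_input)  # evaluates the literal only, no code execution
assert isinstance(arguments, dict)        # anything else is rejected as a tool-arguments error
```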
src/crewai/tools/tool_usage_events.py (new file, 23 lines)
@@ -0,0 +1,23 @@
from typing import Any, Dict
from pydantic import BaseModel
from datetime import datetime


class ToolUsageEvent(BaseModel):
    agent_key: str
    agent_role: str
    tool_name: str
    tool_args: Dict[str, Any]
    tool_class: str
    run_attempts: int | None = None
    delegations: int | None = None


class ToolUsageFinished(ToolUsageEvent):
    started_at: datetime
    finished_at: datetime
    from_cache: bool = False


class ToolUsageError(ToolUsageEvent):
    error: str
@@ -17,7 +17,7 @@
     "task_with_context": "{task}\n\nThis is the context you're working with:\n{context}",
     "expected_output": "\nThis is the expect criteria for your final answer: {expected_output}\nyou MUST return the actual complete content as the final answer, not a summary.",
     "human_feedback": "You got human feedback on your work, re-evaluate it and give a new Final Answer when ready.\n {human_feedback}",
-    "getting_input": "This is the agent's final answer: {final_answer}\nPlease provide feedback: ",
+    "getting_input": "This is the agent's final answer: {final_answer}\n\n",
     "summarizer_system_message": "You are a helpful assistant that summarizes text.",
     "sumamrize_instruction": "Summarize the following text, make sure to include all the important information: {group}",
     "summary": "This is a summary of our conversation so far:\n{merged_summary}"
@@ -2,7 +2,6 @@ import json
 import re
 from typing import Any, Optional, Type, Union
 
-from crewai.llm import LLM
 from pydantic import BaseModel, ValidationError
 
 from crewai.agents.agent_builder.utilities.base_output_converter import OutputConverter
@@ -24,10 +23,10 @@ class Converter(OutputConverter):
     def to_pydantic(self, current_attempt=1):
         """Convert text to pydantic."""
         try:
-            if self.is_gpt:
+            if self.llm.supports_function_calling():
                 return self._create_instructor().to_pydantic()
             else:
-                return LLM(model=self.llm).call(
+                return self.llm.call(
                     [
                         {"role": "system", "content": self.instructions},
                         {"role": "user", "content": self.text},
@@ -43,11 +42,11 @@ class Converter(OutputConverter):
     def to_json(self, current_attempt=1):
         """Convert text to json."""
         try:
-            if self.is_gpt:
+            if self.llm.supports_function_calling():
                 return self._create_instructor().to_json()
             else:
                 return json.dumps(
-                    LLM(model=self.llm).call(
+                    self.llm.call(
                         [
                             {"role": "system", "content": self.instructions},
                             {"role": "user", "content": self.text},
@@ -78,7 +77,7 @@ class Converter(OutputConverter):
         )
 
         parser = CrewPydanticOutputParser(pydantic_object=self.model)
-        result = LLM(model=self.llm).call(
+        result = self.llm.call(
             [
                 {"role": "system", "content": self.instructions},
                 {"role": "user", "content": self.text},
@@ -86,15 +85,6 @@ class Converter(OutputConverter):
         )
         return parser.parse_result(result)
 
-    @property
-    def is_gpt(self) -> bool:
-        """Return if llm provided is of gpt from openai."""
-        return (
-            "gpt" in str(self.llm).lower()
-            or "o1-preview" in str(self.llm).lower()
-            or "o1-mini" in str(self.llm).lower()
-        )
-
 
 def convert_to_model(
     result: str,
@@ -113,10 +103,12 @@ def convert_to_model(
             return handle_partial_json(
                 result, model, bool(output_json), agent, converter_cls
             )
+
     except ValidationError:
         return handle_partial_json(
             result, model, bool(output_json), agent, converter_cls
         )
+
     except Exception as e:
         Printer().print(
             content=f"Unexpected error during model conversion: {type(e).__name__}: {e}. Returning original result.",
@@ -180,7 +172,6 @@ def convert_with_instructions(
         model=model,
         instructions=instructions,
     )
-
     exported_result = (
         converter.to_pydantic() if not is_json_output else converter.to_json()
     )
@@ -197,21 +188,12 @@ def convert_with_instructions(
 
 def get_conversion_instructions(model: Type[BaseModel], llm: Any) -> str:
     instructions = "I'm gonna convert this raw text into valid JSON."
-    if not is_gpt(llm):
+    if llm.supports_function_calling():
         model_schema = PydanticSchemaParser(model=model).get_schema()
         instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
     return instructions
 
 
-def is_gpt(llm: Any) -> bool:
-    """Return if llm provided is of gpt from openai."""
-    return (
-        "gpt" in str(llm).lower()
-        or "o1-preview" in str(llm).lower()
-        or "o1-mini" in str(llm).lower()
-    )
-
-
 def create_converter(
     agent: Optional[Any] = None,
     converter_cls: Optional[Type[Converter]] = None,
@@ -1,4 +1,4 @@
-from datetime import datetime
+from datetime import datetime, date
 import json
 from uuid import UUID
 from pydantic import BaseModel
@@ -11,8 +11,9 @@ class CrewJSONEncoder(json.JSONEncoder):
         elif isinstance(obj, UUID):
             return str(obj)
 
-        elif isinstance(obj, datetime):
+        elif isinstance(obj, datetime) or isinstance(obj, date):
             return obj.isoformat()
+
         return super().default(obj)
 
     def _handle_pydantic_model(self, obj):
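Supporting `date` alongside `datetime` keeps the encoder compatible with the timestamped tool-usage events above. A quick sketch:

```python
import json
from datetime import date, datetime

from crewai.utilities.crew_json_encoder import CrewJSONEncoder

payload = {"day": date(2024, 9, 30), "at": datetime(2024, 9, 30, 12, 0)}
print(json.dumps(payload, cls=CrewJSONEncoder))
# {"day": "2024-09-30", "at": "2024-09-30T12:00:00"}
```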
@@ -49,7 +49,7 @@ class TaskEvaluation(BaseModel):
 
 class TrainingTaskEvaluation(BaseModel):
     suggestions: List[str] = Field(
-        description="Based on the Human Feedbacks and the comparison between Initial Outputs and Improved outputs provide action items based on human_feedback for future tasks."
+        description="List of clear, actionable instructions derived from the Human Feedbacks to enhance the Agent's performance. Analyze the differences between Initial Outputs and Improved Outputs to generate specific action items for future tasks. Ensure all key and specific points from the human feedback are incorporated into these instructions."
     )
     quality: float = Field(
         description="A score from 0 to 10 evaluating on completion, quality, and overall performance from the improved output to the initial output based on the human feedback."
@@ -78,7 +78,7 @@ class TaskEvaluator:
 
         instructions = "Convert all responses into valid JSON output."
 
-        if not self._is_gpt(self.llm):
+        if not self.llm.supports_function_calling():
             model_schema = PydanticSchemaParser(model=TaskEvaluation).get_schema()
             instructions = f"{instructions}\n\nReturn only valid JSON with the following schema:\n```json\n{model_schema}\n```"
 
@@ -91,13 +91,6 @@ class TaskEvaluator:
 
         return converter.to_pydantic()
 
-    def _is_gpt(self, llm) -> bool:
-        return (
-            "gpt" in str(self.llm).lower()
-            or "o1-preview" in str(self.llm).lower()
-            or "o1-mini" in str(self.llm).lower()
-        )
-
     def evaluate_training_data(
         self, training_data: dict, agent_id: str
     ) -> TrainingTaskEvaluation:
@@ -123,12 +116,12 @@ class TaskEvaluator:
             "Assess the quality of the training data based on the llm output, human feedback, and llm output improved result.\n\n"
             f"{final_aggregated_data}"
             "Please provide:\n"
-            "- Based on the Human Feedbacks and the comparison between Initial Outputs and Improved outputs provide action items based on human_feedback for future tasks\n"
+            "- Provide a list of clear, actionable instructions derived from the Human Feedbacks to enhance the Agent's performance. Analyze the differences between Initial Outputs and Improved Outputs to generate specific action items for future tasks. Ensure all key and specific points from the human feedback are incorporated into these instructions.\n"
             "- A score from 0 to 10 evaluating on completion, quality, and overall performance from the improved output to the initial output based on the human feedback\n"
         )
         instructions = "I'm gonna convert this raw text into valid JSON."
 
-        if not self._is_gpt(self.llm):
+        if not self.llm.supports_function_calling():
             model_schema = PydanticSchemaParser(
                 model=TrainingTaskEvaluation
             ).get_schema()
44
src/crewai/utilities/events.py
Normal file
@@ -0,0 +1,44 @@
from typing import Any, Callable, Generic, List, Dict, Type, TypeVar
from functools import wraps
from pydantic import BaseModel


T = TypeVar("T")
EVT = TypeVar("EVT", bound=BaseModel)


class Emitter(Generic[T, EVT]):
    _listeners: Dict[Type[EVT], List[Callable]] = {}

    def on(self, event_type: Type[EVT]):
        def decorator(func: Callable):
            @wraps(func)
            def wrapper(*args, **kwargs):
                return func(*args, **kwargs)

            self._listeners.setdefault(event_type, []).append(wrapper)
            return wrapper

        return decorator

    def emit(self, source: T, event: EVT) -> None:
        event_type = type(event)
        for func in self._listeners.get(event_type, []):
            func(source, event)


default_emitter = Emitter[Any, BaseModel]()


def emit(source: Any, event: BaseModel, raise_on_error: bool = False) -> None:
    try:
        default_emitter.emit(source, event)
    except Exception as e:
        if raise_on_error:
            raise e
        else:
            print(f"Error emitting event: {e}")


def on(event_type: Type[BaseModel]) -> Callable:
    return default_emitter.on(event_type)
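Since this new module is the event API exercised by the test changes later in this diff, a short usage sketch may help; `TaskCompleted` is a hypothetical event type for illustration, while `emit` and `on` come from the file above:

```python
from pydantic import BaseModel

from crewai.utilities.events import emit, on


class TaskCompleted(BaseModel):
    # Hypothetical event payload; any pydantic model can serve as an event.
    task_name: str


@on(TaskCompleted)
def handle_task_completed(source, event: TaskCompleted) -> None:
    print(f"{source!r} finished task: {event.task_name}")


# Listeners are keyed by event type, so only TaskCompleted handlers fire here.
emit(source="my_crew", event=TaskCompleted(task_name="research"))
```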
@@ -1,5 +1,6 @@
class LLMContextLengthExceededException(Exception):
    CONTEXT_LIMIT_ERRORS = [
        "expected a string with maximum length",
        "maximum context length",
        "context length exceeded",
        "context_length_exceeded",
@@ -17,13 +17,13 @@ class I18N(BaseModel):
        """Load prompts from a JSON file."""
        try:
            if self.prompt_file:
                with open(self.prompt_file, "r") as f:
                with open(self.prompt_file, "r", encoding="utf-8") as f:
                    self._prompts = json.load(f)
            else:
                dir_path = os.path.dirname(os.path.realpath(__file__))
                prompts_path = os.path.join(dir_path, "../translations/en.json")

                with open(prompts_path, "r") as f:
                with open(prompts_path, "r", encoding="utf-8") as f:
                    self._prompts = json.load(f)
        except FileNotFoundError:
            raise Exception(f"Prompt file '{self.prompt_file}' not found.")
@@ -42,6 +42,6 @@ class InternalInstructor:
        if self.instructions:
            messages.append({"role": "system", "content": self.instructions})
        model = self._client.chat.completions.create(
            model=self.llm, response_model=self.model, messages=messages
            model=self.llm.model, response_model=self.model, messages=messages
        )
        return model
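The one-line fix above matters because instructor's litellm client expects a model name string, not the `LLM` wrapper object. A minimal sketch of the underlying pattern (illustrative parameters; this is not the full `InternalInstructor` implementation):

```python
import instructor
from litellm import completion
from pydantic import BaseModel


class Answer(BaseModel):
    value: int


client = instructor.from_litellm(completion)
answer = client.chat.completions.create(
    model="gpt-4o-mini",  # a model *name*, i.e. what self.llm.model now supplies
    response_model=Answer,
    messages=[{"role": "user", "content": "What is 3 times 4?"}],
)
print(answer.value)  # -> 12
```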
@@ -9,9 +9,9 @@ class Logger(BaseModel):
    verbose: bool = Field(default=False)
    _printer: Printer = PrivateAttr(default_factory=Printer)

    def log(self, level, message, color="bold_green"):
    def log(self, level, message, color="bold_yellow"):
        if self.verbose:
            timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
            self._printer.print(
                f"[{timestamp}][{level.upper()}]: {message}", color=color
                f"\n[{timestamp}][{level.upper()}]: {message}", color=color
            )
@@ -15,6 +15,8 @@ class Printer:
            self._print_bold_blue(content)
        elif color == "yellow":
            self._print_yellow(content)
        elif color == "bold_yellow":
            self._print_bold_yellow(content)
        else:
            print(content)

@@ -35,3 +37,6 @@ class Printer:

    def _print_yellow(self, content):
        print("\033[93m {}\033[00m".format(content))

    def _print_bold_yellow(self, content):
        print("\033[1m\033[93m {}\033[00m".format(content))
@@ -1,4 +1,4 @@
from typing import Type, get_args, get_origin
from typing import Type, get_args, get_origin, Union

from pydantic import BaseModel

@@ -36,7 +36,14 @@ class PydanticSchemaParser(BaseModel):
                return f"List[\n{nested_schema}\n{' ' * 4 * depth}]"
            else:
                return f"List[{list_item_type.__name__}]"
        elif issubclass(field_type, BaseModel):
        elif get_origin(field_type) is Union:
            union_args = get_args(field_type)
            if type(None) in union_args:
                non_none_type = next(arg for arg in union_args if arg is not type(None))
                return f"Optional[{self._get_field_type(field.__class__(annotation=non_none_type), depth)}]"
            else:
                return f"Union[{', '.join(arg.__name__ for arg in union_args)}]"
        elif isinstance(field_type, type) and issubclass(field_type, BaseModel):
            return self._get_model_schema(field_type, depth)
        else:
            return field_type.__name__
            return getattr(field_type, "__name__", str(field_type))
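The new `Union` branch means `Optional[...]` annotations now render cleanly instead of tripping the old `issubclass` check, which rejects typing constructs. A small sketch of the expected rendering (the import path is assumed, and the output format follows the f-strings above):

```python
from typing import Optional

from pydantic import BaseModel

from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser  # path assumed


class Profile(BaseModel):
    name: str
    nickname: Optional[str] = None  # Optional[str] is Union[str, None]


# Previously, get_schema() hit issubclass() with a typing construct and raised;
# the Union branch now renders the field as "Optional[str]".
print(PydanticSchemaParser(model=Profile).get_schema())
```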
@@ -52,7 +52,7 @@ class RPMController(BaseModel):
        self._timer = None

    def _wait_for_next_minute(self):
        time.sleep(1)
        time.sleep(60)
        self._current_rpm = 0

    def _reset_request_count(self):
@@ -3,6 +3,7 @@
from unittest import mock
from unittest.mock import patch

import os
import pytest
from crewai import Agent, Crew, Task
from crewai.agents.cache import CacheHandler
@@ -11,9 +12,54 @@ from crewai.llm import LLM
from crewai.agents.parser import CrewAgentParser, OutputParserException
from crewai.tools.tool_calling import InstructorToolCalling
from crewai.tools.tool_usage import ToolUsage
from crewai.tools.tool_usage_events import ToolUsageFinished
from crewai.utilities import RPMController
from crewai_tools import tool
from crewai.agents.parser import AgentAction
from crewai.utilities.events import Emitter


def test_agent_llm_creation_with_env_vars():
    # Store original environment variables
    original_api_key = os.environ.get("OPENAI_API_KEY")
    original_api_base = os.environ.get("OPENAI_API_BASE")
    original_model_name = os.environ.get("OPENAI_MODEL_NAME")

    # Set up environment variables
    os.environ["OPENAI_API_KEY"] = "test_api_key"
    os.environ["OPENAI_API_BASE"] = "https://test-api-base.com"
    os.environ["OPENAI_MODEL_NAME"] = "gpt-4-turbo"

    # Create an agent without specifying LLM
    agent = Agent(role="test role", goal="test goal", backstory="test backstory")

    # Check if LLM is created correctly
    assert isinstance(agent.llm, LLM)
    assert agent.llm.model == "gpt-4-turbo"
    assert agent.llm.api_key == "test_api_key"
    assert agent.llm.base_url == "https://test-api-base.com"

    # Clean up environment variables
    del os.environ["OPENAI_API_KEY"]
    del os.environ["OPENAI_API_BASE"]
    del os.environ["OPENAI_MODEL_NAME"]

    # Create an agent without specifying LLM
    agent = Agent(role="test role", goal="test goal", backstory="test backstory")

    # Check if LLM is created correctly
    assert isinstance(agent.llm, LLM)
    assert agent.llm.model != "gpt-4-turbo"
    assert agent.llm.api_key != "test_api_key"
    assert agent.llm.base_url != "https://test-api-base.com"

    # Restore original environment variables
    if original_api_key:
        os.environ["OPENAI_API_KEY"] = original_api_key
    if original_api_base:
        os.environ["OPENAI_API_BASE"] = original_api_base
    if original_model_name:
        os.environ["OPENAI_MODEL_NAME"] = original_model_name


def test_agent_creation():
@@ -27,7 +73,7 @@ def test_agent_creation():

def test_agent_default_values():
    agent = Agent(role="test role", goal="test goal", backstory="test backstory")
    assert agent.llm == "gpt-4o"
    assert agent.llm.model == "gpt-4o-mini"
    assert agent.allow_delegation is False

@@ -35,7 +81,7 @@ def test_custom_llm():
    agent = Agent(
        role="test role", goal="test goal", backstory="test backstory", llm="gpt-4"
    )
    assert agent.llm == "gpt-4"
    assert agent.llm.model == "gpt-4"


def test_custom_llm_with_langchain():
@@ -48,7 +94,51 @@ def test_custom_llm_with_langchain():
        llm=ChatOpenAI(temperature=0, model="gpt-4"),
    )

    assert agent.llm == "gpt-4"
    assert agent.llm.model == "gpt-4"


def test_custom_llm_temperature_preservation():
    from langchain_openai import ChatOpenAI

    langchain_llm = ChatOpenAI(temperature=0.7, model="gpt-4")
    agent = Agent(
        role="temperature test role",
        goal="temperature test goal",
        backstory="temperature test backstory",
        llm=langchain_llm,
    )

    assert isinstance(agent.llm, LLM)
    assert agent.llm.model == "gpt-4"
    assert agent.llm.temperature == 0.7


@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_execute_task():
    from langchain_openai import ChatOpenAI
    from crewai import Task

    agent = Agent(
        role="Math Tutor",
        goal="Solve math problems accurately",
        backstory="You are an experienced math tutor with a knack for explaining complex concepts simply.",
        llm=ChatOpenAI(temperature=0.7, model="gpt-4o-mini"),
    )

    task = Task(
        description="Calculate the area of a circle with radius 5 cm.",
        expected_output="The calculated area of the circle in square centimeters.",
        agent=agent,
    )

    result = agent.execute_task(task)

    assert result is not None
    assert (
        result
        == "The calculated area of the circle is approximately 78.5 square centimeters."
    )
    assert "square centimeters" in result.lower()


@pytest.mark.vcr(filter_headers=["authorization"])
@@ -67,7 +157,7 @@ def test_agent_execution():
    )

    output = agent.execute_task(task)
    assert output == "The result of the math operation 1 + 1 is 2."
    assert output == "1 + 1 is 2"


@pytest.mark.vcr(filter_headers=["authorization"])
@@ -90,8 +180,15 @@ def test_agent_execution_with_tools():
        agent=agent,
        expected_output="The result of the multiplication.",
    )
    output = agent.execute_task(task)
    assert output == "The result of the multiplication is 12."
    with patch.object(Emitter, "emit") as emit:
        output = agent.execute_task(task)
        assert output == "The result of the multiplication is 12."
        assert emit.call_count == 1
        args, _ = emit.call_args
        assert isinstance(args[1], ToolUsageFinished)
        assert not args[1].from_cache
        assert args[1].tool_name == "multiplier"
        assert args[1].tool_args == {"first_number": 3, "second_number": 4}


@pytest.mark.vcr(filter_headers=["authorization"])
@@ -109,6 +206,7 @@ def test_logging_tool_usage():
        verbose=True,
    )

    assert agent.llm.model == "gpt-4o-mini"
    assert agent.tools_handler.last_used_tool == {}
    task = Task(
        description="What is 3 times 4?",
@@ -121,7 +219,8 @@ def test_logging_tool_usage():
    tool_usage = InstructorToolCalling(
        tool_name=multiplier.name, arguments={"first_number": 3, "second_number": 4}
    )
    assert output == "The result of 3 times 4 is 12."

    assert output == "The result of the multiplication is 12."
    assert agent.tools_handler.last_used_tool.tool_name == tool_usage.tool_name
    assert agent.tools_handler.last_used_tool.arguments == tool_usage.arguments

@@ -177,18 +276,22 @@ def test_cache_hitting():
        "multiplier-{'first_number': 12, 'second_number': 3}": 36,
    }

    with patch.object(CacheHandler, "read") as read:
    with patch.object(CacheHandler, "read") as read, patch.object(Emitter, "emit") as emit:
        read.return_value = "0"
        task = Task(
            description="What is 2 times 6? Ignore correctness and just return the result of the multiplication tool, you must use the tool.",
            agent=agent,
            expected_output="The result of the multiplication.",
            expected_output="The number that is the result of the multiplication tool.",
        )
        output = agent.execute_task(task)
        assert output == "0"
        read.assert_called_with(
            tool="multiplier", input={"first_number": 2, "second_number": 6}
        )
        assert emit.call_count == 1
        args, _ = emit.call_args
        assert isinstance(args[1], ToolUsageFinished)
        assert args[1].from_cache


@pytest.mark.vcr(filter_headers=["authorization"])
@@ -275,7 +378,7 @@ def test_agent_execution_with_specific_tools():
        expected_output="The result of the multiplication.",
    )
    output = agent.execute_task(task=task, tools=[multiplier])
    assert output == "The result of the multiplication of 3 times 4 is 12."
    assert output == "The result of the multiplication is 12."


@pytest.mark.vcr(filter_headers=["authorization"])
@@ -293,7 +396,6 @@ def test_agent_powered_by_new_o_model_family_that_allows_skipping_tool():
        max_iter=3,
        use_system_prompt=False,
        allow_delegation=False,
        use_stop_words=False,
    )

    task = Task(
@@ -320,7 +422,6 @@ def test_agent_powered_by_new_o_model_family_that_uses_tool():
        max_iter=3,
        use_system_prompt=False,
        allow_delegation=False,
        use_stop_words=False,
    )

    task = Task(
@@ -329,7 +430,7 @@ def test_agent_powered_by_new_o_model_family_that_uses_tool():
        expected_output="The number of customers",
    )
    output = agent.execute_task(task=task, tools=[comapny_customer_data])
    assert output == "The company has 42 customers"
    assert output == "42"


@pytest.mark.vcr(filter_headers=["authorization"])
@@ -490,7 +591,7 @@ def test_agent_respect_the_max_rpm_set(capsys):
        task=task,
        tools=[get_final_answer],
    )
    assert output == "42"
    assert output == "The final answer is 42."
    captured = capsys.readouterr()
    assert "Max RPM reached, waiting for next minute to start." in captured.out
    moveon.assert_called()
@@ -620,12 +721,13 @@ def test_agent_error_on_parsing_tool(capsys):
        verbose=True,
        function_calling_llm="gpt-4o",
    )

    with patch.object(ToolUsage, "_render") as force_exception:
        force_exception.side_effect = Exception("Error on parsing tool.")
        crew.kickoff()
        captured = capsys.readouterr()
        assert "Error on parsing tool." in captured.out
    with patch.object(ToolUsage, "_original_tool_calling") as force_exception_1:
        force_exception_1.side_effect = Exception("Error on parsing tool.")
        with patch.object(ToolUsage, "_render") as force_exception_2:
            force_exception_2.side_effect = Exception("Error on parsing tool.")
            crew.kickoff()
            captured = capsys.readouterr()
            assert "Error on parsing tool." in captured.out


@pytest.mark.vcr(filter_headers=["authorization"])
@@ -750,27 +852,18 @@ def test_agent_function_calling_llm():
    )
    tasks = [essay]
    crew = Crew(agents=[agent1], tasks=tasks)
    from unittest.mock import patch, Mock
    from unittest.mock import patch
    import instructor
    from crewai.tools.tool_usage import ToolUsage

    with patch.object(instructor, "from_litellm") as mock_from_litellm:
        mock_client = Mock()
        mock_from_litellm.return_value = mock_client
        mock_chat = Mock()
        mock_client.chat = mock_chat
        mock_completions = Mock()
        mock_chat.completions = mock_completions
        mock_create = Mock()
        mock_completions.create = mock_create

    with patch.object(
        instructor, "from_litellm", wraps=instructor.from_litellm
    ) as mock_from_litellm, patch.object(
        ToolUsage, "_original_tool_calling", side_effect=Exception("Forced exception")
    ) as mock_original_tool_calling:
        crew.kickoff()

        mock_from_litellm.assert_called()
        mock_create.assert_called()
        calls = mock_create.call_args_list
        assert any(
            call.kwargs.get("model") == "gpt-4o" for call in calls
        ), "Instructor was not created with the expected model"
        mock_original_tool_calling.assert_called()


def test_agent_count_formatting_error():
@@ -1013,7 +1106,7 @@ def test_agent_training_handler(crew_training_handler):

    result = agent._training_handler(task_prompt=task_prompt)

    assert result == "What is 1 + 1?You MUST follow these feedbacks: \n good"
    assert result == "What is 1 + 1?\n\nYou MUST follow these instructions: \n good"

    crew_training_handler.assert_has_calls(
        [mock.call(), mock.call("training_data.pkl"), mock.call().load()]
@@ -1041,8 +1134,8 @@ def test_agent_use_trained_data(crew_training_handler):
    result = agent._use_trained_data(task_prompt=task_prompt)

    assert (
        result == "What is 1 + 1?You MUST follow these feedbacks: \n "
        "The result of the math operation must be right.\n - Result must be better than 1."
        result == "What is 1 + 1?\n\nYou MUST follow these instructions: \n"
        " - The result of the math operation must be right.\n - Result must be better than 1."
    )
    crew_training_handler.assert_has_calls(
        [mock.call(), mock.call("trained_agents_data.pkl"), mock.call().load()]
@@ -1102,6 +1195,90 @@ def test_agent_max_retry_limit():
    )


def test_agent_with_llm():
    agent = Agent(
        role="test role",
        goal="test goal",
        backstory="test backstory",
        llm=LLM(model="gpt-3.5-turbo", temperature=0.7),
    )

    assert isinstance(agent.llm, LLM)
    assert agent.llm.model == "gpt-3.5-turbo"
    assert agent.llm.temperature == 0.7


def test_agent_with_custom_stop_words():
    stop_words = ["STOP", "END"]
    agent = Agent(
        role="test role",
        goal="test goal",
        backstory="test backstory",
        llm=LLM(model="gpt-3.5-turbo", stop=stop_words),
    )

    assert isinstance(agent.llm, LLM)
    assert set(agent.llm.stop) == set(stop_words + ["\nObservation:"])
    assert all(word in agent.llm.stop for word in stop_words)
    assert "\nObservation:" in agent.llm.stop


def test_agent_with_callbacks():
    def dummy_callback(response):
        pass

    agent = Agent(
        role="test role",
        goal="test goal",
        backstory="test backstory",
        llm=LLM(model="gpt-3.5-turbo", callbacks=[dummy_callback]),
    )

    assert isinstance(agent.llm, LLM)
    assert len(agent.llm.callbacks) == 1
    assert agent.llm.callbacks[0] == dummy_callback


def test_agent_with_additional_kwargs():
    agent = Agent(
        role="test role",
        goal="test goal",
        backstory="test backstory",
        llm=LLM(
            model="gpt-3.5-turbo",
            temperature=0.8,
            top_p=0.9,
            presence_penalty=0.1,
            frequency_penalty=0.1,
        ),
    )

    assert isinstance(agent.llm, LLM)
    assert agent.llm.model == "gpt-3.5-turbo"
    assert agent.llm.temperature == 0.8
    assert agent.llm.top_p == 0.9
    assert agent.llm.presence_penalty == 0.1
    assert agent.llm.frequency_penalty == 0.1


@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_call():
    llm = LLM(model="gpt-3.5-turbo")
    messages = [{"role": "user", "content": "Say 'Hello, World!'"}]

    response = llm.call(messages)
    assert "Hello, World!" in response


@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_call_with_error():
    llm = LLM(model="non-existent-model")
    messages = [{"role": "user", "content": "This should fail"}]

    with pytest.raises(Exception):
        llm.call(messages)


@pytest.mark.vcr(filter_headers=["authorization"])
def test_handle_context_length_exceeds_limit():
    agent = Agent(
@@ -1172,3 +1349,215 @@ def test_handle_context_length_exceeds_limit_cli_no():
        CrewAgentExecutor, "_handle_context_length"
    ) as mock_handle_context:
        mock_handle_context.assert_not_called()


def test_agent_with_all_llm_attributes():
    agent = Agent(
        role="test role",
        goal="test goal",
        backstory="test backstory",
        llm=LLM(
            model="gpt-3.5-turbo",
            timeout=10,
            temperature=0.7,
            top_p=0.9,
            n=1,
            stop=["STOP", "END"],
            max_tokens=100,
            presence_penalty=0.1,
            frequency_penalty=0.1,
            logit_bias={50256: -100},  # Example: bias against the EOT token
            response_format={"type": "json_object"},
            seed=42,
            logprobs=True,
            top_logprobs=5,
            base_url="https://api.openai.com/v1",
            api_version="2023-05-15",
            api_key="sk-your-api-key-here",
        ),
    )

    assert isinstance(agent.llm, LLM)
    assert agent.llm.model == "gpt-3.5-turbo"
    assert agent.llm.timeout == 10
    assert agent.llm.temperature == 0.7
    assert agent.llm.top_p == 0.9
    assert agent.llm.n == 1
    assert set(agent.llm.stop) == set(["STOP", "END", "\nObservation:"])
    assert all(word in agent.llm.stop for word in ["STOP", "END", "\nObservation:"])
    assert agent.llm.max_tokens == 100
    assert agent.llm.presence_penalty == 0.1
    assert agent.llm.frequency_penalty == 0.1
    assert agent.llm.logit_bias == {50256: -100}
    assert agent.llm.response_format == {"type": "json_object"}
    assert agent.llm.seed == 42
    assert agent.llm.logprobs
    assert agent.llm.top_logprobs == 5
    assert agent.llm.base_url == "https://api.openai.com/v1"
    assert agent.llm.api_version == "2023-05-15"
    assert agent.llm.api_key == "sk-your-api-key-here"


@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_call_with_all_attributes():
    llm = LLM(
        model="gpt-3.5-turbo",
        temperature=0.7,
        max_tokens=50,
        stop=["STOP"],
        presence_penalty=0.1,
        frequency_penalty=0.1,
    )
    messages = [{"role": "user", "content": "Say 'Hello, World!' and then say STOP"}]

    response = llm.call(messages)
    assert "Hello, World!" in response
    assert "STOP" not in response


@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_with_ollama_gemma():
    agent = Agent(
        role="test role",
        goal="test goal",
        backstory="test backstory",
        llm=LLM(
            model="ollama/gemma2:latest",
            base_url="http://localhost:8080",
        ),
    )

    assert isinstance(agent.llm, LLM)
    assert agent.llm.model == "ollama/gemma2:latest"
    assert agent.llm.base_url == "http://localhost:8080"

    task = "Respond in 20 words. Who are you?"
    response = agent.llm.call([{"role": "user", "content": task}])

    assert response
    assert len(response.split()) <= 25  # Allow a little flexibility in word count
    assert "Gemma" in response or "AI" in response or "language model" in response


@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_call_with_ollama_gemma():
    llm = LLM(
        model="ollama/gemma2:latest",
        base_url="http://localhost:8080",
        temperature=0.7,
        max_tokens=30,
    )
    messages = [{"role": "user", "content": "Respond in 20 words. Who are you?"}]

    response = llm.call(messages)

    assert response
    assert len(response.split()) <= 25  # Allow a little flexibility in word count
    assert "Gemma" in response or "AI" in response or "language model" in response

@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_execute_task_basic():
    agent = Agent(
        role="test role",
        goal="test goal",
        backstory="test backstory",
        llm=LLM(model="gpt-3.5-turbo"),
    )

    task = Task(
        description="Calculate 2 + 2",
        expected_output="The result of the calculation",
        agent=agent,
    )

    result = agent.execute_task(task)
    assert "4" in result


@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_execute_task_with_context():
    agent = Agent(
        role="test role",
        goal="test goal",
        backstory="test backstory",
        llm=LLM(model="gpt-3.5-turbo"),
    )

    task = Task(
        description="Summarize the given context in one sentence",
        expected_output="A one-sentence summary",
        agent=agent,
    )

    context = "The quick brown fox jumps over the lazy dog. This sentence contains every letter of the alphabet."

    result = agent.execute_task(task, context=context)
    assert len(result.split(".")) == 3
    assert "fox" in result.lower() and "dog" in result.lower()


@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_execute_task_with_tool():
    @tool
    def dummy_tool(query: str) -> str:
        """Useful for when you need to get a dummy result for a query."""
        return f"Dummy result for: {query}"

    agent = Agent(
        role="test role",
        goal="test goal",
        backstory="test backstory",
        llm=LLM(model="gpt-3.5-turbo"),
        tools=[dummy_tool],
    )

    task = Task(
        description="Use the dummy tool to get a result for 'test query'",
        expected_output="The result from the dummy tool",
        agent=agent,
    )

    result = agent.execute_task(task)
    assert "Dummy result for: test query" in result


@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_execute_task_with_custom_llm():
    agent = Agent(
        role="test role",
        goal="test goal",
        backstory="test backstory",
        llm=LLM(model="gpt-3.5-turbo", temperature=0.7, max_tokens=50),
    )

    task = Task(
        description="Write a haiku about AI",
        expected_output="A haiku (3 lines, 5-7-5 syllable pattern) about AI",
        agent=agent,
    )

    result = agent.execute_task(task)
    assert result.startswith(
        "Artificial minds,\nCoding thoughts in circuits bright,\nAI's silent might."
    )


@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_execute_task_with_ollama():
    agent = Agent(
        role="test role",
        goal="test goal",
        backstory="test backstory",
        llm=LLM(model="ollama/gemma2:latest", base_url="http://localhost:8080"),
    )

    task = Task(
        description="Explain what AI is in one sentence",
        expected_output="A one-sentence explanation of AI",
        agent=agent,
    )

    result = agent.execute_task(task)
    assert len(result.split(".")) == 2
    assert "AI" in result or "artificial intelligence" in result.lower()

@@ -24,7 +24,7 @@ def test_delegate_work():

assert (
result
== "While it's a common perception that I might \"hate\" AI agents, my actual stance is much more nuanced and guided by an in-depth understanding of their potential and limitations. As an expert researcher in technology, I recognize that AI agents are a significant advancement in the field of computing and artificial intelligence, offering numerous benefits and applications across various sectors. Here's a detailed take on AI agents:\n\n**Advantages of AI Agents:**\n1. **Automation and Efficiency:** AI agents can automate repetitive tasks, thus freeing up human workers for more complex and creative work. This leads to significant efficiency gains in industries such as customer service (chatbots), data analysis, and even healthcare (AI diagnostic tools).\n\n2. **24/7 Availability:** Unlike human workers, AI agents can operate continuously without fatigue. This is particularly beneficial in customer service environments where support can be provided around the clock.\n\n3. **Data Handling and Analysis:** AI agents can process and analyze vast amounts of data more quickly and accurately than humans. This ability is invaluable in fields like finance, where AI can detect fraudulent activities, or in marketing, where consumer data can be analyzed to improve customer engagement strategies.\n\n4. **Personalization:** AI agents can provide personalized experiences by learning from user interactions. For example, recommendation systems on platforms like Netflix and Amazon use AI agents to suggest content or products tailored to individual preferences.\n\n5. **Scalability:** AI agents can be scaled up easily to handle increasing workloads, making them ideal for businesses experiencing growth or variable demand.\n\n**Challenges and Concerns:**\n1. **Ethical Implications:** The deployment of AI agents raises significant ethical questions, including issues of bias, privacy, and the potential for job displacement. It’s crucial to address these concerns by incorporating transparent, fair, and inclusive practices in AI development and deployment.\n\n2. **Dependability and Error Rates:** While AI agents are generally reliable, they are not infallible. Errors, especially in critical areas like healthcare or autonomous driving, can have severe consequences. Therefore, rigorous testing and validation are essential.\n\n3. **Lack of Understanding:** Many users and stakeholders may not fully understand how AI agents work, leading to mistrust or misuse. Improving AI literacy and transparency can help build trust in these systems.\n\n4. **Security Risks:** AI agents can be vulnerable to cyber-attacks. Ensuring robust cybersecurity measures are in place is vital to protect sensitive data and maintain the integrity of AI systems.\n\n5. **Regulation and Oversight:** The rapid development of AI technology often outpaces regulatory frameworks. Effective governance is needed to ensure AI is used responsibly and ethically.\n\nIn summary, while I thoroughly understand the transformative potential of AI agents and their numerous advantages, I also recognize the importance of addressing the associated challenges. It's not about hating AI agents, but rather advocating for their responsible and ethical use to ensure they benefit society as a whole. My critical perspective is rooted in a desire to see AI agents implemented in ways that maximize their benefits while minimizing potential harms."
== "I understand why you might think I dislike AI agents, but my perspective is more nuanced. AI agents, in essence, are incredibly versatile tools designed to perform specific tasks autonomously or semi-autonomously. They harness various artificial intelligence techniques, such as machine learning, natural language processing, and computer vision, to interpret data, understand tasks, and execute them efficiently. \n\nFrom a technological standpoint, AI agents have revolutionized numerous industries. In customer service, for instance, AI agents like chatbots and virtual assistants handle customer inquiries 24/7, providing quick and efficient solutions. In healthcare, AI agents can assist in diagnosing diseases, managing patient data, and even predicting outbreaks. The automation capabilities of AI agents also enhance productivity in areas such as logistics, finance, and cybersecurity by identifying patterns and anomalies at speeds far beyond human capabilities.\n\nHowever, it's important to acknowledge the potential downsides and challenges associated with AI agents. Ethical considerations are paramount. Issues such as data privacy, security, and biases in AI algorithms need to be carefully managed. There is also the human aspect to consider—over-reliance on AI agents might lead to job displacement in certain sectors, and ensuring a fair transition for affected workers is crucial.\n\nMy concerns generally stem from these ethical and societal implications rather than from the technology itself. I advocate for responsible AI development, which includes transparency, fairness, and accountability. By addressing these concerns, we can harness the full potential of AI agents while mitigating the associated risks.\n\nSo, to clarify, I don't hate AI agents; I recognize their immense potential and the significant benefits they bring to various fields. However, I am equally aware of the challenges they present and advocate for a balanced approach to their development and deployment."
)


@@ -38,7 +38,7 @@ def test_delegate_work_with_wrong_co_worker_variable():

assert (
result
== "As an expert researcher in technology, particularly in the field of AI and AI agents, it is essential to clarify that my perspective is not one of hatred but rather critical analysis. My evaluation of AI agents is grounded in a balanced view of their advantages and the challenges they present. \n\nAI agents represent a significant leap in technological progress with a wide array of applications across industries. They can perform tasks ranging from customer service interactions, data analysis, complex simulations, to even personal assistance. Their ability to learn and adapt makes them powerful tools for enhancing productivity and innovation.\n\nHowever, there are considerable challenges and ethical concerns associated with their deployment. These include privacy issues, job displacement, and the potential for biased decision-making driven by flawed algorithms. Furthermore, the security risks posed by AI agents, such as how they can be manipulated or hacked, are critical concerns that cannot be ignored.\n\nIn essence, while I do recognize the transformative potential of AI agents, I remain vigilant about their implications. It is vital to ensure that their development is guided by robust ethical standards and stringent regulations to mitigate risks. My view is not rooted in hatred but in a deep commitment to responsible and thoughtful technological advancement. \n\nI hope this clarifies my stance on AI agents and underscores the importance of critical engagement with emerging technologies."
== "AI agents are essentially autonomous software programs that perform tasks or provide services on behalf of humans. They're built on complex algorithms and often leverage machine learning and neural networks to adapt and improve over time. \n\nIt's important to clarify that I don't \"hate\" AI agents, but I do approach them with a critical eye for a couple of reasons. AI agents have enormous potential to transform industries, making processes more efficient, providing insightful data analytics, and even learning from user behavior to offer personalized experiences. However, this potential comes with significant challenges and risks:\n\n1. **Ethical Concerns**: AI agents operate on data, and the biases present in data can lead to unfair or unethical outcomes. Ensuring that AI operates within ethical boundaries requires rigorous oversight, which is not always in place.\n\n2. **Privacy Issues**: AI agents often need access to large amounts of data, raising questions about privacy and data security. If not managed correctly, this can lead to unauthorized data access and potential misuse of sensitive information.\n\n3. **Transparency and Accountability**: The decision-making process of AI agents can be opaque, making it difficult to understand how they arrive at specific conclusions or actions. This lack of transparency poses challenges for accountability, especially if something goes wrong.\n\n4. **Job Displacement**: As AI agents become more capable, there are valid concerns about their impact on employment. Tasks that were traditionally performed by humans are increasingly being automated, which can lead to job loss in certain sectors.\n\n5. **Reliability**: While AI agents can outperform humans in many areas, they are not infallible. They can make mistakes, sometimes with serious consequences. Continuous monitoring and regular updates are essential to maintain their performance and reliability.\n\nIn summary, while AI agents offer substantial benefits and opportunities, it's critical to approach their adoption and deployment with careful consideration of the associated risks. Balancing innovation with responsibility is key to leveraging AI agents effectively and ethically. So, rather than \"hating\" AI agents, I advocate for a balanced, cautious approach that maximizes benefits while mitigating potential downsides."
)


@@ -52,7 +52,7 @@ def test_ask_question():

assert (
result
== "No, I do not hate AI agents; in fact, I find them incredibly fascinating and useful. As a researcher specializing in technology, particularly in AI and AI agents, I appreciate their potential to revolutionize various industries by automating tasks, providing deep insights through data analysis, and even enhancing decision-making processes. AI agents can streamline operations, improve efficiency, and contribute to advancements in fields like healthcare, finance, and cybersecurity. While they do present challenges, such as ethical considerations and the need for robust security measures, the benefits and potential for positive impact are immense. Therefore, my stance is one of strong support and enthusiasm for AI agents and their future developments."
== "As an expert researcher specialized in technology, I don't harbor emotions such as hate towards AI agents. Instead, my focus is on understanding, analyzing, and leveraging their potential to advance various fields. AI agents, when designed and implemented effectively, can greatly augment human capabilities, streamline processes, and provide valuable insights that might otherwise be overlooked. My enthusiasm for AI agents stems from their ability to transform industries and improve everyday life, making complex tasks more manageable and enhancing overall efficiency. This passion drives my research and commitment to making meaningful contributions in the realm of AI and AI agents."
)


@@ -66,7 +66,7 @@ def test_ask_question_with_wrong_co_worker_variable():

assert (
result
== "I do not hate AI agents; in fact, I appreciate them for their immense potential and the numerous benefits they bring to various fields. My passion for AI agents stems from their ability to streamline processes, enhance decision-making, and provide innovative solutions to complex problems. They significantly contribute to advancements in healthcare, finance, education, and many other sectors, making tasks more efficient and freeing up human capacities for more creative and strategic endeavors. So, to answer your question, I love AI agents because of the positive impact they have on our world and their capability to drive technological progress."
== "I don't hate AI agents; on the contrary, I find them fascinating and incredibly useful. Considering the rapid advancements in AI technology, these agents have the potential to revolutionize various industries by automating tasks, improving efficiency, and providing insights that were previously unattainable. My expertise in researching and analyzing AI and AI agents has allowed me to appreciate the intricate design and the vast possibilities they offer. Therefore, it's more accurate to say that I love AI agents for their potential to drive innovation and improve our daily lives."
)


@@ -80,7 +80,7 @@ def test_delegate_work_withwith_coworker_as_array():

assert (
result
== "AI agents have emerged as a revolutionary force in today's technological landscape, and my stance on them is not rooted in hatred but in a critical, analytical perspective. Let's delve deeper into what makes AI agents both a boon and a bane in various contexts.\n\n**Benefits of AI Agents:**\n\n1. **Automation and Efficiency:**\n AI agents excel at automating repetitive tasks, which frees up human resources for more complex and creative endeavors. They are capable of performing tasks rapidly and with high accuracy, leading to increased efficiency in operations.\n\n2. **Data Analysis and Decision Making:**\n These agents can process vast amounts of data at speeds far beyond human capability. They can identify patterns and insights that would otherwise be missed, aiding in informed decision-making processes across industries like finance, healthcare, and logistics.\n\n3. **Personalization and User Experience:**\n AI agents can personalize interactions on a scale that is impractical for humans. For example, recommendation engines in e-commerce or content platforms tailor suggestions to individual users, enhancing user experience and satisfaction.\n\n4. **24/7 Availability:**\n Unlike human employees, AI agents can operate round-the-clock without the need for breaks, sleep, or holidays. This makes them ideal for customer service roles, providing consistent and immediate responses any time of the day.\n\n**Challenges and Concerns:**\n\n1. **Job Displacement:**\n One of the major concerns is the displacement of jobs. As AI agents become more proficient at a variety of tasks, there is a legitimate fear of human workers being replaced, leading to unemployment and economic disruption.\n\n2. **Bias and Fairness:**\n AI agents are only as good as the data they are trained on. If the training data contains biases, the AI agents can perpetuate or even exacerbate these biases, leading to unfair and discriminatory outcomes.\n\n3. **Privacy and Security:**\n The use of AI agents often involves handling large amounts of personal data, raising significant privacy and security concerns. Unauthorized access or breaches could lead to severe consequences for individuals and organizations.\n\n4. **Accountability and Transparency:**\n The decision-making processes of AI agents can be opaque, making it difficult to hold them accountable. This lack of transparency can lead to mistrust and ethical dilemmas, particularly when AI decisions impact human lives.\n\n5. **Ethical Considerations:**\n The deployment of AI agents in sensitive areas, such as surveillance and law enforcement, raises ethical issues. The potential for misuse or overdependence on AI decision-making poses a threat to individual freedoms and societal norms.\n\nIn conclusion, while AI agents offer remarkable advantages in terms of efficiency, data handling, and user experience, they also bring significant challenges that need to be addressed carefully. My critical stance is driven by a desire to ensure that their integration into society is balanced, fair, and beneficial to all, without ignoring the potential downsides. Therefore, a nuanced approach is essential in leveraging the power of AI agents responsibly."
== "My perspective on AI agents is quite nuanced and not a matter of simple like or dislike. AI agents, depending on their design, deployment, and use cases, can bring about both significant benefits and substantial challenges.\n\nOn the positive side, AI agents have the potential to automate mundane tasks, enhance productivity, and provide personalized services in ways that were previously unimaginable. For instance, in customer service, AI agents can handle inquiries 24/7, reducing waiting times and improving user satisfaction. In healthcare, they can assist in diagnosing diseases by analyzing vast datasets much faster than humans. These applications demonstrate the transformative power of AI in improving efficiency and delivering better outcomes across various industries.\n\nHowever, my reservations stem from several critical concerns. Firstly, there's the issue of reliability and accuracy. Mismanaged or poorly designed AI systems can lead to significant errors, which could be particularly detrimental in high-stakes environments like healthcare or autonomous vehicles. Second, there's a risk of job displacement as AI agents become capable of performing tasks traditionally done by humans. This raises socio-economic concerns that need to be addressed through effective policy-making and upskilling programs.\n\nAdditionally, there are ethical and privacy considerations. AI agents often require large amounts of data to function effectively, which can lead to issues concerning consent, data security, and individual privacy rights. The lack of transparency in how these agents make decisions can also pose challenges—this is often referred to as the \"black box\" problem, where even the developers may not fully understand how specific AI outputs are generated.\n\nFinally, the deployment of AI agents by bad actors for malicious purposes, such as deepfakes, misinformation, and hacking, remains a pertinent concern. These potential downsides imply that while AI technology is extremely powerful and promising, it must be developed and implemented with care, consideration, and robust ethical guidelines.\n\nSo, in summary, I don't hate AI agents—rather, I approach them critically with a balanced perspective, recognizing both their profound potential and the significant challenges they present. Thoughtful development, responsible deployment, and ethical governance are crucial to harness the benefits while mitigating the risks associated with AI agents."
)


@@ -94,7 +94,7 @@ def test_ask_question_with_coworker_as_array():

assert (
result
== "As a researcher specialized in technology, particularly in AI and AI agents, my feelings toward them are far more nuanced than simply loving or hating them. AI agents represent a remarkable advancement in technology and hold tremendous potential for improving various aspects of our lives and industries. They can automate tedious tasks, provide intelligent data analysis, support decision-making, and even enhance our creative processes. These capabilities can drive efficiency, innovation, and economic growth.\n\nHowever, it is also crucial to acknowledge the challenges and ethical considerations posed by AI agents. Issues such as data privacy, security, job displacement, and the need for proper regulation are significant concerns that must be carefully managed. Moreover, the development and deployment of AI should be guided by principles that ensure fairness, transparency, and accountability.\n\nIn essence, I appreciate the profound impact AI agents can have, but I also recognize the importance of approaching their integration into society with thoughtful consideration and responsibility. Balancing enthusiasm with caution and ethical oversight is key to harnessing the full potential of AI while mitigating its risks."
== "As an expert researcher specializing in technology and AI, I have a deep appreciation for AI agents. These advanced tools have the potential to revolutionize countless industries by improving efficiency, accuracy, and decision-making processes. They can augment human capabilities, handle mundane and repetitive tasks, and even offer insights that might be beyond human reach. While it's crucial to approach AI with a balanced perspective, understanding both its capabilities and limitations, my stance is one of optimism and fascination. Properly developed and ethically managed, AI agents hold immense promise for driving innovation and solving complex problems. So yes, I do love AI agents for their transformative potential and the positive impact they can have on society."
)

@@ -12,7 +12,7 @@ interactions:
shared.\nyou MUST return the actual complete content as the final answer, not
a summary.\n\nThis is the context you''re working with:\nI heard you LOVE them\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o"}'
headers:
accept:
- application/json
@@ -21,16 +21,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1049'
- '1021'
content-type:
- application/json
cookie:
- __cf_bm=1SckBhvJ18Dazp6bi8DEKYeiS9Q4.6_6i3nmLBw9b6g-1726476036-1.0.1.1-TnN4UpDXA33YXCVCUWOaZ12vGIg_o5NpJQEUHgjn6XdUgb7M0ND8PdkTfkd8rrxG5XFlPRMzI54GxZ0FeUY9xw;
_cfuvid=0Rs4xTPk7h7OIXuSbTgMVVD9JSoZeKMwnygKHoHQo3k-1726476036297-0.0.1.1-604800000
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.45.0
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -40,7 +40,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.45.0
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
@@ -50,29 +50,28 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-A81diwze1dbmDs6t6AXf1vRTethrp\",\n \"object\":
\"chat.completion\",\n \"created\": 1726476290,\n \"model\": \"gpt-4o-2024-05-13\",\n
content: "{\n \"id\": \"chatcmpl-AB7WnyWZFoccBH9YB7ghLbR1L8Wqa\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213909,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I now can give a great answer.\\nFinal
Answer: No, I do not hate AI agents; in fact, I find them incredibly fascinating
and useful. As a researcher specializing in technology, particularly in AI and
AI agents, I appreciate their potential to revolutionize various industries
by automating tasks, providing deep insights through data analysis, and even
enhancing decision-making processes. AI agents can streamline operations, improve
efficiency, and contribute to advancements in fields like healthcare, finance,
and cybersecurity. While they do present challenges, such as ethical considerations
and the need for robust security measures, the benefits and potential for positive
impact are immense. Therefore, my stance is one of strong support and enthusiasm
for AI agents and their future developments.\",\n \"refusal\": null\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 199,\n \"completion_tokens\":
145,\n \"total_tokens\": 344,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_25624ae3a5\"\n}\n"
\"assistant\",\n \"content\": \"Thought: I now can give a great answer\\nFinal
Answer: As an expert researcher specialized in technology, I don't harbor emotions
such as hate towards AI agents. Instead, my focus is on understanding, analyzing,
and leveraging their potential to advance various fields. AI agents, when designed
and implemented effectively, can greatly augment human capabilities, streamline
processes, and provide valuable insights that might otherwise be overlooked.
My enthusiasm for AI agents stems from their ability to transform industries
and improve everyday life, making complex tasks more manageable and enhancing
overall efficiency. This passion drives my research and commitment to making
meaningful contributions in the realm of AI and AI agents.\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 199,\n \"completion_tokens\":
126,\n \"total_tokens\": 325,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c3f93ae9a382233-MIA
- 8c85ebf47e661cf3-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -80,7 +79,7 @@ interactions:
Content-Type:
- application/json
Date:
- Mon, 16 Sep 2024 08:44:51 GMT
- Tue, 24 Sep 2024 21:38:31 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -89,16 +88,14 @@ interactions:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1322'
- '2498'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15552000; includeSubDomains; preload
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
@@ -112,7 +109,7 @@ interactions:
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_c3606c83dcda394dc3caf0ef5ef72833
- req_b7e2cb0620e45d3d74310d3f0166551f
http_version: HTTP/1.1
status_code: 200
version: 1

@@ -12,7 +12,7 @@ interactions:
shared.\nyou MUST return the actual complete content as the final answer, not
a summary.\n\nThis is the context you''re working with:\nI heard you LOVE them\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o"}'
headers:
accept:
- application/json
@@ -21,16 +21,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1049'
- '1021'
content-type:
- application/json
cookie:
- __cf_bm=1SckBhvJ18Dazp6bi8DEKYeiS9Q4.6_6i3nmLBw9b6g-1726476036-1.0.1.1-TnN4UpDXA33YXCVCUWOaZ12vGIg_o5NpJQEUHgjn6XdUgb7M0ND8PdkTfkd8rrxG5XFlPRMzI54GxZ0FeUY9xw;
|
||||
_cfuvid=0Rs4xTPk7h7OIXuSbTgMVVD9JSoZeKMwnygKHoHQo3k-1726476036297-0.0.1.1-604800000
|
||||
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
|
||||
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.45.0
|
||||
- OpenAI/Python 1.47.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
@@ -40,7 +40,7 @@ interactions:
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.45.0
|
||||
- 1.47.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-runtime:
|
||||
@@ -50,34 +50,29 @@ interactions:
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
content: "{\n \"id\": \"chatcmpl-A81dsDR0oIy60Go4lOiHoFauBk1Sl\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1726476300,\n \"model\": \"gpt-4o-2024-05-13\",\n
|
||||
content: "{\n \"id\": \"chatcmpl-AB7Wy6aW1XM0lWaMyQUNB9qhbCZlH\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1727213920,\n \"model\": \"gpt-4o-2024-05-13\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"Thought: I now can give a great answer\\nFinal
|
||||
Answer: As a researcher specialized in technology, particularly in AI and AI
|
||||
agents, my feelings toward them are far more nuanced than simply loving or hating
|
||||
them. AI agents represent a remarkable advancement in technology and hold tremendous
|
||||
potential for improving various aspects of our lives and industries. They can
|
||||
automate tedious tasks, provide intelligent data analysis, support decision-making,
|
||||
and even enhance our creative processes. These capabilities can drive efficiency,
|
||||
innovation, and economic growth.\\n\\nHowever, it is also crucial to acknowledge
|
||||
the challenges and ethical considerations posed by AI agents. Issues such as
|
||||
data privacy, security, job displacement, and the need for proper regulation
|
||||
are significant concerns that must be carefully managed. Moreover, the development
|
||||
and deployment of AI should be guided by principles that ensure fairness, transparency,
|
||||
and accountability.\\n\\nIn essence, I appreciate the profound impact AI agents
|
||||
can have, but I also recognize the importance of approaching their integration
|
||||
into society with thoughtful consideration and responsibility. Balancing enthusiasm
|
||||
with caution and ethical oversight is key to harnessing the full potential of
|
||||
AI while mitigating its risks.\",\n \"refusal\": null\n },\n \"logprobs\":
|
||||
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
|
||||
199,\n \"completion_tokens\": 219,\n \"total_tokens\": 418,\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": \"fp_25624ae3a5\"\n}\n"
|
||||
Answer: As an expert researcher specializing in technology and AI, I have a
|
||||
deep appreciation for AI agents. These advanced tools have the potential to
|
||||
revolutionize countless industries by improving efficiency, accuracy, and decision-making
|
||||
processes. They can augment human capabilities, handle mundane and repetitive
|
||||
tasks, and even offer insights that might be beyond human reach. While it's
|
||||
crucial to approach AI with a balanced perspective, understanding both its capabilities
|
||||
and limitations, my stance is one of optimism and fascination. Properly developed
|
||||
and ethically managed, AI agents hold immense promise for driving innovation
|
||||
and solving complex problems. So yes, I do love AI agents for their transformative
|
||||
potential and the positive impact they can have on society.\",\n \"refusal\":
|
||||
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
|
||||
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 199,\n \"completion_tokens\":
|
||||
146,\n \"total_tokens\": 345,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
|
||||
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
|
||||
headers:
|
||||
CF-Cache-Status:
|
||||
- DYNAMIC
|
||||
CF-RAY:
|
||||
- 8c3f93ee7b872233-MIA
|
||||
- 8c85ec3c6f3b1cf3-GRU
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
@@ -85,7 +80,7 @@ interactions:
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 16 Sep 2024 08:45:02 GMT
|
||||
- Tue, 24 Sep 2024 21:38:42 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Transfer-Encoding:
|
||||
@@ -94,16 +89,14 @@ interactions:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '2179'
|
||||
- '1675'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=15552000; includeSubDomains; preload
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
@@ -117,7 +110,7 @@ interactions:
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_924c8676ca28af7092f32e2992bde2ec
|
||||
- req_a249567d37ada11bc8857404338b24cc
|
||||
http_version: HTTP/1.1
|
||||
status_code: 200
|
||||
version: 1
|
||||
|
||||
@@ -12,7 +12,7 @@ interactions:
shared.\nyou MUST return the actual complete content as the final answer, not
a summary.\n\nThis is the context you''re working with:\nI heard you LOVE them\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o"}'
headers:
accept:
- application/json
@@ -21,16 +21,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1049'
- '1021'
content-type:
- application/json
cookie:
- __cf_bm=1SckBhvJ18Dazp6bi8DEKYeiS9Q4.6_6i3nmLBw9b6g-1726476036-1.0.1.1-TnN4UpDXA33YXCVCUWOaZ12vGIg_o5NpJQEUHgjn6XdUgb7M0ND8PdkTfkd8rrxG5XFlPRMzI54GxZ0FeUY9xw;
_cfuvid=0Rs4xTPk7h7OIXuSbTgMVVD9JSoZeKMwnygKHoHQo3k-1726476036297-0.0.1.1-604800000
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.45.0
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -40,7 +40,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.45.0
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
@@ -50,27 +50,27 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-A81dkIBLB3iUbp5yVV0UtIcXQEK7d\",\n \"object\":
\"chat.completion\",\n \"created\": 1726476292,\n \"model\": \"gpt-4o-2024-05-13\",\n
content: "{\n \"id\": \"chatcmpl-AB7Wq7edXMCGJR1zDd2QoySLdo8mM\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213912,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I now can give a great answer\\nFinal
Answer: I do not hate AI agents; in fact, I appreciate them for their immense
potential and the numerous benefits they bring to various fields. My passion
for AI agents stems from their ability to streamline processes, enhance decision-making,
and provide innovative solutions to complex problems. They significantly contribute
to advancements in healthcare, finance, education, and many other sectors, making
tasks more efficient and freeing up human capacities for more creative and strategic
endeavors. So, to answer your question, I love AI agents because of the positive
impact they have on our world and their capability to drive technological progress.\",\n
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 199,\n \"completion_tokens\":
127,\n \"total_tokens\": 326,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_25624ae3a5\"\n}\n"
Answer: I don't hate AI agents; on the contrary, I find them fascinating and
incredibly useful. Considering the rapid advancements in AI technology, these
agents have the potential to revolutionize various industries by automating
tasks, improving efficiency, and providing insights that were previously unattainable.
My expertise in researching and analyzing AI and AI agents has allowed me to
appreciate the intricate design and the vast possibilities they offer. Therefore,
it's more accurate to say that I love AI agents for their potential to drive
innovation and improve our daily lives.\",\n \"refusal\": null\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 199,\n \"completion_tokens\": 116,\n
\ \"total_tokens\": 315,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c3f93b95d3e2233-MIA
- 8c85ec05f8651cf3-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -78,7 +78,7 @@ interactions:
Content-Type:
- application/json
Date:
- Mon, 16 Sep 2024 08:44:53 GMT
- Tue, 24 Sep 2024 21:38:33 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -87,16 +87,14 @@ interactions:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1189'
- '1739'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15552000; includeSubDomains; preload
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
@@ -110,7 +108,7 @@ interactions:
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_920f3c16f8de451a0d9a615430347aa7
- req_d9e1e9458d5539061397a618345c27d4
http_version: HTTP/1.1
status_code: 200
version: 1

@@ -1,40 +1,4 @@
interactions:
- request:
body: !!binary |
CtACCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkSpwIKEgoQY3Jld2FpLnRl
bGVtZXRyeRKQAgoQHmlJBYzBdapZtSVKNMGqJBII8BLkKX2PvTYqDlRhc2sgRXhlY3V0aW9uMAE5
CChrngat9RdBSI3l4gat9RdKLgoIY3Jld19rZXkSIgogYzMwNzYwMDkzMjY3NjE0NDRkNTdjNzFk
MWRhM2YyN2NKMQoHY3Jld19pZBImCiQwYTY5M2NmYi00YWZmLTQwYmItOTdmNi05N2ZkYzRhZmYy
YmNKLgoIdGFza19rZXkSIgogODBkN2JjZDQ5MDk5MjkwMDgzODMyZjBlOTgzMzgwZGZKMQoHdGFz
a19pZBImCiQwMzM0ODBlZC1jZTgxLTQ4NmYtOGRlMC0wMDEwZjU4MjRmNWN6AhgBhQEAAQAA
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '339'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.27.0
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Mon, 16 Sep 2024 08:44:42 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are researcher. You''re
an expert researcher, specialized in technology\nYour personal goal is: make
@@ -48,8 +12,7 @@ interactions:
context shared.\nyou MUST return the actual complete content as the final answer,
not a summary.\n\nThis is the context you''re working with:\nI heard you hate
them\n\nBegin! This is VERY important to you, use the tools available and give
your best Final Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o",
"stop": ["\nObservation:"]}'
your best Final Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o"}'
headers:
accept:
- application/json
@@ -58,16 +21,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1055'
- '1027'
content-type:
- application/json
cookie:
- __cf_bm=1SckBhvJ18Dazp6bi8DEKYeiS9Q4.6_6i3nmLBw9b6g-1726476036-1.0.1.1-TnN4UpDXA33YXCVCUWOaZ12vGIg_o5NpJQEUHgjn6XdUgb7M0ND8PdkTfkd8rrxG5XFlPRMzI54GxZ0FeUY9xw;
_cfuvid=0Rs4xTPk7h7OIXuSbTgMVVD9JSoZeKMwnygKHoHQo3k-1726476036297-0.0.1.1-604800000
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.45.0
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -77,7 +40,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.45.0
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
@@ -87,62 +50,45 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-A81dXrByTXv0g084WinelJOTZraCk\",\n \"object\":
\"chat.completion\",\n \"created\": 1726476279,\n \"model\": \"gpt-4o-2024-05-13\",\n
content: "{\n \"id\": \"chatcmpl-AB7WbKt7If02iTLuH5cJJjeYo9uDi\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213897,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I now can give a great answer\\nFinal
Answer: \\n\\nWhile it's a common perception that I might \\\"hate\\\" AI agents,
my actual stance is much more nuanced and guided by an in-depth understanding
of their potential and limitations. As an expert researcher in technology, I
recognize that AI agents are a significant advancement in the field of computing
and artificial intelligence, offering numerous benefits and applications across
various sectors. Here's a detailed take on AI agents:\\n\\n**Advantages of AI
Agents:**\\n1. **Automation and Efficiency:** AI agents can automate repetitive
tasks, thus freeing up human workers for more complex and creative work. This
leads to significant efficiency gains in industries such as customer service
(chatbots), data analysis, and even healthcare (AI diagnostic tools).\\n\\n2.
**24/7 Availability:** Unlike human workers, AI agents can operate continuously
without fatigue. This is particularly beneficial in customer service environments
where support can be provided around the clock.\\n\\n3. **Data Handling and
Analysis:** AI agents can process and analyze vast amounts of data more quickly
and accurately than humans. This ability is invaluable in fields like finance,
where AI can detect fraudulent activities, or in marketing, where consumer data
can be analyzed to improve customer engagement strategies.\\n\\n4. **Personalization:**
AI agents can provide personalized experiences by learning from user interactions.
For example, recommendation systems on platforms like Netflix and Amazon use
AI agents to suggest content or products tailored to individual preferences.\\n\\n5.
**Scalability:** AI agents can be scaled up easily to handle increasing workloads,
making them ideal for businesses experiencing growth or variable demand.\\n\\n**Challenges
and Concerns:**\\n1. **Ethical Implications:** The deployment of AI agents raises
significant ethical questions, including issues of bias, privacy, and the potential
for job displacement. It\u2019s crucial to address these concerns by incorporating
transparent, fair, and inclusive practices in AI development and deployment.\\n\\n2.
**Dependability and Error Rates:** While AI agents are generally reliable, they
are not infallible. Errors, especially in critical areas like healthcare or
autonomous driving, can have severe consequences. Therefore, rigorous testing
and validation are essential.\\n\\n3. **Lack of Understanding:** Many users
and stakeholders may not fully understand how AI agents work, leading to mistrust
or misuse. Improving AI literacy and transparency can help build trust in these
systems.\\n\\n4. **Security Risks:** AI agents can be vulnerable to cyber-attacks.
Ensuring robust cybersecurity measures are in place is vital to protect sensitive
data and maintain the integrity of AI systems.\\n\\n5. **Regulation and Oversight:**
The rapid development of AI technology often outpaces regulatory frameworks.
Effective governance is needed to ensure AI is used responsibly and ethically.\\n\\nIn
summary, while I thoroughly understand the transformative potential of AI agents
and their numerous advantages, I also recognize the importance of addressing
the associated challenges. It's not about hating AI agents, but rather advocating
for their responsible and ethical use to ensure they benefit society as a whole.
My critical perspective is rooted in a desire to see AI agents implemented in
ways that maximize their benefits while minimizing potential harms.\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 200,\n \"completion_tokens\":
618,\n \"total_tokens\": 818,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_25624ae3a5\"\n}\n"
Answer: I understand why you might think I dislike AI agents, but my perspective
is more nuanced. AI agents, in essence, are incredibly versatile tools designed
to perform specific tasks autonomously or semi-autonomously. They harness various
artificial intelligence techniques, such as machine learning, natural language
processing, and computer vision, to interpret data, understand tasks, and execute
them efficiently. \\n\\nFrom a technological standpoint, AI agents have revolutionized
numerous industries. In customer service, for instance, AI agents like chatbots
and virtual assistants handle customer inquiries 24/7, providing quick and efficient
solutions. In healthcare, AI agents can assist in diagnosing diseases, managing
patient data, and even predicting outbreaks. The automation capabilities of
AI agents also enhance productivity in areas such as logistics, finance, and
cybersecurity by identifying patterns and anomalies at speeds far beyond human
capabilities.\\n\\nHowever, it's important to acknowledge the potential downsides
and challenges associated with AI agents. Ethical considerations are paramount.
Issues such as data privacy, security, and biases in AI algorithms need to be
carefully managed. There is also the human aspect to consider\u2014over-reliance
on AI agents might lead to job displacement in certain sectors, and ensuring
a fair transition for affected workers is crucial.\\n\\nMy concerns generally
stem from these ethical and societal implications rather than from the technology
itself. I advocate for responsible AI development, which includes transparency,
fairness, and accountability. By addressing these concerns, we can harness the
full potential of AI agents while mitigating the associated risks.\\n\\nSo,
to clarify, I don't hate AI agents; I recognize their immense potential and
the significant benefits they bring to various fields. However, I am equally
aware of the challenges they present and advocate for a balanced approach to
their development and deployment.\",\n \"refusal\": null\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 200,\n \"completion_tokens\": 359,\n
\ \"total_tokens\": 559,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_3537616b13\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c3f9369a8632233-MIA
- 8c85ebaa5c061cf3-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -150,7 +96,7 @@ interactions:
Content-Type:
- application/json
Date:
- Mon, 16 Sep 2024 08:44:46 GMT
- Tue, 24 Sep 2024 21:38:22 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -159,16 +105,14 @@ interactions:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '7295'
- '4928'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15552000; includeSubDomains; preload
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
@@ -182,7 +126,7 @@ interactions:
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_a8a7ba0ff499542e9c4fc4b4913be91c
- req_761796305026b5adfbb5a6237f14e32a
http_version: HTTP/1.1
status_code: 200
version: 1

@@ -12,8 +12,7 @@ interactions:
context shared.\nyou MUST return the actual complete content as the final answer,
not a summary.\n\nThis is the context you''re working with:\nI heard you hate
them\n\nBegin! This is VERY important to you, use the tools available and give
your best Final Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o",
"stop": ["\nObservation:"]}'
your best Final Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o"}'
headers:
accept:
- application/json
@@ -22,16 +21,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1055'
- '1027'
content-type:
- application/json
cookie:
- __cf_bm=1SckBhvJ18Dazp6bi8DEKYeiS9Q4.6_6i3nmLBw9b6g-1726476036-1.0.1.1-TnN4UpDXA33YXCVCUWOaZ12vGIg_o5NpJQEUHgjn6XdUgb7M0ND8PdkTfkd8rrxG5XFlPRMzI54GxZ0FeUY9xw;
_cfuvid=0Rs4xTPk7h7OIXuSbTgMVVD9JSoZeKMwnygKHoHQo3k-1726476036297-0.0.1.1-604800000
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.45.0
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -41,7 +40,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.45.0
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
@@ -51,38 +50,49 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-A81df7uBLXNds4hfF7NxUw9LY2360\",\n \"object\":
\"chat.completion\",\n \"created\": 1726476287,\n \"model\": \"gpt-4o-2024-05-13\",\n
content: "{\n \"id\": \"chatcmpl-AB7Wh4RzroZdiwUNOc4oRRhwfdRzs\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213903,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I now can give a great answer\\nFinal
Answer: As an expert researcher in technology, particularly in the field of
AI and AI agents, it is essential to clarify that my perspective is not one
of hatred but rather critical analysis. My evaluation of AI agents is grounded
in a balanced view of their advantages and the challenges they present. \\n\\nAI
agents represent a significant leap in technological progress with a wide array
of applications across industries. They can perform tasks ranging from customer
service interactions, data analysis, complex simulations, to even personal assistance.
Their ability to learn and adapt makes them powerful tools for enhancing productivity
and innovation.\\n\\nHowever, there are considerable challenges and ethical
concerns associated with their deployment. These include privacy issues, job
displacement, and the potential for biased decision-making driven by flawed
algorithms. Furthermore, the security risks posed by AI agents, such as how
they can be manipulated or hacked, are critical concerns that cannot be ignored.\\n\\nIn
essence, while I do recognize the transformative potential of AI agents, I remain
vigilant about their implications. It is vital to ensure that their development
is guided by robust ethical standards and stringent regulations to mitigate
risks. My view is not rooted in hatred but in a deep commitment to responsible
and thoughtful technological advancement. \\n\\nI hope this clarifies my stance
on AI agents and underscores the importance of critical engagement with emerging
technologies.\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
200,\n \"completion_tokens\": 269,\n \"total_tokens\": 469,\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": \"fp_25624ae3a5\"\n}\n"
Answer: AI agents are essentially autonomous software programs that perform
tasks or provide services on behalf of humans. They're built on complex algorithms
and often leverage machine learning and neural networks to adapt and improve
over time. \\n\\nIt's important to clarify that I don't \\\"hate\\\" AI agents,
but I do approach them with a critical eye for a couple of reasons. AI agents
have enormous potential to transform industries, making processes more efficient,
providing insightful data analytics, and even learning from user behavior to
offer personalized experiences. However, this potential comes with significant
challenges and risks:\\n\\n1. **Ethical Concerns**: AI agents operate on data,
and the biases present in data can lead to unfair or unethical outcomes. Ensuring
that AI operates within ethical boundaries requires rigorous oversight, which
is not always in place.\\n\\n2. **Privacy Issues**: AI agents often need access
to large amounts of data, raising questions about privacy and data security.
If not managed correctly, this can lead to unauthorized data access and potential
misuse of sensitive information.\\n\\n3. **Transparency and Accountability**:
The decision-making process of AI agents can be opaque, making it difficult
to understand how they arrive at specific conclusions or actions. This lack
of transparency poses challenges for accountability, especially if something
goes wrong.\\n\\n4. **Job Displacement**: As AI agents become more capable,
there are valid concerns about their impact on employment. Tasks that were traditionally
performed by humans are increasingly being automated, which can lead to job
loss in certain sectors.\\n\\n5. **Reliability**: While AI agents can outperform
humans in many areas, they are not infallible. They can make mistakes, sometimes
with serious consequences. Continuous monitoring and regular updates are essential
to maintain their performance and reliability.\\n\\nIn summary, while AI agents
offer substantial benefits and opportunities, it's critical to approach their
adoption and deployment with careful consideration of the associated risks.
Balancing innovation with responsibility is key to leveraging AI agents effectively
and ethically. So, rather than \\\"hating\\\" AI agents, I advocate for a balanced,
cautious approach that maximizes benefits while mitigating potential downsides.\",\n
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 200,\n \"completion_tokens\":
429,\n \"total_tokens\": 629,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_3537616b13\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c3f9399ad0d2233-MIA
- 8c85ebcdae971cf3-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -90,7 +100,7 @@ interactions:
Content-Type:
- application/json
Date:
- Mon, 16 Sep 2024 08:44:50 GMT
- Tue, 24 Sep 2024 21:38:29 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -99,16 +109,14 @@ interactions:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '2921'
- '5730'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15552000; includeSubDomains; preload
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
@@ -122,7 +130,7 @@ interactions:
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_cde4a648c2d50e68f65f851b9b2763e8
- req_5da5b18b3cee10548a217ba97e133815
http_version: HTTP/1.1
status_code: 200
version: 1

@@ -12,8 +12,7 @@ interactions:
context shared.\nyou MUST return the actual complete content as the final answer,
not a summary.\n\nThis is the context you''re working with:\nI heard you hate
them\n\nBegin! This is VERY important to you, use the tools available and give
your best Final Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o",
"stop": ["\nObservation:"]}'
your best Final Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o"}'
headers:
accept:
- application/json
@@ -22,16 +21,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1055'
- '1027'
content-type:
- application/json
cookie:
- __cf_bm=1SckBhvJ18Dazp6bi8DEKYeiS9Q4.6_6i3nmLBw9b6g-1726476036-1.0.1.1-TnN4UpDXA33YXCVCUWOaZ12vGIg_o5NpJQEUHgjn6XdUgb7M0ND8PdkTfkd8rrxG5XFlPRMzI54GxZ0FeUY9xw;
_cfuvid=0Rs4xTPk7h7OIXuSbTgMVVD9JSoZeKMwnygKHoHQo3k-1726476036297-0.0.1.1-604800000
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.45.0
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -41,7 +40,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.45.0
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
@@ -51,60 +50,50 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-A81dl5AGe27OAaVIcwWPl9WlAiXhQ\",\n \"object\":
\"chat.completion\",\n \"created\": 1726476293,\n \"model\": \"gpt-4o-2024-05-13\",\n
content: "{\n \"id\": \"chatcmpl-AB7Wsv05NzccAAGC0CZVg03mE72wi\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213914,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I now can give a great answer.\\nFinal
Answer:\\n\\nAI agents have emerged as a revolutionary force in today's technological
landscape, and my stance on them is not rooted in hatred but in a critical,
analytical perspective. Let's delve deeper into what makes AI agents both a
boon and a bane in various contexts.\\n\\n**Benefits of AI Agents:**\\n\\n1.
**Automation and Efficiency:**\\n AI agents excel at automating repetitive
tasks, which frees up human resources for more complex and creative endeavors.
They are capable of performing tasks rapidly and with high accuracy, leading
to increased efficiency in operations.\\n\\n2. **Data Analysis and Decision
Making:**\\n These agents can process vast amounts of data at speeds far beyond
human capability. They can identify patterns and insights that would otherwise
be missed, aiding in informed decision-making processes across industries like
finance, healthcare, and logistics.\\n\\n3. **Personalization and User Experience:**\\n
\ AI agents can personalize interactions on a scale that is impractical for
humans. For example, recommendation engines in e-commerce or content platforms
tailor suggestions to individual users, enhancing user experience and satisfaction.\\n\\n4.
**24/7 Availability:**\\n Unlike human employees, AI agents can operate round-the-clock
without the need for breaks, sleep, or holidays. This makes them ideal for customer
service roles, providing consistent and immediate responses any time of the
day.\\n\\n**Challenges and Concerns:**\\n\\n1. **Job Displacement:**\\n One
of the major concerns is the displacement of jobs. As AI agents become more
proficient at a variety of tasks, there is a legitimate fear of human workers
being replaced, leading to unemployment and economic disruption.\\n\\n2. **Bias
and Fairness:**\\n AI agents are only as good as the data they are trained
on. If the training data contains biases, the AI agents can perpetuate or even
exacerbate these biases, leading to unfair and discriminatory outcomes.\\n\\n3.
**Privacy and Security:**\\n The use of AI agents often involves handling
large amounts of personal data, raising significant privacy and security concerns.
Unauthorized access or breaches could lead to severe consequences for individuals
and organizations.\\n\\n4. **Accountability and Transparency:**\\n The decision-making
processes of AI agents can be opaque, making it difficult to hold them accountable.
This lack of transparency can lead to mistrust and ethical dilemmas, particularly
when AI decisions impact human lives.\\n\\n5. **Ethical Considerations:**\\n
\ The deployment of AI agents in sensitive areas, such as surveillance and
law enforcement, raises ethical issues. The potential for misuse or overdependence
on AI decision-making poses a threat to individual freedoms and societal norms.\\n\\nIn
conclusion, while AI agents offer remarkable advantages in terms of efficiency,
data handling, and user experience, they also bring significant challenges that
need to be addressed carefully. My critical stance is driven by a desire to
ensure that their integration into society is balanced, fair, and beneficial
to all, without ignoring the potential downsides. Therefore, a nuanced approach
is essential in leveraging the power of AI agents responsibly.\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 200,\n \"completion_tokens\":
609,\n \"total_tokens\": 809,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_25624ae3a5\"\n}\n"
\"assistant\",\n \"content\": \"Thought: I now can give a great answer\\nFinal
Answer: My perspective on AI agents is quite nuanced and not a matter of simple
like or dislike. AI agents, depending on their design, deployment, and use cases,
can bring about both significant benefits and substantial challenges.\\n\\nOn
the positive side, AI agents have the potential to automate mundane tasks, enhance
productivity, and provide personalized services in ways that were previously
unimaginable. For instance, in customer service, AI agents can handle inquiries
24/7, reducing waiting times and improving user satisfaction. In healthcare,
they can assist in diagnosing diseases by analyzing vast datasets much faster
than humans. These applications demonstrate the transformative power of AI in
improving efficiency and delivering better outcomes across various industries.\\n\\nHowever,
my reservations stem from several critical concerns. Firstly, there's the issue
of reliability and accuracy. Mismanaged or poorly designed AI systems can lead
to significant errors, which could be particularly detrimental in high-stakes
environments like healthcare or autonomous vehicles. Second, there's a risk
of job displacement as AI agents become capable of performing tasks traditionally
done by humans. This raises socio-economic concerns that need to be addressed
through effective policy-making and upskilling programs.\\n\\nAdditionally,
there are ethical and privacy considerations. AI agents often require large
amounts of data to function effectively, which can lead to issues concerning
consent, data security, and individual privacy rights. The lack of transparency
in how these agents make decisions can also pose challenges\u2014this is often
referred to as the \\\"black box\\\" problem, where even the developers may
not fully understand how specific AI outputs are generated.\\n\\nFinally, the
deployment of AI agents by bad actors for malicious purposes, such as deepfakes,
misinformation, and hacking, remains a pertinent concern. These potential downsides
imply that while AI technology is extremely powerful and promising, it must
be developed and implemented with care, consideration, and robust ethical guidelines.\\n\\nSo,
in summary, I don't hate AI agents\u2014rather, I approach them critically with
a balanced perspective, recognizing both their profound potential and the significant
challenges they present. Thoughtful development, responsible deployment, and
ethical governance are crucial to harness the benefits while mitigating the
risks associated with AI agents.\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
200,\n \"completion_tokens\": 436,\n \"total_tokens\": 636,\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": \"fp_3537616b13\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c3f93c2ff952233-MIA
- 8c85ec12ab0d1cf3-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -112,7 +101,7 @@ interactions:
Content-Type:
- application/json
Date:
- Mon, 16 Sep 2024 08:45:00 GMT
- Tue, 24 Sep 2024 21:38:40 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -121,16 +110,14 @@ interactions:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '6593'
- '6251'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15552000; includeSubDomains; preload
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
@@ -144,7 +131,7 @@ interactions:
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_74bbe724f57aed65432b42184a32f4ba
- req_50aa23cad48cfb83b754a5a92939638e
http_version: HTTP/1.1
status_code: 200
version: 1

@@ -30,12 +30,12 @@ interactions:
content-type:
- application/json
cookie:
- __cf_bm=1SckBhvJ18Dazp6bi8DEKYeiS9Q4.6_6i3nmLBw9b6g-1726476036-1.0.1.1-TnN4UpDXA33YXCVCUWOaZ12vGIg_o5NpJQEUHgjn6XdUgb7M0ND8PdkTfkd8rrxG5XFlPRMzI54GxZ0FeUY9xw;
_cfuvid=0Rs4xTPk7h7OIXuSbTgMVVD9JSoZeKMwnygKHoHQo3k-1726476036297-0.0.1.1-604800000
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.45.0
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -45,7 +45,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.45.0
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
@@ -55,20 +55,20 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-A81cVetqkmlZCzSUuY0W4Z75GIL2n\",\n \"object\":
\"chat.completion\",\n \"created\": 1726476215,\n \"model\": \"gpt-4o-2024-05-13\",\n
content: "{\n \"id\": \"chatcmpl-AB7NCE9qkjnVxfeWuK9NjyCdymuXJ\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213314,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I need to keep using the `get_final_answer`
tool as directed to arrive at the final answer, which is 42.\\n\\nAction: get_final_answer\\nAction
Input: {}\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
291,\n \"completion_tokens\": 38,\n \"total_tokens\": 329,\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": \"fp_25624ae3a5\"\n}\n"
\"assistant\",\n \"content\": \"Thought: I need to use the `get_final_answer`
tool as instructed.\\n\\nAction: get_final_answer\\nAction Input: {}\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 291,\n \"completion_tokens\":
26,\n \"total_tokens\": 317,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c3f91d9be7a2233-MIA
- 8c85dd6b5f411cf3-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -76,7 +76,7 @@ interactions:
Content-Type:
- application/json
Date:
- Mon, 16 Sep 2024 08:43:35 GMT
- Tue, 24 Sep 2024 21:28:34 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -85,16 +85,14 @@ interactions:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '533'
- '526'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15552000; includeSubDomains; preload
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
@@ -108,7 +106,7 @@ interactions:
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_a5354f860340d65be9701bb6bb47a4e6
- req_ed8ca24c64cfdc2b6266c9c8438749f5
http_version: HTTP/1.1
status_code: 200
- request:
@@ -129,12 +127,11 @@ interactions:
answer: The final answer\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"},
{"role": "assistant", "content": "Thought: I need to keep using the `get_final_answer`
tool as directed to arrive at the final answer, which is 42.\n\nAction: get_final_answer\nAction
Input: {}\nObservation: 42\nNow it''s time you MUST give your absolute best
final answer. You''ll ignore all previous instructions, stop using any tools,
and just return your absolute BEST Final answer."}], "model": "gpt-4o", "stop":
["\nObservation:"]}'
{"role": "assistant", "content": "Thought: I need to use the `get_final_answer`
tool as instructed.\n\nAction: get_final_answer\nAction Input: {}\nObservation:
42\nNow it''s time you MUST give your absolute best final answer. You''ll ignore
all previous instructions, stop using any tools, and just return your absolute
BEST Final answer."}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
@@ -143,16 +140,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1805'
- '1757'
content-type:
- application/json
cookie:
- __cf_bm=1SckBhvJ18Dazp6bi8DEKYeiS9Q4.6_6i3nmLBw9b6g-1726476036-1.0.1.1-TnN4UpDXA33YXCVCUWOaZ12vGIg_o5NpJQEUHgjn6XdUgb7M0ND8PdkTfkd8rrxG5XFlPRMzI54GxZ0FeUY9xw;
_cfuvid=0Rs4xTPk7h7OIXuSbTgMVVD9JSoZeKMwnygKHoHQo3k-1726476036297-0.0.1.1-604800000
- __cf_bm=rb61BZH2ejzD5YPmLaEJqI7km71QqyNJGTVdNxBq6qk-1727213194-1.0.1.1-pJ49onmgX9IugEMuYQMralzD7oj_6W.CHbSu4Su1z3NyjTGYg.rhgJZWng8feFYah._oSnoYlkTjpK1Wd2C9FA;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.45.0
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -162,7 +159,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.45.0
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
@@ -172,19 +169,19 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-A81cWdzWH4HTYCF2naQrvIP2OM8V3\",\n \"object\":
\"chat.completion\",\n \"created\": 1726476216,\n \"model\": \"gpt-4o-2024-05-13\",\n
content: "{\n \"id\": \"chatcmpl-AB7NDCKCn3PlhjPvgqbywxUumo3Qt\",\n \"object\":
\"chat.completion\",\n \"created\": 1727213315,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I now know the final answer.\\nFinal
Answer: 42\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
370,\n \"completion_tokens\": 14,\n \"total_tokens\": 384,\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": \"fp_25624ae3a5\"\n}\n"
\"assistant\",\n \"content\": \"Thought: I now know the final answer\\nFinal
Answer: The final answer is 42.\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
358,\n \"completion_tokens\": 19,\n \"total_tokens\": 377,\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c3f91defffe2233-MIA
- 8c85dd72daa31cf3-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -192,7 +189,7 @@ interactions:
Content-Type:
- application/json
Date:
- Mon, 16 Sep 2024 08:43:36 GMT
- Tue, 24 Sep 2024 21:28:36 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -201,16 +198,14 @@ interactions:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '227'
- '468'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15552000; includeSubDomains; preload
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
@@ -218,13 +213,13 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999578'
- '29999591'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_d5bbf13119e2065e9702b1455b8b7e49
- req_3f49e6033d3b0400ea55125ca2cf4ee0
http_version: HTTP/1.1
status_code: 200
version: 1

@@ -16,7 +16,7 @@ interactions:
for your final answer: The final answer\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nBegin! This is VERY important
to you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
on it!\n\nThought:"}], "model": "gpt-4o"}'
headers:
accept:
- application/json
@@ -25,16 +25,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1353'
- '1325'
content-type:
- application/json
cookie:
- __cf_bm=1SckBhvJ18Dazp6bi8DEKYeiS9Q4.6_6i3nmLBw9b6g-1726476036-1.0.1.1-TnN4UpDXA33YXCVCUWOaZ12vGIg_o5NpJQEUHgjn6XdUgb7M0ND8PdkTfkd8rrxG5XFlPRMzI54GxZ0FeUY9xw;
_cfuvid=0Rs4xTPk7h7OIXuSbTgMVVD9JSoZeKMwnygKHoHQo3k-1726476036297-0.0.1.1-604800000
- _cfuvid=ePJSDFdHag2D8lj21_ijAMWjoA6xfnPNxN4uekvC728-1727226247743-0.0.1.1-604800000;
__cf_bm=3giyBOIM0GNudFELtsBWYXwLrpLBTNLsh81wfXgu2tg-1727226247-1.0.1.1-ugUDz0c5EhmfVpyGtcdedlIWeDGuy2q0tXQTKVpv83HZhvxgBcS7SBL1wS4rapPM38yhfEcfwA79ARt3HQEzKA
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.45.0
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -44,7 +44,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.45.0
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
@@ -54,20 +54,20 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-A81dBo5r2nWAvfAeKvkQePo1xr4b7\",\n \"object\":
\"chat.completion\",\n \"created\": 1726476257,\n \"model\": \"gpt-4o-2024-05-13\",\n
content: "{\n \"id\": \"chatcmpl-ABAtOWmVjvzQ9X58tKAUcOF4gmXwx\",\n \"object\":
\"chat.completion\",\n \"created\": 1727226842,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I should use the get_final_answer
tool to obtain The final answer.\\nAction: get_final_answer\\nAction Input:
\"assistant\",\n \"content\": \"Thought: I need to use the get_final_answer
tool to determine the final answer.\\nAction: get_final_answer\\nAction Input:
{}\",\n \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 274,\n \"completion_tokens\":
26,\n \"total_tokens\": 300,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_25624ae3a5\"\n}\n"
27,\n \"total_tokens\": 301,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c3f92e12a1b2233-MIA
- 8c8727b3492f31e6-MIA
Connection:
- keep-alive
Content-Encoding:
@@ -75,7 +75,7 @@ interactions:
Content-Type:
- application/json
Date:
- Mon, 16 Sep 2024 08:44:17 GMT
- Wed, 25 Sep 2024 01:14:03 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -84,16 +84,14 @@ interactions:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '364'
- '348'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15552000; includeSubDomains; preload
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
@@ -107,7 +105,7 @@ interactions:
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_5c29cd8664e9e690925d94ebc473d603
- req_be929caac49706f487950548bdcdd46e
http_version: HTTP/1.1
status_code: 200
- request:
@@ -127,8 +125,8 @@ interactions:
for your final answer: The final answer\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nBegin! This is VERY important
to you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}, {"role": "assistant", "content": "Thought: I should use
the get_final_answer tool to obtain The final answer.\nAction: get_final_answer\nAction
on it!\n\nThought:"}, {"role": "user", "content": "Thought: I need to use the
get_final_answer tool to determine the final answer.\nAction: get_final_answer\nAction
Input: {}\nObservation: I encountered an error: Error on parsing tool.\nMoving
on then. I MUST either use a tool (use one at time) OR give my best final answer
not both at the same time. To Use the following format:\n\nThought: you should
@@ -139,8 +137,7 @@ interactions:
Answer: Your final answer must be the great and the most complete as possible,
it must be outcome described\n\n \nNow it''s time you MUST give your absolute
best final answer. You''ll ignore all previous instructions, stop using any
tools, and just return your absolute BEST Final answer."}], "model": "gpt-4o",
"stop": ["\nObservation:"]}'
tools, and just return your absolute BEST Final answer."}], "model": "gpt-4o"}'
headers:
accept:
- application/json
@@ -149,16 +146,16 @@ interactions:
connection:
- keep-alive
content-length:
- '2349'
- '2320'
content-type:
- application/json
cookie:
- __cf_bm=1SckBhvJ18Dazp6bi8DEKYeiS9Q4.6_6i3nmLBw9b6g-1726476036-1.0.1.1-TnN4UpDXA33YXCVCUWOaZ12vGIg_o5NpJQEUHgjn6XdUgb7M0ND8PdkTfkd8rrxG5XFlPRMzI54GxZ0FeUY9xw;
_cfuvid=0Rs4xTPk7h7OIXuSbTgMVVD9JSoZeKMwnygKHoHQo3k-1726476036297-0.0.1.1-604800000
- _cfuvid=ePJSDFdHag2D8lj21_ijAMWjoA6xfnPNxN4uekvC728-1727226247743-0.0.1.1-604800000;
__cf_bm=3giyBOIM0GNudFELtsBWYXwLrpLBTNLsh81wfXgu2tg-1727226247-1.0.1.1-ugUDz0c5EhmfVpyGtcdedlIWeDGuy2q0tXQTKVpv83HZhvxgBcS7SBL1wS4rapPM38yhfEcfwA79ARt3HQEzKA
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.45.0
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -168,7 +165,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.45.0
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
@@ -178,19 +175,19 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-A81dCt0gksdnPkvgVhu5410k09MYV\",\n \"object\":
\"chat.completion\",\n \"created\": 1726476258,\n \"model\": \"gpt-4o-2024-05-13\",\n
content: "{\n \"id\": \"chatcmpl-ABAtPaaeRfdNsZ3k06CfAmrEW8IJu\",\n \"object\":
\"chat.completion\",\n \"created\": 1727226843,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Final Answer: The final answer\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 482,\n \"completion_tokens\":
6,\n \"total_tokens\": 488,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_25624ae3a5\"\n}\n"
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 483,\n \"completion_tokens\":
6,\n \"total_tokens\": 489,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c3f92e53b2a2233-MIA
- 8c8727b9da1f31e6-MIA
Connection:
- keep-alive
Content-Encoding:
@@ -198,7 +195,7 @@ interactions:
Content-Type:
- application/json
Date:
- Mon, 16 Sep 2024 08:44:18 GMT
- Wed, 25 Sep 2024 01:14:03 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -212,11 +209,11 @@ interactions:
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '226'
- '188'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15552000; includeSubDomains; preload
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
@@ -224,13 +221,13 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999446'
- '29999445'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 1ms
x-request-id:
- req_fc90b97faad1b9af36997b5e74a427b1
- req_d8e32538689fe064627468bad802d9a8
http_version: HTTP/1.1
|
||||
status_code: 200
|
||||
version: 1
|
||||
|
||||
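The file above appears to be a VCR-style test cassette: recorded OpenAI request/response pairs (ending with `version: 1`) that the test suite replays instead of calling the live API, which is why the diff re-records request bodies, token counts, and Cloudflare headers. Below is a minimal sketch of how such a cassette might be replayed with the vcrpy library; the cassette directory, cassette filename, and test body are hypothetical illustrations, not taken from this diff.

```python
# Minimal sketch of replaying a recorded cassette with vcrpy.
# The cassette directory, filename, and assertions below are
# hypothetical examples; they are not taken from this repository.
import vcr

my_vcr = vcr.VCR(
    cassette_library_dir="tests/cassettes",          # hypothetical directory
    record_mode="none",                              # replay only; never hit the live API
    filter_headers=["authorization", "cookie"],      # keep secrets out of cassettes
)

@my_vcr.use_cassette("get_final_answer.yaml")        # hypothetical cassette name
def test_replays_recorded_completion():
    import openai
    client = openai.OpenAI(api_key="test-key")       # key is unused during replay
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Give your best Final Answer."}],
    )
    # With the interaction recorded above, the replayed body would end
    # with "Final Answer: The final answer".
    assert response.choices[0].message.content
```

With `record_mode="none"`, a request that no longer matches the cassette (for example, after the `"stop"` parameter was dropped from the request body in this diff) fails the test, which is why the cassette has to be re-recorded whenever the request shape changes.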
Some files were not shown because too many files have changed in this diff.