mirror of https://github.com/crewAIInc/crewAI.git
synced 2025-12-16 12:28:30 +00:00

Compare commits: 28 commits, `0.74.0` ... `feat/impro`

* 276cb7b7e8
* 048aa6cbcc
* fa9949b9d0
* 500072d855
* 04bcfa6e2d
* 26afee9bed
* f29f4abdd7
* 4589d6fe9d
* 201e652fa2
* 8bc07e6071
* 6baaad045a
* 74c1703310
* a921828e51
* e1fd83e6a7
* 7d68e287cc
* f39a975e20
* b8a3c29745
* 9cd4ff05c9
* 4687779702
* 8731915330
* 093259389e
* 6bcb3d1080
* 71a217b210
* b98256e434
* 40f81aecf5
* d1737a96fb
* 84f48c465d
* 60efcad481
@@ -351,7 +351,7 @@ pre-commit install

### Running Tests

```bash
-uvx pytest
+uv run pytest .
```

### Running static type checks
@@ -31,16 +31,17 @@ Think of an agent as a member of a team, with specific skills and a particular j

| **Max RPM** *(optional)* | `max_rpm` | Max RPM is the maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. |
| **Max Execution Time** *(optional)* | `max_execution_time` | Max Execution Time is the maximum execution time for an agent to execute a task. It's optional and can be left unspecified, with a default value of `None`, meaning no max execution time. |
| **Verbose** *(optional)* | `verbose` | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
-| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `False`.
+| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `False`. |
| **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
| **Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
| **System Template** *(optional)* | `system_template` | Specifies the system format for the agent. Default is `None`. |
| **Prompt Template** *(optional)* | `prompt_template` | Specifies the prompt format for the agent. Default is `None`. |
| **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. |
| **Allow Code Execution** *(optional)* | `allow_code_execution` | Enable code execution for the agent. Default is `False`. |
-| **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`.
+| **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`. |
| **Use System Prompt** *(optional)* | `use_system_prompt` | Adds the ability to not use system prompt (to support o1 models). Default is `True`. |
| **Respect Context Window** *(optional)* | `respect_context_window` | Summary strategy to avoid overflowing the context window. Default is `True`. |
+| **Code Execution Mode** *(optional)* | `code_execution_mode` | Determines the mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution on the host machine). Default is `safe`. |

## Creating an agent
@@ -83,6 +84,7 @@ agent = Agent(

    max_retry_limit=2, # Optional
    use_system_prompt=True, # Optional
    respect_context_window=True, # Optional
+    code_execution_mode='safe', # Optional, defaults to 'safe'
)
```
@@ -156,4 +158,4 @@ crew = my_crew.kickoff(inputs={"input": "Mark Twain"})

## Conclusion

Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents,
-you can create sophisticated AI systems that leverage the power of collaborative intelligence.
+you can create sophisticated AI systems that leverage the power of collaborative intelligence. The `code_execution_mode` attribute provides flexibility in how agents execute code, allowing for both secure and direct execution options.
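To make the new attribute concrete, here is a minimal sketch of an agent configured for each mode (role, goal, and backstory are placeholders; per the `agent.py` hunks later in this diff, enabling code execution triggers a Docker installation check):

```python
from crewai import Agent

# Default: generated code runs inside a Docker sandbox.
sandboxed_coder = Agent(
    role="Data Analyst",               # placeholder
    goal="Analyze the provided data",  # placeholder
    backstory="An example agent.",     # placeholder
    allow_code_execution=True,
    code_execution_mode="safe",
)

# Direct execution on the host machine; use only with trusted code.
host_coder = Agent(
    role="Data Analyst",
    goal="Analyze the provided data",
    backstory="An example agent.",
    allow_code_execution=True,
    code_execution_mode="unsafe",
)
```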
@@ -6,7 +6,7 @@ icon: terminal

# CrewAI CLI Documentation

-The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews and pipelines.
+The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews & flows.

## Installation
@@ -146,3 +146,34 @@ crewai run

Make sure to run these commands from the directory where your CrewAI project is set up.
Some commands may require additional configuration or setup within your project structure.
</Note>

+### 9. API Keys
+
+When running the `crewai create crew` command, the CLI will first show you the top 5 most common LLM providers and ask you to select one.
+
+Once you've selected an LLM provider, you will be prompted for API keys.
+
+#### Initial API key providers
+
+The CLI will initially prompt for API keys for the following services:
+
+* OpenAI
+* Groq
+* Anthropic
+* Google Gemini
+
+When you select a provider, the CLI will prompt you to enter your API key.
+
+#### Other Options
+
+If you select option 6, you will be able to select from a list of LiteLLM-supported providers.
+
+When you select a provider, the CLI will prompt you to enter the key name and the API key.
+
+See the following link for each provider's key name:
+
+* [LiteLLM Providers](https://docs.litellm.ai/docs/providers)
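If you skip a key at the prompt, you can add it to the generated `.env` file later. A hedged sketch — the key names follow LiteLLM's conventions for these providers, and the values are placeholders:

```bash
# .env
OPENAI_API_KEY=sk-...
GROQ_API_KEY=gsk_...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
```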
@@ -23,9 +23,9 @@ Flows allow you to create structured, event-driven workflows. They provide a sea

Let's create a simple Flow where you will use OpenAI to generate a random city in one task and then use that city to generate a fun fact in another task.

```python Code
-import asyncio
-
from crewai.flow.flow import Flow, listen, start
from dotenv import load_dotenv
from litellm import completion
@@ -67,19 +67,19 @@ class ExampleFlow(Flow):

        return fun_fact

-async def main():
-    flow = ExampleFlow()
-    result = await flow.kickoff()
-
-    print(f"Generated fun fact: {result}")
-
-asyncio.run(main())
+flow = ExampleFlow()
+result = flow.kickoff()
+
+print(f"Generated fun fact: {result}")
```

In the above example, we have created a simple Flow that generates a random city using OpenAI and then generates a fun fact about that city. The Flow consists of two tasks: `generate_city` and `generate_fun_fact`. The `generate_city` task is the starting point of the Flow, and the `generate_fun_fact` task listens for the output of the `generate_city` task.

When you run the Flow, it will generate a random city and then generate a fun fact about that city. The output will be printed to the console.

**Note:** Ensure you have set up your `.env` file to store your `OPENAI_API_KEY`. This key is necessary for authenticating requests to the OpenAI API.
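A minimal `.env` sketch for the example above (the value is a placeholder):

```bash
# .env at the project root
OPENAI_API_KEY=sk-your-key-here
```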
### @start()

The `@start()` decorator is used to mark a method as the starting point of a Flow. When a Flow is started, all the methods decorated with `@start()` are executed in parallel. You can have multiple start methods in a Flow, and they will all be executed when the Flow is started.

@@ -119,7 +119,6 @@ Here's how you can access the final output:

<CodeGroup>
```python Code
-import asyncio
from crewai.flow.flow import Flow, listen, start

class OutputExampleFlow(Flow):
@@ -131,26 +130,24 @@ class OutputExampleFlow(Flow):

    def second_method(self, first_output):
        return f"Second method received: {first_output}"

-async def main():
-    flow = OutputExampleFlow()
-    final_output = await flow.kickoff()
-    print("---- Final Output ----")
-    print(final_output)
-
-asyncio.run(main())
-```
+flow = OutputExampleFlow()
+final_output = flow.kickoff()
+
+print("---- Final Output ----")
+print(final_output)
+````

-``` text Output
+```text Output
---- Final Output ----
Second method received: Output from first_method
```
+````

</CodeGroup>

In this example, the `second_method` is the last method to complete, so its output will be the final output of the Flow.
The `kickoff()` method will return the final output, which is then printed to the console.

#### Accessing and Updating State

In addition to retrieving the final output, you can also access and update the state within your Flow. The state can be used to store and share data between different methods in the Flow. After the Flow has run, you can access the state to retrieve any information that was added or updated during the execution.
@@ -160,7 +157,6 @@ Here's an example of how to update and access the state:

<CodeGroup>

```python Code
-import asyncio
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
@@ -181,42 +177,38 @@ class StateExampleFlow(Flow[ExampleState]):

        self.state.counter += 1
        return self.state.message

-async def main():
-    flow = StateExampleFlow()
-    final_output = await flow.kickoff()
-    print(f"Final Output: {final_output}")
-    print("Final State:")
-    print(flow.state)
-
-asyncio.run(main())
+flow = StateExampleFlow()
+final_output = flow.kickoff()
+print(f"Final Output: {final_output}")
+print("Final State:")
+print(flow.state)
```

-``` text Output
+```text Output
Final Output: Hello from first_method - updated by second_method
Final State:
counter=2 message='Hello from first_method - updated by second_method'
```

</CodeGroup>

In this example, the state is updated by both `first_method` and `second_method`.
After the Flow has run, you can access the final state to see the updates made by these methods.

By ensuring that the final method's output is returned and providing access to the state, CrewAI Flows make it easy to integrate the results of your AI workflows into larger applications or systems,
while also maintaining and accessing the state throughout the Flow's execution.

## Flow State Management

Managing state effectively is crucial for building reliable and maintainable AI workflows. CrewAI Flows provides robust mechanisms for both unstructured and structured state management,
allowing developers to choose the approach that best fits their application's needs.

### Unstructured State Management

In unstructured state management, all state is stored in the `state` attribute of the `Flow` class.
This approach offers flexibility, enabling developers to add or modify state attributes on the fly without defining a strict schema.
```python Code
-import asyncio
-
from crewai.flow.flow import Flow, listen, start

class UntructuredExampleFlow(Flow):
@@ -239,12 +231,8 @@ class UntructuredExampleFlow(Flow):

        print(f"State after third_method: {self.state}")

-async def main():
-    flow = UntructuredExampleFlow()
-    await flow.kickoff()
-
-asyncio.run(main())
+flow = UntructuredExampleFlow()
+flow.kickoff()
```

**Key Points:**
@@ -254,12 +242,10 @@ asyncio.run(main())

### Structured State Management

Structured state management leverages predefined schemas to ensure consistency and type safety across the workflow.
By using models like Pydantic's `BaseModel`, developers can define the exact shape of the state, enabling better validation and auto-completion in development environments.

```python Code
-import asyncio
-
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
@@ -288,12 +274,8 @@ class StructuredExampleFlow(Flow[ExampleState]):

        print(f"State after third_method: {self.state}")

-async def main():
-    flow = StructuredExampleFlow()
-    await flow.kickoff()
-
-asyncio.run(main())
+flow = StructuredExampleFlow()
+flow.kickoff()
```

**Key Points:**
@@ -326,7 +308,6 @@ The `or_` function in Flows allows you to listen to multiple methods and trigger

<CodeGroup>

```python Code
-import asyncio
from crewai.flow.flow import Flow, listen, or_, start

class OrExampleFlow(Flow):
@@ -344,22 +325,19 @@ class OrExampleFlow(Flow):

        print(f"Logger: {result}")

-async def main():
-    flow = OrExampleFlow()
-    await flow.kickoff()
-
-asyncio.run(main())
+flow = OrExampleFlow()
+flow.kickoff()
```

-``` text Output
+```text Output
Logger: Hello from the start method
Logger: Hello from the second method
```

</CodeGroup>

When you run this Flow, the `logger` method will be triggered by the output of either the `start_method` or the `second_method`.
The `or_` function is used to listen to multiple methods and trigger the listener method when any of the specified methods emit an output.

### Conditional Logic: `and`
@@ -369,7 +347,6 @@ The `and_` function in Flows allows you to listen to multiple methods and trigge

<CodeGroup>

```python Code
-import asyncio
from crewai.flow.flow import Flow, and_, listen, start

class AndExampleFlow(Flow):
@@ -387,34 +364,28 @@ class AndExampleFlow(Flow):

        print("---- Logger ----")
        print(self.state)

-async def main():
-    flow = AndExampleFlow()
-    await flow.kickoff()
-
-asyncio.run(main())
+flow = AndExampleFlow()
+flow.kickoff()
```

-``` text Output
+```text Output
---- Logger ----
{'greeting': 'Hello from the start method', 'joke': 'What do computers eat? Microchips.'}
```

</CodeGroup>

When you run this Flow, the `logger` method will be triggered only when both the `start_method` and the `second_method` emit an output.
The `and_` function is used to listen to multiple methods and trigger the listener method only when all the specified methods emit an output.
### Router

The `@router()` decorator in Flows allows you to define conditional routing logic based on the output of a method.
You can specify different routes based on the output of the method, allowing you to control the flow of execution dynamically.

<CodeGroup>

```python Code
-import asyncio
import random
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel
@@ -446,15 +417,11 @@ class RouterFlow(Flow[ExampleState]):

        print("Fourth method running")

-async def main():
-    flow = RouterFlow()
-    await flow.kickoff()
-
-asyncio.run(main())
+flow = RouterFlow()
+flow.kickoff()
```

-``` text Output
+```text Output
Starting the structured flow
Third method running
Fourth method running
@@ -462,16 +429,16 @@ Fourth method running

</CodeGroup>

In the above example, the `start_method` generates a random boolean value and sets it in the state.
The `second_method` uses the `@router()` decorator to define conditional routing logic based on the value of the boolean.
If the boolean is `True`, the method returns `"success"`, and if it is `False`, the method returns `"failed"`.
The `third_method` and `fourth_method` listen to the output of the `second_method` and execute based on the returned value.

When you run this Flow, the output will change based on the random boolean value generated by the `start_method`.

## Adding Crews to Flows

Creating a flow with multiple crews in CrewAI is straightforward.

You can generate a new CrewAI project that includes all the scaffolding needed to create a flow with multiple crews by running the following command:
@@ -485,22 +452,21 @@ This command will generate a new CrewAI project with the necessary folder struct

After running the `crewai create flow name_of_flow` command, you will see a folder structure similar to the following:

| Directory/File          | Description                                                         |
| :---------------------- | :------------------------------------------------------------------ |
| `name_of_flow/`         | Root directory for the flow.                                        |
| ├── `crews/`            | Contains directories for specific crews.                            |
| │ └── `poem_crew/`      | Directory for the "poem_crew" with its configurations and scripts.  |
| │ ├── `config/`         | Configuration files directory for the "poem_crew".                  |
| │ │ ├── `agents.yaml`   | YAML file defining the agents for "poem_crew".                      |
| │ │ └── `tasks.yaml`    | YAML file defining the tasks for "poem_crew".                       |
| │ ├── `poem_crew.py`    | Script for "poem_crew" functionality.                               |
| ├── `tools/`            | Directory for additional tools used in the flow.                    |
| │ └── `custom_tool.py`  | Custom tool implementation.                                         |
| ├── `main.py`           | Main script for running the flow.                                   |
| ├── `README.md`         | Project description and instructions.                               |
| ├── `pyproject.toml`    | Configuration file for project dependencies and settings.           |
| └── `.gitignore`        | Specifies files and directories to ignore in version control.       |

### Building Your Crews
@@ -520,7 +486,6 @@ Here's an example of how you can connect the `poem_crew` in the `main.py` file:

```python Code
#!/usr/bin/env python
-import asyncio
from random import randint

from pydantic import BaseModel
@@ -536,14 +501,12 @@ class PoemFlow(Flow[PoemState]):

    @start()
    def generate_sentence_count(self):
        print("Generating sentence count")
        # Generate a number between 1 and 5
        self.state.sentence_count = randint(1, 5)

    @listen(generate_sentence_count)
    def generate_poem(self):
        print("Generating poem")
-        poem_crew = PoemCrew().crew()
-        result = poem_crew.kickoff(inputs={"sentence_count": self.state.sentence_count})
+        result = PoemCrew().crew().kickoff(inputs={"sentence_count": self.state.sentence_count})

        print("Poem generated", result.raw)
        self.state.poem = result.raw
@@ -554,18 +517,17 @@ class PoemFlow(Flow[PoemState]):

        with open("poem.txt", "w") as f:
            f.write(self.state.poem)

-async def run():
-    """
-    Run the flow.
-    """
+def kickoff():
    poem_flow = PoemFlow()
-    await poem_flow.kickoff()
+    poem_flow.kickoff()

-def main():
-    asyncio.run(run())

def plot():
    poem_flow = PoemFlow()
    poem_flow.plot()

if __name__ == "__main__":
-    main()
+    kickoff()
```

In this example, the `PoemFlow` class defines a flow that generates a sentence count, uses the `PoemCrew` to generate a poem, and then saves the poem to a file. The flow is kicked off by calling the `kickoff()` method.
@@ -587,13 +549,13 @@ source .venv/bin/activate

After activating the virtual environment, you can run the flow by executing one of the following commands:

```bash
-crewai flow run
+crewai flow kickoff
```

or

```bash
-uv run run_flow
+uv run kickoff
```

The flow will execute, and you should see the output in the console.
@@ -637,13 +599,114 @@ The generated plot will display nodes representing the tasks in your flow, with

By visualizing your flows, you can gain a clearer understanding of the workflow's structure, making it easier to debug, optimize, and communicate your AI processes to others.

### Conclusion

Plotting your flows is a powerful feature of CrewAI that enhances your ability to design and manage complex AI workflows. Whether you choose to use the `plot()` method or the command line, generating plots will provide you with a visual representation of your workflows, aiding in both development and presentation.

+## Advanced
+
+In this section, we explore more complex use cases of CrewAI Flows, starting with a self-evaluation loop. This pattern is crucial for developing AI systems that can iteratively improve their outputs through feedback.
+
+### 1) Self-Evaluation Loop
+
+The self-evaluation loop is a powerful pattern that allows AI workflows to automatically assess and refine their outputs. This example demonstrates how to set up a flow that generates content, evaluates it, and iterates based on feedback until the desired quality is achieved.
+
+#### Overview
+
+The self-evaluation loop involves two main Crews:
+
+1. **ShakespeareanXPostCrew**: Generates a Shakespearean-style post on a given topic.
+2. **XPostReviewCrew**: Evaluates the generated post, providing feedback on its validity and quality.
+
+The process iterates until the post meets the criteria or a maximum retry limit is reached. This approach ensures high-quality outputs through iterative refinement.
+
+#### Importance
+
+This pattern is essential for building robust AI systems that can adapt and improve over time. By automating the evaluation and feedback loop, developers can ensure that their AI workflows produce reliable and high-quality results.
+
+#### Main Code Highlights
+
+Below is the `main.py` file for the self-evaluation loop flow:
+```python
+from typing import Optional
+
+from crewai.flow.flow import Flow, listen, router, start
+from pydantic import BaseModel
+
+from self_evaluation_loop_flow.crews.shakespeare_crew.shakespeare_crew import (
+    ShakespeareanXPostCrew,
+)
+from self_evaluation_loop_flow.crews.x_post_review_crew.x_post_review_crew import (
+    XPostReviewCrew,
+)
+
+
+class ShakespeareXPostFlowState(BaseModel):
+    x_post: str = ""
+    feedback: Optional[str] = None
+    valid: bool = False
+    retry_count: int = 0
+
+
+class ShakespeareXPostFlow(Flow[ShakespeareXPostFlowState]):
+
+    @start("retry")
+    def generate_shakespeare_x_post(self):
+        print("Generating Shakespearean X post")
+        topic = "Flying cars"
+        result = (
+            ShakespeareanXPostCrew()
+            .crew()
+            .kickoff(inputs={"topic": topic, "feedback": self.state.feedback})
+        )
+        print("X post generated", result.raw)
+        self.state.x_post = result.raw
+
+    @router(generate_shakespeare_x_post)
+    def evaluate_x_post(self):
+        if self.state.retry_count > 3:
+            return "max_retry_exceeded"
+        result = XPostReviewCrew().crew().kickoff(inputs={"x_post": self.state.x_post})
+        self.state.valid = result["valid"]
+        self.state.feedback = result["feedback"]
+        print("valid", self.state.valid)
+        print("feedback", self.state.feedback)
+        self.state.retry_count += 1
+        if self.state.valid:
+            return "complete"
+        return "retry"
+
+    @listen("complete")
+    def save_result(self):
+        print("X post is valid")
+        print("X post:", self.state.x_post)
+        with open("x_post.txt", "w") as file:
+            file.write(self.state.x_post)
+
+    @listen("max_retry_exceeded")
+    def max_retry_exceeded_exit(self):
+        print("Max retry count exceeded")
+        print("X post:", self.state.x_post)
+        print("Feedback:", self.state.feedback)
+
+
+def kickoff():
+    shakespeare_flow = ShakespeareXPostFlow()
+    shakespeare_flow.kickoff()
+
+
+def plot():
+    shakespeare_flow = ShakespeareXPostFlow()
+    shakespeare_flow.plot()
+
+
+if __name__ == "__main__":
+    kickoff()
+```
+#### Code Highlights
+
+- **Retry Mechanism**: The flow uses a retry mechanism to regenerate the post if it doesn't meet the criteria, up to a maximum of three retries.
+- **Feedback Loop**: Feedback from the `XPostReviewCrew` is used to refine the post iteratively.
+- **State Management**: The flow maintains state using a Pydantic model, ensuring type safety and clarity.
+
+For a complete example and further details, please refer to the [Self Evaluation Loop Flow repository](https://github.com/crewAIInc/crewAI-examples/tree/main/self_evaluation_loop_flow).

## Next Steps

-If you're interested in exploring additional examples of flows, we have a variety of recommendations in our examples repository. Here are four specific flow examples, each showcasing unique use cases to help you match your current problem type to a specific example:
+If you're interested in exploring additional examples of flows, we have a variety of recommendations in our examples repository. Here are five specific flow examples, each showcasing unique use cases to help you match your current problem type to a specific example:

1. **Email Auto Responder Flow**: This example demonstrates an infinite loop where a background job continually runs to automate email responses. It's a great use case for tasks that need to be performed repeatedly without manual intervention. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/email_auto_responder_flow)
@@ -653,17 +716,19 @@ If you're interested in exploring additional examples of flows, we have a variet

4. **Meeting Assistant Flow**: This flow demonstrates how to broadcast one event to trigger multiple follow-up actions. For instance, after a meeting is completed, the flow can update a Trello board, send a Slack message, and save the results. It's a great example of handling multiple outcomes from a single event, making it ideal for comprehensive task management and notification systems. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/meeting_assistant_flow)

+5. **Self Evaluation Loop Flow**: This flow demonstrates a self-evaluation loop where AI workflows automatically assess and refine their outputs through feedback. It involves generating content, evaluating it, and iterating until the desired quality is achieved. This pattern is crucial for developing robust AI systems that can adapt and improve over time. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/self_evaluation_loop_flow)

By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.

Also, check out our YouTube video on how to use flows in CrewAI below!

<iframe
  width="560"
  height="315"
  src="https://www.youtube.com/embed/MTb5my6VOT8"
  title="YouTube video player"
  frameborder="0"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
  referrerpolicy="strict-origin-when-cross-origin"
  allowfullscreen
></iframe>
@@ -62,6 +62,8 @@ os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"

2. Using LLM class attributes:

```python Code
+from crewai import LLM
+
llm = LLM(
    model="custom-model-name",
    api_key="your-api-key",
@@ -95,9 +97,11 @@ When configuring an LLM for your agent, you have access to a wide range of param

| **api_key** | `str` | Your API key for authentication. |

-Example:
+## OpenAI Example Configuration

```python Code
+from crewai import LLM
+
llm = LLM(
    model="gpt-4",
    temperature=0.8,
@@ -112,15 +116,31 @@ llm = LLM(

)
agent = Agent(llm=llm, ...)
```

+## Cerebras Example Configuration
+
+```python Code
+from crewai import LLM
+
+llm = LLM(
+    model="cerebras/llama-3.1-70b",
+    base_url="https://api.cerebras.ai/v1",
+    api_key="your-api-key-here"
+)
+agent = Agent(llm=llm, ...)
+```

## Using Ollama (Local LLMs)

-crewAI supports using Ollama for running open-source models locally:
+CrewAI supports using Ollama for running open-source models locally:

1. Install Ollama: [ollama.ai](https://ollama.ai/)
2. Run a model: `ollama run llama2`
3. Configure agent:

```python Code
+from crewai import LLM
+
agent = Agent(
    llm=LLM(model="ollama/llama3.1", base_url="http://localhost:11434"),
    ...
@@ -132,6 +152,8 @@ agent = Agent(

You can change the base API URL for any LLM provider by setting the `base_url` parameter:

```python Code
+from crewai import LLM
+
llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
@@ -105,9 +105,48 @@ my_crew = Crew(

    process=Process.sequential,
    memory=True,
    verbose=True,
-    embedder=embedding_functions.OpenAIEmbeddingFunction(
-        api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"
-    )
+    embedder={
+        "provider": "openai",
+        "config": {
+            "model": 'text-embedding-3-small'
+        }
+    }
)
```

+Alternatively, you can directly pass the OpenAIEmbeddingFunction to the embedder parameter.
+
+Example:
+
+```python Code
+from crewai import Crew, Agent, Task, Process
+from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
+
+my_crew = Crew(
+    agents=[...],
+    tasks=[...],
+    process=Process.sequential,
+    memory=True,
+    verbose=True,
+    embedder=OpenAIEmbeddingFunction(api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"),
+)
+```

### Using Ollama embeddings

+```python Code
+from crewai import Crew, Agent, Task, Process
+
+my_crew = Crew(
+    agents=[...],
+    tasks=[...],
+    process=Process.sequential,
+    memory=True,
+    verbose=True,
+    embedder={
+        "provider": "ollama",
+        "config": {
+            "model": "mxbai-embed-large"
+        }
+    }
+)
+```
@@ -122,16 +161,20 @@ my_crew = Crew(

    process=Process.sequential,
    memory=True,
    verbose=True,
-    embedder=embedding_functions.OpenAIEmbeddingFunction(
-        api_key=os.getenv("OPENAI_API_KEY"),
-        model_name="text-embedding-ada-002"
-    )
+    embedder={
+        "provider": "google",
+        "config": {
+            "api_key": "<YOUR_API_KEY>",
+            "model_name": "<model_name>"
+        }
+    }
)
```

### Using Azure OpenAI embeddings

```python Code
+from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
@@ -140,7 +183,7 @@ my_crew = Crew(

    process=Process.sequential,
    memory=True,
    verbose=True,
-    embedder=embedding_functions.OpenAIEmbeddingFunction(
+    embedder=OpenAIEmbeddingFunction(
        api_key="YOUR_API_KEY",
        api_base="YOUR_API_BASE_PATH",
        api_type="azure",
@@ -153,6 +196,7 @@ my_crew = Crew(

### Using Vertex AI embeddings

```python Code
+from chromadb.utils.embedding_functions import GoogleVertexEmbeddingFunction
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
@@ -161,7 +205,7 @@ my_crew = Crew(

    process=Process.sequential,
    memory=True,
    verbose=True,
-    embedder=embedding_functions.GoogleVertexEmbeddingFunction(
+    embedder=GoogleVertexEmbeddingFunction(
        project_id="YOUR_PROJECT_ID",
        region="YOUR_REGION",
        api_key="YOUR_API_KEY",
@@ -181,10 +225,32 @@ my_crew = Crew(

    process=Process.sequential,
    memory=True,
    verbose=True,
-    embedder=embedding_functions.CohereEmbeddingFunction(
-        api_key=YOUR_API_KEY,
-        model_name="<model_name>"
-    )
+    embedder={
+        "provider": "cohere",
+        "config": {
+            "api_key": "YOUR_API_KEY",
+            "model_name": "<model_name>"
+        }
+    }
)
```

+### Using HuggingFace embeddings
+
+```python Code
+from crewai import Crew, Agent, Task, Process
+
+my_crew = Crew(
+    agents=[...],
+    tasks=[...],
+    process=Process.sequential,
+    memory=True,
+    verbose=True,
+    embedder={
+        "provider": "huggingface",
+        "config": {
+            "api_url": "<api_url>",
+        }
+    }
+)
+```
@@ -20,14 +20,21 @@ pip install 'crewai[tools]'

### Subclassing `BaseTool`

-To create a personalized tool, inherit from `BaseTool` and define the necessary attributes and the `_run` method.
+To create a personalized tool, inherit from `BaseTool` and define the necessary attributes, including the `args_schema` for input validation, and the `_run` method.

```python Code
+from typing import Type
from crewai_tools import BaseTool
+from pydantic import BaseModel, Field
+
+class MyToolInput(BaseModel):
+    """Input schema for MyCustomTool."""
+    argument: str = Field(..., description="Description of the argument.")

class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = "What this tool does. It's vital for effective utilization."
+    args_schema: Type[BaseModel] = MyToolInput

    def _run(self, argument: str) -> str:
        # Your tool's logic here
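Once subclassed, the tool can be instantiated and exercised directly before handing it to an agent. A minimal usage sketch — the `_run` body is hypothetical, since the hunk above truncates it:

```python
# Hypothetical _run body, for illustration only:
#     def _run(self, argument: str) -> str:
#         return f"Processed: {argument}"

tool = MyCustomTool()
result = tool.run(argument="hello")  # BaseTool.run() delegates to _run()
print(result)
```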
@@ -1,6 +1,6 @@

[project]
name = "crewai"
-version = "0.74.0"
+version = "0.76.2"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<=3.13"
@@ -16,7 +16,7 @@ dependencies = [

    "opentelemetry-exporter-otlp-proto-http>=1.22.0",
    "instructor>=1.3.3",
    "regex>=2024.9.11",
-    "crewai-tools>=0.13.1",
+    "crewai-tools>=0.13.2",
    "click>=8.1.7",
    "python-dotenv>=1.0.0",
    "appdirs>=1.4.4",
@@ -25,9 +25,10 @@ dependencies = [

    "auth0-python>=4.7.1",
    "litellm>=1.44.22",
    "pyvis>=0.3.2",
-    "uv>=0.4.18",
+    "uv>=0.4.25",
    "tomli-w>=1.1.0",
    "chromadb>=0.4.24",
+    "tomli>=2.0.2",
]

[project.urls]
@@ -36,7 +37,7 @@ Documentation = "https://docs.crewai.com"

Repository = "https://github.com/crewAIInc/crewAI"

[project.optional-dependencies]
-tools = ["crewai-tools>=0.12.1"]
+tools = ["crewai-tools>=0.13.2"]
agentops = ["agentops>=0.3.0"]

[tool.uv]
@@ -51,7 +52,7 @@ dev-dependencies = [

    "mkdocs-material-extensions>=1.3.1",
    "pillow>=10.2.0",
    "cairosvg>=2.7.1",
-    "crewai-tools>=0.12.1",
+    "crewai-tools>=0.13.2",
    "pytest>=8.0.0",
    "pytest-vcr>=1.0.2",
    "python-dotenv>=1.0.0",
@@ -14,5 +14,5 @@ warnings.filterwarnings(

    category=UserWarning,
    module="pydantic.main",
)
-__version__ = "0.74.0"
+__version__ = "0.76.2"
__all__ = ["Agent", "Crew", "Process", "Task", "Pipeline", "Router", "LLM", "Flow"]
@@ -1,6 +1,7 @@

import os
-from inspect import signature
-from typing import Any, List, Optional, Union
+import shutil
+import subprocess
+from typing import Any, List, Literal, Optional, Union

from pydantic import Field, InstanceOf, PrivateAttr, model_validator
@@ -112,6 +113,10 @@ class Agent(BaseAgent):

        default=2,
        description="Maximum number of retries for an agent to execute a task when an error occurs.",
    )
+    code_execution_mode: Literal["safe", "unsafe"] = Field(
+        default="safe",
+        description="Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution).",
+    )

    @model_validator(mode="after")
    def post_init_setup(self):
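Because the field is typed as `Literal["safe", "unsafe"]`, Pydantic rejects any other value at construction time. A small sketch of that behavior (a standalone model, not the real `Agent` class):

```python
from typing import Literal

from pydantic import BaseModel, Field, ValidationError

class ExecutionConfig(BaseModel):
    # Mirrors the new Agent field: only these two values are accepted.
    code_execution_mode: Literal["safe", "unsafe"] = Field(default="safe")

print(ExecutionConfig().code_execution_mode)          # -> safe
try:
    ExecutionConfig(code_execution_mode="sandboxed")  # not a permitted literal
except ValidationError as exc:
    print("rejected:", exc.errors()[0]["msg"])
```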
@@ -173,6 +178,9 @@ class Agent(BaseAgent):

        if not self.agent_executor:
            self._setup_agent_executor()

+        if self.allow_code_execution:
+            self._validate_docker_installation()

        return self

    def _setup_agent_executor(self):
@@ -308,7 +316,9 @@ class Agent(BaseAgent):

        try:
            from crewai_tools import CodeInterpreterTool

-            return [CodeInterpreterTool()]
+            # Set the unsafe_mode based on the code_execution_mode attribute
+            unsafe_mode = self.code_execution_mode == "unsafe"
+            return [CodeInterpreterTool(unsafe_mode=unsafe_mode)]
        except ModuleNotFoundError:
            self._logger.log(
                "info", "Coding tools not available. Install crewai_tools. "
@@ -384,30 +394,49 @@ class Agent(BaseAgent):

    def _render_text_description_and_args(self, tools: List[Any]) -> str:
        """Render the tool name, description, and args in plain text.

        Output will be in the format of:

        .. code-block:: markdown

            search: This tool is used for search, args: {"query": {"type": "string"}}
            calculator: This tool is used for math, \
            args: {"expression": {"type": "string"}}
        """
        tool_strings = []
        for tool in tools:
-            args_schema = str(tool.args)
-            if hasattr(tool, "func") and tool.func:
-                sig = signature(tool.func)
-                description = (
-                    f"Tool Name: {tool.name}{sig}\nTool Description: {tool.description}"
-                )
-            else:
-                description = (
-                    f"Tool Name: {tool.name}\nTool Description: {tool.description}"
-                )
+            args_schema = {
+                name: {
+                    "description": field.description,
+                    "type": field.annotation.__name__,
+                }
+                for name, field in tool.args_schema.model_fields.items()
+            }
+            description = (
+                f"Tool Name: {tool.name}\nTool Description: {tool.description}"
+            )
            tool_strings.append(f"{description}\nTool Arguments: {args_schema}")

        return "\n".join(tool_strings)

+    def _validate_docker_installation(self) -> None:
+        """Check if Docker is installed and running."""
+        if not shutil.which("docker"):
+            raise RuntimeError(
+                f"Docker is not installed. Please install Docker to use code execution with agent: {self.role}"
+            )
+
+        try:
+            subprocess.run(
+                ["docker", "info"],
+                check=True,
+                stdout=subprocess.PIPE,
+                stderr=subprocess.PIPE,
+            )
+        except subprocess.CalledProcessError:
+            raise RuntimeError(
+                f"Docker is not running. Please start Docker to use code execution with agent: {self.role}"
+            )

    @staticmethod
    def __tools_names(tools) -> str:
        return ", ".join([t.name for t in tools])
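For a tool whose `args_schema` has a single string field (like the `MyCustomTool` example earlier in this diff), the new rendering logic would produce output along these lines (illustrative, not captured from a real run):

```text
Tool Name: Name of my tool
Tool Description: What this tool does. It's vital for effective utilization.
Tool Arguments: {'argument': {'description': 'Description of the argument.', 'type': 'str'}}
```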
@@ -2,6 +2,7 @@ import json

import re
from typing import Any, Dict, List, Union

+from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
from crewai.agents.parser import (
    FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE,
@@ -19,7 +20,6 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (

)
from crewai.utilities.logger import Logger
from crewai.utilities.training_handler import CrewTrainingHandler
-from crewai.agents.agent_builder.base_agent import BaseAgent


class CrewAgentExecutor(CrewAgentExecutorMixin):
@@ -323,9 +323,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):

        if self.crew is not None and hasattr(self.crew, "_train_iteration"):
            train_iteration = self.crew._train_iteration
            if agent_id in training_data and isinstance(train_iteration, int):
-                training_data[agent_id][train_iteration]["improved_output"] = (
-                    result.output
-                )
+                training_data[agent_id][train_iteration][
+                    "improved_output"
+                ] = result.output
                training_handler.save(training_data)
            else:
                self._logger.log(
@@ -14,11 +14,11 @@ from .authentication.main import AuthenticationCommand

from .deploy.main import DeployCommand
from .evaluate_crew import evaluate_crew
from .install_crew import install_crew
+from .kickoff_flow import kickoff_flow
from .plot_flow import plot_flow
from .replay_from_task import replay_task_command
from .reset_memories_command import reset_memories_command
from .run_crew import run_crew
-from .run_flow import run_flow
from .tools.main import ToolCommand
from .train_crew import train_crew
from .update_crew import update_crew
@@ -32,10 +32,12 @@ def crewai():

@crewai.command()
@click.argument("type", type=click.Choice(["crew", "pipeline", "flow"]))
@click.argument("name")
-def create(type, name):
+@click.option("--provider", type=str, help="The provider to use for the crew")
+@click.option("--skip_provider", is_flag=True, help="Skip provider validation")
+def create(type, name, provider, skip_provider=False):
    """Create a new crew, pipeline, or flow."""
    if type == "crew":
-        create_crew(name)
+        create_crew(name, provider, skip_provider)
    elif type == "pipeline":
        create_pipeline(name)
    elif type == "flow":
@@ -176,10 +178,14 @@ def test(n_iterations: int, model: str):

    evaluate_crew(n_iterations, model)


-@crewai.command()
-def install():
+@crewai.command(context_settings=dict(
+    ignore_unknown_options=True,
+    allow_extra_args=True,
+))
+@click.pass_context
+def install(context):
    """Install the Crew."""
-    install_crew()
+    install_crew(context.args)


@crewai.command()
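Taken together, the CLI changes above mean a crew can now be scaffolded non-interactively. A sketch of the new invocations (`my_crew` is a placeholder name, and `openai` is one of the predefined providers):

```bash
# Pick the provider up front instead of answering the interactive menu
crewai create crew my_crew --provider openai

# Or skip provider setup entirely
crewai create crew my_crew --skip_provider
```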
@@ -304,11 +310,11 @@ def flow():

    pass


-@flow.command(name="run")
+@flow.command(name="kickoff")
def flow_run():
-    """Run the Flow."""
+    """Kickoff the Flow."""
    click.echo("Running the Flow")
-    run_flow()
+    kickoff_flow()


@flow.command(name="plot")
@@ -1,8 +1,16 @@

import sys
from pathlib import Path

import click
-from crewai.cli.utils import copy_template, load_env_vars, write_env_file
-from crewai.cli.provider import get_provider_data, select_provider, PROVIDERS
+
+from crewai.cli.constants import ENV_VARS
+from crewai.cli.provider import (
+    PROVIDERS,
+    get_provider_data,
+    select_model,
+    select_provider,
+)
+from crewai.cli.utils import copy_template, load_env_vars, write_env_file


def create_folder_structure(name, parent_folder=None):
@@ -14,11 +22,19 @@ def create_folder_structure(name, parent_folder=None):

    else:
        folder_path = Path(folder_name)

-    click.secho(
-        f"Creating {'crew' if parent_folder else 'folder'} {folder_name}...",
-        fg="green",
-        bold=True,
-    )
+    if folder_path.exists():
+        if not click.confirm(
+            f"Folder {folder_name} already exists. Do you want to override it?"
+        ):
+            click.secho("Operation cancelled.", fg="yellow")
+            sys.exit(0)
+        click.secho(f"Overriding folder {folder_name}...", fg="green", bold=True)
+    else:
+        click.secho(
+            f"Creating {'crew' if parent_folder else 'folder'} {folder_name}...",
+            fg="green",
+            bold=True,
+        )

    if not folder_path.exists():
        folder_path.mkdir(parents=True)

@@ -27,11 +43,6 @@ def create_folder_structure(name, parent_folder=None):

        (folder_path / "src" / folder_name).mkdir(parents=True)
        (folder_path / "src" / folder_name / "tools").mkdir(parents=True)
        (folder_path / "src" / folder_name / "config").mkdir(parents=True)
-    else:
-        click.secho(
-            f"\tFolder {folder_name} already exists.",
-            fg="yellow",
-        )

    return folder_path, folder_name, class_name
@@ -70,37 +81,84 @@ def copy_template_files(folder_path, name, class_name, parent_folder):

    copy_template(src_file, dst_file, name, class_name, folder_path.name)


-def create_crew(name, parent_folder=None):
+def create_crew(name, provider=None, skip_provider=False, parent_folder=None):
    folder_path, folder_name, class_name = create_folder_structure(name, parent_folder)
+    env_vars = load_env_vars(folder_path)
+    if not skip_provider:
+        if not provider:
+            provider_models = get_provider_data()
+            if not provider_models:
+                return

-    provider_models = get_provider_data()
-    if not provider_models:
-        return
+        existing_provider = None
+        for provider, env_keys in ENV_VARS.items():
+            if any(key in env_vars for key in env_keys):
+                existing_provider = provider
+                break

-    selected_provider = select_provider(provider_models)
-    if not selected_provider:
-        return
-    provider = selected_provider
+        if existing_provider:
+            if not click.confirm(
+                f"Found existing environment variable configuration for {existing_provider.capitalize()}. Do you want to override it?"
+            ):
+                click.secho("Keeping existing provider configuration.", fg="yellow")
+                return

-    # selected_model = select_model(provider, provider_models)
-    # if not selected_model:
-    #     return
-    # model = selected_model
+        provider_models = get_provider_data()
+        if not provider_models:
+            return

-    if provider in PROVIDERS:
-        api_key_var = ENV_VARS[provider][0]
-    else:
-        api_key_var = click.prompt(
-            f"Enter the environment variable name for your {provider.capitalize()} API key",
-            type=str,
+        while True:
+            selected_provider = select_provider(provider_models)
+            if selected_provider is None:  # User typed 'q'
+                click.secho("Exiting...", fg="yellow")
+                sys.exit(0)
+            if selected_provider:  # Valid selection
+                break
+            click.secho(
+                "No provider selected. Please try again or press 'q' to exit.", fg="red"
+            )
+
+        while True:
+            selected_model = select_model(selected_provider, provider_models)
+            if selected_model is None:  # User typed 'q'
+                click.secho("Exiting...", fg="yellow")
+                sys.exit(0)
+            if selected_model:  # Valid selection
+                break
+            click.secho(
+                "No model selected. Please try again or press 'q' to exit.", fg="red"
+            )
+
+        if selected_provider in PROVIDERS:
+            api_key_var = ENV_VARS[selected_provider][0]
+        else:
+            api_key_var = click.prompt(
+                f"Enter the environment variable name for your {selected_provider.capitalize()} API key",
+                type=str,
+                default="",
+            )
+
+        api_key_value = ""
+        click.echo(
+            f"Enter your {selected_provider.capitalize()} API key (press Enter to skip): ",
+            nl=False,
+        )
+        try:
+            api_key_value = input()
+        except (KeyboardInterrupt, EOFError):
+            api_key_value = ""

-    env_vars = {api_key_var: "YOUR_API_KEY_HERE"}
-    write_env_file(folder_path, env_vars)
+        if api_key_value.strip():
+            env_vars = {api_key_var: api_key_value}
+            write_env_file(folder_path, env_vars)
+            click.secho("API key saved to .env file", fg="green")
+        else:
+            click.secho(
+                "No API key provided. Skipping .env file creation.", fg="yellow"
+            )

-    # env_vars['MODEL'] = model
-    # click.secho(f"Selected model: {model}", fg="green")
+        env_vars["MODEL"] = selected_model
+        click.secho(f"Selected model: {selected_model}", fg="green")

    package_dir = Path(__file__).parent
    templates_dir = package_dir / "templates" / "crew"
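The net effect for `crewai create crew` is a scaffolded project whose `.env` holds the chosen key and, judging by the `env_vars["MODEL"]` assignment above, the selected model. A hypothetical result for an OpenAI selection (values are placeholders):

```bash
# my_crew/.env — sketch of the generated file
OPENAI_API_KEY=sk-your-key-here
MODEL=gpt-4o-mini
```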
@@ -3,12 +3,13 @@ import subprocess

import click


-def install_crew() -> None:
+def install_crew(proxy_options: list[str]) -> None:
    """
    Install the crew by running the UV command to lock and install.
    """
    try:
-        subprocess.run(["uv", "sync"], check=True, capture_output=False, text=True)
+        command = ["uv", "sync"] + proxy_options
+        subprocess.run(command, check=True, capture_output=False, text=True)

    except subprocess.CalledProcessError as e:
        click.echo(f"An error occurred while running the crew: {e}", err=True)
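Combined with the `allow_extra_args` change to the `install` command earlier in this diff, any extra flags are forwarded verbatim to `uv sync`. A hypothetical invocation — the exact flag (`--index-url` here) depends on what your `uv` version accepts:

```bash
# Extra arguments after `install` are passed straight through to `uv sync`
crewai install --index-url https://pypi.internal.example.com/simple
```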
@@ -3,11 +3,11 @@ import subprocess

import click


-def run_flow() -> None:
+def kickoff_flow() -> None:
    """
-    Run the flow by running a command in the UV environment.
+    Kickoff the flow by running a command in the UV environment.
    """
-    command = ["uv", "run", "run_flow"]
+    command = ["uv", "run", "kickoff"]

    try:
        result = subprocess.run(command, capture_output=False, text=True, check=True)
@@ -7,7 +7,7 @@ def plot_flow() -> None:

    """
    Plot the flow by running a command in the UV environment.
    """
-    command = ["uv", "run", "plot_flow"]
+    command = ["uv", "run", "plot"]

    try:
        result = subprocess.run(command, capture_output=False, text=True, check=True)
@@ -1,67 +1,91 @@

import json
import time
-import requests
from collections import defaultdict
+from pathlib import Path

import click
-from pathlib import Path
-from crewai.cli.constants import PROVIDERS, MODELS, JSON_URL
+import requests
+
+from crewai.cli.constants import JSON_URL, MODELS, PROVIDERS
 def select_choice(prompt_message, choices):
     """
     Presents a list of choices to the user and prompts them to select one.

     Args:
         - prompt_message (str): The message to display to the user before presenting the choices.
         - choices (list): A list of options to present to the user.

     Returns:
-        - str: The selected choice from the list, or None if the operation is aborted or an invalid selection is made.
+        - str: The selected choice from the list, or None if the user chooses to quit.
     """
+    provider_models = get_provider_data()
+    if not provider_models:
+        return
     click.secho(prompt_message, fg="cyan")
     for idx, choice in enumerate(choices, start=1):
         click.secho(f"{idx}. {choice}", fg="cyan")
-    try:
-        selected_index = click.prompt("Enter the number of your choice", type=int) - 1
-    except click.exceptions.Abort:
-        click.secho("Operation aborted by the user.", fg="red")
-        return None
-    if not (0 <= selected_index < len(choices)):
-        click.secho("Invalid selection.", fg="red")
-        return None
-    return choices[selected_index]
+    click.secho("q. Quit", fg="cyan")
+
+    while True:
+        choice = click.prompt(
+            "Enter the number of your choice or 'q' to quit", type=str
+        )
+
+        if choice.lower() == "q":
+            return None
+
+        try:
+            selected_index = int(choice) - 1
+            if 0 <= selected_index < len(choices):
+                return choices[selected_index]
+        except ValueError:
+            pass
+
+        click.secho(
+            "Invalid selection. Please select a number between 1 and 6 or 'q' to quit.",
+            fg="red",
+        )
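One detail worth flagging in the new loop: the error message hard-codes "between 1 and 6" even though `choices` can be any length. A generic variant would derive the bound from the list itself; a sketch:

```python
# Sketch: derive the upper bound from the actual choice list instead of
# hard-coding "6".
click.secho(
    f"Invalid selection. Please select a number between 1 and {len(choices)} "
    "or 'q' to quit.",
    fg="red",
)
```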
 def select_provider(provider_models):
     """
     Presents a list of providers to the user and prompts them to select one.

     Args:
         - provider_models (dict): A dictionary of provider models.

     Returns:
-        - str: The selected provider, or None if the operation is aborted or an invalid selection is made.
+        - str: The selected provider
+        - None: If user explicitly quits
     """
     predefined_providers = [p.lower() for p in PROVIDERS]
     all_providers = sorted(set(predefined_providers + list(provider_models.keys())))

-    provider = select_choice("Select a provider to set up:", predefined_providers + ['other'])
-    if not provider:
+    provider = select_choice(
+        "Select a provider to set up:", predefined_providers + ["other"]
+    )
+    if provider is None:  # User typed 'q'
         return None
+    provider = provider.lower()

-    if provider == 'other':
+    if provider == "other":
         provider = select_choice("Select a provider from the full list:", all_providers)
-        if not provider:
+        if provider is None:  # User typed 'q'
             return None
+    return provider

-    return provider.lower() if provider else False
 def select_model(provider, provider_models):
     """
     Presents a list of models for a given provider to the user and prompts them to select one.

     Args:
         - provider (str): The provider for which to select a model.
         - provider_models (dict): A dictionary of provider models.

     Returns:
         - str: The selected model, or None if the operation is aborted or an invalid selection is made.
     """
@@ -76,37 +100,49 @@ def select_model(provider, provider_models):
         click.secho(f"No models available for provider '{provider}'.", fg="red")
         return None

-    selected_model = select_choice(f"Select a model to use for {provider.capitalize()}:", available_models)
+    selected_model = select_choice(
+        f"Select a model to use for {provider.capitalize()}:", available_models
+    )
     return selected_model
 def load_provider_data(cache_file, cache_expiry):
     """
     Loads provider data from a cache file if it exists and is not expired. If the cache is expired or corrupted, it fetches the data from the web.

     Args:
         - cache_file (Path): The path to the cache file.
         - cache_expiry (int): The cache expiry time in seconds.

     Returns:
         - dict or None: The loaded provider data or None if the operation fails.
     """
     current_time = time.time()
-    if cache_file.exists() and (current_time - cache_file.stat().st_mtime) < cache_expiry:
+    if (
+        cache_file.exists()
+        and (current_time - cache_file.stat().st_mtime) < cache_expiry
+    ):
         data = read_cache_file(cache_file)
         if data:
             return data
-        click.secho("Cache is corrupted. Fetching provider data from the web...", fg="yellow")
+        click.secho(
+            "Cache is corrupted. Fetching provider data from the web...", fg="yellow"
+        )
     else:
-        click.secho("Cache expired or not found. Fetching provider data from the web...", fg="cyan")
+        click.secho(
+            "Cache expired or not found. Fetching provider data from the web...",
+            fg="cyan",
+        )
     return fetch_provider_data(cache_file)
 def read_cache_file(cache_file):
     """
     Reads and returns the JSON content from a cache file. Returns None if the file contains invalid JSON.

     Args:
         - cache_file (Path): The path to the cache file.

     Returns:
         - dict or None: The JSON content of the cache file or None if the JSON is invalid.
     """
@@ -116,13 +152,14 @@ def read_cache_file(cache_file):
     except json.JSONDecodeError:
         return None
 def fetch_provider_data(cache_file):
     """
     Fetches provider data from a specified URL and caches it to a file.

     Args:
         - cache_file (Path): The path to the cache file.

     Returns:
         - dict or None: The fetched provider data or None if the operation fails.
     """
@@ -139,38 +176,42 @@ def fetch_provider_data(cache_file):
         click.secho("Error parsing provider data. Invalid JSON format.", fg="red")
         return None
 def download_data(response):
     """
     Downloads data from a given HTTP response and returns the JSON content.

     Args:
         - response (requests.Response): The HTTP response object.

     Returns:
         - dict: The JSON content of the response.
     """
-    total_size = int(response.headers.get('content-length', 0))
+    total_size = int(response.headers.get("content-length", 0))
     block_size = 8192
     data_chunks = []
-    with click.progressbar(length=total_size, label='Downloading', show_pos=True) as progress_bar:
+    with click.progressbar(
+        length=total_size, label="Downloading", show_pos=True
+    ) as progress_bar:
         for chunk in response.iter_content(block_size):
             if chunk:
                 data_chunks.append(chunk)
                 progress_bar.update(len(chunk))
-    data_content = b''.join(data_chunks)
-    return json.loads(data_content.decode('utf-8'))
+    data_content = b"".join(data_chunks)
+    return json.loads(data_content.decode("utf-8"))
 def get_provider_data():
     """
     Retrieves provider data from a cache file, filters out models based on provider criteria, and returns a dictionary of providers mapped to their models.

     Returns:
         - dict or None: A dictionary of providers mapped to their models or None if the operation fails.
     """
-    cache_dir = Path.home() / '.crewai'
+    cache_dir = Path.home() / ".crewai"
     cache_dir.mkdir(exist_ok=True)
-    cache_file = cache_dir / 'provider_cache.json'
+    cache_file = cache_dir / "provider_cache.json"
     cache_expiry = 24 * 3600

     data = load_provider_data(cache_file, cache_expiry)
     if not data:
@@ -179,8 +220,8 @@ def get_provider_data():
     provider_models = defaultdict(list)
     for model_name, properties in data.items():
         provider = properties.get("litellm_provider", "").strip().lower()
-        if 'http' in provider or provider == 'other':
+        if "http" in provider or provider == "other":
             continue
         if provider:
             provider_models[provider].append(model_name)
     return provider_models
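The filtering above groups litellm's flat model catalog by provider and drops proxy-style entries. A self-contained sketch of that grouping, using made-up catalog entries:

```python
from collections import defaultdict

# Made-up entries in the litellm JSON shape consumed by get_provider_data().
data = {
    "gpt-4o": {"litellm_provider": "openai"},
    "claude-3-5-sonnet": {"litellm_provider": "anthropic"},
    "proxy-model": {"litellm_provider": "http://example.com"},  # filtered out
}

provider_models = defaultdict(list)
for model_name, properties in data.items():
    provider = properties.get("litellm_provider", "").strip().lower()
    if "http" in provider or provider == "other":
        continue
    if provider:
        provider_models[provider].append(model_name)

print(dict(provider_models))
# {'openai': ['gpt-4o'], 'anthropic': ['claude-3-5-sonnet']}
```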
@@ -1,10 +1,9 @@
 import subprocess

 import click
-import tomllib
 from packaging import version

-from crewai.cli.utils import get_crewai_version
+from crewai.cli.utils import get_crewai_version, read_toml


 def run_crew() -> None:
@@ -15,10 +14,9 @@ def run_crew() -> None:
     crewai_version = get_crewai_version()
     min_required_version = "0.71.0"

-    with open("pyproject.toml", "rb") as f:
-        data = tomllib.load(f)
+    pyproject_data = read_toml()

-    if data.get("tool", {}).get("poetry") and (
+    if pyproject_data.get("tool", {}).get("poetry") and (
         version.parse(crewai_version) < version.parse(min_required_version)
     ):
         click.secho(
@@ -35,10 +33,7 @@ def run_crew() -> None:
         click.echo(f"An error occurred while running the crew: {e}", err=True)
         click.echo(e.output, err=True, nl=True)

-        with open("pyproject.toml", "rb") as f:
-            data = tomllib.load(f)
-
-        if data.get("tool", {}).get("poetry"):
+        if pyproject_data.get("tool", {}).get("poetry"):
             click.secho(
                 "It's possible that you are using an old version of crewAI that uses poetry, please run `crewai update` to update your pyproject.toml to use uv.",
                 fg="yellow",
@@ -3,7 +3,7 @@ import sys
 from {{folder_name}}.crew import {{crew_name}}Crew

 # This main file is intended to be a way for you to run your
-# crew locally, so refrain from adding necessary logic into this file.
+# crew locally, so refrain from adding unnecessary logic into this file.
 # Replace with inputs you want to test with, it will automatically
 # interpolate any tasks and agents information
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.74.0,<1.0.0"
+    "crewai[tools]>=0.76.2,<1.0.0"
 ]

 [project.scripts]
@@ -1,11 +1,17 @@
 from typing import Type
+
 from crewai_tools import BaseTool
 from pydantic import BaseModel, Field

+
 class MyCustomToolInput(BaseModel):
     """Input schema for MyCustomTool."""
+
     argument: str = Field(..., description="Description of the argument.")

+
 class MyCustomTool(BaseTool):
     name: str = "Name of my tool"
     description: str = (
         "Clear description for what this tool is useful for, you agent will need this information to use it."
     )
     args_schema: Type[BaseModel] = MyCustomToolInput

     def _run(self, argument: str) -> str:
         # Implementation goes here
@@ -1,65 +1,53 @@
 #!/usr/bin/env python
-import asyncio
 from random import randint

 from pydantic import BaseModel

 from crewai.flow.flow import Flow, listen, start

 from .crews.poem_crew.poem_crew import PoemCrew


 class PoemState(BaseModel):
     sentence_count: int = 1
     poem: str = ""


 class PoemFlow(Flow[PoemState]):

     @start()
     def generate_sentence_count(self):
         print("Generating sentence count")
-        # Generate a number between 1 and 5
-        self.state.sentence_count = randint(1, 5)
+        self.state.sentence_count = randint(1, 5)

     @listen(generate_sentence_count)
     def generate_poem(self):
         print("Generating poem")
-        print(f"State before poem: {self.state}")
-        result = PoemCrew().crew().kickoff(inputs={"sentence_count": self.state.sentence_count})
+        result = (
+            PoemCrew()
+            .crew()
+            .kickoff(inputs={"sentence_count": self.state.sentence_count})
+        )

         print("Poem generated", result.raw)
         self.state.poem = result.raw
-        print(f"State after generate_poem: {self.state}")

     @listen(generate_poem)
     def save_poem(self):
         print("Saving poem")
-        print(f"State before save_poem: {self.state}")
         with open("poem.txt", "w") as f:
             f.write(self.state.poem)
-        print(f"State after save_poem: {self.state}")


-async def run_flow():
-    """
-    Run the flow.
-    """
+def kickoff():
     poem_flow = PoemFlow()
-    await poem_flow.kickoff()
+    poem_flow.kickoff()


-async def plot_flow():
-    """
-    Plot the flow.
-    """
+def plot():
     poem_flow = PoemFlow()
     poem_flow.plot()


-def main():
-    asyncio.run(run_flow())
-
-
-def plot():
-    asyncio.run(plot_flow())
-
-
 if __name__ == "__main__":
-    main()
+    kickoff()
@@ -5,14 +5,12 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.74.0,<1.0.0",
-    "asyncio"
+    "crewai[tools]>=0.76.2,<1.0.0",
 ]

 [project.scripts]
-{{folder_name}} = "{{folder_name}}.main:main"
-run_flow = "{{folder_name}}.main:main"
-plot_flow = "{{folder_name}}.main:plot"
+kickoff = "{{folder_name}}.main:kickoff"
+plot = "{{folder_name}}.main:plot"

 [build-system]
 requires = ["hatchling"]
@@ -1,4 +1,13 @@
 from typing import Type
+
 from crewai_tools import BaseTool
 from pydantic import BaseModel, Field
+
+
+class MyCustomToolInput(BaseModel):
+    """Input schema for MyCustomTool."""
+
+    argument: str = Field(..., description="Description of the argument.")
+
+
 class MyCustomTool(BaseTool):
@@ -6,6 +15,7 @@ class MyCustomTool(BaseTool):
     description: str = (
         "Clear description for what this tool is useful for, you agent will need this information to use it."
     )
     args_schema: Type[BaseModel] = MyCustomToolInput

     def _run(self, argument: str) -> str:
         # Implementation goes here
@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]

 [tool.poetry.dependencies]
 python = ">=3.10,<=3.13"
-crewai = { extras = ["tools"], version = ">=0.74.0,<1.0.0" }
+crewai = { extras = ["tools"], version = ">=0.76.2,<1.0.0" }
 asyncio = "*"

 [tool.poetry.scripts]
@@ -1,11 +1,17 @@
 from typing import Type
+
 from crewai_tools import BaseTool
 from pydantic import BaseModel, Field

+
 class MyCustomToolInput(BaseModel):
     """Input schema for MyCustomTool."""
+
     argument: str = Field(..., description="Description of the argument.")

+
 class MyCustomTool(BaseTool):
     name: str = "Name of my tool"
     description: str = (
         "Clear description for what this tool is useful for, you agent will need this information to use it."
     )
     args_schema: Type[BaseModel] = MyCustomToolInput

     def _run(self, argument: str) -> str:
         # Implementation goes here
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = ["Your Name <you@example.com>"]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.74.0,<1.0.0"
+    "crewai[tools]>=0.76.2,<1.0.0"
 ]

 [project.scripts]
@@ -1,11 +1,17 @@
 from typing import Type
+
 from crewai_tools import BaseTool
 from pydantic import BaseModel, Field

+
 class MyCustomToolInput(BaseModel):
     """Input schema for MyCustomTool."""
+
     argument: str = Field(..., description="Description of the argument.")

+
 class MyCustomTool(BaseTool):
     name: str = "Name of my tool"
     description: str = (
         "Clear description for what this tool is useful for, you agent will need this information to use it."
     )
     args_schema: Type[BaseModel] = MyCustomToolInput

     def _run(self, argument: str) -> str:
         # Implementation goes here
@@ -5,6 +5,6 @@ description = "Power up your crews with {{folder_name}}"
 readme = "README.md"
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.74.0"
+    "crewai[tools]>=0.76.2"
 ]
@@ -28,8 +28,6 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
     A class to handle tool repository related operations for CrewAI projects.
     """

-    BASE_URL = "https://app.crewai.com/pypi/"
-
     def __init__(self):
         BaseCommand.__init__(self)
         PlusAPIMixin.__init__(self, telemetry=self._telemetry)
@@ -178,12 +176,14 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
     def _add_package(self, tool_details):
         tool_handle = tool_details["handle"]
         repository_handle = tool_details["repository"]["handle"]
         repository_url = tool_details["repository"]["url"]
+        index = f"{repository_handle}={repository_url}"

         add_package_command = [
             "uv",
             "add",
             "--extra-index-url",
             self.BASE_URL + repository_handle,
+            "--index",
+            index,
             tool_handle,
         ]
         add_package_result = subprocess.run(
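Putting the pieces together with the sample values from the test further down, the assembled command looks like this:

```python
# Values taken from test_install_success below; the tool handle and URLs
# are test fixtures, not real packages.
add_package_command = [
    "uv",
    "add",
    "--extra-index-url",
    "https://app.crewai.com/pypi/sample-repo",
    "--index",
    "sample-repo=https://example.com/repo",
    "sample-tool",
]
# Equivalent shell invocation:
#   uv add --extra-index-url https://app.crewai.com/pypi/sample-repo \
#          --index sample-repo=https://example.com/repo sample-tool
```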
@@ -1,7 +1,9 @@
 import os
 import shutil

 import tomli_w
-import tomllib
+
+from crewai.cli.utils import read_toml


 def update_crew() -> None:
@@ -17,10 +19,9 @@ def migrate_pyproject(input_file, output_file):
     And it will be used to migrate the pyproject.toml to the new format when uv is used.
     When the time comes that uv supports the new format, this function will be deprecated.
     """
+    poetry_data = {}
     # Read the input pyproject.toml
-    with open(input_file, "rb") as f:
-        pyproject = tomllib.load(f)
+    pyproject_data = read_toml()

     # Initialize the new project structure
     new_pyproject = {
@@ -29,30 +30,30 @@ def migrate_pyproject(input_file, output_file):
     }

     # Migrate project metadata
-    if "tool" in pyproject and "poetry" in pyproject["tool"]:
-        poetry = pyproject["tool"]["poetry"]
-        new_pyproject["project"]["name"] = poetry.get("name")
-        new_pyproject["project"]["version"] = poetry.get("version")
-        new_pyproject["project"]["description"] = poetry.get("description")
+    if "tool" in pyproject_data and "poetry" in pyproject_data["tool"]:
+        poetry_data = pyproject_data["tool"]["poetry"]
+        new_pyproject["project"]["name"] = poetry_data.get("name")
+        new_pyproject["project"]["version"] = poetry_data.get("version")
+        new_pyproject["project"]["description"] = poetry_data.get("description")
         new_pyproject["project"]["authors"] = [
             {
                 "name": author.split("<")[0].strip(),
                 "email": author.split("<")[1].strip(">").strip(),
             }
-            for author in poetry.get("authors", [])
+            for author in poetry_data.get("authors", [])
         ]
-        new_pyproject["project"]["requires-python"] = poetry.get("python")
+        new_pyproject["project"]["requires-python"] = poetry_data.get("python")
     else:
         # If it's already in the new format, just copy the project section
-        new_pyproject["project"] = pyproject.get("project", {})
+        new_pyproject["project"] = pyproject_data.get("project", {})

     # Migrate or copy dependencies
     if "dependencies" in new_pyproject["project"]:
         # If dependencies are already in the new format, keep them as is
         pass
-    elif "dependencies" in poetry:
+    elif poetry_data and "dependencies" in poetry_data:
         new_pyproject["project"]["dependencies"] = []
-        for dep, version in poetry["dependencies"].items():
+        for dep, version in poetry_data["dependencies"].items():
             if isinstance(version, dict):  # Handle extras
                 extras = ",".join(version.get("extras", []))
                 new_dep = f"{dep}[{extras}]"
@@ -66,10 +67,10 @@ def migrate_pyproject(input_file, output_file):
                 new_pyproject["project"]["dependencies"].append(new_dep)

     # Migrate or copy scripts
-    if "scripts" in poetry:
-        new_pyproject["project"]["scripts"] = poetry["scripts"]
-    elif "scripts" in pyproject.get("project", {}):
-        new_pyproject["project"]["scripts"] = pyproject["project"]["scripts"]
+    if poetry_data and "scripts" in poetry_data:
+        new_pyproject["project"]["scripts"] = poetry_data["scripts"]
+    elif pyproject_data.get("project", {}) and "scripts" in pyproject_data["project"]:
+        new_pyproject["project"]["scripts"] = pyproject_data["project"]["scripts"]
     else:
         new_pyproject["project"]["scripts"] = {}

@@ -86,14 +87,23 @@ def migrate_pyproject(input_file, output_file):
         new_pyproject["project"]["scripts"]["run_crew"] = f"{module_name}.main:run"

     # Migrate optional dependencies
-    if "extras" in poetry:
-        new_pyproject["project"]["optional-dependencies"] = poetry["extras"]
+    if poetry_data and "extras" in poetry_data:
+        new_pyproject["project"]["optional-dependencies"] = poetry_data["extras"]

+    # Backup the old pyproject.toml
+    backup_file = "pyproject-old.toml"
+    shutil.copy2(input_file, backup_file)
+    print(f"Original pyproject.toml backed up as {backup_file}")
+
+    # Rename the poetry.lock file
+    lock_file = "poetry.lock"
+    lock_backup = "poetry-old.lock"
+    if os.path.exists(lock_file):
+        os.rename(lock_file, lock_backup)
+        print(f"Original poetry.lock renamed to {lock_backup}")
+    else:
+        print("No poetry.lock file found to rename.")
+
     # Write the new pyproject.toml
     with open(output_file, "wb") as f:
         tomli_w.dump(new_pyproject, f)
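The extras branch above turns Poetry's table syntax into a PEP 508 extras specifier. A self-contained sketch of that conversion with a made-up entry:

```python
# One Poetry-style dependency entry, as it appears under
# [tool.poetry.dependencies] after parsing.
dep = "crewai"
version = {"extras": ["tools"], "version": ">=0.76.2,<1.0.0"}

if isinstance(version, dict):  # Handle extras
    extras = ",".join(version.get("extras", []))
    new_dep = f"{dep}[{extras}]"
    print(new_dep)  # crewai[tools]
```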
@@ -6,6 +6,7 @@ from functools import reduce
 from typing import Any, Dict, List

 import click
+import tomli
 from rich.console import Console

 from crewai.cli.authentication.utils import TokenManager
@@ -54,6 +55,13 @@ def simple_toml_parser(content):
     return result


+def read_toml(file_path: str = "pyproject.toml"):
+    """Read the content of a TOML file and return it as a dictionary."""
+    with open(file_path, "rb") as f:
+        toml_dict = tomli.load(f)
+    return toml_dict
+
+
 def parse_toml(content):
     if sys.version_info >= (3, 11):
         return tomllib.loads(content)
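With `read_toml()` in place, every CLI command loads pyproject.toml the same way. A usage sketch:

```python
from crewai.cli.utils import read_toml

pyproject_data = read_toml()  # defaults to "pyproject.toml" in the cwd
if pyproject_data.get("tool", {}).get("poetry"):
    print("This project still carries Poetry metadata; run `crewai update`.")
```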
@@ -435,15 +435,16 @@ class Crew(BaseModel):
         self, n_iterations: int, filename: str, inputs: Optional[Dict[str, Any]] = {}
     ) -> None:
         """Trains the crew for a given number of iterations."""
-        self._setup_for_training(filename)
+        train_crew = self.copy()
+        train_crew._setup_for_training(filename)

         for n_iteration in range(n_iterations):
-            self._train_iteration = n_iteration
-            self.kickoff(inputs=inputs)
+            train_crew._train_iteration = n_iteration
+            train_crew.kickoff(inputs=inputs)

         training_data = CrewTrainingHandler(TRAINING_DATA_FILE).load()

-        for agent in self.agents:
+        for agent in train_crew.agents:
             result = TaskEvaluator(agent).evaluate_training_data(
                 training_data=training_data, agent_id=str(agent.id)
             )
@@ -987,17 +988,19 @@ class Crew(BaseModel):
         inputs: Optional[Dict[str, Any]] = None,
     ) -> None:
         """Test and evaluate the Crew with the given inputs for n iterations concurrently using concurrent.futures."""
-        self._test_execution_span = self._telemetry.test_execution_span(
-            self,
+        test_crew = self.copy()
+
+        self._test_execution_span = test_crew._telemetry.test_execution_span(
+            test_crew,
             n_iterations,
             inputs,
-            openai_model_name,  # type: ignore[arg-type]
-        )
-        evaluator = CrewEvaluator(self, openai_model_name)  # type: ignore[arg-type]
+            openai_model_name,
+        )  # type: ignore[arg-type]
+        evaluator = CrewEvaluator(test_crew, openai_model_name)  # type: ignore[arg-type]

         for i in range(1, n_iterations + 1):
             evaluator.set_iteration(i)
-            self.kickoff(inputs=inputs)
+            test_crew.kickoff(inputs=inputs)

         evaluator.print_crew_evaluation_result()
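Because `train()` and `test()` now operate on `self.copy()`, iteration counters and any state mutated during evaluation stay on the throwaway copy rather than leaking into the caller's crew. The calling pattern is unchanged (arguments below mirror the tests later in this diff):

```python
# `crew` itself is left untouched after both calls; all mutation happens
# on the internal copy.
crew.train(n_iterations=2, filename="trained_agents_data.pkl", inputs={"topic": "AI"})
crew.test(n_iterations=2, openai_model_name="gpt-4o-mini", inputs={"topic": "AI"})
```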
@@ -190,7 +190,10 @@ class Flow(Generic[T], metaclass=FlowMeta):
         """Returns the list of all outputs from executed methods."""
         return self._method_outputs

-    async def kickoff(self) -> Any:
+    def kickoff(self) -> Any:
+        return asyncio.run(self.kickoff_async())
+
+    async def kickoff_async(self) -> Any:
         if not self._start_methods:
             raise ValueError("No start method defined")
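The split gives synchronous callers a plain `kickoff()` (which wraps `asyncio.run()`) while async code awaits `kickoff_async()` directly. A sketch assuming the PoemFlow template shown earlier:

```python
import asyncio

# Synchronous entry point — no event loop management needed.
PoemFlow().kickoff()


# From inside already-async code, await the async variant instead of
# calling kickoff(), which would try to start a second event loop.
async def main():
    await PoemFlow().kickoff_async()

asyncio.run(main())
```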
@@ -16,7 +16,7 @@ class EntityMemory(Memory):
             if storage
             else RAGStorage(
                 type="entities",
-                allow_reset=False,
+                allow_reset=True,
                 embedder_config=embedder_config,
                 crew=crew,
             )
@@ -8,6 +8,9 @@ from typing import Any, Dict, List, Optional

 from crewai.memory.storage.base_rag_storage import BaseRAGStorage
 from crewai.utilities.paths import db_storage_path
+from chromadb.api import ClientAPI
+from chromadb.api.types import validate_embedding_function
+from chromadb import Documents, EmbeddingFunction, Embeddings
+from typing import cast


 @contextlib.contextmanager
@@ -41,16 +44,93 @@ class RAGStorage(BaseRAGStorage):
         self.agents = agents

         self.type = type
-        self.embedder_config = embedder_config or self._create_embedding_function()
+        self.allow_reset = allow_reset
         self._initialize_app()

+    def _set_embedder_config(self):
+        import chromadb.utils.embedding_functions as embedding_functions
+
+        if self.embedder_config is None:
+            self.embedder_config = self._create_default_embedding_function()
+
+        if isinstance(self.embedder_config, dict):
+            provider = self.embedder_config.get("provider")
+            config = self.embedder_config.get("config", {})
+            model_name = config.get("model")
+            if provider == "openai":
+                self.embedder_config = embedding_functions.OpenAIEmbeddingFunction(
+                    api_key=config.get("api_key") or os.getenv("OPENAI_API_KEY"),
+                    model_name=model_name,
+                )
+            elif provider == "azure":
+                self.embedder_config = embedding_functions.OpenAIEmbeddingFunction(
+                    api_key=config.get("api_key"),
+                    api_base=config.get("api_base"),
+                    api_type=config.get("api_type", "azure"),
+                    api_version=config.get("api_version"),
+                    model_name=model_name,
+                )
+            elif provider == "ollama":
+                from openai import OpenAI
+
+                class OllamaEmbeddingFunction(EmbeddingFunction):
+                    def __call__(self, input: Documents) -> Embeddings:
+                        client = OpenAI(
+                            base_url="http://localhost:11434/v1",
+                            api_key=config.get("api_key", "ollama"),
+                        )
+                        try:
+                            response = client.embeddings.create(
+                                input=input, model=model_name
+                            )
+                            embeddings = [item.embedding for item in response.data]
+                            return cast(Embeddings, embeddings)
+                        except Exception as e:
+                            raise e
+
+                self.embedder_config = OllamaEmbeddingFunction()
+            elif provider == "vertexai":
+                self.embedder_config = (
+                    embedding_functions.GoogleVertexEmbeddingFunction(
+                        model_name=model_name,
+                        api_key=config.get("api_key"),
+                    )
+                )
+            elif provider == "google":
+                self.embedder_config = (
+                    embedding_functions.GoogleGenerativeAiEmbeddingFunction(
+                        model_name=model_name,
+                        api_key=config.get("api_key"),
+                    )
+                )
+            elif provider == "cohere":
+                self.embedder_config = embedding_functions.CohereEmbeddingFunction(
+                    model_name=model_name,
+                    api_key=config.get("api_key"),
+                )
+            elif provider == "huggingface":
+                self.embedder_config = embedding_functions.HuggingFaceEmbeddingServer(
+                    url=config.get("api_url"),
+                )
+            else:
+                raise Exception(
+                    f"Unsupported embedding provider: {provider}, supported providers: [openai, azure, ollama, vertexai, google, cohere, huggingface]"
+                )
+        else:
+            validate_embedding_function(self.embedder_config)  # type: ignore # used for validating embedder_config if defined a embedding function/class
+            self.embedder_config = self.embedder_config

     def _initialize_app(self):
         import chromadb
         from chromadb.config import Settings

+        self._set_embedder_config()
         chroma_client = chromadb.PersistentClient(
-            path=f"{db_storage_path()}/{self.type}/{self.agents}"
+            path=f"{db_storage_path()}/{self.type}/{self.agents}",
+            settings=Settings(allow_reset=self.allow_reset),
         )

         self.app = chroma_client

         try:
@@ -122,11 +202,15 @@ class RAGStorage(BaseRAGStorage):
             if self.app:
                 self.app.reset()
         except Exception as e:
-            raise Exception(
-                f"An error occurred while resetting the {self.type} memory: {e}"
-            )
+            if "attempt to write a readonly database" in str(e):
+                # Ignore this specific error
+                pass
+            else:
+                raise Exception(
+                    f"An error occurred while resetting the {self.type} memory: {e}"
+                )

-    def _create_embedding_function(self):
+    def _create_default_embedding_function(self):
         import chromadb.utils.embedding_functions as embedding_functions

         return embedding_functions.OpenAIEmbeddingFunction(
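`_set_embedder_config()` accepts either a ready-made embedding function or a plain dict naming one of the providers handled above. A configuration sketch (keyword names follow the `RAGStorage(...)` call in the EntityMemory hunk; the model name is just an example):

```python
storage = RAGStorage(
    type="short_term",
    allow_reset=True,
    embedder_config={
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
)
# At _initialize_app() time this dict is resolved into
# embedding_functions.OpenAIEmbeddingFunction(...).
```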
@@ -6,14 +6,13 @@ from difflib import SequenceMatcher
 from textwrap import dedent
 from typing import Any, List, Union

+import crewai.utilities.events as events
 from crewai.agents.tools_handler import ToolsHandler
 from crewai.task import Task
 from crewai.telemetry import Telemetry
 from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
 from crewai.tools.tool_usage_events import ToolUsageError, ToolUsageFinished
 from crewai.utilities import I18N, Converter, ConverterError, Printer
-import crewai.utilities.events as events

 agentops = None
 if os.environ.get("AGENTOPS_API_KEY"):
@@ -300,8 +299,11 @@ class ToolUsage:
         descriptions = []
         for tool in self.tools:
             args = {
-                k: {k2: v2 for k2, v2 in v.items() if k2 in ["description", "type"]}
-                for k, v in tool.args.items()
+                name: {
+                    "description": field.description,
+                    "type": field.annotation.__name__,
+                }
+                for name, field in tool.args_schema.model_fields.items()
             }
             descriptions.append(
                 "\n".join(
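The renderer now pulls argument metadata from the tool's pydantic schema instead of the looser `tool.args` mapping. A standalone sketch of that extraction (pydantic v2):

```python
from pydantic import BaseModel, Field


class AddInput(BaseModel):
    a: int = Field(..., description="First addend")
    b: int = Field(..., description="Second addend")


args = {
    name: {
        "description": field.description,
        "type": field.annotation.__name__,
    }
    for name, field in AddInput.model_fields.items()
}
print(args)
# {'a': {'description': 'First addend', 'type': 'int'},
#  'b': {'description': 'Second addend', 'type': 'int'}}
```

This is also why the new test file below expects `'type': 'int'` rather than the JSON-schema spelling `'integer'`.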
@@ -75,8 +75,8 @@ def test_install_success(mock_get, mock_subprocess_run):
         [
             "uv",
             "add",
             "--extra-index-url",
             "https://app.crewai.com/pypi/sample-repo",
+            "--index",
+            "sample-repo=https://example.com/repo",
             "sample-tool",
         ],
         capture_output=False,
@@ -9,6 +9,7 @@ from unittest.mock import MagicMock, patch
 import instructor
+import pydantic_core
 import pytest

 from crewai.agent import Agent
 from crewai.agents.cache import CacheHandler
 from crewai.crew import Crew
@@ -497,6 +498,7 @@ def test_cache_hitting_between_agents():
 @pytest.mark.vcr(filter_headers=["authorization"])
 def test_api_calls_throttling(capsys):
     from unittest.mock import patch
+
     from crewai_tools import tool

     @tool
@@ -779,11 +781,14 @@ def test_async_task_execution_call_count():
     list_important_history.output = mock_task_output
     write_article.output = mock_task_output

-    with patch.object(
-        Task, "execute_sync", return_value=mock_task_output
-    ) as mock_execute_sync, patch.object(
-        Task, "execute_async", return_value=mock_future
-    ) as mock_execute_async:
+    with (
+        patch.object(
+            Task, "execute_sync", return_value=mock_task_output
+        ) as mock_execute_sync,
+        patch.object(
+            Task, "execute_async", return_value=mock_future
+        ) as mock_execute_async,
+    ):
         crew.kickoff()

     assert mock_execute_async.call_count == 2
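The rewritten block uses parenthesized context managers, which require Python 3.10 or newer — consistent with the templates' `requires-python = ">=3.10,<=3.13"` floor. The same shape in miniature:

```python
# Parenthesized `with` groups (Python 3.10+) keep long patch chains readable.
with (
    open("a.txt", "w") as a,
    open("b.txt", "w") as b,
):
    a.write("left")
    b.write("right")
```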
@@ -1105,6 +1110,7 @@ def test_dont_set_agents_step_callback_if_already_set():
 @pytest.mark.vcr(filter_headers=["authorization"])
 def test_crew_function_calling_llm():
     from unittest.mock import patch
+
     from crewai_tools import tool

     llm = "gpt-4o"
@@ -1448,52 +1454,6 @@ def test_crew_does_not_interpolate_without_inputs():
     interpolate_task_inputs.assert_not_called()


-# def test_crew_partial_inputs():
-#     agent = Agent(
-#         role="{topic} Researcher",
-#         goal="Express hot takes on {topic}.",
-#         backstory="You have a lot of experience with {topic}.",
-#     )
-
-#     task = Task(
-#         description="Give me an analysis around {topic}.",
-#         expected_output="{points} bullet points about {topic}.",
-#     )
-
-#     crew = Crew(agents=[agent], tasks=[task], inputs={"topic": "AI"})
-#     inputs = {"topic": "AI"}
-#     crew._interpolate_inputs(inputs=inputs)  # Manual call for now
-
-#     assert crew.tasks[0].description == "Give me an analysis around AI."
-#     assert crew.tasks[0].expected_output == "{points} bullet points about AI."
-#     assert crew.agents[0].role == "AI Researcher"
-#     assert crew.agents[0].goal == "Express hot takes on AI."
-#     assert crew.agents[0].backstory == "You have a lot of experience with AI."
-
-
-# def test_crew_invalid_inputs():
-#     agent = Agent(
-#         role="{topic} Researcher",
-#         goal="Express hot takes on {topic}.",
-#         backstory="You have a lot of experience with {topic}.",
-#     )
-
-#     task = Task(
-#         description="Give me an analysis around {topic}.",
-#         expected_output="{points} bullet points about {topic}.",
-#     )
-
-#     crew = Crew(agents=[agent], tasks=[task], inputs={"subject": "AI"})
-#     inputs = {"subject": "AI"}
-#     crew._interpolate_inputs(inputs=inputs)  # Manual call for now
-
-#     assert crew.tasks[0].description == "Give me an analysis around {topic}."
-#     assert crew.tasks[0].expected_output == "{points} bullet points about {topic}."
-#     assert crew.agents[0].role == "{topic} Researcher"
-#     assert crew.agents[0].goal == "Express hot takes on {topic}."
-#     assert crew.agents[0].backstory == "You have a lot of experience with {topic}."
-
-
 def test_task_callback_on_crew():
     from unittest.mock import MagicMock, patch

@@ -1770,7 +1730,10 @@ def test_manager_agent_with_tools_raises_exception():
 @patch("crewai.crew.Crew.kickoff")
 @patch("crewai.crew.CrewTrainingHandler")
 @patch("crewai.crew.TaskEvaluator")
-def test_crew_train_success(task_evaluator, crew_training_handler, kickoff):
+@patch("crewai.crew.Crew.copy")
+def test_crew_train_success(
+    copy_mock, task_evaluator, crew_training_handler, kickoff_mock
+):
     task = Task(
         description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
         expected_output="5 bullet points with a paragraph for each idea.",
@@ -1781,9 +1744,19 @@ def test_crew_train_success(
         agents=[researcher, writer],
         tasks=[task],
     )

+    # Create a mock for the copied crew
+    copy_mock.return_value = crew
+
     crew.train(
         n_iterations=2, inputs={"topic": "AI"}, filename="trained_agents_data.pkl"
     )

+    # Ensure kickoff is called on the copied crew
+    kickoff_mock.assert_has_calls(
+        [mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
+    )
+
     task_evaluator.assert_has_calls(
         [
             mock.call(researcher),
@@ -1822,10 +1795,6 @@ def test_crew_train_success(
         ]
     )

-    kickoff.assert_has_calls(
-        [mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
-    )
-

 def test_crew_train_error():
     task = Task(
@@ -1840,7 +1809,7 @@ def test_crew_train_error():
     )

     with pytest.raises(TypeError) as e:
-        crew.train()
+        crew.train()  # type: ignore purposefully throwing err
     assert "train() missing 1 required positional argument: 'n_iterations'" in str(
         e
     )
@@ -2536,8 +2505,9 @@ def test_conditional_should_execute():


 @mock.patch("crewai.crew.CrewEvaluator")
+@mock.patch("crewai.crew.Crew.copy")
 @mock.patch("crewai.crew.Crew.kickoff")
-def test_crew_testing_function(mock_kickoff, crew_evaluator):
+def test_crew_testing_function(kickoff_mock, copy_mock, crew_evaluator):
     task = Task(
         description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
         expected_output="5 bullet points with a paragraph for each idea.",
@@ -2548,11 +2518,15 @@ def test_crew_testing_function(kickoff_mock, copy_mock, crew_evaluator):
         agents=[researcher],
         tasks=[task],
     )

+    # Create a mock for the copied crew
+    copy_mock.return_value = crew
+
     n_iterations = 2
     crew.test(n_iterations, openai_model_name="gpt-4o-mini", inputs={"topic": "AI"})

-    assert len(mock_kickoff.mock_calls) == n_iterations
-    mock_kickoff.assert_has_calls(
+    # Ensure kickoff is called on the copied crew
+    kickoff_mock.assert_has_calls(
         [mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
     )
tests/tools/test_tool_usage.py (new file, 119 lines)
@@ -0,0 +1,119 @@
import json
import random
from unittest.mock import MagicMock

import pytest
from crewai_tools import BaseTool
from pydantic import BaseModel, Field

from crewai import Agent, Task
from crewai.tools.tool_usage import ToolUsage


class RandomNumberToolInput(BaseModel):
    min_value: int = Field(
        ..., description="The minimum value of the range (inclusive)"
    )
    max_value: int = Field(
        ..., description="The maximum value of the range (inclusive)"
    )


class RandomNumberTool(BaseTool):
    name: str = "Random Number Generator"
    description: str = "Generates a random number within a specified range"
    args_schema: type[BaseModel] = RandomNumberToolInput

    def _run(self, min_value: int, max_value: int) -> int:
        return random.randint(min_value, max_value)


# Example agent and task
example_agent = Agent(
    role="Number Generator",
    goal="Generate random numbers for various purposes",
    backstory="You are an AI agent specialized in generating random numbers within specified ranges.",
    tools=[RandomNumberTool()],
    verbose=True,
)

example_task = Task(
    description="Generate a random number between 1 and 100",
    expected_output="A random number between 1 and 100",
    agent=example_agent,
)


def test_random_number_tool_range():
    tool = RandomNumberTool()
    result = tool._run(1, 10)
    assert 1 <= result <= 10


def test_random_number_tool_invalid_range():
    tool = RandomNumberTool()
    with pytest.raises(ValueError):
        tool._run(10, 1)  # min_value > max_value


def test_random_number_tool_schema():
    tool = RandomNumberTool()

    # Get the schema using model_json_schema()
    schema = tool.args_schema.model_json_schema()

    # Convert the schema to a string
    schema_str = json.dumps(schema)

    # Check if the schema string contains the expected fields
    assert "min_value" in schema_str
    assert "max_value" in schema_str

    # Parse the schema string back to a dictionary
    schema_dict = json.loads(schema_str)

    # Check if the schema contains the correct field types
    assert schema_dict["properties"]["min_value"]["type"] == "integer"
    assert schema_dict["properties"]["max_value"]["type"] == "integer"

    # Check if the schema contains the field descriptions
    assert (
        "minimum value" in schema_dict["properties"]["min_value"]["description"].lower()
    )
    assert (
        "maximum value" in schema_dict["properties"]["max_value"]["description"].lower()
    )


def test_tool_usage_render():
    tool = RandomNumberTool()

    tool_usage = ToolUsage(
        tools_handler=MagicMock(),
        tools=[tool],
        original_tools=[tool],
        tools_description="Sample tool for testing",
        tools_names="random_number_generator",
        task=MagicMock(),
        function_calling_llm=MagicMock(),
        agent=MagicMock(),
        action=MagicMock(),
    )

    rendered = tool_usage._render()

    # Updated checks to match the actual output
    assert "Tool Name: random number generator" in rendered
    assert (
        "Random Number Generator(min_value: 'integer', max_value: 'integer') - Generates a random number within a specified range min_value: 'The minimum value of the range (inclusive)', max_value: 'The maximum value of the range (inclusive)'"
        in rendered
    )
    assert "Tool Arguments:" in rendered
    assert (
        "'min_value': {'description': 'The minimum value of the range (inclusive)', 'type': 'int'}"
        in rendered
    )
    assert (
        "'max_value': {'description': 'The maximum value of the range (inclusive)', 'type': 'int'}"
        in rendered
    )