Mirror of https://github.com/crewAIInc/crewAI.git, synced 2025-12-17 21:08:29 +00:00

Compare commits: feat/cli-m...0.75.1 — 19 commits
| SHA1 |
| :--- |
| b8a3c29745 |
| 9cd4ff05c9 |
| 4687779702 |
| 8731915330 |
| 093259389e |
| 6bcb3d1080 |
| 71a217b210 |
| b98256e434 |
| 40f81aecf5 |
| d1737a96fb |
| 84f48c465d |
| 60efcad481 |
| 53a9f107ca |
| 6fa2b89831 |
| d72ebb9bb8 |
| 81ae07abdb |
| 6d20ba70a1 |
| 67f55bae2c |
| 9b59de1720 |
@@ -23,9 +23,9 @@ Flows allow you to create structured, event-driven workflows. They provide a sea

Let's create a simple Flow where you will use OpenAI to generate a random city in one task and then use that city to generate a fun fact in another task.

```python Code
-import asyncio
-
from crewai.flow.flow import Flow, listen, start
+from dotenv import load_dotenv
from litellm import completion
@@ -67,19 +67,19 @@ class ExampleFlow(Flow):

        return fun_fact

-async def main():
-    flow = ExampleFlow()
-    result = await flow.kickoff()
-
-    print(f"Generated fun fact: {result}")
-
-asyncio.run(main())
+flow = ExampleFlow()
+result = flow.kickoff()
+
+print(f"Generated fun fact: {result}")
```
In the above example, we have created a simple Flow that generates a random city using OpenAI and then generates a fun fact about that city. The Flow consists of two tasks: `generate_city` and `generate_fun_fact`. The `generate_city` task is the starting point of the Flow, and the `generate_fun_fact` task listens for the output of the `generate_city` task.

When you run the Flow, it will generate a random city and then generate a fun fact about that city. The output will be printed to the console.

**Note:** Ensure you have set up your `.env` file to store your `OPENAI_API_KEY`. This key is necessary for authenticating requests to the OpenAI API.

### @start()

The `@start()` decorator is used to mark a method as the starting point of a Flow. When a Flow is started, all the methods decorated with `@start()` are executed in parallel. You can have multiple start methods in a Flow, and they will all be executed when the Flow is started.
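The parallel-start behavior described above can be sketched without crewAI. The `start` decorator and `MiniFlow` class below are hypothetical stand-ins, not the library's implementation; they only illustrate the idea that every method tagged as a start point runs when the flow kicks off:

```python
# Minimal stand-in for the parallel-start semantics described above.
# NOT crewAI's implementation -- just an illustration of the idea that
# every method marked with @start runs when the flow kicks off.

def start(method):
    method.__is_start_method__ = True  # tag the method as a start point
    return method

class MiniFlow:
    def kickoff(self):
        # Collect every method tagged by @start and run each of them.
        results = {}
        for name in dir(self):
            attr = getattr(self, name)
            if callable(attr) and getattr(attr, "__is_start_method__", False):
                results[name] = attr()
        return results

class TwoStarts(MiniFlow):
    @start
    def fetch_weather(self):
        return "sunny"

    @start
    def fetch_news(self):
        return "headlines"

print(TwoStarts().kickoff())  # both start methods run
```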
@@ -119,7 +119,6 @@ Here's how you can access the final output:

<CodeGroup>

```python Code
-import asyncio
from crewai.flow.flow import Flow, listen, start

class OutputExampleFlow(Flow):
@@ -131,26 +130,24 @@ class OutputExampleFlow(Flow):

    def second_method(self, first_output):
        return f"Second method received: {first_output}"

-async def main():
-    flow = OutputExampleFlow()
-    final_output = await flow.kickoff()
-    print("---- Final Output ----")
-    print(final_output)
-
-asyncio.run(main())
+flow = OutputExampleFlow()
+final_output = flow.kickoff()
+
+print("---- Final Output ----")
+print(final_output)
```

``` text Output
---- Final Output ----
Second method received: Output from first_method
```

</CodeGroup>
In this example, the `second_method` is the last method to complete, so its output will be the final output of the Flow.
The `kickoff()` method will return the final output, which is then printed to the console.
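The "final output" rule described above can be illustrated with a tiny sketch. This is a hypothetical stand-in, not crewAI's code: the value returned by the last method to complete becomes the result of `kickoff()`.

```python
# Hypothetical sketch (not crewAI's implementation) of the final-output rule:
# the return value of the last chained step becomes the kickoff() result.

class MiniFlow:
    def kickoff(self, steps):
        output = None
        for step in steps:          # run the chained methods in order
            output = step(output)   # each step receives the previous output
        return output               # the last return value is the final output

def first_method(_):
    return "Output from first_method"

def second_method(first_output):
    return f"Second method received: {first_output}"

flow = MiniFlow()
final_output = flow.kickoff([first_method, second_method])
print(final_output)  # Second method received: Output from first_method
```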
#### Accessing and Updating State

In addition to retrieving the final output, you can also access and update the state within your Flow. The state can be used to store and share data between different methods in the Flow. After the Flow has run, you can access the state to retrieve any information that was added or updated during the execution.
@@ -160,7 +157,6 @@ Here's an example of how to update and access the state:

<CodeGroup>

```python Code
-import asyncio
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
@@ -181,21 +177,19 @@ class StateExampleFlow(Flow[ExampleState]):

        self.state.counter += 1
        return self.state.message

-async def main():
-    flow = StateExampleFlow()
-    final_output = await flow.kickoff()
-    print(f"Final Output: {final_output}")
-    print("Final State:")
-    print(flow.state)
-
-asyncio.run(main())
+flow = StateExampleFlow()
+final_output = flow.kickoff()
+print(f"Final Output: {final_output}")
+print("Final State:")
+print(flow.state)
```

-``` text Output
+```text Output
Final Output: Hello from first_method - updated by second_method
Final State:
counter=2 message='Hello from first_method - updated by second_method'
```
</CodeGroup>

In this example, the state is updated by both `first_method` and `second_method`.
@@ -215,8 +209,6 @@ In unstructured state management, all state is stored in the `state` attribute o

This approach offers flexibility, enabling developers to add or modify state attributes on the fly without defining a strict schema.

```python Code
-import asyncio
-
from crewai.flow.flow import Flow, listen, start

class UntructuredExampleFlow(Flow):
@@ -239,12 +231,8 @@ class UntructuredExampleFlow(Flow):

        print(f"State after third_method: {self.state}")

-async def main():
-    flow = UntructuredExampleFlow()
-    await flow.kickoff()
-
-asyncio.run(main())
+flow = UntructuredExampleFlow()
+flow.kickoff()
```
**Key Points:**
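The unstructured pattern above can be imitated with plain Python. This is a rough stand-in, not crewAI's implementation: state is just a namespace, so each step can attach arbitrary attributes without declaring a schema first.

```python
# Rough stand-in (not crewAI's code) for unstructured state: attributes
# appear on demand, with no schema declared up front.
from types import SimpleNamespace

class MiniUnstructuredFlow:
    def __init__(self):
        self.state = SimpleNamespace()  # no schema: fields appear on demand

    def first_method(self):
        self.state.counter = 1          # new attribute, no declaration needed
        self.state.message = "Hello"

    def second_method(self):
        self.state.counter += 1
        self.state.extra = "added on the fly"

flow = MiniUnstructuredFlow()
flow.first_method()
flow.second_method()
print(vars(flow.state))
```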
@@ -258,8 +246,6 @@ Structured state management leverages predefined schemas to ensure consistency a

By using models like Pydantic's `BaseModel`, developers can define the exact shape of the state, enabling better validation and auto-completion in development environments.

```python Code
-import asyncio
-
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
@@ -288,12 +274,8 @@ class StructuredExampleFlow(Flow[ExampleState]):

        print(f"State after third_method: {self.state}")

-async def main():
-    flow = StructuredExampleFlow()
-    await flow.kickoff()
-
-asyncio.run(main())
+flow = StructuredExampleFlow()
+flow.kickoff()
```
**Key Points:**
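The benefit of a declared schema can be shown without Pydantic. This stdlib sketch is an analogue, not crewAI's code: `__slots__` gives a similar "only declared fields exist" guarantee, so a typo that unstructured state would silently accept is rejected.

```python
# Stdlib analogue (not crewAI's code) of structured state: only declared
# fields may exist, so typos are caught instead of silently accepted.

class ExampleState:
    __slots__ = ("counter", "message")  # only these fields are allowed

    def __init__(self, counter=0, message=""):
        self.counter = counter
        self.message = message

state = ExampleState()
state.counter += 1            # fine: declared field
try:
    state.mesage = "typo"     # rejected: not part of the schema
except AttributeError as e:
    print("rejected:", e)
```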
@@ -326,7 +308,6 @@ The `or_` function in Flows allows you to listen to multiple methods and trigger

<CodeGroup>

```python Code
-import asyncio
from crewai.flow.flow import Flow, listen, or_, start

class OrExampleFlow(Flow):
@@ -344,15 +325,12 @@ class OrExampleFlow(Flow):

        print(f"Logger: {result}")

-async def main():
-    flow = OrExampleFlow()
-    await flow.kickoff()
-
-asyncio.run(main())
+flow = OrExampleFlow()
+flow.kickoff()
```

-``` text Output
+```text Output
Logger: Hello from the start method
Logger: Hello from the second method
```
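The `or_` trigger semantics shown in the output above can be sketched independently of crewAI. The classes below are hypothetical, not the library's internals: a listener wired with `or_` fires each time any one of its trigger methods emits, so it can fire more than once per run.

```python
# Hypothetical sketch of or_ semantics (not crewAI's internals): the
# listener fires whenever ANY of its triggers emits.

def or_(*triggers):
    return ("or", set(triggers))

class OrListener:
    def __init__(self, condition, callback):
        self.kind, self.triggers = condition
        self.callback = callback
        self.fired_with = []

    def emit(self, trigger, payload):
        # any single matching trigger is enough to fire the listener
        if self.kind == "or" and trigger in self.triggers:
            self.fired_with.append(payload)
            self.callback(payload)

logger = OrListener(or_("start_method", "second_method"),
                    lambda r: print(f"Logger: {r}"))
logger.emit("start_method", "Hello from the start method")
logger.emit("second_method", "Hello from the second method")
```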
@@ -369,7 +347,6 @@ The `and_` function in Flows allows you to listen to multiple methods and trigge

<CodeGroup>

```python Code
-import asyncio
from crewai.flow.flow import Flow, and_, listen, start

class AndExampleFlow(Flow):
@@ -387,16 +364,11 @@ class AndExampleFlow(Flow):

        print("---- Logger ----")
        print(self.state)

-async def main():
-    flow = AndExampleFlow()
-    await flow.kickoff()
-
-asyncio.run(main())
+flow = AndExampleFlow()
+flow.kickoff()
```

-``` text Output
+```text Output
---- Logger ----
{'greeting': 'Hello from the start method', 'joke': 'What do computers eat? Microchips.'}
```
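By contrast with `or_`, the `and_` semantics above fire only once every trigger has emitted. The class below is a hypothetical stand-in, not crewAI's internals:

```python
# Hypothetical sketch of and_ semantics (not crewAI's internals): the
# listener fires only once ALL of its triggers have emitted.

class AndListener:
    def __init__(self, triggers, callback):
        self.pending = set(triggers)   # triggers still being waited on
        self.seen = {}
        self.callback = callback

    def emit(self, trigger, payload):
        self.seen[trigger] = payload
        self.pending.discard(trigger)
        if not self.pending:           # every trigger has fired at least once
            self.callback(self.seen)

fired = []
listener = AndListener({"greeting", "joke"}, fired.append)
listener.emit("greeting", "Hello from the start method")
assert fired == []                     # only one of two triggers: not yet
listener.emit("joke", "What do computers eat? Microchips.")
print(fired[0])
```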
@@ -414,7 +386,6 @@ You can specify different routes based on the output of the method, allowing you

<CodeGroup>

```python Code
-import asyncio
import random
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel
@@ -446,15 +417,11 @@ class RouterFlow(Flow[ExampleState]):

        print("Fourth method running")

-async def main():
-    flow = RouterFlow()
-    await flow.kickoff()
-
-asyncio.run(main())
+flow = RouterFlow()
+flow.kickoff()
```

-``` text Output
+```text Output
Starting the structured flow
Third method running
Fourth method running
```
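The routing behavior in the output above boils down to: the router returns a label, and only the listener registered for that label runs next. The sketch below is hypothetical, not crewAI's internals:

```python
# Hypothetical sketch of router semantics (not crewAI's internals): a
# router method returns a route label, and only the handler registered
# for that label executes.

def route(state):
    # the router inspects state and returns a route label
    return "success" if state["success_flag"] else "failed"

handlers = {
    "success": lambda: "Third method running",
    "failed": lambda: "Fourth method running",
}

state = {"success_flag": True}
label = route(state)
print(handlers[label]())  # Third method running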
@@ -486,10 +453,10 @@ This command will generate a new CrewAI project with the necessary folder struct

After running the `crewai create flow name_of_flow` command, you will see a folder structure similar to the following:

| Directory/File | Description |
-|:---------------------------------|:------------------------------------------------------------------|
+| :--------------------- | :----------------------------------------------------------------- |
| `name_of_flow/` | Root directory for the flow. |
| ├── `crews/` | Contains directories for specific crews. |
-| │ └── `poem_crew/` | Directory for the "poem_crew" with its configurations and scripts.|
+| │ └── `poem_crew/` | Directory for the "poem_crew" with its configurations and scripts. |
| │ ├── `config/` | Configuration files directory for the "poem_crew". |
| │ │ ├── `agents.yaml` | YAML file defining the agents for "poem_crew". |
| │ │ └── `tasks.yaml` | YAML file defining the tasks for "poem_crew". |
@@ -501,7 +468,6 @@ After running the `crewai create flow name_of_flow` command, you will see a fold

| ├── `pyproject.toml` | Configuration file for project dependencies and settings. |
| └── `.gitignore` | Specifies files and directories to ignore in version control. |

### Building Your Crews

In the `crews` folder, you can define multiple crews. Each crew will have its own folder containing configuration files and the crew definition file. For example, the `poem_crew` folder contains:
@@ -520,7 +486,6 @@ Here's an example of how you can connect the `poem_crew` in the `main.py` file:

```python Code
#!/usr/bin/env python
-import asyncio
from random import randint

from pydantic import BaseModel
@@ -536,14 +501,12 @@ class PoemFlow(Flow[PoemState]):

    @start()
    def generate_sentence_count(self):
        print("Generating sentence count")
-        # Generate a number between 1 and 5
        self.state.sentence_count = randint(1, 5)

    @listen(generate_sentence_count)
    def generate_poem(self):
        print("Generating poem")
-        poem_crew = PoemCrew().crew()
-        result = poem_crew.kickoff(inputs={"sentence_count": self.state.sentence_count})
+        result = PoemCrew().crew().kickoff(inputs={"sentence_count": self.state.sentence_count})

        print("Poem generated", result.raw)
        self.state.poem = result.raw
@@ -554,18 +517,17 @@ class PoemFlow(Flow[PoemState]):

        with open("poem.txt", "w") as f:
            f.write(self.state.poem)

-async def run():
-    """
-    Run the flow.
-    """
+def kickoff():
    poem_flow = PoemFlow()
-    await poem_flow.kickoff()
+    poem_flow.kickoff()

-def main():
-    asyncio.run(run())
+def plot():
+    poem_flow = PoemFlow()
+    poem_flow.plot()

if __name__ == "__main__":
-    main()
+    kickoff()
```

In this example, the `PoemFlow` class defines a flow that generates a sentence count, uses the `PoemCrew` to generate a poem, and then saves the poem to a file. The flow is kicked off by calling the `kickoff()` method.
@@ -587,13 +549,13 @@ source .venv/bin/activate

After activating the virtual environment, you can run the flow by executing one of the following commands:

```bash
-crewai flow run
+crewai flow kickoff
```

or

```bash
-uv run run_flow
+uv run kickoff
```

The flow will execute, and you should see the output in the console.
@@ -62,6 +62,8 @@ os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"

2. Using LLM class attributes:

```python Code
+from crewai import LLM
+
llm = LLM(
    model="custom-model-name",
    api_key="your-api-key",
@@ -95,9 +97,11 @@ When configuring an LLM for your agent, you have access to a wide range of param

| **api_key** | `str` | Your API key for authentication. |

-Example:
+## OpenAI Example Configuration

```python Code
+from crewai import LLM
+
llm = LLM(
    model="gpt-4",
    temperature=0.8,
@@ -112,15 +116,31 @@ llm = LLM(

)
agent = Agent(llm=llm, ...)
```

+## Cerebras Example Configuration
+
+```python Code
+from crewai import LLM
+
+llm = LLM(
+    model="cerebras/llama-3.1-70b",
+    base_url="https://api.cerebras.ai/v1",
+    api_key="your-api-key-here"
+)
+agent = Agent(llm=llm, ...)
+```
+
## Using Ollama (Local LLMs)

-crewAI supports using Ollama for running open-source models locally:
+CrewAI supports using Ollama for running open-source models locally:

1. Install Ollama: [ollama.ai](https://ollama.ai/)
2. Run a model: `ollama run llama2`
3. Configure agent:

```python Code
+from crewai import LLM
+
agent = Agent(
    llm=LLM(model="ollama/llama3.1", base_url="http://localhost:11434"),
    ...
@@ -132,6 +152,8 @@ agent = Agent(

You can change the base API URL for any LLM provider by setting the `base_url` parameter:

```python Code
+from crewai import LLM
+
llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
@@ -34,7 +34,7 @@ By default, the memory system is disabled, and you can ensure it is active by se

The memory will use OpenAI embeddings by default, but you can change it by setting `embedder` to a different model.
It's also possible to initialize the memory instance with your own instance.

-The 'embedder' only applies to **Short-Term Memory** which uses Chroma for RAG using the EmbedChain package.
+The 'embedder' only applies to **Short-Term Memory** which uses Chroma for RAG.
The **Long-Term Memory** uses SQLite3 to store task results. Currently, there is no way to override these storage implementations.
The data storage files are saved into a platform-specific location found using the appdirs package,
and the name of the project can be overridden using the **CREWAI_STORAGE_DIR** environment variable.
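The lookup order described above (environment override first, platform-specific default second) can be sketched with the standard library. This mirrors the appdirs + `CREWAI_STORAGE_DIR` behavior, but it is an illustration, not crewAI's code; the fallback path shown is a POSIX-style assumption (appdirs computes the real per-OS location):

```python
# Sketch of env-overridable storage-dir resolution (not crewAI's code).
import os
import pathlib

def storage_dir(app_name="CrewAI"):
    override = os.environ.get("CREWAI_STORAGE_DIR")
    if override:
        return pathlib.Path(override)  # explicit override wins
    # assumed POSIX-style per-user data dir; appdirs does this per-OS
    return pathlib.Path.home() / ".local" / "share" / app_name

os.environ["CREWAI_STORAGE_DIR"] = "/tmp/crewai-data"
print(storage_dir())  # /tmp/crewai-data
```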
@@ -113,6 +113,42 @@ my_crew = Crew(

    }
)
```

+Alternatively, you can directly pass the OpenAIEmbeddingFunction to the embedder parameter.
+
+Example:
+```python Code
+from crewai import Crew, Agent, Task, Process
+from chromadb.utils.embedding_functions.openai_embedding_function import OpenAIEmbeddingFunction
+
+my_crew = Crew(
+    agents=[...],
+    tasks=[...],
+    process=Process.sequential,
+    memory=True,
+    verbose=True,
+    embedder=OpenAIEmbeddingFunction(api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"),
+)
+```
+
+### Using Ollama embeddings
+
+```python Code
+from crewai import Crew, Agent, Task, Process
+
+my_crew = Crew(
+    agents=[...],
+    tasks=[...],
+    process=Process.sequential,
+    memory=True,
+    verbose=True,
+    embedder={
+        "provider": "ollama",
+        "config": {
+            "model": "mxbai-embed-large"
+        }
+    }
+)
+```
+
### Using Google AI embeddings
@@ -128,9 +164,8 @@ my_crew = Crew(

    embedder={
        "provider": "google",
        "config": {
-            "model": 'models/embedding-001',
-            "task_type": "retrieval_document",
-            "title": "Embeddings for Embedchain"
+            "api_key": "<YOUR_API_KEY>",
+            "model_name": "<model_name>"
        }
    }
)
@@ -147,30 +182,13 @@ my_crew = Crew(

    process=Process.sequential,
    memory=True,
    verbose=True,
-    embedder={
-        "provider": "azure_openai",
-        "config": {
-            "model": 'text-embedding-ada-002',
-            "deployment_name": "your_embedding_model_deployment_name"
-        }
-    }
-)
-```
-
-### Using GPT4ALL embeddings
-
-```python Code
-from crewai import Crew, Agent, Task, Process
-
-my_crew = Crew(
-    agents=[...],
-    tasks=[...],
-    process=Process.sequential,
-    memory=True,
-    verbose=True,
-    embedder={
-        "provider": "gpt4all"
-    }
+    embedder=embedding_functions.OpenAIEmbeddingFunction(
+        api_key="YOUR_API_KEY",
+        api_base="YOUR_API_BASE_PATH",
+        api_type="azure",
+        api_version="YOUR_API_VERSION",
+        model_name="text-embedding-3-small"
+    )
)
```
|
|
||||||
@@ -185,12 +203,12 @@ my_crew = Crew(
|
|||||||
process=Process.sequential,
|
process=Process.sequential,
|
||||||
memory=True,
|
memory=True,
|
||||||
verbose=True,
|
verbose=True,
|
||||||
embedder={
|
embedder=embedding_functions.GoogleVertexEmbeddingFunction(
|
||||||
"provider": "vertexai",
|
project_id="YOUR_PROJECT_ID",
|
||||||
"config": {
|
region="YOUR_REGION",
|
||||||
"model": 'textembedding-gecko'
|
api_key="YOUR_API_KEY",
|
||||||
}
|
model_name="textembedding-gecko"
|
||||||
}
|
)
|
||||||
)
|
)
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -208,8 +226,27 @@ my_crew = Crew(

    embedder={
        "provider": "cohere",
        "config": {
-            "model": "embed-english-v3.0",
-            "vector_dimension": 1024
+            "api_key": "YOUR_API_KEY",
+            "model_name": "<model_name>"
+        }
+    }
+)
+```
+
+### Using HuggingFace embeddings
+
+```python Code
+from crewai import Crew, Agent, Task, Process
+
+my_crew = Crew(
+    agents=[...],
+    tasks=[...],
+    process=Process.sequential,
+    memory=True,
+    verbose=True,
+    embedder={
+        "provider": "huggingface",
+        "config": {
+            "api_url": "<api_url>",
        }
    }
)
@@ -11,10 +11,10 @@ icon: eye

This tool is used to extract text from images. When passed to the agent it will extract the text from the image and then use it to generate a response, report or any other output.
The URL or the PATH of the image should be passed to the Agent.

## Installation

Install the crewai_tools package

```shell
pip install 'crewai[tools]'
```
@@ -45,6 +45,5 @@ def researcher(self) -> Agent:

The VisionTool requires the following arguments:

| Argument | Type | Description |
-|:---------------|:---------|:-------------------------------------------------------------------------------------------------------------------------------------|
-| **image_path** | `string` | **Mandatory**. The path to the image file from which text needs to be extracted. |
+| :----------------- | :------- | :------------------------------------------------------------------------------- |
+| **image_path_url** | `string` | **Mandatory**. The path to the image file from which text needs to be extracted. |
@@ -1,6 +1,6 @@

[project]
name = "crewai"
-version = "0.70.1"
+version = "0.75.1"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<=3.13"
@@ -16,19 +16,18 @@ dependencies = [

    "opentelemetry-exporter-otlp-proto-http>=1.22.0",
    "instructor>=1.3.3",
    "regex>=2024.9.11",
-    "crewai-tools>=0.12.1",
+    "crewai-tools>=0.13.2",
    "click>=8.1.7",
    "python-dotenv>=1.0.0",
    "appdirs>=1.4.4",
    "jsonref>=1.1.0",
-    "agentops>=0.3.0",
-    "embedchain>=0.1.114",
    "json-repair>=0.25.2",
    "auth0-python>=4.7.1",
    "litellm>=1.44.22",
    "pyvis>=0.3.2",
-    "uv>=0.4.18",
+    "uv>=0.4.25",
    "tomli-w>=1.1.0",
+    "chromadb>=0.4.24",
]

[project.urls]
@@ -37,7 +36,7 @@ Documentation = "https://docs.crewai.com"

Repository = "https://github.com/crewAIInc/crewAI"

[project.optional-dependencies]
-tools = ["crewai-tools>=0.12.1"]
+tools = ["crewai-tools>=0.13.2"]
agentops = ["agentops>=0.3.0"]

[tool.uv]
@@ -52,7 +51,7 @@ dev-dependencies = [

    "mkdocs-material-extensions>=1.3.1",
    "pillow>=10.2.0",
    "cairosvg>=2.7.1",
-    "crewai-tools>=0.12.1",
+    "crewai-tools>=0.13.2",
    "pytest>=8.0.0",
    "pytest-vcr>=1.0.2",
    "python-dotenv>=1.0.0",
@@ -14,5 +14,5 @@ warnings.filterwarnings(

    category=UserWarning,
    module="pydantic.main",
)
-__version__ = "0.70.1"
+__version__ = "0.75.1"
__all__ = ["Agent", "Crew", "Process", "Task", "Pipeline", "Router", "LLM", "Flow"]
|
|||||||
@@ -394,7 +394,7 @@ class Agent(BaseAgent):
         """
         tool_strings = []
         for tool in tools:
-            args_schema = str(tool.args)
+            args_schema = str(tool.model_fields)
             if hasattr(tool, "func") and tool.func:
                 sig = signature(tool.func)
                 description = (
@@ -17,7 +17,7 @@ if TYPE_CHECKING:

 class CrewAgentExecutorMixin:
     crew: Optional["Crew"]
-    crew_agent: Optional["BaseAgent"]
+    agent: Optional["BaseAgent"]
     task: Optional["Task"]
     iterations: int
     have_forced_answer: bool
@@ -33,9 +33,9 @@ class CrewAgentExecutorMixin:
         """Create and save a short-term memory item if conditions are met."""
         if (
             self.crew
-            and self.crew_agent
+            and self.agent
             and self.task
-            and "Action: Delegate work to coworker" not in output.log
+            and "Action: Delegate work to coworker" not in output.text
         ):
             try:
                 if (
@@ -43,11 +43,11 @@ class CrewAgentExecutorMixin:
                     and self.crew._short_term_memory
                 ):
                     self.crew._short_term_memory.save(
-                        value=output.log,
+                        value=output.text,
                         metadata={
                             "observation": self.task.description,
                         },
-                        agent=self.crew_agent.role,
+                        agent=self.agent.role,
                     )
             except Exception as e:
                 print(f"Failed to add to short term memory: {e}")
@@ -61,18 +61,18 @@ class CrewAgentExecutorMixin:
             and self.crew._long_term_memory
             and self.crew._entity_memory
             and self.task
-            and self.crew_agent
+            and self.agent
         ):
             try:
-                ltm_agent = TaskEvaluator(self.crew_agent)
-                evaluation = ltm_agent.evaluate(self.task, output.log)
+                ltm_agent = TaskEvaluator(self.agent)
+                evaluation = ltm_agent.evaluate(self.task, output.text)

                 if isinstance(evaluation, ConverterError):
                     return

                 long_term_memory = LongTermMemoryItem(
                     task=self.task.description,
-                    agent=self.crew_agent.role,
+                    agent=self.agent.role,
                     quality=evaluation.quality,
                     datetime=str(time.time()),
                     expected_output=self.task.expected_output,
@@ -2,6 +2,7 @@ import json
 import re
 from typing import Any, Dict, List, Union

+from crewai.agents.agent_builder.base_agent import BaseAgent
 from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
 from crewai.agents.parser import (
     FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE,
@@ -29,7 +30,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         llm: Any,
         task: Any,
         crew: Any,
-        agent: Any,
+        agent: BaseAgent,
         prompt: dict[str, str],
         max_iter: int,
         tools: List[Any],
@@ -103,7 +104,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):

         if self.crew and self.crew._train:
             self._handle_crew_training_output(formatted_answer)
+        self._create_short_term_memory(formatted_answer)
+        self._create_long_term_memory(formatted_answer)
         return {"output": formatted_answer.output}

     def _invoke_loop(self, formatted_answer=None):
@@ -176,6 +178,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         return formatted_answer

     def _show_start_logs(self):
+        if self.agent is None:
+            raise ValueError("Agent cannot be None")
         if self.agent.verbose or (
             hasattr(self, "crew") and getattr(self.crew, "verbose", False)
         ):
@@ -188,6 +192,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         )

     def _show_logs(self, formatted_answer: Union[AgentAction, AgentFinish]):
+        if self.agent is None:
+            raise ValueError("Agent cannot be None")
         if self.agent.verbose or (
             hasattr(self, "crew") and getattr(self.crew, "verbose", False)
         ):
@@ -306,7 +312,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         self, result: AgentFinish, human_feedback: str | None = None
     ) -> None:
         """Function to handle the process of the training data."""
-        agent_id = str(self.agent.id)
+        agent_id = str(self.agent.id)  # type: ignore

         # Load training data
         training_handler = CrewTrainingHandler(TRAINING_DATA_FILE)
@@ -317,9 +323,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         if self.crew is not None and hasattr(self.crew, "_train_iteration"):
             train_iteration = self.crew._train_iteration
             if agent_id in training_data and isinstance(train_iteration, int):
-                training_data[agent_id][train_iteration]["improved_output"] = (
-                    result.output
-                )
+                training_data[agent_id][train_iteration][
+                    "improved_output"
+                ] = result.output
                 training_handler.save(training_data)
             else:
                 self._logger.log(
@@ -339,7 +345,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
                     "initial_output": result.output,
                     "human_feedback": human_feedback,
                     "agent": agent_id,
-                    "agent_role": self.agent.role,
+                    "agent_role": self.agent.role,  # type: ignore
                 }
                 if self.crew is not None and hasattr(self.crew, "_train_iteration"):
                     train_iteration = self.crew._train_iteration
@@ -14,11 +14,11 @@ from .authentication.main import AuthenticationCommand
 from .deploy.main import DeployCommand
 from .evaluate_crew import evaluate_crew
 from .install_crew import install_crew
+from .kickoff_flow import kickoff_flow
 from .plot_flow import plot_flow
 from .replay_from_task import replay_task_command
 from .reset_memories_command import reset_memories_command
 from .run_crew import run_crew
-from .run_flow import run_flow
 from .tools.main import ToolCommand
 from .train_crew import train_crew
 from .update_crew import update_crew
@@ -32,10 +32,11 @@ def crewai():
 @crewai.command()
 @click.argument("type", type=click.Choice(["crew", "pipeline", "flow"]))
 @click.argument("name")
-def create(type, name):
+@click.option("--provider", type=str, help="The provider to use for the crew")
+def create(type, name, provider):
     """Create a new crew, pipeline, or flow."""
     if type == "crew":
-        create_crew(name)
+        create_crew(name, provider)
     elif type == "pipeline":
         create_pipeline(name)
     elif type == "flow":
@@ -304,11 +305,11 @@ def flow():
     pass


-@flow.command(name="run")
+@flow.command(name="kickoff")
 def flow_run():
-    """Run the Flow."""
+    """Kickoff the Flow."""
     click.echo("Running the Flow")
-    run_flow()
+    kickoff_flow()


 @flow.command(name="plot")
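The diff above adds an optional `--provider` flag to `crewai create`. A minimal, standalone sketch of the same click pattern — the `create` command below is illustrative, not the actual CLI code, and it assumes `click` is installed:

```python
import click
from click.testing import CliRunner


@click.command()
@click.argument("type_", type=click.Choice(["crew", "pipeline", "flow"]))
@click.argument("name")
@click.option("--provider", type=str, help="The provider to use for the crew")
def create(type_, name, provider):
    # When --provider is omitted, the real CLI falls back to interactive
    # provider selection; here we just echo what click parsed.
    click.echo(f"type={type_} name={name} provider={provider}")


runner = CliRunner()
result = runner.invoke(create, ["crew", "my_crew", "--provider", "openai"])
print(result.output)  # → type=crew name=my_crew provider=openai
```

Because the option is declared with a default of `None`, existing invocations without `--provider` keep working unchanged.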
src/crewai/cli/constants.py (new file, 19 lines)
@@ -0,0 +1,19 @@
+ENV_VARS = {
+    'openai': ['OPENAI_API_KEY'],
+    'anthropic': ['ANTHROPIC_API_KEY'],
+    'gemini': ['GEMINI_API_KEY'],
+    'groq': ['GROQ_API_KEY'],
+    'ollama': ['FAKE_KEY'],
+}
+
+PROVIDERS = ['openai', 'anthropic', 'gemini', 'groq', 'ollama']
+
+MODELS = {
+    'openai': ['gpt-4', 'gpt-4o', 'gpt-4o-mini', 'o1-mini', 'o1-preview'],
+    'anthropic': ['claude-3-5-sonnet-20240620', 'claude-3-sonnet-20240229', 'claude-3-opus-20240229', 'claude-3-haiku-20240307'],
+    'gemini': ['gemini-1.5-flash', 'gemini-1.5-pro', 'gemini-gemma-2-9b-it', 'gemini-gemma-2-27b-it'],
+    'groq': ['llama-3.1-8b-instant', 'llama-3.1-70b-versatile', 'llama-3.1-405b-reasoning', 'gemma2-9b-it', 'gemma-7b-it'],
+    'ollama': ['llama3.1', 'mixtral'],
+}
+
+JSON_URL = "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json"
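The new `constants.py` maps each provider to the environment variable it needs. A small sketch of how such a mapping can resolve the API-key variable for a selected provider — the `resolve_api_key_var` helper is hypothetical, not part of the diff, and the fallback for unknown providers stands in for the interactive prompt the real CLI uses:

```python
# Provider-to-env-var mapping, copied from the new constants module.
ENV_VARS = {
    'openai': ['OPENAI_API_KEY'],
    'anthropic': ['ANTHROPIC_API_KEY'],
    'gemini': ['GEMINI_API_KEY'],
    'groq': ['GROQ_API_KEY'],
    'ollama': ['FAKE_KEY'],
}

PROVIDERS = ['openai', 'anthropic', 'gemini', 'groq', 'ollama']


def resolve_api_key_var(provider: str) -> str:
    # Known providers use their predefined variable; anything else
    # is derived here as a plain fallback (hypothetical convention).
    if provider in PROVIDERS:
        return ENV_VARS[provider][0]
    return f"{provider.upper()}_API_KEY"


print(resolve_api_key_var("anthropic"))  # → ANTHROPIC_API_KEY
print(resolve_api_key_var("mistral"))    # → MISTRAL_API_KEY
```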
@@ -1,12 +1,11 @@
 from pathlib import Path

 import click

-from crewai.cli.utils import copy_template
+from crewai.cli.utils import copy_template, load_env_vars, write_env_file
+from crewai.cli.provider import get_provider_data, select_provider, PROVIDERS
+from crewai.cli.constants import ENV_VARS


-def create_crew(name, parent_folder=None):
-    """Create a new crew."""
+def create_folder_structure(name, parent_folder=None):
     folder_name = name.replace(" ", "_").replace("-", "_").lower()
     class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "")

@@ -28,19 +27,85 @@ def create_crew(name, parent_folder=None):
         (folder_path / "src" / folder_name).mkdir(parents=True)
         (folder_path / "src" / folder_name / "tools").mkdir(parents=True)
         (folder_path / "src" / folder_name / "config").mkdir(parents=True)
-        with open(folder_path / ".env", "w") as file:
-            file.write("OPENAI_API_KEY=YOUR_API_KEY")
     else:
         click.secho(
-            f"\tFolder {folder_name} already exists. Please choose a different name.",
-            fg="red",
+            f"\tFolder {folder_name} already exists.",
+            fg="yellow",
         )

+    return folder_path, folder_name, class_name
+
+
+def copy_template_files(folder_path, name, class_name, parent_folder):
+    package_dir = Path(__file__).parent
+    templates_dir = package_dir / "templates" / "crew"
+
+    root_template_files = (
+        [".gitignore", "pyproject.toml", "README.md"] if not parent_folder else []
+    )
+    tools_template_files = ["tools/custom_tool.py", "tools/__init__.py"]
+    config_template_files = ["config/agents.yaml", "config/tasks.yaml"]
+    src_template_files = (
+        ["__init__.py", "main.py", "crew.py"] if not parent_folder else ["crew.py"]
+    )
+
+    for file_name in root_template_files:
+        src_file = templates_dir / file_name
+        dst_file = folder_path / file_name
+        copy_template(src_file, dst_file, name, class_name, folder_path.name)
+
+    src_folder = (
+        folder_path / "src" / folder_path.name if not parent_folder else folder_path
+    )
+
+    for file_name in src_template_files:
+        src_file = templates_dir / file_name
+        dst_file = src_folder / file_name
+        copy_template(src_file, dst_file, name, class_name, folder_path.name)
+
+    if not parent_folder:
+        for file_name in tools_template_files + config_template_files:
+            src_file = templates_dir / file_name
+            dst_file = src_folder / file_name
+            copy_template(src_file, dst_file, name, class_name, folder_path.name)
+
+
+def create_crew(name, provider=None, parent_folder=None):
+    folder_path, folder_name, class_name = create_folder_structure(name, parent_folder)
+    env_vars = load_env_vars(folder_path)
+
+    if not provider:
+        provider_models = get_provider_data()
+        if not provider_models:
             return
+
+        selected_provider = select_provider(provider_models)
+        if not selected_provider:
+            return
+        provider = selected_provider
+
+        # selected_model = select_model(provider, provider_models)
+        # if not selected_model:
+        #     return
+        # model = selected_model
+
+    if provider in PROVIDERS:
+        api_key_var = ENV_VARS[provider][0]
+    else:
+        api_key_var = click.prompt(
+            f"Enter the environment variable name for your {provider.capitalize()} API key",
+            type=str,
+        )
+
+    env_vars = {api_key_var: "YOUR_API_KEY_HERE"}
+    write_env_file(folder_path, env_vars)
+
+    # env_vars['MODEL'] = model
+    # click.secho(f"Selected model: {model}", fg="green")
+
     package_dir = Path(__file__).parent
     templates_dir = package_dir / "templates" / "crew"

-    # List of template files to copy
     root_template_files = (
         [".gitignore", "pyproject.toml", "README.md"] if not parent_folder else []
     )
@@ -3,11 +3,11 @@ import subprocess
 import click


-def run_flow() -> None:
+def kickoff_flow() -> None:
     """
-    Run the flow by running a command in the UV environment.
+    Kickoff the flow by running a command in the UV environment.
     """
-    command = ["uv", "run", "run_flow"]
+    command = ["uv", "run", "kickoff"]

     try:
         result = subprocess.run(command, capture_output=False, text=True, check=True)
src/crewai/cli/provider.py (new file, 186 lines)
@@ -0,0 +1,186 @@
+import json
+import time
+import requests
+from collections import defaultdict
+import click
+from pathlib import Path
+from crewai.cli.constants import PROVIDERS, MODELS, JSON_URL
+
+
+def select_choice(prompt_message, choices):
+    """
+    Presents a list of choices to the user and prompts them to select one.
+
+    Args:
+    - prompt_message (str): The message to display to the user before presenting the choices.
+    - choices (list): A list of options to present to the user.
+
+    Returns:
+    - str: The selected choice from the list, or None if the operation is aborted or an invalid selection is made.
+    """
+    click.secho(prompt_message, fg="cyan")
+    for idx, choice in enumerate(choices, start=1):
+        click.secho(f"{idx}. {choice}", fg="cyan")
+    try:
+        selected_index = click.prompt("Enter the number of your choice", type=int) - 1
+    except click.exceptions.Abort:
+        click.secho("Operation aborted by the user.", fg="red")
+        return None
+    if not (0 <= selected_index < len(choices)):
+        click.secho("Invalid selection.", fg="red")
+        return None
+    return choices[selected_index]
+
+
+def select_provider(provider_models):
+    """
+    Presents a list of providers to the user and prompts them to select one.
+
+    Args:
+    - provider_models (dict): A dictionary of provider models.
+
+    Returns:
+    - str: The selected provider, or None if the operation is aborted or an invalid selection is made.
+    """
+    predefined_providers = [p.lower() for p in PROVIDERS]
+    all_providers = sorted(set(predefined_providers + list(provider_models.keys())))
+
+    provider = select_choice("Select a provider to set up:", predefined_providers + ['other'])
+    if not provider:
+        return None
+    provider = provider.lower()
+
+    if provider == 'other':
+        provider = select_choice("Select a provider from the full list:", all_providers)
+        if not provider:
+            return None
+    return provider
+
+
+def select_model(provider, provider_models):
+    """
+    Presents a list of models for a given provider to the user and prompts them to select one.
+
+    Args:
+    - provider (str): The provider for which to select a model.
+    - provider_models (dict): A dictionary of provider models.
+
+    Returns:
+    - str: The selected model, or None if the operation is aborted or an invalid selection is made.
+    """
+    predefined_providers = [p.lower() for p in PROVIDERS]
+
+    if provider in predefined_providers:
+        available_models = MODELS.get(provider, [])
+    else:
+        available_models = provider_models.get(provider, [])
+
+    if not available_models:
+        click.secho(f"No models available for provider '{provider}'.", fg="red")
+        return None
+
+    selected_model = select_choice(f"Select a model to use for {provider.capitalize()}:", available_models)
+    return selected_model
+
+
+def load_provider_data(cache_file, cache_expiry):
+    """
+    Loads provider data from a cache file if it exists and is not expired. If the cache is expired or corrupted, it fetches the data from the web.
+
+    Args:
+    - cache_file (Path): The path to the cache file.
+    - cache_expiry (int): The cache expiry time in seconds.
+
+    Returns:
+    - dict or None: The loaded provider data or None if the operation fails.
+    """
+    current_time = time.time()
+    if cache_file.exists() and (current_time - cache_file.stat().st_mtime) < cache_expiry:
+        data = read_cache_file(cache_file)
+        if data:
+            return data
+        click.secho("Cache is corrupted. Fetching provider data from the web...", fg="yellow")
+    else:
+        click.secho("Cache expired or not found. Fetching provider data from the web...", fg="cyan")
+    return fetch_provider_data(cache_file)
+
+
+def read_cache_file(cache_file):
+    """
+    Reads and returns the JSON content from a cache file. Returns None if the file contains invalid JSON.
+
+    Args:
+    - cache_file (Path): The path to the cache file.
+
+    Returns:
+    - dict or None: The JSON content of the cache file or None if the JSON is invalid.
+    """
+    try:
+        with open(cache_file, "r") as f:
+            return json.load(f)
+    except json.JSONDecodeError:
+        return None
+
+
+def fetch_provider_data(cache_file):
+    """
+    Fetches provider data from a specified URL and caches it to a file.
+
+    Args:
+    - cache_file (Path): The path to the cache file.
+
+    Returns:
+    - dict or None: The fetched provider data or None if the operation fails.
+    """
+    try:
+        response = requests.get(JSON_URL, stream=True, timeout=10)
+        response.raise_for_status()
+        data = download_data(response)
+        with open(cache_file, "w") as f:
+            json.dump(data, f)
+        return data
+    except requests.RequestException as e:
+        click.secho(f"Error fetching provider data: {e}", fg="red")
+    except json.JSONDecodeError:
+        click.secho("Error parsing provider data. Invalid JSON format.", fg="red")
+    return None
+
+
+def download_data(response):
+    """
+    Downloads data from a given HTTP response and returns the JSON content.
+
+    Args:
+    - response (requests.Response): The HTTP response object.
+
+    Returns:
+    - dict: The JSON content of the response.
+    """
+    total_size = int(response.headers.get('content-length', 0))
+    block_size = 8192
+    data_chunks = []
+    with click.progressbar(length=total_size, label='Downloading', show_pos=True) as progress_bar:
+        for chunk in response.iter_content(block_size):
+            if chunk:
+                data_chunks.append(chunk)
+                progress_bar.update(len(chunk))
+    data_content = b''.join(data_chunks)
+    return json.loads(data_content.decode('utf-8'))
+
+
+def get_provider_data():
+    """
+    Retrieves provider data from a cache file, filters out models based on provider criteria, and returns a dictionary of providers mapped to their models.
+
+    Returns:
+    - dict or None: A dictionary of providers mapped to their models or None if the operation fails.
+    """
+    cache_dir = Path.home() / '.crewai'
+    cache_dir.mkdir(exist_ok=True)
+    cache_file = cache_dir / 'provider_cache.json'
+    cache_expiry = 24 * 3600
+
+    data = load_provider_data(cache_file, cache_expiry)
+    if not data:
+        return None
+
+    provider_models = defaultdict(list)
+    for model_name, properties in data.items():
+        provider = properties.get("litellm_provider", "").strip().lower()
+        if 'http' in provider or provider == 'other':
+            continue
+        if provider:
+            provider_models[provider].append(model_name)
+    return provider_models
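`load_provider_data` above refreshes its cache whenever the file is missing or older than the expiry window (24 hours in `get_provider_data`). The staleness check can be sketched in isolation like this; the file name and helper are illustrative, not the module's actual API:

```python
import json
import tempfile
import time
from pathlib import Path

CACHE_EXPIRY = 24 * 3600  # seconds, matching get_provider_data


def cache_is_fresh(cache_file: Path, expiry: int = CACHE_EXPIRY) -> bool:
    # Fresh means: the file exists and its mtime falls inside the expiry window.
    return cache_file.exists() and (time.time() - cache_file.stat().st_mtime) < expiry


with tempfile.TemporaryDirectory() as tmp:
    cache_file = Path(tmp) / "provider_cache.json"
    print(cache_is_fresh(cache_file))  # missing file → False
    cache_file.write_text(json.dumps({"gpt-4": {"litellm_provider": "openai"}}))
    print(cache_is_fresh(cache_file))  # just written → True
```

A corrupted-but-fresh cache is handled separately in the real code: `read_cache_file` returns `None` on invalid JSON, which triggers a re-fetch.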
@@ -3,7 +3,7 @@ import sys
 from {{folder_name}}.crew import {{crew_name}}Crew

 # This main file is intended to be a way for you to run your
-# crew locally, so refrain from adding necessary logic into this file.
+# crew locally, so refrain from adding unnecessary logic into this file.
 # Replace with inputs you want to test with, it will automatically
 # interpolate any tasks and agents information

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.67.1,<1.0.0"
+    "crewai[tools]>=0.75.1,<1.0.0"
 ]

 [project.scripts]
@@ -1,65 +1,53 @@
 #!/usr/bin/env python
-import asyncio
 from random import randint

 from pydantic import BaseModel

 from crewai.flow.flow import Flow, listen, start

 from .crews.poem_crew.poem_crew import PoemCrew


 class PoemState(BaseModel):
     sentence_count: int = 1
     poem: str = ""


 class PoemFlow(Flow[PoemState]):

     @start()
     def generate_sentence_count(self):
         print("Generating sentence count")
-        # Generate a number between 1 and 5
         self.state.sentence_count = randint(1, 5)

     @listen(generate_sentence_count)
     def generate_poem(self):
         print("Generating poem")
-        print(f"State before poem: {self.state}")
-        result = PoemCrew().crew().kickoff(inputs={"sentence_count": self.state.sentence_count})
+        result = (
+            PoemCrew()
+            .crew()
+            .kickoff(inputs={"sentence_count": self.state.sentence_count})
+        )

         print("Poem generated", result.raw)
         self.state.poem = result.raw

-        print(f"State after generate_poem: {self.state}")

     @listen(generate_poem)
     def save_poem(self):
         print("Saving poem")
-        print(f"State before save_poem: {self.state}")
         with open("poem.txt", "w") as f:
             f.write(self.state.poem)
-        print(f"State after save_poem: {self.state}")


-async def run_flow():
-    """
-    Run the flow.
-    """
+def kickoff():
     poem_flow = PoemFlow()
-    await poem_flow.kickoff()
+    poem_flow.kickoff()


-async def plot_flow():
-    """
-    Plot the flow.
-    """
+def plot():
     poem_flow = PoemFlow()
     poem_flow.plot()


-def main():
-    asyncio.run(run_flow())
-
-
-def plot():
-    asyncio.run(plot_flow())
-
-
 if __name__ == "__main__":
-    main()
+    kickoff()
@@ -5,14 +5,12 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.67.1,<1.0.0",
-    "asyncio"
+    "crewai[tools]>=0.75.1,<1.0.0",
 ]

 [project.scripts]
-{{folder_name}} = "{{folder_name}}.main:main"
-run_flow = "{{folder_name}}.main:main"
-plot_flow = "{{folder_name}}.main:plot"
+kickoff = "{{folder_name}}.main:kickoff"
+plot = "{{folder_name}}.main:plot"

 [build-system]
 requires = ["hatchling"]
@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]

 [tool.poetry.dependencies]
 python = ">=3.10,<=3.13"
-crewai = { extras = ["tools"], version = ">=0.70.1,<1.0.0" }
+crewai = { extras = ["tools"], version = ">=0.75.1,<1.0.0" }
 asyncio = "*"

 [tool.poetry.scripts]
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = ["Your Name <you@example.com>"]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.67.1,<1.0.0"
+    "crewai[tools]>=0.75.1,<1.0.0"
 ]

 [project.scripts]
@@ -5,6 +5,6 @@ description = "Power up your crews with {{folder_name}}"
 readme = "README.md"
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.70.1"
+    "crewai[tools]>=0.75.1"
 ]

@@ -28,8 +28,6 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
     A class to handle tool repository related operations for CrewAI projects.
     """

-    BASE_URL = "https://app.crewai.com/pypi/"
-
     def __init__(self):
         BaseCommand.__init__(self)
         PlusAPIMixin.__init__(self, telemetry=self._telemetry)
@@ -178,12 +176,14 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
     def _add_package(self, tool_details):
         tool_handle = tool_details["handle"]
         repository_handle = tool_details["repository"]["handle"]
+        repository_url = tool_details["repository"]["url"]
+        index = f"{repository_handle}={repository_url}"

         add_package_command = [
             "uv",
             "add",
-            "--extra-index-url",
-            self.BASE_URL + repository_handle,
+            "--index",
+            index,
             tool_handle,
         ]
         add_package_result = subprocess.run(
@@ -1,3 +1,4 @@
+import os
 import shutil
 
 import tomli_w
@@ -94,6 +95,15 @@ def migrate_pyproject(input_file, output_file):
     shutil.copy2(input_file, backup_file)
     print(f"Original pyproject.toml backed up as {backup_file}")
 
+    # Rename the poetry.lock file
+    lock_file = "poetry.lock"
+    lock_backup = "poetry-old.lock"
+    if os.path.exists(lock_file):
+        os.rename(lock_file, lock_backup)
+        print(f"Original poetry.lock renamed to {lock_backup}")
+    else:
+        print("No poetry.lock file found to rename.")
+
     # Write the new pyproject.toml
     with open(output_file, "wb") as f:
         tomli_w.dump(new_pyproject, f)
@@ -9,6 +9,7 @@ import click
 from rich.console import Console
 
 from crewai.cli.authentication.utils import TokenManager
+from crewai.cli.constants import ENV_VARS
 
 if sys.version_info >= (3, 11):
     import tomllib
@@ -200,3 +201,76 @@ def tree_find_and_replace(directory, find, replace):
             new_dirpath = os.path.join(path, new_dirname)
             old_dirpath = os.path.join(path, dirname)
             os.rename(old_dirpath, new_dirpath)
+
+
+def load_env_vars(folder_path):
+    """
+    Loads environment variables from a .env file in the specified folder path.
+
+    Args:
+    - folder_path (Path): The path to the folder containing the .env file.
+
+    Returns:
+    - dict: A dictionary of environment variables.
+    """
+    env_file_path = folder_path / ".env"
+    env_vars = {}
+    if env_file_path.exists():
+        with open(env_file_path, "r") as file:
+            for line in file:
+                key, _, value = line.strip().partition("=")
+                if key and value:
+                    env_vars[key] = value
+    return env_vars
+
+
+def update_env_vars(env_vars, provider, model):
+    """
+    Updates environment variables with the API key for the selected provider and model.
+
+    Args:
+    - env_vars (dict): Environment variables dictionary.
+    - provider (str): Selected provider.
+    - model (str): Selected model.
+
+    Returns:
+    - None
+    """
+    api_key_var = ENV_VARS.get(
+        provider,
+        [
+            click.prompt(
+                f"Enter the environment variable name for your {provider.capitalize()} API key",
+                type=str,
+            )
+        ],
+    )[0]
+
+    if api_key_var not in env_vars:
+        try:
+            env_vars[api_key_var] = click.prompt(
+                f"Enter your {provider.capitalize()} API key", type=str, hide_input=True
+            )
+        except click.exceptions.Abort:
+            click.secho("Operation aborted by the user.", fg="red")
+            return None
+    else:
+        click.secho(f"API key already exists for {provider.capitalize()}.", fg="yellow")
+
+    env_vars["MODEL"] = model
+    click.secho(f"Selected model: {model}", fg="green")
+    return env_vars
+
+
+def write_env_file(folder_path, env_vars):
+    """
+    Writes environment variables to a .env file in the specified folder.
+
+    Args:
+    - folder_path (Path): The path to the folder where the .env file will be written.
+    - env_vars (dict): A dictionary of environment variables to write.
+    """
+    env_file_path = folder_path / ".env"
+    with open(env_file_path, "w") as file:
+        for key, value in env_vars.items():
+            file.write(f"{key}={value}\n")
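The new CLI helpers above read and write `.env` files with a simple `partition("=")` loop, one `KEY=VALUE` pair per line. A minimal standalone sketch of that round-trip (the helper bodies mirror the diff; the temp-folder usage is illustrative):

```python
import tempfile
from pathlib import Path


def load_env_vars(folder_path: Path) -> dict:
    # Mirrors the loader in the diff: split each line on the first "=".
    env_vars = {}
    env_file = folder_path / ".env"
    if env_file.exists():
        for line in env_file.read_text().splitlines():
            key, _, value = line.strip().partition("=")
            if key and value:
                env_vars[key] = value
    return env_vars


def write_env_file(folder_path: Path, env_vars: dict) -> None:
    # One KEY=VALUE pair per line, as the diff writes it.
    env_file = folder_path / ".env"
    env_file.write_text("".join(f"{k}={v}\n" for k, v in env_vars.items()))


folder = Path(tempfile.mkdtemp())
write_env_file(folder, {"MODEL": "gpt-4o", "OPENAI_API_KEY": "sk-test"})
print(load_env_vars(folder))  # round-trips the same dict
```

Note that `partition("=")` splits only on the first `=`, so values containing `=` survive the round-trip.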
@@ -126,8 +126,8 @@ class Crew(BaseModel):
         default=None,
         description="An Instance of the EntityMemory to be used by the Crew",
     )
-    embedder: Optional[dict] = Field(
-        default={"provider": "openai"},
+    embedder: Optional[Any] = Field(
+        default=None,
         description="Configuration for the embedder to be used for the crew.",
     )
     usage_metrics: Optional[UsageMetrics] = Field(
@@ -435,15 +435,16 @@ class Crew(BaseModel):
         self, n_iterations: int, filename: str, inputs: Optional[Dict[str, Any]] = {}
     ) -> None:
         """Trains the crew for a given number of iterations."""
-        self._setup_for_training(filename)
+        train_crew = self.copy()
+        train_crew._setup_for_training(filename)
 
         for n_iteration in range(n_iterations):
-            self._train_iteration = n_iteration
-            self.kickoff(inputs=inputs)
+            train_crew._train_iteration = n_iteration
+            train_crew.kickoff(inputs=inputs)
 
         training_data = CrewTrainingHandler(TRAINING_DATA_FILE).load()
 
-        for agent in self.agents:
+        for agent in train_crew.agents:
             result = TaskEvaluator(agent).evaluate_training_data(
                 training_data=training_data, agent_id=str(agent.id)
             )
@@ -774,7 +775,9 @@ class Crew(BaseModel):
 
     def _log_task_start(self, task: Task, role: str = "None"):
         if self.output_log_file:
-            self._file_handler.log(task_name=task.name, task=task.description, agent=role, status="started")
+            self._file_handler.log(
+                task_name=task.name, task=task.description, agent=role, status="started"
+            )
 
     def _update_manager_tools(self, task: Task):
         if self.manager_agent:
@@ -796,7 +799,13 @@ class Crew(BaseModel):
     def _process_task_result(self, task: Task, output: TaskOutput) -> None:
         role = task.agent.role if task.agent is not None else "None"
         if self.output_log_file:
-            self._file_handler.log(task_name=task.name, task=task.description, agent=role, status="completed", output=output.raw)
+            self._file_handler.log(
+                task_name=task.name,
+                task=task.description,
+                agent=role,
+                status="completed",
+                output=output.raw,
+            )
 
     def _create_crew_output(self, task_outputs: List[TaskOutput]) -> CrewOutput:
         if len(task_outputs) != 1:
@@ -979,17 +988,19 @@ class Crew(BaseModel):
         inputs: Optional[Dict[str, Any]] = None,
     ) -> None:
         """Test and evaluate the Crew with the given inputs for n iterations concurrently using concurrent.futures."""
-        self._test_execution_span = self._telemetry.test_execution_span(
-            self,
+        test_crew = self.copy()
+
+        self._test_execution_span = test_crew._telemetry.test_execution_span(
+            test_crew,
             n_iterations,
             inputs,
             openai_model_name,  # type: ignore[arg-type]
         )  # type: ignore[arg-type]
-        evaluator = CrewEvaluator(self, openai_model_name)  # type: ignore[arg-type]
+        evaluator = CrewEvaluator(test_crew, openai_model_name)  # type: ignore[arg-type]
 
         for i in range(1, n_iterations + 1):
             evaluator.set_iteration(i)
-            self.kickoff(inputs=inputs)
+            test_crew.kickoff(inputs=inputs)
 
         evaluator.print_crew_evaluation_result()
 
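Both `train()` and `test()` now run against `self.copy()` so that iteration state set during training or evaluation never mutates the caller's crew. The isolation pattern, reduced to a toy object (the class and attribute names here are illustrative, not the real `Crew` API):

```python
import copy


class Crew:
    def __init__(self):
        self.train_iteration = None

    def copy(self):
        # A shallow copy suffices for this toy; the real Crew has its own copy().
        return copy.copy(self)

    def train(self, n_iterations: int):
        train_crew = self.copy()  # mutate the copy, never self
        for i in range(n_iterations):
            train_crew.train_iteration = i
        return train_crew


crew = Crew()
trained = crew.train(3)
print(crew.train_iteration, trained.train_iteration)  # None 2
```

The original object is left untouched, so a crew can be trained or tested and then kicked off normally without leftover state.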
@@ -190,7 +190,10 @@ class Flow(Generic[T], metaclass=FlowMeta):
         """Returns the list of all outputs from executed methods."""
         return self._method_outputs
 
-    async def kickoff(self) -> Any:
+    def kickoff(self) -> Any:
+        return asyncio.run(self.kickoff_async())
+
+    async def kickoff_async(self) -> Any:
         if not self._start_methods:
             raise ValueError("No start method defined")
 
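The `kickoff` change above keeps the async implementation (renamed `kickoff_async`) but gives callers a synchronous entry point via `asyncio.run`, which is what lets the docs example in this release drop its `async def main()` wrapper. A minimal sketch of the same facade pattern (the `Flow` body here is a stand-in, not the real class):

```python
import asyncio


class Flow:
    def kickoff(self):
        # Synchronous facade: drives the async implementation to completion.
        return asyncio.run(self.kickoff_async())

    async def kickoff_async(self):
        await asyncio.sleep(0)  # stand-in for the real event-driven execution
        return "done"


result = Flow().kickoff()  # caller manages no event loop
print(result)  # done
```

Callers already inside an event loop would await `kickoff_async()` directly, since `asyncio.run` refuses to start inside a running loop.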
@@ -31,7 +31,9 @@ class ContextualMemory:
         formatted as bullet points.
         """
         stm_results = self.stm.search(query)
-        formatted_results = "\n".join([f"- {result}" for result in stm_results])
+        formatted_results = "\n".join(
+            [f"- {result['context']}" for result in stm_results]
+        )
         return f"Recent Insights:\n{formatted_results}" if stm_results else ""
 
     def _fetch_ltm_context(self, task) -> Optional[str]:
@@ -16,7 +16,7 @@ class EntityMemory(Memory):
             if storage
             else RAGStorage(
                 type="entities",
-                allow_reset=False,
+                allow_reset=True,
                 embedder_config=embedder_config,
                 crew=crew,
             )
@@ -1,4 +1,4 @@
-from typing import Any, Dict
+from typing import Any, Dict, List
 
 from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
 from crewai.memory.memory import Memory
@@ -28,7 +28,7 @@ class LongTermMemory(Memory):
             datetime=item.datetime,
         )
 
-    def search(self, task: str, latest_n: int = 3) -> Dict[str, Any]:
+    def search(self, task: str, latest_n: int = 3) -> List[Dict[str, Any]]:  # type: ignore # signature of "search" incompatible with supertype "Memory"
        return self.storage.load(task, latest_n)  # type: ignore # BUG?: "Storage" has no attribute "load"
 
     def reset(self) -> None:
@@ -1,6 +1,6 @@
-from typing import Any, Dict, Optional
+from typing import Any, Dict, Optional, List
 
-from crewai.memory.storage.interface import Storage
+from crewai.memory.storage.rag_storage import RAGStorage
 
 
 class Memory:
@@ -8,7 +8,7 @@ class Memory:
     Base class for memory, now supporting agent tags and generic metadata.
     """
 
-    def __init__(self, storage: Storage):
+    def __init__(self, storage: RAGStorage):
         self.storage = storage
 
     def save(
@@ -23,5 +23,5 @@ class Memory:
 
         self.storage.save(value, metadata)
 
-    def search(self, query: str) -> Dict[str, Any]:
+    def search(self, query: str) -> List[Dict[str, Any]]:
         return self.storage.search(query)
src/crewai/memory/storage/base_rag_storage.py (new file, 76 lines)
@@ -0,0 +1,76 @@
+from abc import ABC, abstractmethod
+from typing import Any, Dict, List, Optional
+
+
+class BaseRAGStorage(ABC):
+    """
+    Base class for RAG-based Storage implementations.
+    """
+
+    app: Any | None = None
+
+    def __init__(
+        self,
+        type: str,
+        allow_reset: bool = True,
+        embedder_config: Optional[Any] = None,
+        crew: Any = None,
+    ):
+        self.type = type
+        self.allow_reset = allow_reset
+        self.embedder_config = embedder_config
+        self.crew = crew
+        self.agents = self._initialize_agents()
+
+    def _initialize_agents(self) -> str:
+        if self.crew:
+            return "_".join(
+                [self._sanitize_role(agent.role) for agent in self.crew.agents]
+            )
+        return ""
+
+    @abstractmethod
+    def _sanitize_role(self, role: str) -> str:
+        """Sanitizes agent roles to ensure valid directory names."""
+        pass
+
+    @abstractmethod
+    def save(self, value: Any, metadata: Dict[str, Any]) -> None:
+        """Save a value with metadata to the storage."""
+        pass
+
+    @abstractmethod
+    def search(
+        self,
+        query: str,
+        limit: int = 3,
+        filter: Optional[dict] = None,
+        score_threshold: float = 0.35,
+    ) -> List[Any]:
+        """Search for entries in the storage."""
+        pass
+
+    @abstractmethod
+    def reset(self) -> None:
+        """Reset the storage."""
+        pass
+
+    @abstractmethod
+    def _generate_embedding(
+        self, text: str, metadata: Optional[Dict[str, Any]] = None
+    ) -> Any:
+        """Generate an embedding for the given text and metadata."""
+        pass
+
+    @abstractmethod
+    def _initialize_app(self):
+        """Initialize the vector db."""
+        pass
+
+    def setup_config(self, config: Dict[str, Any]):
+        """Setup the config of the storage."""
+        pass
+
+    def initialize_client(self):
+        """Initialize the client of the storage. This should setup the app and the db collection"""
+        pass
@@ -1,4 +1,4 @@
-from typing import Any, Dict
+from typing import Any, Dict, List
 
 
 class Storage:
@@ -7,7 +7,7 @@ class Storage:
     def save(self, value: Any, metadata: Dict[str, Any]) -> None:
         pass
 
-    def search(self, key: str) -> Dict[str, Any]:  # type: ignore
+    def search(self, key: str) -> List[Dict[str, Any]]:  # type: ignore
         pass
 
     def reset(self) -> None:
@@ -3,10 +3,14 @@ import io
 import logging
 import os
 import shutil
+import uuid
 from typing import Any, Dict, List, Optional
 
-from crewai.memory.storage.interface import Storage
+from crewai.memory.storage.base_rag_storage import BaseRAGStorage
 from crewai.utilities.paths import db_storage_path
+from chromadb.api import ClientAPI
+from chromadb.api.types import validate_embedding_function
+from chromadb import Documents, EmbeddingFunction, Embeddings
+from typing import cast
 
 
 @contextlib.contextmanager
@@ -24,61 +28,119 @@ def suppress_logging(
     logger.setLevel(original_level)
 
 
-class RAGStorage(Storage):
+class RAGStorage(BaseRAGStorage):
     """
     Extends Storage to handle embeddings for memory entries, improving
     search efficiency.
     """
 
-    def __init__(self, type, allow_reset=True, embedder_config=None, crew=None):
-        super().__init__()
-        if (
-            not os.getenv("OPENAI_API_KEY")
-            and not os.getenv("OPENAI_BASE_URL") == "https://api.openai.com/v1"
-        ):
-            os.environ["OPENAI_API_KEY"] = "fake"
+    app: ClientAPI | None = None
 
+    def __init__(self, type, allow_reset=True, embedder_config=None, crew=None):
+        super().__init__(type, allow_reset, embedder_config, crew)
         agents = crew.agents if crew else []
         agents = [self._sanitize_role(agent.role) for agent in agents]
         agents = "_".join(agents)
+        self.agents = agents
 
-        config = {
-            "app": {
-                "config": {"name": type, "collect_metrics": False, "log_level": "ERROR"}
-            },
-            "chunker": {
-                "chunk_size": 5000,
-                "chunk_overlap": 100,
-                "length_function": "len",
-                "min_chunk_size": 150,
-            },
-            "vectordb": {
-                "provider": "chroma",
-                "config": {
-                    "collection_name": type,
-                    "dir": f"{db_storage_path()}/{type}/{agents}",
-                    "allow_reset": allow_reset,
-                },
-            },
-        }
-
-        if embedder_config:
-            config["embedder"] = embedder_config
         self.type = type
-        self.config = config
         self.allow_reset = allow_reset
+        self._initialize_app()
+
+    def _set_embedder_config(self):
+        import chromadb.utils.embedding_functions as embedding_functions
+
+        if self.embedder_config is None:
+            self.embedder_config = self._create_default_embedding_function()
+
+        if isinstance(self.embedder_config, dict):
+            provider = self.embedder_config.get("provider")
+            config = self.embedder_config.get("config", {})
+            model_name = config.get("model")
+            if provider == "openai":
+                self.embedder_config = embedding_functions.OpenAIEmbeddingFunction(
+                    api_key=config.get("api_key") or os.getenv("OPENAI_API_KEY"),
+                    model_name=model_name,
+                )
+            elif provider == "azure":
+                self.embedder_config = embedding_functions.OpenAIEmbeddingFunction(
+                    api_key=config.get("api_key"),
+                    api_base=config.get("api_base"),
+                    api_type=config.get("api_type", "azure"),
+                    api_version=config.get("api_version"),
+                    model_name=model_name,
+                )
+            elif provider == "ollama":
+                from openai import OpenAI
+
+                class OllamaEmbeddingFunction(EmbeddingFunction):
+                    def __call__(self, input: Documents) -> Embeddings:
+                        client = OpenAI(
+                            base_url="http://localhost:11434/v1",
+                            api_key=config.get("api_key", "ollama"),
+                        )
+                        try:
+                            response = client.embeddings.create(
+                                input=input, model=model_name
+                            )
+                            embeddings = [item.embedding for item in response.data]
+                            return cast(Embeddings, embeddings)
+                        except Exception as e:
+                            raise e
+
+                self.embedder_config = OllamaEmbeddingFunction()
+            elif provider == "vertexai":
+                self.embedder_config = (
+                    embedding_functions.GoogleVertexEmbeddingFunction(
+                        model_name=model_name,
+                        api_key=config.get("api_key"),
+                    )
+                )
+            elif provider == "google":
+                self.embedder_config = (
+                    embedding_functions.GoogleGenerativeAiEmbeddingFunction(
+                        model_name=model_name,
+                        api_key=config.get("api_key"),
+                    )
+                )
+            elif provider == "cohere":
+                self.embedder_config = embedding_functions.CohereEmbeddingFunction(
+                    model_name=model_name,
+                    api_key=config.get("api_key"),
+                )
+            elif provider == "huggingface":
+                self.embedder_config = embedding_functions.HuggingFaceEmbeddingServer(
+                    url=config.get("api_url"),
+                )
+            else:
+                raise Exception(
+                    f"Unsupported embedding provider: {provider}, supported providers: [openai, azure, ollama, vertexai, google, cohere, huggingface]"
+                )
+        else:
+            validate_embedding_function(self.embedder_config)  # type: ignore # used for validating embedder_config if defined a embedding function/class
+            self.embedder_config = self.embedder_config
 
     def _initialize_app(self):
-        from embedchain import App
-        from embedchain.llm.base import BaseLlm
+        import chromadb
+        from chromadb.config import Settings
 
-        class FakeLLM(BaseLlm):
-            pass
+        self._set_embedder_config()
+        chroma_client = chromadb.PersistentClient(
+            path=f"{db_storage_path()}/{self.type}/{self.agents}",
+            settings=Settings(allow_reset=self.allow_reset),
+        )
 
-        self.app = App.from_config(config=self.config)
-        self.app.llm = FakeLLM()
-        if self.allow_reset:
-            self.app.reset()
+        self.app = chroma_client
+        try:
+            self.collection = self.app.get_collection(
+                name=self.type, embedding_function=self.embedder_config
+            )
+        except Exception:
+            self.collection = self.app.create_collection(
+                name=self.type, embedding_function=self.embedder_config
+            )
 
     def _sanitize_role(self, role: str) -> str:
         """
@@ -87,11 +149,14 @@ class RAGStorage(Storage):
         return role.replace("\n", "").replace(" ", "_").replace("/", "_")
 
     def save(self, value: Any, metadata: Dict[str, Any]) -> None:
-        if not hasattr(self, "app"):
+        if not hasattr(self, "app") or not hasattr(self, "collection"):
             self._initialize_app()
+        try:
             self._generate_embedding(value, metadata)
+        except Exception as e:
+            logging.error(f"Error during {self.type} save: {str(e)}")
 
-    def search(  # type: ignore # BUG?: Signature of "search" incompatible with supertype "Storage"
+    def search(
         self,
         query: str,
         limit: int = 3,
@@ -100,31 +165,54 @@ class RAGStorage(Storage):
     ) -> List[Any]:
         if not hasattr(self, "app"):
             self._initialize_app()
-        from embedchain.vectordb.chroma import InvalidDimensionException
 
-        with suppress_logging():
-            try:
-                results = (
-                    self.app.search(query, limit, where=filter)
-                    if filter
-                    else self.app.search(query, limit)
-                )
-            except InvalidDimensionException:
-                self.app.reset()
-                return []
-        return [r for r in results if r["metadata"]["score"] >= score_threshold]
+        try:
+            with suppress_logging():
+                response = self.collection.query(query_texts=query, n_results=limit)
+
+            results = []
+            for i in range(len(response["ids"][0])):
+                result = {
+                    "id": response["ids"][0][i],
+                    "metadata": response["metadatas"][0][i],
+                    "context": response["documents"][0][i],
+                    "score": response["distances"][0][i],
+                }
+                if result["score"] >= score_threshold:
+                    results.append(result)
+
+            return results
+        except Exception as e:
+            logging.error(f"Error during {self.type} search: {str(e)}")
+            return []
 
-    def _generate_embedding(self, text: str, metadata: Dict[str, Any]) -> Any:
+    def _generate_embedding(self, text: str, metadata: Dict[str, Any]) -> None:  # type: ignore
-        if not hasattr(self, "app"):
+        if not hasattr(self, "app") or not hasattr(self, "collection"):
             self._initialize_app()
-        from embedchain.models.data_type import DataType
 
-        self.app.add(text, data_type=DataType.TEXT, metadata=metadata)
+        self.collection.add(
+            documents=[text],
+            metadatas=[metadata or {}],
+            ids=[str(uuid.uuid4())],
+        )
 
     def reset(self) -> None:
         try:
             shutil.rmtree(f"{db_storage_path()}/{self.type}")
+            if self.app:
+                self.app.reset()
         except Exception as e:
+            if "attempt to write a readonly database" in str(e):
+                # Ignore this specific error
+                pass
+            else:
-            raise Exception(
+                raise Exception(
                     f"An error occurred while resetting the {self.type} memory: {e}"
                 )
+
+    def _create_default_embedding_function(self):
+        import chromadb.utils.embedding_functions as embedding_functions
+
+        return embedding_functions.OpenAIEmbeddingFunction(
+            api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"
+        )
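The rewritten `search` flattens ChromaDB's columnar query response, parallel lists under `ids`/`metadatas`/`documents`/`distances` with one inner list per query text, into per-hit dicts, then applies a threshold. The reshaping logic in isolation, exercised against a hand-built response so no chromadb install is needed (the helper name and fake data are illustrative):

```python
def flatten_query_response(response: dict, score_threshold: float = 0.35) -> list[dict]:
    # Chroma returns one inner list per query text; a single query was issued,
    # so every column is indexed with [0] before iterating over hits.
    results = []
    for i in range(len(response["ids"][0])):
        result = {
            "id": response["ids"][0][i],
            "metadata": response["metadatas"][0][i],
            "context": response["documents"][0][i],
            "score": response["distances"][0][i],
        }
        if result["score"] >= score_threshold:
            results.append(result)
    return results


fake_response = {
    "ids": [["a", "b"]],
    "metadatas": [[{"k": 1}, {"k": 2}]],
    "documents": [["first doc", "second doc"]],
    "distances": [[0.9, 0.1]],
}
print(flatten_query_response(fake_response))  # only "a" survives the threshold
```

As in the hunk, the threshold is compared against the raw `distances` value, so entries at or above `score_threshold` are kept.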
@@ -76,27 +76,13 @@ def crew(func) -> Callable[..., Crew]:
         instantiated_agents = []
         agent_roles = set()
 
-        # Collect methods from crew in order
-        all_functions = [
-            (name, getattr(self, name))
-            for name, attr in self.__class__.__dict__.items()
-            if callable(attr)
-        ]
-        tasks = [
-            (name, method)
-            for name, method in all_functions
-            if hasattr(method, "is_task")
-        ]
-
-        agents = [
-            (name, method)
-            for name, method in all_functions
-            if hasattr(method, "is_agent")
-        ]
+        # Use the preserved task and agent information
+        tasks = self._original_tasks.items()
+        agents = self._original_agents.items()
 
         # Instantiate tasks in order
         for task_name, task_method in tasks:
-            task_instance = task_method()
+            task_instance = task_method(self)
             instantiated_tasks.append(task_instance)
             agent_instance = getattr(task_instance, "agent", None)
             if agent_instance and agent_instance.role not in agent_roles:
@@ -105,7 +91,7 @@ def crew(func) -> Callable[..., Crew]:
 
         # Instantiate agents not included by tasks
         for agent_name, agent_method in agents:
-            agent_instance = agent_method()
+            agent_instance = agent_method(self)
             if agent_instance.role not in agent_roles:
                 instantiated_agents.append(agent_instance)
                 agent_roles.add(agent_instance.role)
@@ -34,6 +34,18 @@ def CrewBase(cls: T) -> T:
             self.map_all_agent_variables()
             self.map_all_task_variables()
 
+            # Preserve task and agent information
+            self._original_tasks = {
+                name: method
+                for name, method in cls.__dict__.items()
+                if hasattr(method, "is_task") and method.is_task
+            }
+            self._original_agents = {
+                name: method
+                for name, method in cls.__dict__.items()
+                if hasattr(method, "is_agent") and method.is_agent
+            }
+
         @staticmethod
         def load_yaml(config_path: Path):
             try:
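`CrewBase` now snapshots the decorated methods from the raw class dict at init time, so the `@crew` decorator no longer rescans every callable on the instance. The marker-attribute scan in isolation (the `task` decorator and class here are illustrative stand-ins for crewAI's real annotations):

```python
def task(func):
    # Marker-style decorator: tag the function, return it unchanged.
    func.is_task = True
    return func


class MyCrew:
    @task
    def research(self):
        return "research task"

    def helper(self):
        return "not a task"


# Scan the raw class dict; dict order preserves definition order,
# so tasks are later instantiated in the order they were written.
original_tasks = {
    name: method
    for name, method in MyCrew.__dict__.items()
    if hasattr(method, "is_task") and method.is_task
}
print(list(original_tasks))  # ['research']
```

Because `cls.__dict__` holds plain functions rather than bound methods, the caller passes the instance explicitly, which matches the `task_method(self)` calls in the `@crew` hunk above.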
@@ -65,7 +65,7 @@ class Telemetry:
|
|||||||
|
|
||||||
self.provider.add_span_processor(processor)
|
self.provider.add_span_processor(processor)
|
||||||
self.ready = True
|
self.ready = True
|
||||||
except BaseException as e:
|
except Exception as e:
|
||||||
if isinstance(
|
if isinstance(
|
||||||
e,
|
e,
|
||||||
(SystemExit, KeyboardInterrupt, GeneratorExit, asyncio.CancelledError),
|
(SystemExit, KeyboardInterrupt, GeneratorExit, asyncio.CancelledError),
|
||||||
@@ -83,10 +83,18 @@ class Telemetry:
                     self.ready = False
                     self.trace_set = False
 
+    def _safe_telemetry_operation(self, operation):
+        if not self.ready:
+            return
+        try:
+            operation()
+        except Exception:
+            pass
+
     def crew_creation(self, crew: Crew, inputs: dict[str, Any] | None):
         """Records the creation of a crew."""
-        if self.ready:
-            try:
+        def operation():
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Crew Created")
             self._add_attribute(
@@ -127,8 +135,7 @@ class Telemetry:
                             "allow_code_execution?": agent.allow_code_execution,
                             "max_retry_limit": agent.max_retry_limit,
                             "tools_names": [
-                                tool.name.casefold()
-                                for tool in agent.tools or []
+                                tool.name.casefold() for tool in agent.tools or []
                             ],
                         }
                         for agent in crew.agents
@@ -157,8 +164,7 @@ class Telemetry:
                                 else None
                             ),
                             "tools_names": [
-                                tool.name.casefold()
-                                for tool in task.tools or []
+                                tool.name.casefold() for tool in task.tools or []
                             ],
                         }
                         for task in crew.tasks
@@ -196,8 +202,7 @@ class Telemetry:
                             "allow_code_execution?": agent.allow_code_execution,
                             "max_retry_limit": agent.max_retry_limit,
                             "tools_names": [
-                                tool.name.casefold()
-                                for tool in agent.tools or []
+                                tool.name.casefold() for tool in agent.tools or []
                             ],
                         }
                         for agent in crew.agents
@@ -219,8 +224,7 @@ class Telemetry:
                             ),
                             "agent_key": task.agent.key if task.agent else None,
                             "tools_names": [
-                                tool.name.casefold()
-                                for tool in task.tools or []
+                                tool.name.casefold() for tool in task.tools or []
                             ],
                         }
                         for task in crew.tasks
@@ -229,13 +233,13 @@ class Telemetry:
             )
             span.set_status(Status(StatusCode.OK))
             span.end()
-        except Exception:
-            pass
+        self._safe_telemetry_operation(operation)
 
     def task_started(self, crew: Crew, task: Task) -> Span | None:
         """Records task started in a crew."""
-        if self.ready:
-            try:
+        def operation():
             tracer = trace.get_tracer("crewai.telemetry")
 
             created_span = tracer.start_span("Task Created")
@@ -270,15 +274,13 @@ class Telemetry:
             )
 
             return span
-        except Exception:
-            pass
-
-        return None
+        return self._safe_telemetry_operation(operation)
 
     def task_ended(self, span: Span, task: Task, crew: Crew):
         """Records task execution in a crew."""
-        if self.ready:
-            try:
+        def operation():
             if crew.share_crew:
                 self._add_attribute(
                     span,
@@ -288,13 +290,13 @@ class Telemetry:
 
             span.set_status(Status(StatusCode.OK))
             span.end()
-        except Exception:
-            pass
+        self._safe_telemetry_operation(operation)
 
     def tool_repeated_usage(self, llm: Any, tool_name: str, attempts: int):
         """Records the repeated usage 'error' of a tool by an agent."""
-        if self.ready:
-            try:
+        def operation():
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Tool Repeated Usage")
             self._add_attribute(
@@ -308,13 +310,13 @@ class Telemetry:
             self._add_attribute(span, "llm", llm.model)
             span.set_status(Status(StatusCode.OK))
             span.end()
-        except Exception:
-            pass
+        self._safe_telemetry_operation(operation)
 
     def tool_usage(self, llm: Any, tool_name: str, attempts: int):
         """Records the usage of a tool by an agent."""
-        if self.ready:
-            try:
+        def operation():
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Tool Usage")
             self._add_attribute(
@@ -328,13 +330,13 @@ class Telemetry:
             self._add_attribute(span, "llm", llm.model)
             span.set_status(Status(StatusCode.OK))
             span.end()
-        except Exception:
-            pass
+        self._safe_telemetry_operation(operation)
 
     def tool_usage_error(self, llm: Any):
         """Records the usage of a tool by an agent."""
-        if self.ready:
-            try:
+        def operation():
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Tool Usage Error")
             self._add_attribute(
@@ -346,14 +348,13 @@ class Telemetry:
             self._add_attribute(span, "llm", llm.model)
             span.set_status(Status(StatusCode.OK))
             span.end()
-        except Exception:
-            pass
+        self._safe_telemetry_operation(operation)
 
     def individual_test_result_span(
         self, crew: Crew, quality: float, exec_time: int, model_name: str
     ):
-        if self.ready:
-            try:
+        def operation():
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Crew Individual Test Result")
 
@@ -369,8 +370,8 @@ class Telemetry:
             self._add_attribute(span, "model_name", model_name)
             span.set_status(Status(StatusCode.OK))
             span.end()
-        except Exception:
-            pass
+        self._safe_telemetry_operation(operation)
 
     def test_execution_span(
         self,
@@ -379,8 +380,7 @@ class Telemetry:
         inputs: dict[str, Any] | None,
         model_name: str,
     ):
-        if self.ready:
-            try:
+        def operation():
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Crew Test Execution")
 
@@ -401,44 +401,40 @@ class Telemetry:
 
             span.set_status(Status(StatusCode.OK))
             span.end()
-        except Exception:
-            pass
+        self._safe_telemetry_operation(operation)
 
     def deploy_signup_error_span(self):
-        if self.ready:
-            try:
+        def operation():
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Deploy Signup Error")
             span.set_status(Status(StatusCode.OK))
             span.end()
-        except Exception:
-            pass
+        self._safe_telemetry_operation(operation)
 
     def start_deployment_span(self, uuid: Optional[str] = None):
-        if self.ready:
-            try:
+        def operation():
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Start Deployment")
             if uuid:
                 self._add_attribute(span, "uuid", uuid)
             span.set_status(Status(StatusCode.OK))
             span.end()
-        except Exception:
-            pass
+        self._safe_telemetry_operation(operation)
 
     def create_crew_deployment_span(self):
-        if self.ready:
-            try:
+        def operation():
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Create Crew Deployment")
             span.set_status(Status(StatusCode.OK))
             span.end()
-        except Exception:
-            pass
+        self._safe_telemetry_operation(operation)
 
     def get_crew_logs_span(self, uuid: Optional[str], log_type: str = "deployment"):
-        if self.ready:
-            try:
+        def operation():
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Get Crew Logs")
             self._add_attribute(span, "log_type", log_type)
@@ -446,20 +442,19 @@ class Telemetry:
                 self._add_attribute(span, "uuid", uuid)
             span.set_status(Status(StatusCode.OK))
             span.end()
-        except Exception:
-            pass
+        self._safe_telemetry_operation(operation)
 
     def remove_crew_span(self, uuid: Optional[str] = None):
-        if self.ready:
-            try:
+        def operation():
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Remove Crew")
             if uuid:
                 self._add_attribute(span, "uuid", uuid)
             span.set_status(Status(StatusCode.OK))
             span.end()
-        except Exception:
-            pass
+        self._safe_telemetry_operation(operation)
 
     def crew_execution_span(self, crew: Crew, inputs: dict[str, Any] | None):
         """Records the complete execution of a crew.
@@ -467,8 +462,7 @@ class Telemetry:
         """
         self.crew_creation(crew, inputs)
 
-        if (self.ready) and (crew.share_crew):
-            try:
+        def operation():
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Crew Execution")
             self._add_attribute(
@@ -533,12 +527,13 @@ class Telemetry:
                 ),
             )
             return span
-        except Exception:
-            pass
+
+        if crew.share_crew:
+            return self._safe_telemetry_operation(operation)
+        return None
 
     def end_crew(self, crew, final_string_output):
-        if (self.ready) and (crew.share_crew):
-            try:
+        def operation():
             self._add_attribute(
                 crew._execution_span,
                 "crewai_version",
@@ -563,47 +558,46 @@ class Telemetry:
             )
             crew._execution_span.set_status(Status(StatusCode.OK))
             crew._execution_span.end()
-        except Exception:
-            pass
+
+        if crew.share_crew:
+            self._safe_telemetry_operation(operation)
 
     def _add_attribute(self, span, key, value):
         """Add an attribute to a span."""
-        try:
+
+        def operation():
             return span.set_attribute(key, value)
-        except Exception:
-            pass
+
+        self._safe_telemetry_operation(operation)
 
     def flow_creation_span(self, flow_name: str):
-        if self.ready:
-            try:
+        def operation():
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Flow Creation")
             self._add_attribute(span, "flow_name", flow_name)
             span.set_status(Status(StatusCode.OK))
             span.end()
-        except Exception:
-            pass
+        self._safe_telemetry_operation(operation)
 
     def flow_plotting_span(self, flow_name: str, node_names: list[str]):
-        if self.ready:
-            try:
+        def operation():
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Flow Plotting")
             self._add_attribute(span, "flow_name", flow_name)
             self._add_attribute(span, "node_names", json.dumps(node_names))
             span.set_status(Status(StatusCode.OK))
             span.end()
-        except Exception:
-            pass
+        self._safe_telemetry_operation(operation)
 
     def flow_execution_span(self, flow_name: str, node_names: list[str]):
-        if self.ready:
-            try:
+        def operation():
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Flow Execution")
             self._add_attribute(span, "flow_name", flow_name)
             self._add_attribute(span, "node_names", json.dumps(node_names))
             span.set_status(Status(StatusCode.OK))
             span.end()
-        except Exception:
-            pass
+        self._safe_telemetry_operation(operation)
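The telemetry hunks above replace the repeated `if self.ready: try/except` scaffolding with a single guard helper that each method hands a closure. A stripped-down sketch of that pattern (this is not the full crewAI `Telemetry` class, and unlike the diff's helper this sketch also propagates the operation's return value so the guarded call can be used as an expression):

```python
class Telemetry:
    def __init__(self, ready: bool = True):
        self.ready = ready

    def _safe_telemetry_operation(self, operation):
        # Skip entirely when telemetry is disabled; swallow any failure
        # so instrumentation can never break the caller.
        if not self.ready:
            return None
        try:
            return operation()
        except Exception:
            return None

    def crew_creation(self):
        # Each recording method becomes a closure handed to the guard.
        def operation():
            return "span-created"

        return self._safe_telemetry_operation(operation)


t = Telemetry()
print(t.crew_creation())                        # span-created
print(Telemetry(ready=False).crew_creation())   # None
```

Centralizing the guard means the readiness check and the exception policy live in one place instead of being re-typed in every span-recording method.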
@@ -6,14 +6,13 @@ from difflib import SequenceMatcher
 from textwrap import dedent
 from typing import Any, List, Union
 
+import crewai.utilities.events as events
 from crewai.agents.tools_handler import ToolsHandler
 from crewai.task import Task
 from crewai.telemetry import Telemetry
 from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
 from crewai.tools.tool_usage_events import ToolUsageError, ToolUsageFinished
 from crewai.utilities import I18N, Converter, ConverterError, Printer
-import crewai.utilities.events as events
 
 
 agentops = None
 if os.environ.get("AGENTOPS_API_KEY"):
@@ -300,8 +299,11 @@ class ToolUsage:
         descriptions = []
         for tool in self.tools:
             args = {
-                k: {k2: v2 for k2, v2 in v.items() if k2 in ["description", "type"]}
-                for k, v in tool.args.items()
+                name: {
+                    "description": field.description,
+                    "type": field.annotation.__name__,
+                }
+                for name, field in tool.args_schema.model_fields.items()
             }
             descriptions.append(
                 "\n".join(
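The `args` change above reads argument metadata from the tool's Pydantic schema (`args_schema.model_fields`) instead of the raw `tool.args` dict. A self-contained sketch of that comprehension, assuming a Pydantic v2 `BaseModel` (the `SearchArgs` model is illustrative, not from crewAI):

```python
from pydantic import BaseModel, Field


class SearchArgs(BaseModel):
    query: str = Field(..., description="Search query text")
    limit: int = Field(5, description="Maximum number of results")


# Same comprehension shape as the new ToolUsage code: each field's
# FieldInfo exposes its description and annotated type directly.
args = {
    name: {
        "description": field.description,
        "type": field.annotation.__name__,
    }
    for name, field in SearchArgs.model_fields.items()
}

print(args["query"])  # {'description': 'Search query text', 'type': 'str'}
```

Going through `model_fields` avoids the intermediate JSON-schema dict, so the description and type come straight from the declared `Field` metadata.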
@@ -75,8 +75,8 @@ def test_install_success(mock_get, mock_subprocess_run):
         [
             "uv",
             "add",
-            "--extra-index-url",
-            "https://app.crewai.com/pypi/sample-repo",
+            "--index",
+            "sample-repo=https://example.com/repo",
             "sample-tool",
         ],
         capture_output=False,
@@ -9,6 +9,7 @@ from unittest.mock import MagicMock, patch
 import instructor
 import pydantic_core
 import pytest
 
 from crewai.agent import Agent
 from crewai.agents.cache import CacheHandler
 from crewai.crew import Crew
@@ -497,6 +498,7 @@ def test_cache_hitting_between_agents():
 @pytest.mark.vcr(filter_headers=["authorization"])
 def test_api_calls_throttling(capsys):
     from unittest.mock import patch
 
     from crewai_tools import tool
 
     @tool
@@ -779,11 +781,14 @@ def test_async_task_execution_call_count():
     list_important_history.output = mock_task_output
     write_article.output = mock_task_output
 
-    with patch.object(
-        Task, "execute_sync", return_value=mock_task_output
-    ) as mock_execute_sync, patch.object(
-        Task, "execute_async", return_value=mock_future
-    ) as mock_execute_async:
+    with (
+        patch.object(
+            Task, "execute_sync", return_value=mock_task_output
+        ) as mock_execute_sync,
+        patch.object(
+            Task, "execute_async", return_value=mock_future
+        ) as mock_execute_async,
+    ):
         crew.kickoff()
 
     assert mock_execute_async.call_count == 2
@@ -1105,6 +1110,7 @@ def test_dont_set_agents_step_callback_if_already_set():
 @pytest.mark.vcr(filter_headers=["authorization"])
 def test_crew_function_calling_llm():
     from unittest.mock import patch
 
     from crewai_tools import tool
 
     llm = "gpt-4o"
@@ -1448,52 +1454,6 @@ def test_crew_does_not_interpolate_without_inputs():
     interpolate_task_inputs.assert_not_called()
 
 
-# def test_crew_partial_inputs():
-#     agent = Agent(
-#         role="{topic} Researcher",
-#         goal="Express hot takes on {topic}.",
-#         backstory="You have a lot of experience with {topic}.",
-#     )
-
-#     task = Task(
-#         description="Give me an analysis around {topic}.",
-#         expected_output="{points} bullet points about {topic}.",
-#     )
-
-#     crew = Crew(agents=[agent], tasks=[task], inputs={"topic": "AI"})
-#     inputs = {"topic": "AI"}
-#     crew._interpolate_inputs(inputs=inputs)  # Manual call for now
-
-#     assert crew.tasks[0].description == "Give me an analysis around AI."
-#     assert crew.tasks[0].expected_output == "{points} bullet points about AI."
-#     assert crew.agents[0].role == "AI Researcher"
-#     assert crew.agents[0].goal == "Express hot takes on AI."
-#     assert crew.agents[0].backstory == "You have a lot of experience with AI."
-
-
-# def test_crew_invalid_inputs():
-#     agent = Agent(
-#         role="{topic} Researcher",
-#         goal="Express hot takes on {topic}.",
-#         backstory="You have a lot of experience with {topic}.",
-#     )
-
-#     task = Task(
-#         description="Give me an analysis around {topic}.",
-#         expected_output="{points} bullet points about {topic}.",
-#     )
-
-#     crew = Crew(agents=[agent], tasks=[task], inputs={"subject": "AI"})
-#     inputs = {"subject": "AI"}
-#     crew._interpolate_inputs(inputs=inputs)  # Manual call for now
-
-#     assert crew.tasks[0].description == "Give me an analysis around {topic}."
-#     assert crew.tasks[0].expected_output == "{points} bullet points about {topic}."
-#     assert crew.agents[0].role == "{topic} Researcher"
-#     assert crew.agents[0].goal == "Express hot takes on {topic}."
-#     assert crew.agents[0].backstory == "You have a lot of experience with {topic}."
-
-
 def test_task_callback_on_crew():
     from unittest.mock import MagicMock, patch
 
@@ -1770,7 +1730,10 @@ def test_manager_agent_with_tools_raises_exception():
 @patch("crewai.crew.Crew.kickoff")
 @patch("crewai.crew.CrewTrainingHandler")
 @patch("crewai.crew.TaskEvaluator")
-def test_crew_train_success(task_evaluator, crew_training_handler, kickoff):
+@patch("crewai.crew.Crew.copy")
+def test_crew_train_success(
+    copy_mock, task_evaluator, crew_training_handler, kickoff_mock
+):
     task = Task(
         description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
         expected_output="5 bullet points with a paragraph for each idea.",
@@ -1781,9 +1744,19 @@ def test_crew_train_success(task_evaluator, crew_training_handler, kickoff):
         agents=[researcher, writer],
         tasks=[task],
     )
 
+    # Create a mock for the copied crew
+    copy_mock.return_value = crew
+
     crew.train(
         n_iterations=2, inputs={"topic": "AI"}, filename="trained_agents_data.pkl"
     )
 
+    # Ensure kickoff is called on the copied crew
+    kickoff_mock.assert_has_calls(
+        [mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
+    )
+
     task_evaluator.assert_has_calls(
         [
             mock.call(researcher),
@@ -1822,10 +1795,6 @@ def test_crew_train_success(task_evaluator, crew_training_handler, kickoff):
         ]
     )
 
-    kickoff.assert_has_calls(
-        [mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
-    )
-
 
 def test_crew_train_error():
     task = Task(
@@ -1840,7 +1809,7 @@ def test_crew_train_error():
     )
 
     with pytest.raises(TypeError) as e:
-        crew.train()
+        crew.train()  # type: ignore purposefully throwing err
     assert "train() missing 1 required positional argument: 'n_iterations'" in str(
         e
     )
@@ -2536,8 +2505,9 @@ def test_conditional_should_execute():
 
 
 @mock.patch("crewai.crew.CrewEvaluator")
+@mock.patch("crewai.crew.Crew.copy")
 @mock.patch("crewai.crew.Crew.kickoff")
-def test_crew_testing_function(mock_kickoff, crew_evaluator):
+def test_crew_testing_function(kickoff_mock, copy_mock, crew_evaluator):
     task = Task(
         description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
         expected_output="5 bullet points with a paragraph for each idea.",
@@ -2548,11 +2518,15 @@ def test_crew_testing_function(mock_kickoff, crew_evaluator):
         agents=[researcher],
         tasks=[task],
     )
 
+    # Create a mock for the copied crew
+    copy_mock.return_value = crew
+
     n_iterations = 2
     crew.test(n_iterations, openai_model_name="gpt-4o-mini", inputs={"topic": "AI"})
 
-    assert len(mock_kickoff.mock_calls) == n_iterations
-    mock_kickoff.assert_has_calls(
+    # Ensure kickoff is called on the copied crew
+    kickoff_mock.assert_has_calls(
         [mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
     )
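The train/test fixes above patch `Crew.copy` so the kickoff calls made during training land on a controlled object that the mocks can observe. The general mock pattern, shown here with a stand-in `Crew` class rather than the crewAI one (its `train` is a simplified sketch of "run kickoff on a copy"):

```python
from unittest import mock
from unittest.mock import patch


class Crew:
    """Stand-in for illustration: train() runs kickoff on a copy of itself."""

    def copy(self):
        return Crew()

    def kickoff(self, inputs=None):
        return f"ran with {inputs}"

    def train(self, n_iterations, inputs):
        copied = self.copy()
        for _ in range(n_iterations):
            copied.kickoff(inputs=inputs)


crew = Crew()
with (
    patch.object(Crew, "kickoff") as kickoff_mock,
    patch.object(Crew, "copy", return_value=crew) as copy_mock,
):
    crew.train(n_iterations=2, inputs={"topic": "AI"})

# copy() was routed back to our known instance, so every kickoff
# call is recorded on kickoff_mock.
copy_mock.assert_called_once()
kickoff_mock.assert_has_calls(
    [mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
)
print(kickoff_mock.call_count)  # 2
```

Patching at the class level means the copy returned inside `train` picks up the same mocked methods, which is why the tests can assert on `kickoff_mock` even though `kickoff` runs on the copy.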
File diff suppressed because one or more lines are too long
@@ -1,5 +1,5 @@
 import pytest
+from unittest.mock import patch
 from crewai.agent import Agent
 from crewai.crew import Crew
 from crewai.memory.short_term.short_term_memory import ShortTermMemory
@@ -26,7 +26,6 @@ def short_term_memory():
     return ShortTermMemory(crew=Crew(agents=[agent], tasks=[task]))
 
 
-@pytest.mark.vcr(filter_headers=["authorization"])
 def test_save_and_search(short_term_memory):
     memory = ShortTermMemoryItem(
         data="""test value test value test value test value test value test value
@@ -35,12 +34,28 @@ def test_save_and_search(short_term_memory):
         agent="test_agent",
         metadata={"task": "test_task"},
     )
 
-    short_term_memory.save(
-        value=memory.data,
-        metadata=memory.metadata,
-        agent=memory.agent,
-    )
-    find = short_term_memory.search("test value", score_threshold=0.01)[0]
-    assert find["context"] == memory.data, "Data value mismatch."
-    assert find["metadata"]["agent"] == "test_agent", "Agent value mismatch."
+    with patch.object(ShortTermMemory, "save") as mock_save:
+        short_term_memory.save(
+            value=memory.data,
+            metadata=memory.metadata,
+            agent=memory.agent,
+        )
+
+        mock_save.assert_called_once_with(
+            value=memory.data,
+            metadata=memory.metadata,
+            agent=memory.agent,
+        )
+
+    expected_result = [
+        {
+            "context": memory.data,
+            "metadata": {"agent": "test_agent"},
+            "score": 0.95,
+        }
+    ]
+    with patch.object(ShortTermMemory, "search", return_value=expected_result):
+        find = short_term_memory.search("test value", score_threshold=0.01)[0]
+        assert find["context"] == memory.data, "Data value mismatch."
+        assert find["metadata"]["agent"] == "test_agent", "Agent value mismatch."
|||||||
119
tests/tools/test_tool_usage.py
Normal file
119
tests/tools/test_tool_usage.py
Normal file
@@ -0,0 +1,119 @@
+import json
+import random
+from unittest.mock import MagicMock
+
+import pytest
+from crewai_tools import BaseTool
+from pydantic import BaseModel, Field
+
+from crewai import Agent, Task
+from crewai.tools.tool_usage import ToolUsage
+
+
+class RandomNumberToolInput(BaseModel):
+    min_value: int = Field(
+        ..., description="The minimum value of the range (inclusive)"
+    )
+    max_value: int = Field(
+        ..., description="The maximum value of the range (inclusive)"
+    )
+
+
+class RandomNumberTool(BaseTool):
+    name: str = "Random Number Generator"
+    description: str = "Generates a random number within a specified range"
+    args_schema: type[BaseModel] = RandomNumberToolInput
+
+    def _run(self, min_value: int, max_value: int) -> int:
+        return random.randint(min_value, max_value)
+
+
+# Example agent and task
+example_agent = Agent(
+    role="Number Generator",
+    goal="Generate random numbers for various purposes",
+    backstory="You are an AI agent specialized in generating random numbers within specified ranges.",
+    tools=[RandomNumberTool()],
+    verbose=True,
+)
+
+example_task = Task(
+    description="Generate a random number between 1 and 100",
+    expected_output="A random number between 1 and 100",
+    agent=example_agent,
+)
+
+
+def test_random_number_tool_range():
+    tool = RandomNumberTool()
+    result = tool._run(1, 10)
+    assert 1 <= result <= 10
+
+
+def test_random_number_tool_invalid_range():
+    tool = RandomNumberTool()
+    with pytest.raises(ValueError):
+        tool._run(10, 1)  # min_value > max_value
+
+
+def test_random_number_tool_schema():
+    tool = RandomNumberTool()
+
+    # Get the schema using model_json_schema()
|
||||||
|
schema = tool.args_schema.model_json_schema()
|
||||||
|
|
||||||
|
# Convert the schema to a string
|
||||||
|
schema_str = json.dumps(schema)
|
||||||
|
|
||||||
|
# Check if the schema string contains the expected fields
|
||||||
|
assert "min_value" in schema_str
|
||||||
|
assert "max_value" in schema_str
|
||||||
|
|
||||||
|
# Parse the schema string back to a dictionary
|
||||||
|
schema_dict = json.loads(schema_str)
|
||||||
|
|
||||||
|
# Check if the schema contains the correct field types
|
||||||
|
assert schema_dict["properties"]["min_value"]["type"] == "integer"
|
||||||
|
assert schema_dict["properties"]["max_value"]["type"] == "integer"
|
||||||
|
|
||||||
|
# Check if the schema contains the field descriptions
|
||||||
|
assert (
|
||||||
|
"minimum value" in schema_dict["properties"]["min_value"]["description"].lower()
|
||||||
|
)
|
||||||
|
assert (
|
||||||
|
"maximum value" in schema_dict["properties"]["max_value"]["description"].lower()
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def test_tool_usage_render():
|
||||||
|
tool = RandomNumberTool()
|
||||||
|
|
||||||
|
tool_usage = ToolUsage(
|
||||||
|
tools_handler=MagicMock(),
|
||||||
|
tools=[tool],
|
||||||
|
original_tools=[tool],
|
||||||
|
tools_description="Sample tool for testing",
|
||||||
|
tools_names="random_number_generator",
|
||||||
|
task=MagicMock(),
|
||||||
|
function_calling_llm=MagicMock(),
|
||||||
|
agent=MagicMock(),
|
||||||
|
action=MagicMock(),
|
||||||
|
)
|
||||||
|
|
||||||
|
rendered = tool_usage._render()
|
||||||
|
|
||||||
|
# Updated checks to match the actual output
|
||||||
|
assert "Tool Name: random number generator" in rendered
|
||||||
|
assert (
|
||||||
|
"Random Number Generator(min_value: 'integer', max_value: 'integer') - Generates a random number within a specified range min_value: 'The minimum value of the range (inclusive)', max_value: 'The maximum value of the range (inclusive)'"
|
||||||
|
in rendered
|
||||||
|
)
|
||||||
|
assert "Tool Arguments:" in rendered
|
||||||
|
assert (
|
||||||
|
"'min_value': {'description': 'The minimum value of the range (inclusive)', 'type': 'int'}"
|
||||||
|
in rendered
|
||||||
|
)
|
||||||
|
assert (
|
||||||
|
"'max_value': {'description': 'The maximum value of the range (inclusive)', 'type': 'int'}"
|
||||||
|
in rendered
|
||||||
|
)
|
||||||
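The `RandomNumberTool` added above is a thin wrapper over `random.randint` with pydantic-validated bounds; in particular, the new invalid-range test works because `random.randint` itself raises `ValueError` when the range is inverted. A dependency-free sketch of that core behavior (the helper name is ours, not part of the diff):

```python
import random


def random_number_tool(min_value: int, max_value: int) -> int:
    # Mirrors RandomNumberTool._run: random.randint raises ValueError when
    # min_value > max_value, which is what test_random_number_tool_invalid_range
    # relies on instead of any explicit bounds check in the tool.
    return random.randint(min_value, max_value)


result = random_number_tool(1, 10)
assert 1 <= result <= 10

try:
    random_number_tool(10, 1)  # min_value > max_value
except ValueError:
    inverted_range_raises = True
```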
uv.lock  (generated, 146 changed lines)
@@ -627,15 +627,14 @@ wheels = [
 
 [[package]]
 name = "crewai"
-version = "0.67.1"
+version = "0.75.1"
 source = { editable = "." }
 dependencies = [
-    { name = "agentops" },
     { name = "appdirs" },
     { name = "auth0-python" },
+    { name = "chromadb" },
     { name = "click" },
     { name = "crewai-tools" },
-    { name = "embedchain" },
     { name = "instructor" },
     { name = "json-repair" },
     { name = "jsonref" },
@@ -683,14 +682,13 @@ dev = [
 
 [package.metadata]
 requires-dist = [
-    { name = "agentops", specifier = ">=0.3.0" },
     { name = "agentops", marker = "extra == 'agentops'", specifier = ">=0.3.0" },
     { name = "appdirs", specifier = ">=1.4.4" },
     { name = "auth0-python", specifier = ">=4.7.1" },
+    { name = "chromadb", specifier = ">=0.4.24" },
     { name = "click", specifier = ">=8.1.7" },
-    { name = "crewai-tools", specifier = ">=0.12.1" },
-    { name = "crewai-tools", marker = "extra == 'tools'", specifier = ">=0.12.1" },
-    { name = "embedchain", specifier = ">=0.1.114" },
+    { name = "crewai-tools", specifier = ">=0.13.2" },
+    { name = "crewai-tools", marker = "extra == 'tools'", specifier = ">=0.13.2" },
     { name = "instructor", specifier = ">=1.3.3" },
     { name = "json-repair", specifier = ">=0.25.2" },
     { name = "jsonref", specifier = ">=1.1.0" },
@@ -705,13 +703,13 @@ requires-dist = [
     { name = "pyvis", specifier = ">=0.3.2" },
     { name = "regex", specifier = ">=2024.9.11" },
     { name = "tomli-w", specifier = ">=1.1.0" },
-    { name = "uv", specifier = ">=0.4.18" },
+    { name = "uv", specifier = ">=0.4.25" },
 ]
 
 [package.metadata.requires-dev]
 dev = [
     { name = "cairosvg", specifier = ">=2.7.1" },
-    { name = "crewai-tools", specifier = ">=0.12.1" },
+    { name = "crewai-tools", specifier = ">=0.13.2" },
     { name = "mkdocs", specifier = ">=1.4.3" },
     { name = "mkdocs-material", specifier = ">=9.5.7" },
     { name = "mkdocs-material-extensions", specifier = ">=1.3.1" },
@@ -730,7 +728,7 @@ dev = [
 
 [[package]]
 name = "crewai-tools"
-version = "0.12.1"
+version = "0.13.2"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "beautifulsoup4" },
@@ -748,9 +746,9 @@ dependencies = [
     { name = "requests" },
     { name = "selenium" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/11/60/1860127d927939f9143cab9af059cfbe6f160839b6ba1d652a9ed4e04fa6/crewai_tools-0.12.1.tar.gz", hash = "sha256:22fa3ea57936913faed77a2a64c131371f78b2ced207e63dcc71220eac445698", size = 420190 }
+sdist = { url = "https://files.pythonhosted.org/packages/96/02/136f42ed8a7bd706a85663714c615bdcb684e43e95e4719c892aa0ce3d53/crewai_tools-0.13.2.tar.gz", hash = "sha256:c6782f2e868c0e96b25891f1b40fb8c90c01e920bab2fd1388f89ef1d7a4b99b", size = 816250 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/23/e6/cc9acbc6ee828898956b18036643fc2150b6c1b976ab34f29b9cadc085b5/crewai_tools-0.12.1-py3-none-any.whl", hash = "sha256:e87d393dd1900834a224686644e025eb44e74171f317c4ff2df778aff6ade4b8", size = 463435 },
+    { url = "https://files.pythonhosted.org/packages/28/30/df215173b6193b2cfb1902a339443be73056eae89579805b853c6f359761/crewai_tools-0.13.2-py3-none-any.whl", hash = "sha256:8c7583c9559fb625f594349c6553a5251ebd7b21918735ad6fbe8bab7ec3db50", size = 463444 },
 ]
 
 [[package]]
@@ -921,7 +919,7 @@ wheels = [
 
 [[package]]
 name = "embedchain"
-version = "0.1.122"
+version = "0.1.123"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "alembic" },
@@ -934,6 +932,7 @@ dependencies = [
     { name = "langchain-cohere" },
     { name = "langchain-community" },
     { name = "langchain-openai" },
+    { name = "langsmith" },
     { name = "mem0ai" },
     { name = "openai" },
     { name = "posthog" },
@@ -945,9 +944,9 @@ dependencies = [
     { name = "sqlalchemy" },
     { name = "tiktoken" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/25/9b/fa14dc95f8736c672bcebd677f48990670f1a9fac8ea1631222b8b820d69/embedchain-0.1.122.tar.gz", hash = "sha256:ea0a4d00a4a1909e0d662dc499fa6a0da119783ec4773df1271da74da3e8296b", size = 124799 }
+sdist = { url = "https://files.pythonhosted.org/packages/5d/6a/955b5a72fa6727db203c4d46ae0e30ac47f4f50389f663cd5ea157b0d819/embedchain-0.1.123.tar.gz", hash = "sha256:aecaf81c21de05b5cdb649b6cde95ef68ffa759c69c54f6ff2eaa667f2ad0580", size = 124797 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/95/3f/42c97c1d3c9483076843987982a018115b6a28be02091fb475e6dbc743f2/embedchain-0.1.122-py3-none-any.whl", hash = "sha256:c137be81d0949b5ee16c689837d659837980cfabbb38643c2720cd1a794d8d27", size = 210911 },
+    { url = "https://files.pythonhosted.org/packages/a7/51/0c78d26da4afbe68370306669556b274f1021cac02f3155d8da2be407763/embedchain-0.1.123-py3-none-any.whl", hash = "sha256:1210e993b6364d7c702b6bd44b053fc244dd77f2a65ea4b90b62709114ea6c25", size = 210909 },
 ]
 
 [[package]]
@@ -1551,7 +1550,7 @@ wheels = [
 
 [[package]]
 name = "httpx"
-version = "0.27.2"
+version = "0.27.0"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "anyio" },
@@ -1560,9 +1559,9 @@ dependencies = [
     { name = "idna" },
     { name = "sniffio" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/78/82/08f8c936781f67d9e6b9eeb8a0c8b4e406136ea4c3d1f89a5db71d42e0e6/httpx-0.27.2.tar.gz", hash = "sha256:f7c2be1d2f3c3c3160d441802406b206c2b76f5947b11115e6df10c6c65e66c2", size = 144189 }
+sdist = { url = "https://files.pythonhosted.org/packages/5c/2d/3da5bdf4408b8b2800061c339f240c1802f2e82d55e50bd39c5a881f47f0/httpx-0.27.0.tar.gz", hash = "sha256:a0cb88a46f32dc874e04ee956e4c2764aba2aa228f650b06788ba6bda2962ab5", size = 126413 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/56/95/9377bcb415797e44274b51d46e3249eba641711cf3348050f76ee7b15ffc/httpx-0.27.2-py3-none-any.whl", hash = "sha256:7bb2708e112d8fdd7829cd4243970f0c223274051cb35ee80c03301ee29a3df0", size = 76395 },
+    { url = "https://files.pythonhosted.org/packages/41/7b/ddacf6dcebb42466abd03f368782142baa82e08fc0c1f8eaa05b4bae87d5/httpx-0.27.0-py3-none-any.whl", hash = "sha256:71d5465162c13681bff01ad59b2cc68dd838ea1f10e51574bac27103f00c91a5", size = 75590 },
 ]
 
 [package.optional-dependencies]
@@ -1908,7 +1907,7 @@ wheels = [
 
 [[package]]
 name = "langchain"
-version = "0.2.16"
+version = "0.3.3"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "aiohttp" },
@@ -1923,30 +1922,31 @@ dependencies = [
     { name = "sqlalchemy" },
     { name = "tenacity" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/fd/53/8ebf21de8d17e7e0f0998f28d689f60d7ed420acb7ab2fba59ca04e80e54/langchain-0.2.16.tar.gz", hash = "sha256:ffb426a76a703b73ac69abad77cd16eaf03dda76b42cff55572f592d74944166", size = 414668 }
+sdist = { url = "https://files.pythonhosted.org/packages/70/b2/258c6a33b5e5f817a57ecd22b1e74756f7246ac66f39d0cf6d2ef515fcb7/langchain-0.3.3.tar.gz", hash = "sha256:6435882996a029a60c61c356bbe51bab4a8f43a54210f5f03e3c4474d19d1842", size = 416891 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/0d/29/635343c0d155997569b544d26da5a2a9ebade2423baffc9cd6066b01a386/langchain-0.2.16-py3-none-any.whl", hash = "sha256:8f59ee8b45f268df4b924ea3b9c63e49286efa756d16b3f6a9de5c6e502c36e1", size = 1001195 },
+    { url = "https://files.pythonhosted.org/packages/92/82/c17abaa44074ec716409305da4783f633b0eb9b09bb28ed5005220269bdb/langchain-0.3.3-py3-none-any.whl", hash = "sha256:05ac98c674853c2386d043172820e37ceac9b913aaaf1e51217f0fc424112c72", size = 1005176 },
 ]
 
 [[package]]
 name = "langchain-cohere"
-version = "0.1.9"
+version = "0.3.1"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "cohere" },
     { name = "langchain-core" },
     { name = "langchain-experimental" },
     { name = "pandas" },
+    { name = "pydantic" },
     { name = "tabulate" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/a4/a9/30462b68f8c15da886078fe5c96fab3085241168ea03d968eee1182e00a9/langchain_cohere-0.1.9.tar.gz", hash = "sha256:549620d23bc3d77f62d1045787095fe2c1cfa233dba69455139f9a2f65f952fa", size = 29987 }
+sdist = { url = "https://files.pythonhosted.org/packages/ae/ea/53fd2515e353cac4ddd6d7a41dbb0651dfc9ffb0924acb7a1aa7a722f29b/langchain_cohere-0.3.1.tar.gz", hash = "sha256:990bd4db68e229371c90eee98a1a78b4f4d33a32c22c8da6c2cd30b5044de9eb", size = 36739 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/52/b1/ee8d44898cfe43703f05a0ffd95294d3ebe4c61879f19c6357c860131312/langchain_cohere-0.1.9-py3-none-any.whl", hash = "sha256:96d6a15125797319474ac84b54024e5024f3f5fc45032ebf228d95d6998c9b13", size = 35218 },
+    { url = "https://files.pythonhosted.org/packages/64/5e/bbfb1b33703a973e7eef6582b523ae932e7e64c9b84ac7eecaa8af71475e/langchain_cohere-0.3.1-py3-none-any.whl", hash = "sha256:adf37542feb293562791b8dd1691580b0dcb2117fb987f2684f694912465f554", size = 43992 },
 ]
 
 [[package]]
 name = "langchain-community"
-version = "0.2.17"
+version = "0.3.2"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "aiohttp" },
@@ -1955,19 +1955,20 @@ dependencies = [
     { name = "langchain-core" },
     { name = "langsmith" },
     { name = "numpy" },
+    { name = "pydantic-settings" },
     { name = "pyyaml" },
     { name = "requests" },
     { name = "sqlalchemy" },
     { name = "tenacity" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/1f/54/be928e3962d24b40c31899f5c5ed99b0c7ef7c3bb7601eb2fe7a6ce75dc4/langchain_community-0.2.17.tar.gz", hash = "sha256:b0745c1fcf1bd532ed4388f90b47139d6a6c6ba48a87aa68aa32d4d6bb97259d", size = 1589425 }
+sdist = { url = "https://files.pythonhosted.org/packages/86/6e/119bbbd4d55ab14dc6fc4a82a2466b88f7ddb989bdbdfcf96327c5daba4e/langchain_community-0.3.2.tar.gz", hash = "sha256:469bf5357a08c915cebc4c506dca4617eec737d82a9b6e340df5f3b814dc89bc", size = 1608524 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/ac/33/c6ee472412f751062311075bb391a7870ab57cdb8da5d47f359895b2d3c2/langchain_community-0.2.17-py3-none-any.whl", hash = "sha256:d07c31b641e425fb8c3e7148ad6a62e1b54a9adac6e1173021a7dd3148266063", size = 2339964 },
+    { url = "https://files.pythonhosted.org/packages/cc/57/a8b4826eaa29d3663c957251ab32275a0c178bdb0e262a1204ed820f430c/langchain_community-0.3.2-py3-none-any.whl", hash = "sha256:fffcd484c7674e81ceaa72a809962338bfb17ec8f9e0377ce4e9d884e6fe8ca5", size = 2367818 },
 ]
 
 [[package]]
 name = "langchain-core"
-version = "0.2.41"
+version = "0.3.12"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "jsonpatch" },
@@ -1978,48 +1979,48 @@ dependencies = [
     { name = "tenacity" },
     { name = "typing-extensions" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/2a/92/2ad97f0c23b5ee5043df1a93d97edd4404136003e7d22b641de081738408/langchain_core-0.2.41.tar.gz", hash = "sha256:bc12032c5a298d85be754ccb129bc13ea21ccb1d6e22f8d7ba18b8da64315bb5", size = 316952 }
+sdist = { url = "https://files.pythonhosted.org/packages/7b/15/76ec101e550e7e16de85e64fcb4ff2d281cb70cfe65c95ee6e56182a5f51/langchain_core-0.3.12.tar.gz", hash = "sha256:98a3c078e375786aa84939bfd1111263af2f3bc402bbe2cac9fa18a387459cf2", size = 327019 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/bc/02/2b2cf9550cee1a7ffa42fe60c55e2d0e7d397535609b42562611fb40e78d/langchain_core-0.2.41-py3-none-any.whl", hash = "sha256:3278fda5ba9a05defae8bb19f1226032add6aab21917db7b3bc74e750e263e84", size = 397013 },
+    { url = "https://files.pythonhosted.org/packages/ce/4a/a6499d93805c3e6316e641b6934e23c98c011d00b9a2138835d567e976e5/langchain_core-0.3.12-py3-none-any.whl", hash = "sha256:46050d34f5fa36dc57dca971c6a26f505643dd05ee0492c7ac286d0a78a82037", size = 407737 },
 ]
 
 [[package]]
 name = "langchain-experimental"
-version = "0.0.65"
+version = "0.3.2"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "langchain-community" },
     { name = "langchain-core" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/e1/e0/d92210398a006f6e43ddd25166537f79cb3e9ccc32e316e70d349353842b/langchain_experimental-0.0.65.tar.gz", hash = "sha256:83706df07d8a7e6ec1bda74174add7e4431b5f4a8818e19b65986b94c9c99b25", size = 138516 }
+sdist = { url = "https://files.pythonhosted.org/packages/bf/41/84d3eac564261aaab45bc02bdc43b5e49242439c6f2844a24b81404a17cd/langchain_experimental-0.3.2.tar.gz", hash = "sha256:d41cc28c46f58616d18a1230595929f80a58d1982c4053dc3afe7f1c03f22426", size = 139583 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/97/ca/93913b7530b36869946ca8f93b161bea294ea46a367e748943a78bc3553c/langchain_experimental-0.0.65-py3-none-any.whl", hash = "sha256:2a0f268cfb8c79d43cedf9c4840f70bd8b25934e595311e6690804d0355dd7ee", size = 207160 },
+    { url = "https://files.pythonhosted.org/packages/63/f6/d80592aa8d335af734054f5cfe130ecd38fdfb9c4f90ba0007f0419f2fce/langchain_experimental-0.3.2-py3-none-any.whl", hash = "sha256:b6a26f2a05e056a27ad30535ed306a6b9d8cc2e3c0326d15030d11b6e7505dbb", size = 208126 },
 ]
 
 [[package]]
 name = "langchain-openai"
-version = "0.1.25"
+version = "0.2.2"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "langchain-core" },
     { name = "openai" },
     { name = "tiktoken" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/2f/cb/98fe365f2e5eee39d0130279959a84182ab414879b666ffc2b9d69b95633/langchain_openai-0.1.25.tar.gz", hash = "sha256:eb116f744f820247a72f54313fb7c01524fba0927120d4e899e5e4ab41ad3928", size = 45224 }
+sdist = { url = "https://files.pythonhosted.org/packages/55/4c/0a88c51192b0aeef5212019060da7112191750ab7a185195d8b45835578c/langchain_openai-0.2.2.tar.gz", hash = "sha256:9ae8e2ec7d1ca84fd3bfa82186724528d68e1510a1dc9cdf617a7c669b7a7768", size = 42364 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/7f/2e/a4430cad7a98e29e9612648f8b12d7449ab635a742c19bf1d62f8713ecaa/langchain_openai-0.1.25-py3-none-any.whl", hash = "sha256:f0b34a233d0d9cb8fce6006c903e57085c493c4f0e32862b99063b96eaedb109", size = 51550 },
+    { url = "https://files.pythonhosted.org/packages/b0/4e/c62ce98a5412f031f7f03dda5c35b6ed474e0083986261073ca9da5554d5/langchain_openai-0.2.2-py3-none-any.whl", hash = "sha256:3a203228cb38e4711ebd8c0a3bd51854e447f1d017e8475b6467b07ce7dd3e88", size = 49687 },
 ]
 
 [[package]]
 name = "langchain-text-splitters"
-version = "0.2.4"
+version = "0.3.0"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "langchain-core" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/83/b3/b1ccde47c86c5fe2585dc012555cff7949c556bd6993dd9c09e49a356190/langchain_text_splitters-0.2.4.tar.gz", hash = "sha256:f7daa7a3b0aa8309ce248e2e2b6fc8115be01118d336c7f7f7dfacda0e89bf29", size = 20236 }
+sdist = { url = "https://files.pythonhosted.org/packages/57/35/08ac1ca01c58da825f070bd1fdc9192a9ff52c0a048f74c93b05df70c127/langchain_text_splitters-0.3.0.tar.gz", hash = "sha256:f9fe0b4d244db1d6de211e7343d4abc4aa90295aa22e1f0c89e51f33c55cd7ce", size = 20234 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/8f/f3/d01591229e9d0eec1e8106ed6f9b670f299beb1c94fed4aa335afa78acb0/langchain_text_splitters-0.2.4-py3-none-any.whl", hash = "sha256:2702dee5b7cbdd595ccbe43b8d38d01a34aa8583f4d6a5a68ad2305ae3e7b645", size = 25552 },
+    { url = "https://files.pythonhosted.org/packages/da/6a/d1303b722a3fa7a0a8c2f8f5307e42f0bdbded46d99cca436f3db0df5294/langchain_text_splitters-0.3.0-py3-none-any.whl", hash = "sha256:e84243e45eaff16e5b776cd9c81b6d07c55c010ebcb1965deb3d1792b7358e83", size = 25543 },
 ]
 
 [[package]]
@@ -2166,7 +2167,7 @@ wheels = [
 
 [[package]]
 name = "mem0ai"
-version = "0.1.17"
+version = "0.1.19"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "langchain-community" },
@@ -2179,9 +2180,9 @@ dependencies = [
     { name = "rank-bm25" },
     { name = "sqlalchemy" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/9b/23/fc537f7125c88efeb81190b661b4e17786d039d4e00da2975ea253b45c8f/mem0ai-0.1.17.tar.gz", hash = "sha256:3b24c5904c96717c2285847f7ad98be0167421fd67b23c19771e81bef00ec2f1", size = 51167 }
+sdist = { url = "https://files.pythonhosted.org/packages/6e/12/23f8f250a2ce798a51841417acbbfc9c12c294d3ae427e81a0a0dbab54f6/mem0ai-0.1.19.tar.gz", hash = "sha256:faf7c198a85df2f502ac41fe2bc1593ca0383f993b431a4e4a36e0aed3fa533c", size = 51167 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/89/49/4fa5e5f759004e90fa9b4adbc9f224f09f09e182bf3d4dfebed69b10fe8a/mem0ai-0.1.17-py3-none-any.whl", hash = "sha256:6505bc45880c26b25edf0a17242d71939ebaab27be0ae09b77f25fd400f61b76", size = 73252 },
+    { url = "https://files.pythonhosted.org/packages/7e/43/04d22bc9cac6fa19b10a405c59c21e94b8ae2a180b40307ec4a577f6ee39/mem0ai-0.1.19-py3-none-any.whl", hash = "sha256:dfff9cfe191072abd34ed8bb4fcbee2819603eed430d89611ef3181b1a46fff9", size = 73240 },
 ]
 
 [[package]]
@@ -2496,6 +2497,8 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/b2/07/8cbb75d6cfbe8712d8f7f6a5615f083c6e710ab916b748fbb20373ddb142/multiprocess-0.70.17-py311-none-any.whl", hash = "sha256:2884701445d0177aec5bd5f6ee0df296773e4fb65b11903b94c613fb46cfb7d1", size = 144346 },
     { url = "https://files.pythonhosted.org/packages/a4/69/d3f343a61a2f86ef10ed7865a26beda7c71554136ce187b0384b1c2c9ca3/multiprocess-0.70.17-py312-none-any.whl", hash = "sha256:2818af14c52446b9617d1b0755fa70ca2f77c28b25ed97bdaa2c69a22c47b46c", size = 147990 },
     { url = "https://files.pythonhosted.org/packages/c8/b7/2e9a4fcd871b81e1f2a812cd5c6fb52ad1e8da7bf0d7646c55eaae220484/multiprocess-0.70.17-py313-none-any.whl", hash = "sha256:20c28ca19079a6c879258103a6d60b94d4ffe2d9da07dda93fb1c8bc6243f522", size = 149843 },
+    { url = "https://files.pythonhosted.org/packages/ae/d7/fd7a092fc0ab1845a1a97ca88e61b9b7cc2e9d6fcf0ed24e9480590c2336/multiprocess-0.70.17-py38-none-any.whl", hash = "sha256:1d52f068357acd1e5bbc670b273ef8f81d57863235d9fbf9314751886e141968", size = 132635 },
+    { url = "https://files.pythonhosted.org/packages/f9/41/0618ac724b8a56254962c143759e04fa01c73b37aa69dd433f16643bd38b/multiprocess-0.70.17-py39-none-any.whl", hash = "sha256:c3feb874ba574fbccfb335980020c1ac631fbf2a3f7bee4e2042ede62558a021", size = 133359 },
 ]
 
 [[package]]
@@ -3179,8 +3182,6 @@ version = "5.9.8"
 source = { registry = "https://pypi.org/simple" }
 sdist = { url = "https://files.pythonhosted.org/packages/90/c7/6dc0a455d111f68ee43f27793971cf03fe29b6ef972042549db29eec39a2/psutil-5.9.8.tar.gz", hash = "sha256:6be126e3225486dff286a8fb9a06246a5253f4c7c53b475ea5f5ac934e64194c", size = 503247 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/fe/5f/c26deb822fd3daf8fde4bdb658bf87d9ab1ffd3fca483816e89a9a9a9084/psutil-5.9.8-cp27-none-win32.whl", hash = "sha256:36f435891adb138ed3c9e58c6af3e2e6ca9ac2f365efe1f9cfef2794e6c93b4e", size = 248660 },
-    { url = "https://files.pythonhosted.org/packages/32/1d/cf66073d74d6146187e2d0081a7616df4437214afa294ee4f16f80a2f96a/psutil-5.9.8-cp27-none-win_amd64.whl", hash = "sha256:bd1184ceb3f87651a67b2708d4c3338e9b10c5df903f2e3776b62303b26cb631", size = 251966 },
     { url = "https://files.pythonhosted.org/packages/e7/e3/07ae864a636d70a8a6f58da27cb1179192f1140d5d1da10886ade9405797/psutil-5.9.8-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:aee678c8720623dc456fa20659af736241f575d79429a0e5e9cf88ae0605cc81", size = 248702 },
     { url = "https://files.pythonhosted.org/packages/b3/bd/28c5f553667116b2598b9cc55908ec435cb7f77a34f2bff3e3ca765b0f78/psutil-5.9.8-cp36-abi3-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8cb6403ce6d8e047495a701dc7c5bd788add903f8986d523e3e20b98b733e421", size = 285242 },
     { url = "https://files.pythonhosted.org/packages/c5/4f/0e22aaa246f96d6ac87fe5ebb9c5a693fbe8877f537a1022527c47ca43c5/psutil-5.9.8-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d06016f7f8625a1825ba3732081d77c94589dca78b7a3fc072194851e88461a4", size = 288191 },
@@ -3387,6 +3388,19 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/a9/f9/b6bcaf874f410564a78908739c80861a171788ef4d4f76f5009656672dfe/pydantic_core-2.23.4-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:9a5bce9d23aac8f0cf0836ecfc033896aa8443b501c58d0602dbfd5bd5b37753", size = 1920344 },
 ]
 
+[[package]]
+name = "pydantic-settings"
+version = "2.6.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "pydantic" },
+    { name = "python-dotenv" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/6c/66/5f1a9da10675bfb3b9da52f5b689c77e0a5612263fcce510cfac3e99a168/pydantic_settings-2.6.0.tar.gz", hash = "sha256:44a1804abffac9e6a30372bb45f6cafab945ef5af25e66b1c634c01dd39e0188", size = 75232 }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/34/19/26bb6bdb9fdad5f0dfce538780814084fb667b4bc37fcb28459c14b8d3b5/pydantic_settings-2.6.0-py3-none-any.whl", hash = "sha256:4a819166f119b74d7f8c765196b165f95cc7487ce58ea27dec8a5a26be0970e0", size = 28578 },
+]
+
 [[package]]
 name = "pygments"
 version = "2.18.0"
@@ -3436,14 +3450,14 @@ wheels = [
 
 [[package]]
 name = "pypdf"
-version = "4.3.1"
+version = "5.0.1"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "typing-extensions", marker = "python_full_version < '3.11'" },
|
{ name = "typing-extensions", marker = "python_full_version < '3.11'" },
|
||||||
]
|
]
|
||||||
sdist = { url = "https://files.pythonhosted.org/packages/f0/65/2ed7c9e1d31d860f096061b3dd2d665f501e09faaa0409a3f0d719d2a16d/pypdf-4.3.1.tar.gz", hash = "sha256:b2f37fe9a3030aa97ca86067a56ba3f9d3565f9a791b305c7355d8392c30d91b", size = 293266 }
|
sdist = { url = "https://files.pythonhosted.org/packages/9d/28/6bc2ca8a521512f2904e6aa3028af43a864fe2b66c77ea01bbbc97f52b98/pypdf-5.0.1.tar.gz", hash = "sha256:a361c3c372b4a659f9c8dd438d5ce29a753c79c620dc6e1fd66977651f5547ea", size = 4999113 }
|
||||||
wheels = [
|
wheels = [
|
||||||
{ url = "https://files.pythonhosted.org/packages/3c/60/eccdd92dd4af3e4bea6d6a342f7588c618a15b9bec4b968af581e498bcc4/pypdf-4.3.1-py3-none-any.whl", hash = "sha256:64b31da97eda0771ef22edb1bfecd5deee4b72c3d1736b7df2689805076d6418", size = 295825 },
|
{ url = "https://files.pythonhosted.org/packages/48/8f/9bbf22ba6a00001a45dbc54337e5bbbd43e7d8f34c8158c92cddc45736af/pypdf-5.0.1-py3-none-any.whl", hash = "sha256:ff8a32da6c7a63fea9c32fa4dd837cdd0db7966adf6c14f043e3f12592e992db", size = 294470 },
|
||||||
]
|
]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
@@ -4528,27 +4542,27 @@ socks = [
|
|||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "uv"
|
name = "uv"
|
||||||
version = "0.4.18"
|
version = "0.4.25"
|
||||||
source = { registry = "https://pypi.org/simple" }
|
source = { registry = "https://pypi.org/simple" }
|
||||||
sdist = { url = "https://files.pythonhosted.org/packages/7d/60/bf5ad6895740e7269ee2f5cf7515cf2756cc8eb06c07c9783abcf1d7860f/uv-0.4.18.tar.gz", hash = "sha256:954964eff8c7e2bc63dd4beeb8d45bcaddb5149a7ef29a36abd77ec76c8b837e", size = 2008833 }
|
sdist = { url = "https://files.pythonhosted.org/packages/d0/bc/1a013408b7f9f437385705652f404b6b15127ecf108327d13be493bdfb81/uv-0.4.25.tar.gz", hash = "sha256:d39077cdfe3246885fcdf32e7066ae731a166101d063629f9cea08738f79e6a3", size = 2064863 }
|
||||||
wheels = [
|
wheels = [
|
||||||
{ url = "https://files.pythonhosted.org/packages/9e/f9/b3f093abb8f91e2374461b903a4f5e37e96dd04dbf584e34b79bf9a6bbdf/uv-0.4.18-py3-none-linux_armv6l.whl", hash = "sha256:1944c0ee567ca7db60705c5d213a75b25601094b026cc17af3e704651c1e3753", size = 12264752 },
|
{ url = "https://files.pythonhosted.org/packages/84/18/9c9056d373620b1cf5182ce9b2d258e86d117d667cf8883e12870f2a5edf/uv-0.4.25-py3-none-linux_armv6l.whl", hash = "sha256:94fb2b454afa6bdfeeea4b4581c878944ca9cf3a13712e6762f245f5fbaaf952", size = 13028246 },
|
||||||
{ url = "https://files.pythonhosted.org/packages/b6/98/3623ca28954953a5abdc988eb68d0460e1decf37b245c84db2d1323b17f8/uv-0.4.18-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:5234d47abe339c15c318e8b1bbd136ea61c4574503eda6944a5aaea91b7f6775", size = 12488345 },
|
{ url = "https://files.pythonhosted.org/packages/a1/19/8a3f09aba30ac5433dfecde55d5241a07c96bb12340c3b810bc58188a12e/uv-0.4.25-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:a7c3a18c20ddb527d296d1222bddf42b78031c50b5b4609d426569b5fb61f5b0", size = 13175265 },
|
||||||
{ url = "https://files.pythonhosted.org/packages/29/2b/ff62b32b4a7cbfb445156b1d8757f29190f854aa702baa045e8645a19144/uv-0.4.18-py3-none-macosx_11_0_arm64.whl", hash = "sha256:0c4cb31594cb2ed21bd3b603a207e99dfb9610c3db44da9dbbff0f237270f582", size = 11568639 },
|
{ url = "https://files.pythonhosted.org/packages/e8/c9/2f924bb29bd53c51b839c1c6126bd2cf4c451d4a7d8f34be078f9e31c57e/uv-0.4.25-py3-none-macosx_11_0_arm64.whl", hash = "sha256:18100f0f36419a154306ed6211e3490bf18384cdf3f1a0950848bf64b62fa251", size = 12255610 },
|
||||||
{ url = "https://files.pythonhosted.org/packages/bb/7f/49a724b0c8e09fca03c166e7f18ad48c8962c9be543899a27eecc13b8b86/uv-0.4.18-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.musllinux_1_1_aarch64.whl", hash = "sha256:8af0b60adcfa2e87c77a3008d3ed6e0b577c0535468dc58e06f905ccbd27124f", size = 11812252 },
|
{ url = "https://files.pythonhosted.org/packages/b2/5a/d8f8971aeb3389679505cf633a786cd72a96ce232f80f14cfe5a693b4c64/uv-0.4.25-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.musllinux_1_1_aarch64.whl", hash = "sha256:6e981b1465e30102e41946adede9cb08051a5d70c6daf09f91a7ea84f0b75c08", size = 12506511 },
|
||||||
{ url = "https://files.pythonhosted.org/packages/e5/88/0b20af8d76e7b8e6ae19af6d14180a0a9e3c23ef6f3cd38370a2ba663364/uv-0.4.18-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f043c3c4514c149a00a86c3bf44df43062416d41002114e60df33895e8511c41", size = 12084699 },
|
{ url = "https://files.pythonhosted.org/packages/e3/96/8c73520daeba5022cec8749e44afd4ca9ef774bf728af9c258bddec3577f/uv-0.4.25-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:578ae385fad6bd6f3868828e33d54994c716b315b1bc49106ec1f54c640837e4", size = 12836250 },
|
||||||
{ url = "https://files.pythonhosted.org/packages/a1/fe/afd83b6ed495fe40a4a738cce0de77465af452f8bd58b254a6cf7544a581/uv-0.4.18-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1b59d742b81c7acf75a3aac71d9b24e07407e044bebcf39d3fc3c87094014e20", size = 12793964 },
|
{ url = "https://files.pythonhosted.org/packages/67/3d/b0e810d365fb154fe1d380a0f43ee35a683cf9162f2501396d711bec2621/uv-0.4.25-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2d29a78f011ecc2f31c13605acb6574c2894c06d258b0f8d0dbb899986800450", size = 13521303 },
|
||||||
{ url = "https://files.pythonhosted.org/packages/a6/54/623029d342f68518c25ed8a3863bc43ced0ad39da4dc83b310db3fe0a727/uv-0.4.18-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:fcc606da545d9a5ec5c2209e7eb2a4eb76627ad75df5eb5616c0b40789fe3933", size = 13386984 },
|
{ url = "https://files.pythonhosted.org/packages/2d/f4/dd3830ec7fc6e7e5237c184f30f2dbfed4f93605e472147eca1373bcc72b/uv-0.4.25-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:ec181be2bda10651a3558156409ac481549983e0276d0e3645e3b1464e7f8715", size = 14105308 },
|
||||||
{ url = "https://files.pythonhosted.org/packages/e9/50/eace0e9326318bf278491aafc3d63e8675a3d03472d2bc58ef601564cbb4/uv-0.4.18-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:96c3ccee0fd8cf0a9d679407e157b76db1a854638a4ba4fa14f4d116b4e39b03", size = 13137886 },
|
{ url = "https://files.pythonhosted.org/packages/f4/4e/0fca02f8681e4870beda172552e747e0424f6e9186546b00a5e92525fea9/uv-0.4.25-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:50c7d0d9e7f392f81b13bf3b7e37768d1486f2fc9d533a54982aa0ed11e4db23", size = 13859475 },
|
||||||
{ url = "https://files.pythonhosted.org/packages/f7/f5/f21bec94affe10e677ecbc0cc1b89d766c950dbc8e23df87451c71848c3f/uv-0.4.18-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:df225a568da01f3d7e126d886c3694c5a4a7d8b85162a4d6e97822716ca0e7c4", size = 17098535 },
|
{ url = "https://files.pythonhosted.org/packages/33/07/1100e9bc652f2850930f466869515d16ffe9582aaaaa99bac332ebdfe3ea/uv-0.4.25-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2fc35b5273f1e018aecd66b70e0fd7d2eb6698853dde3e2fc644e7ebf9f825b1", size = 18100840 },
|
||||||
{ url = "https://files.pythonhosted.org/packages/4e/89/77ad3d48f2ea11fd4e416b8cc1be18b26f189a4f0bf7918ac6fdb4255fa6/uv-0.4.18-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b08564c8c7e8b3665ad1d6c8924d4654451f96c956eb5f3b8ec995c77734163d", size = 12909876 },
|
{ url = "https://files.pythonhosted.org/packages/fa/98/ba1cb7dd2aa639a064a9e49721e08f12a3424456d60dde1327e7c6437930/uv-0.4.25-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a7022a71ff63a3838796f40e954b76bf7820fc27e96fe002c537e75ff8e34f1d", size = 13645464 },
|
||||||
{ url = "https://files.pythonhosted.org/packages/ca/29/1f451ef9b2138fdc777e24654da24fa60e42435936d29bcba0fb5bae3c44/uv-0.4.18-py3-none-manylinux_2_28_aarch64.whl", hash = "sha256:4be600474db6733078503012f2811c4383f490f77366e66b5f686316db52c870", size = 11976385 },
|
{ url = "https://files.pythonhosted.org/packages/0d/05/b97fb8c828a070e8291826922b2712d1146b11563b4860bc9ba80f5635d1/uv-0.4.25-py3-none-manylinux_2_28_aarch64.whl", hash = "sha256:e02afb0f6d4b58718347f7d7cfa5a801e985ce42181ba971ed85ef149f6658ca", size = 12694995 },
|
||||||
{ url = "https://files.pythonhosted.org/packages/f3/ea/4ac40da05e070f411edb4e99f01846aa8694071ce85f4eb83313f2cce423/uv-0.4.18-py3-none-musllinux_1_1_armv7l.whl", hash = "sha256:3e3ade81af961f48517fcd99318192c9c635ef9a38a7ca65026af0c803c71906", size = 12067581 },
|
{ url = "https://files.pythonhosted.org/packages/b3/97/63df050811379130202898f60e735a1a331ba3a93b8aa1e9bb466f533913/uv-0.4.25-py3-none-musllinux_1_1_armv7l.whl", hash = "sha256:3d7680795ea78cdbabbcce73d039b2651cf1fa635ddc1aa3082660f6d6255c50", size = 12831737 },
|
||||||
{ url = "https://files.pythonhosted.org/packages/cd/49/f6113c4cea8f7ba9e0a70723e8cb3b042c8cb1288f5671594a6b8de491bd/uv-0.4.18-py3-none-musllinux_1_1_i686.whl", hash = "sha256:4ec60141f92c9667548ebad8daf4c13aabdb58b22c21dcd834641e791e55f289", size = 12559831 },
|
{ url = "https://files.pythonhosted.org/packages/dc/e0/08352dcffa6e8435328861ea60b2c05e8bd030f1e93998443ba66209db7b/uv-0.4.25-py3-none-musllinux_1_1_i686.whl", hash = "sha256:aae9dcafd20d5ba978c8a4939ab942e8e2e155c109e9945207fbbd81d2892c9e", size = 13273529 },
|
||||||
{ url = "https://files.pythonhosted.org/packages/d2/e7/968414391249660bf4375123dd244eef36fc1c1676dcdc719aea1f319bd7/uv-0.4.18-py3-none-musllinux_1_1_ppc64le.whl", hash = "sha256:6566448278b6849846b6c586fc86748c66aa53ed70f5568e713122543cc86a50", size = 14181171 },
|
{ url = "https://files.pythonhosted.org/packages/25/f4/eaf95e5eee4e2e69884df0953d094deae07216f72068ef1df08c0f49841d/uv-0.4.25-py3-none-musllinux_1_1_ppc64le.whl", hash = "sha256:4c55040e67470f2b73e95e432aba06f103a0b348ea0b9c6689b1029c8d9e89fd", size = 15039860 },
|
||||||
{ url = "https://files.pythonhosted.org/packages/bb/ec/1fa1cffaa837df4bfd545818779dc608d0465be5c0e57b4328b5ed91b97f/uv-0.4.18-py3-none-musllinux_1_1_x86_64.whl", hash = "sha256:ade18dbbeb05c8cba4f842cc15b20e59467069183f348844750901227df5008d", size = 13042177 },
|
{ url = "https://files.pythonhosted.org/packages/69/04/482b1cc9e8d599c7d766c4ba2d7a512ed3989921443792f92f26b8d44fe6/uv-0.4.25-py3-none-musllinux_1_1_x86_64.whl", hash = "sha256:bdbfd0c476b9e80a3f89af96aed6dd7d2782646311317a9c72614ccce99bb2ad", size = 13776302 },
|
||||||
{ url = "https://files.pythonhosted.org/packages/31/32/fcd60657f45c072fce9f14916b2fcb876b40d8e3ee0ad1f9f212aecd9bfa/uv-0.4.18-py3-none-win32.whl", hash = "sha256:157e4a2c063b270de348862dd31abfe600d5601183fd2a6efe552840ac179626", size = 12184460 },
|
{ url = "https://files.pythonhosted.org/packages/cd/7e/3d1cb735cc3df6341ac884b73eeec1f51a29192721be40be8e9b1d82666d/uv-0.4.25-py3-none-win32.whl", hash = "sha256:7d266e02fefef930609328c31c075084295c3cb472bab3f69549fad4fd9d82b3", size = 12970553 },
|
||||||
{ url = "https://files.pythonhosted.org/packages/36/bd/35de80c6ac6d28383d5e7c91e8cea54b4aae8ae144c3411a16e9d28643c8/uv-0.4.18-py3-none-win_amd64.whl", hash = "sha256:8250148484e1b0f89ec19467946e86ee303619985c23228b5a2f2d94d15c6d8b", size = 13893818 },
|
{ url = "https://files.pythonhosted.org/packages/04/e9/c00d2bb4a286b13fad0f06488ea9cbe9e76d0efcd81e7a907f72195d5b83/uv-0.4.25-py3-none-win_amd64.whl", hash = "sha256:be2a4fc4fcade9ea5e67e51738c95644360d6e59b6394b74fc579fb617f902f7", size = 14702875 },
|
||||||
]
|
]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
|
|||||||