Compare commits

1 commit

| Author | SHA1 | Message | Date |
| :----- | :--- | :------ | :--- |
| Rip&Tear | 9f1b26bde2 | Added security.md file | 2024-10-30 20:37:22 +08:00 |

91 changed files with 813 additions and 3011 deletions

View File

@@ -19,5 +19,5 @@ jobs:
         run: pip install bandit
       - name: Run Bandit
-        run: bandit -c pyproject.toml -r src/ -ll
+        run: bandit -c pyproject.toml -r src/ -lll
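For context on the flag change above: Bandit's `-l/--level` switch filters findings by severity, and each repeated `l` raises the reporting threshold, so `-ll` reports MEDIUM and HIGH findings while `-lll` reports only HIGH. A minimal sketch (hypothetical file, not from this repository) of a finding the stricter threshold would silence:

```python
# example_medium_finding.py - hypothetical snippet for illustration only
import pickle

def load_profile(raw: bytes):
    # Bandit flags pickle deserialization of untrusted data as B301,
    # severity MEDIUM: reported under `-ll`, filtered out under `-lll`.
    return pickle.loads(raw)
```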

View File

@@ -22,8 +22,7 @@ A crew in crewAI represents a collaborative group of agents working together to
 | **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
 | **Language** _(optional)_ | `language` | Language used for the crew, defaults to English. |
 | **Language File** _(optional)_ | `language_file` | Path to the language file to be used for the crew. |
-| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
-| **Memory Config** _(optional)_ | `memory_config` | Configuration for the memory provider to be used by the crew. |
+| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). Defaults to `False`. |
 | **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
 | **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
 | **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. Defaults to `False`. |
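For orientation, a short sketch of how the attributes in this table map onto a `Crew` constructor call; the placeholder `[...]` lists follow the convention used elsewhere in these docs, and the specific values are illustrative:

```python
from crewai import Crew, Process

crew = Crew(
    agents=[...],               # the crew's agents
    tasks=[...],                # the crew's tasks
    process=Process.sequential,
    memory=True,                # `memory`: short-term, long-term, entity memory
    cache=True,                 # `cache`: reuse tool execution results
    max_rpm=None,               # `max_rpm`: no rate limit by default
    full_output=False,          # `full_output`: return only the final output
)
```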

View File

@@ -18,63 +18,60 @@ Flows allow you to create structured, event-driven workflows. They provide a sea
 4. **Flexible Control Flow**: Implement conditional logic, loops, and branching within your workflows.
-5. **Input Flexibility**: Flows can accept inputs to initialize or update their state, with different handling for structured and unstructured state management.

 ## Getting Started

 Let's create a simple Flow where you will use OpenAI to generate a random city in one task and then use that city to generate a fun fact in another task.

-### Passing Inputs to Flows
-Flows can accept inputs to initialize or update their state before execution. The way inputs are handled depends on whether the flow uses structured or unstructured state management.
-#### Structured State Management
-In structured state management, the flow's state is defined using a Pydantic `BaseModel`. Inputs must match the model's schema, and any updates will overwrite the default values.
-```python
-from crewai.flow.flow import Flow, listen, start
-from pydantic import BaseModel
-
-class ExampleState(BaseModel):
-    counter: int = 0
-    message: str = ""
-
-class StructuredExampleFlow(Flow[ExampleState]):
-    @start()
-    def first_method(self):
-        # Implementation
-
-flow = StructuredExampleFlow()
-flow.kickoff(inputs={"counter": 10})
-```
-In this example, the `counter` is initialized to `10`, while `message` retains its default value.
-#### Unstructured State Management
-In unstructured state management, the flow's state is a dictionary. You can pass any dictionary to update the state.
-```python
-from crewai.flow.flow import Flow, listen, start
-
-class UnstructuredExampleFlow(Flow):
-    @start()
-    def first_method(self):
-        # Implementation
-
-flow = UnstructuredExampleFlow()
-flow.kickoff(inputs={"counter": 5, "message": "Initial message"})
-```
-Here, both `counter` and `message` are updated based on the provided inputs.
-**Note:** Ensure that inputs for structured state management adhere to the defined schema to avoid validation errors.
-### Example Flow
-```python
-# Existing example code
-```
+```python Code
+from crewai.flow.flow import Flow, listen, start
+from dotenv import load_dotenv
+from litellm import completion
+
+class ExampleFlow(Flow):
+    model = "gpt-4o-mini"
+
+    @start()
+    def generate_city(self):
+        print("Starting flow")
+        response = completion(
+            model=self.model,
+            messages=[
+                {
+                    "role": "user",
+                    "content": "Return the name of a random city in the world.",
+                },
+            ],
+        )
+        random_city = response["choices"][0]["message"]["content"]
+        print(f"Random City: {random_city}")
+        return random_city
+
+    @listen(generate_city)
+    def generate_fun_fact(self, random_city):
+        response = completion(
+            model=self.model,
+            messages=[
+                {
+                    "role": "user",
+                    "content": f"Tell me a fun fact about {random_city}",
+                },
+            ],
+        )
+        fun_fact = response["choices"][0]["message"]["content"]
+        return fun_fact
+
+flow = ExampleFlow()
+result = flow.kickoff()
+
+print(f"Generated fun fact: {result}")
+```
 In the above example, we have created a simple Flow that generates a random city using OpenAI and then generates a fun fact about that city. The Flow consists of two tasks: `generate_city` and `generate_fun_fact`. The `generate_city` task is the starting point of the Flow, and the `generate_fun_fact` task listens for the output of the `generate_city` task.
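A minimal offline sketch of the same `@start`/`@listen` wiring, with the `completion()` calls replaced by hard-coded strings (the city value and return texts are illustrative assumptions, not from the diff), can make the control flow easier to follow:

```python
from crewai.flow.flow import Flow, listen, start

class CityFlow(Flow):
    @start()
    def generate_city(self):
        # Stand-in for the LLM call in the documented example
        return "Reykjavik"

    @listen(generate_city)
    def generate_fun_fact(self, random_city):
        # kickoff() returns the output of this, the last method to complete
        return f"A fun fact about {random_city} would be generated here."

flow = CityFlow()
print(flow.kickoff())
```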
@@ -97,14 +94,14 @@ The `@listen()` decorator can be used in several ways:
 1. **Listening to a Method by Name**: You can pass the name of the method you want to listen to as a string. When that method completes, the listener method will be triggered.

-```python
+```python Code
 @listen("generate_city")
 def generate_fun_fact(self, random_city):
     # Implementation
 ```

 2. **Listening to a Method Directly**: You can pass the method itself. When that method completes, the listener method will be triggered.

-```python
+```python Code
 @listen(generate_city)
 def generate_fun_fact(self, random_city):
     # Implementation
@@ -121,7 +118,7 @@ When you run a Flow, the final output is determined by the last method that comp
 Here's how you can access the final output:

 <CodeGroup>
-```python
+```python Code
 from crewai.flow.flow import Flow, listen, start

 class OutputExampleFlow(Flow):
@@ -133,17 +130,18 @@ class OutputExampleFlow(Flow):
     def second_method(self, first_output):
         return f"Second method received: {first_output}"

 flow = OutputExampleFlow()
 final_output = flow.kickoff()
 print("---- Final Output ----")
 print(final_output)
-```
+````

-```text
+```text Output
 ---- Final Output ----
 Second method received: Output from first_method
-```
+````
 </CodeGroup>
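The hunk above elides the body of `first_method`; judging from the printed output in the `text Output` block, a runnable reconstruction would look roughly like this (the `@start` method body is an assumption):

```python
from crewai.flow.flow import Flow, listen, start

class OutputExampleFlow(Flow):
    @start()
    def first_method(self):
        # Assumed body, reconstructed from the example's printed output
        return "Output from first_method"

    @listen(first_method)
    def second_method(self, first_output):
        return f"Second method received: {first_output}"

flow = OutputExampleFlow()
final_output = flow.kickoff()  # output of the last method that completes
print("---- Final Output ----")
print(final_output)
```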
@@ -158,7 +156,7 @@ Here's an example of how to update and access the state:

 <CodeGroup>
-```python
+```python Code
 from crewai.flow.flow import Flow, listen, start
 from pydantic import BaseModel
@@ -186,7 +184,7 @@ print("Final State:")
 print(flow.state)
 ```

-```text
+```text Output
 Final Output: Hello from first_method - updated by second_method
 Final State:
 counter=2 message='Hello from first_method - updated by second_method'
@@ -210,10 +208,10 @@ allowing developers to choose the approach that best fits their application's ne
 In unstructured state management, all state is stored in the `state` attribute of the `Flow` class.
 This approach offers flexibility, enabling developers to add or modify state attributes on the fly without defining a strict schema.

-```python
+```python Code
 from crewai.flow.flow import Flow, listen, start

-class UnstructuredExampleFlow(Flow):
+class UntructuredExampleFlow(Flow):

     @start()
     def first_method(self):
@@ -232,7 +230,8 @@ class UnstructuredExampleFlow(Flow):
         print(f"State after third_method: {self.state}")

-flow = UnstructuredExampleFlow()
+flow = UntructuredExampleFlow()
 flow.kickoff()
 ```
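The class body between these two hunks is elided. A minimal sketch of what unstructured state access looks like, under the assumption that `self.state` behaves as a plain dictionary as the prose describes (method bodies are illustrative, not from the diff):

```python
from crewai.flow.flow import Flow, listen, start

class UnstructuredStateSketch(Flow):
    @start()
    def first_method(self):
        # Keys can be added on the fly; no schema is enforced
        self.state["counter"] = 0
        self.state["message"] = "Hello from first_method"

    @listen(first_method)
    def second_method(self):
        self.state["counter"] += 1
        self.state["message"] += " - updated by second_method"
        print(f"State after second_method: {self.state}")

flow = UnstructuredStateSketch()
flow.kickoff()
```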
@@ -246,14 +245,16 @@ flow.kickoff()
 Structured state management leverages predefined schemas to ensure consistency and type safety across the workflow.
 By using models like Pydantic's `BaseModel`, developers can define the exact shape of the state, enabling better validation and auto-completion in development environments.

-```python
+```python Code
 from crewai.flow.flow import Flow, listen, start
 from pydantic import BaseModel

 class ExampleState(BaseModel):
     counter: int = 0
     message: str = ""

 class StructuredExampleFlow(Flow[ExampleState]):

     @start()
@@ -272,6 +273,7 @@ class StructuredExampleFlow(Flow[ExampleState]):
         print(f"State after third_method: {self.state}")

 flow = StructuredExampleFlow()
 flow.kickoff()
 ```
@@ -305,7 +307,7 @@ The `or_` function in Flows allows you to listen to multiple methods and trigger

 <CodeGroup>
-```python
+```python Code
 from crewai.flow.flow import Flow, listen, or_, start

 class OrExampleFlow(Flow):
@@ -322,11 +324,13 @@ class OrExampleFlow(Flow):
     def logger(self, result):
         print(f"Logger: {result}")

 flow = OrExampleFlow()
 flow.kickoff()
 ```

-```text
+```text Output
 Logger: Hello from the start method
 Logger: Hello from the second method
 ```
@@ -342,7 +346,7 @@ The `and_` function in Flows allows you to listen to multiple methods and trigge

 <CodeGroup>
-```python
+```python Code
 from crewai.flow.flow import Flow, and_, listen, start

 class AndExampleFlow(Flow):
@@ -364,7 +368,7 @@ flow = AndExampleFlow()
 flow.kickoff()
 ```

-```text
+```text Output
 ---- Logger ----
 {'greeting': 'Hello from the start method', 'joke': 'What do computers eat? Microchips.'}
 ```
@@ -381,7 +385,7 @@ You can specify different routes based on the output of the method, allowing you

 <CodeGroup>
-```python
+```python Code
 import random
 from crewai.flow.flow import Flow, listen, router, start
 from pydantic import BaseModel
@@ -412,11 +416,12 @@ class RouterFlow(Flow[ExampleState]):
     def fourth_method(self):
         print("Fourth method running")

 flow = RouterFlow()
 flow.kickoff()
 ```

-```text
+```text Output
 Starting the structured flow
 Third method running
 Fourth method running
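The `RouterFlow` body is mostly elided here; based on the imports and the printed output, the `@router` pattern looks roughly like the sketch below (the routing condition, route labels, and method bodies are assumptions):

```python
import random

from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel

class ExampleState(BaseModel):
    success_flag: bool = False

class RouterFlowSketch(Flow[ExampleState]):
    @start()
    def start_method(self):
        print("Starting the structured flow")
        self.state.success_flag = random.choice([True, False])

    @router(start_method)
    def second_method(self):
        # The string returned here selects which listeners fire next
        return "success" if self.state.success_flag else "failed"

    @listen("success")
    def third_method(self):
        print("Third method running")

    @listen("failed")
    def fourth_method(self):
        print("Fourth method running")

flow = RouterFlowSketch()
flow.kickoff()
```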
@@ -479,7 +484,7 @@ The `main.py` file is where you create your flow and connect the crews together.
 Here's an example of how you can connect the `poem_crew` in the `main.py` file:

-```python
+```python Code
 #!/usr/bin/env python
 from random import randint
@@ -572,20 +577,18 @@ This command will create a new directory for your crew within the `crews` folder
 After adding a new crew, your folder structure will look like this:

-| Directory/File          | Description                                                         |
-| :---------------------- | :------------------------------------------------------------------ |
-| `name_of_flow/`         | Root directory for the flow.                                        |
-| ├── `crews/`            | Contains directories for specific crews.                            |
-| │ ├── `poem_crew/`      | Directory for the "poem_crew" with its configurations and scripts.  |
-| │ │ ├── `config/`       | Configuration files directory for the "poem_crew".                  |
-| │ │ │ ├── `agents.yaml` | YAML file defining the agents for "poem_crew".                      |
-| │ │ │ └── `tasks.yaml`  | YAML file defining the tasks for "poem_crew".                       |
-| │ │ └── `poem_crew.py`  | Script for "poem_crew" functionality.                               |
-| └── `name_of_crew/`     | Directory for the new crew.                                         |
-|     ├── `config/`       | Configuration files directory for the new crew.                     |
-|     │ ├── `agents.yaml` | YAML file defining the agents for the new crew.                     |
-|     │ └── `tasks.yaml`  | YAML file defining the tasks for the new crew.                      |
-|     └── `name_of_crew.py` | Script for the new crew functionality.                            |
+name_of_flow/
+├── crews/
+│   ├── poem_crew/
+│   │   ├── config/
+│   │   │   ├── agents.yaml
+│   │   │   └── tasks.yaml
+│   │   └── poem_crew.py
+│   └── name_of_crew/
+│       ├── config/
+│       │   ├── agents.yaml
+│       │   └── tasks.yaml
+│       └── name_of_crew.py
 You can then customize the `agents.yaml` and `tasks.yaml` files to define the agents and tasks for your new crew. The `name_of_crew.py` file will contain the crew's logic, which you can modify to suit your needs.
@@ -607,7 +610,7 @@ CrewAI provides two convenient methods to generate plots of your flows:
 If you are working directly with a flow instance, you can generate a plot by calling the `plot()` method on your flow object. This method will create an HTML file containing the interactive plot of your flow.

-```python
+```python Code
 # Assuming you have a flow instance
 flow.plot("my_flow_plot")
 ```

View File

@@ -25,148 +25,52 @@ By default, CrewAI uses the `gpt-4o-mini` model. It uses environment variables i
 - `OPENAI_API_BASE`
 - `OPENAI_API_KEY`

-### 2. Updating YAML files
-You can update the `agents.yml` file to refer to the LLM you want to use:
-```yaml Code
-researcher:
-    role: Research Specialist
-    goal: Conduct comprehensive research and analysis to gather relevant information,
-      synthesize findings, and produce well-documented insights.
-    backstory: A dedicated research professional with years of experience in academic
-      investigation, literature review, and data analysis, known for thorough and
-      methodical approaches to complex research questions.
-    verbose: true
-    llm: openai/gpt-4o
-    # llm: azure/gpt-4o-mini
-    # llm: gemini/gemini-pro
-    # llm: anthropic/claude-3-5-sonnet-20240620
-    # llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
-    # llm: mistral/mistral-large-latest
-    # llm: ollama/llama3:70b
-    # llm: groq/llama-3.2-90b-vision-preview
-    # llm: watsonx/meta-llama/llama-3-1-70b-instruct
-    # ...
-```
-Keep in mind that you will need to set certain ENV vars depending on the model you are
-using to account for the credentials or set a custom LLM object like described below.
-Here are some of the required ENV vars for some of the LLM integrations:
-<AccordionGroup>
-<Accordion title="OpenAI">
-```python Code
-OPENAI_API_KEY=<your-api-key>
-OPENAI_API_BASE=<optional-custom-base-url>
-OPENAI_MODEL_NAME=<openai-model-name>
-OPENAI_ORGANIZATION=<your-org-id> # OPTIONAL
-OPENAI_API_BASE=<openaiai-api-base> # OPTIONAL
-```
-</Accordion>
-<Accordion title="Anthropic">
-```python Code
-ANTHROPIC_API_KEY=<your-api-key>
-```
-</Accordion>
-<Accordion title="Google">
-```python Code
-GEMINI_API_KEY=<your-api-key>
-```
-</Accordion>
-<Accordion title="Azure">
-```python Code
-AZURE_API_KEY=<your-api-key> # "my-azure-api-key"
-AZURE_API_BASE=<your-resource-url> # "https://example-endpoint.openai.azure.com"
-AZURE_API_VERSION=<api-version> # "2023-05-15"
-AZURE_AD_TOKEN=<your-azure-ad-token> # Optional
-AZURE_API_TYPE=<your-azure-api-type> # Optional
-```
-</Accordion>
-<Accordion title="AWS Bedrock">
-```python Code
-AWS_ACCESS_KEY_ID=<your-access-key>
-AWS_SECRET_ACCESS_KEY=<your-secret-key>
-AWS_DEFAULT_REGION=<your-region>
-```
-</Accordion>
-<Accordion title="Mistral">
-```python Code
-MISTRAL_API_KEY=<your-api-key>
-```
-</Accordion>
-<Accordion title="Groq">
-```python Code
-GROQ_API_KEY=<your-api-key>
-```
-</Accordion>
-<Accordion title="IBM watsonx.ai">
-```python Code
-WATSONX_URL=<your-url>                       # (required) Base URL of your WatsonX instance
-WATSONX_APIKEY=<your-apikey>                 # (required) IBM cloud API key
-WATSONX_TOKEN=<your-token>                   # (required) IAM auth token (alternative to APIKEY)
-WATSONX_PROJECT_ID=<your-project-id>         # (optional) Project ID of your WatsonX instance
-WATSONX_DEPLOYMENT_SPACE_ID=<your-space-id>  # (optional) ID of deployment space for deployed models
-```
-</Accordion>
-</AccordionGroup>
-### 3. Custom LLM Objects
+### 2. String Identifier
+```python Code
+agent = Agent(llm="gpt-4o", ...)
+```
+### 3. LLM Instance
+List of [more providers](https://docs.litellm.ai/docs/providers).
+```python Code
+from crewai import LLM
+
+llm = LLM(model="gpt-4", temperature=0.7)
+agent = Agent(llm=llm, ...)
+```
+### 4. Custom LLM Objects
 Pass a custom LLM implementation or object from another library.
-See below for examples.
-<Tabs>
-<Tab title="String Identifier">
-```python Code
-agent = Agent(llm="gpt-4o", ...)
-```
-</Tab>
-<Tab title="LLM Instance">
-```python Code
-from crewai import LLM
-
-llm = LLM(model="gpt-4", temperature=0.7)
-agent = Agent(llm=llm, ...)
-```
-</Tab>
-</Tabs>
 ## Connecting to OpenAI-Compatible LLMs

 You can connect to OpenAI-compatible LLMs using either environment variables or by setting specific attributes on the LLM class:

-<Tabs>
-<Tab title="Using Environment Variables">
-```python Code
-import os
-
-os.environ["OPENAI_API_KEY"] = "your-api-key"
-os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
-```
-</Tab>
-<Tab title="Using LLM Class Attributes">
-```python Code
-from crewai import LLM
-
-llm = LLM(
+1. Using environment variables:
+```python Code
+import os
+
+os.environ["OPENAI_API_KEY"] = "your-api-key"
+os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
+```
+2. Using LLM class attributes:
+```python Code
+from crewai import LLM
+
+llm = LLM(
     model="custom-model-name",
     api_key="your-api-key",
     base_url="https://api.your-provider.com/v1"
 )
 agent = Agent(llm=llm, ...)
 ```
-</Tab>
-</Tabs>

 ## LLM Configuration Options
@@ -193,15 +97,12 @@ When configuring an LLM for your agent, you have access to a wide range of param
 | **api_key** | `str` | Your API key for authentication. |

-These are examples of how to configure LLMs for your agent.
-<AccordionGroup>
-<Accordion title="OpenAI">
-```python Code
-from crewai import LLM
-
-llm = LLM(
+## OpenAI Example Configuration
+```python Code
+from crewai import LLM
+
+llm = LLM(
     model="gpt-4",
     temperature=0.8,
     max_tokens=150,
@@ -212,161 +113,39 @@ These are examples of how to configure LLMs for your agent.
     seed=42,
     base_url="https://api.openai.com/v1",
     api_key="your-api-key-here"
 )
 agent = Agent(llm=llm, ...)
 ```
-</Accordion>

-<Accordion title="Cerebras">
-```python Code
+## Cerebras Example Configuration
+```python Code
 from crewai import LLM

 llm = LLM(
     model="cerebras/llama-3.1-70b",
-    base_url="https://api.cerebras.ai/v1",
     api_key="your-api-key-here"
 )
 agent = Agent(llm=llm, ...)
 ```
-</Accordion>

-<Accordion title="Ollama (Local LLMs)">
+## Using Ollama (Local LLMs)
 CrewAI supports using Ollama for running open-source models locally:
 1. Install Ollama: [ollama.ai](https://ollama.ai/)
 2. Run a model: `ollama run llama2`
 3. Configure agent:

 ```python Code
 from crewai import LLM

 agent = Agent(
-    llm=LLM(
-        model="ollama/llama3.1",
-        base_url="http://localhost:11434"
-    ),
+    llm=LLM(model="ollama/llama3.1", base_url="http://localhost:11434"),
     ...
 )
 ```
-</Accordion>

-<Accordion title="Groq">
-```python Code
-from crewai import LLM
-
-llm = LLM(
-    model="groq/llama3-8b-8192",
-    api_key="your-api-key-here"
-)
-agent = Agent(llm=llm, ...)
-```
-</Accordion>

-<Accordion title="Anthropic">
-```python Code
-from crewai import LLM
-
-llm = LLM(
-    model="anthropic/claude-3-5-sonnet-20241022",
-    api_key="your-api-key-here"
-)
-agent = Agent(llm=llm, ...)
-```
-</Accordion>

-<Accordion title="Fireworks AI">
-```python Code
-from crewai import LLM
-
-llm = LLM(
-    model="fireworks_ai/accounts/fireworks/models/llama-v3-70b-instruct",
-    api_key="your-api-key-here"
-)
-agent = Agent(llm=llm, ...)
-```
-</Accordion>

-<Accordion title="Gemini">
-```python Code
-from crewai import LLM
-
-llm = LLM(
-    model="gemini/gemini-1.5-pro-002",
-    api_key="your-api-key-here"
-)
-agent = Agent(llm=llm, ...)
-```
-</Accordion>

-<Accordion title="Perplexity AI (pplx-api)">
-```python Code
-from crewai import LLM
-
-llm = LLM(
-    model="perplexity/mistral-7b-instruct",
-    base_url="https://api.perplexity.ai/v1",
-    api_key="your-api-key-here"
-)
-agent = Agent(llm=llm, ...)
-```
-</Accordion>

-<Accordion title="IBM watsonx.ai">
-You can use IBM Watson by seeting the following ENV vars:
-```python Code
-WATSONX_URL=<your-url>
-WATSONX_APIKEY=<your-apikey>
-WATSONX_PROJECT_ID=<your-project-id>
-```
-You can then define your agents llms by updating the `agents.yml`
-```yaml Code
-researcher:
-    role: Research Specialist
-    goal: Conduct comprehensive research and analysis to gather relevant information,
-      synthesize findings, and produce well-documented insights.
-    backstory: A dedicated research professional with years of experience in academic
-      investigation, literature review, and data analysis, known for thorough and
-      methodical approaches to complex research questions.
-    verbose: true
-    llm: watsonx/meta-llama/llama-3-1-70b-instruct
-```
-You can also set up agents more dynamically as a base level LLM instance, like bellow:
-```python Code
-from crewai import LLM
-
-llm = LLM(
-    model="watsonx/ibm/granite-13b-chat-v2",
-    base_url="https://api.watsonx.ai/v1",
-    api_key="your-api-key-here"
-)
-agent = Agent(llm=llm, ...)
-```
-</Accordion>

-<Accordion title="Hugging Face">
-```python Code
-from crewai import LLM
-
-llm = LLM(
-    model="huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct",
-    api_key="your-api-key-here",
-    base_url="your_api_endpoint"
-)
-agent = Agent(llm=llm, ...)
-```
-</Accordion>
-</AccordionGroup>

 ## Changing the Base API URL

View File

@@ -18,7 +18,6 @@ reason, and learn from past interactions.
 | **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
 | **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
 | **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
-| **User Memory** | Stores user-specific information and preferences, enhancing personalization and user experience. |

 ## How Memory Systems Empower Agents
@@ -93,47 +92,6 @@ my_crew = Crew(
 )
 ```

-## Integrating Mem0 for Enhanced User Memory
-
-[Mem0](https://mem0.ai/) is a self-improving memory layer for LLM applications, enabling personalized AI experiences.
-To include user-specific memory you can get your API key [here](https://app.mem0.ai/dashboard/api-keys) and refer the [docs](https://docs.mem0.ai/platform/quickstart#4-1-create-memories) for adding user preferences.
-
-```python Code
-import os
-from crewai import Crew, Process
-from mem0 import MemoryClient
-
-# Set environment variables for Mem0
-os.environ["MEM0_API_KEY"] = "m0-xx"
-
-# Step 1: Record preferences based on past conversation or user input
-client = MemoryClient()
-messages = [
-    {"role": "user", "content": "Hi there! I'm planning a vacation and could use some advice."},
-    {"role": "assistant", "content": "Hello! I'd be happy to help with your vacation planning. What kind of destination do you prefer?"},
-    {"role": "user", "content": "I am more of a beach person than a mountain person."},
-    {"role": "assistant", "content": "That's interesting. Do you like hotels or Airbnb?"},
-    {"role": "user", "content": "I like Airbnb more."},
-]
-client.add(messages, user_id="john")
-
-# Step 2: Create a Crew with User Memory
-crew = Crew(
-    agents=[...],
-    tasks=[...],
-    verbose=True,
-    process=Process.sequential,
-    memory=True,
-    memory_config={
-        "provider": "mem0",
-        "config": {"user_id": "john"},
-    },
-)
-```

 ## Additional Embedding Providers
@@ -296,31 +254,6 @@ my_crew = Crew(
 )
 ```

-### Using Watson embeddings
-
-```python Code
-from crewai import Crew, Agent, Task, Process
-
-# Note: Ensure you have installed and imported `ibm_watsonx_ai` for Watson embeddings to work.
-my_crew = Crew(
-    agents=[...],
-    tasks=[...],
-    process=Process.sequential,
-    memory=True,
-    verbose=True,
-    embedder={
-        "provider": "watson",
-        "config": {
-            "model": "<model_name>",
-            "api_url": "<api_url>",
-            "api_key": "<YOUR_API_KEY>",
-            "project_id": "<YOUR_PROJECT_ID>",
-        }
-    }
-)
-```

 ### Resetting Memory

 ```shell

View File

@@ -5,7 +5,6 @@ icon: screwdriver-wrench
 ---

 ## Introduction
-CrewAI tools empower agents with capabilities ranging from web searching and data analysis to collaboration and delegating tasks among coworkers.
 This documentation outlines how to create, integrate, and leverage these tools within the CrewAI framework, including a new focus on collaboration tools.
@@ -105,7 +104,7 @@ crew.kickoff()
 Here is a list of the available tools and their descriptions:

 | Tool | Description |
-| :------------------------------- | :--------------------------------------------------------------------------------------------- |
+| :-------------------------- | :-------------------------------------------------------------------------------------------- |
 | **BrowserbaseLoadTool** | A tool for interacting with and extracting data from web browsers. |
 | **CodeDocsSearchTool** | A RAG tool optimized for searching through code documentation and related technical documents. |
 | **CodeInterpreterTool** | A tool for interpreting python code. |
@@ -120,7 +119,7 @@ Here is a list of the available tools and their descriptions:
 | **FirecrawlSearchTool** | A tool to search webpages using Firecrawl and return the results. |
 | **FirecrawlCrawlWebsiteTool** | A tool for crawling webpages using Firecrawl. |
 | **FirecrawlScrapeWebsiteTool** | A tool for scraping webpages URL using Firecrawl and returning its contents. |
 | **GithubSearchTool** | A RAG tool for searching within GitHub repositories, useful for code and documentation search. |
 | **SerperDevTool** | A specialized tool for development purposes, with specific functionalities under development. |
 | **TXTSearchTool** | A RAG tool focused on searching within text (.txt) files, suitable for unstructured data. |
 | **JSONSearchTool** | A RAG tool designed for searching within JSON files, catering to structured data handling. |
@@ -134,23 +133,27 @@ Here is a list of the available tools and their descriptions:
 | **ScrapeWebsiteTool** | Facilitates scraping entire websites, ideal for comprehensive data collection. |
 | **WebsiteSearchTool** | A RAG tool for searching website content, optimized for web data extraction. |
 | **XMLSearchTool** | A RAG tool designed for searching within XML files, suitable for structured data formats. |
 | **YoutubeChannelSearchTool** | A RAG tool for searching within YouTube channels, useful for video content analysis. |
 | **YoutubeVideoSearchTool** | A RAG tool aimed at searching within YouTube videos, ideal for video data extraction. |

 ## Creating your own Tools

 <Tip>
-    Developers can craft `custom tools` tailored for their agents needs or
-    utilize pre-built options.
+    Developers can craft `custom tools` tailored for their agents needs or utilize pre-built options.
 </Tip>

-There are two main ways for one to create a CrewAI tool:
+To create your own CrewAI tools you will need to install our extra tools package:
+
+```bash
+pip install 'crewai[tools]'
+```
+
+Once you do that there are two main ways for one to create a CrewAI tool:

 ### Subclassing `BaseTool`

 ```python Code
-from crewai.tools import BaseTool
+from crewai_tools import BaseTool

 class MyCustomTool(BaseTool):
     name: str = "Name of my tool"
@@ -164,7 +167,7 @@ class MyCustomTool(BaseTool):
 ### Utilizing the `tool` Decorator

 ```python Code
-from crewai.tools import tool
+from crewai_tools import tool
 @tool("Name of my tool")
 def my_tool(question: str) -> str:
     """Clear description for what this tool is useful for, your agent will need this information to use it."""
@@ -175,13 +178,11 @@ def my_tool(question: str) -> str:
 ### Custom Caching Mechanism

 <Tip>
-    Tools can optionally implement a `cache_function` to fine-tune caching
-    behavior. This function determines when to cache results based on specific
-    conditions, offering granular control over caching logic.
+    Tools can optionally implement a `cache_function` to fine-tune caching behavior. This function determines when to cache results based on specific conditions, offering granular control over caching logic.
 </Tip>

 ```python Code
-from crewai.tools import tool
+from crewai_tools import tool
 @tool
 def multiplication_tool(first_number: int, second_number: int) -> str:

View File

@@ -10,13 +10,21 @@ This guide provides detailed instructions on creating custom tools for the CrewA
 incorporating the latest functionalities such as tool delegation, error handling, and dynamic tool calling. It also highlights the importance of collaboration tools,
 enabling agents to perform a wide range of actions.

+### Prerequisites
+
+Before creating your own tools, ensure you have the crewAI extra tools package installed:
+
+```bash
+pip install 'crewai[tools]'
+```
+
 ### Subclassing `BaseTool`

 To create a personalized tool, inherit from `BaseTool` and define the necessary attributes, including the `args_schema` for input validation, and the `_run` method.

 ```python Code
 from typing import Type
-from crewai.tools import BaseTool
+from crewai_tools import BaseTool
 from pydantic import BaseModel, Field

 class MyToolInput(BaseModel):
@@ -39,7 +47,7 @@ Alternatively, you can use the tool decorator `@tool`. This approach allows you
 offering a concise and efficient way to create specialized tools tailored to your needs.

 ```python Code
-from crewai.tools import tool
+from crewai_tools import tool

 @tool("Tool Name")
 def my_simple_tool(question: str) -> str:

View File

@@ -330,4 +330,4 @@ This will clear the crew's memory, allowing for a fresh start.
 ## Deploying Your Project

-The easiest way to deploy your crew is through [CrewAI Enterprise](http://app.crewai.com/), where you can deploy your crew in a few clicks.
+The easiest way to deploy your crew is through [CrewAI Enterprise](https://www.crewai.com/crewaiplus), where you can deploy your crew in a few clicks.

View File

@@ -34,7 +34,6 @@ from crewai_tools import GithubSearchTool
 # Initialize the tool for semantic searches within a specific GitHub repository
 tool = GithubSearchTool(
     github_repo='https://github.com/example/repo',
-    gh_token='your_github_personal_access_token',
     content_types=['code', 'issue'] # Options: code, repo, pr, issue
 )
@@ -42,7 +41,6 @@ tool = GithubSearchTool(
 # Initialize the tool for semantic searches within a specific GitHub repository, so the agent can search any repository if it learns about during its execution
 tool = GithubSearchTool(
-    gh_token='your_github_personal_access_token',
     content_types=['code', 'issue'] # Options: code, repo, pr, issue
 )
 ```
@@ -50,7 +48,6 @@ tool = GithubSearchTool(
 ## Arguments

 - `github_repo` : The URL of the GitHub repository where the search will be conducted. This is a mandatory field and specifies the target repository for your search.
-- `gh_token` : Your GitHub Personal Access Token (PAT) required for authentication. You can create one in your GitHub account settings under Developer Settings > Personal Access Tokens.
 - `content_types` : Specifies the types of content to include in your search. You must provide a list of content types from the following options: `code` for searching within the code,
 `repo` for searching within the repository's general information, `pr` for searching within pull requests, and `issue` for searching within issues.
 This field is mandatory and allows tailoring the search to specific content types within the GitHub repository.
@@ -81,3 +78,4 @@ tool = GithubSearchTool(
     ),
   )
 )
+```

poetry.lock (generated)
View File

@@ -1597,12 +1597,12 @@ files = [
 google-auth = ">=2.14.1,<3.0.dev0"
 googleapis-common-protos = ">=1.56.2,<2.0.dev0"
 grpcio = [
-    {version = ">=1.33.2,<2.0dev", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
     {version = ">=1.49.1,<2.0dev", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
+    {version = ">=1.33.2,<2.0dev", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
 ]
 grpcio-status = [
-    {version = ">=1.33.2,<2.0.dev0", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
     {version = ">=1.49.1,<2.0.dev0", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
+    {version = ">=1.33.2,<2.0.dev0", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
 ]
 proto-plus = ">=1.22.3,<2.0.0dev"
 protobuf = ">=3.19.5,<3.20.0 || >3.20.0,<3.20.1 || >3.20.1,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<6.0.0.dev0"
@@ -4286,8 +4286,8 @@ files = [
 [package.dependencies]
 numpy = [
-    {version = ">=1.22.4", markers = "python_version < \"3.11\""},
     {version = ">=1.23.2", markers = "python_version == \"3.11\""},
+    {version = ">=1.22.4", markers = "python_version < \"3.11\""},
     {version = ">=1.26.0", markers = "python_version >= \"3.12\""},
 ]
 python-dateutil = ">=2.8.2"

View File

@@ -1,6 +1,6 @@
 [project]
 name = "crewai"
-version = "0.79.4"
+version = "0.76.9"
 description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
 readme = "README.md"
 requires-python = ">=3.10,<=3.13"
@@ -16,7 +16,7 @@ dependencies = [
"opentelemetry-exporter-otlp-proto-http>=1.22.0", "opentelemetry-exporter-otlp-proto-http>=1.22.0",
"instructor>=1.3.3", "instructor>=1.3.3",
"regex>=2024.9.11", "regex>=2024.9.11",
"crewai-tools>=0.14.0", "crewai-tools>=0.13.4",
"click>=8.1.7", "click>=8.1.7",
"python-dotenv>=1.0.0", "python-dotenv>=1.0.0",
"appdirs>=1.4.4", "appdirs>=1.4.4",
@@ -27,8 +27,8 @@ dependencies = [
"pyvis>=0.3.2", "pyvis>=0.3.2",
"uv>=0.4.25", "uv>=0.4.25",
"tomli-w>=1.1.0", "tomli-w>=1.1.0",
"chromadb>=0.4.24",
"tomli>=2.0.2", "tomli>=2.0.2",
"chromadb>=0.5.18",
] ]
[project.urls] [project.urls]
@@ -37,9 +37,8 @@ Documentation = "https://docs.crewai.com"
 Repository = "https://github.com/crewAIInc/crewAI"

 [project.optional-dependencies]
-tools = ["crewai-tools>=0.14.0"]
+tools = ["crewai-tools>=0.13.4"]
 agentops = ["agentops>=0.3.0"]
-mem0 = ["mem0ai>=0.1.29"]

 [tool.uv]
 dev-dependencies = [
@@ -53,7 +52,7 @@ dev-dependencies = [
"mkdocs-material-extensions>=1.3.1", "mkdocs-material-extensions>=1.3.1",
"pillow>=10.2.0", "pillow>=10.2.0",
"cairosvg>=2.7.1", "cairosvg>=2.7.1",
"crewai-tools>=0.14.0", "crewai-tools>=0.13.4",
"pytest>=8.0.0", "pytest>=8.0.0",
"pytest-vcr>=1.0.2", "pytest-vcr>=1.0.2",
"python-dotenv>=1.0.0", "python-dotenv>=1.0.0",

View File

@@ -14,5 +14,5 @@ warnings.filterwarnings(
     category=UserWarning,
     module="pydantic.main",
 )
-__version__ = "0.79.4"
+__version__ = "0.76.9"
 __all__ = ["Agent", "Crew", "Process", "Task", "Pipeline", "Router", "LLM", "Flow"]

View File

@@ -8,11 +8,9 @@ from pydantic import Field, InstanceOf, PrivateAttr, model_validator
 from crewai.agents import CacheHandler
 from crewai.agents.agent_builder.base_agent import BaseAgent
 from crewai.agents.crew_agent_executor import CrewAgentExecutor
-from crewai.cli.constants import ENV_VARS
 from crewai.llm import LLM
 from crewai.memory.contextual.contextual_memory import ContextualMemory
-from crewai.tools.agent_tools.agent_tools import AgentTools
-from crewai.tools import BaseTool
+from crewai.tools.agent_tools import AgentTools
 from crewai.utilities import Converter, Prompts
 from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE
 from crewai.utilities.token_counter_callback import TokenCalcHandler
@@ -123,11 +121,6 @@ class Agent(BaseAgent):
     @model_validator(mode="after")
     def post_init_setup(self):
         self.agent_ops_agent_name = self.role
-        unnacepted_attributes = [
-            "AWS_ACCESS_KEY_ID",
-            "AWS_SECRET_ACCESS_KEY",
-            "AWS_REGION_NAME",
-        ]

         # Handle different cases for self.llm
if isinstance(self.llm, str): if isinstance(self.llm, str):
@@ -137,12 +130,8 @@ class Agent(BaseAgent):
             # If it's already an LLM instance, keep it as is
             pass
         elif self.llm is None:
-            # Determine the model name from environment variables or use default
-            model_name = (
-                os.environ.get("OPENAI_MODEL_NAME")
-                or os.environ.get("MODEL")
-                or "gpt-4o-mini"
-            )
+            # If it's None, use environment variables or default
+            model_name = os.environ.get("OPENAI_MODEL_NAME", "gpt-4o-mini")
             llm_params = {"model": model_name}

             api_base = os.environ.get("OPENAI_API_BASE") or os.environ.get(
@@ -151,44 +140,9 @@ class Agent(BaseAgent):
             if api_base:
                 llm_params["base_url"] = api_base

-            set_provider = model_name.split("/")[0] if "/" in model_name else "openai"
-
-            # Iterate over all environment variables to find matching API keys or use defaults
-            for provider, env_vars in ENV_VARS.items():
-                if provider == set_provider:
-                    for env_var in env_vars:
-                        if env_var["key_name"] in unnacepted_attributes:
-                            continue
-                        # Check if the environment variable is set
-                        if "key_name" in env_var:
-                            env_value = os.environ.get(env_var["key_name"])
-                            if env_value:
-                                # Map key names containing "API_KEY" to "api_key"
-                                key_name = (
-                                    "api_key"
-                                    if "API_KEY" in env_var["key_name"]
-                                    else env_var["key_name"]
-                                )
-                                # Map key names containing "API_BASE" to "api_base"
-                                key_name = (
-                                    "api_base"
-                                    if "API_BASE" in env_var["key_name"]
-                                    else key_name
-                                )
-                                # Map key names containing "API_VERSION" to "api_version"
-                                key_name = (
-                                    "api_version"
-                                    if "API_VERSION" in env_var["key_name"]
-                                    else key_name
-                                )
-                                llm_params[key_name] = env_value
-                        # Check for default values if the environment variable is not set
-                        elif env_var.get("default", False):
-                            for key, value in env_var.items():
-                                if key not in ["prompt", "key_name", "default"]:
-                                    # Only add default if the key is already set in os.environ
-                                    if key in os.environ:
-                                        llm_params[key] = value
+            api_key = os.environ.get("OPENAI_API_KEY")
+            if api_key:
+                llm_params["api_key"] = api_key

             self.llm = LLM(**llm_params)
         else:
@@ -238,7 +192,7 @@ class Agent(BaseAgent):
         self,
         task: Any,
         context: Optional[str] = None,
-        tools: Optional[List[BaseTool]] = None,
+        tools: Optional[List[Any]] = None,
     ) -> str:
         """Execute a task with the agent.
@@ -262,11 +216,9 @@ class Agent(BaseAgent):
         if self.crew and self.crew.memory:
             contextual_memory = ContextualMemory(
-                self.crew.memory_config,
                 self.crew._short_term_memory,
                 self.crew._long_term_memory,
                 self.crew._entity_memory,
-                self.crew._user_memory,
             )
             memory = contextual_memory.build_context_for_task(task, context)
             if memory.strip() != "":
@@ -307,9 +259,7 @@ class Agent(BaseAgent):
         return result

-    def create_agent_executor(
-        self, tools: Optional[List[BaseTool]] = None, task=None
-    ) -> None:
+    def create_agent_executor(self, tools=None, task=None) -> None:
         """Create an agent executor for the agent.

         Returns:
@@ -382,7 +332,7 @@ class Agent(BaseAgent):
         tools_list = []

         try:
             # tentatively try to import from crewai_tools import BaseTool as CrewAITool
-            from crewai.tools import BaseTool as CrewAITool
+            from crewai_tools import BaseTool as CrewAITool

             for tool in tools:
                 if isinstance(tool, CrewAITool):
@@ -441,7 +391,7 @@ class Agent(BaseAgent):
         return description

-    def _render_text_description_and_args(self, tools: List[BaseTool]) -> str:
+    def _render_text_description_and_args(self, tools: List[Any]) -> str:
         """Render the tool name, description, and args in plain text.

         Output will be in the format of:
@@ -454,7 +404,17 @@ class Agent(BaseAgent):
""" """
tool_strings = [] tool_strings = []
for tool in tools: for tool in tools:
tool_strings.append(tool.description) args_schema = {
name: {
"description": field.description,
"type": field.annotation.__name__,
}
for name, field in tool.args_schema.model_fields.items()
}
description = (
f"Tool Name: {tool.name}\nTool Description: {tool.description}"
)
tool_strings.append(f"{description}\nTool Arguments: {args_schema}")
return "\n".join(tool_strings) return "\n".join(tool_strings)

View File

@@ -18,7 +18,6 @@ from pydantic_core import PydanticCustomError
 from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
 from crewai.agents.cache.cache_handler import CacheHandler
 from crewai.agents.tools_handler import ToolsHandler
-from crewai.tools import BaseTool
 from crewai.utilities import I18N, Logger, RPMController
 from crewai.utilities.config import process_config
@@ -50,11 +49,11 @@ class BaseAgent(ABC, BaseModel):
     Methods:
-        execute_task(task: Any, context: Optional[str] = None, tools: Optional[List[BaseTool]] = None) -> str:
+        execute_task(task: Any, context: Optional[str] = None, tools: Optional[List[Any]] = None) -> str:
             Abstract method to execute a task.
         create_agent_executor(tools=None) -> None:
             Abstract method to create an agent executor.
-        _parse_tools(tools: List[BaseTool]) -> List[Any]:
+        _parse_tools(tools: List[Any]) -> List[Any]:
             Abstract method to parse tools.
         get_delegation_tools(agents: List["BaseAgent"]):
             Abstract method to set the agents task tools for handling delegation and question asking to other agents in crew.
@@ -106,7 +105,7 @@ class BaseAgent(ABC, BaseModel):
         default=False,
         description="Enable agent to delegate and ask questions among each other.",
     )
-    tools: Optional[List[BaseTool]] = Field(
+    tools: Optional[List[Any]] = Field(
         default_factory=list, description="Tools at agents' disposal"
     )
     max_iter: Optional[int] = Field(
@@ -189,7 +188,7 @@ class BaseAgent(ABC, BaseModel):
         self,
         task: Any,
         context: Optional[str] = None,
-        tools: Optional[List[BaseTool]] = None,
+        tools: Optional[List[Any]] = None,
     ) -> str:
         pass
@@ -198,11 +197,11 @@ class BaseAgent(ABC, BaseModel):
         pass

     @abstractmethod
-    def _parse_tools(self, tools: List[BaseTool]) -> List[BaseTool]:
+    def _parse_tools(self, tools: List[Any]) -> List[Any]:
         pass

     @abstractmethod
-    def get_delegation_tools(self, agents: List["BaseAgent"]) -> List[BaseTool]:
+    def get_delegation_tools(self, agents: List["BaseAgent"]) -> List[Any]:
         """Set the task tools that init BaseAgenTools class."""
         pass

View File

@@ -1,19 +1,22 @@
-from typing import Optional, Union
-
-from pydantic import Field
-
-from crewai.tools.base_tool import BaseTool
+from abc import ABC, abstractmethod
+from typing import List, Optional, Union
+
+from pydantic import BaseModel, Field
+
 from crewai.agents.agent_builder.base_agent import BaseAgent
 from crewai.task import Task
 from crewai.utilities import I18N


-class BaseAgentTool(BaseTool):
-    """Base class for agent-related tools"""
+class BaseAgentTools(BaseModel, ABC):
+    """Default tools around agent delegation"""

-    agents: list[BaseAgent] = Field(description="List of available agents")
-    i18n: I18N = Field(
-        default_factory=I18N, description="Internationalization settings"
-    )
+    agents: List[BaseAgent] = Field(description="List of agents in this crew.")
+    i18n: I18N = Field(default=I18N(), description="Internationalization settings.")
+
+    @abstractmethod
+    def tools(self):
+        pass

     def _get_coworker(self, coworker: Optional[str], **kwargs) -> Optional[str]:
         coworker = coworker or kwargs.get("co_worker") or kwargs.get("coworker")
@@ -21,11 +24,27 @@ class BaseAgentTool(BaseTool):
             is_list = coworker.startswith("[") and coworker.endswith("]")
             if is_list:
                 coworker = coworker[1:-1].split(",")[0]
         return coworker

+    def delegate_work(
+        self, task: str, context: str, coworker: Optional[str] = None, **kwargs
+    ):
+        """Useful to delegate a specific task to a coworker passing all necessary context and names."""
+        coworker = self._get_coworker(coworker, **kwargs)
+        return self._execute(coworker, task, context)
+
+    def ask_question(
+        self, question: str, context: str, coworker: Optional[str] = None, **kwargs
+    ):
+        """Useful to ask a question, opinion or take from a coworker passing all necessary context and names."""
+        coworker = self._get_coworker(coworker, **kwargs)
+        return self._execute(coworker, question, context)
+
     def _execute(
         self, agent_name: Union[str, None], task: str, context: Union[str, None]
-    ) -> str:
+    ):
+        """Execute the command."""
         try:
             if agent_name is None:
                 agent_name = ""
@@ -38,6 +57,7 @@ class BaseAgentTool(BaseTool):
# when it should look like this: # when it should look like this:
# {"task": "....", "coworker": "...."} # {"task": "....", "coworker": "...."}
agent_name = agent_name.casefold().replace('"', "").replace("\n", "") agent_name = agent_name.casefold().replace('"', "").replace("\n", "")
agent = [ # type: ignore # Incompatible types in assignment (expression has type "list[BaseAgent]", variable has type "str | None") agent = [ # type: ignore # Incompatible types in assignment (expression has type "list[BaseAgent]", variable has type "str | None")
available_agent available_agent
for available_agent in self.agents for available_agent in self.agents
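The `_get_coworker` normalization above survives both versions because LLMs routinely hand back coworker names wrapped in quotes or list syntax. A standalone sketch of the same logic, with a hypothetical function name and no crewAI imports:

```python
from typing import Optional


def get_coworker(coworker: Optional[str], **kwargs) -> Optional[str]:
    # Accept either spelling the model might produce.
    coworker = coworker or kwargs.get("co_worker") or kwargs.get("coworker")
    if coworker:
        # Unwrap '[Researcher, Writer]' style output and keep the first name.
        if coworker.startswith("[") and coworker.endswith("]"):
            coworker = coworker[1:-1].split(",")[0]
    return coworker


assert get_coworker("[Researcher, Writer]") == "Researcher"
assert get_coworker(None, co_worker="Writer") == "Writer"
```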

View File

@@ -4,7 +4,6 @@ from crewai.types.usage_metrics import UsageMetrics
class TokenProcess: class TokenProcess:
total_tokens: int = 0 total_tokens: int = 0
prompt_tokens: int = 0 prompt_tokens: int = 0
cached_prompt_tokens: int = 0
completion_tokens: int = 0 completion_tokens: int = 0
successful_requests: int = 0 successful_requests: int = 0
@@ -16,9 +15,6 @@ class TokenProcess:
self.completion_tokens = self.completion_tokens + tokens self.completion_tokens = self.completion_tokens + tokens
self.total_tokens = self.total_tokens + tokens self.total_tokens = self.total_tokens + tokens
def sum_cached_prompt_tokens(self, tokens: int):
self.cached_prompt_tokens = self.cached_prompt_tokens + tokens
def sum_successful_requests(self, requests: int): def sum_successful_requests(self, requests: int):
self.successful_requests = self.successful_requests + requests self.successful_requests = self.successful_requests + requests
@@ -26,7 +22,6 @@ class TokenProcess:
return UsageMetrics( return UsageMetrics(
total_tokens=self.total_tokens, total_tokens=self.total_tokens,
prompt_tokens=self.prompt_tokens, prompt_tokens=self.prompt_tokens,
cached_prompt_tokens=self.cached_prompt_tokens,
completion_tokens=self.completion_tokens, completion_tokens=self.completion_tokens,
successful_requests=self.successful_requests, successful_requests=self.successful_requests,
) )
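The minus side adds a dedicated counter for cached prompt tokens. A minimal standalone mirror of that counter (a plain dataclass standing in for crewAI's `UsageMetrics`):

```python
from dataclasses import dataclass


@dataclass
class TokenCounter:
    total_tokens: int = 0
    prompt_tokens: int = 0
    cached_prompt_tokens: int = 0
    completion_tokens: int = 0
    successful_requests: int = 0

    def sum_prompt_tokens(self, tokens: int) -> None:
        self.prompt_tokens += tokens
        self.total_tokens += tokens

    def sum_cached_prompt_tokens(self, tokens: int) -> None:
        # Cached prompt tokens are tallied separately and, as in the
        # hunk above, do not inflate total_tokens.
        self.cached_prompt_tokens += tokens


counter = TokenCounter()
counter.sum_prompt_tokens(120)
counter.sum_cached_prompt_tokens(80)
print(counter.total_tokens, counter.cached_prompt_tokens)  # 120 80
```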

View File

@@ -117,15 +117,6 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
callbacks=self.callbacks, callbacks=self.callbacks,
) )
if answer is None or answer == "":
self._printer.print(
content="Received None or empty response from LLM call.",
color="red",
)
raise ValueError(
"Invalid response from LLM call - None or empty."
)
if not self.use_stop_words: if not self.use_stop_words:
try: try:
self._format_answer(answer) self._format_answer(answer)
@@ -151,7 +142,6 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
if self._should_force_answer(): if self._should_force_answer():
if self.have_forced_answer: if self.have_forced_answer:
return AgentFinish( return AgentFinish(
thought="",
output=self._i18n.errors( output=self._i18n.errors(
"force_final_answer_error" "force_final_answer_error"
).format(formatted_answer.text), ).format(formatted_answer.text),
@@ -333,9 +323,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
if self.crew is not None and hasattr(self.crew, "_train_iteration"): if self.crew is not None and hasattr(self.crew, "_train_iteration"):
train_iteration = self.crew._train_iteration train_iteration = self.crew._train_iteration
if agent_id in training_data and isinstance(train_iteration, int): if agent_id in training_data and isinstance(train_iteration, int):
-                    training_data[agent_id][train_iteration]["improved_output"] = (
-                        result.output
-                    )
+                    training_data[agent_id][train_iteration][
+                        "improved_output"
+                    ] = result.output
training_handler.save(training_data) training_handler.save(training_data)
else: else:
self._logger.log( self._logger.log(
@@ -386,5 +376,4 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
return CrewAgentParser(agent=self.agent).parse(answer) return CrewAgentParser(agent=self.agent).parse(answer)
def _format_msg(self, prompt: str, role: str = "user") -> Dict[str, str]: def _format_msg(self, prompt: str, role: str = "user") -> Dict[str, str]:
prompt = prompt.rstrip()
return {"role": role, "content": prompt} return {"role": role, "content": prompt}

View File

@@ -1,6 +1,6 @@
from typing import Any, Optional, Union from typing import Any, Optional, Union
from ..tools.cache_tools.cache_tools import CacheTools from ..tools.cache_tools import CacheTools
from ..tools.tool_calling import InstructorToolCalling, ToolCalling from ..tools.tool_calling import InstructorToolCalling, ToolCalling
from .cache.cache_handler import CacheHandler from .cache.cache_handler import CacheHandler

View File

@@ -54,7 +54,7 @@ def create_embedded_crew(crew_name: str, parent_folder: Path) -> None:
templates_dir = Path(__file__).parent / "templates" / "crew" templates_dir = Path(__file__).parent / "templates" / "crew"
config_template_files = ["agents.yaml", "tasks.yaml"] config_template_files = ["agents.yaml", "tasks.yaml"]
crew_template_file = f"{folder_name}.py" # Updated file name crew_template_file = f"{folder_name}_crew.py" # Updated file name
for file_name in config_template_files: for file_name in config_template_files:
src_file = templates_dir / "config" / file_name src_file = templates_dir / "config" / file_name

View File

@@ -34,9 +34,7 @@ class AuthenticationCommand:
"scope": "openid", "scope": "openid",
"audience": AUTH0_AUDIENCE, "audience": AUTH0_AUDIENCE,
} }
-        response = requests.post(
-            url=self.DEVICE_CODE_URL, data=device_code_payload, timeout=20
-        )
+        response = requests.post(url=self.DEVICE_CODE_URL, data=device_code_payload)
response.raise_for_status() response.raise_for_status()
return response.json() return response.json()
@@ -56,7 +54,7 @@ class AuthenticationCommand:
attempts = 0 attempts = 0
while True and attempts < 5: while True and attempts < 5:
response = requests.post(self.TOKEN_URL, data=token_payload, timeout=30) response = requests.post(self.TOKEN_URL, data=token_payload)
token_data = response.json() token_data = response.json()
if response.status_code == 200: if response.status_code == 200:
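Both sides poll the token endpoint at most five times; the minus side additionally passes explicit timeouts so a hung connection cannot stall the CLI. A sketch of that bounded polling loop, with a hypothetical endpoint URL:

```python
import time

import requests

TOKEN_URL = "https://example.auth0.com/oauth/token"  # hypothetical


def poll_for_token(token_payload: dict, max_attempts: int = 5):
    attempts = 0
    while attempts < max_attempts:
        # timeout= mirrors the minus side; without it a dead socket
        # would block forever.
        response = requests.post(TOKEN_URL, data=token_payload, timeout=30)
        if response.status_code == 200:
            return response.json()
        time.sleep(5)  # the device-code response normally dictates this interval
        attempts += 1
    return None
```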

View File

@@ -1,38 +0,0 @@
import json
from pathlib import Path
from pydantic import BaseModel, Field
from typing import Optional
DEFAULT_CONFIG_PATH = Path.home() / ".config" / "crewai" / "settings.json"
class Settings(BaseModel):
tool_repository_username: Optional[str] = Field(None, description="Username for interacting with the Tool Repository")
tool_repository_password: Optional[str] = Field(None, description="Password for interacting with the Tool Repository")
config_path: Path = Field(default=DEFAULT_CONFIG_PATH, exclude=True)
def __init__(self, config_path: Path = DEFAULT_CONFIG_PATH, **data):
"""Load Settings from config path"""
config_path.parent.mkdir(parents=True, exist_ok=True)
file_data = {}
if config_path.is_file():
try:
with config_path.open("r") as f:
file_data = json.load(f)
except json.JSONDecodeError:
file_data = {}
merged_data = {**file_data, **data}
super().__init__(config_path=config_path, **merged_data)
def dump(self) -> None:
"""Save current settings to settings.json"""
if self.config_path.is_file():
with self.config_path.open("r") as f:
existing_data = json.load(f)
else:
existing_data = {}
updated_data = {**existing_data, **self.model_dump(exclude_unset=True)}
with self.config_path.open("w") as f:
json.dump(updated_data, f, indent=4)
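A short usage sketch for the `Settings` class above, assuming it is importable from `crewai.cli.config`; note that `config_path` itself is excluded from what `dump()` persists:

```python
from pathlib import Path

from crewai.cli.config import Settings

settings = Settings(config_path=Path("/tmp/crewai-settings.json"))
settings.tool_repository_username = "alice"
settings.dump()  # merges explicitly-set fields over whatever is on disk

reloaded = Settings(config_path=Path("/tmp/crewai-settings.json"))
print(reloaded.tool_repository_username)  # alice
```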

View File

@@ -1,168 +1,19 @@
-ENV_VARS = {
-    "openai": [
-        {
-            "prompt": "Enter your OPENAI API key (press Enter to skip)",
-            "key_name": "OPENAI_API_KEY",
-        }
-    ],
-    "anthropic": [
-        {
-            "prompt": "Enter your ANTHROPIC API key (press Enter to skip)",
-            "key_name": "ANTHROPIC_API_KEY",
-        }
-    ],
-    "gemini": [
-        {
-            "prompt": "Enter your GEMINI API key (press Enter to skip)",
-            "key_name": "GEMINI_API_KEY",
-        }
-    ],
-    "groq": [
-        {
-            "prompt": "Enter your GROQ API key (press Enter to skip)",
-            "key_name": "GROQ_API_KEY",
-        }
-    ],
-    "watson": [
-        {
-            "prompt": "Enter your WATSONX URL (press Enter to skip)",
-            "key_name": "WATSONX_URL",
-        },
-        {
-            "prompt": "Enter your WATSONX API Key (press Enter to skip)",
-            "key_name": "WATSONX_APIKEY",
-        },
-        {
-            "prompt": "Enter your WATSONX Project Id (press Enter to skip)",
-            "key_name": "WATSONX_PROJECT_ID",
-        },
-    ],
-    "ollama": [
-        {
-            "default": True,
-            "API_BASE": "http://localhost:11434",
-        }
-    ],
-    "bedrock": [
-        {
-            "prompt": "Enter your AWS Access Key ID (press Enter to skip)",
-            "key_name": "AWS_ACCESS_KEY_ID",
-        },
-        {
-            "prompt": "Enter your AWS Secret Access Key (press Enter to skip)",
-            "key_name": "AWS_SECRET_ACCESS_KEY",
-        },
-        {
-            "prompt": "Enter your AWS Region Name (press Enter to skip)",
-            "key_name": "AWS_REGION_NAME",
-        },
-    ],
-    "azure": [
-        {
-            "prompt": "Enter your Azure deployment name (must start with 'azure/')",
-            "key_name": "model",
-        },
-        {
-            "prompt": "Enter your AZURE API key (press Enter to skip)",
-            "key_name": "AZURE_API_KEY",
-        },
-        {
-            "prompt": "Enter your AZURE API base URL (press Enter to skip)",
-            "key_name": "AZURE_API_BASE",
-        },
-        {
-            "prompt": "Enter your AZURE API version (press Enter to skip)",
-            "key_name": "AZURE_API_VERSION",
-        },
-    ],
-    "cerebras": [
-        {
-            "prompt": "Enter your Cerebras model name (must start with 'cerebras/')",
-            "key_name": "model",
-        },
-        {
-            "prompt": "Enter your Cerebras API version (press Enter to skip)",
-            "key_name": "CEREBRAS_API_KEY",
-        },
-    ],
-}
+ENV_VARS = {
+    'openai': ['OPENAI_API_KEY'],
+    'anthropic': ['ANTHROPIC_API_KEY'],
+    'gemini': ['GEMINI_API_KEY'],
+    'groq': ['GROQ_API_KEY'],
+    'ollama': ['FAKE_KEY'],
+}

-PROVIDERS = [
-    "openai",
-    "anthropic",
-    "gemini",
-    "groq",
-    "ollama",
-    "watson",
-    "bedrock",
-    "azure",
-    "cerebras",
-]
+PROVIDERS = ['openai', 'anthropic', 'gemini', 'groq', 'ollama']

-MODELS = {
-    "openai": ["gpt-4", "gpt-4o", "gpt-4o-mini", "o1-mini", "o1-preview"],
-    "anthropic": [
-        "claude-3-5-sonnet-20240620",
-        "claude-3-sonnet-20240229",
-        "claude-3-opus-20240229",
-        "claude-3-haiku-20240307",
-    ],
-    "gemini": [
-        "gemini/gemini-1.5-flash",
-        "gemini/gemini-1.5-pro",
-        "gemini/gemini-gemma-2-9b-it",
-        "gemini/gemini-gemma-2-27b-it",
-    ],
-    "groq": [
-        "groq/llama-3.1-8b-instant",
-        "groq/llama-3.1-70b-versatile",
-        "groq/llama-3.1-405b-reasoning",
-        "groq/gemma2-9b-it",
-        "groq/gemma-7b-it",
-    ],
-    "ollama": ["ollama/llama3.1", "ollama/mixtral"],
-    "watson": [
-        "watsonx/google/flan-t5-xxl",
-        "watsonx/google/flan-ul2",
-        "watsonx/bigscience/mt0-xxl",
-        "watsonx/eleutherai/gpt-neox-20b",
-        "watsonx/ibm/mpt-7b-instruct2",
-        "watsonx/bigcode/starcoder",
-        "watsonx/meta-llama/llama-2-70b-chat",
-        "watsonx/meta-llama/llama-2-13b-chat",
-        "watsonx/ibm/granite-13b-instruct-v1",
-        "watsonx/ibm/granite-13b-chat-v1",
-        "watsonx/google/flan-t5-xl",
-        "watsonx/ibm/granite-13b-chat-v2",
-        "watsonx/ibm/granite-13b-instruct-v2",
-        "watsonx/elyza/elyza-japanese-llama-2-7b-instruct",
-        "watsonx/ibm-mistralai/mixtral-8x7b-instruct-v01-q",
-    ],
-    "bedrock": [
-        "bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",
-        "bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
-        "bedrock/anthropic.claude-3-haiku-20240307-v1:0",
-        "bedrock/anthropic.claude-3-opus-20240229-v1:0",
-        "bedrock/anthropic.claude-v2:1",
-        "bedrock/anthropic.claude-v2",
-        "bedrock/anthropic.claude-instant-v1",
-        "bedrock/meta.llama3-1-405b-instruct-v1:0",
-        "bedrock/meta.llama3-1-70b-instruct-v1:0",
-        "bedrock/meta.llama3-1-8b-instruct-v1:0",
-        "bedrock/meta.llama3-70b-instruct-v1:0",
-        "bedrock/meta.llama3-8b-instruct-v1:0",
-        "bedrock/amazon.titan-text-lite-v1",
-        "bedrock/amazon.titan-text-express-v1",
-        "bedrock/cohere.command-text-v14",
-        "bedrock/ai21.j2-mid-v1",
-        "bedrock/ai21.j2-ultra-v1",
-        "bedrock/ai21.jamba-instruct-v1:0",
-        "bedrock/meta.llama2-13b-chat-v1",
-        "bedrock/meta.llama2-70b-chat-v1",
-        "bedrock/mistral.mistral-7b-instruct-v0:2",
-        "bedrock/mistral.mixtral-8x7b-instruct-v0:1",
-    ],
-}
+MODELS = {
+    'openai': ['gpt-4', 'gpt-4o', 'gpt-4o-mini', 'o1-mini', 'o1-preview'],
+    'anthropic': ['claude-3-5-sonnet-20240620', 'claude-3-sonnet-20240229', 'claude-3-opus-20240229', 'claude-3-haiku-20240307'],
+    'gemini': ['gemini-1.5-flash', 'gemini-1.5-pro', 'gemini-gemma-2-9b-it', 'gemini-gemma-2-27b-it'],
+    'groq': ['llama-3.1-8b-instant', 'llama-3.1-70b-versatile', 'llama-3.1-405b-reasoning', 'gemma2-9b-it', 'gemma-7b-it'],
+    'ollama': ['llama3.1', 'mixtral'],
+}

 JSON_URL = "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json"
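The minus-side schema replaces flat lists of key names with lists of prompt descriptors, which is what `create_crew` iterates over further down. A small consumer of that schema, with an injected `ask` callable standing in for `click.prompt`:

```python
ENV_VARS = {
    "openai": [
        {
            "prompt": "Enter your OPENAI API key (press Enter to skip)",
            "key_name": "OPENAI_API_KEY",
        }
    ],
    "ollama": [{"default": True, "API_BASE": "http://localhost:11434"}],
}


def collect_env_vars(provider: str, ask) -> dict:
    env_vars = {}
    for details in ENV_VARS.get(provider, []):
        if details.get("default", False):
            # Default entries contribute their extra key/value pairs as-is.
            env_vars.update(
                {k: v for k, v in details.items() if k not in ("prompt", "key_name", "default")}
            )
        elif "key_name" in details:
            value = ask(details["prompt"])
            if value.strip():
                env_vars[details["key_name"]] = value
    return env_vars


print(collect_env_vars("ollama", ask=lambda _: ""))  # {'API_BASE': 'http://localhost:11434'}
```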

View File

@@ -1,11 +1,11 @@
import shutil
import sys import sys
from pathlib import Path from pathlib import Path
import click import click
from crewai.cli.constants import ENV_VARS, MODELS from crewai.cli.constants import ENV_VARS
from crewai.cli.provider import ( from crewai.cli.provider import (
PROVIDERS,
get_provider_data, get_provider_data,
select_model, select_model,
select_provider, select_provider,
@@ -29,14 +29,14 @@ def create_folder_structure(name, parent_folder=None):
click.secho("Operation cancelled.", fg="yellow") click.secho("Operation cancelled.", fg="yellow")
sys.exit(0) sys.exit(0)
click.secho(f"Overriding folder {folder_name}...", fg="green", bold=True) click.secho(f"Overriding folder {folder_name}...", fg="green", bold=True)
shutil.rmtree(folder_path) # Delete the existing folder and its contents else:
click.secho( click.secho(
f"Creating {'crew' if parent_folder else 'folder'} {folder_name}...", f"Creating {'crew' if parent_folder else 'folder'} {folder_name}...",
fg="green", fg="green",
bold=True, bold=True,
) )
if not folder_path.exists():
folder_path.mkdir(parents=True) folder_path.mkdir(parents=True)
(folder_path / "tests").mkdir(exist_ok=True) (folder_path / "tests").mkdir(exist_ok=True)
if not parent_folder: if not parent_folder:
@@ -92,10 +92,7 @@ def create_crew(name, provider=None, skip_provider=False, parent_folder=None):
existing_provider = None existing_provider = None
for provider, env_keys in ENV_VARS.items(): for provider, env_keys in ENV_VARS.items():
-            if any(
-                "key_name" in details and details["key_name"] in env_vars
-                for details in env_keys
-            ):
+            if any(key in env_vars for key in env_keys):
existing_provider = provider existing_provider = provider
break break
@@ -121,8 +118,6 @@ def create_crew(name, provider=None, skip_provider=False, parent_folder=None):
"No provider selected. Please try again or press 'q' to exit.", fg="red" "No provider selected. Please try again or press 'q' to exit.", fg="red"
) )
# Check if the selected provider has predefined models
if selected_provider in MODELS and MODELS[selected_provider]:
while True: while True:
selected_model = select_model(selected_provider, provider_models) selected_model = select_model(selected_provider, provider_models)
if selected_model is None: # User typed 'q' if selected_model is None: # User typed 'q'
@@ -131,38 +126,39 @@ def create_crew(name, provider=None, skip_provider=False, parent_folder=None):
if selected_model: # Valid selection if selected_model: # Valid selection
break break
             click.secho(
-                "No model selected. Please try again or press 'q' to exit.",
-                fg="red",
+                "No model selected. Please try again or press 'q' to exit.", fg="red"
             )
env_vars["MODEL"] = selected_model
# Check if the selected provider requires API keys if selected_provider in PROVIDERS:
if selected_provider in ENV_VARS: api_key_var = ENV_VARS[selected_provider][0]
provider_env_vars = ENV_VARS[selected_provider] else:
for details in provider_env_vars: api_key_var = click.prompt(
if details.get("default", False): f"Enter the environment variable name for your {selected_provider.capitalize()} API key",
# Automatically add default key-value pairs type=str,
for key, value in details.items(): default="",
if key not in ["prompt", "key_name", "default"]: )
env_vars[key] = value
elif "key_name" in details: api_key_value = ""
# Prompt for non-default key-value pairs click.echo(
prompt = details["prompt"] f"Enter your {selected_provider.capitalize()} API key (press Enter to skip): ",
key_name = details["key_name"] nl=False,
api_key_value = click.prompt(prompt, default="", show_default=False) )
try:
api_key_value = input()
except (KeyboardInterrupt, EOFError):
api_key_value = ""
if api_key_value.strip(): if api_key_value.strip():
env_vars[key_name] = api_key_value env_vars = {api_key_var: api_key_value}
if env_vars:
write_env_file(folder_path, env_vars) write_env_file(folder_path, env_vars)
click.secho("API keys and model saved to .env file", fg="green") click.secho("API key saved to .env file", fg="green")
else: else:
click.secho( click.secho(
"No API keys provided. Skipping .env file creation.", fg="yellow" "No API key provided. Skipping .env file creation.", fg="yellow"
) )
click.secho(f"Selected model: {env_vars.get('MODEL', 'N/A')}", fg="green") env_vars["MODEL"] = selected_model
click.secho(f"Selected model: {selected_model}", fg="green")
package_dir = Path(__file__).parent package_dir = Path(__file__).parent
templates_dir = package_dir / "templates" / "crew" templates_dir = package_dir / "templates" / "crew"

View File

@@ -164,7 +164,7 @@ def fetch_provider_data(cache_file):
- dict or None: The fetched provider data or None if the operation fails. - dict or None: The fetched provider data or None if the operation fails.
""" """
try: try:
response = requests.get(JSON_URL, stream=True, timeout=60) response = requests.get(JSON_URL, stream=True, timeout=10)
response.raise_for_status() response.raise_for_status()
data = download_data(response) data = download_data(response)
with open(cache_file, "w") as f: with open(cache_file, "w") as f:

View File

@@ -24,6 +24,7 @@ def run_crew() -> None:
f"Please run `crewai update` to update your pyproject.toml to use uv.", f"Please run `crewai update` to update your pyproject.toml to use uv.",
fg="red", fg="red",
) )
print()
try: try:
subprocess.run(command, capture_output=False, text=True, check=True) subprocess.run(command, capture_output=False, text=True, check=True)

View File

@@ -8,12 +8,9 @@ from crewai.project import CrewBase, agent, crew, task
# from crewai_tools import SerperDevTool # from crewai_tools import SerperDevTool
@CrewBase @CrewBase
class {{crew_name}}(): class {{crew_name}}Crew():
"""{{crew_name}} crew""" """{{crew_name}} crew"""
agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml'
@agent @agent
def researcher(self) -> Agent: def researcher(self) -> Agent:
return Agent( return Agent(

View File

@@ -1,10 +1,6 @@
#!/usr/bin/env python #!/usr/bin/env python
import sys import sys
import warnings from {{folder_name}}.crew import {{crew_name}}Crew
from {{folder_name}}.crew import {{crew_name}}
warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd")
# This main file is intended to be a way for you to run your # This main file is intended to be a way for you to run your
# crew locally, so refrain from adding unnecessary logic into this file. # crew locally, so refrain from adding unnecessary logic into this file.
@@ -18,7 +14,7 @@ def run():
inputs = { inputs = {
'topic': 'AI LLMs' 'topic': 'AI LLMs'
} }
{{crew_name}}().crew().kickoff(inputs=inputs) {{crew_name}}Crew().crew().kickoff(inputs=inputs)
def train(): def train():
@@ -29,7 +25,7 @@ def train():
"topic": "AI LLMs" "topic": "AI LLMs"
} }
try: try:
{{crew_name}}().crew().train(n_iterations=int(sys.argv[1]), filename=sys.argv[2], inputs=inputs) {{crew_name}}Crew().crew().train(n_iterations=int(sys.argv[1]), filename=sys.argv[2], inputs=inputs)
except Exception as e: except Exception as e:
raise Exception(f"An error occurred while training the crew: {e}") raise Exception(f"An error occurred while training the crew: {e}")
@@ -39,7 +35,7 @@ def replay():
Replay the crew execution from a specific task. Replay the crew execution from a specific task.
""" """
try: try:
{{crew_name}}().crew().replay(task_id=sys.argv[1]) {{crew_name}}Crew().crew().replay(task_id=sys.argv[1])
except Exception as e: except Exception as e:
raise Exception(f"An error occurred while replaying the crew: {e}") raise Exception(f"An error occurred while replaying the crew: {e}")
@@ -52,7 +48,7 @@ def test():
"topic": "AI LLMs" "topic": "AI LLMs"
} }
try: try:
{{crew_name}}().crew().test(n_iterations=int(sys.argv[1]), openai_model_name=sys.argv[2], inputs=inputs) {{crew_name}}Crew().crew().test(n_iterations=int(sys.argv[1]), openai_model_name=sys.argv[2], inputs=inputs)
except Exception as e: except Exception as e:
raise Exception(f"An error occurred while replaying the crew: {e}") raise Exception(f"An error occurred while replaying the crew: {e}")

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }] authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.13" requires-python = ">=3.10,<=3.13"
dependencies = [ dependencies = [
"crewai[tools]>=0.79.4,<1.0.0" "crewai[tools]>=0.76.9,<1.0.0"
] ]
[project.scripts] [project.scripts]

View File

@@ -1,8 +1,7 @@
from crewai.tools import BaseTool
from typing import Type from typing import Type
from crewai_tools import BaseTool
from pydantic import BaseModel, Field from pydantic import BaseModel, Field
class MyCustomToolInput(BaseModel): class MyCustomToolInput(BaseModel):
"""Input schema for MyCustomTool.""" """Input schema for MyCustomTool."""
argument: str = Field(..., description="Description of the argument.") argument: str = Field(..., description="Description of the argument.")
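For orientation, here is this tool template completed into a runnable class, reconstructed from the minus-side import style; the `_run` body is a placeholder:

```python
from typing import Type

from crewai.tools import BaseTool  # plus side used: from crewai_tools import BaseTool
from pydantic import BaseModel, Field


class MyCustomToolInput(BaseModel):
    """Input schema for MyCustomTool."""

    argument: str = Field(..., description="Description of the argument.")


class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = "What this tool is useful for."
    args_schema: Type[BaseModel] = MyCustomToolInput

    def _run(self, argument: str) -> str:
        # Replace with real tool logic.
        return f"Processed: {argument}"
```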

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }] authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.13" requires-python = ">=3.10,<=3.13"
dependencies = [ dependencies = [
"crewai[tools]>=0.79.4,<1.0.0", "crewai[tools]>=0.76.9,<1.0.0",
] ]
[project.scripts] [project.scripts]

View File

@@ -1,6 +1,6 @@
from typing import Type from typing import Type
from crewai.tools import BaseTool from crewai_tools import BaseTool
from pydantic import BaseModel, Field from pydantic import BaseModel, Field

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies] [tool.poetry.dependencies]
python = ">=3.10,<=3.13" python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.79.4,<1.0.0" } crewai = { extras = ["tools"], version = ">=0.76.9,<1.0.0" }
asyncio = "*" asyncio = "*"
[tool.poetry.scripts] [tool.poetry.scripts]

View File

@@ -1,8 +1,7 @@
from typing import Type from typing import Type
from crewai.tools import BaseTool from crewai_tools import BaseTool
from pydantic import BaseModel, Field from pydantic import BaseModel, Field
class MyCustomToolInput(BaseModel): class MyCustomToolInput(BaseModel):
"""Input schema for MyCustomTool.""" """Input schema for MyCustomTool."""
argument: str = Field(..., description="Description of the argument.") argument: str = Field(..., description="Description of the argument.")

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"] authors = ["Your Name <you@example.com>"]
requires-python = ">=3.10,<=3.13" requires-python = ">=3.10,<=3.13"
dependencies = [ dependencies = [
"crewai[tools]>=0.79.4,<1.0.0" "crewai[tools]>=0.76.9,<1.0.0"
] ]
[project.scripts] [project.scripts]

View File

@@ -1,8 +1,7 @@
from typing import Type from typing import Type
from crewai.tools import BaseTool from crewai_tools import BaseTool
from pydantic import BaseModel, Field from pydantic import BaseModel, Field
class MyCustomToolInput(BaseModel): class MyCustomToolInput(BaseModel):
"""Input schema for MyCustomTool.""" """Input schema for MyCustomTool."""
argument: str = Field(..., description="Description of the argument.") argument: str = Field(..., description="Description of the argument.")

View File

@@ -5,6 +5,6 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md" readme = "README.md"
requires-python = ">=3.10,<=3.13" requires-python = ">=3.10,<=3.13"
dependencies = [ dependencies = [
"crewai[tools]>=0.79.4" "crewai[tools]>=0.76.9"
] ]

View File

@@ -1,5 +1,4 @@
from crewai.tools import BaseTool from crewai_tools import BaseTool
class {{class_name}}(BaseTool): class {{class_name}}(BaseTool):
name: str = "Name of my tool" name: str = "Name of my tool"

View File

@@ -1,15 +1,17 @@
import base64 import base64
import os import os
import platform
import subprocess import subprocess
import tempfile import tempfile
from pathlib import Path from pathlib import Path
from netrc import netrc
import stat
import click import click
from rich.console import Console from rich.console import Console
from crewai.cli import git from crewai.cli import git
from crewai.cli.command import BaseCommand, PlusAPIMixin from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli.config import Settings
from crewai.cli.utils import ( from crewai.cli.utils import (
get_project_description, get_project_description,
get_project_name, get_project_name,
@@ -151,16 +153,26 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
raise SystemExit raise SystemExit
login_response_json = login_response.json() login_response_json = login_response.json()
self._set_netrc_credentials(login_response_json["credential"])
settings = Settings()
settings.tool_repository_username = login_response_json["credential"]["username"]
settings.tool_repository_password = login_response_json["credential"]["password"]
settings.dump()
console.print( console.print(
"Successfully authenticated to the tool repository.", style="bold green" "Successfully authenticated to the tool repository.", style="bold green"
) )
def _set_netrc_credentials(self, credentials, netrc_path=None):
if not netrc_path:
netrc_filename = "_netrc" if platform.system() == "Windows" else ".netrc"
netrc_path = Path.home() / netrc_filename
netrc_path.touch(mode=stat.S_IRUSR | stat.S_IWUSR, exist_ok=True)
netrc_instance = netrc(file=netrc_path)
netrc_instance.hosts["app.crewai.com"] = (credentials["username"], "", credentials["password"])
with open(netrc_path, 'w') as file:
file.write(str(netrc_instance))
console.print(f"Added credentials to {netrc_path}", style="bold green")
def _add_package(self, tool_details): def _add_package(self, tool_details):
tool_handle = tool_details["handle"] tool_handle = tool_details["handle"]
repository_handle = tool_details["repository"]["handle"] repository_handle = tool_details["repository"]["handle"]
@@ -175,11 +187,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
tool_handle, tool_handle,
] ]
add_package_result = subprocess.run( add_package_result = subprocess.run(
add_package_command, add_package_command, capture_output=False, text=True, check=True
capture_output=False,
env=self._build_env_with_credentials(repository_handle),
text=True,
check=True
) )
if add_package_result.stderr: if add_package_result.stderr:
@@ -198,13 +206,3 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
"[bold yellow]Tip:[/bold yellow] Navigate to a different directory and try again." "[bold yellow]Tip:[/bold yellow] Navigate to a different directory and try again."
) )
raise SystemExit raise SystemExit
def _build_env_with_credentials(self, repository_handle: str):
repository_handle = repository_handle.upper().replace("-", "_")
settings = Settings()
env = os.environ.copy()
env[f"UV_INDEX_{repository_handle}_USERNAME"] = str(settings.tool_repository_username or "")
env[f"UV_INDEX_{repository_handle}_PASSWORD"] = str(settings.tool_repository_password or "")
return env
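`_build_env_with_credentials` above relies on uv's convention of reading per-index credentials from `UV_INDEX_<NAME>_USERNAME` / `UV_INDEX_<NAME>_PASSWORD`. The same idea as a standalone helper:

```python
import os


def build_env_with_credentials(repository_handle: str, username: str, password: str) -> dict:
    # uv normalizes index names to upper snake case, as above.
    handle = repository_handle.upper().replace("-", "_")
    env = os.environ.copy()
    env[f"UV_INDEX_{handle}_USERNAME"] = username
    env[f"UV_INDEX_{handle}_PASSWORD"] = password
    return env


env = build_env_with_credentials("my-org-tools", "alice", "s3cret")
# subprocess.run(["uv", "add", ...], env=env) then authenticates against the index.
```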

View File

@@ -27,13 +27,12 @@ from crewai.llm import LLM
from crewai.memory.entity.entity_memory import EntityMemory from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.memory.user.user_memory import UserMemory
from crewai.process import Process from crewai.process import Process
from crewai.task import Task from crewai.task import Task
from crewai.tasks.conditional_task import ConditionalTask from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput from crewai.tasks.task_output import TaskOutput
from crewai.telemetry import Telemetry from crewai.telemetry import Telemetry
from crewai.tools.agent_tools.agent_tools import AgentTools from crewai.tools.agent_tools import AgentTools
from crewai.types.usage_metrics import UsageMetrics from crewai.types.usage_metrics import UsageMetrics
from crewai.utilities import I18N, FileHandler, Logger, RPMController from crewai.utilities import I18N, FileHandler, Logger, RPMController
from crewai.utilities.constants import ( from crewai.utilities.constants import (
@@ -72,7 +71,6 @@ class Crew(BaseModel):
manager_llm: The language model that will run manager agent. manager_llm: The language model that will run manager agent.
manager_agent: Custom agent that will be used as manager. manager_agent: Custom agent that will be used as manager.
memory: Whether the crew should use memory to store memories of its execution. memory: Whether the crew should use memory to store memories of its execution.
memory_config: Configuration for the memory to be used for the crew.
cache: Whether the crew should use a cache to store the results of the tools execution. cache: Whether the crew should use a cache to store the results of the tools execution.
function_calling_llm: The language model that will run the tool calling for all the agents. function_calling_llm: The language model that will run the tool calling for all the agents.
process: The process flow that the crew will follow (e.g., sequential, hierarchical). process: The process flow that the crew will follow (e.g., sequential, hierarchical).
@@ -96,7 +94,6 @@ class Crew(BaseModel):
_short_term_memory: Optional[InstanceOf[ShortTermMemory]] = PrivateAttr() _short_term_memory: Optional[InstanceOf[ShortTermMemory]] = PrivateAttr()
_long_term_memory: Optional[InstanceOf[LongTermMemory]] = PrivateAttr() _long_term_memory: Optional[InstanceOf[LongTermMemory]] = PrivateAttr()
_entity_memory: Optional[InstanceOf[EntityMemory]] = PrivateAttr() _entity_memory: Optional[InstanceOf[EntityMemory]] = PrivateAttr()
_user_memory: Optional[InstanceOf[UserMemory]] = PrivateAttr()
_train: Optional[bool] = PrivateAttr(default=False) _train: Optional[bool] = PrivateAttr(default=False)
_train_iteration: Optional[int] = PrivateAttr() _train_iteration: Optional[int] = PrivateAttr()
_inputs: Optional[Dict[str, Any]] = PrivateAttr(default=None) _inputs: Optional[Dict[str, Any]] = PrivateAttr(default=None)
@@ -117,10 +114,6 @@ class Crew(BaseModel):
default=False, default=False,
description="Whether the crew should use memory to store memories of it's execution", description="Whether the crew should use memory to store memories of it's execution",
) )
memory_config: Optional[Dict[str, Any]] = Field(
default=None,
description="Configuration for the memory to be used for the crew.",
)
short_term_memory: Optional[InstanceOf[ShortTermMemory]] = Field( short_term_memory: Optional[InstanceOf[ShortTermMemory]] = Field(
default=None, default=None,
description="An Instance of the ShortTermMemory to be used by the Crew", description="An Instance of the ShortTermMemory to be used by the Crew",
@@ -133,11 +126,7 @@ class Crew(BaseModel):
default=None, default=None,
description="An Instance of the EntityMemory to be used by the Crew", description="An Instance of the EntityMemory to be used by the Crew",
) )
-    user_memory: Optional[InstanceOf[UserMemory]] = Field(
-        default=None,
-        description="An instance of the UserMemory to be used by the Crew to store/fetch memories of a specific user.",
-    )
-    embedder: Optional[dict] = Field(
+    embedder: Optional[Any] = Field(
         default=None,
         description="Configuration for the embedder to be used for the crew.",
     )
@@ -249,22 +238,13 @@ class Crew(BaseModel):
self._short_term_memory = ( self._short_term_memory = (
self.short_term_memory self.short_term_memory
if self.short_term_memory if self.short_term_memory
-            else ShortTermMemory(
-                crew=self,
-                embedder_config=self.embedder,
-            )
+            else ShortTermMemory(crew=self, embedder_config=self.embedder)
         )
self._entity_memory = ( self._entity_memory = (
self.entity_memory self.entity_memory
if self.entity_memory if self.entity_memory
else EntityMemory(crew=self, embedder_config=self.embedder) else EntityMemory(crew=self, embedder_config=self.embedder)
) )
if hasattr(self, "memory_config") and self.memory_config is not None:
self._user_memory = (
self.user_memory if self.user_memory else UserMemory(crew=self)
)
else:
self._user_memory = None
return self return self
@model_validator(mode="after") @model_validator(mode="after")
@@ -465,7 +445,6 @@ class Crew(BaseModel):
training_data = CrewTrainingHandler(TRAINING_DATA_FILE).load() training_data = CrewTrainingHandler(TRAINING_DATA_FILE).load()
for agent in train_crew.agents: for agent in train_crew.agents:
if training_data.get(str(agent.id)):
result = TaskEvaluator(agent).evaluate_training_data( result = TaskEvaluator(agent).evaluate_training_data(
training_data=training_data, agent_id=str(agent.id) training_data=training_data, agent_id=str(agent.id)
) )
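A sketch of wiring up the minus-side `memory_config` field; it assumes a crewAI build that still ships it, plus `pip install mem0ai` and a `MEM0_API_KEY` for the mem0 provider:

```python
from crewai import Agent, Crew, Task

researcher = Agent(
    role="Researcher",
    goal="Find concise facts",
    backstory="A careful analyst.",
)
task = Task(
    description="Summarize today's findings.",
    expected_output="A short summary.",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[task],
    memory=True,
    # Routes short-term/entity/user memory through mem0 (see the
    # memory classes later in this diff).
    memory_config={"provider": "mem0", "config": {"user_id": "john"}},
)
```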

View File

@@ -1,20 +1,8 @@
import asyncio import asyncio
import inspect import inspect
-from typing import (
-    Any,
-    Callable,
-    Dict,
-    Generic,
-    List,
-    Optional,
-    Set,
-    Type,
-    TypeVar,
-    Union,
-    cast,
-)
+from typing import Any, Callable, Dict, Generic, List, Set, Type, TypeVar, Union

-from pydantic import BaseModel, ValidationError
+from pydantic import BaseModel
from crewai.flow.flow_visualizer import plot_flow from crewai.flow.flow_visualizer import plot_flow
from crewai.flow.utils import get_possible_return_constants from crewai.flow.utils import get_possible_return_constants
@@ -131,6 +119,7 @@ class FlowMeta(type):
condition_type = getattr(attr_value, "__condition_type__", "OR") condition_type = getattr(attr_value, "__condition_type__", "OR")
listeners[attr_name] = (condition_type, methods) listeners[attr_name] = (condition_type, methods)
# TODO: should we add a check for __condition_type__ 'AND'?
elif hasattr(attr_value, "__is_router__"): elif hasattr(attr_value, "__is_router__"):
routers[attr_value.__router_for__] = attr_name routers[attr_value.__router_for__] = attr_name
possible_returns = get_possible_return_constants(attr_value) possible_returns = get_possible_return_constants(attr_value)
@@ -170,7 +159,8 @@ class Flow(Generic[T], metaclass=FlowMeta):
     def __init__(self) -> None:
         self._methods: Dict[str, Callable] = {}
         self._state: T = self._create_initial_state()
-        self._method_execution_counts: Dict[str, int] = {}
+        self._executed_methods: Set[str] = set()
+        self._scheduled_tasks: Set[str] = set()
         self._pending_and_listeners: Dict[str, Set[str]] = {}
         self._method_outputs: List[Any] = []  # List to store all method outputs
@@ -201,74 +191,10 @@ class Flow(Generic[T], metaclass=FlowMeta):
"""Returns the list of all outputs from executed methods.""" """Returns the list of all outputs from executed methods."""
return self._method_outputs return self._method_outputs
def _initialize_state(self, inputs: Dict[str, Any]) -> None: def kickoff(self) -> Any:
"""
Initializes or updates the state with the provided inputs.
Args:
inputs: Dictionary of inputs to initialize or update the state.
Raises:
ValueError: If inputs do not match the structured state model.
TypeError: If state is neither a BaseModel instance nor a dictionary.
"""
if isinstance(self._state, BaseModel):
# Structured state management
try:
# Define a function to create the dynamic class
def create_model_with_extra_forbid(
base_model: Type[BaseModel],
) -> Type[BaseModel]:
class ModelWithExtraForbid(base_model): # type: ignore
model_config = base_model.model_config.copy()
model_config["extra"] = "forbid"
return ModelWithExtraForbid
# Create the dynamic class
ModelWithExtraForbid = create_model_with_extra_forbid(
self._state.__class__
)
# Create a new instance using the combined state and inputs
self._state = cast(
T, ModelWithExtraForbid(**{**self._state.model_dump(), **inputs})
)
except ValidationError as e:
raise ValueError(f"Invalid inputs for structured state: {e}") from e
elif isinstance(self._state, dict):
# Unstructured state management
self._state.update(inputs)
else:
raise TypeError("State must be a BaseModel instance or a dictionary.")
def kickoff(self, inputs: Optional[Dict[str, Any]] = None) -> Any:
"""
Starts the execution of the flow synchronously.
Args:
inputs: Optional dictionary of inputs to initialize or update the state.
Returns:
The final output from the flow execution.
"""
if inputs is not None:
self._initialize_state(inputs)
return asyncio.run(self.kickoff_async()) return asyncio.run(self.kickoff_async())
async def kickoff_async(self, inputs: Optional[Dict[str, Any]] = None) -> Any: async def kickoff_async(self) -> Any:
"""
Starts the execution of the flow asynchronously.
Args:
inputs: Optional dictionary of inputs to initialize or update the state.
Returns:
The final output from the flow execution.
"""
if inputs is not None:
self._initialize_state(inputs)
if not self._start_methods: if not self._start_methods:
raise ValueError("No start method defined") raise ValueError("No start method defined")
@@ -307,10 +233,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
         )
         self._method_outputs.append(result)  # Store the output

-        # Track method execution counts
-        self._method_execution_counts[method_name] = (
-            self._method_execution_counts.get(method_name, 0) + 1
-        )
+        self._executed_methods.add(method_name)

         return result
@@ -320,33 +243,34 @@ class Flow(Generic[T], metaclass=FlowMeta):
         if trigger_method in self._routers:
             router_method = self._methods[self._routers[trigger_method]]
             path = await self._execute_method(
-                self._routers[trigger_method], router_method
-            )
-            # Use the path as the new trigger method
+                trigger_method, router_method
+            )  # TODO: Change or not?
             trigger_method = path

         for listener_name, (condition_type, methods) in self._listeners.items():
             if condition_type == "OR":
                 if trigger_method in methods:
-                    # Schedule the listener without preventing re-execution
-                    listener_tasks.append(
-                        self._execute_single_listener(listener_name, result)
-                    )
+                    if (
+                        listener_name not in self._executed_methods
+                        and listener_name not in self._scheduled_tasks
+                    ):
+                        self._scheduled_tasks.add(listener_name)
+                        listener_tasks.append(
+                            self._execute_single_listener(listener_name, result)
+                        )
             elif condition_type == "AND":
-                # Initialize pending methods for this listener if not already done
-                if listener_name not in self._pending_and_listeners:
-                    self._pending_and_listeners[listener_name] = set(methods)
-                # Remove the trigger method from pending methods
-                self._pending_and_listeners[listener_name].discard(trigger_method)
-
-                if not self._pending_and_listeners[listener_name]:
-                    # All required methods have been executed
-                    listener_tasks.append(
-                        self._execute_single_listener(listener_name, result)
-                    )
-                    # Reset pending methods for this listener
-                    self._pending_and_listeners.pop(listener_name, None)
+                if all(method in self._executed_methods for method in methods):
+                    if (
+                        listener_name not in self._executed_methods
+                        and listener_name not in self._scheduled_tasks
+                    ):
+                        self._scheduled_tasks.add(listener_name)
+                        listener_tasks.append(
+                            self._execute_single_listener(listener_name, result)
+                        )

         # Run all listener tasks concurrently and wait for them to complete
-        if listener_tasks:
-            await asyncio.gather(*listener_tasks)
+        await asyncio.gather(*listener_tasks)
async def _execute_single_listener(self, listener_name: str, result: Any) -> None: async def _execute_single_listener(self, listener_name: str, result: Any) -> None:
@@ -367,6 +291,9 @@ class Flow(Generic[T], metaclass=FlowMeta):
# If listener does not expect parameters, call without arguments # If listener does not expect parameters, call without arguments
listener_result = await self._execute_method(listener_name, method) listener_result = await self._execute_method(listener_name, method)
# Remove from scheduled tasks after execution
self._scheduled_tasks.discard(listener_name)
# Execute listeners of this listener # Execute listeners of this listener
await self._execute_listeners(listener_name, listener_result) await self._execute_listeners(listener_name, listener_result)
except Exception as e: except Exception as e:
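The minus side is what makes `flow.kickoff(inputs=...)` work: inputs are validated against a structured state model (rebuilt with `extra="forbid"`) or merged into a dict state. A sketch assuming a crewAI build with that support:

```python
from pydantic import BaseModel

from crewai.flow.flow import Flow, listen, start


class ExampleState(BaseModel):
    counter: int = 0
    message: str = ""


class StructuredFlow(Flow[ExampleState]):
    @start()
    def begin(self):
        return f"{self.state.message} (counter={self.state.counter})"

    @listen(begin)
    def finish(self, result):
        return result.upper()


flow = StructuredFlow()
# Valid keys update the state model; an unknown key raises ValueError
# because the state is re-validated with extra fields forbidden.
print(flow.kickoff(inputs={"counter": 1, "message": "hello"}))
```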

View File

@@ -1,10 +1,7 @@
import io
import logging
import sys
import warnings
from contextlib import contextmanager from contextlib import contextmanager
from typing import Any, Dict, List, Optional, Union from typing import Any, Dict, List, Optional, Union
import logging
import warnings
import litellm import litellm
from litellm import get_supported_openai_params from litellm import get_supported_openai_params
@@ -12,6 +9,9 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException, LLMContextLengthExceededException,
) )
import sys
import io
class FilteredStream(io.StringIO): class FilteredStream(io.StringIO):
def write(self, s): def write(self, s):
@@ -118,12 +118,12 @@ class LLM:
litellm.drop_params = True litellm.drop_params = True
litellm.set_verbose = False litellm.set_verbose = False
self.set_callbacks(callbacks) litellm.callbacks = callbacks
def call(self, messages: List[Dict[str, str]], callbacks: List[Any] = []) -> str: def call(self, messages: List[Dict[str, str]], callbacks: List[Any] = []) -> str:
with suppress_warnings(): with suppress_warnings():
if callbacks and len(callbacks) > 0: if callbacks and len(callbacks) > 0:
self.set_callbacks(callbacks) litellm.callbacks = callbacks
try: try:
params = { params = {
@@ -181,15 +181,3 @@ class LLM:
def get_context_window_size(self) -> int: def get_context_window_size(self) -> int:
# Only using 75% of the context window size to avoid cutting the message in the middle # Only using 75% of the context window size to avoid cutting the message in the middle
return int(LLM_CONTEXT_WINDOW_SIZES.get(self.model, 8192) * 0.75) return int(LLM_CONTEXT_WINDOW_SIZES.get(self.model, 8192) * 0.75)
def set_callbacks(self, callbacks: List[Any]):
callback_types = [type(callback) for callback in callbacks]
for callback in litellm.success_callback[:]:
if type(callback) in callback_types:
litellm.success_callback.remove(callback)
for callback in litellm._async_success_callback[:]:
if type(callback) in callback_types:
litellm._async_success_callback.remove(callback)
litellm.callbacks = callbacks
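The removed `set_callbacks` de-duplicates by callback *type* before registering, so repeated `LLM.call()` invocations don't stack identical handlers. The core idea in isolation:

```python
def replace_callbacks_by_type(registered: list, incoming: list) -> list:
    # Drop any already-registered callback whose type matches an
    # incoming one, then append the new instances.
    incoming_types = {type(cb) for cb in incoming}
    kept = [cb for cb in registered if type(cb) not in incoming_types]
    return kept + incoming


class PrintLogger:
    pass


callbacks = replace_callbacks_by_type([PrintLogger()], [PrintLogger()])
print(len(callbacks))  # 1 - the stale instance was replaced, not stacked
```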

View File

@@ -1,6 +1,5 @@
from .entity.entity_memory import EntityMemory from .entity.entity_memory import EntityMemory
from .long_term.long_term_memory import LongTermMemory from .long_term.long_term_memory import LongTermMemory
from .short_term.short_term_memory import ShortTermMemory from .short_term.short_term_memory import ShortTermMemory
from .user.user_memory import UserMemory
__all__ = ["UserMemory", "EntityMemory", "LongTermMemory", "ShortTermMemory"] __all__ = ["EntityMemory", "LongTermMemory", "ShortTermMemory"]

View File

@@ -1,25 +1,13 @@
-from typing import Optional, Dict, Any
+from typing import Optional

-from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory, UserMemory
+from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory


 class ContextualMemory:
-    def __init__(
-        self,
-        memory_config: Optional[Dict[str, Any]],
-        stm: ShortTermMemory,
-        ltm: LongTermMemory,
-        em: EntityMemory,
-        um: UserMemory,
-    ):
-        if memory_config is not None:
-            self.memory_provider = memory_config.get("provider")
-        else:
-            self.memory_provider = None
+    def __init__(self, stm: ShortTermMemory, ltm: LongTermMemory, em: EntityMemory):
         self.stm = stm
         self.ltm = ltm
         self.em = em
-        self.um = um
def build_context_for_task(self, task, context) -> str: def build_context_for_task(self, task, context) -> str:
""" """
@@ -35,8 +23,6 @@ class ContextualMemory:
context.append(self._fetch_ltm_context(task.description)) context.append(self._fetch_ltm_context(task.description))
context.append(self._fetch_stm_context(query)) context.append(self._fetch_stm_context(query))
context.append(self._fetch_entity_context(query)) context.append(self._fetch_entity_context(query))
if self.memory_provider == "mem0":
context.append(self._fetch_user_context(query))
return "\n".join(filter(None, context)) return "\n".join(filter(None, context))
def _fetch_stm_context(self, query) -> str: def _fetch_stm_context(self, query) -> str:
@@ -46,10 +32,7 @@ class ContextualMemory:
""" """
stm_results = self.stm.search(query) stm_results = self.stm.search(query)
         formatted_results = "\n".join(
-            [
-                f"- {result['memory'] if self.memory_provider == 'mem0' else result['context']}"
-                for result in stm_results
-            ]
+            [f"- {result['context']}" for result in stm_results]
         )
return f"Recent Insights:\n{formatted_results}" if stm_results else "" return f"Recent Insights:\n{formatted_results}" if stm_results else ""
@@ -79,26 +62,6 @@ class ContextualMemory:
""" """
em_results = self.em.search(query) em_results = self.em.search(query)
         formatted_results = "\n".join(
-            [
-                f"- {result['memory'] if self.memory_provider == 'mem0' else result['context']}"
-                for result in em_results
-            ]  # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
+            [f"- {result['context']}" for result in em_results]  # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
         )
return f"Entities:\n{formatted_results}" if em_results else "" return f"Entities:\n{formatted_results}" if em_results else ""
def _fetch_user_context(self, query: str) -> str:
"""
Fetches and formats relevant user information from User Memory.
Args:
query (str): The search query to find relevant user memories.
Returns:
str: Formatted user memories as bullet points, or an empty string if none found.
"""
user_memories = self.um.search(query)
if not user_memories:
return ""
formatted_memories = "\n".join(
f"- {result['memory']}" for result in user_memories
)
return f"User memories/preferences:\n{formatted_memories}"

View File

@@ -11,20 +11,6 @@ class EntityMemory(Memory):
""" """
def __init__(self, crew=None, embedder_config=None, storage=None): def __init__(self, crew=None, embedder_config=None, storage=None):
if hasattr(crew, "memory_config") and crew.memory_config is not None:
self.memory_provider = crew.memory_config.get("provider")
else:
self.memory_provider = None
if self.memory_provider == "mem0":
try:
from crewai.memory.storage.mem0_storage import Mem0Storage
except ImportError:
raise ImportError(
"Mem0 is not installed. Please install it with `pip install mem0ai`."
)
storage = Mem0Storage(type="entities", crew=crew)
else:
storage = ( storage = (
storage storage
if storage if storage
@@ -39,14 +25,6 @@ class EntityMemory(Memory):
def save(self, item: EntityMemoryItem) -> None: # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory" def save(self, item: EntityMemoryItem) -> None: # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"
"""Saves an entity item into the SQLite storage.""" """Saves an entity item into the SQLite storage."""
if self.memory_provider == "mem0":
data = f"""
Remember details about the following entity:
Name: {item.name}
Type: {item.type}
Entity Description: {item.description}
"""
else:
data = f"{item.name}({item.type}): {item.description}" data = f"{item.name}({item.type}): {item.description}"
super().save(data, item.metadata) super().save(data, item.metadata)

View File

@@ -23,12 +23,5 @@ class Memory:
self.storage.save(value, metadata) self.storage.save(value, metadata)
-    def search(
-        self,
-        query: str,
-        limit: int = 3,
-        score_threshold: float = 0.35,
-    ) -> List[Any]:
-        return self.storage.search(
-            query=query, limit=limit, score_threshold=score_threshold
-        )
+    def search(self, query: str) -> List[Dict[str, Any]]:
+        return self.storage.search(query)
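The minus-side `search` contract (query plus `limit` and `score_threshold`) is what every storage backend in this diff implements. A toy in-memory storage honoring that contract, for illustration only:

```python
from typing import Any, Dict, List


class InMemoryStorage:
    """Toy stand-in for the SQLite/RAG backends elsewhere in this diff."""

    def __init__(self) -> None:
        self.items: List[Dict[str, Any]] = []

    def save(self, value: Any, metadata: Dict[str, Any]) -> None:
        self.items.append({"context": value, "metadata": metadata, "score": 1.0})

    def search(self, query: str, limit: int = 3, score_threshold: float = 0.35) -> List[Any]:
        hits = [i for i in self.items if query in i["context"] and i["score"] >= score_threshold]
        return hits[:limit]


storage = InMemoryStorage()
storage.save("pricing went up", {"topic": "pricing"})
print(storage.search("pricing"))
```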

View File

@@ -14,20 +14,6 @@ class ShortTermMemory(Memory):
""" """
def __init__(self, crew=None, embedder_config=None, storage=None): def __init__(self, crew=None, embedder_config=None, storage=None):
if hasattr(crew, "memory_config") and crew.memory_config is not None:
self.memory_provider = crew.memory_config.get("provider")
else:
self.memory_provider = None
if self.memory_provider == "mem0":
try:
from crewai.memory.storage.mem0_storage import Mem0Storage
except ImportError:
raise ImportError(
"Mem0 is not installed. Please install it with `pip install mem0ai`."
)
storage = Mem0Storage(type="short_term", crew=crew)
else:
storage = ( storage = (
storage storage
if storage if storage
@@ -44,20 +30,11 @@ class ShortTermMemory(Memory):
agent: Optional[str] = None, agent: Optional[str] = None,
) -> None: ) -> None:
item = ShortTermMemoryItem(data=value, metadata=metadata, agent=agent) item = ShortTermMemoryItem(data=value, metadata=metadata, agent=agent)
if self.memory_provider == "mem0":
item.data = f"Remember the following insights from Agent run: {item.data}"
super().save(value=item.data, metadata=item.metadata, agent=item.agent) super().save(value=item.data, metadata=item.metadata, agent=item.agent)
-    def search(
-        self,
-        query: str,
-        limit: int = 3,
-        score_threshold: float = 0.35,
-    ):
-        return self.storage.search(
-            query=query, limit=limit, score_threshold=score_threshold
-        )  # type: ignore # BUG? The reference is to the parent class, but the parent class does not have these parameters
+    def search(self, query: str, score_threshold: float = 0.35):
+        return self.storage.search(query=query, score_threshold=score_threshold)  # type: ignore # BUG? The reference is to the parent class, but the parent class does not have these parameters
def reset(self) -> None: def reset(self) -> None:
try: try:

View File

@@ -7,10 +7,8 @@ class Storage:
def save(self, value: Any, metadata: Dict[str, Any]) -> None: def save(self, value: Any, metadata: Dict[str, Any]) -> None:
pass pass
-    def search(
-        self, query: str, limit: int, score_threshold: float
-    ) -> Dict[str, Any] | List[Any]:
-        return {}
+    def search(self, key: str) -> List[Dict[str, Any]]:  # type: ignore
+        pass
def reset(self) -> None: def reset(self) -> None:
pass pass

View File

@@ -70,7 +70,7 @@ class KickoffTaskOutputsSQLiteStorage:
task.expected_output, task.expected_output,
json.dumps(output, cls=CrewJSONEncoder), json.dumps(output, cls=CrewJSONEncoder),
task_index, task_index,
json.dumps(inputs, cls=CrewJSONEncoder), json.dumps(inputs),
was_replayed, was_replayed,
), ),
) )
@@ -103,7 +103,7 @@ class KickoffTaskOutputsSQLiteStorage:
else value else value
) )
query = f"UPDATE latest_kickoff_task_outputs SET {', '.join(fields)} WHERE task_index = ?" # nosec query = f"UPDATE latest_kickoff_task_outputs SET {', '.join(fields)} WHERE task_index = ?"
values.append(task_index) values.append(task_index)
cursor.execute(query, tuple(values)) cursor.execute(query, tuple(values))

View File

@@ -83,7 +83,7 @@ class LTMSQLiteStorage:
WHERE task_description = ? WHERE task_description = ?
ORDER BY datetime DESC, score ASC ORDER BY datetime DESC, score ASC
LIMIT {latest_n} LIMIT {latest_n}
""", # nosec """,
(task_description,), (task_description,),
) )
rows = cursor.fetchall() rows = cursor.fetchall()

View File

@@ -1,104 +0,0 @@
import os
from typing import Any, Dict, List
from mem0 import MemoryClient
from crewai.memory.storage.interface import Storage
class Mem0Storage(Storage):
"""
Extends Storage to handle embedding and searching across entities using Mem0.
"""
def __init__(self, type, crew=None):
super().__init__()
if type not in ["user", "short_term", "long_term", "entities"]:
raise ValueError("Invalid type for Mem0Storage. Must be 'user' or 'agent'.")
self.memory_type = type
self.crew = crew
self.memory_config = crew.memory_config
# User ID is required for user memory type "user" since it's used as a unique identifier for the user.
user_id = self._get_user_id()
if type == "user" and not user_id:
raise ValueError("User ID is required for user memory type")
# API key in memory config overrides the environment variable
mem0_api_key = self.memory_config.get("config", {}).get("api_key") or os.getenv(
"MEM0_API_KEY"
)
self.memory = MemoryClient(api_key=mem0_api_key)
def _sanitize_role(self, role: str) -> str:
"""
Sanitizes agent roles to ensure valid directory names.
"""
return role.replace("\n", "").replace(" ", "_").replace("/", "_")
def save(self, value: Any, metadata: Dict[str, Any]) -> None:
user_id = self._get_user_id()
agent_name = self._get_agent_name()
if self.memory_type == "user":
self.memory.add(value, user_id=user_id, metadata={**metadata})
elif self.memory_type == "short_term":
agent_name = self._get_agent_name()
self.memory.add(
value, agent_id=agent_name, metadata={"type": "short_term", **metadata}
)
elif self.memory_type == "long_term":
agent_name = self._get_agent_name()
self.memory.add(
value,
agent_id=agent_name,
infer=False,
metadata={"type": "long_term", **metadata},
)
elif self.memory_type == "entities":
entity_name = None
self.memory.add(
value, user_id=entity_name, metadata={"type": "entity", **metadata}
)
def search(
self,
query: str,
limit: int = 3,
score_threshold: float = 0.35,
) -> List[Any]:
params = {"query": query, "limit": limit}
if self.memory_type == "user":
user_id = self._get_user_id()
params["user_id"] = user_id
elif self.memory_type == "short_term":
agent_name = self._get_agent_name()
params["agent_id"] = agent_name
params["metadata"] = {"type": "short_term"}
elif self.memory_type == "long_term":
agent_name = self._get_agent_name()
params["agent_id"] = agent_name
params["metadata"] = {"type": "long_term"}
elif self.memory_type == "entities":
agent_name = self._get_agent_name()
params["agent_id"] = agent_name
params["metadata"] = {"type": "entity"}
# Discard the filters for now since we create the filters
# automatically when the crew is created.
results = self.memory.search(**params)
return [r for r in results if r["score"] >= score_threshold]
def _get_user_id(self):
if self.memory_type == "user":
if hasattr(self, "memory_config") and self.memory_config is not None:
return self.memory_config.get("config", {}).get("user_id")
else:
return None
return None
def _get_agent_name(self):
agents = self.crew.agents if self.crew else []
agents = [self._sanitize_role(agent.role) for agent in agents]
agents = "_".join(agents)
return agents
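A usage sketch for `Mem0Storage` above; it assumes `pip install mem0ai`, a valid `MEM0_API_KEY`, and uses a `SimpleNamespace` stand-in because the class only reads `.memory_config` and `.agents` from the crew:

```python
from types import SimpleNamespace

from crewai.memory.storage.mem0_storage import Mem0Storage

fake_crew = SimpleNamespace(
    memory_config={"provider": "mem0", "config": {"user_id": "john"}},
    agents=[],
)

storage = Mem0Storage(type="user", crew=fake_crew)
storage.save("John prefers concise answers", metadata={"source": "chat"})
print(storage.search("answer style", limit=3, score_threshold=0.35))
```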

View File

@@ -4,13 +4,13 @@ import logging
import os import os
import shutil import shutil
import uuid import uuid
from typing import Any, Dict, List, Optional, cast from typing import Any, Dict, List, Optional
from chromadb import Documents, EmbeddingFunction, Embeddings
from chromadb.api import ClientAPI
from chromadb.api.types import validate_embedding_function
from crewai.memory.storage.base_rag_storage import BaseRAGStorage from crewai.memory.storage.base_rag_storage import BaseRAGStorage
from crewai.utilities.paths import db_storage_path from crewai.utilities.paths import db_storage_path
from chromadb.api import ClientAPI
from chromadb.api.types import validate_embedding_function
from chromadb import Documents, EmbeddingFunction, Embeddings
from typing import cast
@contextlib.contextmanager @contextlib.contextmanager
@@ -21,11 +21,9 @@ def suppress_logging(
logger = logging.getLogger(logger_name) logger = logging.getLogger(logger_name)
original_level = logger.getEffectiveLevel() original_level = logger.getEffectiveLevel()
logger.setLevel(level) logger.setLevel(level)
-    with (
-        contextlib.redirect_stdout(io.StringIO()),
-        contextlib.redirect_stderr(io.StringIO()),
-        contextlib.suppress(UserWarning),
-    ):
+    with contextlib.redirect_stdout(io.StringIO()), contextlib.redirect_stderr(
+        io.StringIO()
+    ), contextlib.suppress(UserWarning):
yield yield
logger.setLevel(original_level) logger.setLevel(original_level)
@@ -51,6 +49,8 @@ class RAGStorage(BaseRAGStorage):
self._initialize_app() self._initialize_app()
def _set_embedder_config(self): def _set_embedder_config(self):
import chromadb.utils.embedding_functions as embedding_functions
if self.embedder_config is None: if self.embedder_config is None:
self.embedder_config = self._create_default_embedding_function() self.embedder_config = self._create_default_embedding_function()
@@ -59,20 +59,12 @@ class RAGStorage(BaseRAGStorage):
config = self.embedder_config.get("config", {}) config = self.embedder_config.get("config", {})
model_name = config.get("model") model_name = config.get("model")
if provider == "openai": if provider == "openai":
from chromadb.utils.embedding_functions.openai_embedding_function import ( self.embedder_config = embedding_functions.OpenAIEmbeddingFunction(
OpenAIEmbeddingFunction,
)
self.embedder_config = OpenAIEmbeddingFunction(
api_key=config.get("api_key") or os.getenv("OPENAI_API_KEY"), api_key=config.get("api_key") or os.getenv("OPENAI_API_KEY"),
model_name=model_name, model_name=model_name,
) )
elif provider == "azure": elif provider == "azure":
from chromadb.utils.embedding_functions.openai_embedding_function import ( self.embedder_config = embedding_functions.OpenAIEmbeddingFunction(
OpenAIEmbeddingFunction,
)
self.embedder_config = OpenAIEmbeddingFunction(
api_key=config.get("api_key"), api_key=config.get("api_key"),
api_base=config.get("api_base"), api_base=config.get("api_base"),
api_type=config.get("api_type", "azure"), api_type=config.get("api_type", "azure"),
@@ -80,103 +72,53 @@ class RAGStorage(BaseRAGStorage):
model_name=model_name, model_name=model_name,
) )
elif provider == "ollama": elif provider == "ollama":
from chromadb.utils.embedding_functions.ollama_embedding_function import ( from openai import OpenAI
OllamaEmbeddingFunction,
)
self.embedder_config = OllamaEmbeddingFunction( class OllamaEmbeddingFunction(EmbeddingFunction):
url=config.get("url", "http://localhost:11434/api/embeddings"),
model_name=model_name,
)
elif provider == "vertexai":
from chromadb.utils.embedding_functions.google_embedding_function import (
GoogleVertexEmbeddingFunction,
)
self.embedder_config = GoogleVertexEmbeddingFunction(
model_name=model_name,
api_key=config.get("api_key"),
)
elif provider == "google":
from chromadb.utils.embedding_functions.google_embedding_function import (
GoogleGenerativeAiEmbeddingFunction,
)
self.embedder_config = GoogleGenerativeAiEmbeddingFunction(
model_name=model_name,
api_key=config.get("api_key"),
)
elif provider == "cohere":
from chromadb.utils.embedding_functions.cohere_embedding_function import (
CohereEmbeddingFunction,
)
self.embedder_config = CohereEmbeddingFunction(
model_name=model_name,
api_key=config.get("api_key"),
)
elif provider == "bedrock":
from chromadb.utils.embedding_functions.amazon_bedrock_embedding_function import (
AmazonBedrockEmbeddingFunction,
)
self.embedder_config = AmazonBedrockEmbeddingFunction(
session=config.get("session"),
)
elif provider == "huggingface":
from chromadb.utils.embedding_functions.huggingface_embedding_function import (
HuggingFaceEmbeddingServer,
)
self.embedder_config = HuggingFaceEmbeddingServer(
url=config.get("api_url"),
)
elif provider == "watson":
try:
import ibm_watsonx_ai.foundation_models as watson_models
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.metanames import (
EmbedTextParamsMetaNames as EmbedParams,
)
except ImportError as e:
raise ImportError(
"IBM Watson dependencies are not installed. Please install them to use Watson embedding."
) from e
class WatsonEmbeddingFunction(EmbeddingFunction):
def __call__(self, input: Documents) -> Embeddings: def __call__(self, input: Documents) -> Embeddings:
if isinstance(input, str): client = OpenAI(
input = [input] base_url="http://localhost:11434/v1",
api_key=config.get("api_key", "ollama"),
embed_params = {
EmbedParams.TRUNCATE_INPUT_TOKENS: 3,
EmbedParams.RETURN_OPTIONS: {"input_text": True},
}
embedding = watson_models.Embeddings(
model_id=config.get("model"),
params=embed_params,
credentials=Credentials(
api_key=config.get("api_key"), url=config.get("api_url")
),
project_id=config.get("project_id"),
) )
try: try:
embeddings = embedding.embed_documents(input) response = client.embeddings.create(
input=input, model=model_name
)
embeddings = [item.embedding for item in response.data]
return cast(Embeddings, embeddings) return cast(Embeddings, embeddings)
except Exception as e: except Exception as e:
print("Error during Watson embedding:", e)
raise e raise e
self.embedder_config = WatsonEmbeddingFunction() self.embedder_config = OllamaEmbeddingFunction()
else: elif provider == "vertexai":
raise Exception( self.embedder_config = (
f"Unsupported embedding provider: {provider}, supported providers: [openai, azure, ollama, vertexai, google, cohere, huggingface, watson]" embedding_functions.GoogleVertexEmbeddingFunction(
model_name=model_name,
api_key=config.get("api_key"),
)
)
elif provider == "google":
self.embedder_config = (
embedding_functions.GoogleGenerativeAiEmbeddingFunction(
model_name=model_name,
api_key=config.get("api_key"),
)
)
elif provider == "cohere":
self.embedder_config = embedding_functions.CohereEmbeddingFunction(
model_name=model_name,
api_key=config.get("api_key"),
)
elif provider == "huggingface":
self.embedder_config = embedding_functions.HuggingFaceEmbeddingServer(
url=config.get("api_url"),
) )
else: else:
validate_embedding_function(self.embedder_config) raise Exception(
f"Unsupported embedding provider: {provider}, supported providers: [openai, azure, ollama, vertexai, google, cohere, huggingface]"
)
else:
validate_embedding_function(self.embedder_config) # type: ignore # used for validating embedder_config if defined a embedding function/class
self.embedder_config = self.embedder_config self.embedder_config = self.embedder_config
def _initialize_app(self): def _initialize_app(self):
@@ -269,10 +211,8 @@ class RAGStorage(BaseRAGStorage):
) )
def _create_default_embedding_function(self): def _create_default_embedding_function(self):
from chromadb.utils.embedding_functions.openai_embedding_function import ( import chromadb.utils.embedding_functions as embedding_functions
OpenAIEmbeddingFunction,
)
return OpenAIEmbeddingFunction( return embedding_functions.OpenAIEmbeddingFunction(
api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small" api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"
) )
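
As a reference for the provider branches above, a hedged sketch of the embedder configuration a crew might pass in; the `{"provider": ..., "config": {...}}` shape and the `model`/`api_key`/`api_url` keys mirror what `_set_embedder_config` reads, while the concrete values are illustrative.

```python
# Illustrative embedder configs matching the branches in _set_embedder_config.
openai_embedder = {
    "provider": "openai",
    "config": {"model": "text-embedding-3-small", "api_key": "sk-..."},  # placeholder key
}

# After this change, "ollama" goes through Ollama's OpenAI-compatible
# endpoint at http://localhost:11434/v1 rather than chromadb's helper.
ollama_embedder = {
    "provider": "ollama",
    "config": {"model": "nomic-embed-text"},
}
```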

View File

@@ -1,45 +0,0 @@
from typing import Any, Dict, Optional
from crewai.memory.memory import Memory
class UserMemory(Memory):
"""
UserMemory class for handling user memory storage and retrieval.
Inherits from the Memory class and utilizes an instance of a class that
adheres to the Storage for data storage, specifically working with
MemoryItem instances.
"""
def __init__(self, crew=None):
try:
from crewai.memory.storage.mem0_storage import Mem0Storage
except ImportError:
raise ImportError(
"Mem0 is not installed. Please install it with `pip install mem0ai`."
)
storage = Mem0Storage(type="user", crew=crew)
super().__init__(storage)
def save(
self,
value,
metadata: Optional[Dict[str, Any]] = None,
agent: Optional[str] = None,
) -> None:
# TODO: Change this function since we want to take care of the case where we save memories for the user
data = f"Remember the details about the user: {value}"
super().save(data, metadata)
def search(
self,
query: str,
limit: int = 3,
score_threshold: float = 0.35,
):
results = super().search(
query=query,
limit=limit,
score_threshold=score_threshold,
)
return results

View File

@@ -1,8 +0,0 @@
from typing import Any, Dict, Optional
class UserMemoryItem:
def __init__(self, data: Any, user: str, metadata: Optional[Dict[str, Any]] = None):
self.data = data
self.user = user
self.metadata = metadata if metadata is not None else {}

View File

@@ -20,7 +20,6 @@ from pydantic import (
from pydantic_core import PydanticCustomError
from crewai.agents.agent_builder.base_agent import BaseAgent
-from crewai.tools.base_tool import BaseTool
from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry.telemetry import Telemetry
@@ -92,7 +91,7 @@ class Task(BaseModel):
output: Optional[TaskOutput] = Field(
description="Task output, it's final result after being executed", default=None
)
-tools: Optional[List[BaseTool]] = Field(
+tools: Optional[List[Any]] = Field(
default_factory=list,
description="Tools the agent is limited to use for this task.",
)
@@ -186,7 +185,7 @@
self,
agent: Optional[BaseAgent] = None,
context: Optional[str] = None,
-tools: Optional[List[BaseTool]] = None,
+tools: Optional[List[Any]] = None,
) -> TaskOutput:
"""Execute the task synchronously."""
return self._execute_core(agent, context, tools)
@@ -203,7 +202,7 @@
self,
agent: BaseAgent | None = None,
context: Optional[str] = None,
-tools: Optional[List[BaseTool]] = None,
+tools: Optional[List[Any]] = None,
) -> Future[TaskOutput]:
"""Execute the task asynchronously."""
future: Future[TaskOutput] = Future()
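
The practical effect of loosening `List[BaseTool]` to `List[Any]` here is that a task accepts tool objects from `crewai_tools` (or any duck-typed tool) without importing crewai's own `BaseTool`. A hedged sketch, assuming `crewai_tools` is installed:

```python
# Illustrative only: attaching a crewai_tools tool to a Task now type-checks,
# since Task.tools is List[Any] after this change.
from crewai import Agent, Task
from crewai_tools import tool

@tool("Adder")
def adder(first_number: int, second_number: int) -> int:
    """Add two numbers and return the sum."""
    return first_number + second_number

writer = Agent(role="writer", goal="Write", backstory="An expert writer")
task = Task(
    description="Compute 2 + 3 with the Adder tool",
    expected_output="The number 5",
    agent=writer,
    tools=[adder],
)
```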

View File

@@ -48,10 +48,6 @@ class Telemetry:
def __init__(self):
self.ready = False
self.trace_set = False
-
-if os.getenv("OTEL_SDK_DISABLED", "false").lower() == "true":
-return
-
try:
telemetry_endpoint = "https://telemetry.crewai.com:4319"
self.resource = Resource(

View File

@@ -1 +0,0 @@
from .base_tool import BaseTool, tool

View File

@@ -0,0 +1,25 @@
from crewai.agents.agent_builder.utilities.base_agent_tool import BaseAgentTools
class AgentTools(BaseAgentTools):
"""Default tools around agent delegation"""
def tools(self):
from langchain.tools import StructuredTool
coworkers = ", ".join([f"{agent.role}" for agent in self.agents])
tools = [
StructuredTool.from_function(
func=self.delegate_work,
name="Delegate work to coworker",
description=self.i18n.tools("delegate_work").format(
coworkers=coworkers
),
),
StructuredTool.from_function(
func=self.ask_question,
name="Ask question to coworker",
description=self.i18n.tools("ask_question").format(coworkers=coworkers),
),
]
return tools
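
A small usage sketch for this (langchain-backed) variant; the two-agent setup is illustrative, and `tools()` returns the two `StructuredTool` delegation wrappers built above:

```python
# Illustrative: building the delegation tools for a pair of agents.
from crewai import Agent
from crewai.tools.agent_tools import AgentTools

researcher = Agent(role="researcher", goal="Research", backstory="An expert researcher")
writer = Agent(role="writer", goal="Write", backstory="An expert writer")

delegation_tools = AgentTools(agents=[researcher, writer]).tools()
for t in delegation_tools:
    # "Delegate work to coworker" and "Ask question to coworker"
    print(t.name)
```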

View File

@@ -1,32 +0,0 @@
from crewai.tools.base_tool import BaseTool
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.utilities import I18N
from .delegate_work_tool import DelegateWorkTool
from .ask_question_tool import AskQuestionTool
class AgentTools:
"""Manager class for agent-related tools"""
def __init__(self, agents: list[BaseAgent], i18n: I18N = I18N()):
self.agents = agents
self.i18n = i18n
def tools(self) -> list[BaseTool]:
"""Get all available agent tools"""
coworkers = ", ".join([f"{agent.role}" for agent in self.agents])
delegate_tool = DelegateWorkTool(
agents=self.agents,
i18n=self.i18n,
description=self.i18n.tools("delegate_work").format(coworkers=coworkers),
)
ask_tool = AskQuestionTool(
agents=self.agents,
i18n=self.i18n,
description=self.i18n.tools("ask_question").format(coworkers=coworkers),
)
return [delegate_tool, ask_tool]

View File

@@ -1,26 +0,0 @@
from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
from typing import Optional
from pydantic import BaseModel, Field
class AskQuestionToolSchema(BaseModel):
question: str = Field(..., description="The question to ask")
context: str = Field(..., description="The context for the question")
coworker: str = Field(..., description="The role/name of the coworker to ask")
class AskQuestionTool(BaseAgentTool):
"""Tool for asking questions to coworkers"""
name: str = "Ask question to coworker"
args_schema: type[BaseModel] = AskQuestionToolSchema
def _run(
self,
question: str,
context: str,
coworker: Optional[str] = None,
**kwargs,
) -> str:
coworker = self._get_coworker(coworker, **kwargs)
return self._execute(coworker, question, context)

View File

@@ -1,29 +0,0 @@
from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
from typing import Optional
from pydantic import BaseModel, Field
class DelegateWorkToolSchema(BaseModel):
task: str = Field(..., description="The task to delegate")
context: str = Field(..., description="The context for the task")
coworker: str = Field(
..., description="The role/name of the coworker to delegate to"
)
class DelegateWorkTool(BaseAgentTool):
"""Tool for delegating work to coworkers"""
name: str = "Delegate work to coworker"
args_schema: type[BaseModel] = DelegateWorkToolSchema
def _run(
self,
task: str,
context: str,
coworker: Optional[str] = None,
**kwargs,
) -> str:
coworker = self._get_coworker(coworker, **kwargs)
return self._execute(coworker, task, context)

View File

@@ -1,186 +0,0 @@
from abc import ABC, abstractmethod
from typing import Any, Callable, Type, get_args, get_origin
from langchain_core.tools import StructuredTool
from pydantic import BaseModel, ConfigDict, Field, validator
from pydantic import BaseModel as PydanticBaseModel
class BaseTool(BaseModel, ABC):
class _ArgsSchemaPlaceholder(PydanticBaseModel):
pass
model_config = ConfigDict()
name: str
"""The unique name of the tool that clearly communicates its purpose."""
description: str
"""Used to tell the model how/when/why to use the tool."""
args_schema: Type[PydanticBaseModel] = Field(default_factory=_ArgsSchemaPlaceholder)
"""The schema for the arguments that the tool accepts."""
description_updated: bool = False
"""Flag to check if the description has been updated."""
cache_function: Callable = lambda _args=None, _result=None: True
"""Function that will be used to determine if the tool should be cached, should return a boolean. If None, the tool will be cached."""
result_as_answer: bool = False
"""Flag to check if the tool should be the final agent answer."""
@validator("args_schema", always=True, pre=True)
def _default_args_schema(
cls, v: Type[PydanticBaseModel]
) -> Type[PydanticBaseModel]:
if not isinstance(v, cls._ArgsSchemaPlaceholder):
return v
return type(
f"{cls.__name__}Schema",
(PydanticBaseModel,),
{
"__annotations__": {
k: v for k, v in cls._run.__annotations__.items() if k != "return"
},
},
)
def model_post_init(self, __context: Any) -> None:
self._generate_description()
super().model_post_init(__context)
def run(
self,
*args: Any,
**kwargs: Any,
) -> Any:
print(f"Using Tool: {self.name}")
return self._run(*args, **kwargs)
@abstractmethod
def _run(
self,
*args: Any,
**kwargs: Any,
) -> Any:
"""Here goes the actual implementation of the tool."""
def to_langchain(self) -> StructuredTool:
self._set_args_schema()
return StructuredTool(
name=self.name,
description=self.description,
args_schema=self.args_schema,
func=self._run,
)
@classmethod
def from_langchain(cls, tool: StructuredTool) -> "BaseTool":
if cls == Tool:
if tool.func is None:
raise ValueError("StructuredTool must have a callable 'func'")
return Tool(
name=tool.name,
description=tool.description,
args_schema=tool.args_schema,
func=tool.func,
)
raise NotImplementedError(f"from_langchain not implemented for {cls.__name__}")
def _set_args_schema(self):
if self.args_schema is None:
class_name = f"{self.__class__.__name__}Schema"
self.args_schema = type(
class_name,
(PydanticBaseModel,),
{
"__annotations__": {
k: v
for k, v in self._run.__annotations__.items()
if k != "return"
},
},
)
def _generate_description(self):
args_schema = {
name: {
"description": field.description,
"type": BaseTool._get_arg_annotations(field.annotation),
}
for name, field in self.args_schema.model_fields.items()
}
self.description = f"Tool Name: {self.name}\nTool Arguments: {args_schema}\nTool Description: {self.description}"
@staticmethod
def _get_arg_annotations(annotation: type[Any] | None) -> str:
if annotation is None:
return "None"
origin = get_origin(annotation)
args = get_args(annotation)
if origin is None:
return (
annotation.__name__
if hasattr(annotation, "__name__")
else str(annotation)
)
if args:
args_str = ", ".join(BaseTool._get_arg_annotations(arg) for arg in args)
return f"{origin.__name__}[{args_str}]"
return origin.__name__
class Tool(BaseTool):
func: Callable
"""The function that will be executed when the tool is called."""
def _run(self, *args: Any, **kwargs: Any) -> Any:
return self.func(*args, **kwargs)
def to_langchain(
tools: list[BaseTool | StructuredTool],
) -> list[StructuredTool]:
return [t.to_langchain() if isinstance(t, BaseTool) else t for t in tools]
def tool(*args):
"""
Decorator to create a tool from a function.
"""
def _make_with_name(tool_name: str) -> Callable:
def _make_tool(f: Callable) -> BaseTool:
if f.__doc__ is None:
raise ValueError("Function must have a docstring")
if f.__annotations__ is None:
raise ValueError("Function must have type annotations")
class_name = "".join(tool_name.split()).title()
args_schema = type(
class_name,
(PydanticBaseModel,),
{
"__annotations__": {
k: v for k, v in f.__annotations__.items() if k != "return"
},
},
)
return Tool(
name=tool_name,
description=f.__doc__,
func=f,
args_schema=args_schema,
)
return _make_tool
if len(args) == 1 and callable(args[0]):
return _make_with_name(args[0].__name__)(args[0])
if len(args) == 1 and isinstance(args[0], str):
return _make_with_name(args[0])
raise ValueError("Invalid arguments")
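
For reference, the removed decorator was used roughly like this (a sketch based on the contract above: the wrapped function must have a docstring and type annotations, which become the args schema):

```python
# Sketch of the removed API; in the old tree it was exported as
# `from crewai.tools import tool` (see the deleted __init__ above).
from crewai.tools import tool

@tool("Multiplier")
def multiplier(first_number: int, second_number: int) -> float:
    """Multiply two numbers and return the result."""
    return first_number * second_number

print(multiplier.name)  # "Multiplier"
print(multiplier.run(first_number=2, second_number=6))  # logs the tool use, prints 12
```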

View File

@@ -10,7 +10,6 @@ import crewai.utilities.events as events
from crewai.agents.tools_handler import ToolsHandler
from crewai.task import Task
from crewai.telemetry import Telemetry
-from crewai.tools import BaseTool
from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
from crewai.tools.tool_usage_events import ToolUsageError, ToolUsageFinished
from crewai.utilities import I18N, Converter, ConverterError, Printer
@@ -50,7 +49,7 @@ class ToolUsage:
def __init__(
self,
tools_handler: ToolsHandler,
-tools: List[BaseTool],
+tools: List[Any],
original_tools: List[Any],
tools_description: str,
tools_names: str,
@@ -299,7 +298,22 @@ class ToolUsage:
"""Render the tool name and description in plain text."""
descriptions = []
for tool in self.tools:
-descriptions.append(tool.description)
+args = {
+name: {
+"description": field.description,
+"type": field.annotation.__name__,
+}
+for name, field in tool.args_schema.model_fields.items()
+}
+descriptions.append(
+"\n".join(
+[
+f"Tool Name: {tool.name.lower()}",
+f"Tool Description: {tool.description}",
+f"Tool Arguments: {args}",
+]
+)
+)
return "\n--\n".join(descriptions)
def _function_calling(self, tool_string: str):

View File

@@ -8,7 +8,6 @@ class UsageMetrics(BaseModel):
Attributes:
total_tokens: Total number of tokens used.
prompt_tokens: Number of tokens used in prompts.
-cached_prompt_tokens: Number of cached prompt tokens used.
completion_tokens: Number of tokens used in completions.
successful_requests: Number of successful requests made.
"""
@@ -17,9 +16,6 @@
prompt_tokens: int = Field(
default=0, description="Number of tokens used in prompts."
)
-cached_prompt_tokens: int = Field(
-default=0, description="Number of cached prompt tokens used."
-)
completion_tokens: int = Field(
default=0, description="Number of tokens used in completions."
)
@@ -36,6 +32,5 @@
"""
self.total_tokens += usage_metrics.total_tokens
self.prompt_tokens += usage_metrics.prompt_tokens
-self.cached_prompt_tokens += usage_metrics.cached_prompt_tokens
self.completion_tokens += usage_metrics.completion_tokens
self.successful_requests += usage_metrics.successful_requests
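
A quick sketch of the aggregation behavior after this change, assuming the surrounding method is the usual `add_usage_metrics` and the import path is `crewai.types.usage_metrics` (both are assumptions, not shown in this hunk):

```python
# Illustrative: only four counters remain; cached_prompt_tokens is gone.
from crewai.types.usage_metrics import UsageMetrics

a = UsageMetrics(total_tokens=100, prompt_tokens=70, completion_tokens=30, successful_requests=1)
b = UsageMetrics(total_tokens=50, prompt_tokens=20, completion_tokens=30, successful_requests=1)
a.add_usage_metrics(b)
print(a.total_tokens, a.successful_requests)  # 150 2
```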

View File

@@ -2,14 +2,13 @@ from datetime import datetime, date
import json
from uuid import UUID
from pydantic import BaseModel
-from decimal import Decimal
class CrewJSONEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, BaseModel):
return self._handle_pydantic_model(obj)
-elif isinstance(obj, UUID) or isinstance(obj, Decimal):
+elif isinstance(obj, UUID):
return str(obj)
elif isinstance(obj, datetime) or isinstance(obj, date):

View File

@@ -16,11 +16,7 @@ class FileHandler:
def log(self, **kwargs):
now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
-message = (
-f"{now}: "
-+ ", ".join([f'{key}="{value}"' for key, value in kwargs.items()])
-+ "\n"
-)
+message = f"{now}: " + ", ".join([f"{key}=\"{value}\"" for key, value in kwargs.items()]) + "\n"
with open(self._path, "a", encoding="utf-8") as file:
file.write(message + "\n")
@@ -67,7 +63,7 @@ class PickleHandler:
with open(self.file_path, "rb") as file:
try:
-return pickle.load(file)  # nosec
+return pickle.load(file)
except EOFError:
return {}  # Return an empty dictionary if the file is empty or corrupted
except Exception:

View File

@@ -1,5 +1,5 @@
from litellm.integrations.custom_logger import CustomLogger
-from litellm.types.utils import Usage
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
@@ -11,11 +11,8 @@ class TokenCalcHandler(CustomLogger):
if self.token_cost_process is None:
return
-usage: Usage = response_obj["usage"]
self.token_cost_process.sum_successful_requests(1)
-self.token_cost_process.sum_prompt_tokens(usage.prompt_tokens)
-self.token_cost_process.sum_completion_tokens(usage.completion_tokens)
-if usage.prompt_tokens_details:
-self.token_cost_process.sum_cached_prompt_tokens(
-usage.prompt_tokens_details.cached_tokens
-)
+self.token_cost_process.sum_prompt_tokens(response_obj["usage"].prompt_tokens)
+self.token_cost_process.sum_completion_tokens(
+response_obj["usage"].completion_tokens
+)

View File

@@ -5,6 +5,7 @@ from unittest import mock
from unittest.mock import patch
import pytest
+from crewai_tools import tool
from crewai import Agent, Crew, Task
from crewai.agents.cache import CacheHandler
@@ -13,7 +14,6 @@ from crewai.agents.parser import AgentAction, CrewAgentParser, OutputParserExcep
from crewai.llm import LLM
from crewai.tools.tool_calling import InstructorToolCalling
from crewai.tools.tool_usage import ToolUsage
-from crewai.tools import tool
from crewai.tools.tool_usage_events import ToolUsageFinished
from crewai.utilities import RPMController
from crewai.utilities.events import Emitter
@@ -277,10 +277,9 @@ def test_cache_hitting():
"multiplier-{'first_number': 12, 'second_number': 3}": 36,
}
-with (
-patch.object(CacheHandler, "read") as read,
-patch.object(Emitter, "emit") as emit,
-):
+with patch.object(CacheHandler, "read") as read, patch.object(
+Emitter, "emit"
+) as emit:
read.return_value = "0"
task = Task(
description="What is 2 times 6? Ignore correctness and just return the result of the multiplication tool, you must use the tool.",
@@ -605,7 +604,7 @@ def test_agent_respect_the_max_rpm_set(capsys):
def test_agent_respect_the_max_rpm_set_over_crew_rpm(capsys):
from unittest.mock import patch
-from crewai.tools import tool
+from crewai_tools import tool
@tool
def get_final_answer() -> float:
@@ -643,7 +642,7 @@ def test_agent_respect_the_max_rpm_set_over_crew_rpm(capsys):
def test_agent_without_max_rpm_respet_crew_rpm(capsys):
from unittest.mock import patch
-from crewai.tools import tool
+from crewai_tools import tool
@tool
def get_final_answer() -> float:
@@ -697,7 +696,7 @@ def test_agent_without_max_rpm_respet_crew_rpm(capsys):
def test_agent_error_on_parsing_tool(capsys):
from unittest.mock import patch
-from crewai.tools import tool
+from crewai_tools import tool
@tool
def get_final_answer() -> float:
@@ -740,7 +739,7 @@ def test_agent_error_on_parsing_tool(capsys):
def test_agent_remembers_output_format_after_using_tools_too_many_times():
from unittest.mock import patch
-from crewai.tools import tool
+from crewai_tools import tool
@tool
def get_final_answer() -> float:
@@ -864,16 +863,11 @@ def test_agent_function_calling_llm():
from crewai.tools.tool_usage import ToolUsage
-with (
-patch.object(
-instructor, "from_litellm", wraps=instructor.from_litellm
-) as mock_from_litellm,
-patch.object(
-ToolUsage,
-"_original_tool_calling",
-side_effect=Exception("Forced exception"),
-) as mock_original_tool_calling,
-):
+with patch.object(
+instructor, "from_litellm", wraps=instructor.from_litellm
+) as mock_from_litellm, patch.object(
+ToolUsage, "_original_tool_calling", side_effect=Exception("Forced exception")
+) as mock_original_tool_calling:
crew.kickoff()
mock_from_litellm.assert_called()
mock_original_tool_calling.assert_called()
@@ -900,7 +894,7 @@ def test_agent_count_formatting_error():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_tool_result_as_answer_is_the_final_answer_for_the_agent():
-from crewai.tools import BaseTool
+from crewai_tools import BaseTool
class MyCustomTool(BaseTool):
name: str = "Get Greetings"
@@ -930,7 +924,7 @@ def test_tool_result_as_answer_is_the_final_answer_for_the_agent():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_tool_usage_information_is_appended_to_agent():
-from crewai.tools import BaseTool
+from crewai_tools import BaseTool
class MyCustomTool(BaseTool):
name: str = "Decide Greetings"

View File

@@ -3,7 +3,7 @@
import pytest
from crewai.agent import Agent
-from crewai.tools.agent_tools.agent_tools import AgentTools
+from crewai.tools.agent_tools import AgentTools
researcher = Agent(
role="researcher",
@@ -11,14 +11,12 @@ researcher = Agent(
backstory="You're an expert researcher, specialized in technology",
allow_delegation=False,
)
-tools = AgentTools(agents=[researcher]).tools()
-delegate_tool = tools[0]
-ask_tool = tools[1]
+tools = AgentTools(agents=[researcher])
@pytest.mark.vcr(filter_headers=["authorization"])
def test_delegate_work():
-result = delegate_tool.run(
+result = tools.delegate_work(
coworker="researcher",
task="share your take on AI Agents",
context="I heard you hate them",
@@ -32,8 +30,8 @@ def test_delegate_work():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_delegate_work_with_wrong_co_worker_variable():
-result = delegate_tool.run(
-coworker="researcher",
+result = tools.delegate_work(
+co_worker="researcher",
task="share your take on AI Agents",
context="I heard you hate them",
)
@@ -46,7 +44,7 @@ def test_delegate_work_with_wrong_co_worker_variable():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_ask_question():
-result = ask_tool.run(
+result = tools.ask_question(
coworker="researcher",
question="do you hate AI Agents?",
context="I heard you LOVE them",
@@ -60,8 +58,8 @@ def test_ask_question():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_ask_question_with_wrong_co_worker_variable():
-result = ask_tool.run(
-coworker="researcher",
+result = tools.ask_question(
+co_worker="researcher",
question="do you hate AI Agents?",
context="I heard you LOVE them",
)
@@ -74,8 +72,8 @@ def test_ask_question_with_wrong_co_worker_variable():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_delegate_work_withwith_coworker_as_array():
-result = delegate_tool.run(
-coworker="[researcher]",
+result = tools.delegate_work(
+co_worker="[researcher]",
task="share your take on AI Agents",
context="I heard you hate them",
)
@@ -88,8 +86,8 @@ def test_delegate_work_withwith_coworker_as_array():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_ask_question_with_coworker_as_array():
-result = ask_tool.run(
-coworker="[researcher]",
+result = tools.ask_question(
+co_worker="[researcher]",
question="do you hate AI Agents?",
context="I heard you LOVE them",
)
@@ -101,7 +99,7 @@ def test_ask_question_with_coworker_as_array():
def test_delegate_work_to_wrong_agent():
-result = ask_tool.run(
+result = tools.ask_question(
coworker="writer",
question="share your take on AI Agents",
context="I heard you hate them",
@@ -114,7 +112,7 @@ def test_delegate_work_to_wrong_agent():
def test_ask_question_to_wrong_agent():
-result = ask_tool.run(
+result = tools.ask_question(
coworker="writer",
question="do you hate AI Agents?",
context="I heard you LOVE them",

View File

@@ -2,7 +2,6 @@ import hashlib
from typing import Any, List, Optional
from crewai.agents.agent_builder.base_agent import BaseAgent
-from crewai.tools.base_tool import BaseTool
from pydantic import BaseModel
@@ -11,13 +10,13 @@ class TestAgent(BaseAgent):
self,
task: Any,
context: Optional[str] = None,
-tools: Optional[List[BaseTool]] = None,
+tools: Optional[List[Any]] = None,
) -> str:
return ""
def create_agent_executor(self, tools=None) -> None: ...
-def _parse_tools(self, tools: List[BaseTool]) -> List[BaseTool]:
+def _parse_tools(self, tools: List[Any]) -> List[Any]:
return []
def get_delegation_tools(self, agents: List["BaseAgent"]): ...

View File

@@ -10,8 +10,7 @@ interactions:
criteria for your final answer: 1 bullet point about dog that''s under 15 words.\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
-Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
-["\nObservation:"], "stream": false}'
+Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o"}'
headers:
accept:
- application/json
@@ -20,50 +19,49 @@ interactions:
connection:
- keep-alive
content-length:
-- '919'
+- '869'
content-type:
- application/json
+cookie:
+- __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
+_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
-- OpenAI/Python 1.52.1
+- OpenAI/Python 1.47.0
x-stainless-arch:
-- x64
+- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
-- Linux
+- MacOS
x-stainless-package-version:
-- 1.52.1
+- 1.47.0
x-stainless-raw-response:
- 'true'
-x-stainless-retry-count:
-- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
-- 3.11.9
+- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
-body:
-string: !!binary |
-H4sIAAAAAAAAA4xSy27bMBC86ysWPFuB7ciV7VuAIkUObQ+59QFhTa0kttQuS9Jx08D/XkhyLAVJ
-gV4EaGdnMLPLlwxAkVUbUKbVYrrg8rsPsg4P+Orxs9XvPz0U8eP966dS6sdo3wa1GBR++x2NnFRX
-WjpnKRrhEdaeMFKvuiyul3m+uV7lA9BJRbanNS5muWSdYZOtFqs8WxTZcnNmt2I0BbWDrwkAwNPw
-7X1yRb/VDhbp86SjELAhtbssASgvtp8oDMGEiBxVOoFaOBIP1u+A5QgaGRrzQIDQ9LYBORzJA3zj
-W8No4Wb438F7aQKgJ7DyiBb6zMhGOKRA3CJrww10xBEttIQ2toBcgTyQR2vhSNZmezLcXM39eKoP
-Afub8MHa8/x0CWilcV724Yxf5rVhE9rSEwbhPkyI4tSAnhKA78MhDy9uo5yXzsUyyk/iMHSzHvXU
-1N+Ejo0BqCgR7Yy13aZv6JUVRTQ2zKpQGnVL1USdesNDZWQGJLPUr928pT0mN9z8j/wEaE0uUlU6
-T5XRLxNPa5765/2vtcuVB8MqPIZIXVkbbsg7b8bHVbuyXm9xs8xXRa2SU/IXAAD//wMAq2ZCBWoD
-AAA=
+content: "{\n \"id\": \"chatcmpl-AB7auGDrAVE0iXSBBhySZp3xE8gvP\",\n \"object\":
+\"chat.completion\",\n \"created\": 1727214164,\n \"model\": \"gpt-4o-2024-05-13\",\n
+\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
+\"assistant\",\n \"content\": \"I now can give a great answer\\nFinal
+Answer: Dogs are unparalleled in loyalty and companionship to humans.\",\n \"refusal\":
+null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
+\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 175,\n \"completion_tokens\":
+21,\n \"total_tokens\": 196,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
+0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
-- 8e19bf36db158761-GRU
+- 8c85f22ddda01cf3-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -71,27 +69,19 @@ interactions:
Content-Type:
- application/json
Date:
-- Tue, 12 Nov 2024 21:52:04 GMT
+- Tue, 24 Sep 2024 21:42:44 GMT
Server:
- cloudflare
-Set-Cookie:
-- __cf_bm=MkvcnvacGpTyn.y0OkFRoFXuAwg4oxjMhViZJTt9mw0-1731448324-1.0.1.1-oekkH_B0xOoPnIFw15LpqFCkZ2cu7VBTJVLDGylan4I67NjX.tlPvOiX9kvtP5Acewi28IE2IwlwtrZWzCH3vw;
-path=/; expires=Tue, 12-Nov-24 22:22:04 GMT; domain=.api.openai.com; HttpOnly;
-Secure; SameSite=None
-- _cfuvid=4.17346mfw5npZfYNbCx3Vj1VAVPy.tH0Jm2gkTteJ8-1731448324998-0.0.1.1-604800000;
-path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
-alt-svc:
-- h3=":443"; ma=86400
openai-organization:
-- user-tqfegqsiobpvvjmn0giaipdq
+- crewai-iuxna1
openai-processing-ms:
-- '601'
+- '349'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -99,20 +89,19 @@ interactions:
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
-- '200000'
+- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
-- '199793'
+- '29999792'
x-ratelimit-reset-requests:
-- 8.64s
+- 6ms
x-ratelimit-reset-tokens:
-- 62ms
+- 0s
x-request-id:
-- req_77fb166b4e272bfd45c37c08d2b93b0c
+- req_4c8cd76fdfba7b65e5ce85397b33c22b
-status:
-code: 200
-message: OK
+http_version: HTTP/1.1
+status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are cat Researcher. You
have a lot of experience with cat.\nYour personal goal is: Express hot takes
@@ -124,8 +113,7 @@ interactions:
criteria for your final answer: 1 bullet point about cat that''s under 15 words.\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
-Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
-["\nObservation:"], "stream": false}'
+Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o"}'
headers:
accept:
- application/json
@@ -134,53 +122,49 @@ interactions:
connection:
- keep-alive
content-length:
-- '919'
+- '869'
content-type:
- application/json
cookie:
-- __cf_bm=MkvcnvacGpTyn.y0OkFRoFXuAwg4oxjMhViZJTt9mw0-1731448324-1.0.1.1-oekkH_B0xOoPnIFw15LpqFCkZ2cu7VBTJVLDGylan4I67NjX.tlPvOiX9kvtP5Acewi28IE2IwlwtrZWzCH3vw;
-_cfuvid=4.17346mfw5npZfYNbCx3Vj1VAVPy.tH0Jm2gkTteJ8-1731448324998-0.0.1.1-604800000
+- __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
+_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
-- OpenAI/Python 1.52.1
+- OpenAI/Python 1.47.0
x-stainless-arch:
-- x64
+- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
-- Linux
+- MacOS
x-stainless-package-version:
-- 1.52.1
+- 1.47.0
x-stainless-raw-response:
- 'true'
-x-stainless-retry-count:
-- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
-- 3.11.9
+- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
-body:
-string: !!binary |
-H4sIAAAAAAAAA4xSy27bMBC86ysWPFuB7M+4BW1B1Ul6C3BkWPbQ9sEuKJW0klqSS5LKug38
-74Ukx1LQFOhFgHZ2BjO7fMwAhK7EDQjVSFatM/nNR7qnL9vw8O7D10/q7v1Pa1L+PXz+sKWo
-vorlwKDND1T8zLpS1DqLrMmNsPIoGXvV1atlURTXy1c3A9BShWagNY7zkvJWO50vi2WZF6/y
-5fWJ3ZBWGMQNfMsAAB6H75DTVfggbqBYPk86DEHWKG7OSwDCkxkmQoagA0vHYjGBihyjG6Jf
-ozH0Au5pj6Ad1ORBgkc5dOCI9A49gozRBOkY/Wu6go2sYC8d1EgO7j1rB9rCPXlTvVj4B6wR
-IUjl6XtYnI0DOg/YB9x69CYM6ClBwpBO8n5wLeCqHsVuBxdz54h1CnIo75K1J/xwrmqpcZ62
-4aSf8dqMaoGiRcvo/6s9oDZqaJaU7xPJzMkqf/MeaxvX/o/9RCiFjrEqncdKq6eNpzGPwwP5
-19h5x0NgEfaBsS1r7Wr0zuvx9dSuXKuiLlRx/VKK7JD9AQAA//8DAPIxxTtpAwAA
+content: "{\n \"id\": \"chatcmpl-AB7auNbAqjT3rgBX92rhxBLuhaLBj\",\n \"object\":
+\"chat.completion\",\n \"created\": 1727214164,\n \"model\": \"gpt-4o-2024-05-13\",\n
+\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
+\"assistant\",\n \"content\": \"Thought: I now can give a great answer\\nFinal
+Answer: Cats are highly independent, agile, and intuitive creatures beloved
+by millions worldwide.\",\n \"refusal\": null\n },\n \"logprobs\":
+null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
+175,\n \"completion_tokens\": 28,\n \"total_tokens\": 203,\n \"completion_tokens_details\":
+{\n \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
-- 8e19bf3fae118761-GRU
+- 8c85f2321c1c1cf3-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -188,7 +172,7 @@ interactions:
Content-Type:
- application/json
Date:
-- Tue, 12 Nov 2024 21:52:05 GMT
+- Tue, 24 Sep 2024 21:42:45 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -197,12 +181,10 @@ interactions:
- nosniff
access-control-expose-headers:
- X-Request-ID
-alt-svc:
-- h3=":443"; ma=86400
openai-organization:
-- user-tqfegqsiobpvvjmn0giaipdq
+- crewai-iuxna1
openai-processing-ms:
-- '464'
+- '430'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -210,20 +192,19 @@ interactions:
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
-- '200000'
+- '30000000'
x-ratelimit-remaining-requests:
-- '9998'
+- '9999'
x-ratelimit-remaining-tokens:
-- '199792'
+- '29999792'
x-ratelimit-reset-requests:
-- 16.369s
+- 6ms
x-ratelimit-reset-tokens:
-- 62ms
+- 0s
x-request-id:
-- req_91706b23d0ef23458ba63ec18304cd28
+- req_ace859b7d9e83d9fa7753ce23bb03716
-status:
-code: 200
-message: OK
+http_version: HTTP/1.1
+status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are apple Researcher.
You have a lot of experience with apple.\nYour personal goal is: Express hot
@@ -236,7 +217,7 @@ interactions:
under 15 words.\nyou MUST return the actual complete content as the final answer,
not a summary.\n\nBegin! This is VERY important to you, use the tools available
and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
-"gpt-4o-mini", "stop": ["\nObservation:"], "stream": false}'
+"gpt-4o"}'
headers:
accept:
- application/json
@@ -245,53 +226,49 @@ interactions:
connection:
- keep-alive
content-length:
-- '929'
+- '879'
content-type:
- application/json
cookie:
-- __cf_bm=MkvcnvacGpTyn.y0OkFRoFXuAwg4oxjMhViZJTt9mw0-1731448324-1.0.1.1-oekkH_B0xOoPnIFw15LpqFCkZ2cu7VBTJVLDGylan4I67NjX.tlPvOiX9kvtP5Acewi28IE2IwlwtrZWzCH3vw;
-_cfuvid=4.17346mfw5npZfYNbCx3Vj1VAVPy.tH0Jm2gkTteJ8-1731448324998-0.0.1.1-604800000
+- __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
+_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
-- OpenAI/Python 1.52.1
+- OpenAI/Python 1.47.0
x-stainless-arch:
-- x64
+- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
-- Linux
+- MacOS
x-stainless-package-version:
-- 1.52.1
+- 1.47.0
x-stainless-raw-response:
- 'true'
-x-stainless-retry-count:
-- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
-- 3.11.9
+- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
-body:
-string: !!binary |
-H4sIAAAAAAAAA4xSPW/bMBDd9SsOXLpIgeTITarNS4t26JJubSHQ5IliSh1ZHv0RBP7vhSTH
-ctAU6CJQ7919vHd3zxmAsFo0IFQvkxqCKzYPaf1b2/hhW+8PR9N9Kh9W5Zdhjebr4zeRjx1+
-+4gqvXTdKT8Eh8l6mmkVUSYcVau726qu729X7ydi8Brd2GZCKmpfDJZssSpXdVHeFdX9md17
-qzCKBr5nAADP03f0SRqPooEyf0EGZJYGRXMpAhDRuxERktlykpREvpDKU0KarH8G8gdQksDY
-PYIEM9oGSXzACPCDPlqSDjbTfwObEBy+Y0Dl+YkTDmApoYkyIUMvoz7IiDmw79L8kqSBMe7H
-MMAoB4fM7ikHpF6SsmRgxxgBjwGjRVJ4c+00YrdjOU6Lds6d8dMluvMmRL/lM3/BO0uW+zai
-ZE9jTE4+iIk9ZQA/pxHvXk1NhOiHkNrkfyHxtLX1rCeWzS7svEsAkXyS7govq/wNvVZjktbx
-1ZKEkqpHvbQuG5U7bf0VkV2l/tvNW9pzckvmf+QXQikMCXUbImqrXideyiKOh/+vssuUJ8Ni
-PpO2s2Qwhmjns+tCW25lqatV3VUiO2V/AAAA//8DAPtpFJCEAwAA
+content: "{\n \"id\": \"chatcmpl-AB7avZ0yqY18ukQS7SnLkZydsx72b\",\n \"object\":
+\"chat.completion\",\n \"created\": 1727214165,\n \"model\": \"gpt-4o-2024-05-13\",\n
+\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
+\"assistant\",\n \"content\": \"I now can give a great answer.\\n\\nFinal
+Answer: Apples are incredibly versatile, nutritious, and a staple in diets globally.\",\n
+\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
+\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 175,\n \"completion_tokens\":
+25,\n \"total_tokens\": 200,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
+0\n }\n },\n \"system_fingerprint\": \"fp_a5d11b2ef2\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
-- 8e19bf447ba48761-GRU
+- 8c85f2369a761cf3-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -299,7 +276,7 @@ interactions:
Content-Type:
- application/json
Date:
-- Tue, 12 Nov 2024 21:52:06 GMT
+- Tue, 24 Sep 2024 21:42:46 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -308,12 +285,10 @@ interactions:
- nosniff
access-control-expose-headers:
- X-Request-ID
-alt-svc:
-- h3=":443"; ma=86400
openai-organization:
-- user-tqfegqsiobpvvjmn0giaipdq
+- crewai-iuxna1
openai-processing-ms:
-- '655'
+- '389'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -321,18 +296,17 @@ interactions:
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
-- '200000'
+- '30000000'
x-ratelimit-remaining-requests:
-- '9997'
+- '9999'
x-ratelimit-remaining-tokens:
-- '199791'
+- '29999791'
x-ratelimit-reset-requests:
-- 24.239s
+- 6ms
x-ratelimit-reset-tokens:
-- 62ms
+- 0s
x-request-id:
-- req_a228208b0e965ecee334a6947d6c9e7c
+- req_0167388f0a7a7f1a1026409834ceb914
-status:
-code: 200
-message: OK
+http_version: HTTP/1.1
+status_code: 200
version: 1

View File

@@ -1,205 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "Hello, world!"}], "model": "gpt-4o-mini",
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '101'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA4xSwWrcMBS8+ytedY6LvWvYZi8lpZSkBJLSQiChGK307FUi66nSc9Ml7L8H2e56
l7bQiw8zb8Yzg14yAGG0WINQW8mq8za/+Oqv5MUmXv+8+/Hl3uO3j59u1efreHO+/PAszpKCNo+o
+LfqraLOW2RDbqRVQMmYXMvVsqyWy1VVDERHGm2StZ7zivLOOJMvikWVF6u8fDept2QURrGGhwwA
4GX4ppxO4y+xhsFrQDqMUbYo1ocjABHIJkTIGE1k6ViczaQix+iG6JdoLb2BS3oGJR1cwSiAHfXA
pOXu/bEwYNNHmcK73toJ3x+SWGp9oE2c+APeGGfitg4oI7n018jkxcDuM4DvQ+P+pITwgTrPNdMT
umRYlqOdmHeeyfOJY2JpZ3gxjXRqVmtkaWw8GkwoqbaoZ+W8ruy1oSMiO6r8Z5a/eY+1jWv/x34m
lELPqGsfUBt12nc+C5ge4b/ODhMPgUXcRcauboxrMfhgxifQ+LrYyEKXi6opRbbPXgEAAP//AwAM
DMWoEAMAAA==
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8e185b2c1b790303-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 12 Nov 2024 17:49:00 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=l.QrRLcNZkML_KSfxjir6YCV35B8GNTitBTNh7cPGc4-1731433740-1.0.1.1-j1ejlmykyoI8yk6i6pQjtPoovGzfxI2f5vG6u0EqodQMjCvhbHfNyN_wmYkeT._BMvFi.zDQ8m_PqEHr8tSdEQ;
path=/; expires=Tue, 12-Nov-24 18:19:00 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=jcCDyMK__Fd0V5DMeqt9yXdlKc7Hsw87a1K01pZu9l0-1731433740848-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- user-tqfegqsiobpvvjmn0giaipdq
openai-processing-ms:
- '322'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '200000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '199978'
x-ratelimit-reset-requests:
- 8.64s
x-ratelimit-reset-tokens:
- 6ms
x-request-id:
- req_037288753767e763a51a04eae757ca84
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Hello, world from another agent!"}],
"model": "gpt-4o-mini", "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '120'
content-type:
- application/json
cookie:
- __cf_bm=l.QrRLcNZkML_KSfxjir6YCV35B8GNTitBTNh7cPGc4-1731433740-1.0.1.1-j1ejlmykyoI8yk6i6pQjtPoovGzfxI2f5vG6u0EqodQMjCvhbHfNyN_wmYkeT._BMvFi.zDQ8m_PqEHr8tSdEQ;
_cfuvid=jcCDyMK__Fd0V5DMeqt9yXdlKc7Hsw87a1K01pZu9l0-1731433740848-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA4xSy27bMBC86yu2PFuBZAt14UvRU5MA7aVAEKAIBJpcSUwoLkuu6jiB/z3QI5aM
tkAvPMzsDGZ2+ZoACKPFDoRqJKvW2/TLD3+z//oiD8dfL7d339zvW125x9zX90/3mVj1Cto/ouJ3
1ZWi1ltkQ26kVUDJ2Lvm201ebDbbIh+IljTaXlZ7TgtKW+NMus7WRZpt0/zTpG7IKIxiBz8TAIDX
4e1zOo3PYgfZ6h1pMUZZo9idhwBEINsjQsZoIkvHYjWTihyjG6Jfo7X0Ab4bhcAEipxDxXAw3IB0
xA0GkDU6voJrOoCSDm5gNIUjdcCk5fHz0jxg1UXZF3SdtRN+Oqe1VPtA+zjxZ7wyzsSmDCgjuT5Z
ZPJiYE8JwMOwle6iqPCBWs8l0xO63jAvRjsx32JBfpxIJpZ2xjfTJi/dSo0sjY2LrQolVYN6Vs4n
kJ02tCCSRec/w/zNe+xtXP0/9jOhFHpGXfqA2qjLwvNYwP6n/mvsvOMhsIjHyNiWlXE1Bh/M+E8q
X2Z7mel8XVS5SE7JGwAAAP//AwA/cK4yNQMAAA==
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8e185b31398a0303-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 12 Nov 2024 17:49:02 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- user-tqfegqsiobpvvjmn0giaipdq
openai-processing-ms:
- '889'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '200000'
x-ratelimit-remaining-requests:
- '9998'
x-ratelimit-remaining-tokens:
- '199975'
x-ratelimit-reset-requests:
- 16.489s
x-ratelimit-reset-tokens:
- 7ms
x-request-id:
- req_bde3810b36a4859688e53d1df64bdd20
status:
code: 200
message: OK
version: 1

View File

@@ -1,109 +0,0 @@
import unittest
import json
import tempfile
import shutil
from pathlib import Path
from crewai.cli.config import Settings
class TestSettings(unittest.TestCase):
def setUp(self):
self.test_dir = Path(tempfile.mkdtemp())
self.config_path = self.test_dir / "settings.json"
def tearDown(self):
shutil.rmtree(self.test_dir)
def test_empty_initialization(self):
settings = Settings(config_path=self.config_path)
self.assertIsNone(settings.tool_repository_username)
self.assertIsNone(settings.tool_repository_password)
def test_initialization_with_data(self):
settings = Settings(
config_path=self.config_path,
tool_repository_username="user1"
)
self.assertEqual(settings.tool_repository_username, "user1")
self.assertIsNone(settings.tool_repository_password)
def test_initialization_with_existing_file(self):
self.config_path.parent.mkdir(parents=True, exist_ok=True)
with self.config_path.open("w") as f:
json.dump({"tool_repository_username": "file_user"}, f)
settings = Settings(config_path=self.config_path)
self.assertEqual(settings.tool_repository_username, "file_user")
def test_merge_file_and_input_data(self):
self.config_path.parent.mkdir(parents=True, exist_ok=True)
with self.config_path.open("w") as f:
json.dump({
"tool_repository_username": "file_user",
"tool_repository_password": "file_pass"
}, f)
settings = Settings(
config_path=self.config_path,
tool_repository_username="new_user"
)
self.assertEqual(settings.tool_repository_username, "new_user")
self.assertEqual(settings.tool_repository_password, "file_pass")
def test_dump_new_settings(self):
settings = Settings(
config_path=self.config_path,
tool_repository_username="user1"
)
settings.dump()
with self.config_path.open("r") as f:
saved_data = json.load(f)
self.assertEqual(saved_data["tool_repository_username"], "user1")
def test_update_existing_settings(self):
self.config_path.parent.mkdir(parents=True, exist_ok=True)
with self.config_path.open("w") as f:
json.dump({"existing_setting": "value"}, f)
settings = Settings(
config_path=self.config_path,
tool_repository_username="user1"
)
settings.dump()
with self.config_path.open("r") as f:
saved_data = json.load(f)
self.assertEqual(saved_data["existing_setting"], "value")
self.assertEqual(saved_data["tool_repository_username"], "user1")
def test_none_values(self):
settings = Settings(
config_path=self.config_path,
tool_repository_username=None
)
settings.dump()
with self.config_path.open("r") as f:
saved_data = json.load(f)
self.assertIsNone(saved_data.get("tool_repository_username"))
def test_invalid_json_in_config(self):
self.config_path.parent.mkdir(parents=True, exist_ok=True)
with self.config_path.open("w") as f:
f.write("invalid json")
try:
settings = Settings(config_path=self.config_path)
self.assertIsNone(settings.tool_repository_username)
except json.JSONDecodeError:
self.fail("Settings initialization should handle invalid JSON")
def test_empty_config_file(self):
self.config_path.parent.mkdir(parents=True, exist_ok=True)
self.config_path.touch()
settings = Settings(config_path=self.config_path)
self.assertIsNone(settings.tool_repository_username)

View File

@@ -82,7 +82,6 @@ def test_install_success(mock_get, mock_subprocess_run):
         capture_output=False,
         text=True,
         check=True,
-        env=unittest.mock.ANY
     )
     assert "Succesfully installed sample-tool" in output

View File

@@ -456,7 +456,7 @@ def test_crew_verbose_output(capsys):
 def test_cache_hitting_between_agents():
     from unittest.mock import call, patch
-    from crewai.tools import tool
+    from crewai_tools import tool
     @tool
     def multiplier(first_number: int, second_number: int) -> float:
@@ -499,7 +499,7 @@ def test_cache_hitting_between_agents():
 def test_api_calls_throttling(capsys):
     from unittest.mock import patch
-    from crewai.tools import tool
+    from crewai_tools import tool
     @tool
     def get_final_answer() -> float:
@@ -564,7 +564,6 @@ def test_crew_kickoff_usage_metrics():
     assert result.token_usage.prompt_tokens > 0
     assert result.token_usage.completion_tokens > 0
     assert result.token_usage.successful_requests > 0
-    assert result.token_usage.cached_prompt_tokens == 0
 def test_agents_rpm_is_never_set_if_crew_max_RPM_is_not_set():
@@ -1112,7 +1111,7 @@ def test_dont_set_agents_step_callback_if_already_set():
 def test_crew_function_calling_llm():
     from unittest.mock import patch
-    from crewai.tools import tool
+    from crewai_tools import tool
     llm = "gpt-4o"
@@ -1147,7 +1146,7 @@ def test_crew_function_calling_llm():
 @pytest.mark.vcr(filter_headers=["authorization"])
 def test_task_with_no_arguments():
-    from crewai.tools import tool
+    from crewai_tools import tool
     @tool
     def return_data() -> str:
@@ -1281,11 +1280,10 @@ def test_agent_usage_metrics_are_captured_for_hierarchical_process():
     assert result.raw == "Howdy!"
     assert result.token_usage == UsageMetrics(
-        total_tokens=1673,
-        prompt_tokens=1562,
-        completion_tokens=111,
-        successful_requests=3,
-        cached_prompt_tokens=0
+        total_tokens=2626,
+        prompt_tokens=2482,
+        completion_tokens=144,
+        successful_requests=5,
     )
@@ -1311,9 +1309,8 @@ def test_hierarchical_crew_creation_tasks_with_agents():
     assert crew.manager_agent is not None
     assert crew.manager_agent.tools is not None
-    assert (
-        "Delegate a specific task to one of the following coworkers: Senior Writer\n"
-        in crew.manager_agent.tools[0].description
+    assert crew.manager_agent.tools[0].description.startswith(
+        "Delegate a specific task to one of the following coworkers: Senior Writer"
     )
@@ -1340,9 +1337,8 @@ def test_hierarchical_crew_creation_tasks_with_async_execution():
     crew.kickoff()
     assert crew.manager_agent is not None
     assert crew.manager_agent.tools is not None
-    assert (
-        "Delegate a specific task to one of the following coworkers: Senior Writer\n"
-        in crew.manager_agent.tools[0].description
+    assert crew.manager_agent.tools[0].description.startswith(
+        "Delegate a specific task to one of the following coworkers: Senior Writer\n"
     )
@@ -1374,9 +1370,8 @@ def test_hierarchical_crew_creation_tasks_with_sync_last():
     crew.kickoff()
     assert crew.manager_agent is not None
     assert crew.manager_agent.tools is not None
-    assert (
-        "Delegate a specific task to one of the following coworkers: Senior Writer, Researcher, CEO\n"
-        in crew.manager_agent.tools[0].description
+    assert crew.manager_agent.tools[0].description.startswith(
+        "Delegate a specific task to one of the following coworkers: Senior Writer, Researcher, CEO\n"
     )
@@ -1499,7 +1494,7 @@ def test_task_callback_on_crew():
 def test_tools_with_custom_caching():
     from unittest.mock import patch
-    from crewai.tools import tool
+    from crewai_tools import tool
     @tool
     def multiplcation_tool(first_number: int, second_number: int) -> int:
@@ -1701,7 +1696,7 @@ def test_manager_agent_in_agents_raises_exception():
 def test_manager_agent_with_tools_raises_exception():
-    from crewai.tools import tool
+    from crewai_tools import tool
     @tool
     def testing_tool(first_number: int, second_number: int) -> int:
@@ -1779,22 +1774,26 @@ def test_crew_train_success(
         ]
     )
-    crew_training_handler.assert_any_call("training_data.pkl")
-    crew_training_handler().load.assert_called()
-    crew_training_handler.assert_any_call("trained_agents_data.pkl")
-    crew_training_handler().load.assert_called()
-    crew_training_handler().save_trained_data.assert_has_calls([
-        mock.call(
-            agent_id="Researcher",
-            trained_data=task_evaluator().evaluate_training_data().model_dump(),
-        ),
-        mock.call(
-            agent_id="Senior Writer",
-            trained_data=task_evaluator().evaluate_training_data().model_dump(),
-        ),
-    ])
+    crew_training_handler.assert_has_calls(
+        [
+            mock.call("training_data.pkl"),
+            mock.call().load(),
+            mock.call("trained_agents_data.pkl"),
+            mock.call().save_trained_data(
+                agent_id="Researcher",
+                trained_data=task_evaluator().evaluate_training_data().model_dump(),
+            ),
+            mock.call("trained_agents_data.pkl"),
+            mock.call().save_trained_data(
+                agent_id="Senior Writer",
+                trained_data=task_evaluator().evaluate_training_data().model_dump(),
+            ),
+            mock.call(),
+            mock.call().load(),
+            mock.call(),
+            mock.call().load(),
+        ]
+    )
 def test_crew_train_error():
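
Note: the recurring one-line hunk in this file swaps `from crewai.tools import tool` for `from crewai_tools import tool`. Both sides expose the same decorator, which wraps a plain function as an agent tool. A minimal sketch using the signature from the hunks above; the docstring is illustrative:

```python
from crewai.tools import tool  # left side of the diff; the right side imports from crewai_tools

@tool
def multiplier(first_number: int, second_number: int) -> float:
    """Useful for when you need to multiply two numbers together."""  # becomes the tool description
    return first_number * second_number

# The decorator returns a BaseTool-like object; the original callable stays on .func.
assert multiplier.func(3, 4) == 12
```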

View File

@@ -1,264 +0,0 @@
"""Test Flow creation and execution basic functionality."""
import asyncio
import pytest
from crewai.flow.flow import Flow, and_, listen, or_, router, start
def test_simple_sequential_flow():
"""Test a simple flow with two steps called sequentially."""
execution_order = []
class SimpleFlow(Flow):
@start()
def step_1(self):
execution_order.append("step_1")
@listen(step_1)
def step_2(self):
execution_order.append("step_2")
flow = SimpleFlow()
flow.kickoff()
assert execution_order == ["step_1", "step_2"]
def test_flow_with_multiple_starts():
"""Test a flow with multiple start methods."""
execution_order = []
class MultiStartFlow(Flow):
@start()
def step_a(self):
execution_order.append("step_a")
@start()
def step_b(self):
execution_order.append("step_b")
@listen(step_a)
def step_c(self):
execution_order.append("step_c")
@listen(step_b)
def step_d(self):
execution_order.append("step_d")
flow = MultiStartFlow()
flow.kickoff()
assert "step_a" in execution_order
assert "step_b" in execution_order
assert "step_c" in execution_order
assert "step_d" in execution_order
assert execution_order.index("step_c") > execution_order.index("step_a")
assert execution_order.index("step_d") > execution_order.index("step_b")
def test_cyclic_flow():
"""Test a cyclic flow that runs a finite number of iterations."""
execution_order = []
class CyclicFlow(Flow):
iteration = 0
max_iterations = 3
@start("loop")
def step_1(self):
if self.iteration >= self.max_iterations:
return # Do not proceed further
execution_order.append(f"step_1_{self.iteration}")
@listen(step_1)
def step_2(self):
execution_order.append(f"step_2_{self.iteration}")
@router(step_2)
def step_3(self):
execution_order.append(f"step_3_{self.iteration}")
self.iteration += 1
if self.iteration < self.max_iterations:
return "loop"
return "exit"
flow = CyclicFlow()
flow.kickoff()
expected_order = []
for i in range(flow.max_iterations):
expected_order.extend([f"step_1_{i}", f"step_2_{i}", f"step_3_{i}"])
assert execution_order == expected_order
def test_flow_with_and_condition():
"""Test a flow where a step waits for multiple other steps to complete."""
execution_order = []
class AndConditionFlow(Flow):
@start()
def step_1(self):
execution_order.append("step_1")
@start()
def step_2(self):
execution_order.append("step_2")
@listen(and_(step_1, step_2))
def step_3(self):
execution_order.append("step_3")
flow = AndConditionFlow()
flow.kickoff()
assert "step_1" in execution_order
assert "step_2" in execution_order
assert execution_order[-1] == "step_3"
assert execution_order.index("step_3") > execution_order.index("step_1")
assert execution_order.index("step_3") > execution_order.index("step_2")
def test_flow_with_or_condition():
"""Test a flow where a step is triggered when any of multiple steps complete."""
execution_order = []
class OrConditionFlow(Flow):
@start()
def step_a(self):
execution_order.append("step_a")
@start()
def step_b(self):
execution_order.append("step_b")
@listen(or_(step_a, step_b))
def step_c(self):
execution_order.append("step_c")
flow = OrConditionFlow()
flow.kickoff()
assert "step_a" in execution_order or "step_b" in execution_order
assert "step_c" in execution_order
assert execution_order.index("step_c") > min(
execution_order.index("step_a"), execution_order.index("step_b")
)
def test_flow_with_router():
"""Test a flow that uses a router method to determine the next step."""
execution_order = []
class RouterFlow(Flow):
@start()
def start_method(self):
execution_order.append("start_method")
@router(start_method)
def router(self):
execution_order.append("router")
# Ensure the condition is set to True to follow the "step_if_true" path
condition = True
return "step_if_true" if condition else "step_if_false"
@listen("step_if_true")
def truthy(self):
execution_order.append("step_if_true")
@listen("step_if_false")
def falsy(self):
execution_order.append("step_if_false")
flow = RouterFlow()
flow.kickoff()
assert execution_order == ["start_method", "router", "step_if_true"]
def test_async_flow():
"""Test an asynchronous flow."""
execution_order = []
class AsyncFlow(Flow):
@start()
async def step_1(self):
execution_order.append("step_1")
await asyncio.sleep(0.1)
@listen(step_1)
async def step_2(self):
execution_order.append("step_2")
await asyncio.sleep(0.1)
flow = AsyncFlow()
asyncio.run(flow.kickoff_async())
assert execution_order == ["step_1", "step_2"]
def test_flow_with_exceptions():
"""Test flow behavior when exceptions occur in steps."""
execution_order = []
class ExceptionFlow(Flow):
@start()
def step_1(self):
execution_order.append("step_1")
raise ValueError("An error occurred in step_1")
@listen(step_1)
def step_2(self):
execution_order.append("step_2")
flow = ExceptionFlow()
with pytest.raises(ValueError):
flow.kickoff()
# Ensure step_2 did not execute
assert execution_order == ["step_1"]
def test_flow_restart():
"""Test restarting a flow after it has completed."""
execution_order = []
class RestartableFlow(Flow):
@start()
def step_1(self):
execution_order.append("step_1")
@listen(step_1)
def step_2(self):
execution_order.append("step_2")
flow = RestartableFlow()
flow.kickoff()
flow.kickoff() # Restart the flow
assert execution_order == ["step_1", "step_2", "step_1", "step_2"]
def test_flow_with_custom_state():
"""Test a flow that maintains and modifies internal state."""
class StateFlow(Flow):
def __init__(self):
super().__init__()
self.counter = 0
@start()
def step_1(self):
self.counter += 1
@listen(step_1)
def step_2(self):
self.counter *= 2
assert self.counter == 2
flow = StateFlow()
flow.kickoff()
assert flow.counter == 2
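
Note: the deleted flow suite above doubles as a compact reference for the `Flow` decorators: `@start()` marks entry points, `@listen(...)` chains on methods or on `and_`/`or_` combinations, and `@router(...)` returns a string label that any `@listen("label")` method picks up. A condensed sketch of the same patterns:

```python
from crewai.flow.flow import Flow, listen, router, start

class MiniFlow(Flow):
    @start()
    def begin(self):
        print("begin")

    @router(begin)
    def choose(self):
        return "go"  # the label routes execution to the matching listener

    @listen("go")
    def finish(self):
        print("finish")

MiniFlow().kickoff()  # prints: begin, finish
```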

View File

@@ -1,30 +0,0 @@
import pytest
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.llm import LLM
from crewai.utilities.token_counter_callback import TokenCalcHandler
@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_callback_replacement():
llm = LLM(model="gpt-4o-mini")
calc_handler_1 = TokenCalcHandler(token_cost_process=TokenProcess())
calc_handler_2 = TokenCalcHandler(token_cost_process=TokenProcess())
llm.call(
messages=[{"role": "user", "content": "Hello, world!"}],
callbacks=[calc_handler_1],
)
usage_metrics_1 = calc_handler_1.token_cost_process.get_summary()
llm.call(
messages=[{"role": "user", "content": "Hello, world from another agent!"}],
callbacks=[calc_handler_2],
)
usage_metrics_2 = calc_handler_2.token_cost_process.get_summary()
# The first handler should not have been updated
assert usage_metrics_1.successful_requests == 1
assert usage_metrics_2.successful_requests == 1
assert usage_metrics_1 == calc_handler_1.token_cost_process.get_summary()
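
Note: this deleted test guards callback isolation: metrics recorded through one `TokenCalcHandler` must not leak into another across successive `llm.call` invocations. A hedged sketch of the objects involved; the summary field names follow the assertions in the crew tests above:

```python
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.utilities.token_counter_callback import TokenCalcHandler

process = TokenProcess()
handler = TokenCalcHandler(token_cost_process=process)
# Pass `handler` via callbacks=[...] to llm.call(...); afterwards a summary is available:
summary = process.get_summary()
print(summary.successful_requests, summary.prompt_tokens, summary.completion_tokens)
```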

View File

@@ -1,270 +0,0 @@
interactions:
- request:
body: ''
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: GET
uri: https://api.mem0.ai/v1/memories/?user_id=test
response:
body:
string: '[]'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b477138bad847b9-BOM
Connection:
- keep-alive
Content-Length:
- '2'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:11 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=uuyH2foMJVDpV%2FH52g1q%2FnvXKe3dBKVzvsK0mqmSNezkiszNR9OgrEJfVqmkX%2FlPFRP2sH4zrOuzGo6k%2FjzsjYJczqSWJUZHN2pPujiwnr1E9W%2BdLGKmG6%2FqPrGYAy2SBRWkkJVWsTO3OQ%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- GET, POST, DELETE, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"batch": [{"properties": {"python_version": "3.12.4 (v3.12.4:8e8a4baf65,
Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]", "os": "darwin",
"os_version": "Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030",
"os_release": "23.4.0", "processor": "arm", "machine": "arm64", "function":
"mem0.client.main.MemoryClient", "$lib": "posthog-python", "$lib_version": "3.5.0",
"$geoip_disable": true}, "timestamp": "2024-08-17T06:00:11.526640+00:00", "context":
{}, "distinct_id": "fd411bd3-99a2-42d6-acd7-9fca8ad09580", "event": "client.init"}],
"historical_migration": false, "sentAt": "2024-08-17T06:00:11.701621+00:00",
"api_key": "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '740'
Content-Type:
- application/json
User-Agent:
- posthog-python/3.5.0
method: POST
uri: https://us.i.posthog.com/batch/
response:
body:
string: '{"status":"Ok"}'
headers:
Connection:
- keep-alive
Content-Length:
- '15'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:12 GMT
access-control-allow-credentials:
- 'true'
server:
- envoy
vary:
- origin, access-control-request-method, access-control-request-headers
x-envoy-upstream-service-time:
- '69'
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Remember the following insights
from Agent run: test value with provider"}], "metadata": {"task": "test_task_provider",
"agent": "test_agent_provider"}, "app_id": "Researcher"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '219'
content-type:
- application/json
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: POST
uri: https://api.mem0.ai/v1/memories/
response:
body:
string: '{"message":"ok"}'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b477140282547b9-BOM
Connection:
- keep-alive
Content-Length:
- '16'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:13 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=FRjJKSk3YxVj03wA7S05H8ts35KnWfqS3wb6Rfy4kVZ4BgXfw7nJbm92wI6vEv5fWcAcHVnOlkJDggs11B01BMuB2k3a9RqlBi0dJNiMuk%2Bgm5xE%2BODMPWJctYNRwQMjNVbteUpS%2Fad8YA%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- GET, POST, DELETE, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"query": "test value with provider", "limit": 3, "app_id": "Researcher"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '73'
content-type:
- application/json
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: POST
uri: https://api.mem0.ai/v1/memories/search/
response:
body:
string: '[]'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b47714d083b47b9-BOM
Connection:
- keep-alive
Content-Length:
- '2'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:14 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=2DRWL1cdKdMvnE8vx1fPUGeTITOgSGl3N5g84PS6w30GRqpfz79BtSx6REhpnOiFV8kM6KGqln0iCZ5yoHc2jBVVJXhPJhQ5t0uerD9JFnkphjISrJOU1MJjZWneT9PlNABddxvVNCmluA%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- POST, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"batch": [{"properties": {"python_version": "3.12.4 (v3.12.4:8e8a4baf65,
Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]", "os": "darwin",
"os_version": "Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030",
"os_release": "23.4.0", "processor": "arm", "machine": "arm64", "function":
"mem0.client.main.MemoryClient", "$lib": "posthog-python", "$lib_version": "3.5.0",
"$geoip_disable": true}, "timestamp": "2024-08-17T06:00:13.593952+00:00", "context":
{}, "distinct_id": "fd411bd3-99a2-42d6-acd7-9fca8ad09580", "event": "client.add"}],
"historical_migration": false, "sentAt": "2024-08-17T06:00:13.858277+00:00",
"api_key": "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '739'
Content-Type:
- application/json
User-Agent:
- posthog-python/3.5.0
method: POST
uri: https://us.i.posthog.com/batch/
response:
body:
string: '{"status":"Ok"}'
headers:
Connection:
- keep-alive
Content-Length:
- '15'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:13 GMT
access-control-allow-credentials:
- 'true'
server:
- envoy
vary:
- origin, access-control-request-method, access-control-request-headers
x-envoy-upstream-service-time:
- '33'
status:
code: 200
message: OK
version: 1
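
Note: this cassette records mem0 traffic: a `GET /v1/memories/` lookup, an add, a search, and PostHog telemetry batches. A hedged sketch of client calls that would produce such requests; argument names mirror the recorded bodies, and the API key is assumed to come from the environment:

```python
from mem0 import MemoryClient  # class name taken from the telemetry payload above

client = MemoryClient()  # assumed to read MEM0_API_KEY from the environment
client.add(
    messages=[{"role": "user", "content": "Remember the following insights from Agent run: test value with provider"}],
    metadata={"task": "test_task_provider", "agent": "test_agent_provider"},
    app_id="Researcher",
)
results = client.search(query="test value with provider", limit=3, app_id="Researcher")
```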

View File

@@ -15,7 +15,7 @@ from pydantic_core import ValidationError
 def test_task_tool_reflect_agent_tools():
-    from crewai.tools import tool
+    from crewai_tools import tool
     @tool
     def fake_tool() -> None:
@@ -39,7 +39,7 @@ def test_task_tool_reflect_agent_tools():
 def test_task_tool_takes_precedence_over_agent_tools():
-    from crewai.tools import tool
+    from crewai_tools import tool
     @tool
     def fake_tool() -> None:
@@ -656,7 +656,7 @@ def test_increment_delegations_for_sequential_process():
 @pytest.mark.vcr(filter_headers=["authorization"])
 def test_increment_tool_errors():
-    from crewai.tools import tool
+    from crewai_tools import tool
     @tool
     def scoring_examples() -> None:

View File

@@ -1,109 +0,0 @@
from typing import Callable
from crewai.tools import BaseTool, tool
def test_creating_a_tool_using_annotation():
@tool("Name of my tool")
def my_tool(question: str) -> str:
"""Clear description for what this tool is useful for, you agent will need this information to use it."""
return question
# Assert all the right attributes were defined
assert my_tool.name == "Name of my tool"
assert (
my_tool.description
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, you agent will need this information to use it."
)
assert my_tool.args_schema.schema()["properties"] == {
"question": {"title": "Question", "type": "string"}
}
assert (
my_tool.func("What is the meaning of life?") == "What is the meaning of life?"
)
# Assert the langchain tool conversion worked as expected
converted_tool = my_tool.to_langchain()
assert converted_tool.name == "Name of my tool"
assert (
converted_tool.description
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, you agent will need this information to use it."
)
assert converted_tool.args_schema.schema()["properties"] == {
"question": {"title": "Question", "type": "string"}
}
assert (
converted_tool.func("What is the meaning of life?")
== "What is the meaning of life?"
)
def test_creating_a_tool_using_baseclass():
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = (
"Clear description for what this tool is useful for, you agent will need this information to use it."
)
def _run(self, question: str) -> str:
return question
my_tool = MyCustomTool()
# Assert all the right attributes were defined
assert my_tool.name == "Name of my tool"
assert (
my_tool.description
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, you agent will need this information to use it."
)
assert my_tool.args_schema.schema()["properties"] == {
"question": {"title": "Question", "type": "string"}
}
assert my_tool.run("What is the meaning of life?") == "What is the meaning of life?"
# Assert the langchain tool conversion worked as expected
converted_tool = my_tool.to_langchain()
assert converted_tool.name == "Name of my tool"
assert (
converted_tool.description
== "Tool Name: Name of my tool\nTool Arguments: {'question': {'description': None, 'type': 'str'}}\nTool Description: Clear description for what this tool is useful for, you agent will need this information to use it."
)
assert converted_tool.args_schema.schema()["properties"] == {
"question": {"title": "Question", "type": "string"}
}
assert (
converted_tool.run("What is the meaning of life?")
== "What is the meaning of life?"
)
def test_setting_cache_function():
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = (
"Clear description for what this tool is useful for, you agent will need this information to use it."
)
cache_function: Callable = lambda: False
def _run(self, question: str) -> str:
return question
my_tool = MyCustomTool()
# Assert all the right attributes were defined
assert not my_tool.cache_function()
def test_default_cache_function_is_true():
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = (
"Clear description for what this tool is useful for, you agent will need this information to use it."
)
def _run(self, question: str) -> str:
return question
my_tool = MyCustomTool()
# Assert all the right attributes were defined
assert my_tool.cache_function()

View File

@@ -3,11 +3,11 @@ import random
 from unittest.mock import MagicMock
 import pytest
+from crewai_tools import BaseTool
 from pydantic import BaseModel, Field
 from crewai import Agent, Task
 from crewai.tools.tool_usage import ToolUsage
-from crewai.tools import BaseTool
 class RandomNumberToolInput(BaseModel):
@@ -103,7 +103,11 @@ def test_tool_usage_render():
     rendered = tool_usage._render()
     # Updated checks to match the actual output
-    assert "Tool Name: Random Number Generator" in rendered
+    assert "Tool Name: random number generator" in rendered
+    assert (
+        "Random Number Generator(min_value: 'integer', max_value: 'integer') - Generates a random number within a specified range min_value: 'The minimum value of the range (inclusive)', max_value: 'The maximum value of the range (inclusive)'"
+        in rendered
+    )
     assert "Tool Arguments:" in rendered
     assert (
         "'min_value': {'description': 'The minimum value of the range (inclusive)', 'type': 'int'}"
@@ -113,11 +117,3 @@ def test_tool_usage_render():
         "'max_value': {'description': 'The maximum value of the range (inclusive)', 'type': 'int'}"
         in rendered
     )
-    assert (
-        "Tool Description: Generates a random number within a specified range"
-        in rendered
-    )
-    assert (
-        "Tool Name: Random Number Generator\nTool Arguments: {'min_value': {'description': 'The minimum value of the range (inclusive)', 'type': 'int'}, 'max_value': {'description': 'The maximum value of the range (inclusive)', 'type': 'int'}}\nTool Description: Generates a random number within a specified range"
-        in rendered
-    )

uv.lock (generated; 98 lines changed)
View File

@@ -490,32 +490,28 @@ wheels = [
 [[package]]
 name = "chroma-hnswlib"
-version = "0.7.6"
+version = "0.7.3"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "numpy" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/73/09/10d57569e399ce9cbc5eee2134996581c957f63a9addfa6ca657daf006b8/chroma_hnswlib-0.7.6.tar.gz", hash = "sha256:4dce282543039681160259d29fcde6151cc9106c6461e0485f57cdccd83059b7", size = 32256 }
+sdist = { url = "https://files.pythonhosted.org/packages/c0/59/1224cbae62c7b84c84088cdf6c106b9b2b893783c000d22c442a1672bc75/chroma-hnswlib-0.7.3.tar.gz", hash = "sha256:b6137bedde49fffda6af93b0297fe00429fc61e5a072b1ed9377f909ed95a932", size = 31876 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/a8/74/b9dde05ea8685d2f8c4681b517e61c7887e974f6272bb24ebc8f2105875b/chroma_hnswlib-0.7.6-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f35192fbbeadc8c0633f0a69c3d3e9f1a4eab3a46b65458bbcbcabdd9e895c36", size = 195821 },
-    { url = "https://files.pythonhosted.org/packages/fd/58/101bfa6bc41bc6cc55fbb5103c75462a7bf882e1704256eb4934df85b6a8/chroma_hnswlib-0.7.6-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6f007b608c96362b8f0c8b6b2ac94f67f83fcbabd857c378ae82007ec92f4d82", size = 183854 },
-    { url = "https://files.pythonhosted.org/packages/17/ff/95d49bb5ce134f10d6aa08d5f3bec624eaff945f0b17d8c3fce888b9a54a/chroma_hnswlib-0.7.6-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:456fd88fa0d14e6b385358515aef69fc89b3c2191706fd9aee62087b62aad09c", size = 2358774 },
-    { url = "https://files.pythonhosted.org/packages/3a/6d/27826180a54df80dbba8a4f338b022ba21c0c8af96fd08ff8510626dee8f/chroma_hnswlib-0.7.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5dfaae825499c2beaa3b75a12d7ec713b64226df72a5c4097203e3ed532680da", size = 2392739 },
-    { url = "https://files.pythonhosted.org/packages/d6/63/ee3e8b7a8f931918755faacf783093b61f32f59042769d9db615999c3de0/chroma_hnswlib-0.7.6-cp310-cp310-win_amd64.whl", hash = "sha256:2487201982241fb1581be26524145092c95902cb09fc2646ccfbc407de3328ec", size = 150955 },
-    { url = "https://files.pythonhosted.org/packages/f5/af/d15fdfed2a204c0f9467ad35084fbac894c755820b203e62f5dcba2d41f1/chroma_hnswlib-0.7.6-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:81181d54a2b1e4727369486a631f977ffc53c5533d26e3d366dda243fb0998ca", size = 196911 },
-    { url = "https://files.pythonhosted.org/packages/0d/19/aa6f2139f1ff7ad23a690ebf2a511b2594ab359915d7979f76f3213e46c4/chroma_hnswlib-0.7.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:4b4ab4e11f1083dd0a11ee4f0e0b183ca9f0f2ed63ededba1935b13ce2b3606f", size = 185000 },
-    { url = "https://files.pythonhosted.org/packages/79/b1/1b269c750e985ec7d40b9bbe7d66d0a890e420525187786718e7f6b07913/chroma_hnswlib-0.7.6-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:53db45cd9173d95b4b0bdccb4dbff4c54a42b51420599c32267f3abbeb795170", size = 2377289 },
-    { url = "https://files.pythonhosted.org/packages/c7/2d/d5663e134436e5933bc63516a20b5edc08b4c1b1588b9680908a5f1afd04/chroma_hnswlib-0.7.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5c093f07a010b499c00a15bc9376036ee4800d335360570b14f7fe92badcdcf9", size = 2411755 },
-    { url = "https://files.pythonhosted.org/packages/3e/79/1bce519cf186112d6d5ce2985392a89528c6e1e9332d680bf752694a4cdf/chroma_hnswlib-0.7.6-cp311-cp311-win_amd64.whl", hash = "sha256:0540b0ac96e47d0aa39e88ea4714358ae05d64bbe6bf33c52f316c664190a6a3", size = 151888 },
-    { url = "https://files.pythonhosted.org/packages/93/ac/782b8d72de1c57b64fdf5cb94711540db99a92768d93d973174c62d45eb8/chroma_hnswlib-0.7.6-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:e87e9b616c281bfbe748d01705817c71211613c3b063021f7ed5e47173556cb7", size = 197804 },
-    { url = "https://files.pythonhosted.org/packages/32/4e/fd9ce0764228e9a98f6ff46af05e92804090b5557035968c5b4198bc7af9/chroma_hnswlib-0.7.6-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ec5ca25bc7b66d2ecbf14502b5729cde25f70945d22f2aaf523c2d747ea68912", size = 185421 },
-    { url = "https://files.pythonhosted.org/packages/d9/3d/b59a8dedebd82545d873235ef2d06f95be244dfece7ee4a1a6044f080b18/chroma_hnswlib-0.7.6-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:305ae491de9d5f3c51e8bd52d84fdf2545a4a2bc7af49765cda286b7bb30b1d4", size = 2389672 },
-    { url = "https://files.pythonhosted.org/packages/74/1e/80a033ea4466338824974a34f418e7b034a7748bf906f56466f5caa434b0/chroma_hnswlib-0.7.6-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:822ede968d25a2c88823ca078a58f92c9b5c4142e38c7c8b4c48178894a0a3c5", size = 2436986 },
+    { url = "https://files.pythonhosted.org/packages/1a/36/d1069ffa520efcf93f6d81b527e3c7311e12363742fdc786cbdaea3ab02e/chroma_hnswlib-0.7.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:59d6a7c6f863c67aeb23e79a64001d537060b6995c3eca9a06e349ff7b0998ca", size = 219588 },
+    { url = "https://files.pythonhosted.org/packages/c3/e8/263d331f5ce29367f6f8854cd7fa1f54fce72ab4f92ab957525ef9165a9c/chroma_hnswlib-0.7.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d71a3f4f232f537b6152947006bd32bc1629a8686df22fd97777b70f416c127a", size = 197094 },
+    { url = "https://files.pythonhosted.org/packages/a9/72/a9b61ae00d490c26359a8e10f3974c0d38065b894e6a2573ec6a7597f8e3/chroma_hnswlib-0.7.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1c92dc1ebe062188e53970ba13f6b07e0ae32e64c9770eb7f7ffa83f149d4210", size = 2315620 },
+    { url = "https://files.pythonhosted.org/packages/2f/48/f7609a3cb15a24c5d8ec18911ce10ac94144e9a89584f0a86bf9871b024c/chroma_hnswlib-0.7.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:49da700a6656fed8753f68d44b8cc8ae46efc99fc8a22a6d970dc1697f49b403", size = 2350956 },
+    { url = "https://files.pythonhosted.org/packages/cc/3d/ca311b8f79744db3f4faad8fd9140af80d34c94829d3ed1726c98cf4a611/chroma_hnswlib-0.7.3-cp310-cp310-win_amd64.whl", hash = "sha256:108bc4c293d819b56476d8f7865803cb03afd6ca128a2a04d678fffc139af029", size = 150598 },
+    { url = "https://files.pythonhosted.org/packages/94/3f/844393b0d2ea1072b7704d6eff5c595e05ae8b831b96340cdb76b2fe995c/chroma_hnswlib-0.7.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:11e7ca93fb8192214ac2b9c0943641ac0daf8f9d4591bb7b73be808a83835667", size = 221219 },
+    { url = "https://files.pythonhosted.org/packages/11/7a/673ccb9bb2faf9cf655d9040e970c02a96645966e06837fde7d10edf242a/chroma_hnswlib-0.7.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:6f552e4d23edc06cdeb553cdc757d2fe190cdeb10d43093d6a3319f8d4bf1c6b", size = 198652 },
+    { url = "https://files.pythonhosted.org/packages/ba/f4/c81a40da5473d5d80fc9d0c5bd5b1cb64e530a6ea941c69f195fe81c488c/chroma_hnswlib-0.7.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f96f4d5699e486eb1fb95849fe35ab79ab0901265805be7e60f4eaa83ce263ec", size = 2332260 },
+    { url = "https://files.pythonhosted.org/packages/48/0e/068b658a547d6090b969014146321e28dae1411da54b76d081e51a2af22b/chroma_hnswlib-0.7.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:368e57fe9ebae05ee5844840fa588028a023d1182b0cfdb1d13f607c9ea05756", size = 2367211 },
+    { url = "https://files.pythonhosted.org/packages/d2/32/a91850c7aa8a34f61838913155103808fe90da6f1ea4302731b59e9ba6f2/chroma_hnswlib-0.7.3-cp311-cp311-win_amd64.whl", hash = "sha256:b7dca27b8896b494456db0fd705b689ac6b73af78e186eb6a42fea2de4f71c6f", size = 151574 },
 ]
 [[package]]
 name = "chromadb"
-version = "0.5.18"
+version = "0.4.24"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "bcrypt" },
@@ -523,7 +519,6 @@ dependencies = [
     { name = "chroma-hnswlib" },
     { name = "fastapi" },
     { name = "grpcio" },
-    { name = "httpx" },
     { name = "importlib-resources" },
     { name = "kubernetes" },
     { name = "mmh3" },
@@ -536,10 +531,11 @@ dependencies = [
     { name = "orjson" },
     { name = "overrides" },
     { name = "posthog" },
+    { name = "pulsar-client" },
     { name = "pydantic" },
     { name = "pypika" },
     { name = "pyyaml" },
-    { name = "rich" },
+    { name = "requests" },
     { name = "tenacity" },
     { name = "tokenizers" },
     { name = "tqdm" },
@@ -547,9 +543,9 @@ dependencies = [
     { name = "typing-extensions" },
     { name = "uvicorn", extra = ["standard"] },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/15/95/d1a3f14c864e37d009606b82bd837090902b5e5a8e892fcab07eeaec0438/chromadb-0.5.18.tar.gz", hash = "sha256:cfbb3e5aeeb1dd532b47d80ed9185e8a9886c09af41c8e6123edf94395d76aec", size = 33620708 }
+sdist = { url = "https://files.pythonhosted.org/packages/47/6b/a5465827d8017b658d18ad1e63d2dc31109dec717c6bd068e82485186f4b/chromadb-0.4.24.tar.gz", hash = "sha256:a5c80b4e4ad9b236ed2d4899a5b9e8002b489293f2881cb2cadab5b199ee1c72", size = 13667084 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/82/85/4d2f8b9202153105ad4514ae09e9fe6f3b353a45e44e0ef7eca03dd8b9dc/chromadb-0.5.18-py3-none-any.whl", hash = "sha256:9dd3827b5e04b4ff0a5ea0df28a78bac88a09f45be37fcd7fe20f879b57c43cf", size = 615499 },
+    { url = "https://files.pythonhosted.org/packages/cc/63/b7d76109331318423f9cfb89bd89c99e19f5d0b47a5105439a629224d297/chromadb-0.4.24-py3-none-any.whl", hash = "sha256:3a08e237a4ad28b5d176685bd22429a03717fe09d35022fb230d516108da01da", size = 525452 },
 ]
 [[package]]
@@ -608,7 +604,7 @@ wheels = [
 [[package]]
 name = "crewai"
-version = "0.79.4"
+version = "0.76.9"
 source = { editable = "." }
 dependencies = [
     { name = "appdirs" },
@@ -638,9 +634,6 @@ dependencies = [
 agentops = [
     { name = "agentops" },
 ]
-mem0 = [
-    { name = "mem0ai" },
-]
 tools = [
     { name = "crewai-tools" },
 ]
@@ -670,16 +663,15 @@ requires-dist = [
     { name = "agentops", marker = "extra == 'agentops'", specifier = ">=0.3.0" },
     { name = "appdirs", specifier = ">=1.4.4" },
     { name = "auth0-python", specifier = ">=4.7.1" },
-    { name = "chromadb", specifier = ">=0.5.18" },
+    { name = "chromadb", specifier = ">=0.4.24" },
     { name = "click", specifier = ">=8.1.7" },
-    { name = "crewai-tools", specifier = ">=0.14.0" },
-    { name = "crewai-tools", marker = "extra == 'tools'", specifier = ">=0.14.0" },
+    { name = "crewai-tools", specifier = ">=0.13.4" },
+    { name = "crewai-tools", marker = "extra == 'tools'", specifier = ">=0.13.4" },
     { name = "instructor", specifier = ">=1.3.3" },
     { name = "json-repair", specifier = ">=0.25.2" },
     { name = "jsonref", specifier = ">=1.1.0" },
     { name = "langchain", specifier = ">=0.2.16" },
     { name = "litellm", specifier = ">=1.44.22" },
-    { name = "mem0ai", marker = "extra == 'mem0'", specifier = ">=0.1.29" },
     { name = "openai", specifier = ">=1.13.3" },
     { name = "opentelemetry-api", specifier = ">=1.22.0" },
     { name = "opentelemetry-exporter-otlp-proto-http", specifier = ">=1.22.0" },
@@ -696,7 +688,7 @@ requires-dist = [
 [package.metadata.requires-dev]
 dev = [
     { name = "cairosvg", specifier = ">=2.7.1" },
-    { name = "crewai-tools", specifier = ">=0.14.0" },
+    { name = "crewai-tools", specifier = ">=0.13.4" },
     { name = "mkdocs", specifier = ">=1.4.3" },
     { name = "mkdocs-material", specifier = ">=9.5.7" },
     { name = "mkdocs-material-extensions", specifier = ">=1.3.1" },
@@ -715,7 +707,7 @@ dev = [
 [[package]]
 name = "crewai-tools"
-version = "0.14.0"
+version = "0.13.4"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "beautifulsoup4" },
@@ -733,9 +725,9 @@ dependencies = [
     { name = "requests" },
     { name = "selenium" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/9b/6d/4fa91b481b120f83bb58f365203d8aa8564e8ced1035d79f8aedb7d71e2f/crewai_tools-0.14.0.tar.gz", hash = "sha256:510f3a194bcda4fdae4314bd775521964b5f229ddbe451e5d9e0216cae57f4e3", size = 815892 }
+sdist = { url = "https://files.pythonhosted.org/packages/64/bd/eff7b633a0b28ff4ed115adde1499e3dcc683e4f0b5c378a4c6f5c0c1bf6/crewai_tools-0.13.4.tar.gz", hash = "sha256:b6ac527633b7018471d892c21ac96bc961a86b6626d996b1ed7d53cd481d4505", size = 816588 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/c8/ed/9f4e64e1507062957b0118085332d38b621c1000874baef2d1c4069bfd97/crewai_tools-0.14.0-py3-none-any.whl", hash = "sha256:0a804a828c29869c3af3253f4fc4c3967a3f80f06dab22e9bbe9526608a31564", size = 462980 },
+    { url = "https://files.pythonhosted.org/packages/6c/40/93cd347d854059cf5e54a81b70f896deea7ad1f03e9c024549eb323c4da5/crewai_tools-0.13.4-py3-none-any.whl", hash = "sha256:eda78fe3c4df57676259d8dd6b2610fa31f89b90909512f15893adb57fb9e825", size = 463703 },
 ]
 [[package]]
@@ -897,7 +889,7 @@ wheels = [
 [[package]]
 name = "embedchain"
-version = "0.1.125"
+version = "0.1.123"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "alembic" },
@@ -922,9 +914,9 @@ dependencies = [
     { name = "sqlalchemy" },
     { name = "tiktoken" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/6c/ea/eedb6016719f94fe4bd4c5aa44cc5463d85494bbd0864cc465e4317d4987/embedchain-0.1.125.tar.gz", hash = "sha256:15a6d368b48ba33feb93b237caa54f6e9078537c02a49c1373e59cc32627a138", size = 125176 }
+sdist = { url = "https://files.pythonhosted.org/packages/5d/6a/955b5a72fa6727db203c4d46ae0e30ac47f4f50389f663cd5ea157b0d819/embedchain-0.1.123.tar.gz", hash = "sha256:aecaf81c21de05b5cdb649b6cde95ef68ffa759c69c54f6ff2eaa667f2ad0580", size = 124797 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/52/82/3d0355c22bc68cfbb8fbcf670da4c01b31bd7eb516974a08cf7533e89887/embedchain-0.1.125-py3-none-any.whl", hash = "sha256:f87b49732dc192c6b61221830f29e59cf2aff26d8f5d69df81f6f6cf482715c2", size = 211367 },
+    { url = "https://files.pythonhosted.org/packages/a7/51/0c78d26da4afbe68370306669556b274f1021cac02f3155d8da2be407763/embedchain-0.1.123-py3-none-any.whl", hash = "sha256:1210e993b6364d7c702b6bd44b053fc244dd77f2a65ea4b90b62709114ea6c25", size = 210909 },
 ]
 [[package]]
@@ -2168,7 +2160,7 @@ wheels = [
 [[package]]
 name = "mem0ai"
-version = "0.1.29"
+version = "0.1.22"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "openai" },
@@ -2178,9 +2170,9 @@ dependencies = [
     { name = "qdrant-client" },
     { name = "sqlalchemy" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/a9/bf/152718f9da3844dd24d4c45850b2e719798b5ce9389adf4ec873ee8905ca/mem0ai-0.1.29.tar.gz", hash = "sha256:42adefb7a9b241be03fbcabadf5328abf91b4ac390bc97e5966e55e3cac192c5", size = 55201 }
+sdist = { url = "https://files.pythonhosted.org/packages/7f/b4/64c6f7d9684bd1f9b46d251abfc7d5b2cc8371d70f1f9eec097f9872c719/mem0ai-0.1.22.tar.gz", hash = "sha256:d01aa028763719bd0ede2de4602121a7c3bf023f46112cd50cc9169140e11be2", size = 53117 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/65/9b/755be84f669415b3b513cfd935e768c4c84ac5c1ab6ff6ac2dab990a261a/mem0ai-0.1.29-py3-none-any.whl", hash = "sha256:07bbfd4238d0d7da65d5e4cf75a217eeb5b2829834e399074b05bb046730a57f", size = 79558 },
+    { url = "https://files.pythonhosted.org/packages/2b/27/3ef75abb28bf8b46c2cc34730f6be733ef2584652474216215019ee036a2/mem0ai-0.1.22-py3-none-any.whl", hash = "sha256:c783e15131c16a0d91e44e30195c1eeae9c36468de40006d5e42cf4516059855", size = 75695 },
 ]
 [[package]]
@@ -3210,6 +3202,34 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/22/a6/858897256d0deac81a172289110f31629fc4cee19b6f01283303e18c8db3/ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35", size = 13993 },
 ]
+[[package]]
+name = "pulsar-client"
+version = "3.5.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "certifi" },
+]
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/e0/aa/eb3b04be87b961324e49748f3a715a12127d45d76258150bfa61b2a002d8/pulsar_client-3.5.0-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:c18552edb2f785de85280fe624bc507467152bff810fc81d7660fa2dfa861f38", size = 10953552 },
+    { url = "https://files.pythonhosted.org/packages/cc/20/d59bf89ccdda45edd89f5b54bd1e93605ebe5ad3cb73f4f4f5e8eca8f9e6/pulsar_client-3.5.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:18d438e456c146f01be41ef146f649dedc8f7bc714d9eaef94cff2e34099812b", size = 5190714 },
+    { url = "https://files.pythonhosted.org/packages/1a/02/ca7e96b97d564d0375b8e3de65f95ac86c8502c40f6ff750e9d145709d9a/pulsar_client-3.5.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:18a26a0719841103c7a89eb1492c4a8fedf89adaa386375baecbb4fa2707e88f", size = 5429820 },
+    { url = "https://files.pythonhosted.org/packages/47/f3/682670cdc951b830cd3d8d1287521997327254e59508772664aaa656e246/pulsar_client-3.5.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:ab0e1605dc5f44a126163fd06cd0a768494ad05123f6e0de89a2c71d6e2d2319", size = 5710427 },
+    { url = "https://files.pythonhosted.org/packages/bc/00/119cd039286dfc1c91a5580963e9ba79204cd4717b16b7a6fdc57d1c1673/pulsar_client-3.5.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:cdef720891b97656fdce3bf5913ea7729b2156b84ba64314f432c1e72c6117fa", size = 5916490 },
+    { url = "https://files.pythonhosted.org/packages/0a/cc/d606b483dbb263cbaf7fc7c3d2ec4032628cf3324266cf9a4ccdb2a73076/pulsar_client-3.5.0-cp310-cp310-win_amd64.whl", hash = "sha256:a42544e38773191fe550644a90e8050579476bb2dcf17ac69a4aed62a6cb70e7", size = 3305387 },
+    { url = "https://files.pythonhosted.org/packages/0d/2e/aec6886a6d67f09230476182399b7fad694fbcbbaf004ce914725d4eddd9/pulsar_client-3.5.0-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:fd94432ea5d398ea78f8f2e09a217ec5058d26330c137a22690478c031e116da", size = 10954116 },
+    { url = "https://files.pythonhosted.org/packages/43/06/b98df9300f60e5fad3396f843dd633c31176a495a2d60ba111c99511658a/pulsar_client-3.5.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d6252ae462e07ece4071213fdd9c76eab82ca522a749f2dc678037d4cbacd40b", size = 5189618 },
+    { url = "https://files.pythonhosted.org/packages/72/05/c9aef7da7802a03c0b65ffe8f00a24289ff992f99ed5d5d1fd0ed63d9cf6/pulsar_client-3.5.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:03b4d440b2d74323784328b082872ee2f206c440b5d224d7941eb3c083ec06c6", size = 5429329 },
+    { url = "https://files.pythonhosted.org/packages/06/96/9acfe6f1d827cdd53b8460b04c63b4081333ef64a49a2f425419f1eb6b6b/pulsar_client-3.5.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:f60af840b8d64a2fac5a0c1ce6ae0ddffec5f42267c6ded2c5e74bad8345f2a1", size = 5710106 },
+    { url = "https://files.pythonhosted.org/packages/e1/7b/877a06eff5c9ac828cdb75e378ee29b0adac9328da9ee173eaf7076d8c56/pulsar_client-3.5.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:2277a447c3b7f6571cb1eb9fc5c25da3fdd43d0b2fb91cf52054adfadc7d6842", size = 5916541 },
+    { url = "https://files.pythonhosted.org/packages/fb/62/ed1da1ef72c95ba6a830e43995550ed0a1d26c223fb4b036ac6cd028c2ed/pulsar_client-3.5.0-cp311-cp311-win_amd64.whl", hash = "sha256:f20f3e9dd50db2a37059abccad42078b7a4754b8bc1d3ae6502e71c1ad2209f0", size = 3305485 },
+    { url = "https://files.pythonhosted.org/packages/81/19/4b145766df706aa5e09f60bbf5f87b934e6ac950fddd18f4acd520c465b9/pulsar_client-3.5.0-cp312-cp312-macosx_10_15_universal2.whl", hash = "sha256:d61f663d85308e12f44033ba95af88730f581a7e8da44f7a5c080a3aaea4878d", size = 10967548 },
+    { url = "https://files.pythonhosted.org/packages/bf/bd/9bc05ee861b46884554a4c61f96edb9602de131dd07982c27920e554ab5b/pulsar_client-3.5.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2a1ba0be25b6f747bcb28102b7d906ec1de48dc9f1a2d9eacdcc6f44ab2c9e17", size = 5189598 },
+    { url = "https://files.pythonhosted.org/packages/76/00/379bedfa6f1c810553996a4cb0984fa2e2c89afc5953df0936e1c9636003/pulsar_client-3.5.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a181e3e60ac39df72ccb3c415d7aeac61ad0286497a6e02739a560d5af28393a", size = 5430145 },
+    { url = "https://files.pythonhosted.org/packages/88/c8/8a37d75aa9132a69a28061c9e5f4b516328a1968b58bbae018f431c6d3d4/pulsar_client-3.5.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:3c72895ff7f51347e4f78b0375b2213fa70dd4790bbb78177b4002846f1fd290", size = 5708960 },
+    { url = "https://files.pythonhosted.org/packages/6e/9a/abd98661e3f7ae3a8e1d3fb0fc7eba1a30005391ebd575ab06a66021256c/pulsar_client-3.5.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:547dba1b185a17eba915e51d0a3aca27c80747b6187e5cd7a71a3ca33921decc", size = 5915227 },
+    { url = "https://files.pythonhosted.org/packages/a2/51/db376181d05716de595515fac736e3d06e96d3345ba0e31c0a90c352eae1/pulsar_client-3.5.0-cp312-cp312-win_amd64.whl", hash = "sha256:443b786eed96bc86d2297a6a42e79f39d1abf217ec603e0bd303f3488c0234af", size = 3306515 },
+]
 [[package]]
 name = "pure-eval"
 version = "0.2.3"