Merge branch 'main' of github.com:crewAIInc/crewAI into knowledge

This commit is contained in:
Lorenze Jay
2024-11-14 12:22:07 -08:00
41 changed files with 1242 additions and 244 deletions

View File

@@ -19,5 +19,5 @@ jobs:
run: pip install bandit
- name: Run Bandit
run: bandit -c pyproject.toml -r src/ -ll

View File

@@ -22,7 +22,8 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
| **Language** _(optional)_ | `language` | Language used for the crew, defaults to English. |
| **Language File** _(optional)_ | `language_file` | Path to the language file to be used for the crew. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Memory Config** _(optional)_ | `memory_config` | Configuration for the memory provider to be used by the crew. |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
| **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. Defaults to `False`. |

View File

@@ -25,7 +25,100 @@ By default, CrewAI uses the `gpt-4o-mini` model. It uses environment variables i
- `OPENAI_API_BASE`
- `OPENAI_API_KEY`
### 2. Updating YAML files
You can update the `agents.yml` file to refer to the LLM you want to use:
```yaml Code
researcher:
  role: Research Specialist
  goal: Conduct comprehensive research and analysis to gather relevant information,
    synthesize findings, and produce well-documented insights.
  backstory: A dedicated research professional with years of experience in academic
    investigation, literature review, and data analysis, known for thorough and
    methodical approaches to complex research questions.
  verbose: true
  llm: openai/gpt-4o
  # llm: azure/gpt-4o-mini
  # llm: gemini/gemini-pro
  # llm: anthropic/claude-3-5-sonnet-20240620
  # llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
  # llm: mistral/mistral-large-latest
  # llm: ollama/llama3:70b
  # llm: groq/llama-3.2-90b-vision-preview
  # llm: watsonx/meta-llama/llama-3-1-70b-instruct
  # ...
```
Keep in mind that, depending on the model you use, you will need to set certain environment
variables for the credentials, or configure a custom LLM object as described below.
Here are some of the required environment variables for common LLM integrations:
<AccordionGroup>
<Accordion title="OpenAI">
```python Code
OPENAI_API_KEY=<your-api-key>
OPENAI_API_BASE=<optional-custom-base-url>
OPENAI_MODEL_NAME=<openai-model-name>
OPENAI_ORGANIZATION=<your-org-id> # OPTIONAL
```
</Accordion>
<Accordion title="Anthropic">
```python Code
ANTHROPIC_API_KEY=<your-api-key>
```
</Accordion>
<Accordion title="Google">
```python Code
GEMINI_API_KEY=<your-api-key>
```
</Accordion>
<Accordion title="Azure">
```python Code
AZURE_API_KEY=<your-api-key> # "my-azure-api-key"
AZURE_API_BASE=<your-resource-url> # "https://example-endpoint.openai.azure.com"
AZURE_API_VERSION=<api-version> # "2023-05-15"
AZURE_AD_TOKEN=<your-azure-ad-token> # Optional
AZURE_API_TYPE=<your-azure-api-type> # Optional
```
</Accordion>
<Accordion title="AWS Bedrock">
```python Code
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=<your-region>
```
</Accordion>
<Accordion title="Mistral">
```python Code
MISTRAL_API_KEY=<your-api-key>
```
</Accordion>
<Accordion title="Groq">
```python Code
GROQ_API_KEY=<your-api-key>
```
</Accordion>
<Accordion title="IBM watsonx.ai">
```python Code
WATSONX_URL=<your-url> # (required) Base URL of your WatsonX instance
WATSONX_APIKEY=<your-apikey> # (required) IBM cloud API key
WATSONX_TOKEN=<your-token> # (optional) IAM auth token (alternative to APIKEY)
WATSONX_PROJECT_ID=<your-project-id> # (optional) Project ID of your WatsonX instance
WATSONX_DEPLOYMENT_SPACE_ID=<your-space-id> # (optional) ID of deployment space for deployed models
```
</Accordion>
</AccordionGroup>
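Assuming the provider prefix of the `llm:` string selects which credentials apply, the mapping above can be sketched as a small helper. Both `REQUIRED_ENV` and `missing_vars` are illustrative names, not part of crewAI:

```python Code
# Provider prefix -> required credential env vars, per the accordions above.
REQUIRED_ENV = {
    "openai": ["OPENAI_API_KEY"],
    "anthropic": ["ANTHROPIC_API_KEY"],
    "gemini": ["GEMINI_API_KEY"],
    "azure": ["AZURE_API_KEY", "AZURE_API_BASE", "AZURE_API_VERSION"],
    "bedrock": ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_DEFAULT_REGION"],
    "mistral": ["MISTRAL_API_KEY"],
    "groq": ["GROQ_API_KEY"],
    "watsonx": ["WATSONX_URL", "WATSONX_APIKEY"],
}

def missing_vars(model: str, env: dict) -> list:
    """Return the credential variables still unset for a given `llm:` string."""
    # A bare model name (no slash) defaults to the openai provider.
    provider = model.split("/")[0] if "/" in model else "openai"
    return [var for var in REQUIRED_ENV.get(provider, []) if var not in env]
```

For example, `missing_vars("azure/gpt-4o-mini", os.environ)` tells you which Azure variables you still need to export before running the crew.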
### 3. Custom LLM Objects
Pass a custom LLM implementation or object from another library.
@@ -102,7 +195,7 @@ When configuring an LLM for your agent, you have access to a wide range of param
These are examples of how to configure LLMs for your agent.
<AccordionGroup>
<Accordion title="OpenAI">
```python Code
@@ -133,10 +226,10 @@ These are examples of how to configure LLMs for your agent.
model="cerebras/llama-3.1-70b",
api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
</Accordion>
<Accordion title="Ollama (Local LLMs)">
CrewAI supports using Ollama for running open-source models locally:
@@ -150,7 +243,7 @@ These are examples of how to configure LLMs for your agent.
agent = Agent(
    llm=LLM(
        model="ollama/llama3.1",
        base_url="http://localhost:11434"
    ),
    ...
@@ -164,7 +257,7 @@ These are examples of how to configure LLMs for your agent.
from crewai import LLM

llm = LLM(
    model="groq/llama3-8b-8192",
    api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
@@ -189,7 +282,7 @@ These are examples of how to configure LLMs for your agent.
from crewai import LLM

llm = LLM(
    model="fireworks_ai/accounts/fireworks/models/llama-v3-70b-instruct",
    api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
@@ -224,6 +317,29 @@ These are examples of how to configure LLMs for your agent.
</Accordion>
<Accordion title="IBM watsonx.ai">
You can use IBM watsonx.ai by setting the following environment variables:
```python Code
WATSONX_URL=<your-url>
WATSONX_APIKEY=<your-apikey>
WATSONX_PROJECT_ID=<your-project-id>
```
You can then define your agents' LLMs by updating the `agents.yml` file:
```yaml Code
researcher:
  role: Research Specialist
  goal: Conduct comprehensive research and analysis to gather relevant information,
    synthesize findings, and produce well-documented insights.
  backstory: A dedicated research professional with years of experience in academic
    investigation, literature review, and data analysis, known for thorough and
    methodical approaches to complex research questions.
  verbose: true
  llm: watsonx/meta-llama/llama-3-1-70b-instruct
```
You can also set up agents more dynamically with a base-level LLM instance, like below:
```python Code
from crewai import LLM
@@ -247,7 +363,7 @@ These are examples of how to configure LLMs for your agent.
api_key="your-api-key-here",
base_url="your_api_endpoint"
)
agent = Agent(llm=llm, ...)
```
</Accordion>
</AccordionGroup>

View File

@@ -18,6 +18,7 @@ reason, and learn from past interactions.
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
| **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
| **User Memory** | Stores user-specific information and preferences, enhancing personalization and user experience. |
## How Memory Systems Empower Agents
@@ -92,6 +93,47 @@ my_crew = Crew(
)
```
## Integrating Mem0 for Enhanced User Memory
[Mem0](https://mem0.ai/) is a self-improving memory layer for LLM applications, enabling personalized AI experiences.
To include user-specific memory, get your API key [here](https://app.mem0.ai/dashboard/api-keys) and refer to the [docs](https://docs.mem0.ai/platform/quickstart#4-1-create-memories) for adding user preferences.
```python Code
import os
from crewai import Crew, Process
from mem0 import MemoryClient
# Set environment variables for Mem0
os.environ["MEM0_API_KEY"] = "m0-xx"
# Step 1: Record preferences based on past conversation or user input
client = MemoryClient()
messages = [
    {"role": "user", "content": "Hi there! I'm planning a vacation and could use some advice."},
    {"role": "assistant", "content": "Hello! I'd be happy to help with your vacation planning. What kind of destination do you prefer?"},
    {"role": "user", "content": "I am more of a beach person than a mountain person."},
    {"role": "assistant", "content": "That's interesting. Do you like hotels or Airbnb?"},
    {"role": "user", "content": "I like Airbnb more."},
]
client.add(messages, user_id="john")

# Step 2: Create a Crew with User Memory
crew = Crew(
    agents=[...],
    tasks=[...],
    verbose=True,
    process=Process.sequential,
    memory=True,
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john"},
    },
)
```
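The `memory_config` shape used above can be sanity-checked with a small hypothetical helper (`mem0_user_id` is our name, not part of crewAI):

```python Code
from typing import Any, Dict, Optional

def mem0_user_id(memory_config: Optional[Dict[str, Any]]) -> Optional[str]:
    """Extract the user_id a Crew would hand to Mem0, or None if Mem0 isn't configured."""
    if memory_config and memory_config.get("provider") == "mem0":
        # The user_id lives under the nested "config" key, as in the Crew above.
        return (memory_config.get("config") or {}).get("user_id")
    return None
```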
## Additional Embedding Providers

poetry.lock generated
View File

@@ -1597,12 +1597,12 @@ files = [
google-auth = ">=2.14.1,<3.0.dev0"
googleapis-common-protos = ">=1.56.2,<2.0.dev0"
grpcio = [
{version = ">=1.49.1,<2.0dev", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
{version = ">=1.33.2,<2.0dev", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
]
grpcio-status = [
{version = ">=1.49.1,<2.0.dev0", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
{version = ">=1.33.2,<2.0.dev0", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
]
proto-plus = ">=1.22.3,<2.0.0dev"
protobuf = ">=3.19.5,<3.20.0 || >3.20.0,<3.20.1 || >3.20.1,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<6.0.0.dev0"
@@ -4286,8 +4286,8 @@ files = [
[package.dependencies]
numpy = [
{version = ">=1.23.2", markers = "python_version == \"3.11\""},
{version = ">=1.22.4", markers = "python_version < \"3.11\""},
{version = ">=1.26.0", markers = "python_version >= \"3.12\""},
]
python-dateutil = ">=2.8.2"

View File

@@ -1,6 +1,6 @@
[project]
name = "crewai"
version = "0.79.4"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<=3.13"
@@ -16,7 +16,7 @@ dependencies = [
"opentelemetry-exporter-otlp-proto-http>=1.22.0",
"instructor>=1.3.3",
"regex>=2024.9.11",
"crewai-tools>=0.14.0",
"click>=8.1.7",
"python-dotenv>=1.0.0",
"appdirs>=1.4.4",
@@ -37,7 +37,7 @@ Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = ["crewai-tools>=0.14.0"]
agentops = ["agentops>=0.3.0"]
fastembed = ["fastembed>=0.4.1"]
pdfplumber = [
@@ -49,6 +49,7 @@ pandas = [
openpyxl = [
"openpyxl>=3.1.5",
]
mem0 = ["mem0ai>=0.1.29"]
[tool.uv]
dev-dependencies = [
@@ -62,7 +63,7 @@ dev-dependencies = [
"mkdocs-material-extensions>=1.3.1",
"pillow>=10.2.0",
"cairosvg>=2.7.1",
"crewai-tools>=0.14.0",
"pytest>=8.0.0",
"pytest-vcr>=1.0.2",
"python-dotenv>=1.0.0",

View File

@@ -16,7 +16,7 @@ warnings.filterwarnings(
category=UserWarning,
module="pydantic.main",
)
__version__ = "0.79.4"
__all__ = [
"Agent",
"Crew",

View File

@@ -132,6 +132,11 @@ class Agent(BaseAgent):
@model_validator(mode="after")
def post_init_setup(self):
    self.agent_ops_agent_name = self.role
unnacepted_attributes = [
"AWS_ACCESS_KEY_ID",
"AWS_SECRET_ACCESS_KEY",
"AWS_REGION_NAME",
]
# Handle different cases for self.llm
if isinstance(self.llm, str):
@@ -155,39 +160,44 @@ class Agent(BaseAgent):
if api_base:
    llm_params["base_url"] = api_base

set_provider = model_name.split("/")[0] if "/" in model_name else "openai"
# Iterate over all environment variables to find matching API keys or use defaults
for provider, env_vars in ENV_VARS.items():
    if provider == set_provider:
        for env_var in env_vars:
            if env_var["key_name"] in unnacepted_attributes:
                continue
            # Check if the environment variable is set
            if "key_name" in env_var:
                env_value = os.environ.get(env_var["key_name"])
                if env_value:
                    # Map key names containing "API_KEY" to "api_key"
                    key_name = (
                        "api_key"
                        if "API_KEY" in env_var["key_name"]
                        else env_var["key_name"]
                    )
                    # Map key names containing "API_BASE" to "api_base"
                    key_name = (
                        "api_base"
                        if "API_BASE" in env_var["key_name"]
                        else key_name
                    )
                    # Map key names containing "API_VERSION" to "api_version"
                    key_name = (
                        "api_version"
                        if "API_VERSION" in env_var["key_name"]
                        else key_name
                    )
                    llm_params[key_name] = env_value
            # Check for default values if the environment variable is not set
            elif env_var.get("default", False):
                for key, value in env_var.items():
                    if key not in ["prompt", "key_name", "default"]:
                        # Only add default if the key is already set in os.environ
                        if key in os.environ:
                            llm_params[key] = value

self.llm = LLM(**llm_params)
else:
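The key-name normalization in the loop above reduces to a small pure function; this standalone sketch mirrors the three mappings (the helper name is ours, not crewAI's):

```python
def map_env_key(key_name: str) -> str:
    """Normalize a provider env-var name to the LLM constructor's parameter name."""
    if "API_KEY" in key_name:
        return "api_key"
    if "API_BASE" in key_name:
        return "api_base"
    if "API_VERSION" in key_name:
        return "api_version"
    # Anything else passes through unchanged.
    return key_name
```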
@@ -267,9 +277,11 @@ class Agent(BaseAgent):
if self.crew and self.crew.memory:
    contextual_memory = ContextualMemory(
        self.crew.memory_config,
        self.crew._short_term_memory,
        self.crew._long_term_memory,
        self.crew._entity_memory,
        self.crew._user_memory,
    )
    memory = contextual_memory.build_context_for_task(task, context)
    if memory.strip() != "":

View File

@@ -4,6 +4,7 @@ from crewai.types.usage_metrics import UsageMetrics
class TokenProcess:
    total_tokens: int = 0
    prompt_tokens: int = 0
    cached_prompt_tokens: int = 0
    completion_tokens: int = 0
    successful_requests: int = 0
@@ -15,6 +16,9 @@ class TokenProcess:
        self.completion_tokens = self.completion_tokens + tokens
        self.total_tokens = self.total_tokens + tokens

    def sum_cached_prompt_tokens(self, tokens: int):
        self.cached_prompt_tokens = self.cached_prompt_tokens + tokens

    def sum_successful_requests(self, requests: int):
        self.successful_requests = self.successful_requests + requests
@@ -22,6 +26,7 @@ class TokenProcess:
        return UsageMetrics(
            total_tokens=self.total_tokens,
            prompt_tokens=self.prompt_tokens,
            cached_prompt_tokens=self.cached_prompt_tokens,
            completion_tokens=self.completion_tokens,
            successful_requests=self.successful_requests,
        )
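The cached-token accounting added here can be sketched as a standalone class; `TokenCounter` is a simplified stand-in (a plain dict replaces `UsageMetrics`):

```python
class TokenCounter:
    """Minimal sketch of TokenProcess-style accounting."""

    def __init__(self):
        self.total_tokens = 0
        self.prompt_tokens = 0
        self.cached_prompt_tokens = 0
        self.completion_tokens = 0
        self.successful_requests = 0

    def sum_prompt_tokens(self, tokens: int):
        self.prompt_tokens += tokens
        self.total_tokens += tokens

    def sum_completion_tokens(self, tokens: int):
        self.completion_tokens += tokens
        self.total_tokens += tokens

    def sum_cached_prompt_tokens(self, tokens: int):
        # Mirrors the diff: cached prompt tokens are tracked separately
        # and do not increment the running total.
        self.cached_prompt_tokens += tokens

    def get_summary(self) -> dict:
        return {
            "total_tokens": self.total_tokens,
            "prompt_tokens": self.prompt_tokens,
            "cached_prompt_tokens": self.cached_prompt_tokens,
            "completion_tokens": self.completion_tokens,
            "successful_requests": self.successful_requests,
        }
```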

View File

@@ -145,25 +145,26 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
formatted_answer.result = action_result
self._show_logs(formatted_answer)

if self.step_callback:
    self.step_callback(formatted_answer)

if self._should_force_answer():
    if self.have_forced_answer:
        return AgentFinish(
            thought="",
            output=self._i18n.errors(
                "force_final_answer_error"
            ).format(formatted_answer.text),
            text=formatted_answer.text,
        )
    else:
        formatted_answer.text += (
            f'\n{self._i18n.errors("force_final_answer")}'
        )
        self.have_forced_answer = True
        self.messages.append(
            self._format_msg(formatted_answer.text, role="assistant")
        )

except OutputParserException as e:
    self.messages.append({"role": "user", "content": e.error})
@@ -332,9 +333,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
if self.crew is not None and hasattr(self.crew, "_train_iteration"):
    train_iteration = self.crew._train_iteration
    if agent_id in training_data and isinstance(train_iteration, int):
        training_data[agent_id][train_iteration]["improved_output"] = (
            result.output
        )
        training_handler.save(training_data)
    else:
        self._logger.log(
@@ -385,4 +386,5 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
return CrewAgentParser(agent=self.agent).parse(answer)

def _format_msg(self, prompt: str, role: str = "user") -> Dict[str, str]:
    prompt = prompt.rstrip()
    return {"role": role, "content": prompt}

View File

@@ -34,7 +34,9 @@ class AuthenticationCommand:
"scope": "openid",
"audience": AUTH0_AUDIENCE,
}
response = requests.post(
    url=self.DEVICE_CODE_URL, data=device_code_payload, timeout=20
)
response.raise_for_status()
return response.json()
@@ -54,7 +56,7 @@ class AuthenticationCommand:
attempts = 0
while True and attempts < 5:
    response = requests.post(self.TOKEN_URL, data=token_payload, timeout=30)
    token_data = response.json()
    if response.status_code == 200:

View File

@@ -24,7 +24,6 @@ def run_crew() -> None:
f"Please run `crewai update` to update your pyproject.toml to use uv.",
fg="red",
)
try:
    subprocess.run(command, capture_output=False, text=True, check=True)

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.79.4,<1.0.0"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.79.4,<1.0.0",
]
[project.scripts]

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.79.4,<1.0.0" }
asyncio = "*"
[tool.poetry.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.79.4,<1.0.0"
]
[project.scripts]

View File

@@ -5,6 +5,6 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.79.4"
]

View File

@@ -27,6 +27,7 @@ from crewai.llm import LLM
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.memory.user.user_memory import UserMemory
from crewai.process import Process
from crewai.task import Task
from crewai.tasks.conditional_task import ConditionalTask
@@ -71,6 +72,7 @@ class Crew(BaseModel):
manager_llm: The language model that will run the manager agent.
manager_agent: Custom agent that will be used as manager.
memory: Whether the crew should use memory to store memories of its execution.
memory_config: Configuration for the memory to be used for the crew.
cache: Whether the crew should use a cache to store the results of the tools execution.
function_calling_llm: The language model that will run the tool calling for all the agents.
process: The process flow that the crew will follow (e.g., sequential, hierarchical).
@@ -94,6 +96,7 @@ class Crew(BaseModel):
_short_term_memory: Optional[InstanceOf[ShortTermMemory]] = PrivateAttr()
_long_term_memory: Optional[InstanceOf[LongTermMemory]] = PrivateAttr()
_entity_memory: Optional[InstanceOf[EntityMemory]] = PrivateAttr()
_user_memory: Optional[InstanceOf[UserMemory]] = PrivateAttr()
_train: Optional[bool] = PrivateAttr(default=False)
_train_iteration: Optional[int] = PrivateAttr()
_inputs: Optional[Dict[str, Any]] = PrivateAttr(default=None)
@@ -114,6 +117,10 @@ class Crew(BaseModel):
default=False,
description="Whether the crew should use memory to store memories of its execution",
)
memory_config: Optional[Dict[str, Any]] = Field(
default=None,
description="Configuration for the memory to be used for the crew.",
)
short_term_memory: Optional[InstanceOf[ShortTermMemory]] = Field(
    default=None,
    description="An Instance of the ShortTermMemory to be used by the Crew",
)
@@ -126,7 +133,11 @@ class Crew(BaseModel):
default=None,
description="An Instance of the EntityMemory to be used by the Crew",
)
user_memory: Optional[InstanceOf[UserMemory]] = Field(
    default=None,
    description="An instance of the UserMemory to be used by the Crew to store/fetch memories of a specific user.",
)
embedder: Optional[dict] = Field(
    default=None,
    description="Configuration for the embedder to be used for the crew.",
)
@@ -238,13 +249,22 @@ class Crew(BaseModel):
self._short_term_memory = (
    self.short_term_memory
    if self.short_term_memory
    else ShortTermMemory(
        crew=self,
        embedder_config=self.embedder,
    )
)
self._entity_memory = (
    self.entity_memory
    if self.entity_memory
    else EntityMemory(crew=self, embedder_config=self.embedder)
)
if hasattr(self, "memory_config") and self.memory_config is not None:
    self._user_memory = (
        self.user_memory if self.user_memory else UserMemory(crew=self)
    )
else:
    self._user_memory = None
return self

@model_validator(mode="after")

View File

@@ -118,12 +118,12 @@ class LLM:
litellm.drop_params = True
litellm.set_verbose = False
self.set_callbacks(callbacks)

def call(self, messages: List[Dict[str, str]], callbacks: List[Any] = []) -> str:
    with suppress_warnings():
        if callbacks and len(callbacks) > 0:
            self.set_callbacks(callbacks)
        try:
            params = {
@@ -181,3 +181,15 @@ class LLM:
def get_context_window_size(self) -> int:
    # Only using 75% of the context window size to avoid cutting the message in the middle
    return int(LLM_CONTEXT_WINDOW_SIZES.get(self.model, 8192) * 0.75)
def set_callbacks(self, callbacks: List[Any]):
callback_types = [type(callback) for callback in callbacks]
for callback in litellm.success_callback[:]:
if type(callback) in callback_types:
litellm.success_callback.remove(callback)
for callback in litellm._async_success_callback[:]:
if type(callback) in callback_types:
litellm._async_success_callback.remove(callback)
litellm.callbacks = callbacks

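The new `set_callbacks` evicts any already-registered callback of the same type before installing the new list, so constructing an `LLM` repeatedly does not stack duplicate handlers. A standalone sketch of that eviction logic, with a plain list standing in for litellm's module-level registry (`TokenHandler` is an illustrative stand-in):

```python
# A plain list stands in for litellm.success_callback; the real method
# also walks litellm._async_success_callback the same way.
class TokenHandler:
    pass


success_callback = [TokenHandler(), "langfuse"]
new_callbacks = [TokenHandler()]

# Remove any existing callback whose type matches one of the new ones.
callback_types = [type(cb) for cb in new_callbacks]
for cb in success_callback[:]:
    if type(cb) in callback_types:
        success_callback.remove(cb)

print(success_callback)  # ['langfuse'] — the stale TokenHandler is gone
```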
View File

@@ -1,5 +1,6 @@
 from .entity.entity_memory import EntityMemory
 from .long_term.long_term_memory import LongTermMemory
 from .short_term.short_term_memory import ShortTermMemory
+from .user.user_memory import UserMemory

-__all__ = ["EntityMemory", "LongTermMemory", "ShortTermMemory"]
+__all__ = ["UserMemory", "EntityMemory", "LongTermMemory", "ShortTermMemory"]

View File

@@ -1,13 +1,25 @@
-from typing import Optional
+from typing import Optional, Dict, Any

-from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory
+from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory, UserMemory


 class ContextualMemory:
-    def __init__(self, stm: ShortTermMemory, ltm: LongTermMemory, em: EntityMemory):
+    def __init__(
+        self,
+        memory_config: Optional[Dict[str, Any]],
+        stm: ShortTermMemory,
+        ltm: LongTermMemory,
+        em: EntityMemory,
+        um: UserMemory,
+    ):
+        if memory_config is not None:
+            self.memory_provider = memory_config.get("provider")
+        else:
+            self.memory_provider = None
         self.stm = stm
         self.ltm = ltm
         self.em = em
+        self.um = um

     def build_context_for_task(self, task, context) -> str:
         """
@@ -23,6 +35,8 @@ class ContextualMemory:
         context.append(self._fetch_ltm_context(task.description))
         context.append(self._fetch_stm_context(query))
         context.append(self._fetch_entity_context(query))
+        if self.memory_provider == "mem0":
+            context.append(self._fetch_user_context(query))
         return "\n".join(filter(None, context))

     def _fetch_stm_context(self, query) -> str:
@@ -32,9 +46,11 @@ class ContextualMemory:
         """
         stm_results = self.stm.search(query)
         formatted_results = "\n".join(
-            [f"- {result['context']}" for result in stm_results]
+            [
+                f"- {result['memory'] if self.memory_provider == 'mem0' else result['context']}"
+                for result in stm_results
+            ]
         )
-        print("formatted_results stm", formatted_results)
         return f"Recent Insights:\n{formatted_results}" if stm_results else ""

     def _fetch_ltm_context(self, task) -> Optional[str]:
@@ -54,8 +70,6 @@ class ContextualMemory:
         formatted_results = list(dict.fromkeys(formatted_results))
         formatted_results = "\n".join([f"- {result}" for result in formatted_results])  # type: ignore # Incompatible types in assignment (expression has type "str", variable has type "list[str]")
-        print("formatted_results ltm", formatted_results)
         return f"Historical Data:\n{formatted_results}" if ltm_results else ""

     def _fetch_entity_context(self, query) -> str:
@@ -65,7 +79,26 @@ class ContextualMemory:
         """
         em_results = self.em.search(query)
         formatted_results = "\n".join(
-            [f"- {result['context']}" for result in em_results]  # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
+            [
+                f"- {result['memory'] if self.memory_provider == 'mem0' else result['context']}"
+                for result in em_results
+            ]  # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
         )
-        print("formatted_results em", formatted_results)
         return f"Entities:\n{formatted_results}" if em_results else ""

+    def _fetch_user_context(self, query: str) -> str:
+        """
+        Fetches and formats relevant user information from User Memory.
+        Args:
+            query (str): The search query to find relevant user memories.
+        Returns:
+            str: Formatted user memories as bullet points, or an empty string if none found.
+        """
+        user_memories = self.um.search(query)
+        if not user_memories:
+            return ""
+        formatted_memories = "\n".join(
+            f"- {result['memory']}" for result in user_memories
+        )
+        return f"User memories/preferences:\n{formatted_memories}"

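The provider-dependent key in the comprehensions above (mem0 search hits expose a `memory` key, while the default RAG storage exposes `context`) can be checked in isolation. `format_results` is a hypothetical helper, not part of crewAI:

```python
def format_results(results, memory_provider):
    # mem0 search hits carry a "memory" key; the default RAG storage
    # carries "context" — the same branch used in the hunk above.
    return "\n".join(
        f"- {r['memory'] if memory_provider == 'mem0' else r['context']}"
        for r in results
    )


print(format_results([{"memory": "user likes dogs"}], "mem0"))  # - user likes dogs
print(format_results([{"context": "recent insight"}], None))    # - recent insight
```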
View File

@@ -11,21 +11,43 @@ class EntityMemory(Memory):
     """

     def __init__(self, crew=None, embedder_config=None, storage=None):
-        storage = (
-            storage
-            if storage
-            else RAGStorage(
-                type="entities",
-                allow_reset=True,
-                embedder_config=embedder_config,
-                crew=crew,
-            )
-        )
+        if hasattr(crew, "memory_config") and crew.memory_config is not None:
+            self.memory_provider = crew.memory_config.get("provider")
+        else:
+            self.memory_provider = None
+
+        if self.memory_provider == "mem0":
+            try:
+                from crewai.memory.storage.mem0_storage import Mem0Storage
+            except ImportError:
+                raise ImportError(
+                    "Mem0 is not installed. Please install it with `pip install mem0ai`."
+                )
+            storage = Mem0Storage(type="entities", crew=crew)
+        else:
+            storage = (
+                storage
+                if storage
+                else RAGStorage(
+                    type="entities",
+                    allow_reset=False,
+                    embedder_config=embedder_config,
+                    crew=crew,
+                )
+            )
         super().__init__(storage)

     def save(self, item: EntityMemoryItem) -> None:  # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"
         """Saves an entity item into the SQLite storage."""
-        data = f"{item.name}({item.type}): {item.description}"
+        if self.memory_provider == "mem0":
+            data = f"""
+            Remember details about the following entity:
+            Name: {item.name}
+            Type: {item.type}
+            Entity Description: {item.description}
+            """
+        else:
+            data = f"{item.name}({item.type}): {item.description}"
         super().save(data, item.metadata)

     def reset(self) -> None:

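The `save` branch above writes a verbose natural-language record for mem0 and keeps the compact `Name(type): description` form otherwise. A standalone sketch of that formatting (`format_entity` is a hypothetical helper, not crewAI API):

```python
def format_entity(name, type_, description, memory_provider):
    # Mirrors EntityMemory.save in the hunk above: a verbose record for
    # mem0, the compact form for the default storage.
    if memory_provider == "mem0":
        return (
            "Remember details about the following entity:\n"
            f"Name: {name}\n"
            f"Type: {type_}\n"
            f"Entity Description: {description}"
        )
    return f"{name}({type_}): {description}"


print(format_entity("Acme", "company", "a long-time client", None))
# Acme(company): a long-time client
```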
View File

@@ -23,5 +23,12 @@ class Memory:
         self.storage.save(value, metadata)

-    def search(self, query: str) -> List[Dict[str, Any]]:
-        return self.storage.search(query)
+    def search(
+        self,
+        query: str,
+        limit: int = 3,
+        score_threshold: float = 0.35,
+    ) -> List[Any]:
+        return self.storage.search(
+            query=query, limit=limit, score_threshold=score_threshold
+        )

View File

@@ -14,13 +14,27 @@ class ShortTermMemory(Memory):
     """

     def __init__(self, crew=None, embedder_config=None, storage=None):
-        storage = (
-            storage
-            if storage
-            else RAGStorage(
-                type="short_term", embedder_config=embedder_config, crew=crew
-            )
-        )
+        if hasattr(crew, "memory_config") and crew.memory_config is not None:
+            self.memory_provider = crew.memory_config.get("provider")
+        else:
+            self.memory_provider = None
+
+        if self.memory_provider == "mem0":
+            try:
+                from crewai.memory.storage.mem0_storage import Mem0Storage
+            except ImportError:
+                raise ImportError(
+                    "Mem0 is not installed. Please install it with `pip install mem0ai`."
+                )
+            storage = Mem0Storage(type="short_term", crew=crew)
+        else:
+            storage = (
+                storage
+                if storage
+                else RAGStorage(
+                    type="short_term", embedder_config=embedder_config, crew=crew
+                )
+            )
         super().__init__(storage)

     def save(
@@ -30,11 +44,20 @@ class ShortTermMemory(Memory):
         agent: Optional[str] = None,
     ) -> None:
         item = ShortTermMemoryItem(data=value, metadata=metadata, agent=agent)
+        if self.memory_provider == "mem0":
+            item.data = f"Remember the following insights from Agent run: {item.data}"
+
         super().save(value=item.data, metadata=item.metadata, agent=item.agent)

-    def search(self, query: str, score_threshold: float = 0.35):
-        return self.storage.search(query=query, score_threshold=score_threshold)  # type: ignore # BUG? The reference is to the parent class, but the parent class does not have this parameters
+    def search(
+        self,
+        query: str,
+        limit: int = 3,
+        score_threshold: float = 0.35,
+    ):
+        return self.storage.search(
+            query=query, limit=limit, score_threshold=score_threshold
+        )  # type: ignore # BUG? The reference is to the parent class, but the parent class does not have this parameters

     def reset(self) -> None:
         try:
View File

@@ -7,8 +7,10 @@ class Storage:
     def save(self, value: Any, metadata: Dict[str, Any]) -> None:
         pass

-    def search(self, key: str) -> List[Dict[str, Any]]:  # type: ignore
-        pass
+    def search(
+        self, query: str, limit: int, score_threshold: float
+    ) -> Dict[str, Any] | List[Any]:
+        return {}

     def reset(self) -> None:
         pass

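The widened `Storage.search` signature above (query, limit, score threshold) is what custom backends now need to implement. A minimal in-memory sketch against that signature; `InMemoryStorage` is illustrative, not part of crewAI:

```python
from typing import Any, Dict, List


class InMemoryStorage:
    """Toy backend implementing the widened search() contract above."""

    def __init__(self):
        self.items: List[Dict[str, Any]] = []

    def save(self, value: Any, metadata: Dict[str, Any]) -> None:
        # A fixed score keeps the example deterministic.
        self.items.append({"context": value, "metadata": metadata, "score": 1.0})

    def search(self, query: str, limit: int, score_threshold: float) -> List[Any]:
        hits = [i for i in self.items if query in str(i["context"])]
        return [i for i in hits if i["score"] >= score_threshold][:limit]


store = InMemoryStorage()
store.save("cats are independent", {})
print(store.search("cats", limit=3, score_threshold=0.35))  # one hit
```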
View File

@@ -103,7 +103,7 @@ class KickoffTaskOutputsSQLiteStorage:
                 else value
             )
-        query = f"UPDATE latest_kickoff_task_outputs SET {', '.join(fields)} WHERE task_index = ?"
+        query = f"UPDATE latest_kickoff_task_outputs SET {', '.join(fields)} WHERE task_index = ?"  # nosec
         values.append(task_index)
         cursor.execute(query, tuple(values))

View File

@@ -83,7 +83,7 @@ class LTMSQLiteStorage:
                 WHERE task_description = ?
                 ORDER BY datetime DESC, score ASC
                 LIMIT {latest_n}
-                """,
+                """,  # nosec
                 (task_description,),
             )
             rows = cursor.fetchall()

View File

@@ -0,0 +1,104 @@
import os
from typing import Any, Dict, List
from mem0 import MemoryClient
from crewai.memory.storage.interface import Storage
class Mem0Storage(Storage):
"""
Extends Storage to handle embedding and searching across entities using Mem0.
"""
def __init__(self, type, crew=None):
super().__init__()
if type not in ["user", "short_term", "long_term", "entities"]:
            raise ValueError(
                "Invalid type for Mem0Storage. Must be one of: 'user', 'short_term', 'long_term', or 'entities'."
            )
self.memory_type = type
self.crew = crew
self.memory_config = crew.memory_config
# User ID is required for user memory type "user" since it's used as a unique identifier for the user.
user_id = self._get_user_id()
if type == "user" and not user_id:
raise ValueError("User ID is required for user memory type")
# API key in memory config overrides the environment variable
mem0_api_key = self.memory_config.get("config", {}).get("api_key") or os.getenv(
"MEM0_API_KEY"
)
self.memory = MemoryClient(api_key=mem0_api_key)
def _sanitize_role(self, role: str) -> str:
"""
Sanitizes agent roles to ensure valid directory names.
"""
return role.replace("\n", "").replace(" ", "_").replace("/", "_")
def save(self, value: Any, metadata: Dict[str, Any]) -> None:
user_id = self._get_user_id()
agent_name = self._get_agent_name()
if self.memory_type == "user":
self.memory.add(value, user_id=user_id, metadata={**metadata})
elif self.memory_type == "short_term":
agent_name = self._get_agent_name()
self.memory.add(
value, agent_id=agent_name, metadata={"type": "short_term", **metadata}
)
elif self.memory_type == "long_term":
agent_name = self._get_agent_name()
self.memory.add(
value,
agent_id=agent_name,
infer=False,
metadata={"type": "long_term", **metadata},
)
elif self.memory_type == "entities":
entity_name = None
self.memory.add(
value, user_id=entity_name, metadata={"type": "entity", **metadata}
)
def search(
self,
query: str,
limit: int = 3,
score_threshold: float = 0.35,
) -> List[Any]:
params = {"query": query, "limit": limit}
if self.memory_type == "user":
user_id = self._get_user_id()
params["user_id"] = user_id
elif self.memory_type == "short_term":
agent_name = self._get_agent_name()
params["agent_id"] = agent_name
params["metadata"] = {"type": "short_term"}
elif self.memory_type == "long_term":
agent_name = self._get_agent_name()
params["agent_id"] = agent_name
params["metadata"] = {"type": "long_term"}
elif self.memory_type == "entities":
agent_name = self._get_agent_name()
params["agent_id"] = agent_name
params["metadata"] = {"type": "entity"}
# Discard the filters for now since we create the filters
# automatically when the crew is created.
results = self.memory.search(**params)
return [r for r in results if r["score"] >= score_threshold]
def _get_user_id(self):
if self.memory_type == "user":
if hasattr(self, "memory_config") and self.memory_config is not None:
return self.memory_config.get("config", {}).get("user_id")
else:
return None
return None
def _get_agent_name(self):
agents = self.crew.agents if self.crew else []
agents = [self._sanitize_role(agent.role) for agent in agents]
agents = "_".join(agents)
return agents

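`Mem0Storage.search` above ends by discarding results below `score_threshold`. A standalone sketch of that filter; the result dicts are illustrative stand-ins for Mem0 API responses:

```python
# Illustrative stand-ins for Mem0 search results (each carries a
# "memory" payload and a relevance "score").
results = [
    {"memory": "prefers short answers", "score": 0.81},
    {"memory": "mentioned a dog once", "score": 0.12},
]
score_threshold = 0.35

# Same filter as the final line of Mem0Storage.search.
kept = [r for r in results if r["score"] >= score_threshold]
print([r["memory"] for r in kept])  # ['prefers short answers']
```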
View File

View File

@@ -0,0 +1,45 @@
from typing import Any, Dict, Optional
from crewai.memory.memory import Memory
class UserMemory(Memory):
"""
UserMemory class for handling user memory storage and retrieval.
Inherits from the Memory class and utilizes an instance of a class that
    adheres to the Storage interface for data storage, specifically working with
MemoryItem instances.
"""
def __init__(self, crew=None):
try:
from crewai.memory.storage.mem0_storage import Mem0Storage
except ImportError:
raise ImportError(
"Mem0 is not installed. Please install it with `pip install mem0ai`."
)
storage = Mem0Storage(type="user", crew=crew)
super().__init__(storage)
def save(
self,
value,
metadata: Optional[Dict[str, Any]] = None,
agent: Optional[str] = None,
) -> None:
        # TODO: Change this function since we want to take care of the case where we save memories for the user
data = f"Remember the details about the user: {value}"
super().save(data, metadata)
def search(
self,
query: str,
limit: int = 3,
score_threshold: float = 0.35,
):
results = super().search(
query=query,
limit=limit,
score_threshold=score_threshold,
)
return results

View File

@@ -0,0 +1,8 @@
from typing import Any, Dict, Optional
class UserMemoryItem:
def __init__(self, data: Any, user: str, metadata: Optional[Dict[str, Any]] = None):
self.data = data
self.user = user
self.metadata = metadata if metadata is not None else {}

View File

View File

@@ -8,6 +8,7 @@ class UsageMetrics(BaseModel):
     Attributes:
         total_tokens: Total number of tokens used.
         prompt_tokens: Number of tokens used in prompts.
+        cached_prompt_tokens: Number of cached prompt tokens used.
         completion_tokens: Number of tokens used in completions.
         successful_requests: Number of successful requests made.
     """
@@ -16,6 +17,9 @@ class UsageMetrics(BaseModel):
     prompt_tokens: int = Field(
         default=0, description="Number of tokens used in prompts."
     )
+    cached_prompt_tokens: int = Field(
+        default=0, description="Number of cached prompt tokens used."
+    )
     completion_tokens: int = Field(
         default=0, description="Number of tokens used in completions."
     )
@@ -32,5 +36,6 @@ class UsageMetrics(BaseModel):
         """
         self.total_tokens += usage_metrics.total_tokens
         self.prompt_tokens += usage_metrics.prompt_tokens
+        self.cached_prompt_tokens += usage_metrics.cached_prompt_tokens
         self.completion_tokens += usage_metrics.completion_tokens
         self.successful_requests += usage_metrics.successful_requests

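The accumulation above, including the new `cached_prompt_tokens` counter, can be sketched with a plain dataclass standing in for the pydantic model (`UsageMetricsSketch` is illustrative, not the crewAI class):

```python
from dataclasses import dataclass


@dataclass
class UsageMetricsSketch:
    """Dataclass stand-in for the UsageMetrics model above."""

    total_tokens: int = 0
    prompt_tokens: int = 0
    cached_prompt_tokens: int = 0
    completion_tokens: int = 0
    successful_requests: int = 0

    def add_usage_metrics(self, other: "UsageMetricsSketch") -> None:
        # Field-by-field accumulation, mirroring the diff above.
        self.total_tokens += other.total_tokens
        self.prompt_tokens += other.prompt_tokens
        self.cached_prompt_tokens += other.cached_prompt_tokens
        self.completion_tokens += other.completion_tokens
        self.successful_requests += other.successful_requests


totals = UsageMetricsSketch()
totals.add_usage_metrics(
    UsageMetricsSketch(
        total_tokens=120,
        prompt_tokens=80,
        cached_prompt_tokens=30,
        completion_tokens=40,
        successful_requests=1,
    )
)
print(totals.cached_prompt_tokens)  # 30
```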
View File

@@ -16,7 +16,11 @@ class FileHandler:
     def log(self, **kwargs):
         now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
-        message = f"{now}: " + ", ".join([f"{key}=\"{value}\"" for key, value in kwargs.items()]) + "\n"
+        message = (
+            f"{now}: "
+            + ", ".join([f'{key}="{value}"' for key, value in kwargs.items()])
+            + "\n"
+        )
         with open(self._path, "a", encoding="utf-8") as file:
             file.write(message + "\n")
@@ -63,7 +67,7 @@ class PickleHandler:
         with open(self.file_path, "rb") as file:
             try:
-                return pickle.load(file)
+                return pickle.load(file)  # nosec
             except EOFError:
                 return {}  # Return an empty dictionary if the file is empty or corrupted
             except Exception:

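The reflowed `FileHandler.log` message above is behavior-preserving; rebuilding it with a fixed timestamp shows the exact line format (the kwargs here are illustrative):

```python
from datetime import datetime

# Fixed timestamp keeps the output deterministic; format mirrors
# FileHandler.log in the hunk above.
now = datetime(2024, 11, 14, 12, 0, 0).strftime("%Y-%m-%d %H:%M:%S")
message = (
    f"{now}: "
    + ", ".join(
        [f'{key}="{value}"' for key, value in {"task": "demo", "status": "ok"}.items()]
    )
    + "\n"
)
print(message)  # 2024-11-14 12:00:00: task="demo", status="ok"
```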
View File

@@ -1,5 +1,5 @@
 from litellm.integrations.custom_logger import CustomLogger
+from litellm.types.utils import Usage

 from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
@@ -11,8 +11,11 @@ class TokenCalcHandler(CustomLogger):
         if self.token_cost_process is None:
             return

+        usage: Usage = response_obj["usage"]
         self.token_cost_process.sum_successful_requests(1)
-        self.token_cost_process.sum_prompt_tokens(response_obj["usage"].prompt_tokens)
-        self.token_cost_process.sum_completion_tokens(
-            response_obj["usage"].completion_tokens
-        )
+        self.token_cost_process.sum_prompt_tokens(usage.prompt_tokens)
+        self.token_cost_process.sum_completion_tokens(usage.completion_tokens)
+        if usage.prompt_tokens_details:
+            self.token_cost_process.sum_cached_prompt_tokens(
+                usage.prompt_tokens_details.cached_tokens
+            )

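Cached tokens above are only summed when `prompt_tokens_details` is present on the usage object, since not every model reports them. A standalone sketch of that guard with stand-in classes (`FakeUsage` and `PromptTokensDetails` are illustrative, not litellm types):

```python
class PromptTokensDetails:
    def __init__(self, cached_tokens):
        self.cached_tokens = cached_tokens


class FakeUsage:
    def __init__(self, prompt_tokens, completion_tokens, prompt_tokens_details=None):
        self.prompt_tokens = prompt_tokens
        self.completion_tokens = completion_tokens
        # May be None when the model does not report cached tokens.
        self.prompt_tokens_details = prompt_tokens_details


def cached_tokens_of(usage):
    # Mirrors the guard in the handler above.
    if usage.prompt_tokens_details:
        return usage.prompt_tokens_details.cached_tokens
    return 0


print(cached_tokens_of(FakeUsage(100, 40)))                           # 0
print(cached_tokens_of(FakeUsage(100, 40, PromptTokensDetails(25))))  # 25
```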
View File

@@ -10,7 +10,8 @@ interactions:
      criteria for your final answer: 1 bullet point about dog that''s under 15 words.\nyou
      MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
      This is VERY important to you, use the tools available and give your best Final
-      Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o"}'
+      Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
+      ["\nObservation:"], "stream": false}'
    headers:
      accept:
      - application/json
@@ -19,49 +20,50 @@ interactions:
      connection:
      - keep-alive
      content-length:
-      - '869'
+      - '919'
      content-type:
      - application/json
-      cookie:
-      - __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
-        _cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
-      - OpenAI/Python 1.47.0
+      - OpenAI/Python 1.52.1
      x-stainless-arch:
-      - arm64
+      - x64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
-      - MacOS
+      - Linux
      x-stainless-package-version:
-      - 1.47.0
+      - 1.52.1
      x-stainless-raw-response:
      - 'true'
+      x-stainless-retry-count:
+      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
-      - 3.11.7
+      - 3.11.9
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
content: "{\n \"id\": \"chatcmpl-AB7auGDrAVE0iXSBBhySZp3xE8gvP\",\n \"object\": body:
\"chat.completion\",\n \"created\": 1727214164,\n \"model\": \"gpt-4o-2024-05-13\",\n string: !!binary |
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": H4sIAAAAAAAAA4xSy27bMBC86ysWPEuB7ciV7VuAIkUObQ+59QFhTa0kttQuS9Jx08D/XkhyLAVJ
\"assistant\",\n \"content\": \"I now can give a great answer\\nFinal gV4EaGdnMLPDpwRAmUrtQOkWo+6czW7u41q2t3+cvCvuPvxafSG+58XHTzXlxWeV9gzZ/yAdn1lX
Answer: Dogs are unparalleled in loyalty and companionship to humans.\",\n \"refusal\": WjpnKRrhEdaeMFKvuiyul3m+uV7lA9BJRbanNS5muWSdYZOtFqs8WxTZcnNmt2I0BbWDrwkAwNPw
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n 7X1yRb/VDhbp86SjELAhtbssASgvtp8oDMGEiBxVOoFaOBIP1u+A5QgaGRrzQIDQ9LYBORzJA3zj
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 175,\n \"completion_tokens\": W8No4Wb438F7aQKgJ7DyiBb6zMhGOKRA3CJrww10xBEttIQ2toBcgTyQR2vhSNZmezLcXM39eKoP
21,\n \"total_tokens\": 196,\n \"completion_tokens_details\": {\n \"reasoning_tokens\": Afub8MHa8/x0CWilcV724Yxf5rVhE9rSEwbhPkyI4tSAnhKA78MhDy9uo5yXzsUyyk/iMHSzHvXU
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n" 1N+Ejo0BqCgR7Yy13aZv6JUVRTQ2zKpQGnVL1USdesNDZWQGJLPUr928pT0mN9z8j/wEaE0uUlU6
T5XRLxNPa5765/2vtcuVB8MqPIZIXVkbbsg7b8bHVbuyXm9xs8xXRa2SU/IXAAD//wMAq2ZCBWoD
AAA=
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
-      - 8c85f22ddda01cf3-GRU
+      - 8e19bf36db158761-GRU
      Connection:
      - keep-alive
      Content-Encoding:
@@ -69,19 +71,27 @@ interactions:
      Content-Type:
      - application/json
      Date:
-      - Tue, 24 Sep 2024 21:42:44 GMT
+      - Tue, 12 Nov 2024 21:52:04 GMT
      Server:
      - cloudflare
+      Set-Cookie:
+      - __cf_bm=MkvcnvacGpTyn.y0OkFRoFXuAwg4oxjMhViZJTt9mw0-1731448324-1.0.1.1-oekkH_B0xOoPnIFw15LpqFCkZ2cu7VBTJVLDGylan4I67NjX.tlPvOiX9kvtP5Acewi28IE2IwlwtrZWzCH3vw;
+        path=/; expires=Tue, 12-Nov-24 22:22:04 GMT; domain=.api.openai.com; HttpOnly;
+        Secure; SameSite=None
+      - _cfuvid=4.17346mfw5npZfYNbCx3Vj1VAVPy.tH0Jm2gkTteJ8-1731448324998-0.0.1.1-604800000;
+        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
+      alt-svc:
+      - h3=":443"; ma=86400
      openai-organization:
-      - crewai-iuxna1
+      - user-tqfegqsiobpvvjmn0giaipdq
      openai-processing-ms:
-      - '349'
+      - '601'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
@@ -89,19 +99,20 @@ interactions:
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
-      - '30000000'
+      - '200000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
-      - '29999792'
+      - '199793'
      x-ratelimit-reset-requests:
-      - 6ms
+      - 8.64s
      x-ratelimit-reset-tokens:
-      - 0s
+      - 62ms
      x-request-id:
-      - req_4c8cd76fdfba7b65e5ce85397b33c22b
+      - req_77fb166b4e272bfd45c37c08d2b93b0c
-    http_version: HTTP/1.1
-    status_code: 200
+    status:
+      code: 200
+      message: OK
- request:
    body: '{"messages": [{"role": "system", "content": "You are cat Researcher. You
      have a lot of experience with cat.\nYour personal goal is: Express hot takes
@@ -113,7 +124,8 @@ interactions:
      criteria for your final answer: 1 bullet point about cat that''s under 15 words.\nyou
      MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
      This is VERY important to you, use the tools available and give your best Final
-      Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o"}'
+      Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
+      ["\nObservation:"], "stream": false}'
    headers:
      accept:
      - application/json
@@ -122,49 +134,53 @@ interactions:
      connection:
      - keep-alive
      content-length:
-      - '869'
+      - '919'
      content-type:
      - application/json
      cookie:
-      - __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
-        _cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
+      - __cf_bm=MkvcnvacGpTyn.y0OkFRoFXuAwg4oxjMhViZJTt9mw0-1731448324-1.0.1.1-oekkH_B0xOoPnIFw15LpqFCkZ2cu7VBTJVLDGylan4I67NjX.tlPvOiX9kvtP5Acewi28IE2IwlwtrZWzCH3vw;
+        _cfuvid=4.17346mfw5npZfYNbCx3Vj1VAVPy.tH0Jm2gkTteJ8-1731448324998-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
-      - OpenAI/Python 1.47.0
+      - OpenAI/Python 1.52.1
      x-stainless-arch:
-      - arm64
+      - x64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
-      - MacOS
+      - Linux
      x-stainless-package-version:
-      - 1.47.0
+      - 1.52.1
      x-stainless-raw-response:
      - 'true'
+      x-stainless-retry-count:
+      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
-      - 3.11.7
+      - 3.11.9
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
content: "{\n \"id\": \"chatcmpl-AB7auNbAqjT3rgBX92rhxBLuhaLBj\",\n \"object\": body:
\"chat.completion\",\n \"created\": 1727214164,\n \"model\": \"gpt-4o-2024-05-13\",\n string: !!binary |
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": H4sIAAAAAAAAA4xSy27bMBC86ysWPFuB7MhN6ltQIGmBnlL00BcEmlxJ21JLhlzFLQL/eyH5IRlt
\"assistant\",\n \"content\": \"Thought: I now can give a great answer\\nFinal gV4EaGZnMLPLlwxAkVUbUKbVYrrg8rsPsg4P+Orxs9XvPz0U8eP966dS6sdo3wa1GBR++x2NnFRX
Answer: Cats are highly independent, agile, and intuitive creatures beloved xnfBoZDnA20iasHBdXlzvSzL2+vVeiQ6b9ENsiZIXvq8I6Z8VazKvLjJl7dHdevJYFIb+JIBALyM
by millions worldwide.\",\n \"refusal\": null\n },\n \"logprobs\": 3yEnW/ypNlAsTkiHKekG1eY8BKCidwOidEqURLOoxUQaz4I8Rn8H7HdgNENDzwgamiE2aE47jABf
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": +Z5YO7gb/zfwRksCHRGGGAHZIg/D1GmXFiBtpGfiBjyDtEgR/I5BMHYJNFvomZ56hIAxedaOhDBd
175,\n \"completion_tokens\": 28,\n \"total_tokens\": 203,\n \"completion_tokens_details\": zYNFrPukh+Vw79wR35+bOt+E6LfpyJ/xmphSW0XUyfPQKokPamT3GcC3caP9xZJUiL4LUon/gZzG
{\n \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n" I60Pfmo65MSuTqR40W6GF8c7XPpVFkWTS7ObKKNNi3aSTgfUvSU/I7JZ6z/T/M370Jy4+R/7iTAG
g6CtQkRL5rLxNBZxeOf/GjtveQys0q8k2FU1cYMxRDq8sjpUxVYXdrkq66XK9tlvAAAA//8DAIjK
KzJzAwAA
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
-      - 8c85f2321c1c1cf3-GRU
+      - 8e19bf3fae118761-GRU
      Connection:
      - keep-alive
      Content-Encoding:
@@ -172,7 +188,7 @@ interactions:
      Content-Type:
      - application/json
      Date:
-      - Tue, 24 Sep 2024 21:42:45 GMT
+      - Tue, 12 Nov 2024 21:52:05 GMT
      Server:
      - cloudflare
      Transfer-Encoding:
@@ -181,10 +197,12 @@ interactions:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
+      alt-svc:
+      - h3=":443"; ma=86400
      openai-organization:
-      - crewai-iuxna1
+      - user-tqfegqsiobpvvjmn0giaipdq
      openai-processing-ms:
-      - '430'
+      - '464'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
@@ -192,19 +210,20 @@ interactions:
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
-      - '30000000'
+      - '200000'
      x-ratelimit-remaining-requests:
-      - '9999'
+      - '9998'
      x-ratelimit-remaining-tokens:
-      - '29999792'
+      - '199792'
      x-ratelimit-reset-requests:
-      - 6ms
+      - 16.369s
      x-ratelimit-reset-tokens:
-      - 0s
+      - 62ms
      x-request-id:
-      - req_ace859b7d9e83d9fa7753ce23bb03716
+      - req_91706b23d0ef23458ba63ec18304cd28
-    http_version: HTTP/1.1
-    status_code: 200
+    status:
+      code: 200
+      message: OK
- request:
    body: '{"messages": [{"role": "system", "content": "You are apple Researcher.
      You have a lot of experience with apple.\nYour personal goal is: Express hot
@@ -217,7 +236,7 @@ interactions:
      under 15 words.\nyou MUST return the actual complete content as the final answer,
      not a summary.\n\nBegin! This is VERY important to you, use the tools available
      and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
-      "gpt-4o"}'
+      "gpt-4o-mini", "stop": ["\nObservation:"], "stream": false}'
    headers:
      accept:
      - application/json
@@ -226,49 +245,53 @@ interactions:
      connection:
      - keep-alive
      content-length:
-      - '879'
+      - '929'
      content-type:
      - application/json
      cookie:
-      - __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
-        _cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
+      - __cf_bm=MkvcnvacGpTyn.y0OkFRoFXuAwg4oxjMhViZJTt9mw0-1731448324-1.0.1.1-oekkH_B0xOoPnIFw15LpqFCkZ2cu7VBTJVLDGylan4I67NjX.tlPvOiX9kvtP5Acewi28IE2IwlwtrZWzCH3vw;
+        _cfuvid=4.17346mfw5npZfYNbCx3Vj1VAVPy.tH0Jm2gkTteJ8-1731448324998-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
-      - OpenAI/Python 1.47.0
+      - OpenAI/Python 1.52.1
      x-stainless-arch:
-      - arm64
+      - x64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
-      - MacOS
+      - Linux
      x-stainless-package-version:
-      - 1.47.0
+      - 1.52.1
      x-stainless-raw-response:
      - 'true'
+      x-stainless-retry-count:
+      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
-      - 3.11.7
+      - 3.11.9
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
content: "{\n \"id\": \"chatcmpl-AB7avZ0yqY18ukQS7SnLkZydsx72b\",\n \"object\": body:
\"chat.completion\",\n \"created\": 1727214165,\n \"model\": \"gpt-4o-2024-05-13\",\n string: !!binary |
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": H4sIAAAAAAAAA4xSPW/bMBDd9SsOXLpIgeTITarNS4t26JJubSHQ5IliSh1ZHv0RBP7vhSTHctAU
\"assistant\",\n \"content\": \"I now can give a great answer.\\n\\nFinal 6CJQ7909vHd3zxmAsFo0IFQvkxqCKzYPaf1b2/hhW+8PR9N9Kh9W5Zdhjebr4zeRjx1++4gqvXTd
Answer: Apples are incredibly versatile, nutritious, and a staple in diets globally.\",\n KD8Eh8l6mmkVUSYcVau726qu729X7ydi8Brd2GZCKmpfDJZssSpXdVHeFdX9ubv3ViGLBr5nAADP
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\": 03f0SRqPooEyf0EGZJYGRXMpAhDRuxERktlykpREvpDKU0KarH8G8gdQksDYPYIEM9oGSXzACPCD
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 175,\n \"completion_tokens\": PlqSDjbTfwObEBy+Y0Dl+YkTDmApoYkyIUMvoz7IiDmw79L8kqSBMe7HMMAoB4fM7ikHpF6SsmRg
25,\n \"total_tokens\": 200,\n \"completion_tokens_details\": {\n \"reasoning_tokens\": xxgBjwGjRVJ4c+00YrdjOU6Lds6d8dMluvMmRL/lM3/BO0uW+zaiZE9jTE4+iIk9ZQA/pxHvXk1N
0\n }\n },\n \"system_fingerprint\": \"fp_a5d11b2ef2\"\n}\n" hOiHkNrkfyHxtLX1rCeWzS7svEsAkXyS7govq/wNvVZjktbx1ZKEkqpHvbQuG5U7bf0VkV2l/tvN
W9pzckvmf+QXQikMCXUbImqrXideyiKOh/+vssuUJ8NiPpO2s2Qwhmjns+tCW25lqatV3VUiO2V/
AAAA//8DAPtpFJCEAwAA
headers: headers:
CF-Cache-Status: CF-Cache-Status:
- DYNAMIC - DYNAMIC
CF-RAY: CF-RAY:
- 8c85f2369a761cf3-GRU - 8e19bf447ba48761-GRU
Connection: Connection:
- keep-alive - keep-alive
Content-Encoding: Content-Encoding:
@@ -276,7 +299,7 @@ interactions:
Content-Type: Content-Type:
- application/json - application/json
Date: Date:
- Tue, 24 Sep 2024 21:42:46 GMT - Tue, 12 Nov 2024 21:52:06 GMT
Server: Server:
- cloudflare - cloudflare
Transfer-Encoding: Transfer-Encoding:
@@ -285,10 +308,12 @@ interactions:
- nosniff - nosniff
access-control-expose-headers: access-control-expose-headers:
- X-Request-ID - X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization: openai-organization:
- crewai-iuxna1 - user-tqfegqsiobpvvjmn0giaipdq
openai-processing-ms: openai-processing-ms:
- '389' - '655'
openai-version: openai-version:
- '2020-10-01' - '2020-10-01'
strict-transport-security: strict-transport-security:
@@ -296,17 +321,18 @@ interactions:
x-ratelimit-limit-requests: x-ratelimit-limit-requests:
- '10000' - '10000'
x-ratelimit-limit-tokens: x-ratelimit-limit-tokens:
- '30000000' - '200000'
x-ratelimit-remaining-requests: x-ratelimit-remaining-requests:
- '9999' - '9997'
x-ratelimit-remaining-tokens: x-ratelimit-remaining-tokens:
- '29999791' - '199791'
x-ratelimit-reset-requests: x-ratelimit-reset-requests:
- 6ms - 24.239s
x-ratelimit-reset-tokens: x-ratelimit-reset-tokens:
- 0s - 62ms
x-request-id: x-request-id:
- req_0167388f0a7a7f1a1026409834ceb914 - req_a228208b0e965ecee334a6947d6c9e7c
http_version: HTTP/1.1 status:
status_code: 200 code: 200
message: OK
version: 1 version: 1
@@ -0,0 +1,205 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "Hello, world!"}], "model": "gpt-4o-mini",
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '101'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA4xSwWrcMBS8+ytedY6LvWvYZi8lpZSkBJLSQiChGK307FUi66nSc9Ml7L8H2e56
l7bQiw8zb8Yzg14yAGG0WINQW8mq8za/+Oqv5MUmXv+8+/Hl3uO3j59u1efreHO+/PAszpKCNo+o
+LfqraLOW2RDbqRVQMmYXMvVsqyWy1VVDERHGm2StZ7zivLOOJMvikWVF6u8fDept2QURrGGhwwA
4GX4ppxO4y+xhsFrQDqMUbYo1ocjABHIJkTIGE1k6ViczaQix+iG6JdoLb2BS3oGJR1cwSiAHfXA
pOXu/bEwYNNHmcK73toJ3x+SWGp9oE2c+APeGGfitg4oI7n018jkxcDuM4DvQ+P+pITwgTrPNdMT
umRYlqOdmHeeyfOJY2JpZ3gxjXRqVmtkaWw8GkwoqbaoZ+W8ruy1oSMiO6r8Z5a/eY+1jWv/x34m
lELPqGsfUBt12nc+C5ge4b/ODhMPgUXcRcauboxrMfhgxifQ+LrYyEKXi6opRbbPXgEAAP//AwAM
DMWoEAMAAA==
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8e185b2c1b790303-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 12 Nov 2024 17:49:00 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=l.QrRLcNZkML_KSfxjir6YCV35B8GNTitBTNh7cPGc4-1731433740-1.0.1.1-j1ejlmykyoI8yk6i6pQjtPoovGzfxI2f5vG6u0EqodQMjCvhbHfNyN_wmYkeT._BMvFi.zDQ8m_PqEHr8tSdEQ;
path=/; expires=Tue, 12-Nov-24 18:19:00 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=jcCDyMK__Fd0V5DMeqt9yXdlKc7Hsw87a1K01pZu9l0-1731433740848-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- user-tqfegqsiobpvvjmn0giaipdq
openai-processing-ms:
- '322'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '200000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '199978'
x-ratelimit-reset-requests:
- 8.64s
x-ratelimit-reset-tokens:
- 6ms
x-request-id:
- req_037288753767e763a51a04eae757ca84
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Hello, world from another agent!"}],
"model": "gpt-4o-mini", "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '120'
content-type:
- application/json
cookie:
- __cf_bm=l.QrRLcNZkML_KSfxjir6YCV35B8GNTitBTNh7cPGc4-1731433740-1.0.1.1-j1ejlmykyoI8yk6i6pQjtPoovGzfxI2f5vG6u0EqodQMjCvhbHfNyN_wmYkeT._BMvFi.zDQ8m_PqEHr8tSdEQ;
_cfuvid=jcCDyMK__Fd0V5DMeqt9yXdlKc7Hsw87a1K01pZu9l0-1731433740848-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA4xSy27bMBC86yu2PFuBZAt14UvRU5MA7aVAEKAIBJpcSUwoLkuu6jiB/z3QI5aM
tkAvPMzsDGZ2+ZoACKPFDoRqJKvW2/TLD3+z//oiD8dfL7d339zvW125x9zX90/3mVj1Cto/ouJ3
1ZWi1ltkQ26kVUDJ2Lvm201ebDbbIh+IljTaXlZ7TgtKW+NMus7WRZpt0/zTpG7IKIxiBz8TAIDX
4e1zOo3PYgfZ6h1pMUZZo9idhwBEINsjQsZoIkvHYjWTihyjG6Jfo7X0Ab4bhcAEipxDxXAw3IB0
xA0GkDU6voJrOoCSDm5gNIUjdcCk5fHz0jxg1UXZF3SdtRN+Oqe1VPtA+zjxZ7wyzsSmDCgjuT5Z
ZPJiYE8JwMOwle6iqPCBWs8l0xO63jAvRjsx32JBfpxIJpZ2xjfTJi/dSo0sjY2LrQolVYN6Vs4n
kJ02tCCSRec/w/zNe+xtXP0/9jOhFHpGXfqA2qjLwvNYwP6n/mvsvOMhsIjHyNiWlXE1Bh/M+E8q
X2Z7mel8XVS5SE7JGwAAAP//AwA/cK4yNQMAAA==
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8e185b31398a0303-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 12 Nov 2024 17:49:02 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- user-tqfegqsiobpvvjmn0giaipdq
openai-processing-ms:
- '889'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '200000'
x-ratelimit-remaining-requests:
- '9998'
x-ratelimit-remaining-tokens:
- '199975'
x-ratelimit-reset-requests:
- 16.489s
x-ratelimit-reset-tokens:
- 7ms
x-request-id:
- req_bde3810b36a4859688e53d1df64bdd20
status:
code: 200
message: OK
version: 1
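
The `!!binary` response bodies recorded in these cassettes are gzip-compressed JSON stored as base64 (the leading `H4sI` is the base64 encoding of the gzip magic bytes `1f 8b 08`). A minimal sketch of how such a body can be decoded, round-tripped here with a made-up payload rather than one of the recorded bodies:

```python
import base64
import gzip
import json


def decode_cassette_body(b64_text: str) -> dict:
    """Decode a VCR `!!binary` scalar: base64 text wrapping a gzip stream of JSON."""
    return json.loads(gzip.decompress(base64.b64decode(b64_text)))


# Round-trip with a hypothetical payload to show the two encoding layers.
payload = {"id": "chatcmpl-demo", "object": "chat.completion"}
encoded = base64.b64encode(gzip.compress(json.dumps(payload).encode())).decode()

assert encoded.startswith("H4sI")  # gzip magic bytes, base64-encoded
assert decode_cassette_body(encoded) == payload
```

The same two-step decode applies to the recorded bodies above, since VCR serializes compressed HTTP responses this way.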
@@ -564,6 +564,7 @@ def test_crew_kickoff_usage_metrics():
    assert result.token_usage.prompt_tokens > 0
    assert result.token_usage.completion_tokens > 0
    assert result.token_usage.successful_requests > 0
+   assert result.token_usage.cached_prompt_tokens == 0


def test_agents_rpm_is_never_set_if_crew_max_RPM_is_not_set():
@@ -1280,10 +1281,11 @@ def test_agent_usage_metrics_are_captured_for_hierarchical_process():
    assert result.raw == "Howdy!"
    assert result.token_usage == UsageMetrics(
-       total_tokens=2626,
-       prompt_tokens=2482,
-       completion_tokens=144,
-       successful_requests=5,
+       total_tokens=1673,
+       prompt_tokens=1562,
+       completion_tokens=111,
+       successful_requests=3,
+       cached_prompt_tokens=0
    )
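
The new `cached_prompt_tokens` field in these assertions appears to track prompt tokens the provider reports as served from its prompt cache. As a rough stand-in for the shape being asserted (field names taken from the diff; the real `UsageMetrics` in crewAI is a Pydantic model, and the `add` method here is illustrative):

```python
from dataclasses import dataclass


@dataclass
class UsageMetrics:
    # Field names mirror the assertions above; the real class is a Pydantic model.
    total_tokens: int = 0
    prompt_tokens: int = 0
    cached_prompt_tokens: int = 0
    completion_tokens: int = 0
    successful_requests: int = 0

    def add(self, other: "UsageMetrics") -> None:
        # Illustrative aggregation of per-request usage into a crew-wide total.
        self.total_tokens += other.total_tokens
        self.prompt_tokens += other.prompt_tokens
        self.cached_prompt_tokens += other.cached_prompt_tokens
        self.completion_tokens += other.completion_tokens
        self.successful_requests += other.successful_requests


total = UsageMetrics()
total.add(UsageMetrics(total_tokens=1673, prompt_tokens=1562,
                       completion_tokens=111, successful_requests=3))
assert total.cached_prompt_tokens == 0
```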
@@ -1777,26 +1779,22 @@ def test_crew_train_success(
        ]
    )

-   crew_training_handler.assert_has_calls(
-       [
-           mock.call("training_data.pkl"),
-           mock.call().load(),
-           mock.call("trained_agents_data.pkl"),
-           mock.call().save_trained_data(
-               agent_id="Researcher",
-               trained_data=task_evaluator().evaluate_training_data().model_dump(),
-           ),
-           mock.call("trained_agents_data.pkl"),
-           mock.call().save_trained_data(
-               agent_id="Senior Writer",
-               trained_data=task_evaluator().evaluate_training_data().model_dump(),
-           ),
-           mock.call(),
-           mock.call().load(),
-           mock.call(),
-           mock.call().load(),
-       ]
-   )
+   crew_training_handler.assert_any_call("training_data.pkl")
+   crew_training_handler().load.assert_called()
+
+   crew_training_handler.assert_any_call("trained_agents_data.pkl")
+   crew_training_handler().load.assert_called()
+
+   crew_training_handler().save_trained_data.assert_has_calls([
+       mock.call(
+           agent_id="Researcher",
+           trained_data=task_evaluator().evaluate_training_data().model_dump(),
+       ),
+       mock.call(
+           agent_id="Senior Writer",
+           trained_data=task_evaluator().evaluate_training_data().model_dump(),
+       )
+   ])
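
The refactor above replaces one `assert_has_calls` over the handler's entire call history with targeted `assert_any_call` checks plus a per-method `assert_has_calls`, so unrelated calls or a changed call order no longer break the test. A self-contained illustration with plain `unittest.mock` (the handler and call names here are stand-ins, not the real crewAI objects):

```python
from unittest import mock

handler = mock.MagicMock()

# Simulate what code under test might do, in some order.
handler("training_data.pkl")
handler().load()
handler("trained_agents_data.pkl")
handler().save_trained_data(agent_id="Researcher", trained_data={"score": 9})

# assert_any_call passes if the call occurred anywhere in the history,
# regardless of ordering or of other, unrelated calls.
handler.assert_any_call("training_data.pkl")
handler.assert_any_call("trained_agents_data.pkl")

# Scoping assert_has_calls to a single method checks only that method's
# calls, rather than pinning the mock's full call sequence.
handler().save_trained_data.assert_has_calls(
    [mock.call(agent_id="Researcher", trained_data={"score": 9})]
)
```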
def test_crew_train_error():
tests/llm_test.py
@@ -0,0 +1,30 @@
import pytest

from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.llm import LLM
from crewai.utilities.token_counter_callback import TokenCalcHandler


@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_callback_replacement():
    llm = LLM(model="gpt-4o-mini")
    calc_handler_1 = TokenCalcHandler(token_cost_process=TokenProcess())
    calc_handler_2 = TokenCalcHandler(token_cost_process=TokenProcess())

    llm.call(
        messages=[{"role": "user", "content": "Hello, world!"}],
        callbacks=[calc_handler_1],
    )
    usage_metrics_1 = calc_handler_1.token_cost_process.get_summary()

    llm.call(
        messages=[{"role": "user", "content": "Hello, world from another agent!"}],
        callbacks=[calc_handler_2],
    )
    usage_metrics_2 = calc_handler_2.token_cost_process.get_summary()

    # The first handler should not have been updated
    assert usage_metrics_1.successful_requests == 1
    assert usage_metrics_2.successful_requests == 1
    assert usage_metrics_1 == calc_handler_1.token_cost_process.get_summary()
@@ -0,0 +1,270 @@
interactions:
- request:
body: ''
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: GET
uri: https://api.mem0.ai/v1/memories/?user_id=test
response:
body:
string: '[]'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b477138bad847b9-BOM
Connection:
- keep-alive
Content-Length:
- '2'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:11 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=uuyH2foMJVDpV%2FH52g1q%2FnvXKe3dBKVzvsK0mqmSNezkiszNR9OgrEJfVqmkX%2FlPFRP2sH4zrOuzGo6k%2FjzsjYJczqSWJUZHN2pPujiwnr1E9W%2BdLGKmG6%2FqPrGYAy2SBRWkkJVWsTO3OQ%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- GET, POST, DELETE, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"batch": [{"properties": {"python_version": "3.12.4 (v3.12.4:8e8a4baf65,
Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]", "os": "darwin",
"os_version": "Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030",
"os_release": "23.4.0", "processor": "arm", "machine": "arm64", "function":
"mem0.client.main.MemoryClient", "$lib": "posthog-python", "$lib_version": "3.5.0",
"$geoip_disable": true}, "timestamp": "2024-08-17T06:00:11.526640+00:00", "context":
{}, "distinct_id": "fd411bd3-99a2-42d6-acd7-9fca8ad09580", "event": "client.init"}],
"historical_migration": false, "sentAt": "2024-08-17T06:00:11.701621+00:00",
"api_key": "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '740'
Content-Type:
- application/json
User-Agent:
- posthog-python/3.5.0
method: POST
uri: https://us.i.posthog.com/batch/
response:
body:
string: '{"status":"Ok"}'
headers:
Connection:
- keep-alive
Content-Length:
- '15'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:12 GMT
access-control-allow-credentials:
- 'true'
server:
- envoy
vary:
- origin, access-control-request-method, access-control-request-headers
x-envoy-upstream-service-time:
- '69'
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Remember the following insights
from Agent run: test value with provider"}], "metadata": {"task": "test_task_provider",
"agent": "test_agent_provider"}, "app_id": "Researcher"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '219'
content-type:
- application/json
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: POST
uri: https://api.mem0.ai/v1/memories/
response:
body:
string: '{"message":"ok"}'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b477140282547b9-BOM
Connection:
- keep-alive
Content-Length:
- '16'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:13 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=FRjJKSk3YxVj03wA7S05H8ts35KnWfqS3wb6Rfy4kVZ4BgXfw7nJbm92wI6vEv5fWcAcHVnOlkJDggs11B01BMuB2k3a9RqlBi0dJNiMuk%2Bgm5xE%2BODMPWJctYNRwQMjNVbteUpS%2Fad8YA%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- GET, POST, DELETE, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"query": "test value with provider", "limit": 3, "app_id": "Researcher"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '73'
content-type:
- application/json
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: POST
uri: https://api.mem0.ai/v1/memories/search/
response:
body:
string: '[]'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b47714d083b47b9-BOM
Connection:
- keep-alive
Content-Length:
- '2'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:14 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=2DRWL1cdKdMvnE8vx1fPUGeTITOgSGl3N5g84PS6w30GRqpfz79BtSx6REhpnOiFV8kM6KGqln0iCZ5yoHc2jBVVJXhPJhQ5t0uerD9JFnkphjISrJOU1MJjZWneT9PlNABddxvVNCmluA%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- POST, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"batch": [{"properties": {"python_version": "3.12.4 (v3.12.4:8e8a4baf65,
Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]", "os": "darwin",
"os_version": "Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030",
"os_release": "23.4.0", "processor": "arm", "machine": "arm64", "function":
"mem0.client.main.MemoryClient", "$lib": "posthog-python", "$lib_version": "3.5.0",
"$geoip_disable": true}, "timestamp": "2024-08-17T06:00:13.593952+00:00", "context":
{}, "distinct_id": "fd411bd3-99a2-42d6-acd7-9fca8ad09580", "event": "client.add"}],
"historical_migration": false, "sentAt": "2024-08-17T06:00:13.858277+00:00",
"api_key": "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '739'
Content-Type:
- application/json
User-Agent:
- posthog-python/3.5.0
method: POST
uri: https://us.i.posthog.com/batch/
response:
body:
string: '{"status":"Ok"}'
headers:
Connection:
- keep-alive
Content-Length:
- '15'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:13 GMT
access-control-allow-credentials:
- 'true'
server:
- envoy
vary:
- origin, access-control-request-method, access-control-request-headers
x-envoy-upstream-service-time:
- '33'
status:
code: 200
message: OK
version: 1
uv.lock
@@ -604,7 +604,7 @@ wheels = [
[[package]]
name = "crewai"
-version = "0.76.9"
+version = "0.79.4"
source = { editable = "." }
dependencies = [
    { name = "appdirs" },
@@ -677,8 +677,8 @@ requires-dist = [
    { name = "auth0-python", specifier = ">=4.7.1" },
    { name = "chromadb", specifier = ">=0.4.24" },
    { name = "click", specifier = ">=8.1.7" },
-   { name = "crewai-tools", specifier = ">=0.13.4" },
-   { name = "crewai-tools", marker = "extra == 'tools'", specifier = ">=0.13.4" },
+   { name = "crewai-tools", specifier = ">=0.14.0" },
+   { name = "crewai-tools", marker = "extra == 'tools'", specifier = ">=0.14.0" },
    { name = "fastembed", marker = "extra == 'fastembed'", specifier = ">=0.4.1" },
    { name = "instructor", specifier = ">=1.3.3" },
    { name = "json-repair", specifier = ">=0.25.2" },
@@ -704,7 +704,7 @@ requires-dist = [
[package.metadata.requires-dev]
dev = [
    { name = "cairosvg", specifier = ">=2.7.1" },
-   { name = "crewai-tools", specifier = ">=0.13.4" },
+   { name = "crewai-tools", specifier = ">=0.14.0" },
    { name = "mkdocs", specifier = ">=1.4.3" },
    { name = "mkdocs-material", specifier = ">=9.5.7" },
    { name = "mkdocs-material-extensions", specifier = ">=1.3.1" },
@@ -723,7 +723,7 @@ dev = [
[[package]]
name = "crewai-tools"
-version = "0.13.4"
+version = "0.14.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "beautifulsoup4" },
@@ -741,9 +741,9 @@ dependencies = [
    { name = "requests" },
    { name = "selenium" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/64/bd/eff7b633a0b28ff4ed115adde1499e3dcc683e4f0b5c378a4c6f5c0c1bf6/crewai_tools-0.13.4.tar.gz", hash = "sha256:b6ac527633b7018471d892c21ac96bc961a86b6626d996b1ed7d53cd481d4505", size = 816588 }
+sdist = { url = "https://files.pythonhosted.org/packages/9b/6d/4fa91b481b120f83bb58f365203d8aa8564e8ced1035d79f8aedb7d71e2f/crewai_tools-0.14.0.tar.gz", hash = "sha256:510f3a194bcda4fdae4314bd775521964b5f229ddbe451e5d9e0216cae57f4e3", size = 815892 }
wheels = [
-   { url = "https://files.pythonhosted.org/packages/6c/40/93cd347d854059cf5e54a81b70f896deea7ad1f03e9c024549eb323c4da5/crewai_tools-0.13.4-py3-none-any.whl", hash = "sha256:eda78fe3c4df57676259d8dd6b2610fa31f89b90909512f15893adb57fb9e825", size = 463703 },
+   { url = "https://files.pythonhosted.org/packages/c8/ed/9f4e64e1507062957b0118085332d38b621c1000874baef2d1c4069bfd97/crewai_tools-0.14.0-py3-none-any.whl", hash = "sha256:0a804a828c29869c3af3253f4fc4c3967a3f80f06dab22e9bbe9526608a31564", size = 462980 },
]

[[package]]