Compare commits


11 Commits

Author SHA1 Message Date
theCyberTech
58af5c08f9 docs: add gh_token documentation to GithubSearchTool 2024-11-20 19:23:09 +08:00
Lorenze Jay
3dc02310b6 upgrade chroma and adjust embedder function generator (#1607)
* upgrade chroma and adjust embedder function generator

* >= version

* linted
2024-11-14 14:13:12 -08:00
Dev Khant
e70bc94ab6 Add support for retrieving user preferences and memories using Mem0 (#1209)
* Integrate Mem0

* Update src/crewai/memory/contextual/contextual_memory.py

Co-authored-by: Deshraj Yadav <deshraj@gatech.edu>

* pending commit for _fetch_user_memories

* update poetry.lock

* fixes mypy issues

* fix mypy checks

* New fixes for user_id

* remove memory_provider

* handle memory_provider

* checks for memory_config

* add mem0 to dependency

* Update pyproject.toml

Co-authored-by: Deshraj Yadav <deshraj@gatech.edu>

* update docs

* update doc

* bump mem0 version

* fix api error msg and mypy issue

* mypy fix

* resolve comments

* fix memory usage without mem0

* mem0 version bump

* lazy import mem0

---------

Co-authored-by: Deshraj Yadav <deshraj@gatech.edu>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-11-14 10:59:24 -08:00
Eduardo Chiarotti
9285ebf8a2 feat: Reduce level for Bandit and fix code to adapt (#1604) 2024-11-14 13:12:35 -03:00
Thiago Moretto
4ca785eb15 Merge pull request #1597 from crewAIInc/tm-fix-crew-train-test
Fix crew_train_success test
2024-11-13 10:52:49 -03:00
Thiago Moretto
c57cbd8591 Fix crew_train_success test 2024-11-13 10:47:49 -03:00
Thiago Moretto
7fb1289205 Merge pull request #1596 from crewAIInc/tm-recording-cached-prompt-tokens
Add cached prompt tokens info on usage metrics
2024-11-13 10:37:29 -03:00
Thiago Moretto
f02681ae01 Merge branch 'main' into tm-recording-cached-prompt-tokens 2024-11-13 10:19:02 -03:00
Thiago Moretto
c725105b1f do not include cached on total 2024-11-13 10:18:30 -03:00
Thiago Moretto
36aa4bcb46 Cached prompt tokens on usage metrics 2024-11-13 10:16:30 -03:00
Eduardo Chiarotti
b98f8f9fe1 fix: Step callback issue (#1595)
* fix: Step callback issue

* fix: Add empty thought since its required
2024-11-13 10:07:28 -03:00
30 changed files with 877 additions and 249 deletions

View File

@@ -19,5 +19,5 @@ jobs:
         run: pip install bandit
       - name: Run Bandit
-        run: bandit -c pyproject.toml -r src/ -lll
+        run: bandit -c pyproject.toml -r src/ -ll

View File

@@ -22,7 +22,8 @@ A crew in crewAI represents a collaborative group of agents working together to
 | **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
 | **Language** _(optional)_ | `language` | Language used for the crew, defaults to English. |
 | **Language File** _(optional)_ | `language_file` | Path to the language file to be used for the crew. |
-| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). Defaults to `False`. |
+| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
+| **Memory Config** _(optional)_ | `memory_config` | Configuration for the memory provider to be used by the crew. |
 | **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
 | **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
 | **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. Defaults to `False`. |

View File

@@ -18,6 +18,7 @@ reason, and learn from past interactions.
 | **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
 | **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
 | **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
+| **User Memory** | Stores user-specific information and preferences, enhancing personalization and user experience. |
 
 ## How Memory Systems Empower Agents
@@ -92,6 +93,47 @@ my_crew = Crew(
 )
 ```
+
+## Integrating Mem0 for Enhanced User Memory
+
+[Mem0](https://mem0.ai/) is a self-improving memory layer for LLM applications, enabling personalized AI experiences.
+
+To include user-specific memory you can get your API key [here](https://app.mem0.ai/dashboard/api-keys) and refer to the [docs](https://docs.mem0.ai/platform/quickstart#4-1-create-memories) for adding user preferences.
+
+```python Code
+import os
+from crewai import Crew, Process
+from mem0 import MemoryClient
+
+# Set environment variables for Mem0
+os.environ["MEM0_API_KEY"] = "m0-xx"
+
+# Step 1: Record preferences based on past conversation or user input
+client = MemoryClient()
+messages = [
+    {"role": "user", "content": "Hi there! I'm planning a vacation and could use some advice."},
+    {"role": "assistant", "content": "Hello! I'd be happy to help with your vacation planning. What kind of destination do you prefer?"},
+    {"role": "user", "content": "I am more of a beach person than a mountain person."},
+    {"role": "assistant", "content": "That's interesting. Do you like hotels or Airbnb?"},
+    {"role": "user", "content": "I like Airbnb more."},
+]
+client.add(messages, user_id="john")
+
+# Step 2: Create a Crew with User Memory
+crew = Crew(
+    agents=[...],
+    tasks=[...],
+    verbose=True,
+    process=Process.sequential,
+    memory=True,
+    memory_config={
+        "provider": "mem0",
+        "config": {"user_id": "john"},
+    },
+)
+```
+
 ## Additional Embedding Providers

View File

@@ -34,6 +34,7 @@ from crewai_tools import GithubSearchTool
 # Initialize the tool for semantic searches within a specific GitHub repository
 tool = GithubSearchTool(
     github_repo='https://github.com/example/repo',
+    gh_token='your_github_personal_access_token',
     content_types=['code', 'issue'] # Options: code, repo, pr, issue
 )
@@ -41,6 +42,7 @@ tool = GithubSearchTool(
 # Initialize the tool for semantic searches within a specific GitHub repository, so the agent can search any repository if it learns about during its execution
 tool = GithubSearchTool(
+    gh_token='your_github_personal_access_token',
     content_types=['code', 'issue'] # Options: code, repo, pr, issue
 )
 ```
@@ -48,6 +50,7 @@ tool = GithubSearchTool(
 ## Arguments
 - `github_repo` : The URL of the GitHub repository where the search will be conducted. This is a mandatory field and specifies the target repository for your search.
+- `gh_token` : Your GitHub Personal Access Token (PAT) required for authentication. You can create one in your GitHub account settings under Developer Settings > Personal Access Tokens.
 - `content_types` : Specifies the types of content to include in your search. You must provide a list of content types from the following options: `code` for searching within the code,
   `repo` for searching within the repository's general information, `pr` for searching within pull requests, and `issue` for searching within issues.
   This field is mandatory and allows tailoring the search to specific content types within the GitHub repository.
@@ -77,5 +80,4 @@ tool = GithubSearchTool(
 		),
 	),
-)
 )
 ```
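
For illustration, a minimal sketch of the new `gh_token` argument in use, reading the token from the environment rather than hard-coding it (the `GITHUB_TOKEN` variable name is an assumption of this example, not part of the tool):

```python
import os

from crewai_tools import GithubSearchTool

# Pass the PAT explicitly; here it is read from an (assumed) GITHUB_TOKEN
# environment variable instead of being embedded in source.
tool = GithubSearchTool(
    github_repo='https://github.com/example/repo',
    gh_token=os.environ["GITHUB_TOKEN"],
    content_types=['code', 'issue']  # Options: code, repo, pr, issue
)
```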

poetry.lock generated
View File

@@ -1597,12 +1597,12 @@ files = [
 google-auth = ">=2.14.1,<3.0.dev0"
 googleapis-common-protos = ">=1.56.2,<2.0.dev0"
 grpcio = [
-    {version = ">=1.49.1,<2.0dev", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
     {version = ">=1.33.2,<2.0dev", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
+    {version = ">=1.49.1,<2.0dev", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
 ]
 grpcio-status = [
-    {version = ">=1.49.1,<2.0.dev0", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
     {version = ">=1.33.2,<2.0.dev0", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
+    {version = ">=1.49.1,<2.0.dev0", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
 ]
 proto-plus = ">=1.22.3,<2.0.0dev"
 protobuf = ">=3.19.5,<3.20.0 || >3.20.0,<3.20.1 || >3.20.1,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<6.0.0.dev0"
@@ -4286,8 +4286,8 @@ files = [
 [package.dependencies]
 numpy = [
-    {version = ">=1.23.2", markers = "python_version == \"3.11\""},
     {version = ">=1.22.4", markers = "python_version < \"3.11\""},
+    {version = ">=1.23.2", markers = "python_version == \"3.11\""},
     {version = ">=1.26.0", markers = "python_version >= \"3.12\""},
 ]
 python-dateutil = ">=2.8.2"

View File

@@ -27,8 +27,8 @@ dependencies = [
     "pyvis>=0.3.2",
     "uv>=0.4.25",
     "tomli-w>=1.1.0",
-    "chromadb>=0.4.24",
     "tomli>=2.0.2",
+    "chromadb>=0.5.18",
 ]
 
 [project.urls]
@@ -39,6 +39,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
 [project.optional-dependencies]
 tools = ["crewai-tools>=0.14.0"]
 agentops = ["agentops>=0.3.0"]
+mem0 = ["mem0ai>=0.1.29"]
 
 [tool.uv]
 dev-dependencies = [

View File

@@ -262,9 +262,11 @@ class Agent(BaseAgent):
         if self.crew and self.crew.memory:
             contextual_memory = ContextualMemory(
+                self.crew.memory_config,
                 self.crew._short_term_memory,
                 self.crew._long_term_memory,
                 self.crew._entity_memory,
+                self.crew._user_memory,
             )
             memory = contextual_memory.build_context_for_task(task, context)
             if memory.strip() != "":

View File

@@ -4,6 +4,7 @@ from crewai.types.usage_metrics import UsageMetrics
 class TokenProcess:
     total_tokens: int = 0
     prompt_tokens: int = 0
+    cached_prompt_tokens: int = 0
     completion_tokens: int = 0
     successful_requests: int = 0
@@ -15,6 +16,9 @@ class TokenProcess:
         self.completion_tokens = self.completion_tokens + tokens
         self.total_tokens = self.total_tokens + tokens
 
+    def sum_cached_prompt_tokens(self, tokens: int):
+        self.cached_prompt_tokens = self.cached_prompt_tokens + tokens
+
     def sum_successful_requests(self, requests: int):
         self.successful_requests = self.successful_requests + requests
@@ -22,6 +26,7 @@ class TokenProcess:
         return UsageMetrics(
             total_tokens=self.total_tokens,
             prompt_tokens=self.prompt_tokens,
+            cached_prompt_tokens=self.cached_prompt_tokens,
             completion_tokens=self.completion_tokens,
             successful_requests=self.successful_requests,
         )

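The counters above accumulate independently; per the "do not include cached on total" commit, cached prompt tokens are tracked but not folded into `total_tokens`. A minimal sketch of the accounting (the summary method's name, `get_summary`, is not visible in the hunk and is this example's assumption):

```python
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess

process = TokenProcess()
process.sum_prompt_tokens(175)        # counted in prompt_tokens and total_tokens
process.sum_cached_prompt_tokens(64)  # tracked separately, NOT added to total_tokens
process.sum_completion_tokens(21)
process.sum_successful_requests(1)

metrics = process.get_summary()       # returns a UsageMetrics instance
print(metrics.total_tokens)           # 196
print(metrics.cached_prompt_tokens)   # 64
```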
View File

@@ -34,7 +34,9 @@ class AuthenticationCommand:
             "scope": "openid",
             "audience": AUTH0_AUDIENCE,
         }
-        response = requests.post(url=self.DEVICE_CODE_URL, data=device_code_payload)
+        response = requests.post(
+            url=self.DEVICE_CODE_URL, data=device_code_payload, timeout=20
+        )
         response.raise_for_status()
         return response.json()
@@ -54,7 +56,7 @@ class AuthenticationCommand:
         attempts = 0
         while True and attempts < 5:
-            response = requests.post(self.TOKEN_URL, data=token_payload)
+            response = requests.post(self.TOKEN_URL, data=token_payload, timeout=30)
             token_data = response.json()
             if response.status_code == 200:

View File

@@ -27,6 +27,7 @@ from crewai.llm import LLM
 from crewai.memory.entity.entity_memory import EntityMemory
 from crewai.memory.long_term.long_term_memory import LongTermMemory
 from crewai.memory.short_term.short_term_memory import ShortTermMemory
+from crewai.memory.user.user_memory import UserMemory
 from crewai.process import Process
 from crewai.task import Task
 from crewai.tasks.conditional_task import ConditionalTask
@@ -71,6 +72,7 @@ class Crew(BaseModel):
         manager_llm: The language model that will run manager agent.
         manager_agent: Custom agent that will be used as manager.
         memory: Whether the crew should use memory to store memories of it's execution.
+        memory_config: Configuration for the memory to be used for the crew.
         cache: Whether the crew should use a cache to store the results of the tools execution.
         function_calling_llm: The language model that will run the tool calling for all the agents.
         process: The process flow that the crew will follow (e.g., sequential, hierarchical).
@@ -94,6 +96,7 @@ class Crew(BaseModel):
     _short_term_memory: Optional[InstanceOf[ShortTermMemory]] = PrivateAttr()
     _long_term_memory: Optional[InstanceOf[LongTermMemory]] = PrivateAttr()
     _entity_memory: Optional[InstanceOf[EntityMemory]] = PrivateAttr()
+    _user_memory: Optional[InstanceOf[UserMemory]] = PrivateAttr()
     _train: Optional[bool] = PrivateAttr(default=False)
     _train_iteration: Optional[int] = PrivateAttr()
     _inputs: Optional[Dict[str, Any]] = PrivateAttr(default=None)
@@ -114,6 +117,10 @@ class Crew(BaseModel):
         default=False,
         description="Whether the crew should use memory to store memories of it's execution",
     )
+    memory_config: Optional[Dict[str, Any]] = Field(
+        default=None,
+        description="Configuration for the memory to be used for the crew.",
+    )
     short_term_memory: Optional[InstanceOf[ShortTermMemory]] = Field(
         default=None,
         description="An Instance of the ShortTermMemory to be used by the Crew",
@@ -126,7 +133,11 @@ class Crew(BaseModel):
         default=None,
         description="An Instance of the EntityMemory to be used by the Crew",
     )
-    embedder: Optional[Any] = Field(
+    user_memory: Optional[InstanceOf[UserMemory]] = Field(
+        default=None,
+        description="An instance of the UserMemory to be used by the Crew to store/fetch memories of a specific user.",
+    )
+    embedder: Optional[dict] = Field(
         default=None,
         description="Configuration for the embedder to be used for the crew.",
     )
@@ -238,13 +249,22 @@ class Crew(BaseModel):
             self._short_term_memory = (
                 self.short_term_memory
                 if self.short_term_memory
-                else ShortTermMemory(crew=self, embedder_config=self.embedder)
+                else ShortTermMemory(
+                    crew=self,
+                    embedder_config=self.embedder,
+                )
             )
             self._entity_memory = (
                 self.entity_memory
                 if self.entity_memory
                 else EntityMemory(crew=self, embedder_config=self.embedder)
             )
+            if hasattr(self, "memory_config") and self.memory_config is not None:
+                self._user_memory = (
+                    self.user_memory if self.user_memory else UserMemory(crew=self)
+                )
+            else:
+                self._user_memory = None
         return self
 
     @model_validator(mode="after")

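The validator above wires user memory only when a `memory_config` is present. A minimal sketch of the two paths (agents and tasks elided with `[...]`, matching the docs example's placeholder style):

```python
from crewai import Crew

# With a mem0 provider, the validator builds a UserMemory backed by Mem0Storage.
crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True,
    memory_config={"provider": "mem0", "config": {"user_id": "john"}},
)

# Without memory_config, _user_memory is left as None and only the
# short-term, long-term, and entity memories are created.
plain_crew = Crew(agents=[...], tasks=[...], memory=True)
```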
View File

@@ -1,5 +1,6 @@
 from .entity.entity_memory import EntityMemory
 from .long_term.long_term_memory import LongTermMemory
 from .short_term.short_term_memory import ShortTermMemory
+from .user.user_memory import UserMemory
 
-__all__ = ["EntityMemory", "LongTermMemory", "ShortTermMemory"]
+__all__ = ["UserMemory", "EntityMemory", "LongTermMemory", "ShortTermMemory"]

View File

@@ -1,13 +1,25 @@
-from typing import Optional
+from typing import Optional, Dict, Any
 
-from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory
+from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory, UserMemory
 
 
 class ContextualMemory:
-    def __init__(self, stm: ShortTermMemory, ltm: LongTermMemory, em: EntityMemory):
+    def __init__(
+        self,
+        memory_config: Optional[Dict[str, Any]],
+        stm: ShortTermMemory,
+        ltm: LongTermMemory,
+        em: EntityMemory,
+        um: UserMemory,
+    ):
+        if memory_config is not None:
+            self.memory_provider = memory_config.get("provider")
+        else:
+            self.memory_provider = None
         self.stm = stm
         self.ltm = ltm
         self.em = em
+        self.um = um
 
     def build_context_for_task(self, task, context) -> str:
         """
@@ -23,6 +35,8 @@ class ContextualMemory:
         context.append(self._fetch_ltm_context(task.description))
         context.append(self._fetch_stm_context(query))
         context.append(self._fetch_entity_context(query))
+        if self.memory_provider == "mem0":
+            context.append(self._fetch_user_context(query))
         return "\n".join(filter(None, context))
 
     def _fetch_stm_context(self, query) -> str:
@@ -32,7 +46,10 @@ class ContextualMemory:
         """
         stm_results = self.stm.search(query)
         formatted_results = "\n".join(
-            [f"- {result['context']}" for result in stm_results]
+            [
+                f"- {result['memory'] if self.memory_provider == 'mem0' else result['context']}"
+                for result in stm_results
+            ]
         )
         return f"Recent Insights:\n{formatted_results}" if stm_results else ""
@@ -62,6 +79,26 @@ class ContextualMemory:
         """
         em_results = self.em.search(query)
         formatted_results = "\n".join(
-            [f"- {result['context']}" for result in em_results]  # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
+            [
+                f"- {result['memory'] if self.memory_provider == 'mem0' else result['context']}"
+                for result in em_results
+            ]  # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
         )
         return f"Entities:\n{formatted_results}" if em_results else ""
 
+    def _fetch_user_context(self, query: str) -> str:
+        """
+        Fetches and formats relevant user information from User Memory.
+
+        Args:
+            query (str): The search query to find relevant user memories.
+
+        Returns:
+            str: Formatted user memories as bullet points, or an empty string if none found.
+        """
+        user_memories = self.um.search(query)
+        if not user_memories:
+            return ""
+
+        formatted_memories = "\n".join(
+            f"- {result['memory']}" for result in user_memories
+        )
+        return f"User memories/preferences:\n{formatted_memories}"

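Tying this to the `Agent` hunk earlier: the caller now passes the crew's `memory_config` first and the user memory last, and `_fetch_user_context` only runs when the provider is `mem0`. A sketch, assuming a configured `crew` and a `task` are already in scope:

```python
from crewai.memory.contextual.contextual_memory import ContextualMemory

contextual = ContextualMemory(
    crew.memory_config,       # may be None; only its "provider" key is read
    crew._short_term_memory,
    crew._long_term_memory,
    crew._entity_memory,
    crew._user_memory,        # queried only when provider == "mem0"
)
context = contextual.build_context_for_task(task, context="")
```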
View File

@@ -11,21 +11,43 @@ class EntityMemory(Memory):
     """
 
     def __init__(self, crew=None, embedder_config=None, storage=None):
-        storage = (
-            storage
-            if storage
-            else RAGStorage(
-                type="entities",
-                allow_reset=True,
-                embedder_config=embedder_config,
-                crew=crew,
-            )
-        )
+        if hasattr(crew, "memory_config") and crew.memory_config is not None:
+            self.memory_provider = crew.memory_config.get("provider")
+        else:
+            self.memory_provider = None
+
+        if self.memory_provider == "mem0":
+            try:
+                from crewai.memory.storage.mem0_storage import Mem0Storage
+            except ImportError:
+                raise ImportError(
+                    "Mem0 is not installed. Please install it with `pip install mem0ai`."
+                )
+            storage = Mem0Storage(type="entities", crew=crew)
+        else:
+            storage = (
+                storage
+                if storage
+                else RAGStorage(
+                    type="entities",
+                    allow_reset=True,
+                    embedder_config=embedder_config,
+                    crew=crew,
+                )
+            )
         super().__init__(storage)
 
     def save(self, item: EntityMemoryItem) -> None:  # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"
         """Saves an entity item into the SQLite storage."""
-        data = f"{item.name}({item.type}): {item.description}"
+        if self.memory_provider == "mem0":
+            data = f"""
+            Remember details about the following entity:
+            Name: {item.name}
+            Type: {item.type}
+            Entity Description: {item.description}
+            """
+        else:
+            data = f"{item.name}({item.type}): {item.description}"
         super().save(data, item.metadata)
 
     def reset(self) -> None:

View File

@@ -23,5 +23,12 @@ class Memory:
         self.storage.save(value, metadata)
 
-    def search(self, query: str) -> List[Dict[str, Any]]:
-        return self.storage.search(query)
+    def search(
+        self,
+        query: str,
+        limit: int = 3,
+        score_threshold: float = 0.35,
+    ) -> List[Any]:
+        return self.storage.search(
+            query=query, limit=limit, score_threshold=score_threshold
+        )

View File

@@ -14,13 +14,27 @@ class ShortTermMemory(Memory):
     """
 
     def __init__(self, crew=None, embedder_config=None, storage=None):
-        storage = (
-            storage
-            if storage
-            else RAGStorage(
-                type="short_term", embedder_config=embedder_config, crew=crew
-            )
-        )
+        if hasattr(crew, "memory_config") and crew.memory_config is not None:
+            self.memory_provider = crew.memory_config.get("provider")
+        else:
+            self.memory_provider = None
+
+        if self.memory_provider == "mem0":
+            try:
+                from crewai.memory.storage.mem0_storage import Mem0Storage
+            except ImportError:
+                raise ImportError(
+                    "Mem0 is not installed. Please install it with `pip install mem0ai`."
+                )
+            storage = Mem0Storage(type="short_term", crew=crew)
+        else:
+            storage = (
+                storage
+                if storage
+                else RAGStorage(
+                    type="short_term", embedder_config=embedder_config, crew=crew
+                )
+            )
         super().__init__(storage)
 
     def save(
@@ -30,11 +44,20 @@ class ShortTermMemory(Memory):
         agent: Optional[str] = None,
     ) -> None:
         item = ShortTermMemoryItem(data=value, metadata=metadata, agent=agent)
+        if self.memory_provider == "mem0":
+            item.data = f"Remember the following insights from Agent run: {item.data}"
+
         super().save(value=item.data, metadata=item.metadata, agent=item.agent)
 
-    def search(self, query: str, score_threshold: float = 0.35):
-        return self.storage.search(query=query, score_threshold=score_threshold)  # type: ignore # BUG? The reference is to the parent class, but the parent class does not have this parameters
+    def search(
+        self,
+        query: str,
+        limit: int = 3,
+        score_threshold: float = 0.35,
+    ):
+        return self.storage.search(
+            query=query, limit=limit, score_threshold=score_threshold
+        )  # type: ignore # BUG? The reference is to the parent class, but the parent class does not have this parameters
 
     def reset(self) -> None:
         try:

View File

@@ -7,8 +7,10 @@ class Storage:
     def save(self, value: Any, metadata: Dict[str, Any]) -> None:
         pass
 
-    def search(self, key: str) -> List[Dict[str, Any]]:  # type: ignore
-        pass
+    def search(
+        self, query: str, limit: int, score_threshold: float
+    ) -> Dict[str, Any] | List[Any]:
+        return {}
 
     def reset(self) -> None:
         pass

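Because `Storage.search` now takes `limit` and `score_threshold`, custom backends must accept them. A purely illustrative in-memory backend sketch (not part of the change itself):

```python
from typing import Any, Dict, List

from crewai.memory.storage.interface import Storage


class ListStorage(Storage):
    """Toy backend demonstrating the widened search signature."""

    def __init__(self):
        self.items: List[Dict[str, Any]] = []

    def save(self, value: Any, metadata: Dict[str, Any]) -> None:
        self.items.append({"context": value, "metadata": metadata})

    def search(self, query: str, limit: int, score_threshold: float) -> List[Any]:
        # Naive substring match; a real backend would score results and
        # drop those below score_threshold.
        return [i for i in self.items if query in str(i["context"])][:limit]
```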
View File

@@ -103,7 +103,7 @@ class KickoffTaskOutputsSQLiteStorage:
                     else value
                 )
 
-                query = f"UPDATE latest_kickoff_task_outputs SET {', '.join(fields)} WHERE task_index = ?"
+                query = f"UPDATE latest_kickoff_task_outputs SET {', '.join(fields)} WHERE task_index = ?"  # nosec
                 values.append(task_index)
                 cursor.execute(query, tuple(values))

View File

@@ -83,7 +83,7 @@ class LTMSQLiteStorage:
                 WHERE task_description = ?
                 ORDER BY datetime DESC, score ASC
                 LIMIT {latest_n}
-            """,
+            """,  # nosec
                 (task_description,),
             )
             rows = cursor.fetchall()

View File

@@ -0,0 +1,104 @@
+import os
+from typing import Any, Dict, List
+
+from mem0 import MemoryClient
+
+from crewai.memory.storage.interface import Storage
+
+
+class Mem0Storage(Storage):
+    """
+    Extends Storage to handle embedding and searching across entities using Mem0.
+    """
+
+    def __init__(self, type, crew=None):
+        super().__init__()
+
+        if type not in ["user", "short_term", "long_term", "entities"]:
+            raise ValueError("Invalid type for Mem0Storage. Must be 'user' or 'agent'.")
+
+        self.memory_type = type
+        self.crew = crew
+        self.memory_config = crew.memory_config
+
+        # User ID is required for user memory type "user" since it's used as a unique identifier for the user.
+        user_id = self._get_user_id()
+        if type == "user" and not user_id:
+            raise ValueError("User ID is required for user memory type")
+
+        # API key in memory config overrides the environment variable
+        mem0_api_key = self.memory_config.get("config", {}).get("api_key") or os.getenv(
+            "MEM0_API_KEY"
+        )
+        self.memory = MemoryClient(api_key=mem0_api_key)
+
+    def _sanitize_role(self, role: str) -> str:
+        """
+        Sanitizes agent roles to ensure valid directory names.
+        """
+        return role.replace("\n", "").replace(" ", "_").replace("/", "_")
+
+    def save(self, value: Any, metadata: Dict[str, Any]) -> None:
+        user_id = self._get_user_id()
+        agent_name = self._get_agent_name()
+        if self.memory_type == "user":
+            self.memory.add(value, user_id=user_id, metadata={**metadata})
+        elif self.memory_type == "short_term":
+            agent_name = self._get_agent_name()
+            self.memory.add(
+                value, agent_id=agent_name, metadata={"type": "short_term", **metadata}
+            )
+        elif self.memory_type == "long_term":
+            agent_name = self._get_agent_name()
+            self.memory.add(
+                value,
+                agent_id=agent_name,
+                infer=False,
+                metadata={"type": "long_term", **metadata},
+            )
+        elif self.memory_type == "entities":
+            entity_name = None
+            self.memory.add(
+                value, user_id=entity_name, metadata={"type": "entity", **metadata}
+            )
+
+    def search(
+        self,
+        query: str,
+        limit: int = 3,
+        score_threshold: float = 0.35,
+    ) -> List[Any]:
+        params = {"query": query, "limit": limit}
+        if self.memory_type == "user":
+            user_id = self._get_user_id()
+            params["user_id"] = user_id
+        elif self.memory_type == "short_term":
+            agent_name = self._get_agent_name()
+            params["agent_id"] = agent_name
+            params["metadata"] = {"type": "short_term"}
+        elif self.memory_type == "long_term":
+            agent_name = self._get_agent_name()
+            params["agent_id"] = agent_name
+            params["metadata"] = {"type": "long_term"}
+        elif self.memory_type == "entities":
+            agent_name = self._get_agent_name()
+            params["agent_id"] = agent_name
+            params["metadata"] = {"type": "entity"}
+
+        # Discard the filters for now since we create the filters
+        # automatically when the crew is created.
+        results = self.memory.search(**params)
+        return [r for r in results if r["score"] >= score_threshold]
+
+    def _get_user_id(self):
+        if self.memory_type == "user":
+            if hasattr(self, "memory_config") and self.memory_config is not None:
+                return self.memory_config.get("config", {}).get("user_id")
+            else:
+                return None
+        return None
+
+    def _get_agent_name(self):
+        agents = self.crew.agents if self.crew else []
+        agents = [self._sanitize_role(agent.role) for agent in agents]
+        agents = "_".join(agents)
+        return agents

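In practice this storage is reached through the memory classes rather than constructed directly. A sketch, assuming a `crew` whose `memory_config` names the `mem0` provider with a `user_id` (the sample values are illustrative):

```python
from crewai.memory.user.user_memory import UserMemory

# crew.memory_config supplies the user_id and, optionally, the Mem0 api_key.
user_memory = UserMemory(crew=crew)

user_memory.save("Prefers Airbnb over hotels", metadata={"topic": "travel"})
hits = user_memory.search("accommodation preferences", limit=3, score_threshold=0.35)
for hit in hits:
    print(hit["memory"])  # mem0 results expose a "memory" key
```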
View File

@@ -51,8 +51,6 @@ class RAGStorage(BaseRAGStorage):
         self._initialize_app()
 
     def _set_embedder_config(self):
-        import chromadb.utils.embedding_functions as embedding_functions
-
         if self.embedder_config is None:
             self.embedder_config = self._create_default_embedding_function()
@@ -61,12 +59,20 @@ class RAGStorage(BaseRAGStorage):
             config = self.embedder_config.get("config", {})
             model_name = config.get("model")
             if provider == "openai":
-                self.embedder_config = embedding_functions.OpenAIEmbeddingFunction(
+                from chromadb.utils.embedding_functions.openai_embedding_function import (
+                    OpenAIEmbeddingFunction,
+                )
+
+                self.embedder_config = OpenAIEmbeddingFunction(
                     api_key=config.get("api_key") or os.getenv("OPENAI_API_KEY"),
                     model_name=model_name,
                 )
             elif provider == "azure":
-                self.embedder_config = embedding_functions.OpenAIEmbeddingFunction(
+                from chromadb.utils.embedding_functions.openai_embedding_function import (
+                    OpenAIEmbeddingFunction,
+                )
+
+                self.embedder_config = OpenAIEmbeddingFunction(
                     api_key=config.get("api_key"),
                     api_base=config.get("api_base"),
                     api_type=config.get("api_type", "azure"),
@@ -74,45 +80,55 @@ class RAGStorage(BaseRAGStorage):
                     model_name=model_name,
                 )
             elif provider == "ollama":
-                from openai import OpenAI
-
-                class OllamaEmbeddingFunction(EmbeddingFunction):
-                    def __call__(self, input: Documents) -> Embeddings:
-                        client = OpenAI(
-                            base_url="http://localhost:11434/v1",
-                            api_key=config.get("api_key", "ollama"),
-                        )
-                        try:
-                            response = client.embeddings.create(
-                                input=input, model=model_name
-                            )
-                            embeddings = [item.embedding for item in response.data]
-                            return cast(Embeddings, embeddings)
-                        except Exception as e:
-                            raise e
-
-                self.embedder_config = OllamaEmbeddingFunction()
+                from chromadb.utils.embedding_functions.ollama_embedding_function import (
+                    OllamaEmbeddingFunction,
+                )
+
+                self.embedder_config = OllamaEmbeddingFunction(
+                    url=config.get("url", "http://localhost:11434/api/embeddings"),
+                    model_name=model_name,
+                )
             elif provider == "vertexai":
-                self.embedder_config = (
-                    embedding_functions.GoogleVertexEmbeddingFunction(
-                        model_name=model_name,
-                        api_key=config.get("api_key"),
-                    )
-                )
-            elif provider == "google":
-                self.embedder_config = (
-                    embedding_functions.GoogleGenerativeAiEmbeddingFunction(
-                        model_name=model_name,
-                        api_key=config.get("api_key"),
-                    )
-                )
-            elif provider == "cohere":
-                self.embedder_config = embedding_functions.CohereEmbeddingFunction(
-                    model_name=model_name,
-                    api_key=config.get("api_key"),
-                )
+                from chromadb.utils.embedding_functions.google_embedding_function import (
+                    GoogleVertexEmbeddingFunction,
+                )
+
+                self.embedder_config = GoogleVertexEmbeddingFunction(
+                    model_name=model_name,
+                    api_key=config.get("api_key"),
+                )
+            elif provider == "google":
+                from chromadb.utils.embedding_functions.google_embedding_function import (
+                    GoogleGenerativeAiEmbeddingFunction,
+                )
+
+                self.embedder_config = GoogleGenerativeAiEmbeddingFunction(
+                    model_name=model_name,
+                    api_key=config.get("api_key"),
+                )
+            elif provider == "cohere":
+                from chromadb.utils.embedding_functions.cohere_embedding_function import (
+                    CohereEmbeddingFunction,
+                )
+
+                self.embedder_config = CohereEmbeddingFunction(
+                    model_name=model_name,
+                    api_key=config.get("api_key"),
+                )
+            elif provider == "bedrock":
+                from chromadb.utils.embedding_functions.amazon_bedrock_embedding_function import (
+                    AmazonBedrockEmbeddingFunction,
+                )
+
+                self.embedder_config = AmazonBedrockEmbeddingFunction(
+                    session=config.get("session"),
+                )
             elif provider == "huggingface":
-                self.embedder_config = embedding_functions.HuggingFaceEmbeddingServer(
+                from chromadb.utils.embedding_functions.huggingface_embedding_function import (
+                    HuggingFaceEmbeddingServer,
+                )
+
+                self.embedder_config = HuggingFaceEmbeddingServer(
                     url=config.get("api_url"),
                 )
             elif provider == "watson":
@@ -253,8 +269,10 @@ class RAGStorage(BaseRAGStorage):
         )
 
     def _create_default_embedding_function(self):
-        import chromadb.utils.embedding_functions as embedding_functions
+        from chromadb.utils.embedding_functions.openai_embedding_function import (
+            OpenAIEmbeddingFunction,
+        )
 
-        return embedding_functions.OpenAIEmbeddingFunction(
+        return OpenAIEmbeddingFunction(
             api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"
         )

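The refactor keeps the `embedder` contract unchanged: a dict with `provider` and `config` keys selects one of the now lazily imported Chroma embedding functions. A sketch of the newly added `bedrock` branch (the boto3 session and region are assumptions of this example):

```python
import boto3

from crewai import Crew

crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True,
    embedder={
        "provider": "bedrock",
        # Passed through to chromadb's AmazonBedrockEmbeddingFunction.
        "config": {"session": boto3.Session(region_name="us-east-1")},
    },
)
```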
View File

View File

@@ -0,0 +1,45 @@
+from typing import Any, Dict, Optional
+
+from crewai.memory.memory import Memory
+
+
+class UserMemory(Memory):
+    """
+    UserMemory class for handling user memory storage and retrieval.
+    Inherits from the Memory class and utilizes an instance of a class that
+    adheres to the Storage for data storage, specifically working with
+    MemoryItem instances.
+    """
+
+    def __init__(self, crew=None):
+        try:
+            from crewai.memory.storage.mem0_storage import Mem0Storage
+        except ImportError:
+            raise ImportError(
+                "Mem0 is not installed. Please install it with `pip install mem0ai`."
+            )
+        storage = Mem0Storage(type="user", crew=crew)
+        super().__init__(storage)
+
+    def save(
+        self,
+        value,
+        metadata: Optional[Dict[str, Any]] = None,
+        agent: Optional[str] = None,
+    ) -> None:
+        # TODO: Change this function since we want to take care of the case where we save memories for the user
+        data = f"Remember the details about the user: {value}"
+        super().save(data, metadata)
+
+    def search(
+        self,
+        query: str,
+        limit: int = 3,
+        score_threshold: float = 0.35,
+    ):
+        results = super().search(
+            query=query,
+            limit=limit,
+            score_threshold=score_threshold,
+        )
+        return results

View File

@@ -0,0 +1,8 @@
+from typing import Any, Dict, Optional
+
+
+class UserMemoryItem:
+    def __init__(self, data: Any, user: str, metadata: Optional[Dict[str, Any]] = None):
+        self.data = data
+        self.user = user
+        self.metadata = metadata if metadata is not None else {}

View File

@@ -8,6 +8,7 @@ class UsageMetrics(BaseModel):
     Attributes:
         total_tokens: Total number of tokens used.
         prompt_tokens: Number of tokens used in prompts.
+        cached_prompt_tokens: Number of cached prompt tokens used.
         completion_tokens: Number of tokens used in completions.
         successful_requests: Number of successful requests made.
     """
@@ -16,6 +17,9 @@ class UsageMetrics(BaseModel):
     prompt_tokens: int = Field(
         default=0, description="Number of tokens used in prompts."
     )
+    cached_prompt_tokens: int = Field(
+        default=0, description="Number of cached prompt tokens used."
+    )
     completion_tokens: int = Field(
         default=0, description="Number of tokens used in completions."
     )
@@ -32,5 +36,6 @@ class UsageMetrics(BaseModel):
         """
         self.total_tokens += usage_metrics.total_tokens
         self.prompt_tokens += usage_metrics.prompt_tokens
+        self.cached_prompt_tokens += usage_metrics.cached_prompt_tokens
         self.completion_tokens += usage_metrics.completion_tokens
         self.successful_requests += usage_metrics.successful_requests

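The aggregation shown in the last hunk sums each counter, including the new field. A quick sketch (the enclosing method's name, `add_usage_metrics`, is not visible in the hunk and is assumed here from the class):

```python
from crewai.types.usage_metrics import UsageMetrics

totals = UsageMetrics()
totals.add_usage_metrics(
    UsageMetrics(
        total_tokens=196,
        prompt_tokens=175,
        cached_prompt_tokens=64,
        completion_tokens=21,
        successful_requests=1,
    )
)
print(totals.cached_prompt_tokens)  # 64
```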
View File

@@ -16,7 +16,11 @@ class FileHandler:
     def log(self, **kwargs):
         now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
-        message = f"{now}: " + ", ".join([f"{key}=\"{value}\"" for key, value in kwargs.items()]) + "\n"
+        message = (
+            f"{now}: "
+            + ", ".join([f'{key}="{value}"' for key, value in kwargs.items()])
+            + "\n"
+        )
         with open(self._path, "a", encoding="utf-8") as file:
             file.write(message + "\n")
@@ -63,7 +67,7 @@ class PickleHandler:
         with open(self.file_path, "rb") as file:
             try:
-                return pickle.load(file)
+                return pickle.load(file)  # nosec
             except EOFError:
                 return {}  # Return an empty dictionary if the file is empty or corrupted
             except Exception:

View File

@@ -1,5 +1,5 @@
 from litellm.integrations.custom_logger import CustomLogger
+from litellm.types.utils import Usage
 
 from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
@@ -11,8 +11,11 @@ class TokenCalcHandler(CustomLogger):
         if self.token_cost_process is None:
             return
 
+        usage: Usage = response_obj["usage"]
         self.token_cost_process.sum_successful_requests(1)
-        self.token_cost_process.sum_prompt_tokens(response_obj["usage"].prompt_tokens)
-        self.token_cost_process.sum_completion_tokens(
-            response_obj["usage"].completion_tokens
-        )
+        self.token_cost_process.sum_prompt_tokens(usage.prompt_tokens)
+        self.token_cost_process.sum_completion_tokens(usage.completion_tokens)
+        if usage.prompt_tokens_details:
+            self.token_cost_process.sum_cached_prompt_tokens(
+                usage.prompt_tokens_details.cached_tokens
+            )

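The handler now reads cached tokens off litellm's typed `Usage` object instead of indexing raw dicts. A sketch of the shape it consumes, assuming litellm's `Usage` and `PromptTokensDetails` models (constructing them directly like this is illustrative; litellm normally builds them from the provider response):

```python
from litellm.types.utils import PromptTokensDetails, Usage

usage = Usage(
    prompt_tokens=175,
    completion_tokens=21,
    total_tokens=196,
    prompt_tokens_details=PromptTokensDetails(cached_tokens=64),
)

if usage.prompt_tokens_details:
    # Mirrors the handler: this value is forwarded to
    # TokenProcess.sum_cached_prompt_tokens.
    print(usage.prompt_tokens_details.cached_tokens)  # 64
```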
View File

@@ -10,7 +10,8 @@ interactions:
criteria for your final answer: 1 bullet point about dog that''s under 15 words.\nyou criteria for your final answer: 1 bullet point about dog that''s under 15 words.\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nBegin! MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
This is VERY important to you, use the tools available and give your best Final This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o"}' Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
["\nObservation:"], "stream": false}'
headers: headers:
accept: accept:
- application/json - application/json
@@ -19,49 +20,50 @@ interactions:
connection: connection:
- keep-alive - keep-alive
content-length: content-length:
- '869' - '919'
content-type: content-type:
- application/json - application/json
cookie:
- __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host: host:
- api.openai.com - api.openai.com
user-agent: user-agent:
- OpenAI/Python 1.47.0 - OpenAI/Python 1.52.1
x-stainless-arch: x-stainless-arch:
- arm64 - x64
x-stainless-async: x-stainless-async:
- 'false' - 'false'
x-stainless-lang: x-stainless-lang:
- python - python
x-stainless-os: x-stainless-os:
- MacOS - Linux
x-stainless-package-version: x-stainless-package-version:
- 1.47.0 - 1.52.1
x-stainless-raw-response: x-stainless-raw-response:
- 'true' - 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime: x-stainless-runtime:
- CPython - CPython
x-stainless-runtime-version: x-stainless-runtime-version:
- 3.11.7 - 3.11.9
method: POST method: POST
uri: https://api.openai.com/v1/chat/completions uri: https://api.openai.com/v1/chat/completions
response: response:
content: "{\n \"id\": \"chatcmpl-AB7auGDrAVE0iXSBBhySZp3xE8gvP\",\n \"object\": body:
\"chat.completion\",\n \"created\": 1727214164,\n \"model\": \"gpt-4o-2024-05-13\",\n string: !!binary |
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": H4sIAAAAAAAAA4xSy27bMBC86ysWPEuB7ciV7VuAIkUObQ+59QFhTa0kttQuS9Jx08D/XkhyLAVJ
\"assistant\",\n \"content\": \"I now can give a great answer\\nFinal gV4EaGdnMLPDpwRAmUrtQOkWo+6czW7u41q2t3+cvCvuPvxafSG+58XHTzXlxWeV9gzZ/yAdn1lX
Answer: Dogs are unparalleled in loyalty and companionship to humans.\",\n \"refusal\": WjpnKRrhEdaeMFKvuiyul3m+uV7lA9BJRbanNS5muWSdYZOtFqs8WxTZcnNmt2I0BbWDrwkAwNPw
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n 7X1yRb/VDhbp86SjELAhtbssASgvtp8oDMGEiBxVOoFaOBIP1u+A5QgaGRrzQIDQ9LYBORzJA3zj
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 175,\n \"completion_tokens\": W8No4Wb438F7aQKgJ7DyiBb6zMhGOKRA3CJrww10xBEttIQ2toBcgTyQR2vhSNZmezLcXM39eKoP
21,\n \"total_tokens\": 196,\n \"completion_tokens_details\": {\n \"reasoning_tokens\": Afub8MHa8/x0CWilcV724Yxf5rVhE9rSEwbhPkyI4tSAnhKA78MhDy9uo5yXzsUyyk/iMHSzHvXU
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n" 1N+Ejo0BqCgR7Yy13aZv6JUVRTQ2zKpQGnVL1USdesNDZWQGJLPUr928pT0mN9z8j/wEaE0uUlU6
T5XRLxNPa5765/2vtcuVB8MqPIZIXVkbbsg7b8bHVbuyXm9xs8xXRa2SU/IXAAD//wMAq2ZCBWoD
AAA=
headers: headers:
CF-Cache-Status: CF-Cache-Status:
- DYNAMIC - DYNAMIC
CF-RAY: CF-RAY:
- 8c85f22ddda01cf3-GRU - 8e19bf36db158761-GRU
Connection: Connection:
- keep-alive - keep-alive
Content-Encoding: Content-Encoding:
@@ -69,19 +71,27 @@ interactions:
Content-Type: Content-Type:
- application/json - application/json
Date: Date:
- Tue, 24 Sep 2024 21:42:44 GMT - Tue, 12 Nov 2024 21:52:04 GMT
Server: Server:
- cloudflare - cloudflare
Set-Cookie:
- __cf_bm=MkvcnvacGpTyn.y0OkFRoFXuAwg4oxjMhViZJTt9mw0-1731448324-1.0.1.1-oekkH_B0xOoPnIFw15LpqFCkZ2cu7VBTJVLDGylan4I67NjX.tlPvOiX9kvtP5Acewi28IE2IwlwtrZWzCH3vw;
path=/; expires=Tue, 12-Nov-24 22:22:04 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=4.17346mfw5npZfYNbCx3Vj1VAVPy.tH0Jm2gkTteJ8-1731448324998-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding: Transfer-Encoding:
- chunked - chunked
X-Content-Type-Options: X-Content-Type-Options:
- nosniff - nosniff
access-control-expose-headers: access-control-expose-headers:
- X-Request-ID - X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization: openai-organization:
- crewai-iuxna1 - user-tqfegqsiobpvvjmn0giaipdq
openai-processing-ms: openai-processing-ms:
- '349' - '601'
openai-version: openai-version:
- '2020-10-01' - '2020-10-01'
strict-transport-security: strict-transport-security:
@@ -89,19 +99,20 @@ interactions:
x-ratelimit-limit-requests: x-ratelimit-limit-requests:
- '10000' - '10000'
x-ratelimit-limit-tokens: x-ratelimit-limit-tokens:
- '30000000' - '200000'
x-ratelimit-remaining-requests: x-ratelimit-remaining-requests:
- '9999' - '9999'
x-ratelimit-remaining-tokens: x-ratelimit-remaining-tokens:
- '29999792' - '199793'
x-ratelimit-reset-requests: x-ratelimit-reset-requests:
- 6ms - 8.64s
x-ratelimit-reset-tokens: x-ratelimit-reset-tokens:
- 0s - 62ms
x-request-id: x-request-id:
- req_4c8cd76fdfba7b65e5ce85397b33c22b - req_77fb166b4e272bfd45c37c08d2b93b0c
http_version: HTTP/1.1 status:
status_code: 200 code: 200
message: OK
- request: - request:
body: '{"messages": [{"role": "system", "content": "You are cat Researcher. You body: '{"messages": [{"role": "system", "content": "You are cat Researcher. You
have a lot of experience with cat.\nYour personal goal is: Express hot takes have a lot of experience with cat.\nYour personal goal is: Express hot takes
@@ -113,7 +124,8 @@ interactions:
criteria for your final answer: 1 bullet point about cat that''s under 15 words.\nyou criteria for your final answer: 1 bullet point about cat that''s under 15 words.\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nBegin! MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
This is VERY important to you, use the tools available and give your best Final This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o"}' Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
["\nObservation:"], "stream": false}'
headers: headers:
accept: accept:
- application/json - application/json
@@ -122,49 +134,53 @@ interactions:
connection: connection:
- keep-alive - keep-alive
content-length: content-length:
- '869' - '919'
content-type: content-type:
- application/json - application/json
cookie: cookie:
- __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g; - __cf_bm=MkvcnvacGpTyn.y0OkFRoFXuAwg4oxjMhViZJTt9mw0-1731448324-1.0.1.1-oekkH_B0xOoPnIFw15LpqFCkZ2cu7VBTJVLDGylan4I67NjX.tlPvOiX9kvtP5Acewi28IE2IwlwtrZWzCH3vw;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000 _cfuvid=4.17346mfw5npZfYNbCx3Vj1VAVPy.tH0Jm2gkTteJ8-1731448324998-0.0.1.1-604800000
host: host:
- api.openai.com - api.openai.com
user-agent: user-agent:
- OpenAI/Python 1.47.0 - OpenAI/Python 1.52.1
x-stainless-arch: x-stainless-arch:
- arm64 - x64
x-stainless-async: x-stainless-async:
- 'false' - 'false'
x-stainless-lang: x-stainless-lang:
- python - python
x-stainless-os: x-stainless-os:
- MacOS - Linux
x-stainless-package-version: x-stainless-package-version:
- 1.47.0 - 1.52.1
x-stainless-raw-response: x-stainless-raw-response:
- 'true' - 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime: x-stainless-runtime:
- CPython - CPython
x-stainless-runtime-version: x-stainless-runtime-version:
- 3.11.7 - 3.11.9
method: POST method: POST
uri: https://api.openai.com/v1/chat/completions uri: https://api.openai.com/v1/chat/completions
response: response:
content: "{\n \"id\": \"chatcmpl-AB7auNbAqjT3rgBX92rhxBLuhaLBj\",\n \"object\": body:
\"chat.completion\",\n \"created\": 1727214164,\n \"model\": \"gpt-4o-2024-05-13\",\n string: !!binary |
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": H4sIAAAAAAAAA4xSy27bMBC86ysWPFuB7MhN6ltQIGmBnlL00BcEmlxJ21JLhlzFLQL/eyH5IRlt
\"assistant\",\n \"content\": \"Thought: I now can give a great answer\\nFinal gV4EaGZnMLPLlwxAkVUbUKbVYrrg8rsPsg4P+Orxs9XvPz0U8eP966dS6sdo3wa1GBR++x2NnFRX
Answer: Cats are highly independent, agile, and intuitive creatures beloved xnfBoZDnA20iasHBdXlzvSzL2+vVeiQ6b9ENsiZIXvq8I6Z8VazKvLjJl7dHdevJYFIb+JIBALyM
by millions worldwide.\",\n \"refusal\": null\n },\n \"logprobs\": 3yEnW/ypNlAsTkiHKekG1eY8BKCidwOidEqURLOoxUQaz4I8Rn8H7HdgNENDzwgamiE2aE47jABf
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": +Z5YO7gb/zfwRksCHRGGGAHZIg/D1GmXFiBtpGfiBjyDtEgR/I5BMHYJNFvomZ56hIAxedaOhDBd
175,\n \"completion_tokens\": 28,\n \"total_tokens\": 203,\n \"completion_tokens_details\": zYNFrPukh+Vw79wR35+bOt+E6LfpyJ/xmphSW0XUyfPQKokPamT3GcC3caP9xZJUiL4LUon/gZzG
{\n \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n" I60Pfmo65MSuTqR40W6GF8c7XPpVFkWTS7ObKKNNi3aSTgfUvSU/I7JZ6z/T/M370Jy4+R/7iTAG
g6CtQkRL5rLxNBZxeOf/GjtveQys0q8k2FU1cYMxRDq8sjpUxVYXdrkq66XK9tlvAAAA//8DAIjK
KzJzAwAA
headers: headers:
CF-Cache-Status: CF-Cache-Status:
- DYNAMIC - DYNAMIC
CF-RAY: CF-RAY:
- 8c85f2321c1c1cf3-GRU - 8e19bf3fae118761-GRU
Connection: Connection:
- keep-alive - keep-alive
Content-Encoding: Content-Encoding:
@@ -172,7 +188,7 @@ interactions:
Content-Type: Content-Type:
- application/json - application/json
Date: Date:
- Tue, 24 Sep 2024 21:42:45 GMT - Tue, 12 Nov 2024 21:52:05 GMT
Server: Server:
- cloudflare - cloudflare
Transfer-Encoding: Transfer-Encoding:
@@ -181,10 +197,12 @@ interactions:
- nosniff - nosniff
access-control-expose-headers: access-control-expose-headers:
- X-Request-ID - X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization: openai-organization:
- crewai-iuxna1 - user-tqfegqsiobpvvjmn0giaipdq
openai-processing-ms: openai-processing-ms:
- '430' - '464'
openai-version: openai-version:
- '2020-10-01' - '2020-10-01'
strict-transport-security: strict-transport-security:
@@ -192,19 +210,20 @@ interactions:
x-ratelimit-limit-requests: x-ratelimit-limit-requests:
- '10000' - '10000'
x-ratelimit-limit-tokens: x-ratelimit-limit-tokens:
- '30000000' - '200000'
x-ratelimit-remaining-requests: x-ratelimit-remaining-requests:
- '9999' - '9998'
x-ratelimit-remaining-tokens: x-ratelimit-remaining-tokens:
- '29999792' - '199792'
 x-ratelimit-reset-requests:
-- 6ms
+- 16.369s
 x-ratelimit-reset-tokens:
-- 0s
+- 62ms
 x-request-id:
-- req_ace859b7d9e83d9fa7753ce23bb03716
-http_version: HTTP/1.1
-status_code: 200
+- req_91706b23d0ef23458ba63ec18304cd28
+status:
+code: 200
+message: OK
 - request:
 body: '{"messages": [{"role": "system", "content": "You are apple Researcher.
 You have a lot of experience with apple.\nYour personal goal is: Express hot
@@ -217,7 +236,7 @@ interactions:
 under 15 words.\nyou MUST return the actual complete content as the final answer,
 not a summary.\n\nBegin! This is VERY important to you, use the tools available
 and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
-"gpt-4o"}'
+"gpt-4o-mini", "stop": ["\nObservation:"], "stream": false}'
 headers:
 accept:
 - application/json
@@ -226,49 +245,53 @@ interactions:
 connection:
 - keep-alive
 content-length:
-- '879'
+- '929'
 content-type:
 - application/json
 cookie:
-- __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
-_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
+- __cf_bm=MkvcnvacGpTyn.y0OkFRoFXuAwg4oxjMhViZJTt9mw0-1731448324-1.0.1.1-oekkH_B0xOoPnIFw15LpqFCkZ2cu7VBTJVLDGylan4I67NjX.tlPvOiX9kvtP5Acewi28IE2IwlwtrZWzCH3vw;
+_cfuvid=4.17346mfw5npZfYNbCx3Vj1VAVPy.tH0Jm2gkTteJ8-1731448324998-0.0.1.1-604800000
 host:
 - api.openai.com
 user-agent:
-- OpenAI/Python 1.47.0
+- OpenAI/Python 1.52.1
 x-stainless-arch:
-- arm64
+- x64
 x-stainless-async:
 - 'false'
 x-stainless-lang:
 - python
 x-stainless-os:
-- MacOS
+- Linux
 x-stainless-package-version:
-- 1.47.0
+- 1.52.1
 x-stainless-raw-response:
 - 'true'
+x-stainless-retry-count:
+- '0'
 x-stainless-runtime:
 - CPython
 x-stainless-runtime-version:
-- 3.11.7
+- 3.11.9
 method: POST
 uri: https://api.openai.com/v1/chat/completions
 response:
-content: "{\n \"id\": \"chatcmpl-AB7avZ0yqY18ukQS7SnLkZydsx72b\",\n \"object\":
-\"chat.completion\",\n \"created\": 1727214165,\n \"model\": \"gpt-4o-2024-05-13\",\n
-\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
-\"assistant\",\n \"content\": \"I now can give a great answer.\\n\\nFinal
-Answer: Apples are incredibly versatile, nutritious, and a staple in diets globally.\",\n
-\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
-\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 175,\n \"completion_tokens\":
-25,\n \"total_tokens\": 200,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
-0\n }\n },\n \"system_fingerprint\": \"fp_a5d11b2ef2\"\n}\n"
+body:
+string: !!binary |
+H4sIAAAAAAAAA4xSPW/bMBDd9SsOXLpIgeTITarNS4t26JJubSHQ5IliSh1ZHv0RBP7vhSTHctAU
+6CJQ7909vHd3zxmAsFo0IFQvkxqCKzYPaf1b2/hhW+8PV9N9Kh9W5Zdhjebr4zeRjx1++4gqvXTd
+KD8Eh8l6mmkVUSYcVau721XV/f3tqpoI42vjUMMYd2OYYJSDQ+b3lANSL0lZMjDijIDHgNEiKby5
+dhqx27Ecp0U75874+RLd+TNE3/KZv+CdJct921wj9UxY8k5M7CkD+DmNePdqahGiH0Jqk/+F1Ke9
+rWc9sWx2YeddAojkk3RXeFnlb+i1GpO0jq+WJJRUPeqlddmo3Gnrr4jsKvXfbt7SnpNbMv8jvxBK
+YUioW43Utq8TL2URx8P/V9llypNhMZ9J21kyGEO081l1oS23stTVqu4qkZ2yPwAAAP//AwD7aRSQ
+hAMAAA==
 headers:
 CF-Cache-Status:
 - DYNAMIC
 CF-RAY:
-- 8c85f2369a761cf3-GRU
+- 8e19bf447ba48761-GRU
 Connection:
 - keep-alive
 Content-Encoding:
@@ -276,7 +299,7 @@ interactions:
 Content-Type:
 - application/json
 Date:
-- Tue, 24 Sep 2024 21:42:46 GMT
+- Tue, 12 Nov 2024 21:52:06 GMT
 Server:
 - cloudflare
 Transfer-Encoding:
@@ -285,10 +308,12 @@ interactions:
 - nosniff
 access-control-expose-headers:
 - X-Request-ID
+alt-svc:
+- h3=":443"; ma=86400
 openai-organization:
-- crewai-iuxna1
+- user-tqfegqsiobpvvjmn0giaipdq
 openai-processing-ms:
-- '389'
+- '655'
 openai-version:
 - '2020-10-01'
 strict-transport-security:
@@ -296,17 +321,18 @@ interactions:
 x-ratelimit-limit-requests:
 - '10000'
 x-ratelimit-limit-tokens:
-- '30000000'
+- '200000'
 x-ratelimit-remaining-requests:
-- '9999'
+- '9997'
 x-ratelimit-remaining-tokens:
-- '29999791'
+- '199791'
 x-ratelimit-reset-requests:
-- 6ms
+- 24.239s
 x-ratelimit-reset-tokens:
-- 0s
+- 62ms
 x-request-id:
-- req_0167388f0a7a7f1a1026409834ceb914
-http_version: HTTP/1.1
-status_code: 200
+- req_a228208b0e965ecee334a6947d6c9e7c
+status:
+code: 200
+message: OK
 version: 1
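Note: the re-recorded cassette stores the response body as a gzip-compressed, base64-encoded YAML !!binary scalar rather than inline JSON. A minimal sketch of inspecting such a body by hand, assuming the payload is the usual chat-completion JSON; the raw value below is truncated and purely illustrative:

import base64
import gzip
import json

# The !!binary scalar from the cassette, truncated here for brevity.
raw = "H4sIAAAAAAAAA4xSPW/bMBDd9SsOXLpIge..."

# base64 -> gzip bytes -> JSON. A YAML loader already decodes !!binary to
# bytes, in which case only the gzip.decompress step is needed.
payload = json.loads(gzip.decompress(base64.b64decode(raw)))
print(payload["choices"][0]["message"]["content"])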


@@ -564,6 +564,7 @@ def test_crew_kickoff_usage_metrics():
     assert result.token_usage.prompt_tokens > 0
     assert result.token_usage.completion_tokens > 0
     assert result.token_usage.successful_requests > 0
+    assert result.token_usage.cached_prompt_tokens == 0

 def test_agents_rpm_is_never_set_if_crew_max_RPM_is_not_set():
@@ -1284,6 +1285,7 @@ def test_agent_usage_metrics_are_captured_for_hierarchical_process():
         prompt_tokens=1562,
         completion_tokens=111,
         successful_requests=3,
+        cached_prompt_tokens=0
     )
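The two hunks above add the new cached_prompt_tokens counter to usage metrics. A minimal sketch of reading it after a run, assuming the public crewai API these tests exercise; the agent, task, and string contents are placeholders, not taken from the diff:

from crewai import Agent, Crew, Task

agent = Agent(role="Researcher", goal="Research a topic", backstory="Experienced analyst")
task = Task(description="Summarize the topic", expected_output="A short summary", agent=agent)
crew = Crew(agents=[agent], tasks=[task])

result = crew.kickoff()
usage = result.token_usage
# cached_prompt_tokens is reported alongside the existing counters
print(usage.prompt_tokens, usage.completion_tokens,
      usage.successful_requests, usage.cached_prompt_tokens)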
@@ -1777,26 +1779,22 @@ def test_crew_train_success(
         ]
     )
-    crew_training_handler.assert_has_calls(
-        [
-            mock.call("training_data.pkl"),
-            mock.call().load(),
-            mock.call("trained_agents_data.pkl"),
-            mock.call().save_trained_data(
-                agent_id="Researcher",
-                trained_data=task_evaluator().evaluate_training_data().model_dump(),
-            ),
-            mock.call("trained_agents_data.pkl"),
-            mock.call().save_trained_data(
-                agent_id="Senior Writer",
-                trained_data=task_evaluator().evaluate_training_data().model_dump(),
-            ),
-            mock.call(),
-            mock.call().load(),
-            mock.call(),
-            mock.call().load(),
-        ]
-    )
+    crew_training_handler.assert_any_call("training_data.pkl")
+    crew_training_handler().load.assert_called()
+
+    crew_training_handler.assert_any_call("trained_agents_data.pkl")
+    crew_training_handler().load.assert_called()
+
+    crew_training_handler().save_trained_data.assert_has_calls([
+        mock.call(
+            agent_id="Researcher",
+            trained_data=task_evaluator().evaluate_training_data().model_dump(),
+        ),
+        mock.call(
+            agent_id="Senior Writer",
+            trained_data=task_evaluator().evaluate_training_data().model_dump(),
+        )
+    ])

 def test_crew_train_error():
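The rewrite above swaps one long, order-sensitive assert_has_calls block for targeted assertions. A standalone sketch of the unittest.mock semantics it relies on (object and call names here are illustrative):

from unittest import mock

handler = mock.MagicMock()
handler("training_data.pkl")
handler().load()

# Order-independent: passes if this exact call happened at least once.
handler.assert_any_call("training_data.pkl")

# Sequence-sensitive: the listed calls must appear in this order within
# mock_calls, which is what made the old monolithic assertion brittle.
handler.assert_has_calls([mock.call("training_data.pkl")])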


@@ -0,0 +1,270 @@
interactions:
- request:
body: ''
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: GET
uri: https://api.mem0.ai/v1/memories/?user_id=test
response:
body:
string: '[]'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b477138bad847b9-BOM
Connection:
- keep-alive
Content-Length:
- '2'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:11 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=uuyH2foMJVDpV%2FH52g1q%2FnvXKe3dBKVzvsK0mqmSNezkiszNR9OgrEJfVqmkX%2FlPFRP2sH4zrOuzGo6k%2FjzsjYJczqSWJUZHN2pPujiwnr1E9W%2BdLGKmG6%2FqPrGYAy2SBRWkkJVWsTO3OQ%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- GET, POST, DELETE, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"batch": [{"properties": {"python_version": "3.12.4 (v3.12.4:8e8a4baf65,
Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]", "os": "darwin",
"os_version": "Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030",
"os_release": "23.4.0", "processor": "arm", "machine": "arm64", "function":
"mem0.client.main.MemoryClient", "$lib": "posthog-python", "$lib_version": "3.5.0",
"$geoip_disable": true}, "timestamp": "2024-08-17T06:00:11.526640+00:00", "context":
{}, "distinct_id": "fd411bd3-99a2-42d6-acd7-9fca8ad09580", "event": "client.init"}],
"historical_migration": false, "sentAt": "2024-08-17T06:00:11.701621+00:00",
"api_key": "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '740'
Content-Type:
- application/json
User-Agent:
- posthog-python/3.5.0
method: POST
uri: https://us.i.posthog.com/batch/
response:
body:
string: '{"status":"Ok"}'
headers:
Connection:
- keep-alive
Content-Length:
- '15'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:12 GMT
access-control-allow-credentials:
- 'true'
server:
- envoy
vary:
- origin, access-control-request-method, access-control-request-headers
x-envoy-upstream-service-time:
- '69'
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Remember the following insights
from Agent run: test value with provider"}], "metadata": {"task": "test_task_provider",
"agent": "test_agent_provider"}, "app_id": "Researcher"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '219'
content-type:
- application/json
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: POST
uri: https://api.mem0.ai/v1/memories/
response:
body:
string: '{"message":"ok"}'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b477140282547b9-BOM
Connection:
- keep-alive
Content-Length:
- '16'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:13 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=FRjJKSk3YxVj03wA7S05H8ts35KnWfqS3wb6Rfy4kVZ4BgXfw7nJbm92wI6vEv5fWcAcHVnOlkJDggs11B01BMuB2k3a9RqlBi0dJNiMuk%2Bgm5xE%2BODMPWJctYNRwQMjNVbteUpS%2Fad8YA%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- GET, POST, DELETE, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"query": "test value with provider", "limit": 3, "app_id": "Researcher"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '73'
content-type:
- application/json
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: POST
uri: https://api.mem0.ai/v1/memories/search/
response:
body:
string: '[]'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b47714d083b47b9-BOM
Connection:
- keep-alive
Content-Length:
- '2'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:14 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=2DRWL1cdKdMvnE8vx1fPUGeTITOgSGl3N5g84PS6w30GRqpfz79BtSx6REhpnOiFV8kM6KGqln0iCZ5yoHc2jBVVJXhPJhQ5t0uerD9JFnkphjISrJOU1MJjZWneT9PlNABddxvVNCmluA%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- POST, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"batch": [{"properties": {"python_version": "3.12.4 (v3.12.4:8e8a4baf65,
Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]", "os": "darwin",
"os_version": "Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030",
"os_release": "23.4.0", "processor": "arm", "machine": "arm64", "function":
"mem0.client.main.MemoryClient", "$lib": "posthog-python", "$lib_version": "3.5.0",
"$geoip_disable": true}, "timestamp": "2024-08-17T06:00:13.593952+00:00", "context":
{}, "distinct_id": "fd411bd3-99a2-42d6-acd7-9fca8ad09580", "event": "client.add"}],
"historical_migration": false, "sentAt": "2024-08-17T06:00:13.858277+00:00",
"api_key": "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '739'
Content-Type:
- application/json
User-Agent:
- posthog-python/3.5.0
method: POST
uri: https://us.i.posthog.com/batch/
response:
body:
string: '{"status":"Ok"}'
headers:
Connection:
- keep-alive
Content-Length:
- '15'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:13 GMT
access-control-allow-credentials:
- 'true'
server:
- envoy
vary:
- origin, access-control-request-method, access-control-request-headers
x-envoy-upstream-service-time:
- '33'
status:
code: 200
message: OK
version: 1
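The new cassette above records the three Mem0 REST calls the memory integration makes: listing a user's memories, storing one, and searching. A minimal sketch of issuing the same requests directly with httpx (the client shown in the recorded user-agent); the Authorization header is an assumption, since the recording does not include credentials:

import httpx

headers = {"Authorization": "Token <MEM0_API_KEY>"}  # assumed auth scheme

with httpx.Client(base_url="https://api.mem0.ai/v1", headers=headers) as client:
    # GET /v1/memories/?user_id=test -- list memories for a user
    memories = client.get("/memories/", params={"user_id": "test"}).json()

    # POST /v1/memories/ -- store a new memory (body taken from the cassette)
    client.post("/memories/", json={
        "messages": [{"role": "user", "content": "Remember the following insights from Agent run: test value with provider"}],
        "metadata": {"task": "test_task_provider", "agent": "test_agent_provider"},
        "app_id": "Researcher",
    })

    # POST /v1/memories/search/ -- search stored memories
    hits = client.post("/memories/search/", json={"query": "test value with provider", "limit": 3, "app_id": "Researcher"}).json()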

uv.lock (generated, 84 changed lines)

@@ -490,28 +490,32 @@ wheels = [

 [[package]]
 name = "chroma-hnswlib"
-version = "0.7.3"
+version = "0.7.6"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "numpy" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/c0/59/1224cbae62c7b84c84088cdf6c106b9b2b893783c000d22c442a1672bc75/chroma-hnswlib-0.7.3.tar.gz", hash = "sha256:b6137bedde49fffda6af93b0297fe00429fc61e5a072b1ed9377f909ed95a932", size = 31876 }
+sdist = { url = "https://files.pythonhosted.org/packages/73/09/10d57569e399ce9cbc5eee2134996581c957f63a9addfa6ca657daf006b8/chroma_hnswlib-0.7.6.tar.gz", hash = "sha256:4dce282543039681160259d29fcde6151cc9106c6461e0485f57cdccd83059b7", size = 32256 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/1a/36/d1069ffa520efcf93f6d81b527e3c7311e12363742fdc786cbdaea3ab02e/chroma_hnswlib-0.7.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:59d6a7c6f863c67aeb23e79a64001d537060b6995c3eca9a06e349ff7b0998ca", size = 219588 },
-    { url = "https://files.pythonhosted.org/packages/c3/e8/263d331f5ce29367f6f8854cd7fa1f54fce72ab4f92ab957525ef9165a9c/chroma_hnswlib-0.7.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d71a3f4f232f537b6152947006bd32bc1629a8686df22fd97777b70f416c127a", size = 197094 },
-    { url = "https://files.pythonhosted.org/packages/a9/72/a9b61ae00d490c26359a8e10f3974c0d38065b894e6a2573ec6a7597f8e3/chroma_hnswlib-0.7.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1c92dc1ebe062188e53970ba13f6b07e0ae32e64c9770eb7f7ffa83f149d4210", size = 2315620 },
-    { url = "https://files.pythonhosted.org/packages/2f/48/f7609a3cb15a24c5d8ec18911ce10ac94144e9a89584f0a86bf9871b024c/chroma_hnswlib-0.7.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:49da700a6656fed8753f68d44b8cc8ae46efc99fc8a22a6d970dc1697f49b403", size = 2350956 },
-    { url = "https://files.pythonhosted.org/packages/cc/3d/ca311b8f79744db3f4faad8fd9140af80d34c94829d3ed1726c98cf4a611/chroma_hnswlib-0.7.3-cp310-cp310-win_amd64.whl", hash = "sha256:108bc4c293d819b56476d8f7865803cb03afd6ca128a2a04d678fffc139af029", size = 150598 },
-    { url = "https://files.pythonhosted.org/packages/94/3f/844393b0d2ea1072b7704d6eff5c595e05ae8b831b96340cdb76b2fe995c/chroma_hnswlib-0.7.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:11e7ca93fb8192214ac2b9c0943641ac0daf8f9d4591bb7b73be808a83835667", size = 221219 },
-    { url = "https://files.pythonhosted.org/packages/11/7a/673ccb9bb2faf9cf655d9040e970c02a96645966e06837fde7d10edf242a/chroma_hnswlib-0.7.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:6f552e4d23edc06cdeb553cdc757d2fe190cdeb10d43093d6a3319f8d4bf1c6b", size = 198652 },
-    { url = "https://files.pythonhosted.org/packages/ba/f4/c81a40da5473d5d80fc9d0c5bd5b1cb64e530a6ea941c69f195fe81c488c/chroma_hnswlib-0.7.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f96f4d5699e486eb1fb95849fe35ab79ab0901265805be7e60f4eaa83ce263ec", size = 2332260 },
-    { url = "https://files.pythonhosted.org/packages/48/0e/068b658a547d6090b969014146321e28dae1411da54b76d081e51a2af22b/chroma_hnswlib-0.7.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:368e57fe9ebae05ee5844840fa588028a023d1182b0cfdb1d13f607c9ea05756", size = 2367211 },
-    { url = "https://files.pythonhosted.org/packages/d2/32/a91850c7aa8a34f61838913155103808fe90da6f1ea4302731b59e9ba6f2/chroma_hnswlib-0.7.3-cp311-cp311-win_amd64.whl", hash = "sha256:b7dca27b8896b494456db0fd705b689ac6b73af78e186eb6a42fea2de4f71c6f", size = 151574 },
+    { url = "https://files.pythonhosted.org/packages/a8/74/b9dde05ea8685d2f8c4681b517e61c7887e974f6272bb24ebc8f2105875b/chroma_hnswlib-0.7.6-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f35192fbbeadc8c0633f0a69c3d3e9f1a4eab3a46b65458bbcbcabdd9e895c36", size = 195821 },
+    { url = "https://files.pythonhosted.org/packages/fd/58/101bfa6bc41bc6cc55fbb5103c75462a7bf882e1704256eb4934df85b6a8/chroma_hnswlib-0.7.6-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6f007b608c96362b8f0c8b6b2ac94f67f83fcbabd857c378ae82007ec92f4d82", size = 183854 },
+    { url = "https://files.pythonhosted.org/packages/17/ff/95d49bb5ce134f10d6aa08d5f3bec624eaff945f0b17d8c3fce888b9a54a/chroma_hnswlib-0.7.6-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:456fd88fa0d14e6b385358515aef69fc89b3c2191706fd9aee62087b62aad09c", size = 2358774 },
+    { url = "https://files.pythonhosted.org/packages/3a/6d/27826180a54df80dbba8a4f338b022ba21c0c8af96fd08ff8510626dee8f/chroma_hnswlib-0.7.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5dfaae825499c2beaa3b75a12d7ec713b64226df72a5c4097203e3ed532680da", size = 2392739 },
+    { url = "https://files.pythonhosted.org/packages/d6/63/ee3e8b7a8f931918755faacf783093b61f32f59042769d9db615999c3de0/chroma_hnswlib-0.7.6-cp310-cp310-win_amd64.whl", hash = "sha256:2487201982241fb1581be26524145092c95902cb09fc2646ccfbc407de3328ec", size = 150955 },
+    { url = "https://files.pythonhosted.org/packages/f5/af/d15fdfed2a204c0f9467ad35084fbac894c755820b203e62f5dcba2d41f1/chroma_hnswlib-0.7.6-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:81181d54a2b1e4727369486a631f977ffc53c5533d26e3d366dda243fb0998ca", size = 196911 },
+    { url = "https://files.pythonhosted.org/packages/0d/19/aa6f2139f1ff7ad23a690ebf2a511b2594ab359915d7979f76f3213e46c4/chroma_hnswlib-0.7.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:4b4ab4e11f1083dd0a11ee4f0e0b183ca9f0f2ed63ededba1935b13ce2b3606f", size = 185000 },
+    { url = "https://files.pythonhosted.org/packages/79/b1/1b269c750e985ec7d40b9bbe7d66d0a890e420525187786718e7f6b07913/chroma_hnswlib-0.7.6-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:53db45cd9173d95b4b0bdccb4dbff4c54a42b51420599c32267f3abbeb795170", size = 2377289 },
+    { url = "https://files.pythonhosted.org/packages/c7/2d/d5663e134436e5933bc63516a20b5edc08b4c1b1588b9680908a5f1afd04/chroma_hnswlib-0.7.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5c093f07a010b499c00a15bc9376036ee4800d335360570b14f7fe92badcdcf9", size = 2411755 },
+    { url = "https://files.pythonhosted.org/packages/3e/79/1bce519cf186112d6d5ce2985392a89528c6e1e9332d680bf752694a4cdf/chroma_hnswlib-0.7.6-cp311-cp311-win_amd64.whl", hash = "sha256:0540b0ac96e47d0aa39e88ea4714358ae05d64bbe6bf33c52f316c664190a6a3", size = 151888 },
+    { url = "https://files.pythonhosted.org/packages/93/ac/782b8d72de1c57b64fdf5cb94711540db99a92768d93d973174c62d45eb8/chroma_hnswlib-0.7.6-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:e87e9b616c281bfbe748d01705817c71211613c3b063021f7ed5e47173556cb7", size = 197804 },
+    { url = "https://files.pythonhosted.org/packages/32/4e/fd9ce0764228e9a98f6ff46af05e92804090b5557035968c5b4198bc7af9/chroma_hnswlib-0.7.6-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ec5ca25bc7b66d2ecbf14502b5729cde25f70945d22f2aaf523c2d747ea68912", size = 185421 },
+    { url = "https://files.pythonhosted.org/packages/d9/3d/b59a8dedebd82545d873235ef2d06f95be244dfece7ee4a1a6044f080b18/chroma_hnswlib-0.7.6-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:305ae491de9d5f3c51e8bd52d84fdf2545a4a2bc7af49765cda286b7bb30b1d4", size = 2389672 },
+    { url = "https://files.pythonhosted.org/packages/74/1e/80a033ea4466338824974a34f418e7b034a7748bf906f56466f5caa434b0/chroma_hnswlib-0.7.6-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:822ede968d25a2c88823ca078a58f92c9b5c4142e38c7c8b4c48178894a0a3c5", size = 2436986 },
 ]

 [[package]]
 name = "chromadb"
-version = "0.4.24"
+version = "0.5.18"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "bcrypt" },
@@ -519,6 +523,7 @@ dependencies = [
     { name = "chroma-hnswlib" },
     { name = "fastapi" },
     { name = "grpcio" },
+    { name = "httpx" },
     { name = "importlib-resources" },
     { name = "kubernetes" },
     { name = "mmh3" },
@@ -531,11 +536,10 @@ dependencies = [
     { name = "orjson" },
     { name = "overrides" },
     { name = "posthog" },
-    { name = "pulsar-client" },
     { name = "pydantic" },
     { name = "pypika" },
     { name = "pyyaml" },
-    { name = "requests" },
+    { name = "rich" },
     { name = "tenacity" },
     { name = "tokenizers" },
     { name = "tqdm" },
@@ -543,9 +547,9 @@ dependencies = [
     { name = "typing-extensions" },
     { name = "uvicorn", extra = ["standard"] },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/47/6b/a5465827d8017b658d18ad1e63d2dc31109dec717c6bd068e82485186f4b/chromadb-0.4.24.tar.gz", hash = "sha256:a5c80b4e4ad9b236ed2d4899a5b9e8002b489293f2881cb2cadab5b199ee1c72", size = 13667084 }
+sdist = { url = "https://files.pythonhosted.org/packages/15/95/d1a3f14c864e37d009606b82bd837090902b5e5a8e892fcab07eeaec0438/chromadb-0.5.18.tar.gz", hash = "sha256:cfbb3e5aeeb1dd532b47d80ed9185e8a9886c09af41c8e6123edf94395d76aec", size = 33620708 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/cc/63/b7d76109331318423f9cfb89bd89c99e19f5d0b47a5105439a629224d297/chromadb-0.4.24-py3-none-any.whl", hash = "sha256:3a08e237a4ad28b5d176685bd22429a03717fe09d35022fb230d516108da01da", size = 525452 },
+    { url = "https://files.pythonhosted.org/packages/82/85/4d2f8b9202153105ad4514ae09e9fe6f3b353a45e44e0ef7eca03dd8b9dc/chromadb-0.5.18-py3-none-any.whl", hash = "sha256:9dd3827b5e04b4ff0a5ea0df28a78bac88a09f45be37fcd7fe20f879b57c43cf", size = 615499 },
 ]

 [[package]]
@@ -634,6 +638,9 @@ dependencies = [
 agentops = [
     { name = "agentops" },
 ]
+mem0 = [
+    { name = "mem0ai" },
+]
 tools = [
     { name = "crewai-tools" },
 ]
@@ -663,7 +670,7 @@ requires-dist = [
     { name = "agentops", marker = "extra == 'agentops'", specifier = ">=0.3.0" },
     { name = "appdirs", specifier = ">=1.4.4" },
     { name = "auth0-python", specifier = ">=4.7.1" },
-    { name = "chromadb", specifier = ">=0.4.24" },
+    { name = "chromadb", specifier = ">=0.5.18" },
     { name = "click", specifier = ">=8.1.7" },
     { name = "crewai-tools", specifier = ">=0.14.0" },
     { name = "crewai-tools", marker = "extra == 'tools'", specifier = ">=0.14.0" },
@@ -672,6 +679,7 @@ requires-dist = [
     { name = "jsonref", specifier = ">=1.1.0" },
     { name = "langchain", specifier = ">=0.2.16" },
     { name = "litellm", specifier = ">=1.44.22" },
+    { name = "mem0ai", marker = "extra == 'mem0'", specifier = ">=0.1.29" },
     { name = "openai", specifier = ">=1.13.3" },
     { name = "opentelemetry-api", specifier = ">=1.22.0" },
     { name = "opentelemetry-exporter-otlp-proto-http", specifier = ">=1.22.0" },
@@ -889,7 +897,7 @@ wheels = [

 [[package]]
 name = "embedchain"
-version = "0.1.123"
+version = "0.1.125"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "alembic" },
@@ -914,9 +922,9 @@ dependencies = [
     { name = "sqlalchemy" },
     { name = "tiktoken" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/5d/6a/955b5a72fa6727db203c4d46ae0e30ac47f4f50389f663cd5ea157b0d819/embedchain-0.1.123.tar.gz", hash = "sha256:aecaf81c21de05b5cdb649b6cde95ef68ffa759c69c54f6ff2eaa667f2ad0580", size = 124797 }
+sdist = { url = "https://files.pythonhosted.org/packages/6c/ea/eedb6016719f94fe4bd4c5aa44cc5463d85494bbd0864cc465e4317d4987/embedchain-0.1.125.tar.gz", hash = "sha256:15a6d368b48ba33feb93b237caa54f6e9078537c02a49c1373e59cc32627a138", size = 125176 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/a7/51/0c78d26da4afbe68370306669556b274f1021cac02f3155d8da2be407763/embedchain-0.1.123-py3-none-any.whl", hash = "sha256:1210e993b6364d7c702b6bd44b053fc244dd77f2a65ea4b90b62709114ea6c25", size = 210909 },
+    { url = "https://files.pythonhosted.org/packages/52/82/3d0355c22bc68cfbb8fbcf670da4c01b31bd7eb516974a08cf7533e89887/embedchain-0.1.125-py3-none-any.whl", hash = "sha256:f87b49732dc192c6b61221830f29e59cf2aff26d8f5d69df81f6f6cf482715c2", size = 211367 },
 ]

 [[package]]
@@ -2160,7 +2168,7 @@ wheels = [

 [[package]]
 name = "mem0ai"
-version = "0.1.22"
+version = "0.1.29"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "openai" },
@@ -2170,9 +2178,9 @@ dependencies = [
     { name = "qdrant-client" },
     { name = "sqlalchemy" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/7f/b4/64c6f7d9684bd1f9b46d251abfc7d5b2cc8371d70f1f9eec097f9872c719/mem0ai-0.1.22.tar.gz", hash = "sha256:d01aa028763719bd0ede2de4602121a7c3bf023f46112cd50cc9169140e11be2", size = 53117 }
+sdist = { url = "https://files.pythonhosted.org/packages/a9/bf/152718f9da3844dd24d4c45850b2e719798b5ce9389adf4ec873ee8905ca/mem0ai-0.1.29.tar.gz", hash = "sha256:42adefb7a9b241be03fbcabadf5328abf91b4ac390bc97e5966e55e3cac192c5", size = 55201 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/2b/27/3ef75abb28bf8b46c2cc34730f6be733ef2584652474216215019ee036a2/mem0ai-0.1.22-py3-none-any.whl", hash = "sha256:c783e15131c16a0d91e44e30195c1eeae9c36468de40006d5e42cf4516059855", size = 75695 },
+    { url = "https://files.pythonhosted.org/packages/65/9b/755be84f669415b3b513cfd935e768c4c84ac5c1ab6ff6ac2dab990a261a/mem0ai-0.1.29-py3-none-any.whl", hash = "sha256:07bbfd4238d0d7da65d5e4cf75a217eeb5b2829834e399074b05bb046730a57f", size = 79558 },
 ]

 [[package]]
@@ -3202,34 +3210,6 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/22/a6/858897256d0deac81a172289110f31629fc4cee19b6f01283303e18c8db3/ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35", size = 13993 },
 ]

-[[package]]
-name = "pulsar-client"
-version = "3.5.0"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
-    { name = "certifi" },
-]
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/e0/aa/eb3b04be87b961324e49748f3a715a12127d45d76258150bfa61b2a002d8/pulsar_client-3.5.0-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:c18552edb2f785de85280fe624bc507467152bff810fc81d7660fa2dfa861f38", size = 10953552 },
-    { url = "https://files.pythonhosted.org/packages/cc/20/d59bf89ccdda45edd89f5b54bd1e93605ebe5ad3cb73f4f4f5e8eca8f9e6/pulsar_client-3.5.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:18d438e456c146f01be41ef146f649dedc8f7bc714d9eaef94cff2e34099812b", size = 5190714 },
-    { url = "https://files.pythonhosted.org/packages/1a/02/ca7e96b97d564d0375b8e3de65f95ac86c8502c40f6ff750e9d145709d9a/pulsar_client-3.5.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:18a26a0719841103c7a89eb1492c4a8fedf89adaa386375baecbb4fa2707e88f", size = 5429820 },
-    { url = "https://files.pythonhosted.org/packages/47/f3/682670cdc951b830cd3d8d1287521997327254e59508772664aaa656e246/pulsar_client-3.5.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:ab0e1605dc5f44a126163fd06cd0a768494ad05123f6e0de89a2c71d6e2d2319", size = 5710427 },
-    { url = "https://files.pythonhosted.org/packages/bc/00/119cd039286dfc1c91a5580963e9ba79204cd4717b16b7a6fdc57d1c1673/pulsar_client-3.5.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:cdef720891b97656fdce3bf5913ea7729b2156b84ba64314f432c1e72c6117fa", size = 5916490 },
-    { url = "https://files.pythonhosted.org/packages/0a/cc/d606b483dbb263cbaf7fc7c3d2ec4032628cf3324266cf9a4ccdb2a73076/pulsar_client-3.5.0-cp310-cp310-win_amd64.whl", hash = "sha256:a42544e38773191fe550644a90e8050579476bb2dcf17ac69a4aed62a6cb70e7", size = 3305387 },
-    { url = "https://files.pythonhosted.org/packages/0d/2e/aec6886a6d67f09230476182399b7fad694fbcbbaf004ce914725d4eddd9/pulsar_client-3.5.0-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:fd94432ea5d398ea78f8f2e09a217ec5058d26330c137a22690478c031e116da", size = 10954116 },
-    { url = "https://files.pythonhosted.org/packages/43/06/b98df9300f60e5fad3396f843dd633c31176a495a2d60ba111c99511658a/pulsar_client-3.5.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d6252ae462e07ece4071213fdd9c76eab82ca522a749f2dc678037d4cbacd40b", size = 5189618 },
-    { url = "https://files.pythonhosted.org/packages/72/05/c9aef7da7802a03c0b65ffe8f00a24289ff992f99ed5d5d1fd0ed63d9cf6/pulsar_client-3.5.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:03b4d440b2d74323784328b082872ee2f206c440b5d224d7941eb3c083ec06c6", size = 5429329 },
-    { url = "https://files.pythonhosted.org/packages/06/96/9acfe6f1d827cdd53b8460b04c63b4081333ef64a49a2f425419f1eb6b6b/pulsar_client-3.5.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:f60af840b8d64a2fac5a0c1ce6ae0ddffec5f42267c6ded2c5e74bad8345f2a1", size = 5710106 },
-    { url = "https://files.pythonhosted.org/packages/e1/7b/877a06eff5c9ac828cdb75e378ee29b0adac9328da9ee173eaf7076d8c56/pulsar_client-3.5.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:2277a447c3b7f6571cb1eb9fc5c25da3fdd43d0b2fb91cf52054adfadc7d6842", size = 5916541 },
-    { url = "https://files.pythonhosted.org/packages/fb/62/ed1da1ef72c95ba6a830e43995550ed0a1d26c223fb4b036ac6cd028c2ed/pulsar_client-3.5.0-cp311-cp311-win_amd64.whl", hash = "sha256:f20f3e9dd50db2a37059abccad42078b7a4754b8bc1d3ae6502e71c1ad2209f0", size = 3305485 },
-    { url = "https://files.pythonhosted.org/packages/81/19/4b145766df706aa5e09f60bbf5f87b934e6ac950fddd18f4acd520c465b9/pulsar_client-3.5.0-cp312-cp312-macosx_10_15_universal2.whl", hash = "sha256:d61f663d85308e12f44033ba95af88730f581a7e8da44f7a5c080a3aaea4878d", size = 10967548 },
-    { url = "https://files.pythonhosted.org/packages/bf/bd/9bc05ee861b46884554a4c61f96edb9602de131dd07982c27920e554ab5b/pulsar_client-3.5.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2a1ba0be25b6f747bcb28102b7d906ec1de48dc9f1a2d9eacdcc6f44ab2c9e17", size = 5189598 },
-    { url = "https://files.pythonhosted.org/packages/76/00/379bedfa6f1c810553996a4cb0984fa2e2c89afc5953df0936e1c9636003/pulsar_client-3.5.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a181e3e60ac39df72ccb3c415d7aeac61ad0286497a6e02739a560d5af28393a", size = 5430145 },
-    { url = "https://files.pythonhosted.org/packages/88/c8/8a37d75aa9132a69a28061c9e5f4b516328a1968b58bbae018f431c6d3d4/pulsar_client-3.5.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:3c72895ff7f51347e4f78b0375b2213fa70dd4790bbb78177b4002846f1fd290", size = 5708960 },
-    { url = "https://files.pythonhosted.org/packages/6e/9a/abd98661e3f7ae3a8e1d3fb0fc7eba1a30005391ebd575ab06a66021256c/pulsar_client-3.5.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:547dba1b185a17eba915e51d0a3aca27c80747b6187e5cd7a71a3ca33921decc", size = 5915227 },
-    { url = "https://files.pythonhosted.org/packages/a2/51/db376181d05716de595515fac736e3d06e96d3345ba0e31c0a90c352eae1/pulsar_client-3.5.0-cp312-cp312-win_amd64.whl", hash = "sha256:443b786eed96bc86d2297a6a42e79f39d1abf217ec603e0bd303f3488c0234af", size = 3306515 },
-]

 [[package]]
 name = "pure-eval"
 version = "0.2.3"