Mirror of https://github.com/crewAIInc/crewAI.git, synced 2025-12-18 21:38:29 +00:00
Compare commits: bugfix/mem...fix/step-c (13 commits)
| Author | SHA1 | Date |
|---|---|---|
| | ee8fe74395 | |
| | d8f271daeb | |
| | bcfcf88e78 | |
| | fd0de3a47e | |
| | c7b9ae02fd | |
| | 4afb022572 | |
| | 8610faef22 | |
| | 6d677541c7 | |
| | 49220ec163 | |
| | 40a676b7ac | |
| | 50bf146d1e | |
| | 40d378abfb | |
| | 1b09b085a7 | |
@@ -25,7 +25,100 @@ By default, CrewAI uses the `gpt-4o-mini` model. It uses environment variables i
 - `OPENAI_API_BASE`
 - `OPENAI_API_KEY`
 
-### 2. Custom LLM Objects
+### 2. Updating YAML files
+
+You can update the `agents.yml` file to refer to the LLM you want to use:
+
+```yaml Code
+researcher:
+    role: Research Specialist
+    goal: Conduct comprehensive research and analysis to gather relevant information,
+      synthesize findings, and produce well-documented insights.
+    backstory: A dedicated research professional with years of experience in academic
+      investigation, literature review, and data analysis, known for thorough and
+      methodical approaches to complex research questions.
+    verbose: true
+    llm: openai/gpt-4o
+    # llm: azure/gpt-4o-mini
+    # llm: gemini/gemini-pro
+    # llm: anthropic/claude-3-5-sonnet-20240620
+    # llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
+    # llm: mistral/mistral-large-latest
+    # llm: ollama/llama3:70b
+    # llm: groq/llama-3.2-90b-vision-preview
+    # llm: watsonx/meta-llama/llama-3-1-70b-instruct
+    # ...
+```
+
+Keep in mind that you will need to set certain ENV vars, depending on the model you are
+using, to account for the credentials, or set a custom LLM object as described below.
+Here are some of the required ENV vars for some of the LLM integrations:
+
+<AccordionGroup>
+<Accordion title="OpenAI">
+```python Code
+OPENAI_API_KEY=<your-api-key>
+OPENAI_API_BASE=<optional-custom-base-url>
+OPENAI_MODEL_NAME=<openai-model-name>
+OPENAI_ORGANIZATION=<your-org-id> # OPTIONAL
+```
+</Accordion>
+
+<Accordion title="Anthropic">
+```python Code
+ANTHROPIC_API_KEY=<your-api-key>
+```
+</Accordion>
+
+<Accordion title="Google">
+```python Code
+GEMINI_API_KEY=<your-api-key>
+```
+</Accordion>
+
+<Accordion title="Azure">
+```python Code
+AZURE_API_KEY=<your-api-key> # "my-azure-api-key"
+AZURE_API_BASE=<your-resource-url> # "https://example-endpoint.openai.azure.com"
+AZURE_API_VERSION=<api-version> # "2023-05-15"
+AZURE_AD_TOKEN=<your-azure-ad-token> # Optional
+AZURE_API_TYPE=<your-azure-api-type> # Optional
+```
+</Accordion>
+
+<Accordion title="AWS Bedrock">
+```python Code
+AWS_ACCESS_KEY_ID=<your-access-key>
+AWS_SECRET_ACCESS_KEY=<your-secret-key>
+AWS_DEFAULT_REGION=<your-region>
+```
+</Accordion>
+
+<Accordion title="Mistral">
+```python Code
+MISTRAL_API_KEY=<your-api-key>
+```
+</Accordion>
+
+<Accordion title="Groq">
+```python Code
+GROQ_API_KEY=<your-api-key>
+```
+</Accordion>
+
+<Accordion title="IBM watsonx.ai">
+```python Code
+WATSONX_URL=<your-url> # (required) Base URL of your WatsonX instance
+WATSONX_APIKEY=<your-apikey> # (required) IBM cloud API key
+WATSONX_TOKEN=<your-token> # (required) IAM auth token (alternative to APIKEY)
+WATSONX_PROJECT_ID=<your-project-id> # (optional) Project ID of your WatsonX instance
+WATSONX_DEPLOYMENT_SPACE_ID=<your-space-id> # (optional) ID of deployment space for deployed models
+```
+</Accordion>
+</AccordionGroup>
+
+### 3. Custom LLM Objects
 
 Pass a custom LLM implementation or object from another library.
 
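As a quick illustration of the "Custom LLM Objects" option above, here is a minimal sketch, assuming the `crewai.LLM` class and the `Agent` fields that appear elsewhere in this diff; the model name and parameter values are illustrative only, not part of the change itself.

```python
from crewai import Agent, LLM

# Illustrative values only; any model string your provider supports works here.
custom_llm = LLM(model="openai/gpt-4o", temperature=0.2)

researcher = Agent(
    role="Research Specialist",
    goal="Conduct comprehensive research and produce well-documented insights.",
    backstory="A dedicated research professional.",
    llm=custom_llm,  # pass the LLM object instead of a provider/model string
    verbose=True,
)
```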
@@ -224,6 +317,29 @@ These are examples of how to configure LLMs for your agent.
 </Accordion>
 
 <Accordion title="IBM watsonx.ai">
+You can use IBM watsonx.ai by setting the following ENV vars:
+
+```python Code
+WATSONX_URL=<your-url>
+WATSONX_APIKEY=<your-apikey>
+WATSONX_PROJECT_ID=<your-project-id>
+```
+
+You can then define your agents' LLMs by updating the `agents.yml`:
+
+```yaml Code
+researcher:
+    role: Research Specialist
+    goal: Conduct comprehensive research and analysis to gather relevant information,
+      synthesize findings, and produce well-documented insights.
+    backstory: A dedicated research professional with years of experience in academic
+      investigation, literature review, and data analysis, known for thorough and
+      methodical approaches to complex research questions.
+    verbose: true
+    llm: watsonx/meta-llama/llama-3-1-70b-instruct
+```
+
+You can also set up agents more dynamically with a base-level LLM instance, like below:
+
 ```python Code
 from crewai import LLM
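The hunk above is truncated at `from crewai import LLM`. A minimal sketch of what the dynamic, base-level LLM setup can look like, assuming the watsonx ENV vars from the accordion are exported and reusing the same illustrative agent fields (this is an assumption-laden example, not the exact code from the docs page):

```python
from crewai import Agent, LLM

# Assumes WATSONX_URL, WATSONX_APIKEY and WATSONX_PROJECT_ID are set in the environment.
watsonx_llm = LLM(model="watsonx/meta-llama/llama-3-1-70b-instruct")

researcher = Agent(
    role="Research Specialist",
    goal="Conduct comprehensive research and produce well-documented insights.",
    backstory="A dedicated research professional.",
    llm=watsonx_llm,
    verbose=True,
)
```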
@@ -1,6 +1,6 @@
 [project]
 name = "crewai"
-version = "0.76.9"
+version = "0.79.4"
 description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
 readme = "README.md"
 requires-python = ">=3.10,<=3.13"
@@ -16,7 +16,7 @@ dependencies = [
     "opentelemetry-exporter-otlp-proto-http>=1.22.0",
     "instructor>=1.3.3",
     "regex>=2024.9.11",
-    "crewai-tools>=0.13.4",
+    "crewai-tools>=0.14.0",
     "click>=8.1.7",
     "python-dotenv>=1.0.0",
     "appdirs>=1.4.4",
@@ -37,7 +37,7 @@ Documentation = "https://docs.crewai.com"
 Repository = "https://github.com/crewAIInc/crewAI"
 
 [project.optional-dependencies]
-tools = ["crewai-tools>=0.13.4"]
+tools = ["crewai-tools>=0.14.0"]
 agentops = ["agentops>=0.3.0"]
 
 [tool.uv]
@@ -52,7 +52,7 @@ dev-dependencies = [
     "mkdocs-material-extensions>=1.3.1",
     "pillow>=10.2.0",
     "cairosvg>=2.7.1",
-    "crewai-tools>=0.13.4",
+    "crewai-tools>=0.14.0",
     "pytest>=8.0.0",
     "pytest-vcr>=1.0.2",
     "python-dotenv>=1.0.0",
@@ -14,5 +14,5 @@ warnings.filterwarnings(
     category=UserWarning,
     module="pydantic.main",
 )
-__version__ = "0.76.9"
+__version__ = "0.79.4"
 __all__ = ["Agent", "Crew", "Process", "Task", "Pipeline", "Router", "LLM", "Flow"]
@@ -123,6 +123,11 @@ class Agent(BaseAgent):
     @model_validator(mode="after")
     def post_init_setup(self):
         self.agent_ops_agent_name = self.role
+        unnacepted_attributes = [
+            "AWS_ACCESS_KEY_ID",
+            "AWS_SECRET_ACCESS_KEY",
+            "AWS_REGION_NAME",
+        ]
 
         # Handle different cases for self.llm
         if isinstance(self.llm, str):
@@ -146,9 +151,14 @@ class Agent(BaseAgent):
             if api_base:
                 llm_params["base_url"] = api_base
 
+            set_provider = model_name.split("/")[0] if "/" in model_name else "openai"
+
             # Iterate over all environment variables to find matching API keys or use defaults
             for provider, env_vars in ENV_VARS.items():
+                if provider == set_provider:
                     for env_var in env_vars:
+                        if env_var["key_name"] in unnacepted_attributes:
+                            continue
                         # Check if the environment variable is set
                         if "key_name" in env_var:
                             env_value = os.environ.get(env_var["key_name"])
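The added `set_provider` lookup above keys the ENV_VARS scan to the prefix of the model string. A minimal sketch of that parsing rule (the helper name here is ours, not CrewAI's):

```python
def provider_for(model_name: str) -> str:
    # "<provider>/<model>" strings select that provider; bare model names fall back to openai.
    return model_name.split("/")[0] if "/" in model_name else "openai"

assert provider_for("groq/llama-3.2-90b-vision-preview") == "groq"
assert provider_for("watsonx/meta-llama/llama-3-1-70b-instruct") == "watsonx"
assert provider_for("gpt-4o-mini") == "openai"
```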
@@ -151,6 +151,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
             if self._should_force_answer():
                 if self.have_forced_answer:
                     return AgentFinish(
+                        thought="",
                         output=self._i18n.errors(
                             "force_final_answer_error"
                         ).format(formatted_answer.text),
@@ -332,9 +333,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         if self.crew is not None and hasattr(self.crew, "_train_iteration"):
             train_iteration = self.crew._train_iteration
             if agent_id in training_data and isinstance(train_iteration, int):
-                training_data[agent_id][train_iteration][
-                    "improved_output"
-                ] = result.output
+                training_data[agent_id][train_iteration]["improved_output"] = (
+                    result.output
+                )
                 training_handler.save(training_data)
             else:
                 self._logger.log(
@@ -385,4 +386,5 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         return CrewAgentParser(agent=self.agent).parse(answer)
 
     def _format_msg(self, prompt: str, role: str = "user") -> Dict[str, str]:
+        prompt = prompt.rstrip()
         return {"role": role, "content": prompt}
@@ -24,7 +24,6 @@ def run_crew() -> None:
             f"Please run `crewai update` to update your pyproject.toml to use uv.",
             fg="red",
         )
-        print()
 
     try:
         subprocess.run(command, capture_output=False, text=True, check=True)
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.76.9,<1.0.0"
+    "crewai[tools]>=0.79.4,<1.0.0"
 ]
 
 [project.scripts]
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.76.9,<1.0.0",
+    "crewai[tools]>=0.79.4,<1.0.0",
 ]
 
 [project.scripts]
@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
 
 [tool.poetry.dependencies]
 python = ">=3.10,<=3.13"
-crewai = { extras = ["tools"], version = ">=0.76.9,<1.0.0" }
+crewai = { extras = ["tools"], version = ">=0.79.4,<1.0.0" }
 asyncio = "*"
 
 [tool.poetry.scripts]
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = ["Your Name <you@example.com>"]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.76.9,<1.0.0"
+    "crewai[tools]>=0.79.4,<1.0.0"
 ]
 
 [project.scripts]
@@ -5,6 +5,6 @@ description = "Power up your crews with {{folder_name}}"
 readme = "README.md"
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.76.9"
+    "crewai[tools]>=0.79.4"
 ]
 
@@ -118,12 +118,12 @@ class LLM:
 
         litellm.drop_params = True
         litellm.set_verbose = False
-        litellm.callbacks = callbacks
+        self.set_callbacks(callbacks)
 
     def call(self, messages: List[Dict[str, str]], callbacks: List[Any] = []) -> str:
         with suppress_warnings():
             if callbacks and len(callbacks) > 0:
-                litellm.callbacks = callbacks
+                self.set_callbacks(callbacks)
 
             try:
                 params = {
@@ -181,3 +181,15 @@ class LLM:
     def get_context_window_size(self) -> int:
         # Only using 75% of the context window size to avoid cutting the message in the middle
         return int(LLM_CONTEXT_WINDOW_SIZES.get(self.model, 8192) * 0.75)
+
+    def set_callbacks(self, callbacks: List[Any]):
+        callback_types = [type(callback) for callback in callbacks]
+        for callback in litellm.success_callback[:]:
+            if type(callback) in callback_types:
+                litellm.success_callback.remove(callback)
+
+        for callback in litellm._async_success_callback[:]:
+            if type(callback) in callback_types:
+                litellm._async_success_callback.remove(callback)
+
+        litellm.callbacks = callbacks
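The `set_callbacks` helper added above removes any previously registered litellm callback of the same type before installing the new ones, which is what keeps token counts from leaking between calls (see `tests/llm_test.py` below). A minimal, self-contained sketch of that dedup-by-type idea, using a hypothetical handler class rather than CrewAI's `TokenCalcHandler`:

```python
class TokenHandler:  # hypothetical stand-in for a litellm success callback
    def __init__(self, name: str) -> None:
        self.name = name

registered = [TokenHandler("stale")]   # plays the role of litellm.success_callback
incoming = [TokenHandler("fresh")]

incoming_types = [type(cb) for cb in incoming]
for cb in registered[:]:               # iterate over a copy while removing
    if type(cb) in incoming_types:
        registered.remove(cb)
registered.extend(incoming)

assert [cb.name for cb in registered] == ["fresh"]
```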
@@ -34,7 +34,6 @@ class ContextualMemory:
         formatted_results = "\n".join(
             [f"- {result['context']}" for result in stm_results]
         )
-        print("formatted_results stm", formatted_results)
         return f"Recent Insights:\n{formatted_results}" if stm_results else ""
 
     def _fetch_ltm_context(self, task) -> Optional[str]:
@@ -54,8 +53,6 @@ class ContextualMemory:
         formatted_results = list(dict.fromkeys(formatted_results))
         formatted_results = "\n".join([f"- {result}" for result in formatted_results])  # type: ignore # Incompatible types in assignment (expression has type "str", variable has type "list[str]")
 
-        print("formatted_results ltm", formatted_results)
-
         return f"Historical Data:\n{formatted_results}" if ltm_results else ""
 
     def _fetch_entity_context(self, query) -> str:
@@ -67,5 +64,4 @@ class ContextualMemory:
         formatted_results = "\n".join(
             [f"- {result['context']}" for result in em_results]  # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
         )
-        print("formatted_results em", formatted_results)
         return f"Entities:\n{formatted_results}" if em_results else ""
0    src/crewai/tools/cache_tools/__init__.py    Normal file
205  tests/cassettes/test_llm_callback_replacement.yaml    Normal file
@@ -0,0 +1,205 @@
interactions:
- request:
    body: '{"messages": [{"role": "user", "content": "Hello, world!"}], "model": "gpt-4o-mini",
      "stream": false}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '101'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.52.1
      x-stainless-arch:
      - x64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - Linux
      x-stainless-package-version:
      - 1.52.1
      x-stainless-raw-response:
      - 'true'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.11.9
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: !!binary |
        H4sIAAAAAAAAA4xSwWrcMBS8+ytedY6LvWvYZi8lpZSkBJLSQiChGK307FUi66nSc9Ml7L8H2e56
        l7bQiw8zb8Yzg14yAGG0WINQW8mq8za/+Oqv5MUmXv+8+/Hl3uO3j59u1efreHO+/PAszpKCNo+o
        +LfqraLOW2RDbqRVQMmYXMvVsqyWy1VVDERHGm2StZ7zivLOOJMvikWVF6u8fDept2QURrGGhwwA
        4GX4ppxO4y+xhsFrQDqMUbYo1ocjABHIJkTIGE1k6ViczaQix+iG6JdoLb2BS3oGJR1cwSiAHfXA
        pOXu/bEwYNNHmcK73toJ3x+SWGp9oE2c+APeGGfitg4oI7n018jkxcDuM4DvQ+P+pITwgTrPNdMT
        umRYlqOdmHeeyfOJY2JpZ3gxjXRqVmtkaWw8GkwoqbaoZ+W8ruy1oSMiO6r8Z5a/eY+1jWv/x34m
        lELPqGsfUBt12nc+C5ge4b/ODhMPgUXcRcauboxrMfhgxifQ+LrYyEKXi6opRbbPXgEAAP//AwAM
        DMWoEAMAAA==
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 8e185b2c1b790303-GRU
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Tue, 12 Nov 2024 17:49:00 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=l.QrRLcNZkML_KSfxjir6YCV35B8GNTitBTNh7cPGc4-1731433740-1.0.1.1-j1ejlmykyoI8yk6i6pQjtPoovGzfxI2f5vG6u0EqodQMjCvhbHfNyN_wmYkeT._BMvFi.zDQ8m_PqEHr8tSdEQ;
        path=/; expires=Tue, 12-Nov-24 18:19:00 GMT; domain=.api.openai.com; HttpOnly;
        Secure; SameSite=None
      - _cfuvid=jcCDyMK__Fd0V5DMeqt9yXdlKc7Hsw87a1K01pZu9l0-1731433740848-0.0.1.1-604800000;
        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      openai-organization:
      - user-tqfegqsiobpvvjmn0giaipdq
      openai-processing-ms:
      - '322'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '200000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '199978'
      x-ratelimit-reset-requests:
      - 8.64s
      x-ratelimit-reset-tokens:
      - 6ms
      x-request-id:
      - req_037288753767e763a51a04eae757ca84
    status:
      code: 200
      message: OK
- request:
    body: '{"messages": [{"role": "user", "content": "Hello, world from another agent!"}],
      "model": "gpt-4o-mini", "stream": false}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '120'
      content-type:
      - application/json
      cookie:
      - __cf_bm=l.QrRLcNZkML_KSfxjir6YCV35B8GNTitBTNh7cPGc4-1731433740-1.0.1.1-j1ejlmykyoI8yk6i6pQjtPoovGzfxI2f5vG6u0EqodQMjCvhbHfNyN_wmYkeT._BMvFi.zDQ8m_PqEHr8tSdEQ;
        _cfuvid=jcCDyMK__Fd0V5DMeqt9yXdlKc7Hsw87a1K01pZu9l0-1731433740848-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.52.1
      x-stainless-arch:
      - x64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - Linux
      x-stainless-package-version:
      - 1.52.1
      x-stainless-raw-response:
      - 'true'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.11.9
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: !!binary |
        H4sIAAAAAAAAA4xSy27bMBC86yu2PFuBZAt14UvRU5MA7aVAEKAIBJpcSUwoLkuu6jiB/z3QI5aM
        tkAvPMzsDGZ2+ZoACKPFDoRqJKvW2/TLD3+z//oiD8dfL7d339zvW125x9zX90/3mVj1Cto/ouJ3
        1ZWi1ltkQ26kVUDJ2Lvm201ebDbbIh+IljTaXlZ7TgtKW+NMus7WRZpt0/zTpG7IKIxiBz8TAIDX
        4e1zOo3PYgfZ6h1pMUZZo9idhwBEINsjQsZoIkvHYjWTihyjG6Jfo7X0Ab4bhcAEipxDxXAw3IB0
        xA0GkDU6voJrOoCSDm5gNIUjdcCk5fHz0jxg1UXZF3SdtRN+Oqe1VPtA+zjxZ7wyzsSmDCgjuT5Z
        ZPJiYE8JwMOwle6iqPCBWs8l0xO63jAvRjsx32JBfpxIJpZ2xjfTJi/dSo0sjY2LrQolVYN6Vs4n
        kJ02tCCSRec/w/zNe+xtXP0/9jOhFHpGXfqA2qjLwvNYwP6n/mvsvOMhsIjHyNiWlXE1Bh/M+E8q
        X2Z7mel8XVS5SE7JGwAAAP//AwA/cK4yNQMAAA==
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 8e185b31398a0303-GRU
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Tue, 12 Nov 2024 17:49:02 GMT
      Server:
      - cloudflare
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      openai-organization:
      - user-tqfegqsiobpvvjmn0giaipdq
      openai-processing-ms:
      - '889'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '200000'
      x-ratelimit-remaining-requests:
      - '9998'
      x-ratelimit-remaining-tokens:
      - '199975'
      x-ratelimit-reset-requests:
      - 16.489s
      x-ratelimit-reset-tokens:
      - 7ms
      x-request-id:
      - req_bde3810b36a4859688e53d1df64bdd20
    status:
      code: 200
      message: OK
version: 1
@@ -1280,10 +1280,10 @@ def test_agent_usage_metrics_are_captured_for_hierarchical_process():
     assert result.raw == "Howdy!"
 
     assert result.token_usage == UsageMetrics(
-        total_tokens=2626,
-        prompt_tokens=2482,
-        completion_tokens=144,
-        successful_requests=5,
+        total_tokens=1673,
+        prompt_tokens=1562,
+        completion_tokens=111,
+        successful_requests=3,
     )
 
30   tests/llm_test.py    Normal file
@@ -0,0 +1,30 @@
import pytest

from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.llm import LLM
from crewai.utilities.token_counter_callback import TokenCalcHandler


@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_callback_replacement():
    llm = LLM(model="gpt-4o-mini")

    calc_handler_1 = TokenCalcHandler(token_cost_process=TokenProcess())
    calc_handler_2 = TokenCalcHandler(token_cost_process=TokenProcess())

    llm.call(
        messages=[{"role": "user", "content": "Hello, world!"}],
        callbacks=[calc_handler_1],
    )
    usage_metrics_1 = calc_handler_1.token_cost_process.get_summary()

    llm.call(
        messages=[{"role": "user", "content": "Hello, world from another agent!"}],
        callbacks=[calc_handler_2],
    )
    usage_metrics_2 = calc_handler_2.token_cost_process.get_summary()

    # The first handler should not have been updated
    assert usage_metrics_1.successful_requests == 1
    assert usage_metrics_2.successful_requests == 1
    assert usage_metrics_1 == calc_handler_1.token_cost_process.get_summary()
14   uv.lock    generated
@@ -604,7 +604,7 @@ wheels = [
 
 [[package]]
 name = "crewai"
-version = "0.76.9"
+version = "0.79.4"
 source = { editable = "." }
 dependencies = [
     { name = "appdirs" },
@@ -665,8 +665,8 @@ requires-dist = [
     { name = "auth0-python", specifier = ">=4.7.1" },
     { name = "chromadb", specifier = ">=0.4.24" },
     { name = "click", specifier = ">=8.1.7" },
-    { name = "crewai-tools", specifier = ">=0.13.4" },
-    { name = "crewai-tools", marker = "extra == 'tools'", specifier = ">=0.13.4" },
+    { name = "crewai-tools", specifier = ">=0.14.0" },
+    { name = "crewai-tools", marker = "extra == 'tools'", specifier = ">=0.14.0" },
     { name = "instructor", specifier = ">=1.3.3" },
     { name = "json-repair", specifier = ">=0.25.2" },
     { name = "jsonref", specifier = ">=1.1.0" },
@@ -688,7 +688,7 @@ requires-dist = [
 [package.metadata.requires-dev]
 dev = [
     { name = "cairosvg", specifier = ">=2.7.1" },
-    { name = "crewai-tools", specifier = ">=0.13.4" },
+    { name = "crewai-tools", specifier = ">=0.14.0" },
     { name = "mkdocs", specifier = ">=1.4.3" },
     { name = "mkdocs-material", specifier = ">=9.5.7" },
     { name = "mkdocs-material-extensions", specifier = ">=1.3.1" },
@@ -707,7 +707,7 @@ dev = [
 
 [[package]]
 name = "crewai-tools"
-version = "0.13.4"
+version = "0.14.0"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "beautifulsoup4" },
@@ -725,9 +725,9 @@ dependencies = [
     { name = "requests" },
     { name = "selenium" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/64/bd/eff7b633a0b28ff4ed115adde1499e3dcc683e4f0b5c378a4c6f5c0c1bf6/crewai_tools-0.13.4.tar.gz", hash = "sha256:b6ac527633b7018471d892c21ac96bc961a86b6626d996b1ed7d53cd481d4505", size = 816588 }
+sdist = { url = "https://files.pythonhosted.org/packages/9b/6d/4fa91b481b120f83bb58f365203d8aa8564e8ced1035d79f8aedb7d71e2f/crewai_tools-0.14.0.tar.gz", hash = "sha256:510f3a194bcda4fdae4314bd775521964b5f229ddbe451e5d9e0216cae57f4e3", size = 815892 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/6c/40/93cd347d854059cf5e54a81b70f896deea7ad1f03e9c024549eb323c4da5/crewai_tools-0.13.4-py3-none-any.whl", hash = "sha256:eda78fe3c4df57676259d8dd6b2610fa31f89b90909512f15893adb57fb9e825", size = 463703 },
+    { url = "https://files.pythonhosted.org/packages/c8/ed/9f4e64e1507062957b0118085332d38b621c1000874baef2d1c4069bfd97/crewai_tools-0.14.0-py3-none-any.whl", hash = "sha256:0a804a828c29869c3af3253f4fc4c3967a3f80f06dab22e9bbe9526608a31564", size = 462980 },
 ]
 
 [[package]]