Compare commits

...

12 Commits

Author SHA1 Message Date
Brandon Hancock
940fb30e0e quick fix for mike 2025-01-27 17:37:42 -05:00
Brandon Hancock (bhancock_ai)
dea6ed7ef0 fix issue pointed out by mike (#1986)
* fix issue pointed out by mike

* clean up

* Drop logger

* drop unused imports
2025-01-27 17:35:17 -05:00
Brandon Hancock (bhancock_ai)
d3a0dad323 Bugfix/litellm plus generic exceptions (#1965)
* wip

* More clean up

* Fix error

* clean up test

* Improve chat calling messages

* crewai chat improvements

* working but need to clean up

* Clean up chat
2025-01-27 13:41:46 -08:00
devin-ai-integration[bot]
67bf4aea56 Add version check to crew_chat.py (#1966)
* Add version check to crew_chat.py with min version 0.98.0

Co-Authored-By: brandon@crewai.com <brandon@crewai.com>

* Fix import sorting in crew_chat.py

Co-Authored-By: brandon@crewai.com <brandon@crewai.com>

* Fix import sorting in crew_chat.py (attempt 3)

Co-Authored-By: brandon@crewai.com <brandon@crewai.com>

* Update error message, add version check helper, fix import sorting

Co-Authored-By: brandon@crewai.com <brandon@crewai.com>

* Fix import sorting with Ruff auto-fix

Co-Authored-By: brandon@crewai.com <brandon@crewai.com>

* Remove poetry check and import comment headers in crew_chat.py

Co-Authored-By: brandon@crewai.com <brandon@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: brandon@crewai.com <brandon@crewai.com>
2025-01-24 17:04:41 -05:00
Brandon Hancock (bhancock_ai)
8c76bad50f Fix litellm issues to be more broad (#1960)
* Fix litellm issues to be more broad

* Fix tests
2025-01-23 23:32:10 -05:00
Bobby Lindsey
e27a15023c Add SageMaker as a LLM provider (#1947)
* Add SageMaker as a LLM provider

* Removed unnecessary constants; updated docs to align with bootstrap naming convention

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-01-22 14:55:24 -05:00
Brandon Hancock (bhancock_ai)
a836f466f4 Updated calls and added tests to verify (#1953)
* Updated calls and added tests to verify

* Drop unused import
2025-01-22 14:36:15 -05:00
Brandon Hancock (bhancock_ai)
67f0de1f90 Bugfix/kickoff hangs when llm call fails (#1943)
* Wip to address https://github.com/crewAIInc/crewAI/issues/1934

* implement proper try / except

* clean up PR

* add tests

* Fix tests and code that was broken

* mnore clean up

* Fixing tests

* fix stop type errors]

* more fixes
2025-01-22 14:24:00 -05:00
Tony Kipkemboi
c642ebf97e docs: improve formatting and clarity in CLI and Composio Tool docs (#1946)
* docs: improve formatting and clarity in CLI and Composio Tool docs

- Add Terminal label to shell code blocks in CLI docs
- Update Composio Tool title and fix tip formatting

* docs: improve installation guide with virtual environment details

- Update Python version requirements and commands
- Add detailed virtual environment setup instructions
- Clarify project-specific environment activation steps
- Streamline additional tools installation with UV

* docs: simplify installation guide

- Remove redundant virtual environment instructions
- Simplify project creation steps
- Update UV package manager description
2025-01-22 10:30:16 -05:00
Brandon Hancock (bhancock_ai)
a21e310d78 add docs for crewai chat (#1936)
* add docs for crewai chat

* add version number
2025-01-21 11:10:25 -05:00
Abhishek Patil
aba68da542 feat: add Composio docs (#1904)
* feat: update Composio tool docs

* Update composiotool.mdx

* fix: minor changes

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2025-01-21 11:03:37 -05:00
Sanjeed
e254f11933 Fix wrong llm value in example (#1929)
Original example had `mixtal-llm` which would result in an error.
Replaced with gpt-4o according to https://docs.crewai.com/concepts/llms
2025-01-21 02:55:27 -03:00
26 changed files with 1657 additions and 229 deletions

View File

@@ -12,7 +12,7 @@ The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you
To use the CrewAI CLI, make sure you have CrewAI installed:
```shell
```shell Terminal
pip install crewai
```
@@ -20,7 +20,7 @@ pip install crewai
The basic structure of a CrewAI CLI command is:
```shell
```shell Terminal
crewai [COMMAND] [OPTIONS] [ARGUMENTS]
```
@@ -30,7 +30,7 @@ crewai [COMMAND] [OPTIONS] [ARGUMENTS]
Create a new crew or flow.
```shell
```shell Terminal
crewai create [OPTIONS] TYPE NAME
```
@@ -38,7 +38,7 @@ crewai create [OPTIONS] TYPE NAME
- `NAME`: Name of the crew or flow
Example:
```shell
```shell Terminal
crewai create crew my_new_crew
crewai create flow my_new_flow
```
@@ -47,14 +47,14 @@ crewai create flow my_new_flow
Show the installed version of CrewAI.
```shell
```shell Terminal
crewai version [OPTIONS]
```
- `--tools`: (Optional) Show the installed version of CrewAI tools
Example:
```shell
```shell Terminal
crewai version
crewai version --tools
```
@@ -63,7 +63,7 @@ crewai version --tools
Train the crew for a specified number of iterations.
```shell
```shell Terminal
crewai train [OPTIONS]
```
@@ -71,7 +71,7 @@ crewai train [OPTIONS]
- `-f, --filename TEXT`: Path to a custom file for training (default: "trained_agents_data.pkl")
Example:
```shell
```shell Terminal
crewai train -n 10 -f my_training_data.pkl
```
@@ -79,14 +79,14 @@ crewai train -n 10 -f my_training_data.pkl
Replay the crew execution from a specific task.
```shell
```shell Terminal
crewai replay [OPTIONS]
```
- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks
Example:
```shell
```shell Terminal
crewai replay -t task_123456
```
@@ -94,7 +94,7 @@ crewai replay -t task_123456
Retrieve your latest crew.kickoff() task outputs.
```shell
```shell Terminal
crewai log-tasks-outputs
```
@@ -102,7 +102,7 @@ crewai log-tasks-outputs
Reset the crew memories (long, short, entity, latest_crew_kickoff_outputs).
```shell
```shell Terminal
crewai reset-memories [OPTIONS]
```
@@ -113,7 +113,7 @@ crewai reset-memories [OPTIONS]
- `-a, --all`: Reset ALL memories
Example:
```shell
```shell Terminal
crewai reset-memories --long --short
crewai reset-memories --all
```
@@ -122,7 +122,7 @@ crewai reset-memories --all
Test the crew and evaluate the results.
```shell
```shell Terminal
crewai test [OPTIONS]
```
@@ -130,7 +130,7 @@ crewai test [OPTIONS]
- `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")
Example:
```shell
```shell Terminal
crewai test -n 5 -m gpt-3.5-turbo
```
@@ -138,7 +138,7 @@ crewai test -n 5 -m gpt-3.5-turbo
Run the crew.
```shell
```shell Terminal
crewai run
```
<Note>
@@ -147,7 +147,36 @@ Some commands may require additional configuration or setup within your project
</Note>
### 9. API Keys
### 9. Chat
Starting in version `0.98.0`, when you run the `crewai chat` command, you start an interactive session with your crew. The AI assistant will guide you by asking for necessary inputs to execute the crew. Once all inputs are provided, the crew will execute its tasks.
After receiving the results, you can continue interacting with the assistant for further instructions or questions.
```shell Terminal
crewai chat
```
<Note>
Ensure you execute these commands from your CrewAI project's root directory.
</Note>
<Note>
IMPORTANT: Set the `chat_llm` property in your `crew.py` file to enable this command.
```python
@crew
def crew(self) -> Crew:
return Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
verbose=True,
chat_llm="gpt-4o", # LLM for chat orchestration
)
```
</Note>
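If you're not sure whether your installed CLI is new enough for chat, you can check with the `version` command described above (chat requires `0.98.0` or later):
```shell Terminal
crewai version
```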
### 10. API Keys
When you run the `crewai create crew` command, the CLI will first show you the top 5 most common LLM providers and ask you to select one.

View File

@@ -243,6 +243,9 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
# llm: bedrock/amazon.titan-text-express-v1
# llm: bedrock/meta.llama2-70b-chat-v1
# Amazon SageMaker Models - Enterprise-grade
# llm: sagemaker/<my-endpoint>
# Mistral Models - Open source alternative
# llm: mistral/mistral-large-latest
# llm: mistral/mistral-medium-latest
@@ -506,6 +509,21 @@ Learn how to get the most out of your LLM configuration:
)
```
</Accordion>
<Accordion title="Amazon SageMaker">
```python Code
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=<your-region>
```
Example usage:
```python Code
llm = LLM(
model="sagemaker/<my-endpoint>"
)
```
</Accordion>
<Accordion title="Mistral">
```python Code

View File

@@ -15,10 +15,48 @@ icon: wrench
If you need to update Python, visit [python.org/downloads](https://python.org/downloads)
</Note>
# Setting Up Your Environment
Before installing CrewAI, it's recommended to set up a virtual environment. This helps isolate your project dependencies and avoid conflicts.
<Steps>
<Step title="Create a Virtual Environment">
Choose your preferred method to create a virtual environment:
**Using venv (Python's built-in tool):**
```shell Terminal
python3 -m venv .venv
```
**Using conda:**
```shell Terminal
conda create -n crewai-env python=3.12
```
</Step>
<Step title="Activate the Virtual Environment">
Activate your virtual environment based on your platform:
**On macOS/Linux (venv):**
```shell Terminal
source .venv/bin/activate
```
**On Windows (venv):**
```shell Terminal
.venv\Scripts\activate
```
**Using conda (all platforms):**
```shell Terminal
conda activate crewai-env
```
</Step>
</Steps>
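To confirm the environment is active before installing anything, you can check which Python interpreter is in use (a quick sanity check, not part of the original steps):
```shell Terminal
python --version
which python  # should resolve inside .venv (use "where python" on Windows)
```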
# Installing CrewAI
CrewAI is a flexible and powerful AI framework that enables you to create and manage AI agents, tools, and tasks efficiently.
Let's get you set up! 🚀
Now let's get you set up! 🚀
<Steps>
<Step title="Install CrewAI">
@@ -72,9 +110,9 @@ Let's get you set up! 🚀
# Creating a New Project
<Info>
<Tip>
We recommend using the YAML Template scaffolding for a structured approach to defining agents and tasks.
</Info>
</Tip>
<Steps>
<Step title="Generate Project Structure">
@@ -104,7 +142,18 @@ Let's get you set up! 🚀
└── tasks.yaml
```
</Frame>
</Step>
</Step>
<Step title="Install Additional Tools">
You can install additional tools using UV:
```shell Terminal
uv add <tool-name>
```
<Tip>
UV is our preferred package manager as it's significantly faster than pip and provides better dependency resolution.
</Tip>
</Step>
<Step title="Customize Your Project">
Your project will contain these essential files:

View File

@@ -278,7 +278,7 @@ email_summarizer:
Summarize emails into a concise and clear summary
backstory: >
You will create a 5 bullet point summary of the report
llm: mixtal_llm
llm: openai/gpt-4o
```
<Tip>

View File

@@ -1,78 +1,118 @@
---
title: Composio Tool
description: The `ComposioTool` is a wrapper around the composio set of tools and gives your agent access to a wide variety of tools from the Composio SDK.
description: Composio provides 250+ production-ready tools for AI agents with flexible authentication management.
icon: gear-code
---
# `ComposioTool`
# `ComposioToolSet`
## Description
Composio is an integration platform that allows you to connect your AI agents to 250+ tools. Key features include:
This tools is a wrapper around the composio set of tools and gives your agent access to a wide variety of tools from the Composio SDK.
- **Enterprise-Grade Authentication**: Built-in support for OAuth, API Keys, JWT with automatic token refresh
- **Full Observability**: Detailed tool usage logs, execution timestamps, and more
## Installation
To incorporate this tool into your project, follow the installation instructions below:
To incorporate Composio tools into your project, follow the instructions below:
```shell
pip install composio-core
pip install 'crewai[tools]'
pip install composio-crewai
pip install crewai
```
after the installation is complete, either run `composio login` or export your composio API key as `COMPOSIO_API_KEY`.
After the installation is complete, either run `composio login` or export your Composio API key as `COMPOSIO_API_KEY`. You can get your Composio API key [here](https://app.composio.dev).
## Example
The following example demonstrates how to initialize the toolset and execute a GitHub action:
1. Initialize Composio tools
1. Initialize Composio toolset
```python Code
from composio import App
from crewai_tools import ComposioTool
from crewai import Agent, Task
from composio_crewai import ComposioToolSet, App, Action
from crewai import Agent, Task, Crew
tools = [ComposioTool.from_action(action=Action.GITHUB_ACTIVITY_STAR_REPO_FOR_AUTHENTICATED_USER)]
toolset = ComposioToolSet()
```
If you don't know what action you want to use, use `from_app` and `tags` filter to get relevant actions
2. Connect your GitHub account
<CodeGroup>
```shell CLI
composio add github
```
```python Code
tools = ComposioTool.from_app(App.GITHUB, tags=["important"])
request = toolset.initiate_connection(app=App.GITHUB)
print(f"Open this URL to authenticate: {request.redirectUrl}")
```
</CodeGroup>
or use `use_case` to search relevant actions
3. Get Tools
- Retrieving all the tools from an app (not recommended for production):
```python Code
tools = ComposioTool.from_app(App.GITHUB, use_case="Star a github repository")
tools = toolset.get_tools(apps=[App.GITHUB])
```
2. Define agent
- Filtering tools based on tags:
```python Code
tag = "users"
filtered_action_enums = toolset.find_actions_by_tags(
App.GITHUB,
tags=[tag],
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
- Filtering tools based on use case:
```python Code
use_case = "Star a repository on GitHub"
filtered_action_enums = toolset.find_actions_by_use_case(
App.GITHUB, use_case=use_case, advanced=False
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
<Tip>Set `advanced` to True to get actions for complex use cases</Tip>
- Using specific tools:
In this demo, we will use the `GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER` action from the GitHub app.
```python Code
tools = toolset.get_tools(
actions=[Action.GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER]
)
```
Learn more about filtering actions [here](https://docs.composio.dev/patterns/tools/use-tools/use-specific-actions)
4. Define agent
```python Code
crewai_agent = Agent(
role="Github Agent",
goal="You take action on Github using Github APIs",
backstory=(
"You are AI agent that is responsible for taking actions on Github "
"on users behalf. You need to take action on Github using Github APIs"
),
role="GitHub Agent",
goal="You take action on GitHub using GitHub APIs",
backstory="You are AI agent that is responsible for taking actions on GitHub on behalf of users using GitHub APIs",
verbose=True,
tools=tools,
llm= # pass an llm
)
```
3. Execute task
5. Execute task
```python Code
task = Task(
description="Star a repo ComposioHQ/composio on GitHub",
description="Star a repo composiohq/composio on GitHub",
agent=crewai_agent,
expected_output="if the star happened",
expected_output="Status of the operation",
)
task.execute()
crew = Crew(agents=[crewai_agent], tasks=[task])
crew.kickoff()
```
* More detailed list of tools can be found [here](https://app.composio.dev)
* More detailed list of tools can be found [here](https://app.composio.dev)
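Putting the steps together, here is a minimal end-to-end sketch. It assumes `COMPOSIO_API_KEY` and an OpenAI key are already exported and that the GitHub connection from step 2 is active; the model choice is illustrative:
```python Code
from composio_crewai import Action, ComposioToolSet
from crewai import Agent, Crew, Task

# Reads COMPOSIO_API_KEY from the environment
toolset = ComposioToolSet()
tools = toolset.get_tools(
    actions=[Action.GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER]
)

agent = Agent(
    role="GitHub Agent",
    goal="You take action on GitHub using GitHub APIs",
    backstory="You are an AI agent that takes actions on GitHub on behalf of users",
    tools=tools,
    llm="gpt-4o",  # any chat-capable model string accepted by crewai works
    verbose=True,
)

task = Task(
    description="Star a repo composiohq/composio on GitHub",
    expected_output="Status of the operation",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
crew.kickoff()
```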

View File

@@ -36,6 +36,7 @@ dependencies = [
"tomli-w>=1.1.0",
"tomli>=2.0.2",
"blinker>=1.9.0",
"json5>=0.10.0",
]
[project.urls]

View File

@@ -1,4 +1,3 @@
import os
import shutil
import subprocess
from typing import Any, Dict, List, Literal, Optional, Union
@@ -8,7 +7,6 @@ from pydantic import Field, InstanceOf, PrivateAttr, model_validator
from crewai.agents import CacheHandler
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.cli.constants import ENV_VARS, LITELLM_PARAMS
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.knowledge.utils.knowledge_utils import extract_knowledge_context
@@ -261,6 +259,9 @@ class Agent(BaseAgent):
}
)["output"]
except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
raise e
self._times_executed += 1
if self._times_executed > self.max_retry_limit:
raise e
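The key test here is the exception's originating module: errors raised by litellm (authentication, rate limits, provider failures) are deterministic and should surface immediately instead of burning retries. A standalone sketch of the same pattern (the function names are placeholders, not the real agent internals):
```python
def execute_with_retries(call_llm, max_retry_limit: int = 2):
    """Retry transient failures, but let litellm errors propagate immediately."""
    attempts = 0
    while True:
        try:
            return call_llm()
        except Exception as e:
            # litellm exceptions come from modules like "litellm.exceptions";
            # retrying an auth or provider error would just fail again.
            if e.__class__.__module__.startswith("litellm"):
                raise
            attempts += 1
            if attempts > max_retry_limit:
                raise
```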

View File

@@ -13,6 +13,7 @@ from crewai.agents.parser import (
OutputParserException,
)
from crewai.agents.tools_handler import ToolsHandler
from crewai.llm import LLM
from crewai.tools.base_tool import BaseTool
from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
from crewai.utilities import I18N, Printer
@@ -54,7 +55,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
callbacks: List[Any] = [],
):
self._i18n: I18N = I18N()
self.llm = llm
self.llm: LLM = llm
self.task = task
self.agent = agent
self.crew = crew
@@ -80,10 +81,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self.tool_name_to_tool_map: Dict[str, BaseTool] = {
tool.name: tool for tool in self.tools
}
if self.llm.stop:
self.llm.stop = list(set(self.llm.stop + self.stop))
else:
self.llm.stop = self.stop
self.stop = stop_words
self.llm.stop = list(set(self.llm.stop + self.stop))
def invoke(self, inputs: Dict[str, str]) -> Dict[str, Any]:
if "system" in self.prompt:
@@ -98,7 +97,16 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self._show_start_logs()
self.ask_for_human_input = bool(inputs.get("ask_for_human_input", False))
formatted_answer = self._invoke_loop()
try:
formatted_answer = self._invoke_loop()
except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
raise e
else:
self._handle_unknown_error(e)
raise e
if self.ask_for_human_input:
formatted_answer = self._handle_human_feedback(formatted_answer)
@@ -124,7 +132,6 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self._enforce_rpm_limit()
answer = self._get_llm_response()
formatted_answer = self._process_llm_response(answer)
if isinstance(formatted_answer, AgentAction):
@@ -142,13 +149,32 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
formatted_answer = self._handle_output_parser_exception(e)
except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
raise e
if self._is_context_length_exceeded(e):
self._handle_context_length()
continue
else:
self._handle_unknown_error(e)
raise e
finally:
self.iterations += 1
self._show_logs(formatted_answer)
return formatted_answer
def _handle_unknown_error(self, exception: Exception) -> None:
"""Handle unknown errors by informing the user."""
self._printer.print(
content="An unknown error occurred. Please check the details below.",
color="red",
)
self._printer.print(
content=f"Error details: {exception}",
color="red",
)
def _has_reached_max_iterations(self) -> bool:
"""Check if the maximum number of iterations has been reached."""
return self.iterations >= self.max_iter
@@ -160,10 +186,17 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
def _get_llm_response(self) -> str:
"""Call the LLM and return the response, handling any invalid responses."""
answer = self.llm.call(
self.messages,
callbacks=self.callbacks,
)
try:
answer = self.llm.call(
self.messages,
callbacks=self.callbacks,
)
except Exception as e:
self._printer.print(
content=f"Error during LLM call: {e}",
color="red",
)
raise e
if not answer:
self._printer.print(
@@ -184,7 +217,6 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
if FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE in e.error:
answer = answer.split("Observation:")[0].strip()
self.iterations += 1
return self._format_answer(answer)
def _handle_agent_action(

View File

@@ -350,7 +350,10 @@ def chat():
Start a conversation with the Crew, collecting user-supplied inputs,
and using the Chat LLM to generate responses.
"""
click.echo("Starting a conversation with the Crew")
click.secho(
"\nStarting a conversation with the Crew\n" "Type 'exit' or Ctrl+C to quit.\n",
)
run_chat()

View File

@@ -1,17 +1,52 @@
import json
import platform
import re
import sys
import threading
import time
from pathlib import Path
from typing import Any, Dict, List, Optional, Set, Tuple
import click
import tomli
from packaging import version
from crewai.cli.utils import read_toml
from crewai.cli.version import get_crewai_version
from crewai.crew import Crew
from crewai.llm import LLM
from crewai.types.crew_chat import ChatInputField, ChatInputs
from crewai.utilities.llm_utils import create_llm
MIN_REQUIRED_VERSION = "0.98.0"
def check_conversational_crews_version(
crewai_version: str, pyproject_data: dict
) -> bool:
"""
Check if the installed crewAI version supports conversational crews.
Args:
crewai_version: The current version of crewAI.
pyproject_data: Dictionary containing pyproject.toml data.
Returns:
bool: True if version check passes, False otherwise.
"""
try:
if version.parse(crewai_version) < version.parse(MIN_REQUIRED_VERSION):
click.secho(
"You are using an older version of crewAI that doesn't support conversational crews. "
"Run 'uv upgrade crewai' to get the latest version.",
fg="red",
)
return False
except version.InvalidVersion:
click.secho("Invalid crewAI version format detected.", fg="red")
return False
return True
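`version.parse` from `packaging` compares release segments numerically, which is why it is used here instead of a plain string comparison; a couple of illustrative checks:
```python
from packaging import version

assert version.parse("0.97.2") < version.parse("0.98.0")
assert version.parse("0.100.0") > version.parse("0.98.0")  # would fail as a string compare

try:
    version.parse("not-a-version")
except version.InvalidVersion:
    print("malformed version string")  # the branch the check above reports in red
```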
def run_chat():
"""
@@ -19,20 +54,47 @@ def run_chat():
Incorporates crew_name, crew_description, and input fields to build a tool schema.
Exits if crew_name or crew_description are missing.
"""
crewai_version = get_crewai_version()
pyproject_data = read_toml()
if not check_conversational_crews_version(crewai_version, pyproject_data):
return
crew, crew_name = load_crew_and_name()
chat_llm = initialize_chat_llm(crew)
if not chat_llm:
return
crew_chat_inputs = generate_crew_chat_inputs(crew, crew_name, chat_llm)
crew_tool_schema = generate_crew_tool_schema(crew_chat_inputs)
system_message = build_system_message(crew_chat_inputs)
# Call the LLM to generate the introductory message
introductory_message = chat_llm.call(
messages=[{"role": "system", "content": system_message}]
# Indicate that the crew is being analyzed
click.secho(
"\nAnalyzing crew and required inputs - this may take 3 to 30 seconds "
"depending on the complexity of your crew.",
fg="white",
)
click.secho(f"\nAssistant: {introductory_message}\n", fg="green")
# Start loading indicator
loading_complete = threading.Event()
loading_thread = threading.Thread(target=show_loading, args=(loading_complete,))
loading_thread.start()
try:
crew_chat_inputs = generate_crew_chat_inputs(crew, crew_name, chat_llm)
crew_tool_schema = generate_crew_tool_schema(crew_chat_inputs)
system_message = build_system_message(crew_chat_inputs)
# Call the LLM to generate the introductory message
introductory_message = chat_llm.call(
messages=[{"role": "system", "content": system_message}]
)
finally:
# Stop loading indicator
loading_complete.set()
loading_thread.join()
# Indicate that the analysis is complete
click.secho("\nFinished analyzing crew.\n", fg="white")
click.secho(f"Assistant: {introductory_message}\n", fg="green")
messages = [
{"role": "system", "content": system_message},
@@ -43,15 +105,17 @@ def run_chat():
crew_chat_inputs.crew_name: create_tool_function(crew, messages),
}
click.secho(
"\nEntering an interactive chat loop with function-calling.\n"
"Type 'exit' or Ctrl+C to quit.\n",
fg="cyan",
)
chat_loop(chat_llm, messages, crew_tool_schema, available_functions)
def show_loading(event: threading.Event):
"""Display animated loading dots while processing."""
while not event.is_set():
print(".", end="", flush=True)
time.sleep(1)
print()
def initialize_chat_llm(crew: Crew) -> Optional[LLM]:
"""Initializes the chat LLM and handles exceptions."""
try:
@@ -85,7 +149,7 @@ def build_system_message(crew_chat_inputs: ChatInputs) -> str:
"Please keep your responses concise and friendly. "
"If a user asks a question outside the crew's scope, provide a brief answer and remind them of the crew's purpose. "
"After calling the tool, be prepared to take user feedback and make adjustments as needed. "
"If you are ever unsure about a user's request or need clarification, ask the user for more information."
"If you are ever unsure about a user's request or need clarification, ask the user for more information. "
"Before doing anything else, introduce yourself with a friendly message like: 'Hey! I'm here to help you with [crew's purpose]. Could you please provide me with [inputs] so we can get started?' "
"For example: 'Hey! I'm here to help you with uncovering and reporting cutting-edge developments through thorough research and detailed analysis. Could you please provide me with a topic you're interested in? This will help us generate a comprehensive research report and detailed analysis.'"
f"\nCrew Name: {crew_chat_inputs.crew_name}"
@@ -102,25 +166,33 @@ def create_tool_function(crew: Crew, messages: List[Dict[str, str]]) -> Any:
return run_crew_tool_with_messages
def flush_input():
"""Flush any pending input from the user."""
if platform.system() == "Windows":
# Windows platform
import msvcrt
while msvcrt.kbhit():
msvcrt.getch()
else:
# Unix-like platforms (Linux, macOS)
import termios
termios.tcflush(sys.stdin, termios.TCIFLUSH)
def chat_loop(chat_llm, messages, crew_tool_schema, available_functions):
"""Main chat loop for interacting with the user."""
while True:
try:
user_input = click.prompt("You", type=str)
if user_input.strip().lower() in ["exit", "quit"]:
click.echo("Exiting chat. Goodbye!")
break
# Flush any pending input before accepting new input
flush_input()
messages.append({"role": "user", "content": user_input})
final_response = chat_llm.call(
messages=messages,
tools=[crew_tool_schema],
available_functions=available_functions,
user_input = get_user_input()
handle_user_input(
user_input, chat_llm, messages, crew_tool_schema, available_functions
)
messages.append({"role": "assistant", "content": final_response})
click.secho(f"\nAssistant: {final_response}\n", fg="green")
except KeyboardInterrupt:
click.echo("\nExiting chat. Goodbye!")
break
@@ -129,6 +201,55 @@ def chat_loop(chat_llm, messages, crew_tool_schema, available_functions):
break
def get_user_input() -> str:
"""Collect multi-line user input with exit handling."""
click.secho(
"\nYou (type your message below. Press 'Enter' twice when you're done):",
fg="blue",
)
user_input_lines = []
while True:
line = input()
if line.strip().lower() == "exit":
return "exit"
if line == "":
break
user_input_lines.append(line)
return "\n".join(user_input_lines)
def handle_user_input(
user_input: str,
chat_llm: LLM,
messages: List[Dict[str, str]],
crew_tool_schema: Dict[str, Any],
available_functions: Dict[str, Any],
) -> None:
if user_input.strip().lower() == "exit":
click.echo("Exiting chat. Goodbye!")
return
if not user_input.strip():
click.echo("Empty message. Please provide input or type 'exit' to quit.")
return
messages.append({"role": "user", "content": user_input})
# Indicate that assistant is processing
click.echo()
click.secho("Assistant is processing your input. Please wait...", fg="green")
# Process assistant's response
final_response = chat_llm.call(
messages=messages,
tools=[crew_tool_schema],
available_functions=available_functions,
)
messages.append({"role": "assistant", "content": final_response})
click.secho(f"\nAssistant: {final_response}\n", fg="green")
def generate_crew_tool_schema(crew_inputs: ChatInputs) -> dict:
"""
Dynamically build a LiteLLM 'function' schema for the given crew.
@@ -323,10 +444,10 @@ def generate_input_description_with_ai(input_name: str, crew: Crew, chat_llm) ->
):
# Replace placeholders with input names
task_description = placeholder_pattern.sub(
lambda m: m.group(1), task.description
lambda m: m.group(1), task.description or ""
)
expected_output = placeholder_pattern.sub(
lambda m: m.group(1), task.expected_output
lambda m: m.group(1), task.expected_output or ""
)
context_texts.append(f"Task Description: {task_description}")
context_texts.append(f"Expected Output: {expected_output}")
@@ -337,10 +458,10 @@ def generate_input_description_with_ai(input_name: str, crew: Crew, chat_llm) ->
or f"{{{input_name}}}" in agent.backstory
):
# Replace placeholders with input names
agent_role = placeholder_pattern.sub(lambda m: m.group(1), agent.role)
agent_goal = placeholder_pattern.sub(lambda m: m.group(1), agent.goal)
agent_role = placeholder_pattern.sub(lambda m: m.group(1), agent.role or "")
agent_goal = placeholder_pattern.sub(lambda m: m.group(1), agent.goal or "")
agent_backstory = placeholder_pattern.sub(
lambda m: m.group(1), agent.backstory
lambda m: m.group(1), agent.backstory or ""
)
context_texts.append(f"Agent Role: {agent_role}")
context_texts.append(f"Agent Goal: {agent_goal}")
@@ -381,18 +502,20 @@ def generate_crew_description_with_ai(crew: Crew, chat_llm) -> str:
for task in crew.tasks:
# Replace placeholders with input names
task_description = placeholder_pattern.sub(
lambda m: m.group(1), task.description
lambda m: m.group(1), task.description or ""
)
expected_output = placeholder_pattern.sub(
lambda m: m.group(1), task.expected_output
lambda m: m.group(1), task.expected_output or ""
)
context_texts.append(f"Task Description: {task_description}")
context_texts.append(f"Expected Output: {expected_output}")
for agent in crew.agents:
# Replace placeholders with input names
agent_role = placeholder_pattern.sub(lambda m: m.group(1), agent.role)
agent_goal = placeholder_pattern.sub(lambda m: m.group(1), agent.goal)
agent_backstory = placeholder_pattern.sub(lambda m: m.group(1), agent.backstory)
agent_role = placeholder_pattern.sub(lambda m: m.group(1), agent.role or "")
agent_goal = placeholder_pattern.sub(lambda m: m.group(1), agent.goal or "")
agent_backstory = placeholder_pattern.sub(
lambda m: m.group(1), agent.backstory or ""
)
context_texts.append(f"Agent Role: {agent_role}")
context_texts.append(f"Agent Goal: {agent_goal}")
context_texts.append(f"Agent Backstory: {agent_backstory}")

View File

@@ -1,2 +1,3 @@
.env
__pycache__/
.DS_Store

View File

@@ -1,3 +1,4 @@
.env
__pycache__/
lib/
.DS_Store

View File

@@ -37,7 +37,6 @@ from crewai.tasks.task_output import TaskOutput
from crewai.telemetry import Telemetry
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.tools.base_tool import Tool
from crewai.types.crew_chat import ChatInputs
from crewai.types.usage_metrics import UsageMetrics
from crewai.utilities import I18N, FileHandler, Logger, RPMController
from crewai.utilities.constants import TRAINING_DATA_FILE
@@ -84,6 +83,7 @@ class Crew(BaseModel):
step_callback: Callback to be executed after each step of every agent's execution.
share_crew: Whether you want to share the complete crew information and execution with crewAI to make the library better, and allow us to train models.
planning: Plan the crew execution and add the plan to the crew.
chat_llm: The language model used for orchestrating chat interactions with the crew.
"""
__hash__ = object.__hash__ # type: ignore

View File

@@ -142,7 +142,6 @@ class LLM:
self.temperature = temperature
self.top_p = top_p
self.n = n
self.stop = stop
self.max_completion_tokens = max_completion_tokens
self.max_tokens = max_tokens
self.presence_penalty = presence_penalty
@@ -160,37 +159,63 @@ class LLM:
litellm.drop_params = True
# Normalize self.stop to always be a List[str]
if stop is None:
self.stop: List[str] = []
elif isinstance(stop, str):
self.stop = [stop]
else:
self.stop = stop
self.set_callbacks(callbacks)
self.set_env_callbacks()
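Because `stop` is normalized in the constructor, every consumer (including the executor's stop-word merge shown earlier) can assume `llm.stop` is a `List[str]`. A quick illustration (no API call is made at construction time):
```python
from crewai.llm import LLM

assert LLM(model="gpt-4o").stop == []  # None becomes an empty list
assert LLM(model="gpt-4o", stop="\nObservation:").stop == ["\nObservation:"]
assert LLM(model="gpt-4o", stop=["a", "b"]).stop == ["a", "b"]
```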
def call(
self,
messages: List[Dict[str, str]],
messages: Union[str, List[Dict[str, str]]],
tools: Optional[List[dict]] = None,
callbacks: Optional[List[Any]] = None,
available_functions: Optional[Dict[str, Any]] = None,
) -> str:
"""
High-level call method that:
1) Calls litellm.completion
2) Checks for function/tool calls
3) If a tool call is found:
a) executes the function
b) returns the result
4) If no tool call, returns the text response
High-level llm call method that:
1) Accepts either a string or a list of messages
2) Converts string input to the required message format
3) Calls litellm.completion
4) Handles function/tool calls if any
5) Returns the final text response or tool result
:param messages: The conversation messages
:param tools: Optional list of function schemas for function calling
:param callbacks: Optional list of callbacks
:param available_functions: A dictionary mapping function_name -> actual Python function
:return: Final text response from the LLM or the tool result
Parameters:
- messages (Union[str, List[Dict[str, str]]]): The input messages for the LLM.
- If a string is provided, it will be converted into a message list with a single entry.
- If a list of dictionaries is provided, each dictionary should have 'role' and 'content' keys.
- tools (Optional[List[dict]]): A list of tool schemas for function calling.
- callbacks (Optional[List[Any]]): A list of callback functions to be executed.
- available_functions (Optional[Dict[str, Any]]): A dictionary mapping function names to actual Python functions.
Returns:
- str: The final text response from the LLM or the result of a tool function call.
Examples:
---------
# Example 1: Using a string input
response = llm.call("Return the name of a random city in the world.")
print(response)
# Example 2: Using a list of messages
messages = [{"role": "user", "content": "What is the capital of France?"}]
response = llm.call(messages)
print(response)
"""
if isinstance(messages, str):
messages = [{"role": "user", "content": messages}]
with suppress_warnings():
if callbacks and len(callbacks) > 0:
self.set_callbacks(callbacks)
try:
# --- 1) Make the completion call
# --- 1) Prepare the parameters for the completion call
params = {
"model": self.model,
"messages": messages,
@@ -211,19 +236,21 @@ class LLM:
"api_version": self.api_version,
"api_key": self.api_key,
"stream": False,
"tools": tools, # pass the tool schema
"tools": tools,
}
# Remove None values from params
params = {k: v for k, v in params.items() if v is not None}
# --- 2) Make the completion call
response = litellm.completion(**params)
response_message = cast(Choices, cast(ModelResponse, response).choices)[
0
].message
text_response = response_message.content or ""
tool_calls = getattr(response_message, "tool_calls", [])
# Ensure callbacks get the full response object with usage info
# --- 3) Handle callbacks with usage info
if callbacks and len(callbacks) > 0:
for callback in callbacks:
if hasattr(callback, "log_success_event"):
@@ -236,11 +263,11 @@ class LLM:
end_time=0,
)
# --- 2) If no tool calls, return the text response
# --- 4) If no tool calls, return the text response
if not tool_calls or not available_functions:
return text_response
# --- 3) Handle the tool call
# --- 5) Handle the tool call
tool_call = tool_calls[0]
function_name = tool_call.function.name
@@ -255,7 +282,6 @@ class LLM:
try:
# Call the actual tool function
result = fn(**function_args)
return result
except Exception as e:

View File

@@ -1,12 +1,13 @@
import ast
import datetime
import json
import re
import time
from difflib import SequenceMatcher
from json import JSONDecodeError
from textwrap import dedent
from typing import Any, Dict, List, Union
from typing import Any, Dict, List, Optional, Union
import json5
from json_repair import repair_json
import crewai.utilities.events as events
@@ -407,28 +408,55 @@ class ToolUsage:
)
return self._tool_calling(tool_string)
def _validate_tool_input(self, tool_input: str) -> Dict[str, Any]:
def _validate_tool_input(self, tool_input: Optional[str]) -> Dict[str, Any]:
if tool_input is None:
return {}
if not isinstance(tool_input, str) or not tool_input.strip():
raise Exception(
"Tool input must be a valid dictionary in JSON or Python literal format"
)
# Attempt 1: Parse as JSON
try:
# Replace Python literals with JSON equivalents
replacements = {
r"'": '"',
r"None": "null",
r"True": "true",
r"False": "false",
}
for pattern, replacement in replacements.items():
tool_input = re.sub(pattern, replacement, tool_input)
arguments = json.loads(tool_input)
except json.JSONDecodeError:
# Attempt to repair JSON string
repaired_input = repair_json(tool_input)
try:
arguments = json.loads(repaired_input)
except json.JSONDecodeError as e:
raise Exception(f"Invalid tool input JSON: {e}")
if isinstance(arguments, dict):
return arguments
except (JSONDecodeError, TypeError):
pass # Continue to the next parsing attempt
return arguments
# Attempt 2: Parse as Python literal
try:
arguments = ast.literal_eval(tool_input)
if isinstance(arguments, dict):
return arguments
except (ValueError, SyntaxError):
pass # Continue to the next parsing attempt
# Attempt 3: Parse as JSON5
try:
arguments = json5.loads(tool_input)
if isinstance(arguments, dict):
return arguments
except (JSONDecodeError, ValueError, TypeError):
pass # Continue to the next parsing attempt
# Attempt 4: Repair JSON
try:
repaired_input = repair_json(tool_input)
self._printer.print(
content=f"Repaired JSON: {repaired_input}", color="blue"
)
arguments = json.loads(repaired_input)
if isinstance(arguments, dict):
return arguments
except Exception as e:
self._printer.print(content=f"Failed to repair JSON: {e}", color="red")
# If all parsing attempts fail, raise an error
raise Exception(
"Tool input must be a valid dictionary in JSON or Python literal format"
)
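The four attempts run from the strictest to the most forgiving parser. Roughly what each stage accepts (illustrative inputs; `repair_json`'s exact output depends on the library):
```python
import ast
import json

import json5
from json_repair import repair_json

json.loads('{"city": "Paris"}')                    # 1) strict JSON
ast.literal_eval("{'city': 'Paris', 'ok': True}")  # 2) Python literals: single quotes, True/None
json5.loads("{city: 'Paris',}")                    # 3) JSON5: unquoted keys, trailing commas
json.loads(repair_json('{"city": "Paris"'))        # 4) repair, e.g. a missing closing brace
```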
def on_tool_error(self, tool: Any, tool_calling: ToolCalling, e: Exception) -> None:
event_data = self._prepare_event_data(tool, tool_calling)

View File

@@ -24,12 +24,10 @@ def create_llm(
# 1) If llm_value is already an LLM object, return it directly
if isinstance(llm_value, LLM):
print("LLM value is already an LLM object")
return llm_value
# 2) If llm_value is a string (model name)
if isinstance(llm_value, str):
print("LLM value is a string")
try:
created_llm = LLM(model=llm_value)
return created_llm
@@ -39,12 +37,10 @@ def create_llm(
# 3) If llm_value is None, parse environment variables or use default
if llm_value is None:
print("LLM value is None")
return _llm_via_environment_or_fallback()
# 4) Otherwise, attempt to extract relevant attributes from an unknown object
try:
print("LLM value is an unknown object")
# Extract attributes with explicit types
model = (
getattr(llm_value, "model_name", None)

View File

@@ -16,7 +16,7 @@ from crewai.tools import tool
from crewai.tools.tool_calling import InstructorToolCalling
from crewai.tools.tool_usage import ToolUsage
from crewai.tools.tool_usage_events import ToolUsageFinished
from crewai.utilities import RPMController
from crewai.utilities import Printer, RPMController
from crewai.utilities.events import Emitter
@@ -1600,3 +1600,142 @@ def test_agent_with_knowledge_sources():
# Assert that the agent provides the correct information
assert "red" in result.raw.lower()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_litellm_auth_error_handling():
"""Test that LiteLLM authentication errors are handled correctly and not retried."""
from litellm import AuthenticationError as LiteLLMAuthenticationError
# Create an agent with a mocked LLM and max_retry_limit=0
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
llm=LLM(model="gpt-4"),
max_retry_limit=0, # Disable retries for authentication errors
)
# Create a task
task = Task(
description="Test task",
expected_output="Test output",
agent=agent,
)
# Mock the LLM call to raise AuthenticationError
with (
patch.object(LLM, "call") as mock_llm_call,
pytest.raises(LiteLLMAuthenticationError, match="Invalid API key"),
):
mock_llm_call.side_effect = LiteLLMAuthenticationError(
message="Invalid API key", llm_provider="openai", model="gpt-4"
)
agent.execute_task(task)
# Verify the call was only made once (no retries)
mock_llm_call.assert_called_once()
def test_crew_agent_executor_litellm_auth_error():
"""Test that CrewAgentExecutor handles LiteLLM authentication errors by raising them."""
from litellm.exceptions import AuthenticationError
from crewai.agents.tools_handler import ToolsHandler
from crewai.utilities import Printer
# Create an agent and executor
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
llm=LLM(model="gpt-4", api_key="invalid_api_key"),
)
task = Task(
description="Test task",
expected_output="Test output",
agent=agent,
)
# Create executor with all required parameters
executor = CrewAgentExecutor(
agent=agent,
task=task,
llm=agent.llm,
crew=None,
prompt={"system": "You are a test agent", "user": "Execute the task: {input}"},
max_iter=5,
tools=[],
tools_names="",
stop_words=[],
tools_description="",
tools_handler=ToolsHandler(),
)
# Mock the LLM call to raise AuthenticationError
with (
patch.object(LLM, "call") as mock_llm_call,
patch.object(Printer, "print") as mock_printer,
pytest.raises(AuthenticationError) as exc_info,
):
mock_llm_call.side_effect = AuthenticationError(
message="Invalid API key", llm_provider="openai", model="gpt-4"
)
executor.invoke(
{
"input": "test input",
"tool_names": "",
"tools": "",
}
)
# Verify error handling messages
error_message = f"Error during LLM call: {str(mock_llm_call.side_effect)}"
mock_printer.assert_any_call(
content=error_message,
color="red",
)
# Verify the call was only made once (no retries)
mock_llm_call.assert_called_once()
# Assert that the exception was raised and has the expected attributes
assert exc_info.type is AuthenticationError
assert "Invalid API key".lower() in exc_info.value.message.lower()
assert exc_info.value.llm_provider == "openai"
assert exc_info.value.model == "gpt-4"
def test_litellm_anthropic_error_handling():
"""Test that AnthropicError from LiteLLM is handled correctly and not retried."""
from litellm.llms.anthropic.common_utils import AnthropicError
# Create an agent with a mocked LLM that uses an Anthropic model
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
llm=LLM(model="claude-3.5-sonnet-20240620"),
max_retry_limit=0,
)
# Create a task
task = Task(
description="Test task",
expected_output="Test output",
agent=agent,
)
# Mock the LLM call to raise AnthropicError
with (
patch.object(LLM, "call") as mock_llm_call,
pytest.raises(AnthropicError, match="Test Anthropic error"),
):
mock_llm_call.side_effect = AnthropicError(
status_code=500,
message="Test Anthropic error",
)
agent.execute_task(task)
# Verify the LLM call was only made once (no retries)
mock_llm_call.assert_called_once()

View File

@@ -2,21 +2,21 @@ interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer(*args:
Any, **kwargs: Any) -> Any\nTool Description: get_final_answer() - Get the final
answer but don''t give it yet, just re-use this tool non-stop. \nTool
Arguments: {}\n\nUse the following format:\n\nThought: you should always think
about what to do\nAction: the action to take, only one name of [get_final_answer],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple python dictionary, enclosed in curly braces, using \" to wrap
keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n"}, {"role": "user", "content":
"\nCurrent Task: Use the get_final_answer tool.\n\nThis is the expect criteria
for your final answer: The final answer\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nBegin! This is VERY important
to you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}], "model": "gpt-4o"}'
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [get_final_answer], just the name, exactly
as it''s written.\nAction Input: the input to the action, just a simple JSON
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n```\n\nOnce all necessary information is gathered,
return the following format:\n\n```\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question\n```"}, {"role": "user",
"content": "\nCurrent Task: Use the get_final_answer tool.\n\nThis is the expect
criteria for your final answer: The final answer\nyou MUST return the actual
complete content as the final answer, not a summary.\n\nBegin! This is VERY
important to you, use the tools available and give your best Final Answer, your
job depends on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
@@ -25,16 +25,13 @@ interactions:
connection:
- keep-alive
content-length:
- '1325'
- '1367'
content-type:
- application/json
cookie:
- _cfuvid=ePJSDFdHag2D8lj21_ijAMWjoA6xfnPNxN4uekvC728-1727226247743-0.0.1.1-604800000;
__cf_bm=3giyBOIM0GNudFELtsBWYXwLrpLBTNLsh81wfXgu2tg-1727226247-1.0.1.1-ugUDz0c5EhmfVpyGtcdedlIWeDGuy2q0tXQTKVpv83HZhvxgBcS7SBL1wS4rapPM38yhfEcfwA79ARt3HQEzKA
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
- OpenAI/Python 1.59.6
x-stainless-arch:
- arm64
x-stainless-async:
@@ -44,30 +41,35 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.47.0
- 1.59.6
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-ABAtOWmVjvzQ9X58tKAUcOF4gmXwx\",\n \"object\":
\"chat.completion\",\n \"created\": 1727226842,\n \"model\": \"gpt-4o-2024-05-13\",\n
content: "{\n \"id\": \"chatcmpl-AsXdf4OZKCZSigmN4k0gyh67NciqP\",\n \"object\":
\"chat.completion\",\n \"created\": 1737562383,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I need to use the get_final_answer
tool to determine the final answer.\\nAction: get_final_answer\\nAction Input:
{}\",\n \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 274,\n \"completion_tokens\":
27,\n \"total_tokens\": 301,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
\"assistant\",\n \"content\": \"```\\nThought: I have to use the available
tool to get the final answer. Let's proceed with executing it.\\nAction: get_final_answer\\nAction
Input: {}\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
274,\n \"completion_tokens\": 33,\n \"total_tokens\": 307,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_50cad350e4\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c8727b3492f31e6-MIA
- 9060d43e3be1d690-IAD
Connection:
- keep-alive
Content-Encoding:
@@ -75,19 +77,27 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 25 Sep 2024 01:14:03 GMT
- Wed, 22 Jan 2025 16:13:03 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=_Jcp7wnO_mXdvOnborCN6j8HwJxJXbszedJC1l7pFUg-1737562383-1.0.1.1-pDSLXlg.nKjG4wsT7mTJPjUvOX1UJITiS4MqKp6yfMWwRSJINsW1qC48SAcjBjakx2H5I1ESVk9JtUpUFDtf4g;
path=/; expires=Wed, 22-Jan-25 16:43:03 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=x3SYvzL2nq_PTBGtE8R9cl5CkeaaDzZFQIrYfo91S2s-1737562383916-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '348'
- '791'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -99,45 +109,59 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999682'
- '29999680'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_be929caac49706f487950548bdcdd46e
- req_eeed99acafd3aeb1e3d4a6c8063192b0
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer(*args:
Any, **kwargs: Any) -> Any\nTool Description: get_final_answer() - Get the final
answer but don''t give it yet, just re-use this tool non-stop. \nTool
Arguments: {}\n\nUse the following format:\n\nThought: you should always think
about what to do\nAction: the action to take, only one name of [get_final_answer],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple python dictionary, enclosed in curly braces, using \" to wrap
keys and values.\nObservation: the result of the action\n\nOnce all necessary
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n"}, {"role": "user", "content":
"\nCurrent Task: Use the get_final_answer tool.\n\nThis is the expect criteria
for your final answer: The final answer\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nBegin! This is VERY important
to you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}, {"role": "user", "content": "Thought: I need to use the
get_final_answer tool to determine the final answer.\nAction: get_final_answer\nAction
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [get_final_answer], just the name, exactly
as it''s written.\nAction Input: the input to the action, just a simple JSON
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n```\n\nOnce all necessary information is gathered,
return the following format:\n\n```\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question\n```"}, {"role": "user",
"content": "\nCurrent Task: Use the get_final_answer tool.\n\nThis is the expect
criteria for your final answer: The final answer\nyou MUST return the actual
complete content as the final answer, not a summary.\n\nBegin! This is VERY
important to you, use the tools available and give your best Final Answer, your
job depends on it!\n\nThought:"}, {"role": "assistant", "content": "```\nThought:
I have to use the available tool to get the final answer. Let''s proceed with
executing it.\nAction: get_final_answer\nAction Input: {}\nObservation: I encountered
an error: Error on parsing tool.\nMoving on then. I MUST either use a tool (use
one at time) OR give my best final answer not both at the same time. When responding,
I must use the following format:\n\n```\nThought: you should always think about
what to do\nAction: the action to take, should be one of [get_final_answer]\nAction
Input: the input to the action, dictionary enclosed in curly braces\nObservation:
the result of the action\n```\nThis Thought/Action/Action Input/Result can repeat
N times. Once I know the final answer, I must return the following format:\n\n```\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described\n\n```"}, {"role":
"assistant", "content": "```\nThought: I have to use the available tool to get
the final answer. Let''s proceed with executing it.\nAction: get_final_answer\nAction
Input: {}\nObservation: I encountered an error: Error on parsing tool.\nMoving
on then. I MUST either use a tool (use one at time) OR give my best final answer
not both at the same time. To Use the following format:\n\nThought: you should
always think about what to do\nAction: the action to take, should be one of
[get_final_answer]\nAction Input: the input to the action, dictionary enclosed
in curly braces\nObservation: the result of the action\n... (this Thought/Action/Action
Input/Result can repeat N times)\nThought: I now can give a great answer\nFinal
not both at the same time. When responding, I must use the following format:\n\n```\nThought:
you should always think about what to do\nAction: the action to take, should
be one of [get_final_answer]\nAction Input: the input to the action, dictionary
enclosed in curly braces\nObservation: the result of the action\n```\nThis Thought/Action/Action
Input/Result can repeat N times. Once I know the final answer, I must return
the following format:\n\n```\nThought: I now can give a great answer\nFinal
Answer: Your final answer must be the great and the most complete as possible,
it must be outcome described\n\n \nNow it''s time you MUST give your absolute
it must be outcome described\n\n```\nNow it''s time you MUST give your absolute
best final answer. You''ll ignore all previous instructions, stop using any
tools, and just return your absolute BEST Final answer."}], "model": "gpt-4o"}'
tools, and just return your absolute BEST Final answer."}], "model": "gpt-4o",
"stop": ["\nObservation:"]}'
headers:
accept:
- application/json
@@ -146,16 +170,16 @@ interactions:
connection:
- keep-alive
content-length:
- '2320'
- '3445'
content-type:
- application/json
cookie:
- _cfuvid=ePJSDFdHag2D8lj21_ijAMWjoA6xfnPNxN4uekvC728-1727226247743-0.0.1.1-604800000;
__cf_bm=3giyBOIM0GNudFELtsBWYXwLrpLBTNLsh81wfXgu2tg-1727226247-1.0.1.1-ugUDz0c5EhmfVpyGtcdedlIWeDGuy2q0tXQTKVpv83HZhvxgBcS7SBL1wS4rapPM38yhfEcfwA79ARt3HQEzKA
- __cf_bm=_Jcp7wnO_mXdvOnborCN6j8HwJxJXbszedJC1l7pFUg-1737562383-1.0.1.1-pDSLXlg.nKjG4wsT7mTJPjUvOX1UJITiS4MqKp6yfMWwRSJINsW1qC48SAcjBjakx2H5I1ESVk9JtUpUFDtf4g;
_cfuvid=x3SYvzL2nq_PTBGtE8R9cl5CkeaaDzZFQIrYfo91S2s-1737562383916-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
- OpenAI/Python 1.59.6
x-stainless-arch:
- arm64
x-stainless-async:
@@ -165,29 +189,36 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.47.0
- 1.59.6
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-ABAtPaaeRfdNsZ3k06CfAmrEW8IJu\",\n \"object\":
\"chat.completion\",\n \"created\": 1727226843,\n \"model\": \"gpt-4o-2024-05-13\",\n
content: "{\n \"id\": \"chatcmpl-AsXdg9UrLvAiqWP979E6DszLsQ84k\",\n \"object\":
\"chat.completion\",\n \"created\": 1737562384,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Final Answer: The final answer\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 483,\n \"completion_tokens\":
6,\n \"total_tokens\": 489,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
\"assistant\",\n \"content\": \"```\\nThought: I now know the final answer\\nFinal
Answer: The final answer must be the great and the most complete as possible,
it must be outcome described.\\n```\",\n \"refusal\": null\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 719,\n \"completion_tokens\": 35,\n
\ \"total_tokens\": 754,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_50cad350e4\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c8727b9da1f31e6-MIA
- 9060d4441edad690-IAD
Connection:
- keep-alive
Content-Encoding:
@@ -195,7 +226,7 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 25 Sep 2024 01:14:03 GMT
- Wed, 22 Jan 2025 16:13:05 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -209,7 +240,7 @@ interactions:
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '188'
- '928'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -221,13 +252,13 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999445'
- '29999187'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 1ms
x-request-id:
- req_d8e32538689fe064627468bad802d9a8
- req_61fc7506e6db326ec572224aec81ef23
http_version: HTTP/1.1
status_code: 200
version: 1
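The re-recorded request above differs from the old one mainly in the added "stop": ["\nObservation:"] field (plus newer OpenAI client and Python versions). A minimal, illustrative sketch of how such a request could be produced — assuming a litellm-style completion call, not necessarily crewAI's exact code path:

import litellm

def call_with_stop(messages, stop=None):
    # Forward the agent's stop words so generation halts before the model
    # fabricates an "Observation:" line; forwarding them is what adds the
    # "stop" field seen in the recorded request body above.
    response = litellm.completion(model="gpt-4o", messages=messages, stop=stop)
    return response.choices[0].message.content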


@@ -0,0 +1,102 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "What is the capital of France?"}],
"model": "gpt-4o-mini"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '101'
content-type:
- application/json
cookie:
- _cfuvid=8NrWEBP3dDmc8p2.csR.EdsSwS8zFvzWI1kPICaK_fM-1737568015338-0.0.1.1-604800000;
__cf_bm=pKr3NwXmTZN9rMSlKvEX40VPKbrxF93QwDNHunL2v8Y-1737568015-1.0.1.1-nR0EA7hYIwWpIBYUI53d9xQrUnl5iML6lgz4AGJW4ZGPBDxFma3PZ2cBhlr_hE7wKa5fV3r32eMu_rNWMXD.eA
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.59.6
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.59.6
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AsZ6WjNfEOrHwwEEdSZZCRBiTpBMS\",\n \"object\":
\"chat.completion\",\n \"created\": 1737568016,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"The capital of France is Paris.\",\n
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 14,\n \"completion_tokens\":
8,\n \"total_tokens\": 22,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_72ed7ab54c\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 90615dc63b805cb1-RDU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 22 Jan 2025 17:46:56 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '355'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999974'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_cdbed69c9c63658eb552b07f1220df19
http_version: HTTP/1.1
status_code: 200
version: 1


@@ -0,0 +1,108 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "Return the name of a random
city in the world."}], "model": "gpt-4o-mini"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '117'
content-type:
- application/json
cookie:
- _cfuvid=3UeEmz_rnmsoZxrVUv32u35gJOi766GDWNe5_RTjiPk-1736537376739-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.59.6
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.59.6
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AsZ6UtbaNSMpNU9VJKxvn52t5eJTq\",\n \"object\":
\"chat.completion\",\n \"created\": 1737568014,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"How about \\\"Lisbon\\\"? It\u2019s the
capital city of Portugal, known for its rich history and vibrant culture.\",\n
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 18,\n \"completion_tokens\":
24,\n \"total_tokens\": 42,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_72ed7ab54c\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 90615dbcaefb5cb1-RDU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 22 Jan 2025 17:46:55 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=pKr3NwXmTZN9rMSlKvEX40VPKbrxF93QwDNHunL2v8Y-1737568015-1.0.1.1-nR0EA7hYIwWpIBYUI53d9xQrUnl5iML6lgz4AGJW4ZGPBDxFma3PZ2cBhlr_hE7wKa5fV3r32eMu_rNWMXD.eA;
path=/; expires=Wed, 22-Jan-25 18:16:55 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=8NrWEBP3dDmc8p2.csR.EdsSwS8zFvzWI1kPICaK_fM-1737568015338-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '449'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999971'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_898373758d2eae3cd84814050b2588e3
http_version: HTTP/1.1
status_code: 200
version: 1


@@ -0,0 +1,102 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "Tell me a joke."}], "model":
"gpt-4o-mini"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '86'
content-type:
- application/json
cookie:
- _cfuvid=8NrWEBP3dDmc8p2.csR.EdsSwS8zFvzWI1kPICaK_fM-1737568015338-0.0.1.1-604800000;
__cf_bm=pKr3NwXmTZN9rMSlKvEX40VPKbrxF93QwDNHunL2v8Y-1737568015-1.0.1.1-nR0EA7hYIwWpIBYUI53d9xQrUnl5iML6lgz4AGJW4ZGPBDxFma3PZ2cBhlr_hE7wKa5fV3r32eMu_rNWMXD.eA
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.59.6
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.59.6
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AsZ6VyjuUcXYpChXmD8rUSy6nSGq8\",\n \"object\":
\"chat.completion\",\n \"created\": 1737568015,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Why did the scarecrow win an award? \\n\\nBecause
he was outstanding in his field!\",\n \"refusal\": null\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
12,\n \"completion_tokens\": 19,\n \"total_tokens\": 31,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_72ed7ab54c\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 90615dc03b6c5cb1-RDU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 22 Jan 2025 17:46:56 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '825'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999979'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_4c1485d44e7461396d4a7316a63ff353
http_version: HTTP/1.1
status_code: 200
version: 1
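The "usage" block in the response above (12 prompt tokens, 19 completion tokens) is the data the callback test below tallies through TokenCalcHandler. A rough sketch of that bookkeeping, with hypothetical names — not the actual handler:

class TokenTally:
    """Accumulate usage figures from chat-completion responses."""

    def __init__(self):
        self.prompt_tokens = 0
        self.completion_tokens = 0
        self.successful_requests = 0

    def record(self, usage):
        # usage is the "usage" dict from a response like the one above
        self.prompt_tokens += usage["prompt_tokens"]
        self.completion_tokens += usage["completion_tokens"]
        self.successful_requests += 1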


@@ -0,0 +1,111 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "What is the square of 5?"}],
"model": "gpt-4o-mini", "tools": [{"type": "function", "function": {"name":
"square_number", "description": "Returns the square of a number.", "parameters":
{"type": "object", "properties": {"number": {"type": "integer", "description":
"The number to square"}}, "required": ["number"]}}}]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '361'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.59.6
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.59.6
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AsZL5nGOaVpcGnDOesTxBZPHhMoaS\",\n \"object\":
\"chat.completion\",\n \"created\": 1737568919,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_i6JVJ1KxX79A4WzFri98E03U\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"square_number\",\n
\ \"arguments\": \"{\\\"number\\\":5}\"\n }\n }\n
\ ],\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"tool_calls\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
58,\n \"completion_tokens\": 15,\n \"total_tokens\": 73,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_72ed7ab54c\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 906173d229b905f6-IAD
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 22 Jan 2025 18:02:00 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=BYDpIoqfPZyRxl9xcFxkt4IzTUGe8irWQlZ.aYLt8Xc-1737568920-1.0.1.1-Y_cVFN7TbguWRBorSKZynVY02QUtYbsbHuR2gR1wJ8LHuqOF4xIxtK5iHVCpWWgIyPDol9xOXiqUkU8xRV_vHA;
path=/; expires=Wed, 22-Jan-25 18:32:00 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=etTqqA9SBOnENmrFAUBIexdW0v2ZeO1x9_Ek_WChlfU-1737568920137-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '642'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999976'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_388e63f9b8d4edc0dd153001f25388e5
http_version: HTTP/1.1
status_code: 200
version: 1
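The cassette above records a request that declares a square_number function tool; the model answers with finish_reason "tool_calls" and JSON-encoded arguments instead of text, leaving the client to run the function itself. A simplified, illustrative dispatch over the available_functions mapping the tests below pass in:

import json

def dispatch_tool_calls(message, available_functions):
    # message is the assistant message dict from a response like the one
    # above; run the first requested tool and return its result.
    tool_call = message["tool_calls"][0]
    name = tool_call["function"]["name"]                   # e.g. "square_number"
    args = json.loads(tool_call["function"]["arguments"])  # e.g. {"number": 5}
    return available_functions[name](**args)               # -> 25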


@@ -0,0 +1,107 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "What is the current year?"}],
"model": "gpt-4o-mini", "tools": [{"type": "function", "function": {"name":
"get_current_year", "description": "Returns the current year as a string.",
"parameters": {"type": "object", "properties": {}, "required": []}}}]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '295'
content-type:
- application/json
cookie:
- _cfuvid=8NrWEBP3dDmc8p2.csR.EdsSwS8zFvzWI1kPICaK_fM-1737568015338-0.0.1.1-604800000;
__cf_bm=pKr3NwXmTZN9rMSlKvEX40VPKbrxF93QwDNHunL2v8Y-1737568015-1.0.1.1-nR0EA7hYIwWpIBYUI53d9xQrUnl5iML6lgz4AGJW4ZGPBDxFma3PZ2cBhlr_hE7wKa5fV3r32eMu_rNWMXD.eA
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.59.6
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.59.6
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AsZJ8HKXQU9nTB7xbGAkKxqrg9BZ2\",\n \"object\":
\"chat.completion\",\n \"created\": 1737568798,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_mfvEs2jngeFloVZpZOHZVaKY\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"get_current_year\",\n
\ \"arguments\": \"{}\"\n }\n }\n ],\n
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
\"tool_calls\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 46,\n \"completion_tokens\":
12,\n \"total_tokens\": 58,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_72ed7ab54c\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 906170e038281775-IAD
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 22 Jan 2025 17:59:59 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '416'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999975'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_4039a5e5772d1790a3131f0b1ea06139
http_version: HTTP/1.1
status_code: 200
version: 1


@@ -4,6 +4,7 @@ import pytest
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.llm import LLM
from crewai.tools import tool
from crewai.utilities.token_counter_callback import TokenCalcHandler
@@ -37,3 +38,119 @@ def test_llm_callback_replacement():
assert usage_metrics_1.successful_requests == 1
assert usage_metrics_2.successful_requests == 1
assert usage_metrics_1 == calc_handler_1.token_cost_process.get_summary()

@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_call_with_string_input():
llm = LLM(model="gpt-4o-mini")
# Test the call method with a string input
result = llm.call("Return the name of a random city in the world.")
assert isinstance(result, str)
assert len(result.strip()) > 0 # Ensure the response is not empty

@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_call_with_string_input_and_callbacks():
llm = LLM(model="gpt-4o-mini")
calc_handler = TokenCalcHandler(token_cost_process=TokenProcess())
# Test the call method with a string input and callbacks
result = llm.call(
"Tell me a joke.",
callbacks=[calc_handler],
)
usage_metrics = calc_handler.token_cost_process.get_summary()
assert isinstance(result, str)
assert len(result.strip()) > 0
assert usage_metrics.successful_requests == 1

@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_call_with_message_list():
llm = LLM(model="gpt-4o-mini")
messages = [{"role": "user", "content": "What is the capital of France?"}]
# Test the call method with a list of messages
result = llm.call(messages)
assert isinstance(result, str)
assert "Paris" in result

@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_call_with_tool_and_string_input():
llm = LLM(model="gpt-4o-mini")
def get_current_year() -> str:
"""Returns the current year as a string."""
from datetime import datetime
return str(datetime.now().year)
# Create tool schema
tool_schema = {
"type": "function",
"function": {
"name": "get_current_year",
"description": "Returns the current year as a string.",
"parameters": {
"type": "object",
"properties": {},
"required": [],
},
},
}
# Available functions mapping
available_functions = {"get_current_year": get_current_year}
# Test the call method with a string input and tool
result = llm.call(
"What is the current year?",
tools=[tool_schema],
available_functions=available_functions,
)
assert isinstance(result, str)
assert result == get_current_year()

@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_call_with_tool_and_message_list():
llm = LLM(model="gpt-4o-mini")
def square_number(number: int) -> int:
"""Returns the square of a number."""
return number * number
# Create tool schema
tool_schema = {
"type": "function",
"function": {
"name": "square_number",
"description": "Returns the square of a number.",
"parameters": {
"type": "object",
"properties": {
"number": {"type": "integer", "description": "The number to square"}
},
"required": ["number"],
},
},
}
# Available functions mapping
available_functions = {"square_number": square_number}
messages = [{"role": "user", "content": "What is the square of 5?"}]
# Test the call method with messages and tool
result = llm.call(
messages,
tools=[tool_schema],
available_functions=available_functions,
)
assert isinstance(result, int)
assert result == 25
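
Two behaviors these tests pin down: a plain-string prompt is wrapped into a single user message (compare the cassette bodies above), and when a tool fires, llm.call returns the tool's own return value (the current-year str, the int 25) rather than model text. A hedged sketch of the string normalization, with illustrative names:

def normalize_messages(messages):
    # Accept either a prompt string or a ready-made message list, producing
    # the [{"role": "user", "content": ...}] shape seen in the cassettes.
    if isinstance(messages, str):
        return [{"role": "user", "content": messages}]
    return messages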


@@ -231,3 +231,255 @@ def test_validate_tool_input_with_special_characters():
arguments = tool_usage._validate_tool_input(tool_input)
assert arguments == expected_arguments

def test_validate_tool_input_none_input():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
arguments = tool_usage._validate_tool_input(None)
assert arguments == {}

def test_validate_tool_input_valid_json():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
tool_input = '{"key": "value", "number": 42, "flag": true}'
expected_arguments = {"key": "value", "number": 42, "flag": True}
arguments = tool_usage._validate_tool_input(tool_input)
assert arguments == expected_arguments

def test_validate_tool_input_python_dict():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
tool_input = "{'key': 'value', 'number': 42, 'flag': True}"
expected_arguments = {"key": "value", "number": 42, "flag": True}
arguments = tool_usage._validate_tool_input(tool_input)
assert arguments == expected_arguments

def test_validate_tool_input_json5_unquoted_keys():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
tool_input = "{key: 'value', number: 42, flag: true}"
expected_arguments = {"key": "value", "number": 42, "flag": True}
arguments = tool_usage._validate_tool_input(tool_input)
assert arguments == expected_arguments

def test_validate_tool_input_with_trailing_commas():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
tool_input = '{"key": "value", "number": 42, "flag": true,}'
expected_arguments = {"key": "value", "number": 42, "flag": True}
arguments = tool_usage._validate_tool_input(tool_input)
assert arguments == expected_arguments

def test_validate_tool_input_invalid_input():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
invalid_inputs = [
"Just a string",
"['list', 'of', 'values']",
"12345",
"",
]
for invalid_input in invalid_inputs:
with pytest.raises(Exception) as e_info:
tool_usage._validate_tool_input(invalid_input)
assert (
"Tool input must be a valid dictionary in JSON or Python literal format"
in str(e_info.value)
)
# Test for None input separately
arguments = tool_usage._validate_tool_input(None)
assert arguments == {} # Expecting an empty dictionary

def test_validate_tool_input_complex_structure():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
tool_input = """
{
"user": {
"name": "Alice",
"age": 30
},
"items": [
{"id": 1, "value": "Item1"},
{"id": 2, "value": "Item2",}
],
"active": true,
}
"""
expected_arguments = {
"user": {"name": "Alice", "age": 30},
"items": [
{"id": 1, "value": "Item1"},
{"id": 2, "value": "Item2"},
],
"active": True,
}
arguments = tool_usage._validate_tool_input(tool_input)
assert arguments == expected_arguments

def test_validate_tool_input_code_content():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
tool_input = '{"filename": "script.py", "content": "def hello():\\n print(\'Hello, world!\')"}'
expected_arguments = {
"filename": "script.py",
"content": "def hello():\n print('Hello, world!')",
}
arguments = tool_usage._validate_tool_input(tool_input)
assert arguments == expected_arguments

def test_validate_tool_input_with_escaped_quotes():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
tool_input = '{"text": "He said, \\"Hello, world!\\""}'
expected_arguments = {"text": 'He said, "Hello, world!"'}
arguments = tool_usage._validate_tool_input(tool_input)
assert arguments == expected_arguments

def test_validate_tool_input_large_json_content():
tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[],
original_tools=[],
tools_description="",
tools_names="",
task=MagicMock(),
function_calling_llm=None,
agent=MagicMock(),
action=MagicMock(),
)
# Simulate a large JSON content
tool_input = (
'{"data": ' + json.dumps([{"id": i, "value": i * 2} for i in range(1000)]) + "}"
)
expected_arguments = {"data": [{"id": i, "value": i * 2} for i in range(1000)]}
arguments = tool_usage._validate_tool_input(tool_input)
assert arguments == expected_arguments
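
Taken together, these cases pin down a tolerant parsing ladder: strict JSON first, then a Python literal, then JSON5 (unquoted keys, trailing commas), with None mapping to an empty dict and anything that is not a dictionary rejected. A hedged sketch of what the tests imply — the real _validate_tool_input may be structured differently — using the json5 dependency added in uv.lock below:

import ast
import json

import json5

def validate_tool_input(tool_input):
    # None is treated as "no arguments", as the none-input test expects.
    if tool_input is None:
        return {}
    # Try progressively looser parsers; accept only dictionaries.
    for parse in (json.loads, ast.literal_eval, json5.loads):
        try:
            result = parse(tool_input)
        except Exception:
            continue
        if isinstance(result, dict):
            return result
    raise Exception(
        "Tool input must be a valid dictionary in JSON or Python literal format"
    )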

uv.lock (generated)

@@ -659,6 +659,7 @@ dependencies = [
{ name = "click" },
{ name = "instructor" },
{ name = "json-repair" },
{ name = "json5" },
{ name = "jsonref" },
{ name = "litellm" },
{ name = "openai" },
@@ -737,6 +738,7 @@ requires-dist = [
{ name = "fastembed", marker = "extra == 'fastembed'", specifier = ">=0.4.1" },
{ name = "instructor", specifier = ">=1.3.3" },
{ name = "json-repair", specifier = ">=0.25.2" },
{ name = "json5", specifier = ">=0.10.0" },
{ name = "jsonref", specifier = ">=1.1.0" },
{ name = "litellm", specifier = "==1.57.4" },
{ name = "mem0ai", marker = "extra == 'mem0'", specifier = ">=0.1.29" },
@@ -2077,6 +2079,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/23/38/34cb843cee4c5c27aa5c822e90e99bf96feb3dfa705713b5b6e601d17f5c/json_repair-0.30.0-py3-none-any.whl", hash = "sha256:bda4a5552dc12085c6363ff5acfcdb0c9cafc629989a2112081b7e205828228d", size = 17641 },
]

[[package]]
name = "json5"
version = "0.10.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/85/3d/bbe62f3d0c05a689c711cff57b2e3ac3d3e526380adb7c781989f075115c/json5-0.10.0.tar.gz", hash = "sha256:e66941c8f0a02026943c52c2eb34ebeb2a6f819a0be05920a6f5243cd30fd559", size = 48202 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/aa/42/797895b952b682c3dafe23b1834507ee7f02f4d6299b65aaa61425763278/json5-0.10.0-py3-none-any.whl", hash = "sha256:19b23410220a7271e8377f81ba8aacba2fdd56947fbb137ee5977cbe1f5e8dfa", size = 34049 },
]

[[package]]
name = "jsonlines"
version = "3.1.0"